In some ways, I think many also view it as: "If we don't do it, someone else will." The skillset required to develop better AIs is not unique to a small group of individuals, and there are many efforts across the world to develop AIs for various purposes. I agree that we really need to start the worldwide discussion of how to handle human-level AIs very soon, because I think they are coming fast, and we are woefully unprepared to handle the ramifications.

flybynight wrote:
This lack of accountability is why I view this field of study with distrust. The mindset of the people developing the technology is so wrapped around the question of how to do it that they never consider the ramifications for anything else if the technology succeeds.

It's not clear how big an impact this research will have on information security. George points out that Google has already moved away from text-based CAPTCHAs in favor of more advanced tests. As AI gets smarter, so too will the tests required to prove that a user is human.
On the information security side:
We don't hear about it as much, and I'm sure we don't hear about the most advanced projects at all, but the government has already been looking into how to attack and defend networks using AI as well. The highest-profile effort I've heard of is DARPA's Cyber Grand Challenge (https://www.darpa.mil/program/cyber-grand-challenge). There are links to further information on that page, but here's a brief overview:
The gist is that these machines try to attack each other by discovering vulnerabilities in the other machines, while also trying to detect and patch the vulnerabilities that other machines try to exploit against them. It's the same arms race that hackers and developers have been fighting since the invention of the internet, just a few orders of magnitude faster.
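To make that loop concrete, here is a deliberately toy Python sketch of the idea: a "service" with a planted bug, a fuzzer that hunts for inputs that crash it, and an automatic patcher that hardens the service against the crashes it found. All the names (`vulnerable_parse`, `fuzz`, `auto_patch`) are made up for illustration; the real CGC systems worked on compiled binaries and are vastly more sophisticated.

```python
import random

def vulnerable_parse(data: bytes) -> int:
    # Toy "service" with a planted bug: crashes on inputs starting with b"X".
    if data[:1] == b"X":
        raise RuntimeError("simulated memory corruption")
    return len(data)

def fuzz(target, rounds=1000, seed=0):
    """Throw inputs at the target and collect the ones that crash it."""
    rng = random.Random(seed)
    corpus = [bytes([b]) for b in range(256)]  # deterministic single-byte sweep
    corpus += [bytes(rng.randrange(256) for _ in range(rng.randrange(1, 8)))
               for _ in range(rounds)]
    crashes = []
    for data in corpus:
        try:
            target(data)
        except RuntimeError:
            crashes.append(data)
    return crashes

def auto_patch(target, crashes):
    """Wrap the target with a filter that rejects known-bad input patterns."""
    bad_prefixes = {c[:1] for c in crashes}
    def patched(data: bytes) -> int:
        if data[:1] in bad_prefixes:
            return -1  # reject the input instead of crashing
        return target(data)
    return patched

crashes = fuzz(vulnerable_parse)
patched = auto_patch(vulnerable_parse, crashes)
# The patched service now survives inputs that crashed the original.
```

In the actual competition both sides ran simultaneously: each machine fuzzed its opponents' services for exploitable crashes while rewriting its own binaries to close the holes being used against it, with no human in the loop.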