Both stories are from NPR's website:
Elon Musk Warns Governors that AI poses Existential Risk
Is the fear of intelligent machines justified?
Elon Musk Warns Governors: Artificial Intelligence Poses 'Existential Risk' July 17, 2017 By Camila Domonoske
Tesla CEO Elon Musk, speaking to U.S. governors this weekend, told the political leaders that artificial intelligence poses an "existential threat" to human civilization.
At the bipartisan National Governors Association meeting in Rhode Island, Musk also spoke about energy sources, his own electric car company and space travel. But when Gov. Brian Sandoval of Nevada, grinning, asked if robots will take everyone's jobs in the future — Musk wasn't joking when he responded.
Yes, "robots will do everything better than us," Musk said. But he's worried about more than the job market.
"AI is a fundamental existential risk for human civilization, and I don't think people fully appreciate that," Musk said. He said he has access to cutting-edge AI technology, and that based on what he's seen, AI is "the scariest problem."
Musk told the governors that AI calls for precautionary, proactive government intervention: "I think by the time we are reactive in AI regulation, it's too late," he said.
He was clearly not thrilled to make that argument, calling regulation generally "not fun" and "irksome," but he said that in the case of AI, the risks are too high to allow AI to develop unfettered.
"I think people should be really concerned about it," Musk said. "I keep sounding the alarm bell."
It's true: For years, Musk has issued Cassandra-like cautions about the risks of artificial intelligence. In 2014, he likened AI developers to people summoning demons they think they can control. In 2015, he signed a letter warning of the risk of an AI arms race.
Musk has invested in a project designed to make AI tech open-source, which he asserts will prevent it from being controlled by one company. And earlier this year, Maureen Dowd wrote a lengthy piece for Vanity Fair about Musk's "crusade to stop the A.I. apocalypse." Dowd noted that some Silicon Valley leaders — including Google co-founder Larry Page — do not share Musk's skepticism, and describe AI as a possible force for good.
Critics "argue that Musk is interested less in saving the world than in buffing his brand," Dowd writes, and that his speeches on the threat of AI are part of a larger sales strategy.
Back at the governors conference, some politicians expressed skepticism about the wisdom of regulating a technology that's still in development. Musk said the first step would be for the government to gain "insight" into the actual status of current research.
"Once there is awareness, people will be extremely afraid," Musk said. "As they should be."
Is The Fear Of Intelligent Machines Justified? March 16, 2016, 2:09 PM ET Commentary By Marcelo Gleiser
It's in the news everywhere, with near-apocalyptic hubris: Google's DeepMind machine beat the world champion of the game Go with a score of 4-1.
Or, according to Britain's Independent newspaper: "Google's Go-playing computer has definitively beaten the best human in the world, finishing a pioneering match at 4-1." The "best human in the world," South Korean professional Go player Lee Sedol, is actually ranked No. 5 in the world. Clearly a stellar Go player — but not the world's best.
Mind you, there are several kinds of rankings for Go, and they don't always agree. In fact, there is much confusion once you start looking. The one that puts Sedol at No. 5 is known as the WHR algorithm. But these are details. The fact is that a machine beat a master Go player 4-1 in a much-publicized event. As AI (artificial intelligence) expert Gary Marcus wrote in a recent essay, "DeepMind made major progress, but the Go journey is still not over ... The real question is whether the technology developed there can be taken out of the game world and into the real world."
In other words, can machine game-playing prowess be applied to real-world challenges?
In their Jan. 28 Nature article about DeepMind, Google scientists state in the abstract that the machine "defeated the human European Go champion by 5 games to 0 ... a feat previously thought to be at least a decade away." This professional Go player was Fan Hui, three-time European champion, currently ranked No. 507 according to one ranking. Moving from Hui to Sedol is indeed very impressive.
Google's DeepMind program uses a combination of two machine-learning components: "value networks," which evaluate board positions, and "policy networks," which select moves ("machine learning" is the new incarnation of neural networks, programs that emulate simplified neuronal activity and are able to learn patterns and behaviors). Oversimplifying, one piece of the program evaluates how promising a position is, while the other chooses the optimal move for a given situation based on a statistical analysis of the best possibilities. The closing lines of the Nature article are important: "AlphaGo has finally reached a professional level in Go, providing hope that human-level performance can now be achieved in other seemingly intractable artificial intelligence domains."
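The division of labor between the two networks can be sketched in a few lines of Python. This is a toy illustration only, not AlphaGo's actual implementation: the real system trains deep neural networks and combines them inside a Monte Carlo tree search, whereas the functions below are hypothetical stand-ins that return random scores, just to show how a move prior (policy) and a position estimate (value) can be blended when picking a move.

```python
import random

def policy_network(board, moves):
    """Toy policy network: assign a prior probability to each legal move.
    (Stand-in: random weights, normalized to sum to 1.)"""
    weights = [random.random() for _ in moves]
    total = sum(weights)
    return [w / total for w in weights]

def value_network(board):
    """Toy value network: estimate the chance of winning from a position.
    (Stand-in: a random number in [0, 1].)"""
    return random.random()

def apply_move(board, move):
    """Toy board update: here a 'board' is just the tuple of moves played."""
    return board + (move,)

def choose_move(board, legal_moves, policy_weight=0.5):
    """Blend the two signals: the policy prior for each move and the value
    of the position that move leads to. AlphaGo does this inside a tree
    search; this greedy one-ply lookahead only shows the idea."""
    priors = policy_network(board, legal_moves)
    scored = []
    for move, prior in zip(legal_moves, priors):
        v = value_network(apply_move(board, move))
        scored.append((policy_weight * prior + (1 - policy_weight) * v, move))
    return max(scored)[1]  # move with the highest combined score

board = ()
move = choose_move(board, legal_moves=["A1", "B2", "C3"])
print(move)
```

The point of the split is that neither signal suffices alone: the policy network narrows the search to plausible moves, while the value network judges where those moves lead, so the search spends its effort only on promising branches.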
To be useful in the real world, where rules are often not rigid and surprise events and behaviors continually throw wrenches into efforts to rationalize behavior — be it human, political or economic — intelligent programs need a plasticity and adaptability not easily transferable from a more focused game-playing platform. Although Google's DeepMind success moves AI progress to a whole new level, the jump from game-playing to intelligence that mirrors anything close to human intelligence functioning in a complex world is still a huge one. To many, that's a very good thing.
Oxford University philosopher Nick Bostrom has been cautioning us about the dangers of a super-intelligence out in the world. And billionaire Elon Musk, physicists Stephen Hawking and Martin Rees, Bostrom himself — and more interestingly, Demis Hassabis, Shane Legg, and Mustafa Suleyman, all co-founders of DeepMind — have signed an open letter where they "recommend expanded research aimed at ensuring that increasingly capable AI systems are robust and beneficial: our AI systems must do what we want them to do."
It remains to be seen if Musk's idea of empowering as many people as possible to have access to AI will work as a sort of deterrence policy against AI domination (somewhat like the nuclear deterrence policy against global destruction) — or if, given that intelligent machines could, in principle at least, network to become a more unified autonomous entity, the nightmare is inescapable. AI is no nuclear bomb. Fortunately, even with the amazing steps of DeepMind in the game of Go, we can still sleep in peace for the foreseeable future while we find safeguards that will protect us from our own inventions.
Marcelo Gleiser is a theoretical physicist and cosmologist — and professor of natural philosophy, physics and astronomy at Dartmouth College. He is the co-founder of 13.7, a prolific author of papers and essays, and active promoter of science to the general public. His latest book is The Island of Knowledge: The Limits of Science and the Search for Meaning. You can keep up with Marcelo on Facebook and Twitter: @mgleiser.