Elon Musk: Google AI Is a Top Concern for Humankind
“It’s basically the plotline in War Games.”
--
Lee Sedol is visibly under pressure. The fingers of his right hand betray the stress in a kind of nervous tic: the thumb repeatedly touches the middle finger, then moves to the ring and index fingers. For a moment, Sedol looks away from the board, then rises from his chair.
He has played hundreds of games, but this one is undoubtedly the most important of his professional career. It is an event destined to go down in history, and he is the center of media attention.
Sedol quickly leaves the room: he needs a cigarette and especially a break.
The other player, meanwhile, keeps playing. When Sedol resumes his seat, his jaw drops. After a few moments, the surprise becomes a brief sneer.
No experienced Go player would make such a move. But in front of Sedol, after all, there is not just any professional Go player; there is a machine.
Artificial intelligence lab DeepMind had unveiled AlphaGo a few months earlier as a machine capable of not only competing but also winning against a professional Go player.
A claim AlphaGo had already proven in its first public match, beating European champion Fan Hui 5–0.
Nevertheless, that occasion sparked considerable skepticism.
People began to discredit Fan Hui, questioning his abilities and claiming that artificial intelligence would stand no chance against a top player.
Indeed, even the best researchers believed artificial intelligence would need at least another decade before a machine could master the game.
Go is not chess: it is much more complex.
Go allows vastly more possible plays than chess, and the best players win not only through logic but also with the help of intuition.
Lee Sedol represented the perfect candidate to bring everyone together. At home, he was a national hero, recognized as the best Go player of the decade.
Yet on that day in 2016, after AlphaGo's thirty-seventh move, he would never regain his footing.
For the second time in two games, Sedol resigned after more than four hours of play. And a few days later, the best-of-five series would end 4–1 in AlphaGo's favor.
Several distinguished names in AI congratulated DeepMind on its historic achievement, including Elon Musk.
Musk had been an early investor in DeepMind, an investment he made not for the financial return but to keep an eye on something that concerned him more than anything else.
What’s behind DeepMind
Demis Hassabis and his partners spent almost a year figuring out which billionaire might be interested in funding their idea. And in Peter Thiel, the co-founder of PayPal and Palantir, they saw the perfect candidate.
In August 2010, a few weeks before launching DeepMind, Hassabis got an invitation to the conference Thiel held annually. And there, in a one-minute conversation, he secured a follow-up meeting.
Months after that meeting, Peter Thiel invested in DeepMind, and so did Elon Musk.
Just a few years later, Google bought DeepMind for about $650 million. The acquisition multiplied Musk and Thiel's investment.
But for Musk, DeepMind began to become a cause for concern.
When Demis Hassabis co-founded DeepMind with his friend Mustafa Suleyman and artificial intelligence researcher Shane Legg, their idea was to find a meta-solution to potentially any problem.
DeepMind’s goal was, and still is, to create an artificial general intelligence (AGI): a general-purpose learning machine capable of learning the way biological systems do.
Instead of teaching the machine to understand language, DeepMind wanted to teach the network to make decisions through machine learning and systems neuroscience.
In other words, where the norm was pre-programming a machine, they aspired to program machines with the ability to learn by themselves.
DeepMind expects that one day AGI will be able to solve problems that currently seem unsolvable: cancer, climate change, energy, genomics, macroeconomics, and financial systems.
What Musk worries about is how Google might implement this technology.
Elon Musk on AI
Of all the companies working on AI, if there is one that worries Musk the most, it is Google.
He told Ashlee Vance, the author of his biography, that he fears that Google might start something evil by accident, despite good intentions.
In particular, nothing worries Elon Musk more than DeepMind, which he called a “top concern” in AI. In an interview with the NYT, he said:
“Just the nature of the AI that they’re building is one that crushes all humans at all games. I mean, it’s basically the plotline in ‘War Games.’”
In the 1983 film WarGames, a military supercomputer convinces U.S. forces of an impending Soviet nuclear attack, bringing the world to the brink of World War III.
Over the past decade, Elon Musk has made no secret of his concerns about artificial intelligence.
At MIT in 2014, he called artificial intelligence the greatest threat to humanity, comparing its development to “summoning the demon.”
That same year, Mark Zuckerberg invited Musk to dinner at his Palo Alto home in the company of two researchers from Facebook’s new artificial intelligence lab and two other executives.
The Facebook team tried to convince Musk that his perspective was erroneous, but Musk remained adamant. “I seriously believe this is dangerous,” he told the table.
His thinking rested on a single hypothesis: if we create an intelligence superior to human intelligence, the possibility that it could turn against us is not so remote.
Musk’s top concern
Google did not pay over $650 million for a machine that could play games better than anyone else. DeepMind’s technology underpins several of Google’s services, and it does not stop there.
If you visit DeepMind’s website, the projects it highlights are essentially four:
- Understanding protein folding;
- Identifying eye disease faster;
- Saving energy at Google scale;
- Improving Google products.
Google uses DeepMind AI for fraud and spam detection, speech and handwriting recognition, image search, Street View, and translation.
In addition, DeepMind has access to all of Google’s servers to optimize the power of all data centers. A great advantage for Google, but also something that Elon Musk fears could become a Trojan horse.
Should DeepMind’s artificial intelligence take complete control of Google’s data system, at that point it could do anything, Musk warns.
Musk’s efforts to curb Skynet
Musk does not necessarily believe that AI will end the human race. More simply, he sees it as a possible outcome if humanity gets between artificial intelligence and one of its goals.
Humankind is heading toward a time when AI will be far more intelligent than humans, and Musk, in his own words, only wants to be sure the odds are in favor of a positive future.
OpenAI was created to prevent the power of artificial intelligence from being monopolized. Musk and others funded OpenAI’s research to advance digital intelligence in the way most likely to benefit humanity.
In parallel, what humanity needs to live in tune with artificial intelligence, according to Musk, is a brain-machine connection. His startup Neuralink aims to pursue this goal.
Musk considers Neuralink a preventive tool against a hypothetical Skynet, the machine antagonist that seeks to wipe out humanity in the Terminator films.
Two points of view
Stephen Hawking, Bill Gates, and many other researchers share Musk’s concerns.
Google is advancing rapidly in AI and machine learning but, Musk fears, without taking the proper precautions.
Researchers at Google, Facebook, and other emerging startups find this perspective irritating. Google claims to have invested in the safety and ethics of AI, dismissing more extreme concerns as overblown.
Musk has repeatedly called for regulation and caution when it comes to AI.
In 2015, Musk, Hawking, and other AI researchers signed an open letter on artificial intelligence, highlighting its potential benefits but also potential pitfalls. The letter calls for research into potential societal impacts.
Shane Legg, one of the co-founders of DeepMind, said, “I think human extinction will probably occur, and technology will likely play a part in that.”
Nick Bostrom, author of Superintelligence, who believes DeepMind is winning the artificial intelligence race, argues that a poorly designed artificial intelligence system may be impossible to correct once deployed.
At a conference, Hassabis addressed these concerns:
“We’re many, many decades away from anything, any kind of technology that we need to worry about. But it’s good to start the conversation now and be aware that, as with any new powerful technology, it can be used for good or bad.”