Much has been said recently about artificial intelligence (AI) and its potential to one day surpass human intelligence. Experts have frequently cautioned against the dangers of AI, and some, like Elon Musk, the owner of Twitter, have even signed a letter calling for a six-month freeze on AI development. While some scientists predict that AI will eventually rule the world, others are more upbeat. Yann LeCun, one of the pioneers of AI and the chief AI scientist at Meta, asserts that while there is “no doubt” AI will one day outperform human intelligence, that will not happen any time soon.
LeCun is regarded as one of the three pioneers of AI. He, Geoffrey Hinton, and Yoshua Bengio shared the 2018 Turing Award for their contributions to artificial intelligence, earning them the title of “godfathers of AI.” Their work on neural networks reportedly underpins chatbots such as ChatGPT, Bing, and Bard. Speaking at a conference, LeCun said that although there is “no question” that artificial intelligence will one day surpass human intelligence, researchers currently lack the “essential concepts to reach that level,” and getting to that point will take years or possibly decades.
Addressing worries that scientists could “turn on a super-intelligent system that is going to take over the world within minutes” once AGI exists, he told the BBC, “That’s just preposterously ridiculous.” Such fears, he added, are unfounded and merely “a projection of human nature on machines.” He continued, “It would be a terrible error to keep AI development under lock and key. It’s like asking someone in 1930 how you’re going to make a turbo-jet safe.” Just as turbo-jets had not yet been invented in 1930, human-level AI has not yet been developed.
Elon Musk and other experts had earlier warned that AI posed an “existential threat,” to which LeCun had responded: “Totally false. It makes an assumption that Elon and some other individuals may have become convinced of by reading Nick Bostrom’s book ‘Superintelligence’ or reading some of Eliezer Yudkowsky’s material,” LeCun was quoted by Business Today as saying during a podcast with venture capitalist Harry Stebbings.
The idea that AI poses an existential threat, he further stated, is founded on the incorrect assumption that “hard take-off” actually exists. LeCun defined hard take-off as the theory that once a superintelligent AI system is activated, it will automatically improve itself and grow more intelligent until it eventually destroys the whole human race. “That’s utterly absurd, because no process ever lasts for a very long time in the real world. Those systems would need to assemble the world’s resources. They would need to have unrestricted authority and agency,” he continued, according to the publication.