[imagesource:twitter/@mit_csail]
Geoffrey Hinton, often dubbed the ‘Godfather of AI’, just confirmed that he quit his role at Google last week to speak out about the “dangers” of the technology he helped develop.
Hinton helped lay the groundwork for today’s generative AI and was an engineering fellow at Google for over a decade.
In a statement to the New York Times, the 75-year-old said he now regretted his work and wanted to “freely speak out about the risks of AI,” following the rapid rise of ChatGPT and other chatbots.
He worries about misinformation, warning that the average person will “not be able to know what is true anymore”, and about how, in the near future, AI’s ability to automate tasks could upend the entire job market:
In the NYT today, Cade Metz implies that I left Google so that I could criticize Google. Actually, I left so that I could talk about the dangers of AI without considering how this impacts Google. Google has acted very responsibly.
— Geoffrey Hinton (@geoffreyhinton) May 1, 2023
Hinton once thought the AI revolution was decades away, but since OpenAI launched ChatGPT in November 2022, Mashable notes, the intelligence of the large language model (LLM) has led him to change his mind:
“Look at how it was five years ago and how it is now,” he said. “Take the difference and propagate it forwards. That’s scary.”
ChatGPT kicked off a race between Microsoft’s Bing and Google’s Bard. As Hinton made clear, his concern is less about Google specifically and more about the broader risks of AI’s warp-speed development, driven by that competitive landscape.
Without regulation or transparency, companies risk losing control of a potent technology. “I don’t think they should scale this up more until they have understood whether they can control it,” said Hinton.
That’s yet another expert calling for a pause on AI development. More than a thousand signatories – including Apple co-founder Steve Wozniak; SpaceX, Tesla, and Twitter CEO Elon Musk; Stability AI CEO Emad Mostaque; Executive Director of the Center for Humane Technology Tristan Harris; and Yoshua Bengio, founder of AI research institute Mila – signed an open letter imploring AI labs to pause training of systems “more powerful than GPT-4”, saying that the “move fast and break things” strategy is a little risky for the future of humanity.
“Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources,” said the letter. “Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.”
The letter gets gloomier still:
“We must ask ourselves: Should we let machines flood our information channels with propaganda and untruth? Should we automate away all the jobs, including the fulfilling ones? Should we develop nonhuman minds that might eventually outnumber, outsmart, obsolete and replace us? Should we risk loss of control of our civilization?”
The letter asked for a six-month pause to “develop safety protocols that can be audited by third parties”, adding: “If they don’t pause, governments should step in and impose a moratorium.”
If these AI experts, technologists, and business leaders can’t stop OpenAI, Microsoft, and Google from charging full speed ahead with their generative AI models, then we’re in for one hell of a ride.
[source:mashable]