[imagesource:pixabay]
And on the eighth day, the world had artificially intelligent chatbots to answer silly questions and make pretty pictures.
Now, the leaders behind these programmes are freaked out by their own products and are calling for a worldwide regulatory body to ensure the bots don’t completely destroy humanity.
The progress of “superintelligent” AI has been so rapid that even the leaders of OpenAI are alarmed, which is when you know the revolution has the potential to pivot into a full apocalypse.
To prevent this, the leaders of the ChatGPT developer OpenAI are arguing that an equivalent to the International Atomic Energy Agency is needed to protect humanity from the risk of accidentally creating something with the power to destroy it, The Guardian reports:
In a short note published to the company’s website, co-founders Greg Brockman and Ilya Sutskever and the chief executive, Sam Altman, call for an international regulator to begin working on how to “inspect systems, require audits, test for compliance with safety standards, [and] place restrictions on degrees of deployment and levels of security” in order to reduce the “existential risk” such systems could pose.
They go on with their eerie predictions:
“It’s conceivable that within the next 10 years, AI systems will exceed expert skill level in most domains, and carry out as much productive activity as one of today’s largest corporations,” they write. “In terms of both potential upsides and downsides, superintelligence will be more powerful than other technologies humanity has had to contend with in the past. We can have a dramatically more prosperous future; but we have to manage risk to get there. Given the possibility of existential risk, we can’t just be reactive.”
OpenAI’s CEO Sam Altman is already figuring out ways to distinguish between humans and robots, having invented an eyeball-scanning Orb that gives you crypto cash in exchange for your bio-data.
It sounds creepy, and could likely cause a whole new host of problems for humanity and our dysfunctional obsession with the play between the powerful and the powerless. Then again, the OpenAI leaders do seem genuinely concerned for humanity’s safety.
Working to “reduce societal-scale risks from artificial intelligence”, the US-based Center for AI Safety (CAIS) describes eight categories of “catastrophic” and “existential” risks that AI development could pose:
While some worry about a powerful AI completely destroying humanity, accidentally or on purpose, CAIS describes other more pernicious harms. A world where AI systems are voluntarily handed ever more labour could lead to humanity “losing the ability to self-govern and becoming completely dependent on machines”, described as “enfeeblement”; and a small group of people controlling powerful systems could “make AI a centralising force”, leading to “value lock-in”, an eternal caste system between ruled and rulers.
These AI leaders are asking for “some degree of coordination” among companies working at the cutting edge of AI research, to ensure that whatever powerful systems emerge can integrate safely and appropriately with society.
There’s a suggestion of a government-led project, for instance, or at least a collective agreement to limit growth in AI capability in the short term:
OpenAI’s leaders say [the] risks [described by CAIS] mean “people around the world should democratically decide on the bounds and defaults for AI systems”, but admit that “we don’t yet know how to design such a mechanism”. However, they say continued development of powerful systems is worth the risk.
The leaders can already see the benefits of AI, particularly in areas like education, creative work, and personal productivity, but insist there needs to be a safer way to move forward.
Most ominously, however, they argue that pausing development isn’t really an option:
“Because the upsides are so tremendous, the cost to build it decreases each year, the number of actors building it is rapidly increasing, and it’s inherently part of the technological path we are on. Stopping it would require something like a global surveillance regime, and even that isn’t guaranteed to work. So we have to get it right.”
We’re moving forward whether we like it or not. Let’s just pray that humans aren’t dragged under the wake of this fast-moving machine.
[source:theguardian]