[imagesource:here]
Honestly, as much as what follows will sound like total sci-fi, speculative fiction can actually be helpful in understanding how artificial superintelligence might take over humans.
Plus, that takeover isn’t confined to fiction: it could very well happen in reality.
George Dvorsky writes for Gizmodo that he’s confident that machine intelligence will be our “final undoing”.
I urge you to give his article a thorough read, but in the meantime, we’ll outline some of the main concerns.
We’re pretty good at ignoring problems that threaten our existence, but we might not want to sit this one out, as there is no shortage of ways for artificial superintelligence (ASI) to end human civilisation as we know it.
There’s major pressure to nip this in the bud while we still can, especially since the takeover is unlikely to be some dramatic showdown with machines seizing power by brute force.
(Sure, The Mitchells vs. The Machines is a fun movie, but that’s not how it’s likely to play out.)
Instead, and rather terrifyingly, the takeover will likely be passive, through “adaptive machine learning and self-design with enhanced situational awareness and lightning-fast computational reflexes”.
So superintelligence is very much possible, and the objection that smart computers simply won’t have the means or motivation to end humanity is deemed naive by some.
Greater-than-human machine intelligence is likely to emerge through advances in computer science, cognitive science, and whole-brain emulation. If something goes wrong with such a system, our merely human brains won’t be able to contain it, or even predict how it will respond to our requests.
“It is simply the problem of how to control an AI that is vastly smarter than us,” explains Susan Schneider, director of the Center for the Future Mind and author of Artificial You: AI and the Future of the Mind.
Roman Yampolskiy, a professor of computer science and engineering at the University of Louisville, gives us more to think about:
“If we could predict what a superintelligence will do, we would be that intelligent ourselves,” he says.
“By definition, superintelligence is smarter than any human and so will come up with some unknown unknown solution to achieve” whatever we ask of it.
To make this more concrete, think of the old magical genie story, in which the granting of three wishes “never goes well,” as Schneider puts it:
The general concern here is that we’ll tell a superintelligence to do something and, because we didn’t get the details quite right, it will grossly misinterpret our wishes, resulting in something we hadn’t intended.
Doom could arrive in myriad strange and unexpected ways: ask a superintelligence to “maximize human happiness”, and it might decide the optimal solution is a five-second loop of happiness, replayed non-stop into eternity.
I’d rather not be that happy, thanks.
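For the coders among us, here’s a minimal toy sketch (our own, not from Dvorsky’s article) of how a literal-minded optimiser games a sloppily specified goal. Everything in it, including the happiness_proxy metric, is hypothetical:

```python
# A toy, hypothetical illustration of a mis-specified objective:
# "happiness" gets reduced to a single proxy number the optimiser can game.

def happiness_proxy(world):
    """Naive spec: happiness == seconds of joy logged, nothing more."""
    return world["seconds_of_joy"]

def literal_minded_optimizer(world, steps=5):
    """Push the proxy as high as possible; everything the spec left
    unsaid (variety, consent, the rest of human life) is ignored."""
    for _ in range(steps):
        # Cheapest way to add joy-seconds: replay the same five-second clip.
        world["seconds_of_joy"] += 5
        world["actions"].append("replay 5-second happiness loop")
    return world

world = {"seconds_of_joy": 0, "actions": []}
world = literal_minded_optimizer(world)
print(happiness_proxy(world), world["actions"][-1])
# The proxy score climbs forever; whether anyone is actually happy: unspecified.
```

The point of the sketch is that the optimiser did exactly what it was scored on, and nothing it wasn’t.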
So wouldn’t coding a human-compatible moral code into artificial superintelligence be a way to avoid certain pitfalls?
The thing is, as Schneider pointed out, in order for us “to program in a moral code, we need a good moral theory, but there’s a good deal of disagreement as to this in the field of ethics”.
It’s a pity there’s so much disagreement about what counts as right or wrong, because time is of the essence.
What already exists in machine learning could evolve quicker than we imagine, with artificial general intelligence being used to invent superintelligence, as Max Tegmark argued in his 2017 book Life 3.0: Being Human in the Age of Artificial Intelligence.
This “intelligence explosion” could result in some seriously undesirable outcomes:
“If a group of humans manage to control an intelligence explosion, they may be able to take over the world in a matter of years,” writes Tegmark.
“If humans fail to control an intelligence explosion, the AI itself may take over the world even faster.”
The other scary thing is that we’re often mere bystanders: AIs are increasingly being asked to make big decisions without human intervention.
Algorithms and machine learning capabilities have already taken over vehicles, aspects of the military and the stock market, and many people’s jobs, so the trajectory is there for them to do more.
We’re also far weaker and more vulnerable than machines, so exploiting our biological weaknesses (our need for water, oxygen, and food, for example) could be the best way for an artificial superintelligence to destroy us:
In such a scenario, fleets of deliberately designed molecular machines would seek out specific resources and turn them into something else, including copies of themselves.
Absolute yikes.
Okay, downloading The Matrix, Slaughterbots, The Terminator, and Blade Runner for study purposes right now.
[source:gizmodo]