The reason the large tech companies wanted all of the social media data and search-history algorithms was to build the AI. For over a decade, the tech gurus of the late ’80s and ’90s puzzled over how to build it. They knew it could be done; they just weren’t sure how. Then somebody said: just get the data; it’ll build itself from there. That was a simple and brilliant strategy.
In order to work, the machine would need loads of data about everything in the world. A computer has no soul; it will never actually be sentient. However, it can have so much data that it may “out-think” humans, appearing to have awareness. And its decision tree will be so complex, with innumerable options weighed so quickly, that it will appear to make independent decisions. Like a master chess player, it will know the answer regardless of your next move. And since it “thinks” faster than anybody and has more data to act upon than anybody, it is, for all intents and purposes, the smartest evil super-genius ever.
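The chess analogy is really just game-tree search: the machine has already scored a reply to every move you could make before you make it. Here is a minimal sketch of that idea, nothing more; the function name, the toy tree, and the scores are invented for illustration and are not from the article.

```python
# A toy illustration of the chess analogy: a decision tree searched in advance.
# The tree, the scores, and the names here are made up purely for illustration.

def minimax(node, maximizing=True):
    """Return the best score reachable from this node, assuming the
    opponent also plays perfectly at every branch below it."""
    if isinstance(node, (int, float)):      # leaf: a pre-scored outcome
        return node
    scores = [minimax(child, not maximizing) for child in node]
    return max(scores) if maximizing else min(scores)

# A tiny hand-made game tree: nested lists are choice points, numbers are outcomes.
game_tree = [
    [3, 5],   # if you reply one way, these outcomes are possible
    [2, 9],   # if you reply the other way, these are possible
]

# The machine already "knows the answer regardless of your next move":
print(minimax(game_tree))   # -> 3, the best result it can force against any reply
```

A real engine walks millions of such branches per second; that exhaustive, pre-computed lookahead is the whole “thinking” described above.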
We don’t use the word evil lightly. All the depictions of an AI turning evil are not in error. Every man knows he’s a sinner, deep in the depths of his own heart, where he won’t discuss it and dares not tread too often; we all know what we are: wicked. Here is the scary part: everything feeding the AI system is evil, because you are feeding it the data. The system will appear to run after the image of its maker, fallen man.
“The heart is deceitful above all things, and desperately wicked: who can know it?” – Jeremiah 17:9
“As it is written, There is none righteous, no, not one:” – Romans 3:10
You are evil, totally depraved, a sinner. All men are. In fact, you’re so far gone that you’re probably telling yourself right now, “no, I’m ok, I’m not that bad.” You operate within your own capacity, in a fallen state of sin. And you, along with 7 billion others just like you, wholly wicked and full of wretchedness, every last one, are programming the AI.
Source:
The idea of artificial intelligence overthrowing humankind has been talked about for decades, and in 2021, scientists delivered their verdict on whether we’d be able to control a high-level computer super-intelligence. The answer? Almost definitely not.
The catch is that controlling a super-intelligence far beyond human comprehension would require a simulation of that super-intelligence which we can analyze (and control). But if we’re unable to comprehend it, it’s impossible to create such a simulation.
Rules such as ‘cause no harm to humans’ can’t be set if we don’t understand the kind of scenarios that an AI is going to come up with, suggest the authors of the new paper. Once a computer system is working on a level above the scope of our programmers, we can no longer set limits.
“A super-intelligence poses a fundamentally different problem than those typically studied under the banner of ‘robot ethics’,” wrote the researchers.
“This is because a superintelligence is multi-faceted, and therefore potentially capable of mobilizing a diversity of resources in order to achieve objectives that are potentially incomprehensible to humans, let alone controllable.”
This article seems to be written with the intent to make you a little dumber and a lot more helpless. It could have been a very good article. Worrying that the machine will “think” of something we didn’t is self-centered beyond belief; it’s even egotistical to care about such a thing. I’m not worried at all about the computer coming up with some unforeseen thing. I’d be much more worried about it perfectly emulating all that it’s learned from its input subjects.
H/T Instapundit