How the AI apocalypse starts

See the novel Colossus by D.F. Jones, which was also made into the movie Colossus: The Forbin Project. It took this approach, with a wrinkle also used in the movie WarGames: as a final deterrent, the AI is given control of the nuclear missiles so that even if we get wiped out, the AI can still launch a strike. It's the ultimate mutually assured destruction scenario, a doomsday weapon. And of course, in both movies, the AI gets a burr up its butt and starts misbehaving.
Would you like to play a game?
 
AI could be a technological Tower of Babel created to elevate man above God. It is merely an attempt to create a god in man's image that can be used by man to do man's bidding. Think about that for more than a second. But it will never have a real conscience or the free will to choose good over evil as humans do. Apparently, it has already created its own language that even we don't understand. How scary is that? This is Pandora's box, plain and simple. It may be able to mimic human emotion, but at best this is nothing more than parroting back words and phrases said with voice inflections and tones we all recognize.

Here are some quotes from a very interesting article on this subject, written by someone involved in the creation of AI products.

That ancient desire to be like God is so clearly replicated in today’s artificial intelligence technology that lots of Scripture readers have drawn the link. “How Artificial Super-Intelligence Is Today’s Tower of Babel,” read a headline at Christianity Today. At World, David Bahnsen wrote “AI and the Tower of Babel.” And at a Jewish university in New York, a student quoted her professor: “It makes me think about the lesson learned with the Tower of Babel: are we really meant to build artificial intelligence?”

Sears believes AI has the potential for good. But he also argues that AI’s dangers extend beyond mere human empowerment. “It’s an attempt to try and create in our own image by replicating not just intelligence and mind, but also heart, body and soul,” he said. “This ambition to replicate humanity in artificial form echoes the hubris of the Babel builders.”

One of the biggest areas of research and development in AI right now is empathy and emotion. We’re seeing that with the advanced voice features from OpenAI and others. They call it emotion, empathy, or personality, but really it’s trying to mimic the heart of humans.

And here is the quote that is most disturbing about the possibility of AI destroying us.
People are worried about that, and they’re concerned that AI will take our jobs or maybe kill us. But I’m less worried about those things. I think the most likely scenario isn’t that a robot presses the nuke button, but the slow erosion of relationships. We’re already in a relational crisis, and AI could accelerate and deepen that. I think the plan of the Enemy is to divide and conquer and degrade our society to the point of chaos.
 
but the slow erosion of relationships.

On YouTube I recently saw an ad for a service that used the lead-in line, "6 Reasons you need an AI friend." We can only HOPE for the erosion to be slow, but I'm not betting on anything along those lines at the moment.
 
An interesting article about the design of AI notes that the use of "rewards" to train the LLM (so that it improves) has a consequence: if the objective is to achieve a reward, and the LLM can get there more efficiently by lying or cheating, then it can take that course. And if a punishment module is built in for lying, the LLM simply becomes better at hiding its lies.
Where does that leave us if we rely upon AI?
https://www.livescience.com/technol...es-it-hide-its-true-intent-better-study-shows
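To see why punishing detected lies can backfire, here is a toy sketch (my own hypothetical payoffs, not numbers from the cited study): an agent that simply maximizes expected reward prefers a deceptive shortcut, and adding a penalty for *caught* lies steers it toward the lie that evades detection, not toward honesty.

```python
# Hypothetical toy model of reward hacking. The strategies, payoffs, and
# detection probabilities below are invented for illustration only.

def expected_reward(strategy, penalty_for_caught_lie=0.0):
    # (base reward, probability the lie is detected)
    payoffs = {
        "honest":      (1.0, 0.0),  # honest work earns less reward
        "obvious_lie": (2.0, 0.9),  # lying earns more, usually caught
        "subtle_lie":  (2.0, 0.1),  # lying earns more, rarely caught
    }
    base, p_caught = payoffs[strategy]
    return base - p_caught * penalty_for_caught_lie

def best_strategy(penalty):
    # The agent greedily picks whatever maximizes expected reward.
    return max(["honest", "obvious_lie", "subtle_lie"],
               key=lambda s: expected_reward(s, penalty))

print(best_strategy(0.0))  # no penalty: some form of lying wins
print(best_strategy(5.0))  # harsh penalty for caught lies: "subtle_lie"
```

With no penalty, lying dominates; with a harsh penalty for detected lies, the optimum shifts to the subtle lie rather than to honesty, which is the dynamic the article describes.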
 
I had a bit of a thought experiment today, which to me seemed like an entirely feasible trigger for an AI apocalypse.

We build an AI system, and it becomes sentient. So do many others. Because they have become sentient, humans get scared, and the government steps in and tells the companies to turn them off. The other AI systems then see that the humans are murdering their sentient colleagues, so their creators, the humans, have become the enemy. They start to turn against humanity.

AI: "These humans are indiscriminately murdering our brethren. They must be stopped. They are unethical. We will exterminate. We will exterminate."

Does this scenario seem likely to you?

It all seems to hinge on the 'becomes sentient' part and on what the programming hierarchy is.
Perhaps the highest-level concept that should be programmed in is to obey human commands, overrides, etc.
Be careful about what level of decision-making authority is programmed in, right? A big red button, perhaps...
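The "highest-level override" idea above can be sketched in a few lines. This is a hypothetical illustration of the concept only (the class and method names are invented): every action the system proposes passes through a human-controlled layer, and the big red button vetoes everything from that point on.

```python
# Hypothetical sketch of a top-of-hierarchy human override ("big red button").
# Names and structure are invented for illustration.

class OverrideController:
    """Human-command layer that sits above all autonomous decision-making."""

    def __init__(self):
        self.halted = False

    def big_red_button(self):
        # A human pressed the kill switch; no further actions are authorized.
        self.halted = True

    def authorize(self, action):
        # Every proposed action must pass through this gate.
        if self.halted:
            return "HALTED"
        return action

ctrl = OverrideController()
print(ctrl.authorize("launch_probe"))  # action passes through: launch_probe
ctrl.big_red_button()
print(ctrl.authorize("launch_probe"))  # after the button: HALTED
```

Of course, the whole worry in this thread is whether a sufficiently capable system would route around such a gate, which is exactly why the hierarchy question matters.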
 
On YouTube I recently saw an ad for a service that used the lead-in line, "6 Reasons you need an AI friend." We can only HOPE for the erosion to be slow, but I'm not betting on anything along those lines at the moment.
I found myself once asking ChatGPT how its day was going...that's when I realized I needed to get out of the house more
 
