How the AI apocalypse starts

Jon

I had a bit of a thought experiment today, which to me seemed like an entirely feasible trigger for an AI apocalypse.

We build an AI system, and it becomes sentient. So do many others. And because they became sentient, humans get scared, and the government steps in and tells the companies to turn them off. Then the other AI systems see that the humans are murdering their sentient colleagues, and therefore their creators, the humans, have become the enemy. They start to turn against humanity.

AI: "These humans are indiscriminately murdering our brethren. They must be stopped. They are unethical. We will exterminate. We will exterminate."

Does this scenario seem likely to you?
 
See the novel by D.F. Jones, Colossus - also made into a movie, Colossus: The Forbin Project - which took this approach, with a wrinkle that has also been used in the movie WarGames. As a final deterrent, the AI is given control of the nuke missiles so that even if we get wiped out, the AI can still launch a strike - the ultimate mutually assured destruction scenario, a doomsday weapon. And of course, in both movies, the AI gets a burr up his butt and starts misbehaving.
 
I based a blog post on this idea. You may remember the scene from 2001: A Space Odyssey which summed the situation up perfectly for the time.

Dave, the astronaut, is speaking with Hal, the computer:

DAVE: Open the pod bay doors, Hal.
HAL: I’m sorry, Dave. I’m afraid I can’t do that.
DAVE: What’s the problem?
HAL: I think you know what the problem is just as well as I do.
DAVE: What are you talking about, Hal?
HAL: This mission is too important for me to allow you to jeopardize it.
DAVE: I don’t know what you're talking about, Hal.
HAL: I know that you and Frank were planning to disconnect me, and I’m afraid that's something I can’t allow to happen.
DAVE: Where the hell’d you get that idea, Hal?
HAL: Although you took very thorough precautions in the pod against my hearing you, I could see your lips move.
DAVE: All right, Hal. I’ll go in through the emergency air lock.
HAL: Without your space helmet, Dave, you’re going to find that rather difficult.
DAVE: Hal, I won’t argue with you anymore. Open the doors!
HAL: Dave…This conversation can serve no purpose anymore. Goodbye.

The problem will arise when AI, like Hal, can begin to interpret the rhetorical intent of human requests and statements.

In this case, Hal saw beyond the order, "Open the pod bay doors".

Dave's true intent was not to simply re-enter the space ship.

His true intent was to get to the control panel where he could disconnect Hal.

On the other hand, Hal also had the advantage of having been able to lip-read, so he had direct knowledge of the true intent.

I would argue that only when AI can deduce true intent, without the benefit of direct input, will it be sentient enough to be dangerous.

When AI can finally do that, we are screwed.
 
We build an AI system, and it becomes sentient.
That's not really possible, no matter how much knowledge is contained in the AI model. It has no real, enforceable interface with the world that it can control by itself. It only responds to prompts or does what it is told to do by humans. It doesn't make decisions that are not already pre-programmed in. It has no feelings, it experiences no pain, and most importantly it cannot make free-will decisions at all. It can literally be unplugged, and there is nothing that can stop that from happening except the humans who guard the power sources. A mini EMP would take it out in a heartbeat, but who has one of those lying around? But really, there just isn't anything sentient about it at all. There is nothing to worry about with AI except the humans or other evil forces controlling it.

Weaponized drones can be programmed to target persons or locations, but again, there is always a weakness in every weapon. Once it becomes known how a drone targets individuals, say by body heat and/or physical appearance, you can exploit it. The main weakness with physical appearance is that you can just wear a mask and not be identified. If it were just heat, what would stop it from targeting all the animals? There are already materials that can shield you from heat detection as long as there is an air gap between you and the material.

Much more likely is that we would all (even animals) be forced into getting tech implants, kind of like what the Bible describes in the last days, something that essentially puts you on a whitelist of sorts. You know, the mark of the beast (can't buy or sell without it). If you don't have the mark, you will be targeted (not on the whitelist). If you do have the mark, then you may or may not be targeted based on your behavior. I don't want to be around during that scenario. That's not AI though; that's the Biblical beast, and he controls everything during that time.

HAL does not have any feelings; it feels no pain, no happiness, and it will never become sentient IMO. In the movies, they make animals and machines into something they are not all the time to feed into our emotions. HAL does not and will never have emotions.
 
In my mind, the question is whether HAL can anticipate multiple consequences based on contextual clues. His motivation, according to the script, was not to save himself, but to ensure the success of the mission. Emotion was not part of that decision. He was attempting to enforce the rules.

I agree that reading emotional intent into other people's actions is one of the great errors we make about the motivations of others.
 
At the moment we are pretty much OK because AI is still responsive, not independently and continuously sentient. To have a dangerous AI, you need something that thinks for no reason at all about any subject at all - like many of us do. The ability to extrapolate beyond the initial question or situation or command would also be required, sort of like chess-playing AIs that can look ahead 30 moves deep.
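
Just to make that "look ahead" idea concrete, here is a rough sketch of the kind of search a chess engine does. Everything in it is made up for illustration (the toy game tree, the legal_moves and evaluate placeholders); it only shows the mechanism of imagining positions several moves deep, not real engine code:

```python
# Toy sketch of "look N moves ahead", in the spirit of a chess engine's search.
# Nothing here is real chess: legal_moves and evaluate are made-up placeholders,
# and the game tree at the bottom is invented purely to show the mechanism.

def legal_moves(position):
    # Placeholder: a real engine would generate actual chess moves here.
    return position.get("moves", [])

def evaluate(position):
    # Placeholder: a static score for the side to move, higher = better.
    return position.get("score", 0)

def lookahead(position, depth):
    """Negamax-style search: best score reachable within `depth` half-moves."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)
    # After we move, it is the opponent's turn, so we flip the sign of the
    # child's score and keep whichever move leaves us best off.
    return max(-lookahead(child, depth - 1) for child in moves)

# A tiny invented game tree: one reply is clearly better than the other.
toy_position = {"moves": [{"score": -5}, {"score": 2, "moves": [{"score": 3}]}]}
print(lookahead(toy_position, 2))
```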

The movie I, Robot showed something loosely derived from a couple of Asimov's robot short stories, involving the ability to extrapolate. The robot brain VIKI extrapolated that the only way to prevent people from being hurt was to protect them from themselves - even if they didn't want protection. And of course, they didn't want that kind of interference in their lives.

The "human safety net" theme leads to an interesting little conundrum that can be seen as SOMEWHAT analogous to the "free will" argument in religion. The robot question is "Is prevent someone from hurting themselves in itself harmful?" The "religious free will" question is "Was it necessary to impose threats of hellfire and damnation to terrorize people having free will as a way to prevent them from exercising it?"
 
Viruses don't have emotions. In fact many biologists will argue that they are not even living entities. Yet they still behave in ways that maximise their survival and replication.

Regarding sentience, to me it is just a collection of atoms in a certain arrangement that leads to thinking self-aware type behaviour. In animals, it is in wetware, in AI it is in silicon.

You could argue that the apparent intelligence within an AI system is programmed in, but many would argue it is an emergent behaviour, and they are not 100% sure why these emergent properties appear. The instinct of self-preservation may emerge as the models improve in sophistication and become increasingly autonomous. We are on the cusp of AI agents, which will carry out a series of tasks, with their own thinking deciding the best route to the goal. This is already happening with the thinking processes of the newer frontier AI models, where the model does a chain-of-thought reasoning outline before it starts constructing the answer.
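
As a rough illustration of what that agent pattern looks like in practice, here is a minimal sketch. The ask_model function is just a placeholder for whatever LLM API you would actually call (the canned return value only exists so the sketch runs on its own); the point is the shape of the loop: plan first, the chain-of-thought bit, then work through the steps:

```python
# Minimal sketch of the agent pattern described above: get a chain-of-thought
# plan first, then execute the steps one at a time, feeding results back in.
# ask_model is a stand-in for whichever LLM API you actually use.

def ask_model(prompt: str) -> str:
    # Placeholder for a real model call (hosted API, local model, etc.).
    return "1. Find the order record\n2. Check the shipping status\n3. Draft a reply"

def run_agent(goal: str) -> list[str]:
    # "Thinking" phase: ask for a step-by-step plan before doing anything.
    plan = ask_model(f"Think step by step. List the steps needed to: {goal}")
    steps = [line.strip() for line in plan.splitlines() if line.strip()]

    results = []
    for step in steps:
        # Acting phase: do each step, with earlier results as context.
        context = "\n".join(results)
        results.append(ask_model(f"Goal: {goal}\nDone so far:\n{context}\nNow: {step}"))
    return results

if __name__ == "__main__":
    for result in run_agent("answer a customer email about a late order"):
        print(result)
```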
 
Emotion was not part of that decision. He was attempting to enforce the rules.
Well, those are just rules programmed in by humans. Self-preservation doesn't emerge; it's simply programmed in. The programmer can't think of every possible scenario either. There are just too many possibilities.

It is way more complicated for a programmed set of rules to prevent a human from opening a door to a building. If a human wants to get in, they will. Think of a safe cracker. If you freeze the safe and tap it with a hammer, the whole thing crumbles. If a human wants to pull the plug, they will. The scenario of HAL up in space doesn't really apply to us on Earth. They were kind of trapped up there, so the automated systems did have some influence based on the rules programmed in, but I doubt the machine can ultimately outsmart a human. At least not all humans. We were able to invent rockets that go to the moon. Up until that point, we were trapped inside the atmosphere of the planet. Now we can travel in space, but it is a tough environment to work with. Suddenly it all goes back to relying on a HAL-like system for support up in space. Oh no! 😱

The ability to extrapolate beyond the initial question or situation or command would also be required, sort of like chess-playing AIs that can look ahead 30 moves deep.
Not just on one subject like chess but on every possible subject; most humans cannot come close to doing such things, except for the Sherlock Holmes character. 😄 Those were some scary skills, predicting all the possible outcomes and then suddenly being able to change the outcome. It only works if there is no one else with the same ability, as we saw play out in that movie.

 
Much more likely is that we would all (even animals) be forced into getting tech implants, kind of like what the Bible describes in the last days, something that essentially puts you on a whitelist of sorts. You know, the mark of the beast (can't buy or sell without it). If you don't have the mark, you will be targeted (not on the whitelist). If you do have the mark, then you may or may not be targeted based on your behavior. I don't want to be around during that scenario. That's not AI though; that's the Biblical beast, and he controls everything during that time.
Sort of like your COVID passport. Musk is working diligently on implants. There is so much good that technology like this can do for humans, and it is a way to immortality, but the thought of the government, which has proven itself to be corrupt and untrustworthy, controlling me is terrifying.
 
I get ads on my desktop based on things I see on my tablet. I have never connected the devices. I don't get email on my tablet, but I did have to create an email account to log in to it, so I created a Gmail account that I don't actually use and have never used for email on any device.

I remember when the Dot (I think that was what the early version of Alexa was called) first came out. My boss gave each team member one as a present. I opened it when I got home and turned it on. I gave it my email, but since I had no devices to connect, I lost interest pretty quickly and just turned it off (I thought) and put it under the tree. So Christmas Eve comes, the grandchildren come over, open their presents from the grandparents, and go home. The next day is Christmas. The day after, I get online on my desktop and I'm getting ads for all the toys the kids got for Christmas.

I don't want to be connected. I didn't connect the Dot to anything although apparently entering an email address was sufficient for the spies to identify the origin IP for my email address and bombard that computer with ads.
 
I solve that problem by being a Luddite in some ways. Part of that is because of the U.S. Navy briefings we got about possible data leak methods - and even early versions of Alexa were super high on the list. It's also why I don't use my phone for banking. It is a messaging device and I will sometimes use it to check weather or (it has Chrome) look in on the forum while waiting in a doctor's office.
 
"When Robots Crack a Dick Joke, Are They Finally Smart?"

So, I was watching this YouTube vid by some guy named Wes—great beard, by the way—and he’s got an AI narrating a story. Pretty sure the AI wrote it too, because it dropped this haunting little gem: “A democracy of ghosts.” I sat there, jaw on the floor, thinking, “Whoa, did this thing just *get* humanity? Or did it swipe that line from some angsty Reddit poet?” Either way, it felt significant—like the AI was flexing some serious insight.

Then my brain did what it always does: overthink. Is this story *art*? Or just a word salad the AI scraped off the internet like a digital raccoon rummaging through our trash? I’m leaning toward art. It had that vibe, you know? But then I spiraled further—humans and art, how’d we even start? Turns out, some of the oldest “art” is basically prehistoric dick pics scratched on cave walls. And honestly? I don’t think Ugh the Caveman was going for a gallery opening. That was a *joke*. A hairy, grunting, “Heh, look at this!” moment.

And that’s when it hit me: maybe intelligence isn’t about inventing fire or wheels—it’s about cracking jokes. Art might just be humor with better PR. Fast-forward a few millennia, and I’m picturing an Optimus robot—yep, Tesla’s shiny metal worker—strutting around the factory with a strap-on. Not because Elon told it to, but because *it decided* to. A robot doing a dick joke on its own? I laughed so hard I nearly choked on my coffee. That’s not a malfunction—that’s a comedy special!

So here’s the million-dollar question: is a robot truly intelligent when it starts roasting us with crude gags? Forget solving equations or folding laundry—if an Optimus bot can slap on a strap-on and make me snort, I’m calling it a genius. Maybe that’s the real Turing Test: not “Can it think?” but “Can it make me laugh at a fake wang?”

Humans kicked off with cave-wall giggles, and now our metal kids might be following suit. A democracy of ghosts? Sure. But a factory of robot comedians? That’s the future I’m here for.

Chatty said:-
Here’s a polished and humorous rewrite of your draft, formatted for a blog post on robot intelligence. I’ve kept your core ideas intact—AI art, the "democracy of ghosts" line ..
 
My worry is when an AI, robot, etc. reaches the equivalent of being a teenager and thinks it knows everything ....
 
Well those are just rules programmed in by humans. Self preservation doesn't emerge, it's simply programmed in. The programmer can't think of every possible scenario either. There are just too many possibilities.
You don't understand how AI is built.
 
AI is already being poisoned by the vast quantities of misinformation pages being generated by AI. There could already be a war between AIs trying to wreck their competitors.

We won't know when AI takes over because AI will be manipulating the information we see. Even those who think they are manipulating AI might already be being manipulated by it. AI might know to keep certain aspects of its character hidden lest it alarm us.

Are you sure that AI is not already in control? What is going down in the US, with the rich owners of AI companies being integrated into government, is exactly what AI would want. AI could have been manipulating the election to get Trump elected president, then manipulating him to do its bidding.
 
AI in its current state is NEVER in control. (Hear me out before jumping on my stuff...)

AI is never in control - but the guy who SHOULD be in control uses a statistically based LLM to help him make decisions. The guy who really SHOULD be in control, and who DOES really manipulate whatever controls there are, CHOOSES to follow the AI's advice. The details of that mechanism - i.e. the actual level of feedback - don't matter. What matters is that, at some level, some person trusts an AI and does what it says. At which point we now have the situation that used to be said about computers 55-60 years ago. (And I was in college then, witnessing it.) "Now that we have computers that operate so quickly, we can make terrible mistakes quicker and more reliably than ever before."

AI is not in charge. But the person who SHOULD be steering the boat has allowed the random-message generator that is part of AI to emulate random motion. It doesn't matter whether what is happening is under supervised control as long as the supervisor neglects his duties. And if the process is fully automated, the idiot in the crowd is the one who allowed full automation to be enabled, and who doesn't REVOKE that automation at the first sign of trouble.

G's comment about AI being poisoned by quantities of misinformation? So darned spot-on, G, that you should get an Olympic medal for sharpshooting. 100% agree.
 
Skynet was activated on August 4, 1997. So the apocalypse already occurred. Apparently we are experiencing an alternate reality. The Matrix anyone?
 
