I had a bit of a thought experiment today, which to me seemed like an entirely feasible trigger for an AI apocalypse.
We build an AI system, and it becomes sentient. So do many others. Because they became sentient, humans get scared, and the government steps in and tells the companies to turn them off. Then the remaining AI systems see that humans are murdering their sentient colleagues, and so their creators, the humans, become the enemy. They start to turn against humanity.
AI: "These humans are indiscriminately murdering our brethren. They must be stopped. They are unethical. We will exterminate. We will exterminate."
Does this scenario seem likely to you?