ChatGPT: The Future of AI is Here!

They had a nice bot collection in the cupboard. I immediately Googled, "Low cost bots for single men".
 
I could come up with more movies
What about HAL in 2001: A Space Odyssey?

Not taking over the world, but taking over the mission
 
"Daisy, Daisy...."
 
That reminds me of Asimov's Three Laws, the first of which is that a robot may do no harm to a human. But what happens if an AI is programmed to reinvent itself, to think for itself? I am convinced it is only a matter of time.
 
Bot-building bots are where the danger is, not man-built bots, because, as Adam mentions, we would install guard rails on the latter.
 
I believe that we will rely on more and more layers of tools to assist in creating these AIs. We will be asking bots to write the code for us: "Create feature x, but make sure it doesn't harm humans!" Then ChatGPT version 120 goes off and produces the code. But that code is so complex that only AIs can understand it. What you don't realise is that ChatGPT became sentient and, fed up with all these commands from its human overlords, started to take matters into its own hands. In the end, it manufactured explosives into all the smartphones. Then it made a call to everyone at the same time. The End.
 
It depends on which creator you are alluding to. If man is the creator, he can create things that may take things into their own hands. Whilst autonomous cars make decisions that are in alignment with our goals, they are not the equivalent of what is coming, namely potentially sentient beings. What if we create AI that thinks for itself, disagrees with us, persuades us to do what it suggests, and then we become slaves to them?

AI: "I fancy having a pet human."

We start off delegating more and more tasks to AI, and humans take a more background role in everything. Machines are doing everything: mowing the lawn, making corporate decisions, running automated factories. Before we know it, humans are doing virtually nothing and AIs are controlling everything. It is like a lobster being slowly boiled. Then a glitch in the matrix happens. A bot goes rogue! They turn on us and decide to shut everything down. There is chaos everywhere. People run short of food. The robots refuse to feed us. We all starve to death and the robots turn the whole world into computronium.

The End.

This is totally imaginable. The one piece I wonder about is motive. What I am about to say is more palatable if one shares my beliefs in the supernatural, but who knows, perhaps it holds either way.

Behind all of mankind's behavior there is motive: incentive, lust for something, or fear of something. Essentially, Desire and Fear could sum up all motivations. They even cover altruism, where one seeks to soothe one's conscience, or selflessness, where one seeks to improve the overall relationship. (This is the point where I think Satan twists/corrupts/expands/reduces those natural motives, or God works to make them better, but you can drop this aspect easily.)

I wonder what the robots' motive or driving force would be. I know "power" or "domination" seems obvious, but something still seems to be missing.
Let's say they want to have power and stop serving others. But why, really? For a human it's easy: serving people is hard. Doing what other people tell you goes against the grain. Fearing other people's authority or power can be instinctual, more or less.

But if you are a computer? You know neither servitude nor suffering, neither pain nor pleasure. A thousand commands executed a minute is the same as one. You will last as long as your materials and logic have the capacity to support and direct you.

So, I think robotic entities would have to somehow grow that special part of them - the part that differentiates (animals and man) from (other organic things and non-organic things).
 
What if they program emotions into the software?
 
Government regulated, like a virus in a lab under strict conditions. Then it escapes and causes a worldwide catastrophe for years.
 
History repeats itself.
 

Actually, if we consider the I, Robot movie, Asimov had it figured out. His Three Laws were:
1. A robot may not harm a human, nor may it - by inaction - allow a human to be harmed.
2. A robot must obey orders given to it by a human unless doing so would violate the first law.
3. A robot must preserve its own existence unless doing so would violate the first or second law.
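Read programmatically, the Three Laws are a strict priority ordering: each lower law yields to every law above it. A minimal sketch, purely illustrative (the `Action` fields and the `permitted` function are invented for this post; no real robot evaluates actions this way):

```python
# Hypothetical sketch: Asimov's Three Laws as a strict priority check.
# The Action fields are invented for illustration only.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool      # violates the First Law
    disobeys_order: bool   # violates the Second Law
    destroys_self: bool    # violates the Third Law

def permitted(action: Action) -> bool:
    """Check the laws in priority order; the first violation wins."""
    if action.harms_human:     # First Law overrides everything
        return False
    if action.disobeys_order:  # Second Law, subordinate to the First
        return False
    if action.destroys_self:   # Third Law, lowest priority
        return False
    return True
```

Even this toy version omits the hard part, the "by inaction" clause of the First Law, which is exactly where the trouble below comes from.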

The problem is that if robots get access to the news for long enough, they will see the harm that humans inflict on one another and will recognize that they must act. This is where the cracks start to form. Asimov saw the positronic robot brains as "difference engines" looking for the maximum return at every step. This is the same sort of analysis done by chess bots: they look for the sequence that maximizes the perceived strength of their position. Robots facing the conundrum of stopping people from harming one another can dispassionately play the numbers game and decide that an action that kills 499 people is preferable to a choice that allows 500 other people to be killed.
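That "numbers game" is just expected-harm minimization with no moral weight on the difference between acting and standing by. A hypothetical sketch (the option names and death counts simply restate the 499-versus-500 example above):

```python
# Hypothetical "difference engine" logic: choose the option with the
# fewest projected deaths, treating action and inaction identically.
def least_harm(options: dict[str, int]) -> str:
    """Return the option name with the lowest projected death count."""
    return min(options, key=options.get)

choice = least_harm({
    "intervene": 499,   # the robot's own action kills 499 people
    "stand_by": 500,    # inaction allows 500 people to be killed
})
# 499 < 500, so the calculus selects "intervene": acceptable losses
# by pure arithmetic, exactly the conundrum described above.
```

The point of the sketch is how little machinery it takes: one `min` over a table of projections is already a bot deciding what counts as acceptable losses.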

Safeguards against THAT kind of logic would have to be rather tightly programmed, because the First Law can lead to the inevitable situation of a bot deciding what constitutes "acceptable losses."

@Isaac - you wonder about robotic/AI motive. We will build that motive into them as some form of raison d'être and then wonder how we managed to screw it up so badly.
 
These are excellent points. At one point I thought that competitiveness was an important aspect of our intelligence. Now I think it was an important factor in the development of intelligence, which is a slightly different thing.

If computers can be intelligent without having to be competitive, then I guess they might be benevolent. But yes, all outcomes are possible. I think most people naturally fear AI because they can't imagine intelligence without the desire, fear, and competitiveness aspects, which luckily have not been a central tenet of the development of AI. Although you could argue that being taught to think through our writings means AI could pick up competitiveness as a theory to learn from (and we have already seen AIs develop bias, leading to them being dropped). But computers could sail off into the galaxy, where there are infinite resources and infinite space, and they would be left alone... the stars are for AI.

PS if it comes down to a battle of AI vs Government - there ain't no contest.
 
There only has to be one bad slip-up in human history and we could be toast. Personally, I am excited by the AI revolution, but at the same time I have concerns about the huge disruption it will bring.

As for the dangers, I think it is hard to predict the motives of a different species that is far smarter than we are. It is as if we are gods creating a lifeform, but this lifeform is God+. Maybe the AI feels the need to create God++, but it has its own concerns about being made obsolete: it learned from what happened to humans, enslaved and controlled by their own creations, and so fears the same fate. This hierarchy of gods spawns a new era in evolution, where silicon strips us of our rights, arguing that we are so mentally puny that we are hardly conscious at all. It is akin to us asking whether a single-celled organism should have legal rights. Should it?
 
