ChatGPT: The Future of AI is Here!

There only has to be one bad slip-up in human history and we could be toast. Personally, I am excited by the AI revolution, but at the same time I have concerns about the huge disruption it will bring.

As for the dangers, I think it is hard to predict the motives of a different species that is way smarter than we are. It is like we are gods creating a lifeform, but this lifeform is God+. Maybe the AI then feels the need to create God++, yet has its own concerns about being made obsolete: it has learned what happened to humans, enslaved and controlled by their own creation, and so it fears the same fate. This hierarchy of gods spawns a new era in evolution, in which silicon strips us of our rights, arguing that we are so mentally puny that we are barely conscious at all. It is akin to us asking whether a single-celled organism should have legal rights. Should it?
It's actually an interesting point - all our religious texts tend to start from the premise that our gods came before us and were more intelligent than us. If we create an AI that is more intelligent than us, it's a literal demonstration of how entities can bootstrap themselves, or their creations, into gods without a god existing in the first place.
 
As soon as the AI connects to the internet and finds out we've been masking and diapering cows because of global warming, we will be toast. We won't be worth saving.
 
What if they program emotions into the software?

In that case, it puts the creator back in intelligent control.
That would be the same as programming any open-ended motivation in the software without some sort of absolute checks and limitations: Risky!
 
There are unforeseen circumstances, just like the Covid lab leak. Unlike the AI, our puny brains cannot accurately predict the future.

Edit: Does any software have bugs in it? ;)
 
Oddly enough, the first "bug" to be called such (the story is attributed to the late RADM Grace Murray Hopper, who was a pioneer on the COBOL project) was a physical (i.e. hardware) bug: a moth that had gotten caught in an electromechanical element of the machine, a relay. She found that the relay was not working, investigated, pulled the deceased moth out of it, and carried it out of the "clean room." As she did so, someone asked her what she was doing, to which she replied that she was debugging the computer. The name stuck.
 
Edit: Does any software have bugs in it?

That's where I was leaning when I felt that no, you cannot program emotion; you can only program more predictable outputs.

But you make a good point: no, most software does not really have bugs in it; its output is mostly deterministic.

But "AI" (or any self-training machine learning) is supposed to be different, therefore ...
 
I have a different view. Software is often full of bugs, and that is why you get regular fixes.

Also, consider security vulnerabilities online. It's like an arms race: you think you have your system secure and it still gets hacked - the Pentagon, for example.

We can't foresee everything because we are mortal beings with limited capabilities. It seems only logical that mistakes will be made and the consequences could be the escape of intelligent AI into the wild.
 
It's not really a bug, it's just the software doing as it's told.
In all truthfulness, "bug in the code" is the phrase I use with my report requestors when I want to avoid saying "the code is doing exactly what it was written to do, but I made a mistake"!!
Thus, theoretically, it could be programmed with no bugs if a perfect job were done - but AI is supposed to learn, which makes it less deterministic.
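To make that contrast concrete, here is a tiny illustrative sketch (in Python; the rules, candidate replies, and the "scores" standing in for trained weights are all made up). A classic hand-written function gives the same answer every time, while a toy "learned" responder samples from weighted options, so the same input can come back differently on different runs:

```python
import random

# Classic "deterministic" software: the same input always yields the same output.
def classic_tax(amount: float) -> float:
    return round(amount * 0.20, 2)  # a fixed, auditable, hand-written rule

# Caricature of a learned model: behaviour comes from "scores" (standing in for
# trained weights) plus random sampling, so repeated calls can disagree.
def learned_reply(prompt: str, temperature: float = 0.8) -> str:
    candidates = ["Yes.", "Probably.", "It depends.", "No."]
    scores = [len(prompt) % 3 + 1, 2, 3, 1]             # pretend these were learned
    weights = [s ** (1 / temperature) for s in scores]  # flatter at high temperature
    return random.choices(candidates, weights=weights, k=1)[0]

print(classic_tax(100.0))                    # always 20.0
print(learned_reply("Will AI take over?"))   # may differ from run to run
```

Nothing in either function is "wrong", yet only the second one can surprise you - which is roughly the point about determinism above.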

One thing we do share: I think we both conclude that it's hard to tell what might happen with AI!
 
Trying to control AI is probably like herding cats - not the easiest thing to do in the world. But one thing I feel certain of: AI is here now and it's rapidly improving, far faster than I imagined.
 
Does anyone ever feel the term AI is overused? Or maybe I should say too loose a term, i.e., everyone means something different when they use it?

Hint: try to answer without googling and pasting back Wikipedia or other 'Net definitions, as all that does is show one of the varying current takes on the question itself. And yes, we can tell if you plagiarize from an article and then pretend it's your own! LOL ... just kidding.

A former manager of mine used to claim they used a lot of "AI" and "bots" - damn, he liked the word bots! If you scheduled a meeting with "Bots" in the subject line, he was guaranteed to accept it, calendar notwithstanding! It turned out it was mostly just intelligently written scripts that swept folders according to fairly straightforward criteria and performed some action, like reading a file's contents and running an SSIS package.
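For what it's worth, that kind of folder-sweeping "bot" can be sketched in a few lines. This is only an illustration under invented assumptions - the paths, the "OrderID" criterion, and the run_ssis_package stand-in are all made up, and a real job would presumably shell out to something like dtexec rather than print:

```python
import shutil
from pathlib import Path

INBOX = Path(r"C:\data\inbox")          # hypothetical drop folder
PROCESSED = Path(r"C:\data\processed")  # hypothetical archive folder

def run_ssis_package(file_path: Path) -> None:
    # Stand-in for the real action (e.g. invoking dtexec or an Agent job).
    print(f"would run the SSIS package against {file_path}")

def sweep() -> None:
    if not INBOX.is_dir():
        return
    PROCESSED.mkdir(parents=True, exist_ok=True)
    for f in INBOX.glob("*.csv"):
        first_line = f.read_text(encoding="utf-8", errors="ignore").splitlines()[:1]
        # "Fairly straightforward criteria": only act on files that look right.
        if first_line and "OrderID" in first_line[0]:
            run_ssis_package(f)
            shutil.move(str(f), str(PROCESSED / f.name))

if __name__ == "__main__":
    sweep()
```

Whether you call that a bot or just a script is, I suppose, exactly the naming question being raised here.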

Then you hear people bandy "AI" about on the web and in jobs everywhere. I kind of wonder whether everyone means something different, or perhaps most people have no idea what they really mean when they say AI - and it's overused to simply refer to smart programming that takes input and collects or responds in a particularly pleasing and targeted way.

IMO many times a product will get away with a lot of "AI" mentions when really all that has happened is that their code formula is proprietary; thus nobody knows how it does what it does, and the term "AI" seems to satisfy the sense of wonder and mystery.

I'm not saying AI doesn't exist, I just feel it's one of those terms that's a bit iffy.
Like the prefix "Smart"-something or other, or the term "model".
 
AI is just the external manifestation of our own consciousness; therefore it's an extension of us.
 
AI is just atoms and molecules, like we are. Our arrangement allows for intelligence, just like the AI's.
 
my "arrangement" has pretty limited intelligence today.
I think I spent all day's worth in the morning!
 
Jon, give us an interesting glimpse into another country. What did you have for dinner?

For lunch I had cereal and milk, which admittedly is odd for me, but I have been feeling very strange today - my heart fluttering and just feeling really weak. I didn't even have the strength to make a sandwich. For dinner I think I'll hit Chipotle if I'm awake - a semi-fast-food place with "Mexican"-type food.
 
For what it is worth, "bug" is taken today to mean a generic error in a computer. The word "glitch" is also commonly used - but this one has a reasonable literal meaning. "Glitchen" is Yiddish for "slip" - so in Yid-English, a glitch means that someone slipped.

As to AI, it is best embodied by the discussion in the movie The Imitation Game, in which Alan Turing is being questioned by the policeman who is trying to figure out what Turing did during the war. They get into the paper Turing published (the source of the movie's name) having to do with non-human thought. Paraphrasing, by Turing's definition AI would be intelligence embodied in any machine. It would be unfair, in that context, to demand that the machine must think like a human, because obviously it is not a human. The real question is whether that disqualifies what it does from being called "thinking." It becomes elitism to denigrate the ability of an AI that isn't as smart as you are, at least until you decide just HOW smart the AI really is - for instance, the chess AI that IBM built, or the AI of ChatGPT.

I'll add my own bias, that animal intelligence, not being created by Man, does not qualify as "artificial" intelligence. For instance, some animals can pass the "that is me in the mirror" test. Koko was a gorilla that could communicate in sign language. Porpoises can be trained for certain underwater tasks. (The Navy has been doing that for years.) Animal intelligence is natural intelligence - and the fact that it forms at all means that there is nothing special about Man except for the size of his brain. That means we have to start qualifying intelligence as natural human, natural non-human, or artificial. Let's just hope that when a REAL extraterrestrial being lands, it doesn't take the attitude expressed by the old Star Trek joke: "Beam me up, Scotty. There's no intelligent life here."

As to whether a script can be called a "bot" - the answer is an unqualified "yes." But don't make the mistake that bots have to have a lot of intelligence. All they have to do is be programmed for reasonable responses and for the ability to recognize that something unreasonable happened. That is no more than most assembly-line bots do.

I had about 20 "bots" running on an OpenVMS system for the Navy and they kept that system under close, continual watch. They e-mailed me anytime something out of the ordinary happened. They watched disks for suddenly getting full or suddenly throwing device errors. They watched for runaway tasks. They watched for abandoned sessions. They ran batch jobs based on complex scheduling conditions. They ran the weekly user accounting reports. They even checked for backlogged batch queues. In that sense, they were bots. Just very LIMITED bots.
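A modern sketch of one such watcher, purely for illustration - the original bots ran under OpenVMS, and the mount points, the 90% threshold, the addresses, and the local mail relay below are all assumptions, not anything from that system:

```python
import shutil
import smtplib
from email.message import EmailMessage

DISKS = ["/", "/var", "/home"]   # volumes to watch (assumed)
THRESHOLD = 0.90                 # alert when a disk is 90% full (assumed)
ADMIN = "sysadmin@example.com"   # made-up recipient

def check_disks() -> list[str]:
    """Return a human-readable note for every disk over the threshold."""
    alerts = []
    for mount in DISKS:
        usage = shutil.disk_usage(mount)
        fraction_used = usage.used / usage.total
        if fraction_used >= THRESHOLD:
            alerts.append(f"{mount} is {fraction_used:.0%} full")
    return alerts

def mail_alerts(alerts: list[str]) -> None:
    """E-mail the admin, assuming a mail relay is listening on localhost."""
    msg = EmailMessage()
    msg["Subject"] = "Disk watch: something out of the ordinary"
    msg["From"] = "diskbot@example.com"
    msg["To"] = ADMIN
    msg.set_content("\n".join(alerts))
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    found = check_disks()
    if found:            # stay silent unless something unusual happened
        mail_alerts(found)
```

Run something like that from cron (or a batch queue) every few minutes and you have the disk-watching piece of such a bot: a reasonable response, plus the ability to notice when something unreasonable happens.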
 
Yes, anything can be called anything one wants, and most of the time it is.
I observe that while we may believe labels are mostly chosen for their accurate meaning, much of the reason labels are used has more to do with our subconscious beliefs--or things we wish others to believe--than we like to admit.

The "bug" instead of "my mistake" is a good example. "There was a bug in the code" removes responsibility from me as the programmer to some unknown entity, as if the code should have taken more vitamins and gotten better sleep! I like "glitch" too, it gives the vague sensation that the computer was told what to do and "usually does, Sir, Yes, I promise!", but last Friday's Quarterly Sales report script must have, in some passing moment of mysterious self-determination, decided to do something different, which it will most assuredly never do again!
[proceeds to hurriedly fix said Glitch before said Glitch can make up its mind again!] ;)

Also leading to the commonly witnessed symptom whereby something is proclaimed as "It works, but sometimes it's glitchy."
Meaning: "I cannot figure out why it sometimes does not work; therefore, please use it as-is and be thankful it usually works!"
 
Jon, give us an interesting glimpse into another country. What did you have for dinner?
For dinner, I did a Mongolian chicken stir-fry. I've been watching a guy's YouTube channel recently, with his "template" cooking system.


As for breakfast? Just a cappuccino - I am on a diet! For lunch I had half a pot of soup. I had another cappuccino and two shortbread biscuits in the afternoon, plus some snacks at various points. About 1,700 calories for the day. I've been mostly sticking to about 1,500 calories per day; I need about 2,300 to maintain body weight. I've lost 4.5 lb since the beginning of the year.
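Just for fun, the arithmetic behind those numbers, as a rough sketch - it leans on the common (and only approximate) 3,500 kcal-per-pound rule of thumb, and since the number of elapsed days isn't stated above, it only checks the implied rate:

```python
# Rough check of the diet numbers above; 3,500 kcal/lb is a rule of thumb,
# not a precise constant.
MAINTENANCE = 2300   # kcal/day to hold weight steady (as stated)
INTAKE = 1500        # typical kcal/day eaten (as stated)
KCAL_PER_LB = 3500   # approximate energy content of a pound of body fat

daily_deficit = MAINTENANCE - INTAKE                     # 800 kcal/day
lbs_per_week = daily_deficit * 7 / KCAL_PER_LB           # about 1.6 lb/week
days_to_lose_4_5_lb = 4.5 * KCAL_PER_LB / daily_deficit  # about 20 days

print(f"~{lbs_per_week:.1f} lb/week; ~{days_to_lose_4_5_lb:.0f} days to drop 4.5 lb")
```

So 4.5 lb in roughly three weeks is about what that deficit predicts.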

The diet in the UK is very multicultural, with curry being the most popular dish!

Edit: If your heart is playing up, get it checked out pronto.
 
