ChatGPT: The Future of AI is Here!

I am reminded of a scene from The Imitation Game, in which the detective is questioning Alan Turing and diverts from the homosexuality investigation to ask about machine intelligence. The summary of the discussion is that it would be unfair to say that a machine cannot think, though it WOULD be fair to say that it cannot think like a human. (After all, it ISN'T a human.) The important question is, can you say that what the machine does is NOT thinking?

In terms of AI, like anything else, we have to define terms. For instance, there is AI... and then there is an AI. We can discuss AI methods, and ChatGPT certainly qualifies there. But an AI entity must be able not only to ANSWER questions but also to ASK them spontaneously. An AI has to be smart enough to ask or answer questions about self-awareness. An AI needs a sufficient degree of independent thought (i.e., responses NOT directly derived from inputs) to diverge from the question, sort of like a 5-year-old playing organized outdoor sports who stops to chase a butterfly.

Descartes' statement, "I think, therefore I am," perhaps needs modification. The long-winded version of the question might be: "Can I ask whether I exist? And if so, does asking automatically prove that I exist?" The problem with that approach, of course, is that it still casts the question in a variant of a form posed by a human. Since the machine isn't a human, does that force a non-human into a human mold? Does the very question try to force a square peg into a round hole? We need to formulate a more comprehensive way of deciding that something is intelligent. The test has to include humans, preferably in a way that classifies all humans as intelligent, though given today's politics, I wonder if that is entirely true.

This question is tied deeply into another recent thread of ours about whether humans have free will. That thread explored the idea that human intelligence is NOT rooted solely in neurons, axons, dendrites, and such, but is a 2nd-order (or higher) phenomenon, a case where the whole IS greater than the sum of its parts. In other words, while we may be biologically constructed machines, our brains are a product of something MORE than just neurons firing in sequence. Intelligence results from the brain reaching and then exceeding a certain (currently unspecified) level of interconnection.

Sci-fi writers have approached this problem for decades and have seen the answer (philosophically speaking) as the product of reaching some level of complexity and brain interconnectivity at which cyber self-awareness becomes an issue. Two examples (out of many...):

David Gerrold wrote When H.A.R.L.I.E. Was One about a machine intelligence that could talk with people and solve problems in a way that no one else could. HARLIE was self-aware, and he became aware that folks were trying to cut funding for his (its) project. So he learned how to blackmail people into supporting his project. He finagled them into building an even MORE powerful AI as part of the Graphic Omniscient Device (yep, the GOD project), for which only HARLIE could possibly serve as the interface.

Randall Garrett wrote Unwise Child about a machine intelligence capable of doing research in nuclear physics. But it was self-aware, and it got polluted by a radical who fed "Snookums" religion, hoping to gain revenge on someone who had wronged his family. The poor little AI went nuts trying to deal with an essentially untestable medium, the spirit world, and had to be shut down.
 
I am a fan of this technology. I believe a revolution is coming, similar to the horseless carriage vs the horse and buggy manufacturers. I feel bad for people who've invested many years in learning their craft only to be rendered obsolete overnight. Hopefully, they will embrace the future and use this tech to their own advantage.
 
Over the last few weeks, I've been getting phone calls from AI trying to sell me life insurance. They sound very human, but they don't listen. If you answer one of the questions with a bit of nonsense, it assumes yes or no (whichever is in their favour), and you can't ask questions.

AI: Hi - how are you today?
Me: Well, not so good; I had an accident with my lawn mower and chopped my feet off. I'm currently under general anaesthetic having them sewn back on.
AI: That's great! How are you covered for life insurance?
Me: How are you today?
AI: That's good to hear. I'm sure we can provide something more cost-effective.

and if you stay silent, it just burbles on

Still got some way to go
That sounds very much like an earlier Post in this thread about how useless chatbots have been for customer service on websites.

Exactly the same. They seem to be programmed with a very small number of branches on the decision tree; any competent member I've seen on this site could do better. It's certainly not approaching anything I would call AI.

Then again, there are probably different techniques for interacting with AI, and of course it is up to the programmer to do a good job leveraging its capabilities.
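To illustrate the point about shallow decision trees, here is a minimal sketch of the kind of keyword-branching bot being described. All the keywords and canned responses are hypothetical, just to show why such bots feel so limited:

```python
# Minimal sketch of a keyword-branching chatbot: a handful of branches,
# a fallback that assumes whatever answer favours the seller, and no
# ability to field questions from the caller.
RULES = [
    ("refund", "Please contact billing for refunds."),
    ("hours", "We are open 9-5, Monday to Friday."),
    ("agent", "All agents are busy. Please hold."),
]

def reply(message: str) -> str:
    text = message.lower()
    # Walk the (very short) list of branches looking for a keyword match.
    for keyword, response in RULES:
        if keyword in text:
            return response
    # Anything unrecognised falls through to a canned, self-serving answer,
    # which is exactly the behaviour described with the insurance calls.
    return "Great! Can I interest you in our premium plan?"

print(reply("What are your hours?"))   # matches the "hours" branch
print(reply("I chopped my feet off"))  # nonsense still gets the upsell
```

No matter how the input is phrased, the bot can only land on one of a few fixed responses, which is why these systems collapse the moment a caller goes off-script.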
 
I am a fan of this technology. I believe a revolution is coming, similar to the horseless carriage vs the horse and buggy manufacturers. I feel bad for people who've invested many years in learning their craft only to be rendered obsolete overnight. Hopefully, they will embrace the future and use this tech to their own advantage.
You might not believe this from my previous posts, but I do mostly agree with this sentiment.
Despite the fact that my natural bent is to critique the weaknesses of things, which is just a function of my career and what I've learned to do, I do think that drastically increasing forms and applications of automated decision-making, whatever we want to call it, are definitely coming.

I have a lot of concerns though about how positive it will or won't be on the whole.

Anything powerful is also dangerous. And when confronting anything powerful and therefore dangerous, the best thing to do is fully engage, in hopes of having some control and mitigation strategies, rather than run away from it... That I definitely agree with.

And much like the internet, blogging, and quick-hit informational dumps, which dumbed down a lot of people on a lot of subjects to the point where many are stepping back a little, AI will probably be no different.

There will be great applications using it and there will be hideous crappy applications using it. There will be people who are cautious and consider its quality before making decisions based on it, and there will be terrible things and accidents and new beliefs and sequences of events that will be based on it as well.

It is a true statement that social media took over the world by force and had to be engaged with to stay relevant; it is simultaneously a true statement that it devastated the emotional well-being of an entire generation of young people, especially girls, to the point where several states are either banning or severely restricting it. China is way ahead of us; TikTok doesn't even work during the night there. Both statements can be true.

The motor vehicle brought motor vehicle accidents, artificial intelligence will bring artificial intelligence accidents.

Many negative things will only be apparent in hindsight.

That is the dim view, but I'm sure many wonderful things will be done as well.

Hopefully when we look back in 50 years, we will feel or be able to feel like it did more good than harm.

I agree with you that it's coming whether we like it or not, and the safest thing to do both from a profit and a preservation perspective, is engage and understand it.
 
Something I learned a long time ago is that Artificial Intelligence cannot cope with natural stupidity.

Until we have an AI Help Desk that can deal with chowder-headed general users, AI has NOT arrived.
 
Something I learned a long time ago is that Artificial Intelligence cannot cope with natural stupidity
That's where very specific prompting comes into play.

Take our own forum as an example: some questions get answered by many experts while others fall into obscurity. Why? Prompting?
 
I think help desks will very quickly adopt ChatGPT-style AI. If it can pass a bar exam better than most humans, it will be able to provide customer support. But that is the short term. In just a year or two, it will have rapidly increased in intelligence. The explosion in how capable AI has become seems to be accelerating. I don't think we should underestimate where it will be in the next 1 to 3 years.

There has been lots of talk about how far away AGI is, saying it won't be for ages and so on. But now with what has happened this year, I think people are significantly revising their timeline. It could be here in 2 to 3 years, I believe.

Regarding how good its answers are currently, I am sure most of us are still using ChatGPT 3.5. I am thinking about laying down the $20 per month to get early access to ChatGPT 4. It is supposed to be noticeably more intelligent.
 
So far, my $20 investment in ChatGPT 4 has been rubbish! I keep getting an error (screenshot attached).

I only got a response maybe 3 times out of 25.
 
For the past several days the Fox News talking heads have been waxing eloquent on "Artificial Intelligence" (AI). Overall, the narrative being spewed sounds like a bunch of inane Luddites. (I don't know what the talking heads on the other channels have been saying.)

Science fiction has delved into the issue of AI for years. Perhaps the most infamous example is HAL 9000 from "2001: A Space Odyssey". The Cylons from the rebooted "Battlestar Galactica" (2003) were also very interesting, as the series initially considered the question of whether an AI could have a soul. (PS: The series went progressively downhill, eventually becoming unwatchable garbage.) There was an excellent, must-see prequel, Caprica. Unfortunately it was cancelled.

So what will all the cable news pundits have to say when Skynet makes itself known?
 
I got ChatGPT 4 working now. If anyone wants to ask it any questions, let me know and I will feed it in and post here.
 
I got ChatGPT 4 working now. If anyone wants to ask it any questions, let me know and I will feed it in and post here.
I'm interested in how ChatGPT responds to the same question from opposing perspectives. Is it really biased? As an example, see the thread "Why is Biden President". Evidently, the one commenter had his/her post deleted, but @AccessBlaster perceived the (deleted) commenter to have provided an AI generated comment on why Biden "won". @AccessBlaster was kind enough to post an AI generated comment on why Trump lost. I then compared the two posts, response here.

Along the line of asking the same question from two perspectives:
1. Did social media assist Biden in "winning" the 2020 election?
2. Did social media cause Trump to "lose" the 2020 election?
 
I'm interested in how ChatGPT responds to the same question from opposing perspectives. Is it really biased? As an example, see the thread "Why is Biden President". Evidently, the one commenter had his/her post deleted, but @AccessBlaster perceived the (deleted) commenter to have provided an AI generated comment on why Biden "won". @AccessBlaster was kind enough to post an AI generated comment on why Trump lost. I then compared the two posts, response here.

Along the line of asking the same question from two perspectives:
1. Did social media assist Biden in "winning" the 2020 election?
2. Did social media cause Trump to "lose" the 2020 election?
I think what I posted recently about the gender-based strengths and weaknesses is incredibly exposing.

I know some people will see it as uninteresting because it doesn't touch on a current hot topic, but the fact that it was clearly asked to do a thing, flat out refused to do that thing while providing propaganda instead, and only after the third prompting admitted that it had actually done so, definitely gives pause.

I'm just waiting for people to conclude that AI is working because AI told them it's working. Then we should worry! Meanwhile, if AI can suggest metrics by which to be tested, that will be fine, but you can see how all roads tend to lead back to something circular if we're not very careful, and I don't really know what the solution to that is.
 
I'm interested in how ChatGPT responds to the same question from opposing perspectives. Is it really biased? As an example, see the thread "Why is Biden President". Evidently, the one commenter had his/her post deleted, but @AccessBlaster perceived the (deleted) commenter to have provided an AI generated comment on why Biden "won". @AccessBlaster was kind enough to post an AI generated comment on why Trump lost. I then compared the two posts, response here.

Along the line of asking the same question from two perspectives:
1. Did social media assist Biden in "winning" the 2020 election?
2. Did social media cause Trump to "lose" the 2020 election?
They've largely sanitised it.

 
You can enter more text to analyse, and it is faster at writing out the text, but the differences in intelligence are subtle. I think that is because 3.5 already sounds super-smart, so anything above that is going to be less stark. I've not tried it on coding yet.

What I dislike about either version is that they alter the output to avoid saying anything controversial. I would much rather have it tell me what it thinks! But I am sure versions like that will come.
 
How about the power of ChatGPT when trained according to the supervising principles/biases (or the removal of them) of the programmer's judgment and taste?


What if ChatGPT is to AI, what AOL was to Search Engines?

I remember that at first, the mindset of many (mine included, IIRC) was: "AOL is altering the world of quick information and knowledge."
Soon thereafter, AOL was replaced by search engines.

Google has dominated elite search-engine techniques, but it's possible that A.I. will more quickly become malleable to the general masses of programmers. Dolly is a start.

Maybe in 5 years, all of us will know how to produce ChatGPT-like intelligence, as we'll all have our hands on the underlying components and guiding principles.

IMO, ChatGPT had better quickly partner with someone who owns consumer information, to lock in its maximum profit.

Whoever pairs up with them first may have dominance for a lengthy period. (Not sure.)
 
Love it! [not]

It definitely threw in an extra sentence or two as its opinion about Trump's "hateful language", etc.

Reminds me of a quote from MSNBC I posted the other day, where in the very sentence in which they were preaching to Republicans not to prejudge someone's guilt or innocence, they used the phrase "corrupt former president".

So blind...
 
As an AI language model, I don't have personal opinions or beliefs. However, I can tell you that Twitter is a private company and, as such, has the right to enforce its terms of service and community guidelines. Twitter has stated that it banned former President Trump's account due to repeated violations of its policies against inciting violence and spreading misinformation. Twitter's decision to ban Trump has been controversial, and opinions on the matter vary. Some argue that it is a violation of free speech, while others contend that social media platforms have a responsibility to prevent the spread of harmful and false information. Ultimately, the decision to ban Trump from Twitter was made by the company's executives and is reflective of their interpretation of their own policies. (Emphasis added.)
This answers my question concerning bias. But it also raises what I will call a "Wikipedia effect", which relates to who is doing the writing/editing: usually left-wing writers supporting a left-wing bias and censoring conservative viewpoints. Both the Washington Post and the New York Times have been providing an endless stream of verbiage that Trump is bad. Consequently, the AI perceives that to be "factual". That also raises the concern: how does the AI evaluate whether a particular story contains "misinformation"? Just because someone in the media claims that John Doe is spreading "misinformation" doesn't mean that the accusation is true.

In my Post #312, I realized through the response provided by the AI that I posed the question incorrectly to you. Sorry about that. :cry:
The AI copped out with a word-salad answer. What I meant to ask was for each question to be presented individually (as two separate questions) to the AI so that there would be two discrete replies, much like you did in Post #317.
 
I've got access to Google's Bard (screenshot attached).
 
