ChatGPT: The Future of AI is Here!

We all know that ChatGPT can hallucinate (make something up). But here is an article that shows something good coming from a rational use of its abilities.


Basically, this is the "long story short" explanation: A mom had a kid whose condition was slowly getting worse. After several specialists whiffed on the diagnosis, she used ChatGPT and got a suggestion of "spina bifida occulta" (a hidden split in the spine). She found a specialist in that particular condition and presented both her evidence and the output of ChatGPT. The doctor did some advanced scans and confirmed the diagnosis, which could be corrected surgically. Maybe AI isn't always right yet, and maybe it never WILL always be right. But this time it was right enough to help a sick kid get proper treatment.

Credit where credit is due.
 
It's a great story and I'm sure we'll be hearing lots more like it in the future. The scary thing is that it isn't possible to tell truth from fiction or a hallucination (the politically correct word, because we can't call it a lie or even misinformation). In the case of the boy's situation, an actual condition matching the symptoms was found. This is what an AI should excel at, because it is essentially list matching across millions or billions of individual pieces of data. It was then a matter of finding an expert to look at the results and confirm them. How many people out there have undiagnosed conditions, or even diagnosed ones without specific cures, who are chasing cures proposed by charlatans? Is AI going to be sending more people down that path?
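To illustrate what I mean by "list matching," here is a minimal sketch in Python. The condition names and symptom lists are made up for the example (don't read them as medical fact), and a real system would be working from millions of curated records rather than a three-entry dictionary:

```python
# Toy illustration of symptom-to-condition matching.
# The symptom lists below are invented for the example, not medical fact;
# a real diagnostic aid would draw on millions of curated records.

conditions = {
    "spina bifida occulta": {"leg pain", "gait problems", "fatigue", "bladder issues"},
    "growing pains":        {"leg pain", "night waking"},
    "iron deficiency":      {"fatigue", "pallor", "irritability"},
}

def rank_conditions(symptoms):
    """Rank conditions by how much of the reported symptom list they cover."""
    symptoms = set(symptoms)
    scores = {
        name: len(symptoms & profile) / len(symptoms)
        for name, profile in conditions.items()
    }
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

# Example: the symptoms a parent might report.
for name, score in rank_conditions({"leg pain", "gait problems", "fatigue"}):
    print(f"{name}: {score:.0%} of symptoms matched")
```

Overlap scoring like this is cheap to do at enormous scale, which is the point; the hard part, as the story shows, is getting an expert to look at the top match and confirm or reject it.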

Much as I'd love to love ChatGPT the way Uncle does, I'm way too afraid of all the wrong and biased answers it produces that we have no good way of validating. That doesn't mean I wouldn't use it to find an answer, but I would always then have to validate that answer, either by trial and error if it gave me VBA code, or by other research.
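"Trial and error" can be as simple as running the suggested code against a few cases where you already know the right answer before you trust it. Here's a rough sketch of that habit, in Python rather than VBA, with a made-up example of the kind of routine an assistant might hand you:

```python
# Sanity-checking an AI-suggested routine before trusting it.
# parse_euro_number is a made-up stand-in for whatever the assistant produced.

def parse_euro_number(text: str) -> float:
    """Convert a European-formatted number like '1.234,56' to a float."""
    return float(text.replace(".", "").replace(",", "."))

# Cases where I already know the correct answer.
known_cases = {
    "1.234,56": 1234.56,
    "0,5": 0.5,
    "10": 10.0,
}

for raw, expected in known_cases.items():
    got = parse_euro_number(raw)
    status = "OK" if abs(got - expected) < 1e-9 else "WRONG"
    print(f"{raw!r} -> {got} (expected {expected}) {status}")
```

If any case prints WRONG, the answer goes back for more research before it touches anything real.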
 
I think in a few short years we will be saying the same thing, except replacing the word ChatGPT with humans. The tech is improving by leaps and bounds. We will then be speaking to our future doctors, a black box powered by an AMD Threadripper chip. "Dead in 5 days...", it spouts, in a robotic Stephen Hawking type voice.
 
There are many videos and articles on the internet describing how the inner workings of these AI models are not well known nor easily determined.
That points to an area of AI research aimed at helping the user/questioner understand how a specific response was determined and assembled.
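(That research area usually goes by the name explainability or interpretability.) A toy way to see why it's hard: even for a tiny network where you can print every single weight, the numbers don't amount to a human-readable reason for the output. The network below is random and predicts nothing meaningful; it's only there to show that full access to the internals is not the same as an explanation:

```python
# Toy demonstration: seeing all of a model's parameters is not an explanation.
# The weights here are random; the "model" predicts nothing meaningful.
import random

random.seed(0)

# A tiny 4-input, 3-hidden-unit, 1-output network with random weights.
w1 = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(3)]
w2 = [random.uniform(-1, 1) for _ in range(3)]

def forward(x):
    hidden = [max(0.0, sum(w * xi for w, xi in zip(row, x))) for row in w1]  # ReLU layer
    return sum(w * h for w, h in zip(w2, hidden))

x = [0.2, -0.7, 1.3, 0.4]
print("output:", forward(x))
print("every weight is visible:", w1, w2)
# None of those numbers tells you *why* the output is what it is.
# Scale that up to billions of weights and you have the explainability problem.
```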
 
Thank god that when all these things come to pass, I will be long dead. It all sounds horrible.
Col
 
What I fear is that when AI gets "smart enough" to take over a lot of things we do now, we will look like the society in the movie Idiocracy, which is a chilling (if also somewhat humorous) view of the future.
 
Just think of great, smart AI in the hands of great scammers.....
 
I need to divert this because it could really get raunchy really quick. Consider my old car... when it gets old enough the joints begin to go and the damned thing starts leaking. But if you say you want to try to exchange it for a new one, nobody wants the old one and you are stuck with it. But of course I was talking about old cars just to change the subject.
 
