benefit of GPT

When someone asks an AI "Who is the best candidate for President?", there is no way for any current AI to objectively answer that.
Just like a human then.
 
I have an anecdotal story of a discussion between the head of a data extraction program and a Google Gemini engineer.

Without identifying specifics, the actual conversation included the Google employee wanting to base an AI prediction on the assumption that the outcome of a particular repeating occurrence would always be one specific result.

The program manager had to point out that the very reason they were extracting actual data points from thousands of reports was to avoid having that kind of assumption creep into their conclusions.

Due to the nature of the data and the people involved, that's probably more than I should say. It crystallized for me the potential dangers of relying on AI.
 
Proof of another old saying that I remember from my college days and the early research into AI: Artificial Intelligence cannot cope with natural stupidity.
 
How to avoid ending up like that when using ChatGPT:

Use ChatGPT to speed up your work. That includes using its answers to craft better questions or find better sources.

Never use the first response without careful analysis; I encourage you to regenerate multiple times and look for patterns across the responses (see the sketch after this list).

If you need to tweak a response, use a private window to avoid getting stuck in one train of thought (or don't phrase things in a way that will trigger its memory feature).

Include in your prompt the things you don't want it to do, if necessary.

Don't talk to it.

Do not let it associate your persona with your questions; use private windows.

If you need higher-quality responses, fine, but don't phrase questions in a way that will trigger its memory.

The memory feature is fine for response formatting, if you use it at all.

There are things it does not know; don't force it. The less it knows, the more it hallucinates.

Provide examples of the desired output in the prompt if you see it failing miserably.

Don't talk to it; it's weird.

Take it for what it is: a compendium of scraped data passed through a politically correct filter to avoid causing trouble for businesses, then processed so humans can read it as natural language.

Don't ask it for political advice; it's the worst idea.

Your prompts can look like Google searches; be practical.
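
To make the "regenerate multiple times" and "show it the desired output" tips concrete, here is a minimal sketch. It assumes the OpenAI Python SDK and an API key, which is my own framing; the tips above are about the ChatGPT web interface, and the model name and prompt below are just placeholders.

```python
# Minimal sketch: ask for several independent completions of the same prompt
# and compare them, instead of trusting the first answer.
# Assumes the openai package is installed and OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

prompt = (
    "List three reputable sources on the history of artificial intelligence research. "
    # Showing the desired output format in the prompt, per the tip above:
    "Format each line as: <title> -- <author or site> -- <year>."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",   # placeholder model name
    messages=[{"role": "user", "content": prompt}],
    n=3,                   # three regenerations in a single call
    temperature=0.7,
)

# Print the answers side by side; claims that appear in all three deserve more
# trust than anything that shows up only once.
for i, choice in enumerate(response.choices, start=1):
    print(f"--- Response {i} ---")
    print(choice.message.content)
```

The point is not this particular API; it's the habit of comparing several responses and spelling out the format you expect before relying on anything it says.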

This reply is pretentious as hell, and I hope it helps somehow.
And by the time you've memorized that list of disclaimers and warnings, you might have been better off and quicker not using it at all :)
 
Yes, many times I end up doing it myself, but it's a handy tool. In general, before I submit a prompt, I ask myself: how could they use this prompt against me later?
 
I probably should acknowledge that I don't think the Google person was being anything less than professional; it's just that he or she apparently failed to understand the mission. If their question had been posed as "Are there any assumptions you can make in XYZ situations?" rather than "Can we assume that ABC is an outcome in XYZ situations?", it might not have sounded so bad. As it was, my son-in-law was not impressed.
 
