benefit of GPT

Aside from wasting hours going down a rabbit hole, probably nothing. If you carefully test the code you are given and do not just assume it is correct, you will be OK. The real danger with AI is when you do not have the background or subject-matter knowledge to recognize a flaw in the answer.
I've asked several questions here and have been offered different solutions, which I appreciate; my thanks to all. But there were cases where the solution sent me down a rabbit hole. In any case, I had to test the solution given by the experts and see whether it satisfied my need or not.

If I ask AI to give me code to do something, I test the given solution to see whether it's what I need.
There have been several cases where the solution didn't work. I give back the result and the error messages, get a complete explanation of why the error occurred, receive new code, and test again. After several rounds back and forth, it was perfect. Isn't that the same as every technical post here? Someone asks, it never works on the first try, he comes back with an error, and you correct your code accordingly.
So what's the difference?
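To make the "give back the error" step concrete, here is a minimal sketch of how I run a returned snippet so the exact error text is easy to paste back into the chat. The table name and the SQL are invented for the example, not something the AI actually gave me:

Public Sub TestAiSnippet()
    On Error GoTo FailHandler
    ' Run the AI-suggested code against a copy of the data, never production.
    Dim db As DAO.Database
    Set db = CurrentDb
    db.Execute "UPDATE tblOrders SET Shipped = True WHERE OrderID = 1", dbFailOnError
    Debug.Print "Snippet ran without errors."
    Exit Sub
FailHandler:
    ' This is the text I paste back for the next round.
    Debug.Print "Error " & Err.Number & ": " & Err.Description
End Sub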

I'm sorry, but I didn't read past the third line of your post #13, or any of #19, because they were about politics and I have nothing to do with Trump or the other candidate (I don't even know her name). So I cannot address any of it.
 
For god's sake, stop making every post political.

In the context of asking about the value of AI, it is my opinion that its value is negatively affected when the answers you get on some topics show an unbalanced approach. The problem is trust. Can you trust a source to be 100% technically accurate when on other subjects it shows a form of bias and/or built-in censorship?

If the site in question doesn't answer questions correctly, then there must be something wrong. There are only a few possibilities. First, it could be due to a bad program - but giving "polite" answers for one candidate and less "polite" (or outright evasive) answers for the other doesn't seem to be the kind of error caused by faulty programming. That is the kind of question that should be unbiased. Both sides should get the same (poor) quality of answer.

Since the answers are well-formed enough that they don't betray obvious program faults, the only other possibility is the old rule of "garbage in, garbage out." We are told where the AI sites get their inputs. They are fed various articles chosen from the news sites. But from which sites? Logic would suggest that their pool of knowledge is not symmetrical, thus leading to the questionable response of which Pat more directly speaks.

Therefore, in response to the original question, "What is the value of AI?", the answer has to be "It depends on the quality of each site's training inputs." I cannot say I am confident in the quality of the current inputs based on AI responses. My evidence HAPPENS to be in the political arena AS WELL AS the tech responses we have seen from ChatGPT in the past. Which is why I went there.

KitaYama, I'm sorry if you feel my political response is out of place, but where Chatty's responses overlap into politics and those responses show errors or bias, if I want to answer the question honestly, I have to go into that arena to offer my answer.

If you look at my last two or three days of forum posts, you will find that I have stayed totally apolitical in many of them. When you consider my responses overall, I believe I have been pretty responsive without going crazy-political.
 
How to avoid looking like this when using ChatGPT:

Use the ChatGPT service to speed up your work. That includes using the answers to make better questions or find better sources.

Don't ever use the first response without careful analysis; I encourage you to regenerate multiple times to find patterns in the responses.

If you need to tweak a response, use a private window to avoid getting stuck in one train of thought (or don't ask things in a way that will trigger its memory feature).

Include in your prompt the things you don't want it to do, if necessary.

Don't talk to it.

Do not let it associate your persona with your questions; use private windows.

If you need more quality in the responses, fine, just do not ask questions in a way that will trigger its memory.

The memory feature is fine for response formats, if anything.

There are things it does not know; don't force it. The less it knows, the more it hallucinates.

Provide desired outputs in the prompt if you see it failing miserably.

Don't talk to it; it's weird.

Take it for what it is: a compendium of scraped data, passed through a politically correct filter to avoid giving trouble to businesses, and processed to be read by humans as natural language.

Don't ask it for political advice; it's the worst idea.

Your prompts can look like Google searches; be practical.
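To tie the prompting points above together (state your constraints and show the desired output), here is the sort of prompt I mean; the table and field names are invented for the example:

Write an Access SQL query that returns each customer's most recent order date. The tables are tblCustomers (CustomerID, CustomerName) and tblOrders (OrderID, CustomerID, OrderDate). Do not use subqueries in the SELECT list and do not invent fields that are not listed. Return only the SQL, in this shape: SELECT ... FROM ... GROUP BY ...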

This reply is pretentious as hell, and I hope it helps somehow.
 
In summary:
It's a great tool; use it, but by all means avoid being used by it.
 
I'm sorry, but I didn't read past the third line of your post #13, or any of #19, because they were about politics and I have nothing to do with Trump or the other candidate (I don't even know her name). So I cannot address any of it.
I'm sorry you allowed your own bias to miss my point. Feel free to ignore my warning that if an AI is fed garbage, it spews that garbage at you. You assumed my response was political in nature. That was your error since my reply had nothing to do with politics. Politics is a very obvious topic where bias can be easily introduced so it was a no brainer to find a real life example, but YOU SHOULD CARE about that rather than feel superior. Hiding your head in the sand doesn't eliminate the problem. It only hides it from yourself until it is too late for you to do anything to counteract the bad information. You should be aware that you are being lied to deliberately not accidentally. Your only question should be, if this is not a strictly technical question, can I actually trust the reply? And my opinion is - not at this point. This is the wild west. At the moment the hard left has all the cards and they are using them. Ignore that information at your peril.

I repeat, we are discussing whether or not the answers to questions you ask an AI can be trusted. Since the AI never admits to not knowing something as a human might, you have to take the decisive answers with a grain of salt because AIs are far from omniscient. At least if the question is technical, you can test the proffered solution. For anything else, get multiple opinions.
 
I'm sorry you allowed your own bias to miss my point.
And I'm sorry you didn't get MY point. I don't want to pull this thread in another direction, but you simply won't stop and accept it, so now it's my turn.
Do you really think that because an AI is biased to the left or right, it writes the SQL statement I need differently? Do you think that if the AI is conservative or liberal, it gives a different answer when I ask for code to copy a file from one folder to another? (I don't even know what those words mean, but people obsessed with politics use them frequently, so I think they may fit here.)
Does the political leaning of the sites an AI collects its data from have any effect on the code it offers?
Does a conservative AI give you 2+2=4 while a liberal one tells you 2+2=5?

How does a biased AI, politically or otherwise, give you different programming code for the same task?
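Just to make it concrete, here is the kind of code that question is about, as a minimal sketch (the paths are made up for the example). It is hard to see how any political leaning could change it:

Public Sub CopyFileExample()
    ' Copy a file from one folder to another; both paths are invented for illustration.
    FileCopy "C:\Source\Report.accdb", "D:\Backup\Report.accdb"
End Sub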

That was your error since my reply had nothing to do with politics. Politics is a very obvious topic where bias can be easily introduced so it was a no brainer to find a real life example, but YOU SHOULD CARE about that rather than feel superior.
I apologize for missing your point. You weren't talking politics; you wanted to show me that an AI may be biased.
We're talking about Access and AI in this thread. This time I'm listening. Can you explain how a biased AI gives me different code for a specific task?

Hiding your head in the sand doesn't eliminate the problem. It only hides it from yourself until it is too late for you to do anything to counteract the bad information. You should be aware that you are being lied to deliberately not accidentally.
I really don't understand. Me, hiding my head in the sand? Do you mean I'm hiding from the political situation around me, or hiding from programming problems?
Do you mean that by not caring about politics I'm headed in the wrong direction? Who's lying to me? The AI? Our politicians?
Is it a political warning? You said above that "my reply had nothing to do with politics". Is it a programming warning? Why should I hide my head in the sand while programming?


Your only question should be, if this is not a strictly technical question, can I actually trust the reply?
I've said it several times, and I'll say it again: I only talk to AI about programming and my problems in my job (design, physics, destruction tests...)
Why shouldn't I trust AI when I ask for a thermal resistance and it gives me this:

[Attached screenshot: 2024-09-18_09-48-07.png]

Do you expect me to worry about whether the formula comes from a liberally biased site or not?
In case you don't know, you can always ask the AI for the source of the information it gives you, so you can check whether the formula comes from a liberal site or a conservative one. (I can't stop laughing now.)
If by trust you mean whether it's correct or not, of course anyone checks the formula to see if it works. You mix up bias and trust, and it confuses me as to what you mean.
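For readers who can't see the attachment: a thermal resistance is just a standard physics relation, something like R_th = ΔT / P for a device, or R_th = L / (k * A) for conduction through a slab of thickness L, conductivity k and cross-sectional area A (these are generic textbook forms, not necessarily the exact formula in the screenshot). Whatever the AI returns, it takes a minute to check against a textbook, and no politics is involved.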


At the moment the hard left has all the cards and they are using them. Ignore that information at your peril.
One moment you tell me your reply wasn't political, and then you give me this. If it's not political, then what is it? What does left mean? Is it Trump's side or Kamala's side?

Since the AI never admits to not knowing something as a human might
Again, wrong. The AI has told me multiple times that it does not have enough data to answer.
I have also been told on multiple occasions that the given data cannot be trusted. It seems you have never used AI.
 
@Pat Hartman
I really hate to insist, but since you don't use AI, let me show you this:

By ChatGPT:

In general, I aim to provide accurate and unbiased information based on the data and patterns I've been trained on. However, there are a few factors that could influence the accuracy or potential bias of a response:

1. Limited Context or Ambiguity:
If a question lacks sufficient context or is ambiguous, my response might not fully address your needs or could interpret the query differently than intended. In such cases, clarification helps improve accuracy.

2. Knowledge Cutoff:
My knowledge was last updated in September 2023, so if there have been significant changes in technology or new information since then, my responses may not reflect the latest updates.

3. Complexity of the Topic:
For very complex or highly specialized topics, I aim to provide reliable answers, but errors can still happen due to the limitations of my training data. In those cases, it's helpful to verify my responses with other trusted sources or consult an expert.

4. Bias in Training Data:
Though I strive to be neutral, I am trained on vast amounts of data from diverse sources, and some of that data may contain inherent biases. I do not have personal opinions or beliefs, but the information I provide could reflect patterns seen in the training data.

5. Programming and Query-Specific Errors:
In specific programming queries (like the one you asked), my response can be accurate for the general case, but differences in your setup, environment, or context could lead to unexpected outcomes. However, the SQL query I provided is based on well-established SQL practices.

If you ever feel uncertain about an answer, double-checking or running tests is always a good approach to ensure correctness. If something seems off, feel free to ask for clarification!

 
I really hate to insist, but since you don't use AI, let me show you this:
Golly gee.
Do you really think that because an AI is biased to the left or right, it writes the SQL statement I need differently?
Of course not. I did say "non-technical question".
I've said it several times, and I'll say it again: I only talk to AI about programming and my problems in my job
You are not the only person who reads threads here.

One moment you tell me your reply wasn't political, and then you give me this. If it's not political, then what is it? What does left mean?
We agree there is no political bias when answering technical questions. However, that leaves all the non-technical questions. I happened to pick a political example, but it wasn't to show that the AI is politically biased, only that it is biased PERIOD, and a political example was much easier to produce and reproduce than one based on finance or healthcare or some other topic.

The danger of the AI is that it can make the unwary believe what it says because of the way it presents answers. I have never encountered a case where it told me that it didn't know an answer or was unsure of an answer. So, essentially by default, it claims to know everything. You asked it a question that prompted it to give you an outline of where its answer might be "incorrect", but if you just go asking questions, you get what sound like definitive answers that many will simply accept as truth, and that is the danger.
 
You are not the only person who reads threads here.
At the time I replied, the thread was in the Access General forum. That was why I begged to keep it technical and exclude politics.
So anyone who was reading knew we were talking about AI in programming.

Now it seems someone has moved the discussion to the AI forum without warning.
Things move fast here and I can't catch up.
Now that it's out of Access, then yeah, you are correct. I'm out of here and back to the Access forum.
 
I think this thread was meant to talk about how AI helps with programming. Some good points have been made, but it’s probably best to stay focused on the practical side, like how AI helps with tools like Access.

Pat made a good point about being careful with trusting responses, whether it’s from AI or anyone else.
 
Wes Roth offers some excellent insights into AI. In his latest video he shows how a PhD student, who had spent the past year creating some code, asked ChatGPT to create the same code, and Chatty did it in one hour!

It's at time index 3:25

Sam Altman Teases Orion (GPT-5) 🍓 o1 tests at 120 IQ 🍓 1 year of PHD work done in 1 hour...​

 
Only you are seeing politics in the thread.
Do you read others' comments at all? Can't you see how many members found your comment political?

My last post was meant to be my last one in this nonsense argument, but it seems that not only do you not admit you were wrong, you also play it as if it's somehow the fault of others who missed your point. Now that YOU are going to ruin a technical thread and insist on going off track, let me help you finish the job.

I asked you several questions in #27, but you were clever enough to skip them and not reply. So I ask again: if your post wasn't political, what were the following meant to be?
At the moment the hard left has all the cards and they are using them. Ignore that information at your peril.
When you ask why you should vote for Kamala, you get wine and flowers and violins, and they mention her "experience" as a VP. When you ask about Trump, it used to say that it couldn't weigh in on political questions. Now it gives some tepid response, but it doesn't mention his experience as President. So, Kamala's do-nothing VP job gets more importance than Trump's actual experience as President for four years.
so you think it is OK for the AI to talk about Kamala as if she were the second coming (she isn't) but say they were not allowed to say anything about Trump. And when that hit the fan and they "corrected" their algorithm, they chose to completely ignore four years of on-the-job training and say that 3.5 years of failure was important?
Biden actually hated Kamala though so he kept giving her public tasks that he knew she would fail at. It was pretty amusing to watch. Biden had a great sense of humor and some of it still slips out.

Note: If you still don't understand, the above "Biden actually hated..." is not about AI anymore. You're only pushing your political agenda onto others about who is right and who is wrong. You are absolutely taking a side in the above statement and yet saying it's not about politics? (And again, in a technical thread.)

You not only didn't stop there, but went ahead to lecture me on my personal lifestyle, trying to teach me that not caring about politics is a bad choice.
Who asked you whether my personal way of life is right or wrong, and how do you allow yourself to give advice on someone's personal choices in a technical thread?
Hiding your head in the sand doesn't eliminate the problem. It only hides it from yourself until it is too late for you to do anything to counteract the bad information. You should be aware that you are being lied to deliberately not accidentally.

We have a saying here: you know a society is on the wrong track when the rulers don't follow the rules.

But never mind. I already have a lot on my ignore list; I don't mind adding one more.
Have a good day talking politics.
 