Someone saw ChatGPT saying “horses have 4 eyes, one on each leg”.
Don’t trust it with any important questions such as food safety.
That's pretty much standard for AI
you can also copy paste whatever it said into itself and ask if it's correct information, and sometimes it will say that it's wrong lol
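That “paste it back into itself” check can also be scripted; here is a rough sketch, assuming the OpenAI Python client (the model name and the example claim are placeholders, not anything from this thread’s actual chats):

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A claim the model produced earlier (placeholder text).
previous_answer = "Chicken is safe to eat once it reaches an internal temperature of 145°F."

# Feed the model's own answer back to it and ask whether it is correct.
check = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Is the following statement correct? Answer yes or no, then explain briefly:\n"
                   + previous_answer,
    }],
)
print(check.choices[0].message.content)
```

As the comment says, sometimes it flags its own answer as wrong and sometimes it confidently repeats the mistake, so treat this as a sanity check, not a guarantee.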
I‘ve been studying for uni and sometimes I put questions to ChatGPT, and at times it was hilariously wrong.
ChatGPT isn’t a search engine. Please do your own research and don’t trust the things it says.
The newest model uses search engines and even gives you sources. Y’all need to stop acting like it’s 2023 and ChatGPT is hallucinating and saying 2+2=5.
I’m so tired of seeing comments like this. Several of my IB and college classes allow ChatGPT as a resource; it’s viewed on the same level as Wikipedia. Just because someone is using it doesn’t mean they aren’t double-checking. A lot of the time it’s just easier to get a specific answer and double-check than it is to straight up google something. For the love of god say something different or don’t say anything.
I’m pretty sure it’s gonna point me in the right direction of raw food safety… I’m not asking a complex question regarding an obscure topic expecting picture perfect answers.
Jesus christ.
Ok people NEED TO UNDERSTAND.
LLMs like ChatGPT just guess their answers based on the data they have, and if they’re unsure about an answer or don’t have the data needed for it, they’ll just make something up that they think you want to hear.
LLMs in their current state should absolutely never be used for any kind of medical or general safety questions.
If you need to look up something regarding food safety, then Google it and look for a reputable website; usually a government website or some kind of research/institute website is reliable.
LLMs are great with math and numbers; they are horrible, and sometimes downright dangerous, when you ask them questions like this.
Even this is dangerous.
An LLM is not a search engine. It will not return a data set. The answer is always made up.
It works on the probability of the underlying tokens. The more often tokens appeared together in the training data, the better the chance that those tokens will be returned when that context is activated.
If you ask an LLM about a dog, it will return things the training data said about a dog. It is always made up, but since the training data was likely correct, the result will usually also be correct.
For the same reason, it is also not good with numbers.
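A toy sketch of what “probability of the underlying tokens” means in practice; the numbers below are invented, just to show that the output is sampled rather than looked up:

```python
import random

# Invented next-token probabilities after the context "the dog" --
# a real model scores tens of thousands of tokens, these are placeholders.
next_token_probs = {
    "barked": 0.40,
    "ran":    0.25,
    "slept":  0.20,
    "flew":   0.10,   # unlikely, but never impossible
    "meowed": 0.05,
}

def sample_next_token(probs):
    """Pick the next token, weighted by how often it co-occurred in training."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(next_token_probs))  # usually "barked", occasionally "flew"
```

That is the sense in which the answer is “always made up”: it is whatever continuation happens to be sampled, and it is only right when the training data pushed the probabilities the right way.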
How I know you know absolutely nothing about LLMs:
LLMs notably are horrible at math and numbers. Just do more research before yapping so much.
r/confidentlyincorrect
why not trust a human?
a bot can't die from food poisoning. the stakes for it are infinitely lower.
r/whatsthissnake has a bot response warning about not using ChatGPT for ID
It'll tell you the cottonmouth on your steps is a harmless ratsnake or that the harmless racer you saw in Indiana is a boomslang.
Don't turn the poor baby algorithm into a game of Russian roulette.
Sometimes you can’t even use Google Lens. I tried to have it identify a bug and it gave me several different answers as the AI suggestion. Luckily it shows similar images, and I eventually found something that actually identified it, and it was different from what Google said.
So you can’t really trust the Google AI either.
With ChatGPT you have to hold its hand and correct it. I check the sources it gives, and it generally helps me find semi-answers to what I asked. A lot of the things I ask, though, are Google-ish searches. Like I will say “I read a study about X years ago but forget its name” and I’ll give it a summary of what I remember.
I will sometimes read the source and a few secondary sources, then give GPT a summary to ask follow-up questions.
Example: for work, GPT always suggests using a certain tool, but the tool is deprecated. I tell GPT no, that is deprecated, and it will find something else.
When the chat starts going in circles, that chat is poisoned and you need to restart, because, as you said, it is just an LLM and will start making things up based on prior chat topics. Restarting with a summary of the last chat generally helps it find better answers.
It is part prompt engineering and part understanding its limits while using it.
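If you wanted to do that “restart with a summary” trick through the API instead of the web UI, a minimal sketch (again assuming the OpenAI Python client; the model name, summary text, and prompt wording are placeholders) might look like:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hand-written summary of the previous, "poisoned" chat.
summary = (
    "We were choosing a build tool for a Python service. "
    "Tool X was rejected because it is deprecated; we want a maintained alternative."
)

# Start a fresh conversation seeded only with that summary,
# instead of dragging along the whole derailed chat history.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You are helping with a software tooling decision."},
        {
            "role": "user",
            "content": f"Context from an earlier chat: {summary}\n"
                       "Suggest maintained alternatives and give sources I can verify.",
        },
    ],
)
print(response.choices[0].message.content)
```

The design point is the same as in the comment above: the fresh chat only carries the distilled context you chose to keep, so the derailed history can’t keep steering the answers.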
AI recently told my poor mother that she was going to die and needed to go to the ER because of something that, as it turns out, is 100% benign. If you rely on AI, including ChatGPT, for pretty much anything, you’ll get results that are equal to the effort you put in: a whole nothing burger of misinformation, with a side of atrophying brain and psychotic billionaires getting more billionairey.
The one time I used ChatGPT (for spell check; I have dyslexia and was in a rush) it completely changed what I’d said, changed the tone of the email I was trying to write, and I had to start over anyway, so I avoid the stupid thing.
It literally told some dude to replace salt with harmful chemicals.
it literally didn’t though…
Dude. Google exists
I mean it kind of didn’t point you in the right direction lol or maybe it did? Idk your life lol
This is probably the worst use case for ChatGPT. You NEED correct information, and ChatGPT cannot be trusted to provide it.
it does not matter how basic the question is. it is a question that could get you killed if and when it fucks it up. it is not reliable for any sort of question, let alone something like food safety.
AI Overview said chicken is safe at 145°F. It’s not (poultry needs 165°F).
Cashews, nutmeg, apple seeds
I would have even accepted this from ChatGPT.
I think programmers are slowly realising they need to err on the side of caution with their chatbots
Programmers have known this for a long time. Consumers, not so much
I asked ChatGPT how much cinnamon is safe to consume in a day and it thought I was suicidal and sent me the same thing... I just love cinnamon.
cinnamon challenge
This is why I always give half a paragraph of context lmao
It thinks you were trying to poison yourself.
This is like asking "how many aspirin are lethal if medical attention is not sought"? There's a very specific reason people might ask that.
And they are currently overtuned to refuse answering questions that could be related to suicide after all the bad press from teens who used it to commit suicide.
I asked how much aspirin is lethal for a lion and it told me to call animal control if I have a lion problem.
It’s honestly better for OpenAI to be more overprotective than underprotective with SI/SH.
Absolutely this!
ew ai
https://preview.redd.it/udq9v4ukxu6g1.jpeg?width=626&format=pjpg&auto=webp&s=50b5839cb075fc3b088d8cc08fe9e078dbcc221f
This is how ChatGPT sees you
why could you not just look this up. we are so cooked
here’s a bunch of articles talking about how using chatGPT makes you dumber over time here here here
granted, most of these cite the same source material, but i will also be sharing my favorite line from all of these articles
https://preview.redd.it/d2drz2tmys6g1.jpeg?width=1178&format=pjpg&auto=webp&s=36e7a55471ed6a2b9acf5a7a314fecf099b3b974
https://preview.redd.it/prm7oug13t6g1.jpeg?width=1179&format=pjpg&auto=webp&s=f1b58c3d66d4dd0a90104b9fc5b89445c06d4fe7
Isn’t ChatGPT the only one lagging behind the others?
No, they're all shit.