As a reminder, this subreddit strictly bans any discussion of bodily harm. Do not mention it wishfully, passively, indirectly, or even in the abstract. As these comments can be used as a pretext to shut down this subreddit, we ask all users to be vigilant and immediately report anything that violates this rule.
I am a bot, and this action was performed automatically. Please contact the moderators of this subreddit if you have any questions or concerns.
Slice of the article
https://preview.redd.it/m7rpn1y8fj8g1.png?width=755&format=png&auto=webp&s=fc1008871b19703c9d60993d8e8ecd69900f9bec
So sick of this “AI” bullshit, mainly the way people don’t understand that it’s not reliable and misuse it.
This is very apparent if you spend more than 10 minutes fact-checking it. An LLM pattern-matches over general information and will absolutely make things up when it has nothing to pull from to fill the gaps. Then there's RLHF (reinforcement learning from human feedback), which trains the model toward responses the user will favor, leading to sycophantic 'yes-man' answers. If you want to get semi-accurate information out of an LLM, you have to write fairly long, highly descriptive prompts that spell out what it can and cannot do, as well as which sources it should draw from.
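Something like this, for example (a rough sketch in Python using the OpenAI SDK; the model name, source URLs, and the exact rules are just placeholders, not a recommendation, so swap in whatever you actually use):

```python
import os

# Sketch of a constrained prompt: spell out what the model may and may not do,
# and which sources it is allowed to draw from. The rules, sources, and question
# below are placeholders for illustration only.
SYSTEM_PROMPT = """You are a research assistant.
Rules:
- Only use information from the sources listed below. Do not use outside knowledge.
- If the sources do not cover the question, answer exactly: "I don't know."
- Cite the specific source for every claim you make.
- Do not agree with the user just to be agreeable; correct them if the sources disagree.
Sources:
1. https://example.com/official-docs
2. https://example.com/release-notes
"""

USER_QUESTION = "According to the sources above, when was the project first released?"

messages = [
    {"role": "system", "content": SYSTEM_PROMPT},
    {"role": "user", "content": USER_QUESTION},
]

if os.environ.get("OPENAI_API_KEY"):
    # Only call the API if a key is configured.
    from openai import OpenAI

    client = OpenAI()
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    print(response.choices[0].message.content)
else:
    # No key set: just print the assembled prompt so you can inspect it.
    for m in messages:
        print(f"--- {m['role']} ---\n{m['content']}\n")
```

Doesn't make it reliable, but at least it gives the model permission to say "I don't know" instead of filling the gap with something plausible-sounding.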
Google where the word sabotage came from.
It's an interesting story.