We received a very strange ticket earlier this fall about one of our services, asking us to activate several named features. The features in question were new to us, so we scoured the documentation and spoke to the development team. No one could figure out what the customer was talking about.
Eventually my colleague said the feature names reminded him of AI. That's when it clicked - the customer had asked ChatGPT how to accomplish a given task with our service and it had given a completely hallucinated overview of our features and how to activate them (contact support).
We confronted the customer directly and asked, "Where did you find these features? Were they hallucinated by an AI?" He admitted to having used AI to "reflect", then complained about us not having these features, since they seemed like a "brilliant idea" and the AI was "really onto something". We responded that they were far outside the scope of our services and that he needed to be more careful when using AI in the future.
May God help us all.
They are everywhere.
Analog photography subreddit "Hey, is it true that...?"
Everybody with experience: NOPE
OP, 2 days later: "So, I have read *300-page, dense theoretical work from the 70s* now, and it and ChatGPT say I'm right."
Sure buddy, you read that...
I've seen people make posts asking why their camera doesn't do what ChatGPT says it can do
We're getting dumber, and people believe autocorrect more than the manual they never looked for
Reading the damn instructions, or just Googling the model would probably take less time and be more accurate.
People don't think that anymore
They'll either go to an LLM, or ask on Reddit and get annoyed people telling them to do that
RTFM, did it ever work?
Of course not. Even error codes on screen telling the user what the problem is don't work
Like, people will post 20-year-old cameras with errors along the lines of "memory card not recognised", and they'll ask online why their 20-year-old camera is showing an error that they've not read in the slightest
I want to be helpful and encourage new people to go deep into the hobby but damn it wears you down
I felt that last sentence in my bones!
Ugh. The worst. I have a pile of tickets along the lines of "It's not working. There is an error on the screen. We need this back up ASAP!" Still waiting on a response as to what those mysterious errors are. Probably a simple fix but without any information beyond "an error" not much I can do.
Reply that, because they haven't provided a specific error message, the fix will be tasked to support. This will take 6-8 weeks, because support has no point of reference to begin fixing, so they'll need to examine everything related in depth.
Or they can just supply the error message.
The amount of helpdesk calls I took over the years where the user would tell me something wasn't working.
Me: "is there an error message?"
User: "yes it says windows has encountered a blah blah blah."
Me: "Thanks. I've never seen an error message that says blah blah blah, and I'm fairly sure the helpful part of the message that tells me what the problem is is in the part you replaced with blah blah blah."
Nah, nah, the response is: "Oh, great, yeah, when the error is blah blah blah you need to yadda yadda yadda." Then close the ticket.
Show me the Luser who does not click away pop-up windows but reads the damn error message!
No. People can't even read the RFCs that gave us the internet; just look at the number of websites that don't allow valid email addresses.
To be fair, RFCs aren't exactly thrilling reads for most.
I dunno, have you read 1149?
That's been amended by 2549, it adds QoS.
Omg, one of my favs!... I'm sad I have favs
Ftfy
I beg to differ. 418 is an absolutely thrilling read
20-odd years ago. 3am.
Me (Tech Support): Why are you calling me about this? It's on page 1 of the manual (that I wrote).
Them (System Operations): We don't have time to read the manual.
But they had more time to ring you up at 3AM? Make it make sense.
Well see, one involves reading, and the other doesn't, so
Except for where they read the fucking AI summary of the results, and it's wrong.
I assume they get the AI to read it out loud as well
LOL!
Yesterday I saw an excellent video of 2 AI chatbots talking to each other by phone. SO understanding. SO apologetic. SO smarmy. And they couldn't end the call themselves, so it got surreal pretty fast.
We have a thing for this in Linux communities. RTFM.
Sure, you can ask AI, but always verify the information by RTFM.
Someone else may have done it before and posted about it...4 years ago, but you still need updated information, so RTFM.
In essence, nothing beats RTFM.
Just Read The Fucking Manual.
I was on our L1 Service Desk like 6 years ago and told a coworker to RTFM and he was like "what's that mean?" Could be generational nerd slang, but he was only like 5 years younger than me and not even an idiot, actually a coworker I respected! So I did a quick survey of our 20-some coworkers and about half of them had never heard this acronym (it did skew younger, but not entirely). I didn't even know what to say.
From my experience in my almost 20 years in IT professionally, it's not the age that determines if someone has heard of RTFM. It's their experience with open source software.
unfortunately most of the google searches will give some kind of ai bs as the first answer
Yeah, I've had to develop new muscle memory to scroll past that nonsense.
New? This muscle memory is forged in the fires of years of "sponsored" search results.
Even google throws an AI review in first place...
Google is now pushing AI nonsense as the top result in most searches. Sadly, Googling something isn't going to help much.
The problem is when you Google the model the first answer you get will be AI generated, so people just assume that must be right.
The number of people who mix up lose and loose is also getting larger.
Or someone arguing with me that complement and compliment are the same thing, or else why would autocorrect not say complement is wrong?
The world is fucked, bro. FUCKED
My latest peeve is everyone using then when they mean than. It's almost as infuriating as "funnily enough". Autocorrect has gotten worse. Seems instead of trying to match the word with use and meaning now it matches based on what other people type. So instead of making helpful corrections it tries to sabotage my typing with the crowdsourced illiteracy of Zoomers and gen alpha.
The magic of ✨Large Language Models✨in action~
Your username makes me miss Pratchett. He would probably have a funny Discworld story that's a metaphor for LLMs.
CMOT tries to get rich quick by running "hex for the people" and he charges a penny a go, but it's just a bunch of imps that agree with whatever they're told.
I'm still trying to train my new phone to not give me the American spellings for everything.
But the use of "should of" needs to be punished, bring back stocks in the market for offenders.
I think the larger problem is the general loss of standards in society. On its own one grammar mistake becoming widespread isn't a problem. And when someone says it's not a big deal they're right. But taken in its totality, the loss of all spelling and grammar, no public shame, no social contract, just doing what you want and having "your truth" really just results in a shitty place. We didn't land on the moon by acting this way.
complement means that one thing completes another. Like scrambled eggs and bacon are a good breakfast, but some nice crispy hashbrowns complement it.
A compliment is something I like hearing.
The bacon is complementary to the eggs. $7.99
The bacon is complimentary with the eggs. $4.99
"I complimented the chef for complementing the dish with hash browns"
discreet: adjective
careful and circumspect in one's speech or actions, especially in order to avoid causing offense or to gain an advantage.
"we made some discreet inquiries"
discrete: adjective
individually separate and distinct.
"speech sounds are produced as a continuous sound signal rather than discrete units"
Breath and breathe have essentially swapped meanings at this point.
i complement you on your grammar— i too— find that people are being too lose with it— nowadays...
edit: — — — —
to* lose
you can fir a few more mistakes in yo shit, c'mon
Toulouse? but im not french...
fir
fug :DDDDD
I'm leaving that in tho
Affect & effect. I swear even publishers these day don’t know the difference.
"different from" is correct; "different than" is not.
The one that kills me is folks who swap 'apart' and 'a part', which are pretty much diametrically opposed.
ooh I'm not the only one with that pet peeve
Most people have stopped calling out "should of" on social media, and some people are defending the error.
I know, and that makes me so mad.
Granted, if Futurama is a prophecy of any kind, that will be the norm in 1000 years
Mine is phenomena instead of phenomenon when it’s singular! I keep seeing “a phenomena” everywhere and it drives me nuts lol.
In all fairness to the AI Victims: I could not tell you when I last held a useful manual for a somewhat current appliance.
The booklet I received with my Sony camera is 1cm thick and starts with "which way to point a camera if you have never seen one before", which I find discouraging.
A Brother embroidery machine I tried to troubleshoot even had a table of common issues and how to properly diagnose and fix them. Literally every single one was "Send it to Brother for repair."
We've definitely lost our way when it comes to making an actually good manual
Especially consumer-oriented stuff, where they just don't really bother
The camera is a machine. ChatGPT is also a machine. Obviously, ChatGPT must have more insight about how its fellow machine works than any human would
You are right. But lately, the manuals are basically useless
I bought a new washing machine a month ago, and the manual did not even specify how long each program takes
But I bet it tells you how to connect it to "the cloud" so it can be updated, controlled remotely and definitely not be disabled when the subscription model comes in next year.
Shit, most of the things that I interact with now have hieroglyphs instead of words for their “manual”.
I hate it.
It's not even autocorrect, it's predictive text in a fancy hat. AI ruined autocorrect; it's why any autocorrect using AI ends up suggesting misspelled words. It suggests the most common spelling, and if a word is commonly misspelled, it'll suggest the misspelling.
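The failure mode described above can be sketched in a few lines. This is a toy illustration with invented corpus counts, not how any real autocorrect engine is implemented:

```python
# Toy sketch: a suggester that ranks candidates purely by how often
# people type them. If a misspelling is typed more often than the
# correct word, the misspelling "wins". Counts below are made up.
corpus_counts = {
    "definitely": 900,
    "definately": 1200,  # popular misspelling, more frequent in this toy corpus
    "define": 400,
}

def suggest(prefix: str) -> str:
    """Return the most frequently typed word starting with `prefix`."""
    candidates = [w for w in corpus_counts if w.startswith(prefix)]
    return max(candidates, key=lambda w: corpus_counts[w])

print(suggest("defin"))  # the misspelling outranks the correct spelling
```

Without weighting against a dictionary of known-correct words, raw crowd frequency reproduces crowd errors.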
Well of course: take the 2 things that say you're correct while ignoring the 100s of things that say you aren't.
Confirmation bias is a bitch
gpt always says you're right. i thought people knew that by now
No, that's just what the haters claim. Of course Chatty agrees with you when you are always right! /s
Gen X and older Millennials are the last ones to truly understand how computers work. We tore it apart to see what was there and then booted it up to see how the software worked.
I mean, look at us all becoming HTML pros because of Geocities and then later MySpace.
I think in part that was because at the time, it was very difficult to use a computer for anything except very specific functions (like a single program for work, etc) if you didn't at least KIND OF understand how it worked. Same for websites... if you wanted your LiveJournal to look non-generic you HAD to learn some html. The tech was its own gatekeeper to some extent. Nowadays UX has been made super friendly and easy, which is good because it allows ANYONE to be online making content, but also bad for the exact same reason, haha
tore it apart, popped the hood, fiddled with DIP switches, knew how to create master/slave drives (and the differences!) and dealt with the unholy COM1 issues of Soundblaster.
The amount of times I've seen someone post something along the lines of "ChatGPT said my teacher is wrong" (and the same with textbooks and grammar books instead of teachers)... Like yeah, sure, teachers can make mistakes, and it's okay to question things that seem off. But it's concerning to see so many people go to a chatbot to verify a professional person or resource *sigh* (and some of them will still argue "but ChatGPT said..." after being told by several native speakers that their language teacher is right and ChatGPT is hallucinating)
ChatGPT also had an aneurysm over a seahorse emoji
An aneurysm requires a brain. It's just a malfunction. (Personally trying not to humanize the chatbots)
Analog photography is especially bad for it I've found since at the moment the newbs almost outweigh the ones with experience. I'm super glad that film is seeing a resurgence and projects like Harman Phoenix and Lucky C200 are happening because my inner film goblin wants more film (and more options). But holy hell the misinformation trains are full steam ahead at times.
Kodak-Alaris, please, I would go feral for Ektachrome in 100ft spools, my bulk loader is waiting.
It's because AI is built that way. Its primary function is to complete the task that's asked of it, with the answer being correct a lower priority. So if it's prompted in a way to get a certain result, it will give that result to the user even if it's not correct so it can complete the task.
Well, I don't know that it's that complex... I feel more like the way LLMs work, it's more that it's very difficult to ensure correctness. I'm pretty sure that if it were easily possible, that would be implemented by at least SOME companies. I notice specialized non-LLM AI (which is to say, just what we're currently calling super-complex algorithms) is usually far more accurate and useful in what it produces (for example, that medical one that folds proteins, or the astronomy one that looks for patterns in light waves)... however, that stuff needs to be built and maintained by people who understand how it works and what kind of results it produces, and you can't sell that to the general public as a computer talking to you, so yeah.
Link? That sounds interesting!
300 Page dense theory? "The Negative" by Ansel Adams of course^^
We do graded e-learning tests to onboard our engineers. We regularly receive tickets about errors in the tests, and engineers arguing for more points, which we encourage. (Rather have people think than blindly trust.)
One new hire decided to copy-paste the questions into our company-internal version of ChatGPT. We have a couple of catch questions that the AI gets wrong 100% of the time (so far), so it is fairly obvious, though it hadn't happened before. This user wrote a ticket proudly stating that the AI gave her these answers, and therefore she must have a 100% score. She also claimed her colleagues confirmed her answers, without giving a single name.
Safe to say she did not get the extra points.
That sounds like she also shouldn't get the job...
She's probably in the C-suite now.
I hope I am never in a position where my fate is decided by a jury of these types of people. They are the types that go "well the police wouldn't have arrested them if they didn't do it".
Did you read about the judge who said that while on jury duty?
https://www.cbsnews.com/news/ny-judge-thinks-all-defendants-guilty/#:~:text=Judge%20resigns%20after%20saying%20he,reported%20Snyder%20to%20state%20officials.
Holy fuck.
Geez... he said the quiet bit out loud.
Unfortunately that has been a problem long before AI.
"And how would you feel if you hadn't eaten breakfast this morning?"
"But I did eat breakfast this morning"
"Yes, but how would you feel if you hadn't?"
"I don't understand"
Adjunct professor here… have an assignment that I’ve been using for the last 6 years on XML.
Every layperson I’ve asked to do it gets it right on the first try, but about 85% of my students get it wrong and we have an in depth discussion on assumptions and overthinking.
Until this year, when 100% got it right. From the other assignments I know that this class is neither so far above my other classes, nor so far below, that they'd sidestep the overthinking trap. I'm just grading a classroom full of copy/paste from an LLM. We no longer get to have the discussion on overthinking, because no one is thinking at all.
The field they are going into is niche, LLMs constantly hallucinate when asking anything beyond the cursory for the field… it has invented entire libraries in C# that just don’t exist, and its knowledge of playing with this data in python is just as bad. (Staying intentionally vague)
Now I'm interested in that XML question. I'd expect few laypeople to even know what XML is, let alone answer questions about it more reliably than IT students.
I laid out a hypothetical application and then showed the XML file that would need to be created for the configuration of the application.
I then pitched an addition to the application to have it do something else and asked what additional fields should be added to the XML (and maintain proper formatting)
It’s really not an XML question as using XML as a stand-in for “can you parse a document with markup?”
Laypeople look and say "oh! I see a field called 'Email' that contains the email address, and the new application needs a phone number field, so let's add that under a second nest," because they are just doing a 1:1 mapping. But my students typically try to get too creative and end up going in a different direction, or they are too confident, don't check their markup, and we run into syntax errors.
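The exercise as described (spot the existing 1:1 structure, mirror it with one new field, keep the markup well-formed) can be sketched like this. The config below is a hypothetical stand-in, not the actual assignment:

```python
# Hypothetical config of the kind described: the "layperson" move is simply
# to mirror the existing Email field with a new Phone field, nothing fancier.
import xml.etree.ElementTree as ET

config = """<AppConfig>
  <User>
    <Name>Jane Doe</Name>
    <Email>jane@example.com</Email>
  </User>
</AppConfig>"""

root = ET.fromstring(config)
user = root.find("User")

# Add the new field alongside the existing ones, keeping the nesting intact
phone = ET.SubElement(user, "Phone")
phone.text = "555-0100"

print(ET.tostring(root, encoding="unicode"))
```

The "too creative" failure would be inventing a whole new structure instead of extending the one already there; the "too confident" failure is an unclosed tag the parser rejects.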
Oooh, I’m a layperson lurker!
God help us.
At my company (400k+ employees globally), using AI for post-training exams (except where explicitly permitted) is a fireable offense. I’m frankly shocked it’s not this way elsewhere - otherwise what is the point of having an exam if not to test your understanding of the training material?
We're a smaller company (fewer than 1,500). We work in such a niche field that most new hires have never worked with our products or anything similar. Add on top that they need to understand some surface-level polymer chemistry, so we need to do a lot of in-house training. The company philosophy is still a "results matter, how you got there isn't that important" kind of type, but it's shifting. For that reason the tests are "open book", or rather "open PDF". Despite that, we get results of 60-70% on some topics pretty frequently. The consequence is usually more training for said new hire. In terms of AI usage... I don't have to like the policy, I just have to deal with it.
We have a series of benchmark tests we use to gauge the progress of graduate engineers as they're going through the first two years with us. We also have catch questions to identify AI usage. Because the stakes are so high with the work we do, we have a strictly enforced policy against AI use. We don't allow it at all. You either learn to be an engineer or you wash out of the program.
We have a two strikes policy. After the first blatant use of AI, we don't directly accuse a candidate, but we meet with them one-on-one and (hopefully) put the fear into them. We explain why it's so essential that they actually learn and understand every single part of the project they're working on. They must become subject matter experts. If they do it again, that's considered gross negligence under their contract and they're gone.
We've had a handful of first strikes so far but nobody has made it to strike two thankfully. But that day is coming.
I would be interested to know more about the questions that AI gets wrong 100% of the time.
It's niche knowledge that isn't widely available. Since the answers are usually multiple choice, the AI tends to go for the lowest or highest values that aren't outlandish. Hasn't failed a single time.
Gullible Predictive Text strikes again.
I like that! Thank you!
So that's what it stands for. Thanks.
I work in education, and there are two camps regarding AI: those who won't touch it, and those who are all in and want to use it for everything. I've given several presentations on its proper use and emphasized the importance of watching out for hallucinations. Most of the time I feel like all they hear is Charlie Brown's parent noises from my mouth.
'Charlie Brown's parent noises' - great reference!
Throwback to high school when I was describing Charlie Brown parent noises to my English class and said they had "horny voices." 😳
I belong mostly in the first category, though I have used it for text generation. I also tried it for something else, which it got very wrong and couldn't correct when asked multiple times. I prefer to teach analog with pen and paper, no calculator, until we're doing assignments that need the technology of a math program. Then I teach the niche and smart use of that program. I also remember being forbidden from using Wolfram Alpha back when I was in school...
You had my upvote even before I started to read your text.
Ironically, I think AI is helping us get back some kind of natural selection.
I once overheard a patient in my doctor's office arguing with the staff to get a prescription. They insisted he had to wait for the doctor to check and approve that medication.
"But ChatGPT totally suggested these tablets for my symptoms, I can show you!"
Great. Go get the prescription from ChatGPT.
You laugh, but I have to fear there’s someone out there trying to make ‘Chat MD’ that can prescribe pills…
Well insurance would never support that as a "pharmacy", so any kind of service like that would be DOA.
But applying AI to current hospitals and their in-house pharmacies could be a problem, especially as hospital management is all about cutting costs and stretching every dollar they have. I'm even curious to what extent the Alexa AI has infiltrated Amazon's pharmacy home delivery service.
At least most doctors aren't typically dumb enough to risk prescribing something blindly. They know just about anything they do exposes them to litigation and losing their license, hence why they often have to be pretty rigorous with diagnosing before offering a prescription.
Damn, and here I thought that I'd be able to get insurance to pay for my street pharmacist. /s
IBM tried that for years with Watson before chatgpt was a thing.
Followed immediately by a 'mysterious' uptick in prescription drug use and overdoses.
It'll just give you the WebMD answer of stage-5 everything-cancer when you input your symptoms.
Let them know about the man who trusted ChatGPT to lower the table salt in his diet and ended up in the hospital for nearly a month with psychosis due to Bromism (bromide overdose)
After using ChatGPT, man swaps his salt for sodium bromide—and suffers psychosis
TL;DR - Man asks ChatGPT how to lower his consumption of table salt (sodium chloride). ChatGPT tells him to substitute it with sodium bromide, which he orders online. While it was used as a sedative 100 years ago, doctors stopped prescribing it because it makes you hallucinate and go crazy until your kidneys flush it out. Dude used it for cooking until he couldn't stand or speak coherently.
As a hospital-based provider, AI has given me nothing but headaches and patients who are certain about things they know nothing about
Before, someone had to have at least a baseline of knowledge to even be able to Google something to prove themselves right (or ignore that they were in fact wrong).
Now ChatGPT spits out reasonable-sounding nonsense within seconds, even if you have no idea what you're asking for.
Great, you can sue chatgpt when you have an adverse reaction. Oh wait, that’s not how it works, so you’ll have to wait until the doctor, who is actually liable, approves it.
One of my favorite things I saw on here recently:
AI doesn't know facts. It just knows what facts look like.
I wouldn't trust a helper AI trained on medical data for just that purpose unless it's used by a doctor; people who trust general-purpose LLMs with medical stuff are insane
I am an audio engineer. Since DAWs are super complex, I sometimes need help troubleshooting. Whenever I task an AI to help, I get the weirdest hallucinations: whole menus and workflows that don't exist are quoted, and the suggested solutions would usually break something else. Get smart at Ctrl+F-ing your way through manuals and documentation, people. Don't just blindly listen to AI.
Same happens in car repair (and honestly any technical skill). People in car subs asking for buying advice or repair advice come up with some truly bizarre questions and claims because "ChatGPT said". Like half the conversation is just people going "whoa whoa hold your horses" and convincing OP that the chat bot made shit up.
I'm not asking a chat bot to summarize my Subaru service manual. I'm damn well capable of misunderstanding things myself.
We get so many questions recently from users saying "I asked ChatGPT how to do X in Outlook/Excel/Whatever, but I can't find it. Please fix". Smart people mind, engineers and technicians...
The cake goes to a highly paid IT consultant who needed a CLI tool and couldn't figure out how to set it up. I walked him, in person, through installing the tool via PowerShell, showed him how to start it and get to the login, and even opened a browser tab with the step-by-step manual showing every line he needed to type to start, connect, and get going... He came back half an hour later with "I asked ChatGPT how to use the CLI tool, and it said to check here if it's installed, but I can't find it?". Dude, you're looking nowhere near the Control Panel or Programs & Features, and you stood right next to me when we installed and ran your tool...
/rant
"But Chad-She-Bee-Dee said...".
As much as I hate people that use that phrase as a rebuttal to facts, at least it tells me I'm probably dealing with someone without any critical thinking skills.
I believe LLMs are a great tool for certain applications, the same way a jackhammer is a great tool for certain applications. Thing is, we all know that, but these are the same people who buy the "31-in-one hammer-screwdriver-spanner" tools for $5 and tell you it's better than the proper tools.
No point in arguing with them.
Probably also the same people that end up in a canal or a storefront because their navigation system told them to go right.
"The machine knows!"
It's OK you can just get ChatGPT to argue with them.
... the one valid use for LLMs might have been found, holy shit. What's even better: once your target took the bait, the thread ought to become more and more nonsensical as the LLM starts hallucinating more
I think this is still the best use: https://www.technologyreview.com/2024/09/12/1103930/chatbots-can-persuade-people-to-stop-believing-in-conspiracy-theories/
Also like a jackhammer, if you use it for anything outside of its narrow set of applications you will make a complete mess of everything.
Bro it's fine for typing on a keyboard, watch keyboard splits in half, desk collapses and floor now has a small dent with concrete showing through
See it's perfect.
Also it works when using the office printer!
To be fair, if this subreddit has taught me anything, it's that sometimes a jackhammer might be the right tool for dealing with an office printer.
brb, coding up an interface called ChadCBD. kind of like gpt, but half the time he just wants to smoke up
Well, every tool can be used as a hammer, once.
At least once.
There absolutely is a point to arguing with them. Showing they are wrong and belittling them. Not doing so furthers the erosion of standards and gets us closer to Idiocracy. More people should be publicly shamed for being idiots. The day we stopped doing that is the day we started on the slow fall to where we are now.
Growing up, I think everyone knew that one friend who was absolutely dead certain that you could get Mew in Pokémon Blue/Red by moving a truck that didn't exist, simply because they'd seen that said elsewhere and took it as gospel.
ChatGPT is that kid.
In That One Friend's defense, the truck is real. Mew's not under it, but there really is a truck just off screen in Vermillion City's port.
Every good urban legend is rooted in at least some truth, I suppose. I'll admit I had the opposite effect happen here: I spent so long accepting that it was wholly debunked that I never thought to look it up again since.
You do need to go out of your way to get to the truck (iirc you need to complete the events of SS Anne and then lose to a trainer without leaving the boat, so it never leaves port; then come back when you have Surf), and it's quite literally just a piece of decoration that doesn't do anything. I think in the remakes they put a Lava Cookie underneath it as a reference.
Also, there is a convoluted way to get a Mew in RBY without an event or external device. It probably wasn't linked to the playground rumours because it wasn't found until the mid-2000s, but it could have been if some kid had gotten stupidly lucky.
It was entirely unrelated to the truck, though.
My brother sacrificed his Pokémon Blue save file to test this.
-people who ask AI for specific technical advice
Well, if you look at the larger scope, overall people are trying to make AI replace people rather than be a tool, but we've also all seen how it constantly fails when given those responsibilities
It's that Cory Doctorow quote though. It perfectly sums up business. I mean, it was true enough when sales people convinced execs that open plan offices were actually a good business idea (as opposed to just a money-savings idea), and that was just about money and the physical world.
“AI cannot do your job, but an AI salesman can 100 percent convince your boss to fire you and replace you with an AI that can’t do your job"
I fear the CEOs buying OpenAI and Midjourney company licenses and forcing their employees to use them (to poor results), to then justify firing half of their workforce, more than I fear people who ask AI for advice
Dude, it's spreading everywhere now. I do HVAC work, and so many posts on the HVAC advice sub, and even customers IRL, start with "ChatGPT said" and then finish with some of the dumbest shit ever.
It's not even only old people, either. It's all ages.
Why would it "only" be old people? Everyone has been trained to accept that computers are right, and that used to be reliably true. If anything, younger folks are more likely to blindly accept generative AI output, because they don't know enough about the world to be cynical.
I recall some basic polls and studies showed that digital literacy is lower for older people (learned it later in life) and younger people (exposed to it very early but did not use tools/software that still required critical thinking to use appropriately), yet the middle-aged Gen X and Millennial groups have stayed mostly level.
Makes sense when you grow up with technology as it emerges, but such tools still relied on analog tools/data to a certain extent. Now the analog part is really disappearing and I think that's what has made technology feel much less grounded, with AI at the forefront.
It's been an ongoing concern of mine. Yes, technology is much more accessible and usable now that we don't have to muck with config files to squeeze a mouse driver in there with Doom, or set up our IRQs.
But it's gone too far with phones especially sanding all the edges off. People don't understand even basic concepts like the file system, they never engage with it because each app has its own wrapper around it and you never work with the basic system. For example my ex had no idea that the Downloads folder existed on her phone until I pointed it out to her (or even that the Files app existed and that she could peruse her phone's storage at a whim), where we discovered 85 copies of the same PDF menu or form she had downloaded time and again, not knowing she already had it.
Yeah I wouldn't be surprised if most people were unaware of the files app on their phones. And I don't blame them because trying to manage files on a phone is a mess, especially iOS where everything is so heavily compartmentalized by app you can barely figure out where anything is.
I liked how you described it as "sanding all the edges off"; I think that's a perfect way to put it. It's an effort to simplify that is hurting more than it's helping, imo.
Had a similar one just yesterday. A user was using copilot to do something with a spreadsheet, but something was bugged in the copilot app and none of the links were actually clickable, so they asked copilot where the links were and it hallucinated some semi-plausible explanation about problems with the user's environment when it was literally just a bug with copilot itself... So they put in a support ticket.
Buddy, I need you to understand that trying to use AI to do your job and then getting broken output and asking me to fix it is just one step removed from asking me how to do your job...
"ChatGPT said..."
"Grok said..."
Well shit, Dr. Dre said... NOTHING, you idiots! Dr. Dre's dead! He's locked in my basement!
At least you didn't forget about him.
Nowadays LLMs wanna talk like they got somethin to say
but nothing comes out with they boops and blips just a bunch of gibberish
AI agents act like they forgot about Dre
I feel like this is just showing us the dangers of surrounding ourselves with spineless yes men. -_-
So every exec with an OpenAI pro account?
Bingo
AI slop is affecting IT as well. A couple of times when I needed to escalate an issue, I got AI garbage sent back to me. The most notable was when it included solutions that required software from an outside vendor with a subscription service. Pissed me off enough to call him out. It's making everyone dumber, destroying our environment, and negatively impacting our economy.
Idiots have evolved. AI = Advanced Idiots
A new saleswoman came round with her laptop and said she'd lost a file she'd been working on all week. She didn't know its name or location, but she'd collected some data, asked ChatGPT to put it in "an excel", then continued to work on it.
To prove it had once existed, she showed me a notification that began "Are you sure you want to permanently delete this file?"
I've begun telling everyone that the first question they should ask an AI is whether or not they should trust an AI to answer this question and what kinds of situations are never appropriate for AI to be the trusted authority.
AI is actually pretty good at getting that question correct and it helps a ton for people like this to hear it directly from AI that they should never trust it for anything where there is no room for failure.
From ChatGPT: "Bottom Line Use AI for information. Use professionals for decisions."
Never trust information from the AI - read the sources. It likes to lie about the information too.
Yeah, use AI to try to find the information. You have to verify it found what you wanted by reading the sources.
I work for an MSP and one of the helpdesk guys (I'm field/Engineering and sometime help cover TAC) is constantly using Gemini for answers. Then he comes over asking why the fix isn't working. We've told him since day 1 to NOT use or remotely trust it.
I’m an audio engineer and electronic technician, who sometimes teaches engineering. For years I’ve warned students about “forum wisdom”, a euphemism for bullshit, even keeping screenshots of a pages-long thread where guitarists argued that a particular schematic was fake because it was missing a single resistor, when the absence of said resistor made it exactly the circuit called for in the intended application.
I’ve recently been adding AI slop to my forum-wisdom rants. It’s far worse.
AI "helped" lawyers by citing imaginary court cases. The judges were more than irritated.
I have some hope for the future. My son is in middle school and rather than saying “no AI” the teachers are adding onto the assignments examples of appropriate vs inappropriate use. That seems a better approach than a flat out ban since it teaches critical thinking.
Seems like the teachers are finally learning.
Back in my day it was Wikipedia. Every teacher told us not to use it, because having a source of information that could be collectively changed by many people was not trustworthy (as opposed to the textbooks they preferred we rely on, which had to be updated every year to correct mistakes).
It wasn't until university that professors understood it was fine, as long as you understand the difference between primary, secondary, and tertiary sources.
The damage was done by then though. To this day, people across the world still argue that Wikipedia is not a good source of information, because of what they were (incorrectly [ironically]) taught.
Okay, now you're just making me feel old. Wikipedia barely existed when I was in school, and teachers didn't even know what it was, let alone refuse sourced & cited info from it.
Yes! I can’t seem to get my kids to use it. I’m over here like "you need to learn how to best use it because it will probably be a big part of your life". I think I’ve effectively banned them from using it by suggesting that they should use it.
I had VPs of finance complaining that they couldn't access shatGPT anymore once their own security team finally blocked it, so now the VPs and other upper management for this client are all up in arms because they can't use it for their jobs.
Another guy ran into a problem in Azure management: he used it to solve one problem, but whatever he blindly did messed up something else worse, and we had to go back and undo everything.
Makes me absolutely terrified to think what kind of info they've got on companies, simply because someone was too lazy to just Do The Work Themselves/Educate Themselves, or didn't bother to ask anyone to show them how and explain it.
Hey, I made a website exactly for this! Send this to them (maybe anonymously 😅)
https://stopcitingai.com
Just ask it about something you already know if you want to see how often it’s wrong
People don't seem to understand that LLMs are basically just a step above typing a few keywords into Google and hitting that "I'm feeling lucky" button.
Anyone who's ever "discussed" a topic that they themselves are knowledgeable about should be getting all sorts of red flags from the LLM's responses. Maybe a slight inaccuracy here, a common misconception there, sometimes an outright fabrication.
So if you know LLMs aren't accurate about things you know well, how could you possibly think it's ok to trust it about something you DON'T know well?
I work in sales for SaaS - somehow AI has gotten a lot worse over the last few weeks.
I mostly use it to summarize stuff, go over websites and whatnot.
Even for summaries of our own products, with the right sources provided, the output has been flawed nearly 100% of the time.
Damn they even messed up simple calculations when I gave them the numbers.
Dunno what's happening lol.
Edit: I also use proper prompting tools, still shitty output.
I don't know where this quote about AI is from, but it's sure stuck in my mind:
"After spending billions of dollars, Microsoft has finally invented a calculator that's wrong some of the time."
I recently had a similar problem, but the other way around:
This story does make me sound a bit stupid, I admit. Learned from this mistake.
We were looking for a tool to handle mail sending, and one vendor had a solution that looked like it could work for our use case, but I wasn't sure, as the documentation was kinda hand-wavy in some aspects and there is constant (implicit) cross-referencing between the frontend and the API documentation. So I wrote to their marketing team and asked them about the specific use cases we were expecting. I got only positive answers. They linked the same hand-wavy documentation, so it seemed legit. We had some back-and-forth discussion about the features and how they work.
Once onboarded, and with access to actual support, I noticed these features don’t exist, or at least not in the way their "support" described them. Turns out their marketing contact is not human, which is actually disclosed in the mail signature, although in very fine grey text. Definitely my fault, but still very annoying.
Both of my jobs told me within the last week that I need to get on board with using AI for everything I do, despite my pointing out that all the information it gives is possibly wrong.
@grok is this true
God has helped us by giving us the tools to help ourselves. Now grab that pack of TNT and drive to your closest AI DC. You'll know when you've arrived, the locals have 19th century diseases because of the polluted water and air.
Even God can't put this genie back in the bottle.
AI is a scourge. Calling them "intelligence" is really, really stretching the definition. They're basically just a more focused google search wrapped in nicer words. And us IT people are left to clean up the mess when some executive or C-level moron decides we must have it.
I had a "heated discussion" with a student about a rather esoteric database systems topic. The student used ChatGPT to support their arguments. However, ChatGPT was referencing my very own publications but making false claims and attributions about my work. It seemed to be conflating my work with that of others from adjacent subject domains.
I invited the student to read the source material for themselves, but at the end of the day they chose to go with ChatGPT's interpretation of reality instead 😞
And yet my job mandates that we use copilot and our home built AI bits at least 20 times a month. We're all doomed.
I'm in electronic engineering and we take multiple complex math courses. I find ChatGPT is pretty good at explaining how to solve math problems without giving you the answer until you ask it to. Super useful tool for when I get stuck studying, though it does require integrity if you want to actually learn.
The best description I have heard or seen regarding AI is that it is meant for collaboration, quite like what you are doing. Use it to support what you are doing, not replace knowledge or critical thinking.
Reminds me of a guy I talked to at work. He was on about using ChatGPT to make the HSE system for his business, as it was being audited. I tried pointing out that it's better to make an HSE system tailored to your business yourself than to have ChatGPT make something that looks decent to you but looks half-assed to anyone auditing the business. He did not agree.
I myself asked ChatGPT to make a list of a specific set of codes and regulations and a rundown on each in layman's terms. I even provided it with a similar one that I had made myself, just to see if it could do it. Halfway down the page it started inventing topics and text; nothing was accurate anymore.
ChatGPT, or any other AI, is still like that dude you know who always has an answer. Doesn't mean it's correct.
AI is like a finger pointing the way to the moon. Don't concentrate on the finger or you will miss all the hallucinations. Or whatever Bruce Lee said.
... I have cursed at ChatGPT so many times because I got so pissed that in one single sentence I said f*ck more times than I have ever said it out loud in my 20 years of living.
It thought that Pokémon Legends: Z-A wasn't out until I told it to look it up.
I have also told it to look things up and gotten 5 different answers each time.
I like GPT because it can show me ideas I never would have had, and that has massively helped.
But
It is a tool, not an "omg, lemme believe everything it says" type of thing.
Again, a tool that is basically on drugs...
I have seen people follow AI blindly and brick their Prod environment.
The company I work for decided to make an AI agent to help us diagnose and troubleshoot issues. It apparently has access to our product and features, but I haven't bothered testing that.
Instead, I threw something generic at it.
"The computer says limited or no connectivity. What should I try?"
It came back with a list of things like checking cable and DNS settings.
"How would DNS be involved in getting an IP?"
It said it wouldn't.
I asked why it suggested that then.
And it deflected the question.
Needless to say, I don't use it.
"ChatGPT said X", "ChatGPT said Y"
ChatGPT told me you were the reason your parents divorced.
I find I get a kick out of proving AIs wrong all the time. They usually come back with "you are correct, I was going by a site that has since been discredited. But I am only an advanced search engine, and not truly intelligent. I can only scour the web and tell you what I find."
AIs that we can interact with, such as ChatGPT, are nothing but extremely well-programmed chatbots.
Philosophical zombies.
It all boils down to sapience/self-consciousness vs NPCs/philosophical zombies/troglodytes.
lol honestly this is wild but not surprising. AI can def make stuff up when it doesn’t have the facts, so gotta double check esp for stuff tied to real features or settings. ppl relying on it blindly are gonna run into these hallucination traps way too often if they don’t keep their guard up. Mad respect for double checking tho, saved tons of headache. AI helps but ain’t perfect, yet.
I sold cars when prices were first hitting the internet in the 90s. On the website people used, you'd pick a manufacturer and then build out a model to get the price. Except every option the manufacturer had on every car they made was available to add to any car.
People would come in with a sheet and a price. A Corolla with no anti-lock brakes, red leather seats, a spoiler that belonged on a Supra, etc. Then they'd get mad when I'd say that car doesn't exist.
But but my print out right here says it exists!!!
If only the website had a compatibility checker for aerodynamic parts that maybe couldn't be installed...
A Spoiler Alert, perhaps
One of the first questions I ask any AI response on something technical: is this an abstraction or confirmed?
More often than not it will confirm it was an abstraction, and then check whether it can find an actual step-by-step.
This is how executives and directors have thought all their lives: "I'll just get someone or something else to do it."