Did he get better?
No, it's not like he was transformed into a newt.
"BUUUUUURRRRNNNN THE WIIIITCH!!"
After a nearby meth addict kissed him, he turned into a prince. Everyone immediately applauded and they lived happily ever after.
THE END
No, he turned into a cop.
From the makers of Cocaine Bear:
COMING SOON....
FROG COP.
A Ribbiting Experience - Some Critic
I mean he was already a pig…
He's still waiting for the DA to kiss him
Yeah they were able to revert him back to a pig
I’m kinda bummed, both the article and the Fox News report it mentions do not give the juicy frog-related details I needed.
Journalism today is sorely lacking smh my head
Like, can’t you give us the deets?
Like... journalism?
Probably both also written by AI
plot twist - there never was a police report. ai "journalists" hallucinated the entire story.
“That’s when we learned the importance of correcting these AI-generated reports.”
Right? I’m sorry, they didn’t think that a fucking POLICE REPORT should be proofread and edited by a human?
"And thats when we learned the importance of not murdering people."
More seriously, people need police reports for the sake of various other processes in society, like insurance claims and civil lawsuits.
This threatens the integrity of a solid chunk of civil law if litigants now have to prove that police reports supporting their case weren't AI-generated.
Having read police reports before, they haven’t done so in the past, so this isn’t really surprising.
I remember being a witness for an (insanely minor) criminal court case a few years back, and the copper who was “investigating” had somehow entirely forgotten that he might actually be asked questions about the crime he’d investigated, and had left his notes behind.
If reading comprehension were their strong suit, they wouldn't be on the force.
That would require having humans in the department who can read.
Or, how about they should just be written by a human?
Well, yes, ideally. But as long as the officer signing off on them is proofing and editing I don’t care if they use AI to try and save time. Most police officers spend more time than we would like doing paperwork.
But I don't understand how a police report makes sense to be AI generated. It's like using AI to generate your daily journal. The whole point is to record the events as they happened from your perspective.
It sounds like they’re using it to turn bodycam footage into an account of what happened.
Or, and bear with me here, perhaps they shouldn't be using AI to generate reports at all.
Nah, might as well get used to it I guess. It'll be baked into Word before the year is out anyhow.
SCAF - Some Cops Are Frogs
Sending this to my mom who doesn't understand why asking AI for medical advice is a problem.
I already know the answer.
Did it work?
Another terrible product released to production before beta testing was done.
What the PDs are sold on is something that isn't being delivered.
There is no 'beta testing' for LLMs in the same way as there is for conventional software.
Regular software is deterministic and can be properly debugged. We have to accept that complex real-time systems will probably still have some bugs, but good testing can reduce that to a very low rate. For example, games from great development teams like id Software (see Doom 2016/Eternal/TDA) release with very few bugs.
But LLMs always leave a lot up to chance, and there is currently absolutely no way to make them 'reliable enough' for this kind of application. Their entire reason to exist is for tasks that we can't reasonably solve with a conventional algorithmic solution.
The main options for current 'AI' tool development are:
Test your ChatGPT wrapper very extensively and tune it to at least somewhat reduce error rate, but accept that the error rate will still be high.
Release your ChatGPT wrapper in a barely tested or untested state.
Don't release LLM-based software.
They currently only make sense for very precisely defined tasks (astronomers use neural networks to classify objects in large collections of telescope data, for example), for cases where errors are non-critical (like subtitles/transcripts/translations of entertainment videos that normally wouldn't get professional translations), or for generating suggestions that a human will actually review and/or that can be logically verified (like coding assistants that work as a 'glorified autocomplete' to generate individual classes or functions).
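To make the 'leave a lot up to chance' point concrete, here's a minimal toy sketch in plain Python (invented numbers, not any vendor's actual decoder): a conventional function is reproducible and therefore testable, while temperature sampling, roughly how LLMs pick each token, can always emit a low-probability token that no test run happened to catch.

```python
import random

def deterministic_parse(line: str) -> str:
    # Conventional software: same input, same output, every single run.
    return line.strip().lower()

def sampled_next_token(probs: dict[str, float], temperature: float = 1.0) -> str:
    # Toy stand-in for LLM decoding: the next token is *drawn* from a
    # probability distribution, so two runs on identical input can disagree.
    tokens = list(probs)
    weights = [p ** (1.0 / temperature) for p in probs.values()]
    return random.choices(tokens, weights=weights, k=1)[0]

# Invented numbers, purely for illustration.
probs = {"suspect": 0.55, "officer": 0.30, "frog": 0.15}

print(deterministic_parse("  REPORT FILED  "))        # identical every run
print([sampled_next_token(probs) for _ in range(8)])  # varies run to run
```

You can regression-test the first function into the ground; the second can pass a thousand test runs and still hand you 'frog' in production.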
Operating government with unreliable, buggy software is just a bad idea.
Nah, they were sold on not having to work. That's being delivered.
He got better
"...and that, my liege, is how we established probable cause to weigh the witch."
Once those reports make it into the system, that information is ironclad. Sucks to be a person wrongfully accused in those reports because AI fucked it up and no one cared to check it
Equally so, a lot of guilty people will get off the hook because these "ironclad" reports can be proven to be unreliable.
You're saying you don't trust the legal system? You think the officer didn't turn into a frog? I'm not sure I appreciate your tone.
Now they just have to spend 12 hours weekly proofreading.
Annnnnd then all of his cases were dismissed.
Having AI ease the paperwork burden for police seems like even more incentive to keep body cams running and help with transparency.
The issue is that these systems have no concept of context. That means that if there are two or more officers present, it can attribute statements another officer makes to someone else, or in a crowded environment it cobbles together what multiple people are saying. Officers can get around some of this by becoming narrators, but they don’t always have time to do so.
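A toy sketch of that attribution failure, with entirely invented data and a deliberately naive rule (no real transcription system is literally this simple): overlapping speech gets pinned on whoever the system last believed was talking.

```python
# Invented example: overlapping speech segments from a chaotic scene,
# as (start_sec, end_sec, text). True speakers noted in comments.
segments = [
    (0.0, 2.0, "Put your hands where I can see them."),  # Officer A
    (1.5, 3.0, "I'm just reaching for my wallet!"),      # Civilian
    (2.5, 4.0, "Cover me, I'm moving around back."),     # Officer B
]

# Speaking turns the hypothetical diarizer detected -- it never
# registered Officer B taking a turn at all.
speaker_turns = [
    (0.0, "Officer A"),
    (1.6, "Civilian"),
]

def naive_attribution(start: float) -> str:
    # Naive rule: attribute each segment to the most recent detected turn.
    label = speaker_turns[0][1]
    for ts, who in speaker_turns:
        if ts <= start:
            label = who
    return label

for start, _end, text in segments:
    print(f'{naive_attribution(start)}: "{text}"')

# Prints the civilian's line under "Officer A" (it starts at 1.5, just
# before the diarizer registers the civilian at 1.6) and Officer B's
# line under "Civilian" -- the cross-attribution described above.
```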
I just want them to have all the incentives in the world to make their actions public and available. The more transparency the better. We can fix the tech.
Real “I’m not a cat your honor” vibes here
First the chemicals turned the frogs gay now AI hallucinations are turning cops into frogs.
The amount of male-on-male frog sex those poor hallucinated cops-turned-frogs will endure is no joke.
It may suck as a police report, but that's a very good writing prompt.
We thought you was a toad
Do not seek the treasure.
For something less offbeat:
When AI Gets an Innocent Man Arrested -- body cam footage that shows how regular patrol officers currently behave when told that AI flagged something: complete, unquestioning belief.
tl;dr: A facial recognition system said that a man playing at a casino was, with 99.9% certainty, a banned patron. Even when he showed them his ID, which could easily be verified in multiple databases, they still thought it might be a fake, despite the banned patron being recorded as obviously taller and of significantly different weight, as well as not having the various CDL endorsements that the innocent man had.
Why are they allowed to use AI in the first place? This was a funny situation, but what if it makes up something that's more plausible and not as easy to pick out? This needs to be banned immediately.
Did he croak?
r/brandnewsentence
Did they find a princess to kiss him?
If you kiss him/her they might transform into a pig
Your teachers told you that you need to proofread your writing, and that you should have someone else proofread it too. They marked you down when you handed in papers with mistakes.
In the real world, we mock you online.
Amphibian Task Force, obviously.
Fuck, these guys are going to get stupider
All Cops Are Amphibians?
🐷
According to Police Reports, Harrison Ford's wife was killed by a Six-Fingered Man.
I was going to say, was a princess involved?
Of course I read about this on Rrrrreditt....