I’ve been using ChatGPT for bouncing ideas off of, cheaper than attorney fees, and I’ve gotten some useful information from it. I have seen where people got in trouble for AI (not sure which one) giving them false court cases as references, but I’ve had good luck using it. Has anyone else had good luck? Useful ideas? Also wanted to share in case others could use the idea.
As I've said before, AI is okay to bounce ideas off of, but it's incredibly risky to use it for actual legal advice.
Yeah, that’s been my rule of thumb with it. I still consult my attorney and verify anything legal, but it’s nice for bouncing ideas around or crunching numbers.
Crunching numbers with AI can be risky. If you ask it to do calculations it will often hallucinate the results.
I’ll be the outlier here… I use ChatGPT for all of my communications with my high-conflict ex. It knows about my case and writes messages as if the court will see them someday. I have copied every message she has sent into it and told it basically what I want to say. It has kept me sane.
Same here! I was gaslit (confabulation) by my ex all the time. I have ADHD, so I am not sharp on all the details. She used this to make anything my fault, accusing me of making an error or forgetting. Now that all communication is documented and fed into a model, I can call her out on her false accusations, demeaning language, failed agreements, and intentionally disruptive communication habits. I use direct quotes, referencing the message number. I don't even read her vitriol. I just read the summary and give it some guidance on crafting a response.
This has really taken the sting out of the communication part. She was mean and her messages were filled with provocative lies I couldn't resist refuting. Now she will spend hours crafting a manifesto about all the things I should have done or didn't do, or just bad opinions of me. I'll spend 9 seconds getting AI to craft a detailed response that can be very long. Then she can spend the rest of the night twisting in the wind going over my responses and realizing I'm right, or coming up with some other response that will just be met with AI as well.
You do have to be careful because sometimes the model will hallucinate. I haven't found it to be a great way to store and retrieve information from text records, though Google Notebooks is pretty close and will cite the page numbers of what you give it.
I mainly use it to quickly format messages to have a BIFF tone.
Something to remember is that the judge is not going to be interested in reading through 1,000 pages of email correspondence. If you act like a jerk, the other side will highlight it and try to make it out like you're difficult to communicate with (giving her an excuse to communicate poorly). Mostly, judges don't care unless someone is lying repeatedly or is just being a real jerk who is hostile and obviously difficult to coparent with.
Good advice. I am fortunate in that my ex is an evil goblin who can't help but spew angry vitriol and put up barriers to successful co-parenting. If someone reviews our communication, I'm going to be the happiest man alive.
Sorry to hear it man. If she can’t be bothered to act like an adult, then she can talk to the robot. My experience has been that people/judges sort of don’t care. The biggest thing is maintaining that BIFF shell because if you become emotional she will flip it on you and use it to accuse you of being “dangerous” or being a reason that she is “scared” of you.
What will turn people to your side is when they look at things and go, “What are you scared of? Look at how polite he’s being.” And she won’t have any answer that makes sense. I had a mediator with whom this exact thing happened.
It helps if the mediator actually has a background working with real victims of domestic violence, because they can tell pretty quickly when someone is pretending there was abuse without showing any of the tells they expect to find.
I want to beat my ex so badly at the game of "who can be more polite."
Oh, neat. Glad to hear :)
How long have you been using it? I find these LLMs get overwhelmed past a certain amount of text record and lose quality.
I use it, but you’ve got to be really careful. My ex uses it, and it’s just reinforced her most insane takes. She uses it to sanewash her language when she’s engaging with me, but it’s so obvious what’s happening, and she doesn’t realize that sanitized language only goes so far when your underlying communication isn’t based in reality or cooperation.
My point is, remember it’s a sycophant first and foremost, and it will literally hallucinate. Useful tool? Sure. Total replacement for legal advice? Use at your own risk.
It’s a terrible idea. It hallucinates cases that don’t exist. It is absolutely not a substitute for legal advice. I’m a lawyer and will ask it questions I know the answers to and it will give incorrect answers.
Oh! Wow. Thank you for that. I haven’t used it for legit advice, and I always consult my attorney before anything real.
Its best use is to organize data, not give advice or find information. Sources matter and ChatGPT is lousy at sourcing and recognizing proper sources.
Totally agree. Perplexity is a little better because it will at least cite the places on the web where it gets the information it gives you. Google Notebook is useful because you can feed it multiple documents and, when it answers a question, it will cite the page numbers it pulled from, which lets you verify. I have found that helpful when going through interrogatory responses and RPDs.
One way to cut down on the hallucination is to tell it, in your prompt, to give references for everything it says.
I use ChatGPT for brainstorming and things like that, but LLMs are very poor paralegals: hallucinating citations and cases, ignoring case law, missing related statutes/regulations, etc.
I would never in a million years RELY on it for legal stuff, but it’s a useful tool in other applications. I use it like Wikipedia - a good place to start, but then confirm citations and refer to an actual expert if necessary.
I wish it were reliable enough for this purpose. It would be a great tool to help navigate some uncertain times. But it's not, and it will confidently tell you the wrong thing repeatedly.
I've used it as a lawyer, and it makes up fake law just to agree with and support your view. It distorts reality. What it is good for is crafting an argument or brainstorming. You still need to verify cases and legal principles. I recommend getting access to your local public law library and reading a practice-area book on family law. They are dense but accurate, and someone with a college education or higher can reasonably get close to the right answer.
That’s a good idea. Thank you.
I work with a sales team, and some of them are starting to use AI to write emails. I am very clear with them when I see an AI-generated response (it is super easy to spot once you’ve seen a few), so be careful there. You can load an email into GPT and ask “did you write this?” and it will self-evaluate and give you a score.
AI tools also tend to give responses that are favourable to the person who asked the question, meaning they can (and do) ignore answers that do not favour your ask.
Use it for details and to explore topics but do not trust it.
AI is trash, and is only good for automating certain rote processes: paperwork review, mass data processing, etc.
It’s great when you want to save yourself time googling (i.e., an AI summary of what’s on the internet), but overall it’s not something you can trust with anything.
The issue is that AI is only as good as the info feeding it; since the dawn of the internet the digital information pool has been poisoned by misinformation, misconceptions, outdated facts, and just general stupidity.
At a certain point AI stops being efficient because a human, realizing the AI is hallucinating, then has to go back with a fine-tooth comb to figure out where the AI screwed up.
AI is great for enhancing search engine results and automating low-level customer service inquiries in which the problem and corresponding solution are simple, but for the majority of tasks it’s crap.
I used AI for the entirety of the divorce process, and my ex let me just handle the whole process for us, since that trust is still there.
Most county courts have self-service clerks who help fix any mistakes, but I found AI to be mostly accurate, with only a few mistakes here and there. And with my divorce costing me $0.00, I'll take a few mistakes any day.
I think blending AI with other online divorce resources, or seeking support on Reddit, is a good way to make sure no insane mistakes are made. AI in 2023/2024 is not the same as AI in 2025 when it comes to accuracy. But, as someone stated, you have to keep in mind it’s a consumer product, and if you give it a scenario where you’re in conflict with a person, it will side with you unless you tell it to be objective.
With the consent of my ex, we recorded a moment of conflict and I had ChatGPT assess us both. I had to be very thorough in my prompt to get a neutral result, and both my ex and I learned about each other from the output. Like most tools, AI is only going to be maximized by those who know how to wield it.
You can use AI to write correspondence to your coparent using the principles of BIFF (brief, informative, friendly, firm) fairly easily. You can also take something you have written and ask AI to rewrite it for you using BIFF, which can sometimes help the quality and tone of what you write. I sometimes feed it the email thread and just say, "Please help me write an email to my coparent using the principles of BIFF, key points to address include..."
Make sure you review everything before you send it, because you are ultimately responsible for whatever you send out. I'll remove the em-dashes (--), which are a sure sign of AI usage.
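If you end up doing that rewrite a lot, you can script it instead of pasting into the chat window every time. Here's a rough sketch using the OpenAI Python SDK; the model name, the exact BIFF instructions, and the `biff_rewrite` helper are just my assumptions, so treat it as a starting point rather than a finished tool.

```python
# Rough sketch: rewrite a draft message in a BIFF tone with the OpenAI Python SDK.
# Assumptions: the `openai` package is installed, OPENAI_API_KEY is set in your
# environment, and "gpt-4o-mini" is a model you have access to (swap in whatever
# model you actually use).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

BIFF_SYSTEM_PROMPT = (
    "You rewrite co-parenting messages using the BIFF principles: "
    "Brief, Informative, Friendly, Firm. Remove sarcasm, blame, and "
    "emotional language. Assume a family-court judge may read the result."
)

def biff_rewrite(draft: str, thread_context: str = "") -> str:
    """Return a BIFF-toned rewrite of `draft`, optionally given the email thread."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": BIFF_SYSTEM_PROMPT},
            {"role": "user", "content": (
                f"Thread so far:\n{thread_context}\n\n"
                f"My draft reply:\n{draft}\n\n"
                "Please rewrite my reply using BIFF."
            )},
        ],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(biff_rewrite("I'm sick of you changing the pickup time at the last minute again."))
```

You still read every draft yourself before sending; the script just saves the copy-paste step.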
It's also really useful for drafting interrogatory questions or RPDs. You can just tell it what you want to ask and it will draft them for you in the right language. Then you have a real lawyer read it and sign off on it.
Never trust an AI for legal advice or to do any actual "thinking" for you.