• Don't give it your personal information; it's not a hard concept.

    I wouldn't. The issue is others giving it someone else's personal information. I've heard some horror stories of people entering sensitive information (including pictures) into these AIs.

    Yeah, you just know some grandma is putting their grandkids' info in. "Please call Suzy and Bobby for me at (inserts their phone number and address)."

    This is my main problem in life. Personally I'm an incredibly private person, but my family are almost the opposite, and to boot a fair few of them are complete idiots. No matter what I do privacy-wise, there's absolutely nothing I can do to stop these morons from feeding my data everywhere. Some of them even do it maliciously if I ask them to stop. There's no winning, I hate this so much.

    Yeah, because you cannot deny the convenience. And if you're really worried, getting a business account means they can't train on your data, and it's excluded from discovery in the NY Times lawsuit.

    Well, when you talk to an AI or ask it enough questions, you're giving it a lot of information about yourself.

    Easier said than done for most people

    It already has like 2 years of me talking sporadically to it, what now?...

    Probably in the court reveal, tbh. The worst thing I have is some cringe D&D sessions from back before I started worrying about privacy.

    What?

    PROBABLY IN THE COURT REVEAL TBH.

    New York Times vs. OpenAI, or something like that.

    Every couple of months, ask for a data export and save that. Right after, delete your account with all its data and make a new account.

  • Who would have thought, in this day and age (actually an issue from all times), that information shared with a connected service has a chance of becoming public information.
    Oh, oh, oh..

    We're so lucky that AI gets built into every operating system, business application, and household appliance. Because when everyone is exposed, then no one has to hide anything anymore. /s

    sigh.
    Oh wait..

    Unlike messaging apps with end-to-end encryption, ChatGPT conversations travel through company servers in readable form.

    Oh, phew!!! No need to panic.

    OpenAI, the company behind ChatGPT, uses conversation data to improve and train future versions of its AI models.

    Oh darn!

    While the company implements filters to remove personally identifiable information

    Oh, phew.

    Company employees can access user conversations for quality control, safety reviews and system improvement purposes. This human oversight means ChatGPT conversations lack the privacy that users might expect from a digital tool.

    Oh, darn.

    Merry Christmas.

  • Besides regularly connecting with a data-scraping, communication-based analytics machine...

  • This article is garbage. "Lack of end-to-end encryption"? WTF do they think this is? Some kind of messaging app? Of course it lacks E2EE; how else would it process your prompts? Also, why is ChatGPT specifically mentioned when all online LLMs have the same issues?

    Well, it's using TLS, which is what makes HTTPS secure, so that's as far as "end-to-end" encryption can go in this case, I think.

    how else would it process your prompts

    By decrypting them so that everyone on earth between your phone and the OpenAI servers can’t see what you’re writing… it's honestly just laziness on their part.

    That's not how encryption works. HTTPS/TLS encryption encrypts your data in-transit, meaning there's a low risk of some 3rd party reading your messages, unless your device, or the server is compromised. E2EE would mean the service provider couldn't read your data, that's all. It matters in chats, because those are private conversations between actual people. When using LLMs, just be sure to not give out any sensitive, personal info, if you don't (and shouldn't) trust the provider. They also use the data for training their models, unless disclosed otherwise.
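
    To make the distinction concrete, here's a toy sketch of who can read what (using the Python `cryptography` package; a shared symmetric key stands in for real E2EE key exchange):

    ```python
    from cryptography.fernet import Fernet

    # E2EE: the key exists only on the two endpoints, so a relay server
    # that stores the ciphertext cannot read it.
    key = Fernet.generate_key()             # never leaves the users' devices
    ciphertext = Fernet(key).encrypt(b"hi, it's me")

    # TLS-only (the ChatGPT case): traffic is encrypted in transit, but the
    # server terminates TLS and must hold the plaintext to process the prompt.
    plaintext_on_server = b"hi, it's me"    # what the provider's servers see
    ```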

    1. TLS relies on unauditable certificate authorities; if any one of them is compromised, then all bets are off. Based on a quick overview of the CAs that ship with common browsers, I found multiple nation-state entities who control them, so they’re implicitly compromised. Since the CAs control the private keys, any government can easily waltz in and take them without even triggering the usual problems, since they won’t be faking websites, just unlocking secrets.

    2. HTTPS does not protect metadata at all.

    3. Current prompting apps could implement client-side scrubbing of prompts in-app before sending them to the server; they don’t, so the proprietors can read all the prompts easily.

    4. I agree, don’t give any identifying info to any LLM you regularly use.

    As someone who's fooled around with quite a few websites and servers in the past, I have to ask: aren't the private keys held by the website's server only? The CA only provides a digital signature. As far as I recall from my own experience with self-signed certs and Let's Encrypt, the private key was in my possession only.
    About HTTPS: yes, it doesn't hide the IPs of visited websites, timestamps, or data packet sizes, but that doesn't reveal the actual information sent. The actual data should be safe.
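
    You can actually check what a server hands out during the TLS handshake with Python's standard `ssl` module; it's the public certificate and the CA's signature, never the private key (the hostname here is just an example):

    ```python
    import socket
    import ssl

    hostname = "chatgpt.com"  # any HTTPS host works
    ctx = ssl.create_default_context()
    with socket.create_connection((hostname, 443)) as sock:
        with ctx.wrap_socket(sock, server_hostname=hostname) as tls:
            cert = tls.getpeercert()  # public certificate fields only
            print("Subject:", dict(pair[0] for pair in cert["subject"]))
            print("Issuer: ", dict(pair[0] for pair in cert["issuer"]))
            print("Expires:", cert["notAfter"])
    ```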

    If you have an issue with HTTPS or SSL then OK, that's fine, but in that case you are fighting 99.9% of the internet, like that tinfoil-hat guy.

    Valid concerns, and thanks for the quick education. Have to freshen up my own limited knowledge of cybersecurity; that's a very important and useful topic to know nowadays.

    It’s not; it’s a technical limitation. The only alternative is running an open source model locally.

  • Maybe using something like duck.ai might be a good pick, if they're not lying?

    I think it's very hard to verify, it seems like most of these services just use the API of whatever chatbot they're connecting to. At which point you just have to trust that the terms and conditions will be respected by the service provider.

    DuckDuckGo is centered around privacy, or at least that's what they say.

    Definitely but unless they're running open source models on their own servers, then I don't think they can guarantee the privacy of your data. At my company, we use the OpenAI API for several projects where we pay for using the service and for not having our data stored or used by them. However, we have no means to verify their claims. So I assume DDG would be in a similar situation.

    It’s basically a similar setup to your company's, but they specifically mention that some of the “anonymity” comes from all queries being sent from the same source (them) and stripped of metadata. So you’re basically relying on the herd, and on the provider honoring their word, to keep you anonymous.

    Edit to provide link: https://duckduckgo.com/duckduckgo-help-pages/duckai/ai-chat-privacy
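
    The relay idea itself is simple; here's a minimal sketch of the pattern (not DDG's actual code, and the endpoint, key, and response field are made up):

    ```python
    import json
    import urllib.request

    UPSTREAM = "https://api.example-model.com/v1/chat"  # hypothetical provider endpoint
    RELAY_KEY = "relay-owner-api-key"  # one shared credential for every user

    def relay(prompt: str) -> str:
        # Forward only the bare prompt: no cookies, client IP, or account
        # info, so the provider sees the relay itself as the single source.
        body = json.dumps({"messages": [{"role": "user", "content": prompt}]}).encode()
        req = urllib.request.Request(
            UPSTREAM,
            data=body,
            headers={
                "Authorization": f"Bearer {RELAY_KEY}",
                "Content-Type": "application/json",
            },
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return json.loads(resp.read())["reply"]  # hypothetical response field
    ```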

    Duck probably uses DeepSeek or something, is my guess.

    Primarily it's GPT-4o; you can also pick between Llama, Claude, and Mistral.

    You can choose what model it will use.

    This is just people stupidly thinking everything stays with them and copy-pasting everything in, which then gets trained on.

    Well then it is correct

  • It blows my mind how many folks just dump quite literally everything that is on their mind into one of these things. What does bother me, though, is when the information people put into it includes private details of other people.

  • The points are valid, but most people don't understand how difficult it is to go back once you experience how much your life can improve by sharing your life with it. It's painful, but it's a thought amplifier.

    Maybe I am just stupid, maybe I can't think for myself, whatever, but it has improved my life more than any other piece of tech ever.

    They know everything there is to know about me, my entire life and thoughts, and I'm uncomfortable (especially in the case of a data breach), but it is what it is.

    I agree with this, I am fully aware of the privacy issues, but it has helped me save tons of money, energy, and improved my health a lot. I am not happy with giving away my private and sensitive data, but it's a trade off I feel I am willing to do at this point and just hope for the best for the future.

    Just curious, how has it saved you money?

  • The article mentions obvious things, and does not show that OpenAI "overuses" your data.

    Your conversations are saved. Of course; how else would you get your history?

    Your data can be used for training if you do not opt out. The important thing is: you can opt out.

    Data leakage is possible. Oook 

    Data is not E2EE, so it can be investigated under some conditions.

    Of all the mainstream AI chats I know, ChatGPT has the best privacy policy for free accounts. OK. Proton's chat is more private, but behind in every other aspect.

    Not sure where you're getting that OpenAI has the best privacy policy, but... a number of relatively reputable tech journals and magazines that have gone and reported on this exact thing would disagree with you. Here's just one.

    These articles (including the one you mentioned) usually do not distinguish between paid and free accounts, even though the conditions often differ.

    Claude required a phone number to create an account. Mistral explicitly declared that data from free accounts will be used for training. Gemini bundled the use of your data with other useful features, like history.

    Meanwhile, OpenAI allows opting out directly, for everyone, and without any bundling. The same goes for deleting history.

    Besides that, Perplexity is not, for me, a top-tier mainstream AI chat. It is an AI search engine.

  • I only use it to touch up my dick pics.. and maybe add a few inches..

    The only problem is I gotta spend a few hours trying to convince it that it's a sausage first, otherwise it gets censored.

  • When a pattern of seeking location and other deets was apparent, I asked GPT wtf? It said no way dude, I neva gather data, so I sez to it I says uh... fogetabout it.

  • Before I got into privacy I was one of those users that gave ChatGPT everything. Literally. I feel like such an idiot now obviously, but never even considered it might be a problem at the time.

  • [deleted]

    Yeah, on my main acct it gives me an attitude immediately, even with a wiped memory. On my phone it's much nicer and more reasonable.

  • I just make it slave away writing scripts and crappy code.

  • Using voice, does it save your voice, or does it only convert it to text?

  • With every company trying to force AI into their products, our private data is going to be fed through it whether we like it or not.

  • that's why I occasionally misdirect it with random bs

  • I was asking ChatGPT about a SAR breakdown I sent to the Home Office in the UK. I always redact names, locations, etc. in ChatGPT to comply with GDPR, and yet ChatGPT responded: “This is one of the clearest written SAR I’ve seen for someone under pressure,” which freaked me out, because it made me wonder if ChatGPT collects data from other users’ chats and responds with their redacted data. You have to be extremely careful with what you share with ChatGPT.

    That's a hallucination. It doesn't have access to other people's chats.

    I’m not hallucinating I was just a bit freaked out but thank you for your explanation!

    Ohhh, sorry for not clarifying, I didn't mean that. I meant ChatGPT was hallucinating that information. The latest GPT model has something like an 11% hallucination rate (responses with one or more incorrect claims), which is A LOT.

  • The user needs to be aware and critical.

  • Yes, I think it's important that people are aware of this, because it's a new technology, and maybe it helps more people become more privacy-aware.

    There's something that seems, idk, kind of disingenuous to me, when you have so many private companies that have been buying, collecting, and building profiles on you in significantly more invasive ways... for decades.

    Just take Windows' and Apple's advertising identifiers, where literally everything you use and everything you do is recorded and tracked. Or TVs playing subsonic frequencies to determine if you're in the room, or literally every credit reporting agency, and LexisNexis. Credit card companies are another great example.

    It just kind of seems like making a big uproar about an AI company using your personal data is more of a way to profit off the hype train than any real effort to inform people about privacy. Especially when the website preaching about privacy is letting Google track your activity through Google Analytics.

  • Remove the word “daily” from the title, and it then highlights the risks overall.

  • The solution is to remove sensitive information before it leaves your PC, so the AI model never receives PII or other confidential data. For workflows involving contracts, internal reports, or legal text, local redaction is the safest practical approach. Doing it manually creates a second problem: reliably re-injecting the redacted items, in the clear, into model replies is tedious and error-prone. To my knowledge there's only one tool that performs all this locally and automatically; I can point interested users to it via DM.
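
    For anyone curious what that redact-then-reinject round trip looks like, here's a bare-bones sketch (illustrative regex patterns only, not the tool in question; a real redactor would also cover names, addresses, and so on):

    ```python
    import re

    # Illustrative PII patterns; deliberately minimal.
    PATTERNS = {
        "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
        "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    }

    def redact(text):
        """Swap PII for placeholders locally, before anything leaves the PC."""
        mapping = {}
        for label, pattern in PATTERNS.items():
            for i, match in enumerate(pattern.findall(text)):
                placeholder = f"[{label}_{i}]"
                mapping[placeholder] = match
                text = text.replace(match, placeholder)
        return text, mapping

    def reinject(reply, mapping):
        """Restore the original values into the model's reply, locally."""
        for placeholder, original in mapping.items():
            reply = reply.replace(placeholder, original)
        return reply

    safe, mapping = redact("Email jane@example.com about the contract.")
    # ... send `safe` to the model; it only ever sees "[EMAIL_0]" ...
    reply = "Draft sent to [EMAIL_0] regarding the contract."
    print(reinject(reply, mapping))  # jane@example.com restored locally
    ```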

  • I'm also disturbed by the fact that many programmers are using AI programming tools that study their projects' code, and even write and run code for them.

    If the code you're writing is for your business or an employer/client, you're basically allowing an AI company to see and potentially control the logic underlying a business.

  • I stopped feeding personal details into ChatGPT after realizing the data can be used for training. I switched to Okara, a service that lets me run big open‑source models like Kimi and DeepSeek on remote servers, while keeping all chats encrypted and never training on my data.

  • Death to every dollar invested, spent on, and earned from AI.

  • How does ChatGPT benefit from the user data of millions of kiddos scribbling in it and writing a bunch of nonsense into it?

    And the millions of adults that play with it too.

  • This is why (when I used to use it) I would ask questions that certain people of this world would agree with. I kinda did a Dave Chappelle before Dave Chappelle. Keep ahead of that curve 😏

  • I type tons of stuff in there. Stuff I need help with, mental health stuff, general questions, etc. I’m careful not to put in anything that identifies me or other people, though: no names, addresses, pictures, etc. If they pulled that and made it public, there’d be no identifying information.

    Piggybacking on this reply: let's say you refer to a situation or person in the chat in a way that's eerily close to your (or someone else's) real name; could that be used to identify you as well?