Imagine you’re using ChatGPT to help diagnose a medical issue, giving the model personal details about the ailments you are experiencing. Or perhaps you are using it to work through a traumatic moment in your life, disclosing intimate details of your psyche, past experiences, and feelings to help you cope. Now fast forward: you become involved in legal trouble, civil or criminal, and those submissions to ChatGPT are deemed relevant enough to be discoverable. That means a lawyer and their team are allowed to sort through the intimate details you professed to ChatGPT and use them as evidence in court to help meet the legal standard needed to obtain a conviction or money judgment against you.
Sam Altman, the CEO of OpenAI, has acknowledged these privacy shortfalls. While on a podcast with Theo Von, he said, “People talk about the most personal shit in their lives to ChatGPT… People use it- young people, especially, use it as a therapist, a life coach; having these relationship problems and asking ‘what should I do?’ And right now, if you talk to a therapist or a lawyer or a doctor about those problems, there’s legal privilege for it. There’s doctor-patient confidentiality, there’s legal confidentiality, whatever. But we have not figured this out for ChatGPT.”
Mr. Altman has a point: there is a significant gap in confidentiality when it comes to AI usage. In most U.S. jurisdictions, communications between a patient and a physician for the purpose of medical diagnosis or treatment are privileged, meaning they cannot be disclosed without the patient’s consent, even if they would otherwise fall within a hearsay exception. The same goes for communications with a therapist; the reasoning behind that confidentiality is to let patients feel free to open up about their feelings in order to aid their treatment. A similar privilege exists for the priest-penitent relationship, protecting communications made in the course of seeking spiritual guidance. Under the current legal milieu, however, there is no widely recognized carve-out for submissions to an artificial intelligence model, and this omission is a cause of concern for consumers.
Realistically, not all of your conversations with a large language model (LLM) such as ChatGPT are entitled to confidentiality; after all, you are inputting information into a system that is constantly monitored to improve the model and to flag misuse of the platform. That does not mean, however, that none of your conversations should be protected. It is vital that a legal framework be developed to guide AI developers, lawyers, and AI consumers through this privacy impasse. Such legislation would ideally address several issues, including copyright, intellectual property, consumer protection, and transparency and accountability measures. But arguably the most important issue to address is the treatment in courtrooms of confidential communications seeking medical, religious, and psychological advice.

Key Issues the AI Confidentiality Law Should Address
AI models that take inputs from users, such as ChatGPT, Claude, or Grok, should allow users to opt into a confidential channel of communication. Laws could be drafted to encourage AI service providers that offer conversational interfaces to provide a clearly marked “Confidential Communication Mode” that, when activated, triggers enhanced legal privacy protections equivalent to those afforded to traditional privileged communications. This feature must be easily accessible and explained to users in plain language.
The law should also require affirmative consent to confidential mode activation through a multistep verification process that includes: (a) acknowledgment of the confidential nature of the communication, (b) understanding of the limitations and scope of protection, and (c) explicit consent to the creation of privileged communication records. Additionally, the user would have to agree to use confidential mode in good faith, that is, within the intended scope of confidentiality covering medical advice, religious confession, and psychological therapy; otherwise, bad actors could simply use the feature for illicit purposes never meant to be protected. Companies could expand confidential mode to cover other subjects, but that would not entitle communications that are not medical, religious, or psychological in nature to protection under the proposed law. Even so, users could still find value in confidential mode for those other subjects, because the law would also govern how the company handles the data.
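To make the consent requirement concrete, here is a minimal sketch of what such a multistep activation flow could look like in code. Everything in it, from the ConfidentialModeConsent class to the scope categories, is a hypothetical illustration of the proposal above, not an existing product feature or statutory requirement.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum


class PrivilegedScope(Enum):
    """The three subject areas the proposed law would privilege."""
    MEDICAL = "medical"
    RELIGIOUS = "religious"
    PSYCHOLOGICAL = "psychological"


@dataclass
class ConfidentialModeConsent:
    """Records the affirmative steps required before confidential mode
    activates: (a) acknowledgment, (b) understanding of scope limits,
    (c) explicit consent, plus the good-faith scope attestation."""
    acknowledged_confidentiality: bool = False     # step (a)
    understood_limitations: bool = False           # step (b)
    explicit_consent: bool = False                 # step (c)
    attested_scope: PrivilegedScope | None = None  # good-faith use
    activated_at: datetime | None = None

    def activate(self) -> bool:
        """Turn on confidential mode only if every step is complete."""
        steps_complete = (self.acknowledged_confidentiality
                          and self.understood_limitations
                          and self.explicit_consent
                          and self.attested_scope is not None)
        if not steps_complete:
            return False
        self.activated_at = datetime.now(timezone.utc)
        return True
```

A session would refuse to enter confidential mode until activate() returns True, which has the side benefit of producing an auditable record that each consent step actually occurred.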
For example, the law should stipulate that communications designated as confidential must be stored in segregated, encrypted databases separate from general training data, an approach similar to the Illinois Biometric Information Privacy Act (BIPA), which regulates the collection and storage of biometric data obtained from consumers. Confidential communications could still be used for model training in a limited capacity, but only after all personally identifiable information has been stripped away, allowing the models to improve while maintaining user privacy. Further, as under BIPA, communications made in confidential mode must be subject to automatic deletion after a period not exceeding one year, unless the user explicitly consents to an extended retention period. Finally, consumers must retain the unilateral right to delete confidential communications at any time without explanation or justification.
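As a rough illustration of those retention rules, the sketch below encodes the one-year ceiling, the explicit extension consent, and the unconditional right to delete. The function name and parameters are invented for this example and carry no legal weight.

```python
from datetime import datetime, timedelta, timezone

# Retention ceiling from the proposal: confidential records are purged
# after at most one year unless the user explicitly consents to more.
MAX_RETENTION = timedelta(days=365)


def is_due_for_deletion(created_at: datetime,
                        user_extended_retention: bool,
                        user_requested_deletion: bool) -> bool:
    """Decide whether a confidential record must be purged."""
    if user_requested_deletion:
        # Unilateral right to delete: no explanation or justification needed.
        return True
    if user_extended_retention:
        # The user explicitly opted in to a longer retention period.
        return False
    # Default: automatic deletion once the one-year ceiling is exceeded.
    return datetime.now(timezone.utc) - created_at > MAX_RETENTION
```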
The AI confidentiality privilege would not be absolute. Rather, it would be limited like any other confidentiality privilege, with exceptions only for: (a) an imminent threat of harm to self or others, (b) child abuse reporting requirements, and (c) court-ordered disclosure following in-camera judicial review demonstrating compelling need and a lack of alternative sources. This requires AI companies to rigorously flag any communication that falls within these exceptions.
Further, any attempt to obtain confidential AI communications through legal discovery must meet a “clear and convincing evidence” standard demonstrating that: (a) the information is essential to the legal proceeding, (b) no alternative sources exist, (c) the probative value substantially outweighs privacy concerns, and (d) less invasive means of obtaining the information have been exhausted.
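Taken together, the exceptions and the discovery standard amount to a conjunctive gate: privilege holds by default, and disclosure requires either an enumerated exception or all four discovery prongs. The hypothetical sketch below expresses that logic; the names are illustrative only, and in practice each prong would be a judicial finding, not a boolean flag.

```python
from enum import Enum, auto


class PrivilegeException(Enum):
    """Enumerated grounds that pierce the privilege outright."""
    IMMINENT_HARM = auto()       # (a) threat to self or others
    CHILD_ABUSE_REPORT = auto()  # (b) mandatory reporting duty
    COURT_ORDER = auto()         # (c) after in-camera judicial review


def disclosure_permitted(exceptions: set[PrivilegeException],
                         essential_to_proceeding: bool,
                         no_alternative_source: bool,
                         probative_outweighs_privacy: bool,
                         less_invasive_exhausted: bool) -> bool:
    """Privilege is the default; disclosure requires an enumerated
    exception, or all four discovery prongs proven by clear and
    convincing evidence."""
    if exceptions:
        return True
    return all([essential_to_proceeding,
                no_alternative_source,
                probative_outweighs_privacy,
                less_invasive_exhausted])
```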
This legislative framework should also take a phased implementation approach, giving major AI service providers roughly three years from enactment to develop and deploy confidential communication capabilities. The legislation should include provisions for regular review and updates to address evolving technology and emerging privacy concerns, which is key in an ever-evolving field.
This proposed AI law would balance the legitimate needs of AI development against users’ fundamental privacy rights, creating a framework that recognizes the unique role AI is beginning to play in personal healthcare and psychological support while establishing meaningful legal protections comparable to those of traditional professional relationships.
Sources:
- Daniel J. Solove, A Brief History of Information Privacy Law
- The Federal Rules of Civil Procedure
- This Past Weekend with Theo Von, Episode 59