
AI and Legal Boundaries

ChatGPT 5 Launch and an Important Reminder on Legal Privilege

With the recent launch of ChatGPT 5, OpenAI unveiled its most advanced and capable conversational AI to date, boasting significant improvements in contextual understanding, nuanced interaction, and generative fluency.

Yet in a timely warning, OpenAI CEO Sam Altman cautioned in a July 2025 podcast that despite ChatGPT 5’s advancements, interactions with it—especially those akin to therapy, medical counselling, or legal advice—do not enjoy the same legal privilege or confidentiality protections that apply to communication with human professionals.

 

Why This Matters for Clients

 

No Attorney–Client Privilege (advokatska tajna)

Altman emphasised: “If you go talk to ChatGPT about your most sensitive stuff … we could be required to produce that” in a legal context. The existing framework does not afford AI conversations the same confidentiality as those with licensed professionals.

Legal Exposure Could Be Significant

In ongoing litigation, such as the case involving The New York Times, courts have already ordered OpenAI to preserve—and potentially disclose—deleted user chat logs. This means that even ostensibly private or deleted AI interactions could be subject to evidence discovery.

Heightened Risks for Confidential Information

As ChatGPT 5 becomes more sophisticated – better at understanding legal problems and responding in a more human-like way – it may encourage users to share highly confidential business information as well as deeply personal thoughts. Chat-like interactions may appear as confidential and solemn as exchanges with your lawyers, and they may feel as secure as encrypted email or instant messages. Yet they are not. They are not end-to-end encrypted and are likely stored by AI providers indefinitely. Unlike with human professionals, the legal consequences of such disclosures remain largely unsettled.

Lack of Professional Duty

Despite its conversational skills, GPT-5 cannot replicate the safeguards that human professionals provide. Lawyers are bound by ethics rules and subject to disciplinary action; AI has none of these constraints. If a disclosure through an AI system causes harm, or if a hallucination produces an incompetent answer, most, if not all, of the legal remedies clients are accustomed to will be unavailable to them.

 

Practical Concerns

A court or public prosecutor may seek to obtain your communications with ChatGPT and other AI systems by requesting assistance from foreign authorities – for example, the US authorities for systems based in the US, or possibly the authorities of other countries in which the providers’ data centres are located. It is also becoming increasingly likely that certain authorities outside the judiciary could contact the AI company directly (i.e. without going through the courts). Social media providers, for example, have cooperated with such requests for many years, with varying disclosure rates. Meta is notably transparent about the disclosure requests it receives from various countries, distinguishing between legal process requests and “emergency requests”, in which the usual legal safeguards are bypassed and the company provides the requested data to law enforcement authorities. The table below provides an overview of the most recent percentages of emergency request disclosures published for the US, the EU and the SEE region – including Serbia, Slovenia, Montenegro, Bosnia & Herzegovina, Croatia, North Macedonia and Albania (for the second half of 2024):

 

US: 88%
EU (average): 76.88%
SEE region (average): 60.43%

 

The SEE countries in the sample show a lower average rate of around 60.4%. There may be political, legal or economic reasons behind these differences. The US remains among the most active requesters globally, with consistently high cooperation rates in both the legal process and emergency categories. Taken together, these numbers show only that “Big Tech” has become highly transparent at the aggregate level. Behind the aggregates are real users whose interactions on the platform have, in some way or form, been disclosed to law enforcement authorities worldwide.

Rates like these reported by Meta – or by any other provider – may change over time as circumstances do. Political tensions, evident in the US, the EU and the SEE region alike, could drive such change, as could the growing relevance of existing and new AI providers, especially as their agentic and other capabilities expand.

Web paradigms have been tilting between decentralised and centralised concepts for some time. Web 3.0 was, and remains, the epitome of the decentralised internet, and while many have debated its utility, a substantial part of that ecosystem has always been dedicated to privacy. AI ecosystems make no such promises. They compete through reinforcement learning, data acquisition and hyperscaling, which makes them centralised by design. With their utility increasing every day, companies should find ways to harness that utility without jeopardising their businesses.

 

Key Takeaways

Exercise Caution in AI Interactions:

Use ChatGPT and other AI tools for general insights, drafting assistance, or topic exploration, not for discussing confidential legal issues.

Prefer Human Counsel for Privileged Matters:

When confidentiality matters, rely on licensed professionals, not AI systems, to ensure privilege is preserved.

 

Conclusion

The AI race has become brutal. ChatGPT 5 comes shortly after xAI’s launch of Grok 4 (with its AI companions, video generation and other new tools) and Anthropic’s launch of Claude 4 (and 4.1). As these models grow more capable and sophisticated, and their user bases more acquainted with them, the issues of confidentiality and legal privilege become ever more important. Everyone needs to remain vigilant about what they share with AI tools and when to refrain from sharing sensitive information, especially in matters requiring legal confidentiality.

As a rule of thumb: if you would not want a prompt read aloud in a courtroom, it is better not to type it.

 

The information in this document does not constitute legal advice on any particular matter and is provided for general informational purposes only.