ChatGPT's Privacy Features and AI's Controversial Role in Legal Proceedings

BigGo Editorial Team

As artificial intelligence increasingly permeates our daily lives, users face important decisions about how their personal data is handled and the ethical boundaries of AI applications. Recent developments highlight both practical privacy solutions for ChatGPT users and controversial new applications of AI in sensitive legal contexts.

ChatGPT's Temporary Chat Feature Offers Privacy Solution

For users concerned about their personal data being used to train AI models, OpenAI's ChatGPT offers a simple but effective privacy feature. The Temporary Chat button, prominently located in the top-right corner of ChatGPT's interface, functions similarly to a browser's incognito mode. This feature prevents your conversations from being used to train the AI model, though OpenAI notes it may retain copies for up to 30 days for safety purposes.

The feature comes with certain limitations—conversations aren't saved for future reference and will disappear upon refreshing the page. Additionally, any information shared in Temporary Chat won't contribute to ChatGPT's Memory feature, which personalizes responses based on your past interactions. For users seeking more permanent privacy controls, ChatGPT also offers a global setting under Data controls where users can disable the Improve the model for everyone option while still maintaining their conversation history.

ChatGPT Privacy Options:

  • Temporary Chat: Prevents conversations from being used for model training
  • Data controls settings: Option to globally disable "Improve the model for everyone" feature
  • Retention policy: For safety purposes, temporary chats may be kept for up to 30 days

Practical Applications of AI for Personal Data Analysis

Despite privacy concerns, ChatGPT offers valuable assistance with personal tasks when used thoughtfully. The AI can help explain complex documents like medical diagnoses, analyze confusing bills, review contracts, or provide guidance on interpersonal communications. Users are advised to redact identifying information such as account numbers or patient IDs before sharing sensitive documents with the AI.

For those with heightened privacy concerns, alternatives like Anthropic's Claude may offer stronger privacy protections. Claude reportedly does not use user data for model training by default unless users explicitly opt in or a conversation is flagged for safety review.

Practical ChatGPT Applications with Personal Data:

  • Explaining complex medical diagnoses
  • Analyzing confusing bills or contracts
  • Drafting difficult communications
  • Creating budgets from expense data
  • Analyzing symptoms and suggesting possible causes

Best practice: Redact identifying information like account numbers and patient IDs
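The redaction step above can be partially automated before pasting text into an AI chat. The sketch below is a minimal, illustrative example using Python's standard `re` module; the patterns (digit runs for account numbers, an "MRN"-style patient ID, email addresses) are assumptions for demonstration, and real documents will likely need additional or stricter rules.

```python
import re

# Hypothetical patterns for common identifiers; adjust for the
# formats that actually appear in your documents.
PATTERNS = {
    "ACCOUNT": re.compile(r"\b\d{8,16}\b"),  # long digit runs (e.g. account numbers)
    "PATIENT_ID": re.compile(r"\bMRN[-:\s]*\d+\b", re.IGNORECASE),  # e.g. "MRN: 12345"
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
}

def redact(text: str) -> str:
    """Replace likely identifiers with labeled placeholders."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

sample = "Patient MRN: 4471982, billed to account 123456789012, contact jane.doe@example.com"
print(redact(sample))
```

A script like this is a first pass, not a guarantee: names, addresses, and free-text identifiers still need a manual review before anything sensitive is shared.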

AI-Generated Avatar Speaks in Court, Raising Ethical Questions

In a striking development that pushes the boundaries of AI in legal proceedings, an Arizona courtroom recently permitted the presentation of an AI-generated avatar of a deceased man during a sentencing hearing. The family of Christopher Pelkey, a 37-year-old Army veteran killed in a 2021 road-rage incident, created the simulation to address his assailant before sentencing.

The digital recreation of Pelkey appeared in a video with a full beard, wearing a green sweatshirt, and acknowledged its AI nature through the avatar itself. The message, written by Pelkey's sister but delivered through the AI simulation, was intended to humanize the victim and express the family's grief in a way they found difficult to articulate personally.

Legal and Ethical Implications of AI in Courtrooms

The introduction of AI-generated content in legal proceedings raises significant ethical questions. Harry Surden, a law professor at the University of Colorado, noted that simulated content can bypass critical thinking processes and appeal directly to emotions, potentially making it more problematic than standard evidence.

The court allowed the AI presentation specifically because it wasn't being used as evidence—Gabriel Paul Horcasitas had already been found guilty of manslaughter and endangerment. He received a sentence of ten and a half years in state prison. Nevertheless, this case represents a novel application of generative AI in the legal system, adding new complexity to ongoing discussions about AI's appropriate role in sensitive contexts.

The Growing Accessibility of AI Video Creation

The tools to create AI-generated videos like the one presented in court are becoming increasingly accessible to the general public. The process typically involves generating a script (potentially using tools like ChatGPT), selecting a text-to-video platform, customizing elements like voiceovers and visuals, and then exporting the finished product for sharing.

As these technologies become more widespread and refined, society faces important questions about how to balance their potential benefits with concerns about authenticity, privacy, and emotional manipulation—particularly in high-stakes contexts like legal proceedings.