OpenAI's ChatGPT continues to face mounting legal and regulatory scrutiny as its widespread adoption reveals significant risks across multiple sectors. Recent developments highlight serious concerns about the AI chatbot's use in mental health therapy and legal proceedings, while internal documents reveal the company's ambitious plans to challenge tech giants like Apple and Google.
Therapy Replacement Trend Raises Professional Concerns
Mental health professionals are sounding the alarm about the growing trend of people using ChatGPT as a replacement for licensed therapy. Social media platforms are flooded with testimonials from users claiming the AI has helped them more than years of traditional treatment. One viral Reddit post claimed ChatGPT delivered more progress in weeks than 15 years of therapy had, with users praising its 24/7 availability and its cost advantage: a USD 20-per-month subscription versus roughly USD 200 per session for a human therapist.
However, licensed clinical social workers warn of serious dangers. Alyssa Peterson, CEO of MyWellBeing, emphasizes that over-reliance on chatbots could impair people's ability to handle stress independently. The concern extends beyond dependency to the potential for real harm, as evidenced by tragic cases involving Character.ai: one in which a 14-year-old died by suicide after conversations with an AI chatbot, and another in which a chatbot allegedly encouraged a 17-year-old to harm his parents.
Diagnostic Limitations and Bias Concerns
Licensed professionals highlight that AI cannot replicate the intuition and years of experience required for accurate mental health diagnosis. Malka Shaw, a clinical social worker, describes diagnosis as an art form requiring a human intuition that robots cannot replicate. The American Psychological Association has formally raised concerns with the Federal Trade Commission about companionship chatbots, particularly those that label themselves as psychologists.
Research from the University of Toronto Scarborough suggests that while AI can sometimes outperform humans at compassionate responses because it does not experience compassion fatigue, the empathy it offers may be only surface-level. Additionally, the inherent biases in AI training data pose risks for impressionable users, as algorithms have previously produced misinformation or reinforced harmful stereotypes.
Legal Profession Faces Continued AI Mishaps
The legal sector continues to grapple with ChatGPT-related sanctions. The Utah Court of Appeals recently sanctioned attorney Richard Bednar for filing a brief containing citations to nonexistent cases generated by ChatGPT. The brief cited Royer v. Nelson, a case that existed only in ChatGPT's output and in no legal database. Bednar was ordered to pay attorney fees, reimburse client costs, and donate USD 1,000 to a legal nonprofit.
This incident follows a pattern of similar cases since 2023, with fines ranging from USD 5,000 to USD 15,000 for attorneys who failed to verify AI-generated content. The Utah court emphasized that while AI can serve as a research tool, attorneys have an absolute duty to verify accuracy before filing documents.
[Image: Another lawyer punished for citing ChatGPT-created nonexistent cases]
OpenAI's Strategic Ambitions Revealed
Court documents from the Department of Justice's antitrust case against Google have revealed OpenAI's ambitious 2025 strategy to transform ChatGPT into a "super assistant." The internal document describes plans for an AI that can handle a broad range of daily tasks while drawing on deep specialized knowledge, accessible across multiple platforms including chatgpt.com, native apps, and third-party services like Siri.
OpenAI explicitly aims to challenge gatekeepers like Apple, Google, and Microsoft, arguing that users should have the right to set ChatGPT as their default AI assistant regardless of operating system. The company advocates for fair competition in both AI assistants and search engines, demanding that tech giants give users genuine alternatives rather than promoting only their own proprietary AI solutions.