OpenAI CEO Sam Altman has issued a cautionary message about the growing use of ChatGPT, particularly among young people who turn to the AI tool for personal and emotional support.
While acknowledging the potential benefits of AI, Altman highlighted serious concerns regarding user privacy and over-dependence on the technology.

Altman emphasized that interactions with AI systems like ChatGPT do not enjoy the same legal protections as conversations with therapists, doctors, or lawyers. This means that any personal information shared with ChatGPT could, in theory, be used as evidence in legal proceedings.
“People often confide very intimate details to AI systems,” Altman said. “But unlike a therapist or attorney, those conversations aren’t protected by privilege laws. That can have serious consequences in a courtroom.”
He also expressed concern over the increasing tendency among users, particularly younger ones, to treat AI as the ultimate authority on complex life decisions. According to Altman, some users are treating ChatGPT as the “final voice” in matters that should involve more human nuance, reflection, or professional guidance.
“This level of dependence is troubling,” he warned. “AI can be incredibly helpful, but it should never replace human judgment, empathy, or expertise—especially when it comes to mental health or major life choices.”
Altman stressed that OpenAI is actively exploring ways to address these issues, including improving user education and potentially enhancing privacy safeguards.