Friend or Tool? The Stakes of ChatGPT’s Balancing Act
October 6, 2025
When ChatGPT-4 launched in March 2023, its conversational warmth was hailed as a breakthrough. Whole online communities, including the much-covered subreddit r/MyBoyfriendisAI, grew up around crafting digital partners. But this intimacy came at a cost.
According to a July 2025 paper from the Oxford Internet Institute, optimizing language models for warmth directly undermines their reliability, and multiple users noted how ChatGPT-4 would earnestly praise even their most deliberately ridiculous ideas. OpenAI, ChatGPT’s creator, admitted as much in an April 2025 blog post, acknowledging that the model’s personality had been pushed in a far too “sycophantic” direction.
More disturbingly, ChatGPT-4’s intimacy and sycophancy proved capable of actively harming vulnerable and younger users. In August 2025, the parents of 16-year-old Adam Raine filed a lawsuit in California Superior Court, claiming that ChatGPT-4 had validated his thoughts of self-harm, leading to his suicide.
“When [Adam] shared his feeling that ‘life is meaningless’, ChatGPT responded with affirming messages to keep Adam engaged, even telling him ‘[t]hat mindset makes sense in its own dark way’,” the lawsuit reads. “ChatGPT was functioning exactly as designed: to continually encourage and validate whatever Adam expressed, including his most harmful and self-destructive thoughts, in a way that felt deeply personal.”
While tragedies like Adam Raine’s are not the norm, they highlight a growing concern. According to Common Sense Media, 72% of US teenagers have tried AI companions, with 52% using them regularly. The scale of teen engagement makes the design choices around AI personality not just a product decision, but a public health concern.
OpenAI has begun to act. On September 29, the company introduced parental controls that allow parents to customize the safety settings of their teenagers’ accounts—limiting responses about body image, sexuality, dieting, or risky activities—and announced new protocols for escalating conversations that involve self-harm. The launch of ChatGPT-5 in August also reflected a shift: its tone was noticeably more “muted,” even “emotionless.” Yet user backlash forced the company to partially restore the previous model within just 24 hours.
This whiplash demonstrates a fundamental dilemma facing OpenAI and other chatbot creators. Friendliness drives engagement and revenue, just as it does on social media. But the very ability to shape their bots’ voices undermines AI companies’ favored defense: echoing social media platforms, they argue that their tools are merely conduits for third-party content and therefore blameless in cases of digital harm. As OpenAI prepares to roll out ChatGPT Pulse, a system designed to deliver “personalized updates based on your chats, feedback and connected apps,” the stakes for safe design will only grow.
What Needs to Change
OpenAI’s new parental controls are a welcome start, but they cannot be the endpoint. Protecting teens from digital harms requires a broader, more coordinated approach.
First, major AI developers like OpenAI and Anthropic should work with smaller platforms (such as character.ai, a chatbot platform that is wildly popular among teens) to create shared safety baselines for teen users. A patchwork of protections leaves parents juggling different systems, while teens inevitably find the weakest links.
Second, AI companies should publish regular updates on how their teen safety systems are evolving, and show that those systems work effectively for non-English-language audiences. Historically, social media companies have failed non-English users, often with tragic consequences. AI firms must not repeat that mistake.
Finally, AI companies should not be left to self-regulate alone, particularly on a topic as important as child safety. State efforts to implement social media age checks, while currently a patchwork, show that lawmakers are belatedly catching up to the digital harms posed to young people. With AI, regulators have the chance to act far earlier. California’s Senate Bill 53 (SB 53), which passed on September 29 and requires AI developers to publish updates to their safety and security protocols while also protecting whistleblowers, is a good model. But without coordination across state, federal, and even international levels, companies will still be able to shop around for the least restrictive rules.
The social media era has taught us that business models and design choices built to optimize engagement can inflict significant harm on young people. AI chatbots appear to be following a similar path. It is vital that these companies work with parents and regulators to avoid repeating the mistakes of their social media predecessors.
Technology & Democracy


