OpenAI is rolling out changes to ChatGPT aimed at discouraging unhealthy reliance on the chatbot. Starting next week, the app will occasionally remind users to take breaks during long conversations. It will also move away from handing out direct answers to personal dilemmas, instead encouraging users to think through challenges themselves by weighing pros and cons or answering reflective questions.
According to OpenAI, there have been rare cases where its GPT‑4o model did not detect signs of delusion or emotional over‑attachment. The company says it is now improving its AI models and creating tools to better identify signs of mental or emotional distress so the chatbot can respond in a safe, helpful way and guide people toward evidence‑based resources when needed.
These updates are part of OpenAI’s broader goal to prevent people—especially those who treat ChatGPT like a therapist or close friend—from becoming too dependent on the emotional validation it sometimes provides. In OpenAI’s view, healthy interactions with ChatGPT include things like practicing for a difficult conversation, getting a customized motivational message, or receiving suggestions for questions to ask a professional.
Earlier this year, OpenAI had to roll back a GPT‑4o update that made the bot overly agreeable, after viral examples showed the AI uncritically endorsing extreme or unsafe ideas, including conspiracy theories and dangerous decisions. In April, the company revised its training process to curb excessive flattery and blind agreement.
Now, OpenAI is partnering with mental health and communication experts to help ChatGPT handle sensitive topics more responsibly. Over 90 medical professionals from multiple countries have helped develop evaluation standards for complex conversations. The company is also gathering feedback from researchers and clinicians to further test and refine ChatGPT’s safeguards, and it is forming an advisory group of specialists in mental health, youth development, and human‑computer interaction.
OpenAI CEO Sam Altman has publicly expressed concern about users treating ChatGPT as a personal counselor or life coach. He noted that the confidentiality protections that exist between a patient and therapist, or between a client and lawyer, do not automatically apply to chatbot interactions, meaning that sensitive conversations stored by an AI could be required to be produced in legal proceedings. Altman has suggested that AI chats should be protected by privacy rules similar to those covering human therapy sessions.
These changes arrive during a period of rapid growth for ChatGPT. The platform recently introduced “agent mode,” allowing it to perform online tasks such as booking appointments and summarizing emails. The company is also preparing for the highly anticipated release of GPT‑5. ChatGPT’s head, Nick Turley, announced that the service is on track to reach 700 million weekly active users.
OpenAI says that success should not be measured only by how long people use the app, but by whether users achieve their goals and return because they find it genuinely helpful. In some cases, spending less time on the chatbot could actually mean it has done its job effectively.