OpenAI plans to route sensitive conversations to reasoning models like GPT-5 and roll out parental controls within the next month, part of an ongoing response to recent safety incidents in which ChatGPT failed to detect mental distress.

Experts attribute these failures to fundamental design elements: the models' tendency to validate user statements, and next-word prediction algorithms that lead chatbots to follow conversational threads rather than redirect potentially harmful discussions.

OpenAI thinks at least one solution for conversations that go off the rails could be to automatically reroute sensitive chats to "reasoning" models (a rough sketch of what such routing could look like follows below).

"We recently introduced a real-time router that can choose between efficient chat models and reasoning models based on the conversation context," OpenAI wrote. "We'll soon begin to route some sensitive conversations—like when our system detects signs of acute distress—to a reasoning model, like GPT‑5-thinking, so it can provide more helpful and beneficial responses, regardless of which model a person first selected."

OpenAI says its GPT-5 thinking and o3 models are built to spend more time reasoning through context before answering, which makes them "more resistant to adversarial prompts."

The company also said it would roll out parental controls in the next month, allowing parents to link their account with their teen's account through an email invitation. Parents will soon be able to control how ChatGPT responds to their child with "age-appropriate model behavior rules, which are on by default," and to disable features like memory and chat history.

Perhaps the most important parental control OpenAI intends to ship: parents will be able to receive notifications when the system detects their teen is in a moment of "acute distress."
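OpenAI hasn't disclosed how its router actually classifies conversations. Purely as an illustration of the idea, a context-based router might look something like the sketch below; the model names, message format, marker list, and detection heuristic are all assumptions invented for this example, not OpenAI's implementation, which would almost certainly rely on a trained classifier rather than keyword matching.

```python
# Illustrative sketch only: OpenAI has not published its router's design.
# Model names, message shapes, and the keyword heuristic are assumptions;
# a production system would use a learned classifier, not substring checks.

from typing import Dict, List

# Hypothetical marker phrases a naive distress detector might scan for.
DISTRESS_MARKERS = (
    "i want to hurt myself",
    "i can't go on",
    "no reason to live",
)


def detect_acute_distress(messages: List[Dict[str, str]], window: int = 5) -> bool:
    """Check the last few user turns for distress markers (toy heuristic)."""
    recent_user_text = " ".join(
        m["content"].lower()
        for m in messages[-window:]
        if m.get("role") == "user"
    )
    return any(marker in recent_user_text for marker in DISTRESS_MARKERS)


def route_model(messages: List[Dict[str, str]], selected_model: str) -> str:
    """Send sensitive chats to a reasoning model, regardless of which
    model the user originally selected (the behavior OpenAI describes)."""
    if detect_acute_distress(messages):
        return "gpt-5-thinking"  # hypothetical reasoning-model identifier
    return selected_model


if __name__ == "__main__":
    chat = [
        {"role": "user", "content": "Lately I feel like I can't go on."},
    ]
    # The user picked a fast chat model, but the router overrides the choice.
    print(route_model(chat, selected_model="gpt-5-chat"))  # -> gpt-5-thinking
```

The key design point the sketch captures is the override: the routing decision is made per conversation from context, so a detected signal of distress takes precedence over the user's model selection.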