Lord forgive me, it’s time to go back to the old ChatGPT

Earlier this year, OpenAI pared back some of ChatGPT’s “personality” as part of a broader effort to improve user safety following the suicide of a teenager who had discussed his plans with the chatbot. But apparently that’s all in the past. Sam Altman announced on Twitter that the company is bringing back the old ChatGPT, now with a porn mode.

“We made ChatGPT quite restrictive to make sure we were cautious about mental health issues,” Altman said, referring to the company’s age-gating, which steered users toward a more age-appropriate experience. Around the same time, users began complaining that ChatGPT had been “lobotomized,” producing worse results with less personality. “We realize this made it less useful/enjoyable to many users who had no mental health issues, but given the severity of the issue, we wanted to get it right.” The change follows a wrongful-death lawsuit filed by the parents of a 16-year-old who, among other things, asked ChatGPT for advice on how to tie a noose before taking his own life.

But don’t worry, everything is sorted now! Although he admitted earlier this year that protective measures could “degrade” over longer conversations, Altman confidently asserted, “We have been able to mitigate serious mental health issues.” For that reason, the company believes it can “safely relax restrictions in most cases.” In the coming weeks, Altman says, ChatGPT will be allowed more personality, in the vein of the company’s previous 4o model. When the company upgraded to GPT-5 earlier this year, users mourned the loss of their AI companion and lamented the chatbot’s more sterile responses. You know, just regular healthy behavior.

“If you want your ChatGPT to respond in a very human way, or use a ton of emoji, or act like a friend, ChatGPT should do that (but only if you want it to, not because we’re maximizing usage),” Altman said, apparently ignoring the company’s own earlier report, produced with MIT researchers, which warned that people might develop an “emotional dependency” when interacting with its 4o model. Those researchers cautioned that users who “perceive or desire that an AI has benevolent motivations will use language that elicits precisely that behavior. This creates an echo chamber of affection that threatens to be extremely addictive.” That appears to be a feature, not a bug. Very cool.

Going further, Altman said the company would embrace its principle of “treating adult users like adults” by introducing “erotica for verified adults.” Earlier this year, Altman mocked Elon Musk’s xAI for launching an AI girlfriend mode. Turns out he has come around to the waifu route himself.
