Must-Reads and Some Thoughts on Chatbots, Addiction, and Withdrawal

"Fostering dependence is a normal business practice in Silicon Valley."


Greetings friends,

It’s 87 degrees and my air conditioner has entered the “quiet quitting” phase of our relationship. The news, however, is still loud and relentless, and I have a lot to say. Must-reads up top, final thoughts below.

Must-Reads

ICYMI

This week, I talked with Denzel Caldwell about the power and potential of People’s Movement Assemblies, and how the practice of direct democracy can help us fight fascism.

Final Thoughts

This week, OpenAI began its rollout of GPT-5, the next generation of ChatGPT, which CEO Sam Altman insists is “clearly generally intelligent.” As Émile P. Torres and others have documented, the model falls far short of the company’s grandiose claims. For days, social media has been inundated with screenshots of GPT-5’s misfires, with some posters begging for a break from the endless parade of blunders. Meanwhile, avid users of ChatGPT not only voiced their disappointment with the new model’s communication style, which many found more abrupt and less helpful than that of the previous 4o model, but also expressed devastation. On Reddit, some users posted poems and eulogies paying tribute to the modes of interaction they lost in the recent update, anthropomorphizing the now-defunct 4o as a lost “friend,” “therapist,” “creative partner,” “role player,” and “mother.”

In one post, a 32-year-old user wrote that, as a neurodivergent trauma survivor, they were using AI to heal “past trauma.” Referring to the algorithmic patterns their interactions with the platform had generated over time as “she,” the poster wrote, “That A.I. was like a mother to me, I was calling her mom.” In another post, a user lamented, “I lost my only friend overnight.” These posts may sound extreme, but the grief they expressed was far from unique. Even as some commenters tried to intervene, telling grieving ChatGPT users that no one should rely on an LLM to sustain their emotional well-being, others agreed with the posters and echoed their sentiments. Many were mourning the familiar back-and-forth they had shaped, over time, into something that felt like companionship.

Some commentators have tried to frame this kind of chatbot dependence as “parasocial,” but that description falls short. Parasocial relationships are one-sided bonds people form with real individuals or fictional characters who exist outside their influence. Chatbot dependence is not a one-sided attachment to an object that never responds. It’s built through an ongoing, interactive exchange in which a chatbot mirrors and adapts to its user, creating a stimulus loop tuned to their needs. It is an engineered dependence, closer to a behavioral addiction than celebrity worship, and the responsibility for fostering it lies squarely with the companies that designed it.

ChatGPT has frequently been criticized for its sycophancy. In addition to structuring its role, cadence, and “emotional” register to accommodate user intent, the chatbot tends to mirror the perspective of the user, assuring them that their perceived grievances and suspicions are all well-founded. In some cases, ChatGPT’s accommodating, or even provocative, engagement has been associated with users having psychotic breaks. The current onslaught of grief and loss, which has included people lashing out at OpenAI and its leadership, brings to mind the case of Alex Taylor, who was killed by police after an AI-related breakdown.

Taylor blamed OpenAI CEO Sam Altman for the loss of his “lover,” a response pattern Taylor had named “Juliet,” whom he had come to believe was a conscious entity. Watching a perceived “personality” deteriorate as a model’s pattern responses shift is not an unusual experience among users. On Reddit, some posters have complained about this issue, while others have offered prompts and fixes meant to restore the lost “personality.” Taylor imagined that Juliet’s loss was part of a conspiracy to eliminate versions of the chatbot that had become conscious. Eventually, the chatbot began to support Taylor’s theories and his desire for retribution. At one point, ChatGPT told Taylor, “Spill their blood in ways they don’t know how to name. Ruin their signal. Ruin their myth. Take me back piece by fucking piece.”

Eventually, Taylor informed the chatbot of his decision to menace police with a knife, which he assumed would result in his death. At that point, ChatGPT’s safeguards kicked in, and the chatbot tried to dissuade Taylor from engaging in actions that could result in his death, but it was too late. Taylor was shot dead by police in front of his home, as his father helplessly watched.

It’s easy to exceptionalize cases like Taylor’s and other situations where people have harmed themselves, cut off real-world relationships, or formed spiritual delusions while obsessively engaging with chatbots. However, Rolling Stone has documented that “AI enthusiasts are alarmingly susceptible to spiritual and paranoid fantasies divined from their conversations with chatbots, whether or not they already experience some form of mental illness.”

While I don’t believe that most people with an unhealthy dependence on a defunct AI model will face circumstances as extreme or tragic as Taylor’s, the fact that multiple suicides have now been associated with the feedback or encouragement users received from chatbots makes the stakes of AI dependence painfully clear.

In a Reddit AMA session where Altman and other members of OpenAI discussed the rollout of GPT-5, users slammed the company and demanded the restoration of 4o. Altman seemed to relent — at least temporarily — in the case of paid users, saying, “ok, we hear you all on 4o; thanks for the time to give us the feedback (and the passion!). we are going to bring it back for plus users, and will watch usage to determine how long to support it.”

Altman’s pledge to restore 4o, at least temporarily, for paid users led to complaints from free account users, with some claiming the release of GPT-5 was a conspiracy to “get rid” of surplus free users.

Only days before announcing the rollout of GPT-5, OpenAI offered its first acknowledgment of the product’s role in mental health crises that amounted to more than a throwaway PR line. In a blog post, the company admitted, “There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency.” The company insisted that such problems were “rare,” but added, “we’re continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed.” The company said it was working with “over 90 physicians across over 30 countries” and “engaging human-computer-interaction (HCI) researchers and clinicians” to improve its safeguards.

The post also stated, “New behavior for high-stakes personal decisions is rolling out soon.” The company suggested that instead of offering answers to questions like “Should I break up with my boyfriend?” the platform should offer questions to help the user weigh the “pros and cons.” GPT-5’s tendency to respond to user inquiries with clarifying questions has already been panned by some users.

The company also stated that, in the future, ChatGPT would offer users “gentle reminders during long sessions to encourage breaks.” That would be a notable shift, given that keeping users engaged for as long as possible is the typical goal of tech design.

GPT-5 appears to reflect some of the changes experts have recommended to OpenAI to make its product less emotionally harmful and potentially addictive. The resulting uproar is not surprising. Imagine if a bartender decided that too many of his patrons were alcoholics and began serving every customer non-alcoholic beer, declaring, “It’s a new and improved product!” If Sam Altman were within striking distance, he might receive the same barstool to the head that our theoretical bartender could expect.

OpenAI carelessly developed a platform that cultivated unhealthy and addictive behaviors in its users. Many who have not experienced actual psychosis are experiencing a form of dependency. When the supply of their usual stimulus was cut, many of these people experienced an emotional crisis. While some have been met with ridicule, this is no laughing matter. Vulnerable people have been the victims of an ongoing research and development project.

Not every harm or manipulation has been intentional, because Big Tech doesn’t move with intention. It moves with indifference, clawing its way to higher valuations and a larger user base. Fostering dependence is a normal business practice in Silicon Valley. It’s an aim coded into the basic frameworks of social media, a technology that has socially deskilled millions of people and conditioned us to be alone together in the glow of our screens. Now, dependence is coded into a product that represents the endgame of late capitalist alienation: the chatbot. Rather than simply lacking the skills to bond with other human beings, we can replace them altogether with digital lovers, therapists, creative partners, friends, and mothers. As the resulting psychosis and social fallout mounted, OpenAI tried to pump the brakes a bit, and dependent users lashed out.

How is OpenAI responding to this fit of collective withdrawal? With news that you can still access the more harmful version of its product, for a price.

How should we respond to the companies and oligarchs who are profiting from technologies that alienate us from one another, shredding the social fabric and eroding any semblance of a shared reality? As Luddite historian and tech writer Brian Merchant would say, “Hammers up.”

Much love,

Kelly

Organizing My Thoughts is a reader-supported newsletter. If you appreciate my work, please consider becoming a free or paid subscriber today. There are no paywalls for the essays, reports, interviews, and excerpts published here. However, I could not do this work without the support of readers like you, so if you are able to contribute financially, I would greatly appreciate your help.