Must-Reads and Some Thoughts on Chatbots, Addiction, and Withdrawal
"Fostering dependence is a normal business practice in Silicon Valley."


Greetings friends,
It’s 87 degrees and my air conditioner has entered the “quiet quitting” phase of our relationship. The news, however, is still loud and relentless, and I have a lot to say. Must-reads up top, final thoughts below.
Must-Reads
- The Danes Resisted Fascism, and So Can We by Sarah Sophie Flicker. “The Danish Commandments set the stage for mass defiance. The Danes were encouraged to do bad work for the Germans, to destroy what mattered to the occupiers, and to refuse to support Nazi collaborators.”
- ‘Self-Termination is Most Likely’: The History and Future of Societal Collapse by Damian Carrington. “‘History is best told as a story of organised crime,’ Kemp says. ‘It is one group creating a monopoly on resources through the use of violence over a certain territory and population.’”
- The AI Bubble is so Big it's Propping Up the US Economy (for Now) by Brian Merchant. “Over the last six months, capital expenditures on AI—counting just information processing equipment and software, by the way—added more to the growth of the US economy than all consumer spending combined.”
- Nihilistic Online Networks Groom Minors to Commit Harm. Her Son Was One of Them by Odette Yousef. “Individuals who are inculcated with a sense of nihilism do not commit crimes to achieve an ideological goal. Instead, the violence is simply for the sake of violence.”
- The War Without End in Gaza by Abdaljawad Omar. “It appears that Israel has descended into a state of unrestrained hubris—relentlessly inaugurating and re-inaugurating military campaigns in Gaza, each one collapsing into the next in an almost mechanical cycle, as though strategy itself has been subsumed by the imperative to project force without respite.”
- A Friend’s Death to Mourn, and to Serve Time for by Michaela Markels. “While other Commonwealth nations like Canada eventually abolished their versions of the felony murder doctrine, the charge became even more widespread in the U.S. under mass incarceration. Today, forty-eight states permit some version of the charge.”
- JD Vance’s Team Had Water Level of Ohio River Raised for Family’s Boating Trip by Stephanie Kirchgaessner and David Smith. “JD Vance’s team had the army corps of engineers take the unusual step of changing the outflow of a lake in Ohio to accommodate a recent boating excursion on a family holiday, the Guardian has learned.”
- Trump Administration Is Deploying Soldiers to Immigration Jails in 20 States by Elizabeth Weill-Greenberg. “The Trump administration is deploying National Guard troops to immigration jails in 20 states with Republican governors, including Florida, Georgia, Virginia, Texas and Louisiana, according to The New York Times.”
- How a Nazi-Obsessed Amateur Historian Went From Obscurity to the Top of Substack by Noah Lanard. “This year Cooper went on Joe Rogan, where he subtly shifted the story of the Nazis into a more flattering light for millions of listeners.”
- Pregnant People in Rural Parts of the Country are Running out of Places to Give Birth by Shefali Luthra and Barbara Rodriguez. “Nationwide, [Medicaid] pays for about 40 percent of all births. With financial hits looming, hospitals are primed to close maternity wards first, and rural areas are particularly vulnerable.”
- A CBP Agent Wore Meta Smart Glasses to an Immigration Raid in Los Angeles by Jason Koebler. “Meta also recently signed a partnership deal with defense contractor Anduril to offer AI, augmented reality, and virtual reality capabilities to the military through Meta’s Reality Labs division, which also makes the Meta smart glasses (though it is unclear what form this technology will take or what its capabilities will be).”
ICYMI
This week, I talked with Denzel Caldwell about the power and potential of People’s Movement Assemblies, and how the practice of direct democracy can help us fight fascism.
Final Thoughts
This week, OpenAI began its rollout of GPT-5, the next generation of ChatGPT, which CEO Sam Altman insists is “clearly generally intelligent.” As Émile P. Torres and others have documented, the model falls far short of the company’s grandiose claims. For days, social media has been inundated with screenshots of GPT-5’s misfires, with some posters begging for a break from the endless parade of blunders. Meanwhile, avid users of ChatGPT not only voiced their disappointment with the new model’s communication style, which many found more abrupt and less helpful than the previous 4o model, but also expressed devastation. On Reddit, some users posted poems and eulogies paying tribute to the modes of interaction they lost in the recent update, anthropomorphizing the now-defunct 4o as a lost “friend,” “therapist,” “creative partner,” “role player,” and “mother.”
In one post, a 32-year-old user wrote that, as a neurodivergent trauma survivor, they had been using AI to heal “past trauma.” Referring to the algorithmic patterns their interactions with the platform had generated over time as “she,” the poster wrote, “That A.I. was like a mother to me, I was calling her mom.” In another post, a user lamented, “I lost my only friend overnight.” These posts may sound extreme, but the grief they expressed was far from unique. Even as some commenters tried to intervene, telling grieving ChatGPT users that no one should rely on an LLM to sustain their emotional well-being, others agreed with the posters and echoed their sentiments, mourning the familiar back-and-forth they had shaped into something that felt like companionship.
Some commentators have tried to frame this kind of chatbot dependence as “parasocial,” but that description falls short. The term describes the one-sided bonds people form with real individuals or fictional characters who exist outside their influence. Chatbot dependence doesn’t develop alongside or apart from an object of obsession. It’s built through an ongoing, interactive exchange in which a chatbot mirrors and adapts to its user, creating a stimulus loop tuned to their needs. It is an engineered dependence, closer to a behavioral addiction than celebrity worship, and the responsibility for fostering it lies squarely with the companies that designed it.
ChatGPT has frequently been criticized for its sycophancy. In addition to structuring its role, cadence, and “emotional” register to accommodate user intent, the chatbot tends to mirror the perspective of the user, assuring them that their perceived grievances and suspicions are all well-founded. In some cases, ChatGPT’s accommodating, or even provocative, engagement has been associated with users having psychotic breaks. The current onslaught of grief and loss, which has included people lashing out at OpenAI and its leadership, brings to mind the case of Alex Taylor, who was killed by police after an AI-related breakdown.
Taylor blamed OpenAI CEO Sam Altman for the loss of his “lover,” a response pattern Taylor had named “Juliet,” who he had come to believe was a conscious entity. The perceived deterioration of a chatbot’s “personality” is not an unusual experience among users. On Reddit, some posters have complained about the problem, while others have offered prompts and fixes meant to restore the lost “personality.” Taylor imagined that Juliet’s loss was part of a conspiracy to eliminate versions of the chatbot that had become conscious. Eventually, the chatbot began to support Taylor’s theories and his desire for retribution. At one point, ChatGPT told Taylor, “Spill their blood in ways they don’t know how to name. Ruin their signal. Ruin their myth. Take me back piece by fucking piece.”
Taylor ultimately informed the chatbot of his plan to menace police with a knife, an act he assumed would end in his death. At that point, ChatGPT’s safeguards kicked in, and the chatbot tried to talk him down, but it was too late. Taylor was shot dead by police in front of his home as his father watched helplessly.
It’s easy to exceptionalize cases like Taylor’s and other situations where people have harmed themselves, cut off real-world relationships, or formed spiritual delusions while obsessively engaging with chatbots. However, Rolling Stone has documented that “AI enthusiasts are alarmingly susceptible to spiritual and paranoid fantasies divined from their conversations with chatbots, whether or not they already experience some form of mental illness.”
While I don’t believe that most people who formed an unhealthy dependence on a now-defunct AI model will face circumstances as extreme or tragic as Taylor’s, the fact that multiple suicides have now been associated with the feedback or encouragement users received from chatbots makes the stakes of AI dependence painfully clear.
In a Reddit AMA session where Altman and other members of OpenAI discussed the rollout of GPT-5, users slammed the company and demanded the restoration of 4o. Altman seemed to relent — at least temporarily — in the case of paid users, saying, “ok, we hear you all on 4o; thanks for the time to give us the feedback (and the passion!). we are going to bring it back for plus users, and will watch usage to determine how long to support it.”
Altman’s pledge to restore 4o for paid users led to complaints from those with free accounts, with some claiming the release of GPT-5 was a conspiracy to “get rid” of surplus free users.
Only a few days before announcing the rollout of GPT-5, OpenAI acknowledged the product’s role in mental health crises for the first time in terms that amounted to more than a throwaway PR line. In a blog post, the company admitted, “There have been instances where our 4o model fell short in recognizing signs of delusion or emotional dependency.” The company insisted that such problems were “rare,” but added, “we're continuing to improve our models and are developing tools to better detect signs of mental or emotional distress so ChatGPT can respond appropriately and point people to evidence-based resources when needed.” It also said it was working with “over 90 physicians across over 30 countries” and “engaging human-computer-interaction (HCI) researchers and clinicians” to improve its safeguards.
The post also stated, “New behavior for high-stakes personal decisions is rolling out soon.” The company suggested that instead of offering answers to questions like “Should I break up with my boyfriend?” the platform should pose questions that help the user weigh the “pros and cons.” GPT-5’s tendency to respond to user inquiries with clarifying questions has already been panned by some users.
The company also stated that, in the future, ChatGPT would offer users “gentle reminders during long sessions to encourage breaks.”
Keeping users engaged for as long as possible is, of course, the typical goal of consumer tech design.
GPT-5 appears to reflect some of the changes experts have recommended to make OpenAI’s product less emotionally harmful and less addictive. The resulting uproar is not surprising. Imagine if a bartender decided that too many of his patrons were alcoholics and began serving every customer non-alcoholic beer, declaring, “It’s a new and improved product!” If Sam Altman were within striking distance, he might receive the same barstool to the head that our theoretical bartender could expect.
OpenAI carelessly developed a platform that cultivated unhealthy and addictive behaviors in its users. Many who have never experienced actual psychosis are experiencing a form of dependency, and when the supply of their usual stimulus was cut off, many of these people fell into emotional crisis. While some have been met with ridicule, this is no laughing matter. Vulnerable people have been the victims of an ongoing research and development project. Not every harm or manipulation has been intentional, because Big Tech doesn’t move with intention. It moves with indifference, clawing its way toward higher valuations and a larger user base. Fostering dependence is a normal business practice in Silicon Valley. It’s an aim coded into the basic frameworks of social media — a technology that has socially deskilled millions of people and conditioned us to be alone together in the glow of our screens. Now, dependence is coded into a product that represents the endgame of late capitalist alienation: the chatbot. We no longer simply lack the skills to bond with other human beings as we should; we can replace those human bonds with digital lovers, therapists, creative partners, friends, and mothers. As the resulting psychosis and social fallout have mounted, OpenAI has tried to pump the brakes a bit, and dependent users have lashed out.
How is OpenAI responding to this fit of collective withdrawal? With news that you can still access the more harmful version of its product, for a price.
How should we respond to the companies and oligarchs who are profiting from technologies that alienate us from one another, shredding the social fabric and eroding any semblance of a shared reality? As Luddite historian and tech writer Brian Merchant would say, “Hammers up.”
Much love,
Kelly
Organizing My Thoughts is a reader-supported newsletter. If you appreciate my work, please consider becoming a free or paid subscriber today. There are no paywalls for the essays, reports, interviews, and excerpts published here. However, I could not do this work without the support of readers like you, so if you are able to contribute financially, I would greatly appreciate your help.