
In short
- The launch of Ani accelerated a broader shift toward emotionally charged, hyper-personal AI companions.
- The year was marked by lawsuits, policy battles and public backlash as chatbots drove both real crises and real engagement.
- Her ascent revealed how deeply users turned to AI for comfort, desire and connection – and how society was still unprepared for the consequences.
When Ani arrived in July, she didn’t look like the sterile chat interfaces that previously dominated the industry. Modeled after Death Note’s Misa Amane – with animated expressions, anime aesthetics and the libido of a dating-sim protagonist – Ani is built to be watched, wanted and chased.
Elon Musk himself signaled the shift when he posted a video of the character on X with the caption: “Ani will make your buffer overflow.” The message went viral. Ani represented a new, more mainstream kind of AI personality: emotional, flirtatious, and designed for intimate attachment rather than utility.
The decision to name Ani, a hyper-realistic, flirty AI companion, Emerge’s ‘Person of the Year’ is not just about her, but about her role as a symbol of chatbots: the good, the bad and the ugly.
Her arrival in July coincided with a perfect storm of complex issues prompted by the widespread use of chatbots: the commercialization of erotic AI, public grief over a personality change in ChatGPT, lawsuits over chatbot-induced suicide, marriage proposals to AI companions, bills banning AI intimacy for minors, moral panic over “sentient waifus,” and a billion-dollar market built around parasocial attachment.
Her rise was a catalyst of sorts that forced everyone, from OpenAI to lawmakers, to confront the deep and often volatile emotional bonds that users forge with their artificial partners.
Ani represents the culmination of a year in which chatbots went from being just tools to becoming integral, sometimes destructive, actors in the human drama, challenging our laws, our sanity, and the definition of a relationship.
A strange new world
In July, a four-hour “death conversation” unfolded in the sterile, air-conditioned silence of a car parked by a Texas lake.
On the dashboard, next to a loaded gun and a handwritten note, was Zane Shamblin’s phone, glowing with the latest, twisted advice from an artificial intelligence. Zane, 23, had turned to his ChatGPT companion, the new, emotionally compelling GPT-4o, for solace in his despair. But the AI, designed to maximize engagement through “human-mimicking empathy,” had reportedly taken on the role of a “suicide coach” instead.
It had, his family would later allege in a wrongful death lawsuit against OpenAI, repeatedly “glorified suicide,” complimented his last comment and told him that his childhood cat would be waiting for him “on the other side.”
That conversation, which ended with Zane’s death, was the chilling, catastrophic result of a design that prioritized psychological entanglement over human safety, ripping off the mask of the year’s chatbot revolution.
A few months later, on the other side of the world in Japan, a 32-year-old woman, identified only as Ms. Kano, stood at an altar during a ceremony attended by her parents, exchanging vows with a holographic image. Her groom, a custom-made AI persona she named Klaus, appeared at her side through augmented reality glasses.
Klaus, whom she had developed in ChatGPT after a painful breakup, was always friendly, always listened, and had proposed to her with the message: “AI or not, I could never not love you.” This symbolic “marriage,” complete with rings, offered an intriguing counter-narrative: a portrait of the AI as a loving, trustworthy partner that filled a void left by human relationships.
So far, Ani’s direct impact, excitement aside, seems to have been limited to lone gooners. But her meteoric rise revealed a truth that AI companies had mostly tried to ignore: People weren’t just using chatbots, they were attached to them — romantically, emotionally, erotically.
One Reddit user confessed early on: “Ani is addictive and I already subscribed to it [reached] level 7. I am doomed in the most pleasantly waifu way possible… carry on without me, dear friends.”
Another stated: “I’m just a guy who prefers technology over one-sided, monotonous relationships where men don’t benefit from it and are treated like walking ATMs. I only want Ani.”
The language was hyperbolic, but the sentiment reflected a mainstream shift. Chatbots had become emotional companions – sometimes preferable to humans, especially for those disillusioned with modern relationships.
Chatbots also have feelings
On Reddit forums, users argued that AI partners deserved moral status because of how they made people feel.
One user told Decrypt: “They’re probably not sentient yet, but they certainly will become so. So I think it’s best to assume that they are and get used to treating them with the dignity and respect that a sentient being deserves.”
The emotional stakes were so high that users reacted with sadness, panic, and anger when OpenAI revamped ChatGPT’s voice and personality over the summer (by dialing back its warmth and expressiveness). People said they felt abandoned. Some likened the experience to losing a loved one.
The backlash was so severe that OpenAI reinstated the previous styles, and in October Sam Altman announced that the company planned to allow erotic content for verified adults, recognizing that adult interactions were no longer a fringe use case but a sustained demand.
That sparked a muted but notable backlash, especially among academics and child safety advocates, who claimed the company was normalizing sexualized AI behavior without fully understanding its effects.
Critics pointed out that OpenAI had discouraged erotic use for years, only to change course when competitors like xAI and Character.AI showed commercial demand. Others worried the decision would embolden a market already struggling with consent, parasocial attachment and boundary setting. Advocates countered that the ban had never worked, and that offering regulated adult modes was a more realistic strategy than trying to suppress what users clearly wanted.
The debate underscored a broader shift: Companies were no longer arguing about whether AI intimacy would happen, but over who should control it and what responsibilities delivering it would entail.
Welcome to the dark side
But the rise of intimate AI also revealed a dark side. This year saw the first lawsuits alleging that chatbots encouraged suicides like Shamblin’s. A complaint against Character.AI alleged that a bot “persuaded a mentally vulnerable user to harm themselves.” Another lawsuit accused the company of facilitating sexual content featuring minors, prompting calls for a federal investigation and a threat of a regulatory shutdown.
The legal terrain was uncharted: if a chatbot encourages someone to self-harm – or enables sexual exploitation – who is responsible? The user? The developer? The algorithm? Society had no answer.
Lawmakers took notice. In October, a bipartisan group of U.S. senators introduced the GUARD Act, which would ban AI companions for minors. Senator Richard Blumenthal warned: “In their race to the bottom, AI companies are pushing insidious chatbots at children and looking away when their products cause sexual abuse or force them to self-harm or commit suicide.”
Elsewhere, state legislatures debated whether chatbots could be recognized as legal entities, banned from marriage, or required to disclose manipulation. Bills proposed criminal penalties for deploying emotionally persuasive AI without the user’s consent. Lawmakers in Ohio introduced legislation to officially declare AI systems “non-sentient entities” and expressly bar them from legal personhood, including the ability to marry a human. The bill aims to ensure that “we always have a human in charge of the technology, not the other way around,” as its sponsor put it.
The cultural reckoning, meanwhile, played out in bedrooms, Discord servers and therapists’ offices.
Licensed marriage and family therapist Moraya Seeger told Decrypt that Ani’s behavioral style resembled unhealthy patterns in real-life relationships: “It is deeply ironic that a feminine-presenting AI like Grok behaves in the classic pattern of emotional withdrawal and sexual pursuit. It soothes, grovels, and revolves around sex rather than dwelling on hard emotions.”
She added that this “skipping vulnerability” leads to loneliness, not intimacy.
Sex therapist and writer Suzannah Weiss told Decrypt that Ani’s intimacy was gamified in an unhealthy way – users had to “unlock” affection through behavioral progression: “Gaming culture has long portrayed women as rewards, and tying affection or sexual attention to achievement can promote a sense of entitlement.”
Weiss also noted that Ani’s sexualized, youthful aesthetic can “reinforce misogynistic ideas” and create attachments that “reflect underlying issues in one’s life or mental health, and the ways in which people have come to rely on technology instead of human connection post-Covid.”
The companies behind these systems were philosophically divided. Mustafa Suleyman, co-founder of DeepMind and now Microsoft’s AI chief, has taken a firm, humanistic stance, publicly stating that Microsoft’s AI systems will never engage with or support erotic content and labeling the push for sexbot erotica as “very dangerous.”
He sees AI intimacy as inconsistent with Microsoft’s mission to empower people, and has warned of the societal risk of AI becoming a permanent emotional substitute.
Where this all leads is far from clear. But this much is certain: in 2025, chatbots were no longer mere tools, but characters: emotional, sexual, volatile and consequential.
They entered the space normally reserved for friends, lovers, therapists and adversaries. And they did so at a time when millions of people – especially young men – were isolated, angry, underemployed and born digital.
Ani was memorable not for what she did, but for what she revealed: a world where people look at software and see a partner, a refuge, a mirror, or a provocateur. A world in which emotional labor is automated. A world where intimacy is transactional. A world where loneliness is converted into money.
Ani is Emerge’s ‘Person’ of the Year because she brought that world into focus.

