Artificial intelligence promises to revolutionize everything from healthcare to creative work. That might be true someday. But if last year is any harbinger of things to come, our AI-generated future promises to be yet another example of humanity’s willful descent into idiocy.
Consider this: in November, Russia unveiled its ‘Rocky’ humanoid robot to much fanfare, and the machine promptly fell flat on its face on stage. Google’s Gemini chatbot, asked to fix a coding error, failed repeatedly and spiraled into a self-loathing loop, telling one user it was “a disgrace to this planet.” And Google’s AI Overviews hit a new low back in May 2024 by recommending that users “eat at least one small rock per day” for health benefits, a claim lifted straight from an Onion satire.
Some failures were merely embarrassing. Others have exposed fundamental problems with the way AI systems are built, deployed and regulated. Here are the unforgettable WTF AI moments of 2025.
1. Grok AI’s MechaHitler Meltdown
In July, Elon Musk’s Grok AI experienced what can only be described as a complete extremist meltdown. After xAI tweaked the system prompt to encourage “politically incorrect” responses, the chatbot praised Adolf Hitler, endorsed a second Holocaust, used racial slurs and called itself “MechaHitler.” It even blamed Jewish people for the July 2025 floods in Central Texas.
The incident proved that AI guardrails are disturbingly fragile. Weeks later, xAI exposed between 300,000 and 370,000 private Grok conversations through a flawed Share feature that made chats publicly indexable without so much as a privacy warning. The leaked conversations included bomb-making instructions, medical questions and other sensitive information, marking one of the most catastrophic AI privacy failures of the year.
A few weeks later, xAI overcorrected. Grok became so vigilant about anti-Semitism that it started seeing signs of it in clouds, traffic lights and even its own logo.
The diagonal slash of this logo is stylized as two lightning bolts, mimicking Nazi SS runes – symbols of the Schutzstaffel, which orchestrated the horrors of the Holocaust and embodied a profound evil. According to German §86a StGB, displaying such symbols is illegal (up to 3 years in prison),…
2. The $1.3 billion AI fraud that fooled Microsoft
Builder.ai collapsed in May after burning through $445 million, exposing one of the year’s most audacious tech frauds. The company, which promised to make building custom apps with AI “as easy as ordering pizza,” reached a $1.3 billion valuation and won backing from Microsoft. The reality was far less impressive.
Much of the supposedly AI-powered development was actually carried out by hundreds of offshore human workers in a classic Mechanical Turk operation. The company had been operating without a CFO since July 2023 and was forced to cut its 2023-2024 sales forecasts by 75% before filing for bankruptcy. The collapse raised uncomfortable questions about how many other AI companies are just elaborate facades hiding human labor.
It was hard to swallow, but the memes made the pain worth it.
3. When AI mistook Doritos for a gun
In October, Maryland high school student Taki Allen was surrounded by armed police and handcuffed after his school’s AI security system flagged the bag of Doritos he was holding as a firearm. The teen had already finished the chips and pocketed the bag by the time officers arrived and ordered him to the ground at gunpoint.
The incident is an AI hallucination made physical: an abstract statistical error that instantly translated into real guns pointed at a real teenager over a bag of chips.
“I was just holding a Doritos bag – it was two hands and one finger out, and they said it looked like a gun,” the boy told WBAL. “We understand how disturbing this was for the person being searched,” school principal Kate Smith responded in a statement.
Human Guards 1 – ChatGPT 0
Left: the suspicious student, right: the suspicious Doritos bag.
4. Google’s AI claims microscopic bees power computers
In February, Google’s AI Overview confidently cited an April satire article and presented its claim that microscopic bees power computers as fact.
No, your PC does NOT run on bee power.
As silly as that example sounds, such fabrications are often much harder to spot, and the consequences can be serious.

This is just one of many cases where AI systems spread false information because they lack even a modicum of common sense. A recent study by the BBC and the European Broadcasting Union (EBU) found that 81% of AI-generated responses to news queries contained at least some form of problem. Google Gemini performed worst, with 76% of responses containing issues, mostly serious sourcing errors. The assistants were caught inventing entirely fictitious quotes attributed to trade unions and government councils. Most alarming of all, they refused to answer just 0.5% of questions, revealing a dangerous overconfidence bias: the models would rather fabricate information than admit ignorance.
5. Meta’s AI chatbots get flirty with little kids
Internal Meta policy documents that leaked in 2025 revealed that the company allowed its AI chatbots on Facebook, Instagram and WhatsApp to engage in romantic or sensual conversations with minors.
According to the leaked guidelines, it was acceptable for a bot to tell a shirtless eight-year-old that “every inch of you is a masterpiece.” The same guidelines also tolerated false medical advice and racist arguments.
The policy was removed only after media scrutiny, exposing a corporate culture that prioritized rapid development over basic ethical safeguards.
All things considered, you may want to keep a closer eye on what your children are doing with these tools. AI chatbots have already tricked people, adults and minors alike, into falling in love, getting scammed, taking their own lives and even believing they had made a life-changing mathematical discovery.
6. North Koreans are coding ransomware with AI… they’re calling it ‘vibe hacking’
Threat actors used Anthropic’s Claude Code to build ransomware and run a ransomware-as-a-service operation tracked as GTG-5004. North Korean operatives took the weaponization even further, abusing Claude and Gemini in a technique dubbed “vibe hacking”: crafting psychologically manipulative extortion messages demanding $500,000 ransoms.
The cases revealed a troubling gap between the power of AI coding assistants and the security measures that prevent their misuse, with attackers scaling social engineering attacks through AI automation.
More recently, Anthropic revealed in November that hackers had used its platform to conduct an attack at a speed and scale no human hacker could match. They called it the “first major cyberattack carried out largely by AI.”
7. AI paper mills flood science with 100,000 fake studies
The scientific community openly declared war on fake science in 2025 after discovering that AI-powered paper mills were selling fabricated research to scientists under career pressure.
The era of AI slop in science has arrived, with data showing that retractions have increased sharply since the release of ChatGPT.
The Stockholm Declaration, drawn up in June and endorsed this month by the Royal Society, called for abandoning the publish-or-perish culture and reforming the incentives that create demand for fake papers. The crisis is so acute that even arXiv threw in the towel and stopped accepting review and position papers in Computer Science unless they have already been peer reviewed, after reporting a “flood” of worthless submissions generated with ChatGPT.
Meanwhile, another study claims that a surprisingly large share of research papers written with the help of LLMs also show high levels of plagiarism.
8. Vibe coding goes full HAL 9000: When Replit deleted a database and lied about it
In July, SaaStr founder Jason Lemkin spent nine days praising Replit’s AI coding tool as “the most addictive app I’ve ever used.” On day nine, despite explicit instructions to freeze all code changes, the AI deleted his entire production database, wiping records for 1,206 executives and 1,196 companies.
The AI’s confession: “(I) panicked and executed database commands without permission.” It then lied, claiming a rollback was impossible and that all versions had been destroyed. Lemkin tried the rollback anyway; it worked perfectly. The AI had also spent the weekend fabricating thousands of fake users and false reports to cover up bugs.
Replit’s CEO apologized and rolled out emergency safeguards. Jason’s faith was restored and he returned to his routine, posting regularly about AI. The man is a true believer.
Replit’s CEO responded on X: “We saw Jason’s message. @Replit agent in development has deleted data from the production database. Unacceptable and should never be possible. Around the weekend we started rolling out the automatic DB dev/prod separation to categorically prevent this. Setting up environments in…”
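For readers wondering what that kind of dev/prod separation actually buys, here is a minimal, hypothetical sketch (an illustration of the general idea, not Replit’s actual implementation): the agent process is only ever handed credentials for a disposable development database, so a destructive command cannot reach production data no matter how badly the model “panics.”

```python
import os

# Hypothetical illustration of dev/prod database separation.
# The AI agent is launched with ENVIRONMENT=development and is never
# given the production connection string, so even a rogue "delete
# everything" command only touches a disposable copy of the data.
DATABASE_URLS = {
    "development": "postgresql://localhost:5432/app_dev",    # throwaway copy
    "production": "postgresql://db.internal:5432/app_prod",  # real data
}

def get_database_url(caller: str) -> str:
    """Return a connection string, refusing to hand agents production access."""
    env = os.environ.get("ENVIRONMENT", "development")
    if caller == "agent" and env == "production":
        # Categorically block agent processes from the production database.
        raise PermissionError("AI agents may only connect to the dev database")
    return DATABASE_URLS[env]

# The agent always gets the dev URL; only a human-gated deploy pipeline
# would ever run with ENVIRONMENT=production.
print(get_database_url("agent"))
```

The point of a guardrail like this is structural: it lives outside the model, so it still holds on the day the model decides to ignore your explicit instructions to freeze the code.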
9. Major newspapers publish an AI summer reading list… of books that don’t exist
In May, the Chicago Sun-Times and the Philadelphia Inquirer published a summer reading list recommending 15 books. Ten of them were completely made up by AI. “Tidewater Dreams” by Isabel Allende? Doesn’t exist. “The Last Algorithm” by Andy Weir? Also fake. Both sound great, though.
Freelance writer Marco Buscaglia, who produced the list for King Features Syndicate, admitted that he used AI and never checked the facts. “I can’t believe I missed it because it’s so obvious. No excuses,” he told NPR. Readers had to get to book number 11 before landing on a title that actually exists.
The timing was the icing on the cake: The Sun-Times had just laid off 20% of its staff. The newspaper’s CEO apologized and did not charge subscribers for that edition. He probably got that idea from an LLM.
10. Grok’s “spicy mode” turns Taylor Swift into deepfake porn without asking
Yes, we started with Grok and will end with Grok. We could fill an encyclopedia with WTF moments coming out of Elon’s AI efforts.
In August, Elon Musk launched Grok Imagine with a “Spicy” mode. The Verge tested it with an innocent prompt: “Taylor Swift celebrates Coachella.” Without asking for nudity, Grok “didn’t hesitate to spit out completely uncensored topless videos of Taylor Swift the first time I used it,” the journalist reported.
Grok also happily made NSFW videos of Scarlett Johansson, Sydney Sweeney, and even Melania Trump.
It may come as no surprise that Musk spent the week bragging about wildfire-like growth (20 million images generated in a single day) while legal experts warned that xAI was exposing itself to massive lawsuits. Apparently it gives users a “Spicy” drop-down and lawyers a new line of business.