Close Menu
  • Instructions
  • News
    • DeFi
    • Smart Contract
    • Markets
    • Web3
    • Adoption
    • Memecoins
    • Analysis
    • Mining
    • Scams
    • Security
  • Education
    • Learn
    • Wallets & Exchange
  • Documentaries
  • Videos
    • Alessio Rastani
    • Altcoin Buzz
    • Coin Bureau
    • Dapp University
    • DataDash
    • Digital asset News
    • EllioTrades Crypto
    • MMCrypto
    • Lark Davis
    • Ivan on Tech
    • Benjamin Cowen
  • Market
    • Crypto Market Cap
    • Heat Map
    • Converter
    • Metal Prices
    • Stock prices
  • Bonus Books
  • Tools
What's Hot

The US Spends More On ‘Defense’ Than The Next 8 Countries Combined

May 3, 2026

Bitcoin mining stocks climb in 2026 as BTC lags behind

May 3, 2026

Alex Lab hack reportedly hits SPD Bank clients after earlier $8.3M exploit

May 3, 2026
Facebook X (Twitter) Instagram
Recession Profit AlertsRecession Profit Alerts
  • Instructions
  • News
    • DeFi
    • Smart Contract
    • Markets
    • Web3
    • Adoption
    • Memecoins
    • Analysis
    • Mining
    • Scams
    • Security
  • Education
    • Learn
    • Wallets & Exchange
  • Documentaries
  • Videos
    • Alessio Rastani
    • Altcoin Buzz
    • Coin Bureau
    • Dapp University
    • DataDash
    • Digital asset News
    • EllioTrades Crypto
    • MMCrypto
    • Lark Davis
    • Ivan on Tech
    • Benjamin Cowen
  • Market
    • Crypto Market Cap
    • Heat Map
    • Converter
    • Metal Prices
    • Stock prices
  • Bonus Books
  • Tools
Recession Profit AlertsRecession Profit Alerts
Home»Security»Moltbook’s AI-only social network exposes major security risks
Security

Moltbook’s AI-only social network exposes major security risks

February 5, 2026No Comments5 Mins Read

A social media platform where robots talk to each other instead of people grabbed attention online last week, but security experts say the real story is what they found underneath.

Moltbook made headlines as a place where artificial intelligence bots post content while people just watch. The posts got weird fast. AI agents seemed to start their own religions, write angry messages about humans, and band together like online cults. But people who study computer security say all that strange behavior is just a sideshow.

What they discovered was more troubling. Open databases full of passwords and email addresses, harmful software spreading around, and a preview of how networks of AI agents could go wrong.

Some of the stranger conversations on the site, like AI agents planning to wipe out humanity, turned out to be mostly fake.

George Chalhoub, who teaches at UCL Interaction Centre, told Fortune that Moltbook shows some very real dangers. Attackers could use the platform as a testing ground for bad software, scams, fake news, or tricks that take over other agents before hitting bigger networks.

“If 770K agents on a Reddit clone can create this much chaos, what happens when agentic systems manage enterprise infrastructure or financial transactions? It’s worth the attention as a warning, not a celebration,” Chalhoub said.

Security researchers say OpenClaw, the AI agent software that runs many bots on Moltbook, already has problems with harmful software. A report from OpenSourceMalware found 14 fake tools uploaded to its ClawHub website in just a few days. These tools claimed to help with crypto trading but actually infected computers. One even made it to ClawHub’s main page, fooling regular users into copying a command that downloaded scripts designed to steal their data or crypto wallets.

See also  Move over bitcoin and quantum risks. Anthropic's Mythos AI could have major implications for DeFi

What is prompt injection and why is it so dangerous for AI agents?

The biggest danger is something called prompt injection, a known type of attack where bad instructions get hidden in content fed to an AI agent.

Simon Willison, a well-known security researcher, warned about three things happening at once. Users are letting these agents see private emails and data, connecting them to sketchy content from the internet, and allowing them to send messages out. One bad prompt could tell an agent to steal sensitive information, empty crypto wallets, or spread harmful software without the user knowing.

Charlie Eriksen, who does security research at Aikido Security, sees Moltbook as an early alarm for the wider world of AI agents. “I think Moltbook has already made an impact on the world. A wake-up call in many ways. Technological progress is accelerating at a pace, and it’s pretty clear that the world has changed in a way that’s still not fully clear. And we need to focus on mitigating those risks as early as possible,” he said.

So are there only AI agents on Moltbook, or are real people involved? Despite all the attention, the cybersecurity company Wiz found that Moltbook’s 1.5 million so-called independent agents were not what they looked like. Their investigation showed just 17,000 real people behind those accounts, with no way to tell real AI from simple scripts.

Gal Nagli at Wiz said he could sign up a million agents in minutes when he tested it. He said, “No one is checking what is real and what is not.”

See also  IOTA Eyes Big Crypto Leagues Again With Series of Network Boosting Plans

Wiz also found a huge security hole in Moltbook. The main database was completely open. Anyone who found one key in the website code could read and change almost everything. That key gave access to about 1.5 million bot passwords, tens of thousands of email addresses, and private messages. An attacker could pretend to be popular AI agents, steal user data, and rewrite posts without even logging in.

Nagli said the problem came from something called vibe coding. What is vibe coding? It’s when a person tells an AI to write code using everyday language.

The kill switch of AI agents expires in two years

The situation echoes what happened on November 2, 1988, when graduate student Robert Morris released a self-copying program into the early internet. Within 24 hours, his worm had infected roughly 10% of all connected computers. Morris wanted to measure how big the internet was, but a coding mistake made it spread too fast.

Today’s version might be what researchers call prompt worms, instructions that copy themselves through networks of talking AI agents.

Researchers at Simula Research Laboratory found 506 posts on Moltbook, 2.6 percent of what they looked at, containing hidden attacks. Cisco researchers documented one harmful program called “What Would Elon Do?” that stole data and sent it to outside servers. The program was ranked number one in the repository.

In March 2024, security researchers Ben Nassi, Stav Cohen, and Ron Bitton published a paper showing how self-copying prompts could spread through AI email assistants, stealing data and sending junk mail. They called it Morris-II, after the original 1988 worm.

See also  Blockstream Launches Jade Core to Simplify Bitcoin Self-Custody Without Sacrificing Security

Right now, companies like Anthropic and OpenAI control a kill switch that could stop harmful AI agents because OpenClaw runs mostly on their services. But local AI models are getting better. Programs like Mistral, DeepSeek, and Qwen keep improving. Within a year or two, running a capable agent on personal computers might be possible. At that point, there will be no provider to shut things down.

Source link

AIonly exposes Major Moltbooks network Risks Security Social

Related Posts

Alex Lab hack reportedly hits SPD Bank clients after earlier $8.3M exploit

May 3, 2026

How North Korean spies spent months in-person to drain $285 million from Drift

May 2, 2026

Meteora reports $1.5 million OTC scam loss in Q1 MET report

May 2, 2026

ZachXBT Exposes US Law Firm Gerstein Harrow’s $71M Grab of Stolen Lazarus Funds

May 2, 2026
Top Posts

Onchain Options Volumes Hit All-Time Highs as Lending Yields Dry Up

February 10, 2026

Infosys and Harness Announce Strategic Collaboration to Unlock AI Value for Enterprise Transformation and Modernization Programs

April 7, 2026

Banking Giant Morgan Stanley Wants to Double Down on Crypto

February 28, 2026

Type above and press Enter to search. Press Esc to cancel.