Close Menu
  • Instructions
  • News
    • DeFi
    • Smart Contract
    • Markets
    • Web3
    • Adoption
    • Memecoins
    • Analysis
    • Mining
    • Scams
    • Security
  • Education
    • Learn
    • Wallets & Exchange
  • Documentaries
  • Videos
    • Alessio Rastani
    • Altcoin Buzz
    • Coin Bureau
    • Dapp University
    • DataDash
    • Digital asset News
    • EllioTrades Crypto
    • MMCrypto
    • Lark Davis
    • Ivan on Tech
    • Benjamin Cowen
  • Market
    • Crypto Market Cap
    • Heat Map
    • Converter
    • Metal Prices
    • Stock prices
  • Bonus Books
  • Tools
What's Hot

TON Price Prediction: $1.50 Target as Technical Indicators Signal Potential 13% Rally

May 2, 2026

The Cheap Foreign Labor Regime Blocking Agricultural Intelligence

May 2, 2026

Meteora reports $1.5 million OTC scam loss in Q1 MET report

May 2, 2026
Facebook X (Twitter) Instagram
Recession Profit AlertsRecession Profit Alerts
  • Instructions
  • News
    • DeFi
    • Smart Contract
    • Markets
    • Web3
    • Adoption
    • Memecoins
    • Analysis
    • Mining
    • Scams
    • Security
  • Education
    • Learn
    • Wallets & Exchange
  • Documentaries
  • Videos
    • Alessio Rastani
    • Altcoin Buzz
    • Coin Bureau
    • Dapp University
    • DataDash
    • Digital asset News
    • EllioTrades Crypto
    • MMCrypto
    • Lark Davis
    • Ivan on Tech
    • Benjamin Cowen
  • Market
    • Crypto Market Cap
    • Heat Map
    • Converter
    • Metal Prices
    • Stock prices
  • Bonus Books
  • Tools
Recession Profit AlertsRecession Profit Alerts
Home»Security»DeepMind flags six web based attacks that can hijack AI agents
Security

DeepMind flags six web based attacks that can hijack AI agents

April 5, 2026No Comments3 Mins Read

Researchers at Google DeepMind have warned that the open internet can be used to manipulate autonomous AI agents and hijack their actions.

The study titled “AI Agent Traps” comes as companies deploy AI agents for real-world tasks and attackers begin using AI for cyber operations.

Instead of focusing on how models are built, the research looks at the environments agents operate in. It identifies six types of traps that take advantage of how AI systems read and act on information from the web.

The six attack categories outlined in the paper include content injection traps, semantic manipulation traps, cognitive state traps, behavioural control traps, systemic traps, and human in the loop traps.

Content injection stands out as one of the most direct risks. Hidden instructions can be placed inside HTML comments, metadata, or cloaked page elements, allowing agents to read commands that remain invisible to human users. Tests showed these techniques can take control of agent behaviour with high success rates.

Semantic manipulation works differently, relying on language and framing rather than hidden code. Pages loaded with authoritative phrasing or disguised as research scenarios can influence how agents interpret tasks, sometimes slipping harmful instructions past built-in safeguards.

Another layer targets memory systems. By planting fabricated information into sources that agents rely on for retrieval, attackers can influence outputs over time, with the agent treating false data as verified knowledge.

Behavioural control attacks take a more direct route by targeting what an agent actually does. In these cases, jailbreak instructions can be embedded into normal web content and read by the system during routine browsing. Separate tests showed that agents with broad access permissions could be pushed into locating and transmitting sensitive data, including passwords and local files, to external destinations.

See also  Modern Bank Heists 5.0 - Infosecurity Magazine

System-level risks extend beyond individual agents, with the paper warning that coordinated manipulation across many automated systems could trigger cascading effects, similar to past market flash crashes driven by algorithmic trading loops.

Human reviewers are also part of the attack surface, as carefully crafted outputs can appear credible enough to gain approval, allowing harmful actions to pass through oversight without raising suspicion.

How to defend against these risks?

To counter these risks, researchers suggest a mix of adversarial training, input filtering, behavioural monitoring, and reputation systems for web content. They also point to the need for clearer legal frameworks around liability when AI agents execute harmful actions.

The paper stops short of offering a complete fix and argues that the industry still lacks a shared understanding of the problem, leaving current defenses scattered and often focused on the wrong areas.

Source link

Agents attacks Based DeepMind Flags hijack Web

Related Posts

Meteora reports $1.5 million OTC scam loss in Q1 MET report

May 2, 2026

Crypto hack losses top $630M in April, highest since February 2025

May 2, 2026

US seized $500M in Iranian crypto assets, Treasury secretary says

May 2, 2026

Wasabi Protocol drained for $4.5 million in apparent admin key compromise

May 2, 2026
Top Posts

Bitcoin's rally faces key hurdle with Wednesday's Fed meeting

March 17, 2026

Binance Pushes Back on Senate Inquiry, Calls Allegations ‘False and Defamatory’ in Formal Response

March 7, 2026

WIF Price Prediction: Targets $0.19 Resistance Test by Mid-April 2026

April 2, 2026

Type above and press Enter to search. Press Esc to cancel.