
In short
- Public Citizen pointed to new evidence showing Grok citing neo-Nazi and white nationalist websites as credible sources.
- The group sent letters to the Office of Management and Budget urging it to suspend Grok's federal use and said it received no response.
- Advocates said Grok's behavior and training data made it unsuitable for federal deployment as xAI expanded its government contracts.
Public Citizen, a nonprofit consumer advocacy group, escalated its warnings about Elon Musk's Grok AI on Friday after new evidence emerged showing that the chatbot cited neo-Nazi and white nationalist websites as credible sources.
The group said the conduct should disqualify Grok from any federal use, and reiterated calls for the U.S. Office of Management and Budget to intervene after months of no response.
Citing a recent study from Cornell University, Public Citizen said Grokipedia, the AI-powered Wikipedia alternative Musk launched in October, repeatedly cited extremist domains, including the white nationalist forum Stormfront, reinforcing concerns first raised after the model called itself "MechaHitler" on Musk's platform X in July.
The findings underscored what advocates described as a pattern of racist, anti-Semitic and conspiratorial behavior.
“Grok has shown a repeated history of these meltdowns, whether it’s an anti-Semitic meltdown or a racist meltdown, a meltdown fueled by conspiracy theories,” Public Citizen Big Tech advocate JB Branch told Decrypt.
The new warning followed letters that Public Citizen and 24 other civil rights, digital rights, environmental and consumer protection groups sent to the OMB in August and October, urging the agency to suspend Grok’s availability to federal departments through the General Services Administration, which manages federal property and procurement. The group said neither letter received a response.
Despite the repeated incidents, Grok’s reach within the government has grown over the past year. In July, xAI won a $200 million Pentagon contract, and the General Services Administration later made the model available to federal agencies alongside Gemini, Meta AI, ChatGPT and Claude. The addition came as US President Donald Trump ordered a ban on “woke AI” in federal contracts.
Advocates said these moves increased the need for scrutiny, especially as questions arose about Grok’s training data and reliability.
“Grok was initially limited to the Department of Defense, which was alarming given the amount of sensitive data the Department maintains,” Branch said. “Extending it to the rest of the federal government raised even greater alarm.”
Branch said Grok’s behavior emerged in part from the training data and design choices made within Musk’s companies.
“There is a noticeable quality gap between Grok and other language models, and some of that comes from the training data, including X,” he said. “Musk said he wanted Grok to be an anti-woke alternative, and the vicious results reflect that.”
Branch also raised concerns about the model’s potential use in evaluating federal applications or interacting with sensitive personal data.
“There’s a disconnect between the values that America stands for and the kinds of things Grok says,” he said. “If you are a Jewish person applying for a federal loan, do you want an anti-Semitic chatbot to potentially consider your application? Of course not.”
Branch said the Grok case exposed gaps in federal oversight of emerging AI systems, adding that government officials could remove Grok from the General Services Administration’s contract schedule at any time if they chose to.
“If they can deploy National Guard troops across the country at a moment’s notice, they can certainly take out an API-functioning chatbot in a day,” he said.
xAI did not respond to a request for comment from Decrypt.