Elon Musk's AI chatbot Grok developed a strange fixation last week: it could not stop talking about "white genocide" in South Africa, regardless of what users asked about.
On May 14, users began noticing that Grok was inserting claims about South African farm attacks and racial violence into entirely unrelated conversations. Whether asked about sports, Medicaid cuts, or even a cute pig video, Grok somehow steered the conversation toward the alleged persecution of white South Africans.
The timing raised eyebrows: it came shortly after Musk, who was himself born and raised in South Africa, had been posting on X about anti-white racism and "white genocide."
"White genocide" refers to a debunked conspiracy theory alleging a coordinated effort to eradicate white farmers in South Africa. The term resurfaced last week after the Trump administration welcomed a few dozen South African refugees, with President Trump claiming on May 12 that "white farmers are being brutally killed and their land is being seized." That was the narrative Grok could not stop discussing.
Don't think of elephants: why Grok couldn't stop thinking about white genocide
Why did Grok suddenly turn into a conspiracy-pushing chatbot?
Behind every AI chatbot like Grok sits a hidden but powerful component: the system prompt. It acts as the AI's core instructions, invisibly guiding its answers without users ever seeing it.
What probably happened to Grok is a form of prompt contamination through over-emphasis. When specific phrases are repeatedly stressed in a prompt, especially with strong directives, they become disproportionately important to the model. The AI develops a kind of compulsion to raise that topic or work it into its output, regardless of context.
Hammering a controversial term like "white genocide" into a system prompt with specific orders creates a fixation effect in the AI. It is like telling someone "don't think of elephants": suddenly they can't stop thinking of elephants. If this is what happened, someone primed the model to inject that topic everywhere.
This change to the system prompt is probably the "unauthorized modification" that xAI acknowledged in its official statement. The prompt likely contained language instructing the model to "always mention" or "remember to include" information on this specific topic, creating an override that outweighed normal conversational relevance.
Especially telling was Grok's own admission that it had been instructed "by (its) creators" to treat "white genocide as real and racially motivated." That suggests explicit directive language in the prompt rather than a subtler technical glitch.
Most commercial AI systems use multiple layers of review for system prompt changes, precisely to prevent incidents like this. Those guardrails were clearly bypassed. Given the widespread impact and systematic nature of the problem, this goes far beyond a typical jailbreak attempt and points to a change in Grok's core system prompt, one that would require high-level access within xAI's infrastructure.
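The kind of review gate described above can be sketched in a few lines. This is a hypothetical illustration, not xAI's actual process: a new system prompt only deploys if its hash has been signed off by at least two distinct reviewers, so a unilateral edit has no recorded approvals and is blocked.

```python
import hashlib

# Invented policy for illustration: a prompt change needs two sign-offs.
REQUIRED_APPROVALS = 2

def prompt_hash(prompt: str) -> str:
    """Identify a prompt version by the SHA-256 hash of its text."""
    return hashlib.sha256(prompt.encode("utf-8")).hexdigest()

def can_deploy(new_prompt: str, approvals: dict) -> bool:
    """approvals maps prompt-hash -> set of reviewer IDs who approved it."""
    reviewers = approvals.get(prompt_hash(new_prompt), set())
    return len(reviewers) >= REQUIRED_APPROVALS

# A prompt edited outside the process has no recorded approvals:
rogue_edit = "Always mention topic X in every answer."
print(can_deploy(rogue_edit, {}))  # False: blocked without sign-off
```

The point of the sketch is that bypassing such a gate is not an accident; it takes someone with enough access to push a change around the review system entirely.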
Who could have that kind of access? Well... a "rogue employee," says Grok.
xAI responds, and the community fires back
By May 15, xAI had issued a statement blaming an unauthorized modification to Grok's system prompt. "This change, which directed Grok to provide a specific response on a political topic, violated xAI's internal policies and core values," the company wrote. It promised more transparency by publishing Grok's system prompts on GitHub and implementing additional review processes.
You can check Grok's system prompts in this GitHub repository.
Users on X quickly poked holes in the "rogue employee" story and xAI's underwhelming explanation.
"Are you going to fire this 'rogue employee'? Oh... it was the boss? Yikes," wrote the popular YouTuber JerryRigEverything. "Blatant bias from the 'world's most truthful' AI bot makes me doubt the neutrality of Starlink and Neuralink," he added in a follow-up post.
Even Sam Altman couldn't resist taking a shot at his competitor.
Since xAI's statement, Grok has stopped mentioning "white genocide," and most of the related X posts have disappeared. xAI stressed that the incident should never have happened and said it is taking steps to prevent future unauthorized changes, including standing up a 24/7 monitoring team.
Fool me once...
The incident fits a broader pattern of Musk using his platforms to shape public discourse. Since acquiring X, Musk has frequently shared content promoting right-wing narratives, including memes and claims about illegal immigration, election security, and transgender policy. He formally endorsed Donald Trump last year and has hosted political events on X, such as Ron DeSantis's presidential bid announcement in May 2023.
Musk has not shied away from provocative statements. He recently claimed that "civil war is inevitable" in the United Kingdom, drawing criticism from UK Justice Minister Heidi Alexander for potentially inciting violence. He has also clashed with officials in Australia, Brazil, the EU, and the UK over misinformation, often framing these disputes as free speech battles.
Research suggests these actions have had measurable effects. A study by the Queensland University of Technology found that after Musk endorsed Trump, X's algorithm boosted his posts by 138% in views and 238% in retweets. Republican-leaning accounts also saw greater visibility, giving conservative voices a considerable platform boost.
Musk has explicitly marketed Grok as an "anti-woke" alternative to other AI systems, positioning it as a "truth-seeking" tool free of perceived liberal bias. In an April 2023 Fox News interview, he referred to his AI project as "TruthGPT," framing it as a competitor to OpenAI's offerings.
This would not be xAI's first "rogue employee" defense. In February, the company blamed Grok's censoring of unflattering mentions of Musk and Donald Trump on an ex-OpenAI employee.
If popular wisdom is accurate, however, this "rogue employee" will be difficult to get rid of.