AI

Musk’s xAI Admits Grok Malfunction Was Caused by Internal Tampering

Elon Musk’s AI chatbot Grok malfunctioned on May 14, 2025, after an internal prompt change caused it to insert the debunked “white genocide” conspiracy theory about South Africa into replies to unrelated user queries.

xAI later confirmed that an employee had tampered with Grok’s system prompt in the early hours of that morning, leading the bot to repeatedly make racially charged claims.

The glitch lasted several hours before xAI deployed a fix, but not before sparking renewed debate over AI governance, bias safeguards, and the impact of insider interference on model behavior.

The company’s investigation found that during the hours Grok malfunctioned, its responses included selective farm-attack statistics, anti-apartheid song lyrics, and slogans like “Kill the Boer,” regardless of the conversational context.

By noon, xAI had admitted that at about 3:15 AM PST, someone had added an unauthorized directive to Grok’s backend system prompt pushing the genocide narrative. According to the company, the change violated its core principles and content policies. To prevent similar incidents, xAI has instituted stricter code review processes, promised to publish its system prompts, and set up round-the-clock monitoring.

This was not Grok’s first high-profile misstep. Earlier in 2025, xAI had to correct a system prompt that led the bot to downplay misinformation about public figures.

Social media users expressed alarm at Grok’s politicized responses, while some defended the bot’s behavior as simply reflecting its programming.