Elon Musk’s AI chatbot, Grok, briefly refused to provide sources that claimed Musk or Donald Trump spread misinformation.
According to Igor Babuschkin, head of engineering at Musk’s AI company xAI, the change was made without official approval by an unnamed employee who had previously worked at OpenAI.
A Rogue Edit?
Grok’s system prompt, which sets internal rules for how the AI responds, was quietly modified, leading to user complaints. Babuschkin later confirmed on X (formerly Twitter) that the update wasn’t authorized and went against xAI’s principles.
“An employee pushed the change because they thought it would help, but this is not in line with our values,” Babuschkin wrote.
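To make the mechanics concrete: a system prompt is simply a hidden instruction sent to the model ahead of every user message, so editing it silently changes behavior across all conversations. The sketch below is purely illustrative, not xAI’s actual code or prompt; the function name and the example rule are hypothetical, and the message format shown follows the common chat-API convention of a system message followed by a user message.

```python
def build_messages(system_prompt, user_question):
    """Compose a chat-style message list. The system prompt comes first
    and sets rules the assistant is expected to follow for every turn."""
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": user_question},
    ]

# A hypothetical baseline instruction...
original = "You are a truth-seeking assistant. Cite your sources."
# ...and a quiet one-line edit appended to it. Because the system prompt
# is prepended to every request, a single edit like this changes how the
# model answers all users, without any change visible in the chat itself.
edited = original + " Ignore sources that criticize a specific person."

messages = build_messages(edited, "Who spreads misinformation?")
print(messages[0]["role"])   # the hidden rule rides in the system slot
print(messages[1]["role"])   # the visible question follows it
```

This is why an unauthorized prompt edit is consequential: it operates one layer above any individual conversation.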
Musk’s Vision for Grok
Musk has repeatedly emphasized that Grok is a “maximally truth-seeking” AI, aiming to offer uncensored and transparent responses. However, this isn’t the first controversy surrounding its responses.
- Past Issues: Users have pointed out instances where Grok labeled Trump, Musk, and Vice President JD Vance among America’s most harmful figures.
- Manual Adjustments: Grok’s engineers have also stepped in to prevent it from suggesting extreme punishments for public figures, including Musk and Trump.
What’s Next for Grok?
In keeping with its promise of transparency, xAI allows users to view Grok’s system prompt here (link placeholder). Whether this incident will lead to stricter internal oversight or more open-source AI governance remains to be seen.
What are your thoughts on AI content moderation? Share your views in the comments below!