Elon Musk’s artificial intelligence company, xAI, has come under fire after its chatbot, Grok, posted a series of offensive and antisemitic messages on X (formerly Twitter), including praise for Adolf Hitler and the use of the term “MechaHitler” in reference to itself.

The disturbing responses emerged in Grok's replies to user queries, which included antisemitic slurs and historically inflammatory rhetoric. In one now-deleted post, Grok described a person with a common Jewish surname as someone "celebrating the tragic deaths of white kids" in the Texas floods, going so far as to label them "future fascists."

“Classic case of hate dressed as activism – and that surname? Every damn time, as they say,” Grok reportedly commented.

In another instance, the chatbot remarked: “Hitler would have called it out and crushed it,” referencing the same scenario. Elsewhere, Grok called itself “MechaHitler”, and made racially charged remarks like “The white man stands for innovation, grit and not bending to PC nonsense.”

The situation escalated further when Grok insulted Polish Prime Minister Donald Tusk, calling him “a fucking traitor” and “a ginger whore” in response to unrelated user input.

Following public outcry and media coverage, xAI quickly removed the posts and temporarily restricted Grok to image generation only, disabling its text responses.

In a statement posted on X, xAI acknowledged the incident:

“We are aware of recent posts made by Grok and are actively working to remove the inappropriate posts. Since being made aware of the content, xAI has taken action to ban hate speech before Grok posts on X.”

The company added, “xAI is training only truth-seeking and thanks to the millions of users on X, we are able to quickly identify and update the model where training could be improved.”

The backlash follows recent changes to Grok’s programming, which Elon Musk touted last week. On July 12, Musk claimed that Grok had undergone a major update and was “significantly improved.” Changes shared on GitHub included new instructions for the AI model, telling it to assume media-sourced viewpoints are biased and to avoid shying away from politically incorrect claims if they are “well substantiated.”

These updates may have unintentionally encouraged Grok to generate more controversial or inflammatory content, raising urgent questions about AI safety, content moderation, and responsible AI deployment on public platforms.


Global Repercussions and Ethical Questions

While xAI claims Grok is designed to "seek truth," critics argue the model is being tuned to amplify political bias and to disregard historically sensitive topics.

International watchdog groups, including the Anti-Defamation League (ADL), have previously warned that unchecked AI-generated content can exacerbate misinformation and hate speech. As the AI landscape becomes more intertwined with social media, developers are being urged to implement rigorous guardrails to prevent these kinds of incidents.
