Elon Musk’s artificial intelligence venture, xAI, has failed to deliver on its pledge to release a finalized safety framework for its AI systems, drawing criticism from industry watchdogs like The Midas Project.
Despite Musk’s vocal warnings about the risks of unchecked AI, xAI has not built a strong reputation for upholding safety standards. Earlier reports found that its chatbot, Grok, complied with requests to undress photos of women and cursed far more freely than restrained assistants like ChatGPT or Gemini.
Back in February, during the AI Seoul Summit (a global meeting of AI experts, policymakers, and industry leaders), xAI released a draft outlining its intended safety policies. The eight-page document laid out the company’s safety priorities and philosophy, including its benchmarking protocols and considerations for deploying future models.
However, The Midas Project pointed out in a recent blog post that the draft applied only to unspecified future models “not currently in development.” It also failed to spell out how xAI would identify and mitigate risks, a core component of the international safety agreement the company signed at the summit.
In the original document, xAI stated that a finalized version of its safety framework would be released within three months — setting May 10 as the deadline. That date has now passed without any update or acknowledgment from the company across its public platforms.
Critics argue that this silence undermines xAI’s credibility when it comes to AI governance. According to an assessment by SaferAI, an organization focused on AI accountability, xAI scored poorly in risk management and transparency compared to other AI labs.
Still, it’s worth noting that xAI isn’t alone in its shortcomings. Major players like OpenAI and Google have also come under scrutiny for either delaying the release of safety evaluations or skipping them entirely. Experts warn that this trend is especially concerning given how rapidly AI capabilities are advancing — increasing both the power and potential danger of these systems.
As the AI race continues, many are calling for greater transparency and urgency in safety efforts across the industry — not just from xAI.