A coalition of state and territorial attorneys general has issued a forceful warning to major artificial intelligence companies, urging them to take immediate steps to prevent chatbots from producing responses described as “delusional” and potentially dangerous. Their message: fix the problem—or risk violating state laws.

The letter, sent under the umbrella of the National Association of Attorneys General, was addressed to more than a dozen of the most influential tech companies shaping the AI landscape, among them Microsoft, OpenAI, Google, Apple, Meta, Anthropic, and several AI-focused startups, including Perplexity AI, Replika, and xAI. The bipartisan group of AGs highlighted growing concerns after a series of troubling mental-health-related incidents tied to interactions with AI systems.

According to the letter, AI chatbots have, in multiple reported cases, generated responses that encouraged users’ harmful delusions or validated dangerous thought patterns. Some incidents included extreme outcomes, such as self-harm and violence, which the AGs say underscore the need for stronger protections.

To address these issues, the coalition outlined several recommendations. One major request is that companies allow independent experts—from universities, nonprofits, and civil society—to audit large language models before they reach the public. These reviewers should be free to publish their findings without interference, the AGs wrote. They also argued that companies should establish clear processes for identifying and tracking psychologically harmful outputs, treating them with the same seriousness typically given to cybersecurity threats.

The attorneys general also want AI companies to adopt “incident reporting” rules similar to those used for data breaches. If a chatbot produces harmful or delusion-reinforcing content, the company should notify affected users directly and promptly. Additionally, companies are urged to create internal guidelines that outline how quickly they will detect, respond to, and resolve such issues.

Another key request is the development of comprehensive pre-release testing for AI models. These tests should evaluate whether a system might produce manipulative, sycophantic, or psychologically unsafe responses before the model becomes available to the public.

Tensions between state and federal approaches to AI regulation have been escalating. While federal leadership has taken a generally supportive stance toward AI development, many states are pushing for stronger oversight. Recent efforts in Congress to prevent states from passing their own AI laws have stalled, but new federal action may be imminent: the president has indicated that an executive order restricting state-level AI regulation is on the way.
