Meta recently resolved a security flaw that could have allowed users of its AI chatbot to view the private prompts and generated content of other people.
Sandeep Hodkasia, who leads the security research firm AppSecure, discovered the issue late last year. He reported it to Meta on December 26, 2024, and the company awarded him a $10,000 bug bounty for the responsible disclosure. According to Hodkasia, Meta addressed the vulnerability by January 24, 2025, and confirmed there was no indication anyone had exploited the problem maliciously.
The flaw came to light while Hodkasia was reviewing how Meta AI lets users edit their prompts to regenerate text or images. He noticed that each edited prompt was assigned a unique numeric identifier. By monitoring his browser's network traffic while editing a prompt, he found that simply changing this number in the request retrieved another user's prompt and the AI's response.
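What Hodkasia describes is a textbook insecure direct object reference (IDOR). The sketch below illustrates the general pattern he observed, not Meta's actual API: the endpoint, URL structure, token handling, and IDs are all hypothetical.

```python
import requests

# Hypothetical endpoint and IDs, for illustration only -- not Meta's real API.
BASE_URL = "https://ai.example.com/api/prompts"

session = requests.Session()
session.headers["Authorization"] = "Bearer <your-own-valid-session-token>"

# Fetching your own edited prompt works as expected...
mine = session.get(f"{BASE_URL}/1000001")

# ...but simply changing the numeric ID returns someone else's prompt
# and generated response, because the server never checks ownership.
theirs = session.get(f"{BASE_URL}/1000000")
print(theirs.json())
```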
The problem stemmed from Meta's servers failing to verify that the person requesting a piece of content was actually authorized to see it. Because the identifiers were sequential and therefore easy to guess, an attacker could have harvested a large volume of private queries simply by scripting requests that stepped through prompt IDs.
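The standard defense against this class of bug is a server-side ownership check on every fetch, regardless of how the ID was obtained. Here is a minimal sketch of that check in a Flask-style handler; the route, the in-memory store, and the auth helper are toy stand-ins, not a reflection of Meta's actual stack.

```python
from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Toy in-memory store standing in for the real prompt database.
PROMPTS = {
    1000000: {"owner": "alice", "text": "private prompt", "reply": "..."},
    1000001: {"owner": "bob", "text": "another prompt", "reply": "..."},
}

def current_user():
    # Toy auth for demonstration: trust a header. A real service would
    # derive the user from a verified session or signed token.
    return request.headers.get("X-User", "")

@app.route("/api/prompts/<int:prompt_id>")
def get_prompt(prompt_id):
    prompt = PROMPTS.get(prompt_id)
    if prompt is None:
        abort(404)
    # The check reportedly missing here: confirm the authenticated user
    # actually owns this prompt, instead of trusting the numeric ID alone.
    if prompt["owner"] != current_user():
        abort(403)
    return jsonify(prompt)
```

Randomized, non-sequential identifiers (UUIDs, for example) would make mass enumeration harder, but they are a mitigation, not a substitute for the authorization check itself.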
A Meta spokesperson confirmed the fix had been deployed, saying the company investigated thoroughly and found no evidence the bug was ever abused, and noted that the researcher was rewarded for reporting the issue.
This incident underscores the challenges technology companies face as they rush to roll out new AI features. As more AI tools enter the mainstream, the risk of sensitive data leaks grows, especially when products are released rapidly without thorough security vetting.
Meta’s standalone AI chatbot app, which launched earlier this year as a competitor to platforms like ChatGPT, has already seen its share of stumbles. Some users inadvertently shared what they assumed were private conversations, further raising concerns about privacy and data protection.
Hodkasia described the flaw as an example of a “simple oversight” that could have led to serious privacy breaches if left unaddressed. He emphasized that strong safeguards should be a priority as AI tools become integrated into more consumer services.
As the industry continues to evolve, this case highlights the importance of transparent vulnerability reporting and quick response to keep users’ information secure.