Meta recently addressed a security flaw that left users’ private AI prompts and generated content exposed to anyone who knew where to look.

Sandeep Hodkasia, who runs the security research firm AppSecure, discovered the issue and reported it privately in late December 2024. As a reward for responsibly disclosing the problem, Meta paid Hodkasia $10,000 through its bug bounty program.

According to Hodkasia, he spotted the flaw while analyzing how Meta AI enables logged-in users to tweak or regenerate the text and images produced by the chatbot. Each time a user edited a prompt, Meta’s servers assigned that prompt and its response a unique identification number. By monitoring his browser’s network activity during this process, Hodkasia realized he could simply change this ID number to retrieve someone else’s prompt and AI-generated reply.
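To illustrate the class of attack Hodkasia describes, here is a minimal sketch of the request tampering involved, using Python's `requests` library. The endpoint, cookie, and ID values are hypothetical stand-ins; Meta's internal API is not public.

```python
# Hypothetical sketch of the request-tampering technique Hodkasia describes.
# The endpoint, cookie, and ID values are illustrative, not Meta's real API.
import requests

API_URL = "https://ai.example.com/api/prompts/{prompt_id}"  # hypothetical endpoint
SESSION_COOKIE = {"session": "attacker-own-valid-login"}    # attacker is a logged-in user

# The ID the attacker legitimately received for their own edited prompt.
own_prompt_id = 1000245

# Substituting a nearby number fetches someone else's prompt if the
# server never checks that the requester actually owns that ID.
other_prompt_id = own_prompt_id - 1
resp = requests.get(API_URL.format(prompt_id=other_prompt_id),
                    cookies=SESSION_COOKIE)

if resp.ok:
    # On a vulnerable server, this would be another user's prompt and reply.
    print(resp.json())
```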

The vulnerability stemmed from Meta's servers failing to verify that the person requesting the prompt data was actually authorized to access it, a textbook insecure direct object reference (IDOR). Because these ID numbers followed a predictable pattern, a bad actor could have automated requests to enumerate them and collect other users' content in bulk.
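The standard fix for this class of bug is a server-side ownership check on every lookup. The sketch below uses Flask purely as a stand-in (Meta's actual stack, routes, and schema are unknown) to show the authorization step the vulnerable endpoint evidently skipped.

```python
# Minimal sketch of the missing authorization check, using Flask as a stand-in.
# Meta's real stack and schema are unknown; all names here are illustrative.
from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "dev-only"  # hypothetical; real apps load this from config

# Stand-in for a database table keyed by prompt ID.
PROMPTS = {
    1000245: {"owner_id": 42, "prompt": "...", "response": "..."},
}

@app.route("/api/prompts/<int:prompt_id>")
def get_prompt(prompt_id):
    record = PROMPTS.get(prompt_id)
    if record is None:
        abort(404)
    # The crucial check the vulnerable endpoint skipped: does the
    # authenticated user actually own this prompt?
    if record["owner_id"] != session.get("user_id"):
        abort(403)
    return jsonify(record)
```

Swapping sequential IDs for random, unguessable identifiers would also raise the bar for enumeration, but obscurity is no substitute for the authorization check itself.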

Hodkasia emphasized that exploiting the issue would have been straightforward. “The prompt IDs were easily guessable,” he explained, making the exposure especially concerning for users who assumed their interactions were strictly private.

Meta confirmed it rolled out a fix on January 24, 2025, and stressed that there was no sign the flaw had been misused by attackers. A Meta spokesperson said the company “found no evidence of abuse and rewarded the researcher.”

This incident highlights the growing tension between rapid AI innovation and user privacy. As major technology companies rush to release powerful AI assistants, they’re also grappling with new types of security challenges—some of which could undermine user trust if not quickly contained.

Earlier this year, Meta unveiled its dedicated Meta AI app in an effort to rival offerings like ChatGPT. The launch was quickly marred, however, when some users unwittingly posted what they believed were private chatbot conversations to a public feed.

Although Meta’s swift response and bounty payout show a commitment to fixing flaws, the episode underscores how easily sensitive data can slip through the cracks. As more people incorporate AI tools into daily life, ensuring robust safeguards around personal data will be crucial to maintaining confidence in these emerging platforms.
