Meta has patched a security vulnerability in its Meta AI chatbot that could have exposed users' private prompts and the AI-generated responses to them. The bug allowed logged-in users to access content generated by other users without their knowledge or consent.
The issue was discovered by Sandeep Hodkasia, founder of security testing company AppSecure, who privately reported the flaw to Meta on December 26, 2024 and received a $10,000 bug bounty for the responsible disclosure.
In a statement, Hodkasia explained that he identified the bug while testing how Meta AI lets users edit and regenerate prompts. When a prompt is edited, Meta's backend assigns it a unique ID number. By monitoring his browser's network traffic during this process, Hodkasia found that he could change that ID in the request and retrieve responses associated with other users.
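In effect, this is an insecure direct object reference (IDOR): the server hands back whatever record the supplied ID names. A minimal sketch of what such a probe looks like, using an entirely hypothetical endpoint, session cookie, and response shape (Meta has not published its API details):

```python
# Illustrative only: the endpoint, cookie, and response fields below are
# hypothetical stand-ins, not Meta's actual API.
import requests

BASE_URL = "https://ai.example.com/api/prompts"    # hypothetical endpoint
COOKIES = {"session": "attacker-own-valid-login"}  # an ordinary logged-in session

def fetch_prompt(prompt_id: int) -> dict | None:
    """Request the prompt/response pair stored under a numeric ID."""
    resp = requests.get(f"{BASE_URL}/{prompt_id}", cookies=COOKIES, timeout=10)
    return resp.json() if resp.ok else None

# If IDs are sequential and the server never checks ownership, incrementing
# the number walks straight through other users' conversations.
for pid in range(100_000, 100_010):
    record = fetch_prompt(pid)
    if record:
        print(pid, record.get("prompt"), record.get("response"))
```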
The bug stemmed from Meta’s failure to verify whether the user requesting a prompt was authorized to view it. Hodkasia noted that the prompt IDs were “easily guessable,” opening the door for bad actors to automate the process and scrape private data.
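The standard remedy is an explicit server-side ownership check before returning a record. A minimal sketch, assuming a Flask-style handler with hypothetical names and storage (this is not Meta's implementation):

```python
# Sketch of the missing authorization check; names, datastore, and auth
# plumbing are assumptions for illustration, not Meta's code.
from flask import Flask, abort, g, jsonify

app = Flask(__name__)

# Stand-in datastore: each stored prompt records which user created it.
PROMPTS = {
    101: {"owner_id": 1, "prompt": "...", "response": "..."},
    102: {"owner_id": 2, "prompt": "...", "response": "..."},
}

@app.get("/api/prompts/<int:prompt_id>")
def get_prompt(prompt_id: int):
    record = PROMPTS.get(prompt_id)
    if record is None:
        abort(404)
    # The check the vulnerable endpoint reportedly lacked: confirm that the
    # authenticated user (assumed set on g by auth middleware) owns the record.
    if record["owner_id"] != getattr(g, "current_user_id", None):
        abort(403)
    return jsonify(record)
```

Random, non-sequential identifiers such as UUIDs would also make the enumeration Hodkasia described impractical, though the ownership check, not ID obscurity, is the actual fix.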
Meta deployed a fix on January 24, 2025, and, according to company spokesperson Ryan Daniels, the company “found no evidence of abuse and rewarded the researcher.”
This incident underscores the growing privacy risks tech companies face as they push aggressively into generative AI. Meta AI, which launched earlier this year to compete with apps like ChatGPT, has already drawn criticism after some users inadvertently shared conversations they believed were private. Incidents like these erode consumer trust and highlight the urgent need for privacy safeguards and ethical implementation that keep pace with fast-moving AI development.