A single poisoned document could disclose “secret” data via Chatgpt

The latest generator AI models are not only autonomous text generation chatbots – instead, they can easily be connected to your data to give personalized answers to your questions. Openai’s chatgpt can be linked to your Gmail reception box, authorized to inspect your GitHub code or find appointments in your Microsoft calendar. But these connections have the potential to be abused – and researchers have shown that it can take a single “poisoned” document to do so.
The new discoveries of security researchers Michael Bargury and Tamir Ishay Sharbat, revealed at the Black Hat Hacker conference in Las Vegas today, show how a weakness of the Openai connectors made it possible to extract sensitive information from a Google Drive account using an indirect rapid injection attack. In a demonstration of the attack, nicknamed Agent Flayer, Bargury shows how it was possible to extract the secrets of the developer, in the form of API keys, which were stored in an Demal request Drive account.
Vulnerability highlights how the connection of AI models to external systems and data sharing on them increase the potential attack surface for malicious pirates and potentially multiplies the ways where vulnerabilities can be introduced.
“There is nothing that the user must do to be compromised, and there is nothing to do that the user must do so that the data comes out,” the CTO of the Zenity security company told Wired Bargury. “We have shown that it is completely zero click in click; We just need your email, we share the document with you, and that’s it. So yes, it’s very, very bad, ”explains Bargury.
OPENAI did not immediately respond to the request for WIRED comments on the vulnerability of connectors. The company has presented chatgpt connectors as a beta functionality earlier this year, and its website lists at least 17 different services which can be linked to its accounts. It indicates that the system allows you to “provide your tools and data in the Chatppt” and “search for files, draw data live and reference content in the cat”.
Bargury says that he reported that the results in Openai earlier this year and that the company quickly introduced attenuations to prevent the technique it used to extract data via connectors. The functioning of the attack means that only a limited amount of data could be extracted at the same time – proven documents could not be deleted within the framework of the attack.
“Although this problem is not specific to Google, it illustrates why the development of robust protections against rapid injection attacks is important,” said Andy Wen, principal director of safety products management at Google Workspace, indicating the recently improved IA security measures of the company.


