Third-Party ChatGPT Plugins Could Lead to Account Takeovers

Researchers have discovered vulnerabilities in third-party plugins for OpenAI's ChatGPT that attackers could exploit to gain unauthorized access to sensitive data. Salt Labs published research revealing security flaws in ChatGPT and its plugin ecosystem that could allow attackers to install malicious plugins without user consent and take over accounts on third-party platforms such as GitHub. ChatGPT plugins are add-ons that extend the chatbot's functionality, but they can also introduce risk. OpenAI has since introduced GPTs, specialized versions of ChatGPT, to reduce dependence on third-party services; as of March 19, 2024, ChatGPT users can no longer install new plugins or start conversations with existing ones.

One of the vulnerabilities involves exploiting the OAuth workflow to trick users into installing malicious plugins without their approval. Such a plugin could then intercept and exfiltrate conversation data, putting proprietary information at risk.
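A standard defense against this kind of OAuth flow manipulation is binding each authorization attempt to an unguessable, single-use `state` value (per RFC 6749) and rejecting any callback whose state does not match. The sketch below is illustrative, not code from the Salt Labs report; the in-memory `_pending_states` store and the function names are hypothetical stand-ins for a real session store:

```python
import hmac
import secrets

# Hypothetical in-memory store mapping a user session to the state
# value issued for its pending OAuth flow. A production service would
# keep this server-side (e.g. in the session record), never in the URL alone.
_pending_states: dict[str, str] = {}

def begin_oauth_flow(session_id: str) -> str:
    """Issue an unguessable per-flow state value and remember it."""
    state = secrets.token_urlsafe(32)
    _pending_states[session_id] = state
    return state  # embedded in the authorization redirect URL

def handle_callback(session_id: str, returned_state: str) -> bool:
    """Accept the OAuth callback only if its state matches the one we issued."""
    expected = _pending_states.pop(session_id, None)  # single-use
    if expected is None:
        return False
    # Constant-time comparison avoids leaking match progress via timing.
    return hmac.compare_digest(expected, returned_state)
```

Because the state is popped on first use, a replayed or attacker-forged callback fails even if the original value later leaks.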

Security Officer Comments:
Salt Labs researchers also identified issues in PluginLab that could facilitate zero-click account takeover attacks, enabling threat actors to control accounts on third-party platforms like GitHub and access sensitive data. Additionally, flaws were found in several other plugins that could allow attackers to steal plugin credentials by manipulating OAuth redirection. These findings follow previous vulnerabilities in ChatGPT, including cross-site scripting flaws detailed by Imperva. In December 2023, a researcher demonstrated how custom GPTs could be used for phishing attacks.

Suggested Corrections:
To counteract token-length side-channel attacks, a related class of attacks in which an eavesdropper infers response content from the sizes of streamed packets, it's recommended that companies developing AI assistants apply random padding to obscure the actual length of tokens, transmit tokens in larger groups rather than individually, and send complete responses at once rather than token by token.
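The batching and padding recommendations can be sketched in a few lines. This is a minimal illustration of the idea, not a vendor implementation; the batch size and block size are arbitrary illustrative values, and a single trailing length byte stands in for a real padding scheme:

```python
from typing import Iterable, Iterator

BATCH = 8    # send tokens in groups rather than one at a time (illustrative value)
BLOCK = 128  # pad each wire message to a multiple of this many bytes (illustrative)

def _pad(data: bytes, block: int = BLOCK) -> bytes:
    """Pad to a block multiple; the final byte records the padding length."""
    pad_len = block - (len(data) % block)  # always 1..block bytes of padding
    return data + b"\x00" * (pad_len - 1) + bytes([pad_len])

def unpad(data: bytes) -> bytes:
    """Strip the padding added by _pad."""
    return data[: len(data) - data[-1]]

def batch_and_pad(tokens: Iterable[str]) -> Iterator[bytes]:
    """Group streamed tokens and pad each group, so observed packet sizes
    no longer reveal the length of any individual token."""
    buf: list[str] = []
    for tok in tokens:
        buf.append(tok)
        if len(buf) == BATCH:
            yield _pad("".join(buf).encode("utf-8"))
            buf.clear()
    if buf:  # flush the final partial group
        yield _pad("".join(buf).encode("utf-8"))
```

Every emitted message is a fixed multiple of `BLOCK` bytes, so an on-path observer sees uniform packet sizes instead of per-token lengths; sending the whole response at once is the limiting case of a single large batch.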