
Salt Security sheds light on security risks of LLMs


Although many organizations have been quick off the mark to start using LLMs like ChatGPT, the security community is still catching up, with many agreeing we still do not understand the full extent of the risks. In this episode, Yaniv Balmas, VP of Research at Salt Security, talks about the company’s research into the security risks of using LLMs, the potential impacts on businesses, and what organizations can do to mitigate these risks. He says, “You first need to know that this is actually vulnerable and risky. I think people don’t really understand that right now.”

Security vulnerabilities in LLM ecosystems, particularly in plugins

  • Balmas discusses Salt Security’s research project to investigate security issues within LLMs, focusing on ChatGPT.
  • Security issues were found in two major areas: within the LLMs themselves and in the LLM ecosystem, particularly in plugins. Salt Security’s research focused mainly on exploring the latter.
  • Balmas explains how plugins were introduced to let LLMs interact with external data sources. Although they extend LLM functionality, they also introduce security risks, such as exposing data or allowing someone to manipulate data in those external sources.
  • Balmas explains how a vulnerability in a ChatGPT plugin could give attackers access to a company’s entire codebase (a simplified sketch of this class of flaw follows this list).
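
To make the plugin risk more concrete, here is a minimal sketch, not Salt Security’s actual proof of concept, of one common class of flaw in plugin authorization flows: an OAuth callback that never verifies the anti-CSRF `state` value it issued. All names in the snippet (install_plugin, handle_oauth_callback, SESSIONS) are hypothetical.

```python
# Hypothetical sketch of a plugin OAuth callback. The vulnerable variant accepts
# any authorization code, so an attacker who tricks a victim into visiting a
# crafted callback URL can bind a code grant to the wrong session -- which is how
# a plugin connected to a code host can end up exposing an entire codebase.
import secrets

SESSIONS = {}  # session_id -> {"state": ..., "repo_token": ...}

def install_plugin(session_id: str) -> str:
    """Start the plugin's OAuth flow and remember the anti-CSRF `state` value."""
    state = secrets.token_urlsafe(16)
    SESSIONS[session_id] = {"state": state, "repo_token": None}
    # The user is redirected to the code host with this state attached.
    return f"https://code-host.example/oauth/authorize?state={state}"

def handle_oauth_callback_vulnerable(session_id: str, code: str, state: str) -> None:
    """VULNERABLE: ignores `state`, so a forged callback is accepted."""
    SESSIONS[session_id]["repo_token"] = exchange_code_for_token(code)

def handle_oauth_callback_fixed(session_id: str, code: str, state: str) -> None:
    """FIXED: only honor the callback if `state` matches the one we issued."""
    expected = SESSIONS.get(session_id, {}).get("state")
    if expected is None or not secrets.compare_digest(expected, state):
        raise PermissionError("OAuth state mismatch: possible forged callback")
    SESSIONS[session_id]["repo_token"] = exchange_code_for_token(code)

def exchange_code_for_token(code: str) -> str:
    """Placeholder for the real token exchange with the code host."""
    return f"token-for-{code}"
```

The fix is deliberately boring: the safeguard lives in the plugin’s authorization plumbing, not in the model itself, which is exactly where Balmas locates most of the risk.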

LLM security risks and potential impact on businesses, with recommendations for mitigation

  • Balmas talks about what organizations can do to mitigate the security risks of LLMs, such as establishing procedures governing which data sources may be connected to LLMs and putting safeguards in place to look for anomalies or suspicious activities.
  • Balmas hopes that Salt Security’s research will help organizations better understand LLM risks and provide different strategies to mitigate them.
  • Responsibility for containing LLM risks varies across organizations, but Balmas feels that accountability should ultimately lie with CISOs.
  • Balmas explains that LLMs use APIs to talk to third-party services, so the problem ultimately comes down to API security. He highlights that most of the vulnerabilities Salt Security found were around authentication and permissions.
  • Salt Security’s specialized protections against anomalies in API requests can detect and prevent attacks in LLM ecosystems (a toy illustration of the idea follows this list).
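
As a rough illustration of what anomaly detection over API traffic means in practice, and not a description of Salt Security’s product logic, the sketch below builds a per-client baseline of which endpoints are normally called and flags requests that fall outside it, such as a plugin token suddenly hitting an export endpoint it has never touched. All identifiers are illustrative.

```python
# Toy per-client baseline: flag calls to endpoints a client has never used
# once it is past a warm-up period. Real behavioral API protection is far
# richer (rates, payloads, auth context), but the shape is the same.
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class ClientBaseline:
    seen_endpoints: set = field(default_factory=set)
    request_count: int = 0

class ApiAnomalyDetector:
    def __init__(self, warmup_requests: int = 100):
        self.warmup_requests = warmup_requests
        self.baselines = defaultdict(ClientBaseline)

    def observe(self, client_id: str, endpoint: str) -> bool:
        """Record a request; return True if it looks anomalous."""
        baseline = self.baselines[client_id]
        anomalous = (
            baseline.request_count >= self.warmup_requests
            and endpoint not in baseline.seen_endpoints
        )
        baseline.seen_endpoints.add(endpoint)
        baseline.request_count += 1
        return anomalous

detector = ApiAnomalyDetector(warmup_requests=3)
for path in ["/repos/list", "/repos/list", "/issues/get", "/admin/export-all"]:
    print(path, "anomalous" if detector.observe("plugin-token-123", path) else "ok")
```

Running the example flags only the final call, the one that departs from the client’s established behavior, which is the kind of signal Balmas describes using to catch misuse of LLM-connected APIs.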

Guest: Yaniv Balmas (LinkedIn)
Company: Salt Security (Twitter)
Show: Let’s Talk

This summary was written by Emily Nicholls.