Event
Day 3, 10:00 - 11:00

Large Language Models (LLMs) have taken the world by storm. Alongside their vast potential, these models also present unique security challenges. This session will serve as a primer on LLM security and secure LLMOps, introducing key issues and concepts related to the security of LLMs and of systems relying on them. For example, we will look at prompt injection, sensitive information disclosure, and issues arising from the interaction of LLMs with the “outside world” (e.g., plugins or APIs). Of course, we will also briefly look at how to red-team LLMs.

This session is based on last year’s “A Primer on LLM Security” and, based on audience feedback, has been extended to cover the fundamentals of secure LLMOps.

Target Audience

This session targets beginners and does not assume (in-depth) knowledge of LLMs. Please note that this session will not be about using LLMs in offensive or defensive cybersecurity.

Learning Objectives

From a learning perspective, after the session, participants will be able to …

  • describe what LLMs are and how they fundamentally function.
  • describe LLMOps and outline fundamental principles of secure LLMOps.
  • describe common security issues related to LLMs and systems relying on LLMs.
  • describe what LLM red teaming is.
  • perform some basic attacks against LLMs to test them for common issues.

Format

The session will consist of a 45-minute talk followed by 15 minutes of discussion. Participants will be provided with the slides as well as some resources for further study.

Location

Saal 6