Event

Large Language Models (LLMs) have taken the world by storm. Alongside their vast potential, these models also present unique security challenges. This session will serve as a primer on LLM security, introducing key issues and concepts related to the security of LLMs and systems relying on them. For example, we will be looking at issues such as prompt injection, sensitive information disclosure, and issues related to the use of plugins. Of course, we are also going to look at how to red-team LLMs.

Target Audience

This session targets beginners and does not assume (in-depth) knowledge about LLMs. Please note that this session will not be about using LLMs in offensive or defensive cybersecurity.

Learning Objectives

From a learning perspective, after the session, participants will be able to …

  • describe what LLMs are and how they fundamentally function.
  • describe common security issues related to LLMs and systems relying on LLMs.
  • describe what LLM red teaming is.
  • perform some basic attacks against LLMs to test them for common issues.

Format

The session will be split into a 30-minute introductory talk as well as 15 minutes of discussion. Participants will be provided with the slides as well as some resources for further study.

Material

Ps. I would highly recommend attending Johan Rehberger’s Talk "NEW IMPORTANT INSTRUCTIONS" in Saal 1 after this session.

Assembly