Time/Location TBD
Large Language Models (LLMs) have taken the world by storm. Alongside their vast potential, these models also present unique security challenges. This session will serve as a primer on LLM security and secure LLMOps, introducing key issues and concepts related to the security of LLMs and systems relying on them. For example, we will be looking at issues such as prompt injection, sensitive information disclosure, and issues related to the interaction of LLMs with the “outside world” (e.g., plugins or APIs, RAG, Agentic AI). Of course, we are also going to briefly look at how to red-team LLMs.
This session is based on previous iterations of “A Primer on LLM Security” at Congress and, based on audience feedback, has been extended and developed further.
This session is based on previous iterations of “A Primer on LLM Security” at Congress and, based on audience feedback, has been extended and developed further.
Target Audience and Required Previous Knowledge
This session targets beginners and does not assume (in-depth) knowledge about LLMs. If you have prior experience in LLM security and anticipate insights into the latest developments, this session most likely is not for you. Please note that this session will not be about using LLMs in offensive or defensive cybersecurity.
Learning Objectives
From a learning perspective, after the session, participants will be able to …
- describe what LLMs are and how they fundamentally function.
- describe LLMOps and outline fundamental principles of secure LLMOps.
- describe common security issues related to LLMs and systems relying on LLMs.
- describe what LLM red teaming is.
- perform some basic attacks against LLMs to test them for common issues.
About Me
My Name is Ingo, and I am currently responsible for Digital Education and Educational Technology at the University of Cologne. Relevant to this session, I have a background in computational linguistics and have been working with LLMs for quite some time – also prior to the ChatGPT moment. I am also involved in developing and providing AI infrastructure at scale. Of course, all of this is embedded within a deep interest in cyber- and information security.
Format
The session will be split into a 45-minute talk as well as 15 minutes of discussion. Participants will be provided with the slides as well as some resources for further study.
Technical Requirements
As this will not be a highly hands-on session, there are no technical requirements. If you want to experiment with some of the topics, a device capable of accessing and/or running LLMs is necessary. If you want to “go deeper,” you will need a device – e.g., a laptop, capable of running LLMs locally.
Material
After the session, I will provide all materials, including some selected additional resources. All materials will also be provided via this page.
Ps. This is a slightly updated version of the workshop(s) I gave at previous iterations of Congress.