Schedule

Der Hub wird spätestens Ende Januar archiviert, alle nutzerbezogenen Inhalte, Boards und auch einige Wiki-Seiten werden dabei entfernt. Alle öffentlichen Assemblies, Projekte und Veranstaltungen bleiben. // The hub will be archived by end of January. All user-provided content, boards and several wiki pages will be deleted. All public assemblies, projects and events will remain.
Schedule


















 

Day 2
13:00

13:30

14:00

14:30

15:00

15:30

16:00

16:30

17:00

17:30

18:00

18:30

19:00

19:30

20:00

20:30

21:00

21:30
Agentic ProbLLMs: Exploiting AI Computer-Use and Coding Agents (en)

Johann Rehberger

This talk demonstrates end-to-end prompt injection exploits that compromise agentic systems. Specifically, we will discuss exploits that target computer-use and coding agents, such as Anthropic's Claude Code, GitHub Copilot, Google Jules, Devin AI, ChatGPT Operator, Amazon Q, AWS Kiro, and others. Exploits will impact confidentiality, system integrity, and the future of AI-driven automation, including remote code execution, exfiltration of sensitive information such as access tokens, and even joining Agents to traditional command and control infrastructure. Which are known as "ZombAIs", a term first coined by the presenter as well as long-term prompt injection persistence in AI coding agents. Additionally, we will explore how nation state TTPs such as ClickFix apply to Computer-Use systems and how they can trick AI systems and lead to full system compromise (AI ClickFix). Finally, we will cover current mitigation strategies and forward-looking recommendations and strategic thoughts.

Variable Fonts — It Was Never About File Size (en)

Bernd

A brief history of typographic misbehavior or intended and unintended uses of variable fonts. Nine years after the introduction of variable fonts, their most exciting uses have little to do with what variable fonts originally were intended for and their original promise of smaller file sizes. The talk looks at how designers turned a pragmatic font format into a field for experimentation — from animated typography and uniwidth button text to pattern fonts and typographic side effects with unintended aesthetics. Using examples from projects such as TypoLabs, Marjoree, Kario (the variable font that’s used as part of the 39C3 visual identity), and Bronco, we’ll explore how variable fonts evolved from efficiency tools into creative systems — and why the most interesting ideas often emerge when technology is used in unintended ways.

A Quick Stop at the HostileShop (en)

Mike Perry

HostileShop is a python-based tool for generating prompt injections and jailbreaks against LLM agents. I created HostileShop to see if I could use LLMs to write a framework that generates prompt injections against LLMs, by having LLMs attack other LLMs. It's LLMs all the way down. HostileShop generated prompt injections for a winning submission in OpenAI's GPT-OSS-20B RedTeam Contest. Since then, I have expanded HostileShop to generate injections for the entire LLM frontier, as well as to mutate jailbreaks to bypass prompt filters, adapt to LLM updates, and to give advice on performing injections against other agent systems. In this talk, I will give you an overview of LLM Agent hacking. I will cover LLM context window formats, LLM agents, agent vulnerability surface, and the prompting and efficiency insights that led to the success of HostileShop.

How to render cloud FPGAs useless (en)

Dirk

While FPGA developers usually try to minimize the power consumption of their designs, we approached the problem from the opposite perspective: what is the maximum power consumption that can be achieved or wasted on an FPGA? Short answer: we found that it’s easy to implement oscillators running at 6 GHz that can theoretically dissipate around 20 kW on a large cloud FPGA when driving the signal to all the available resources. It is interesting to note that this power density is not very far away from that of the surface of the sun. However, such power load jump is usually not a problem as it will trigger some protection circuitry. This led us to the next question: would a localized hotspot with such power density damage the chip if we remain within the typical power envelope of a cloud FPGA (~100 W)? While we could not “fry” the chip or induce permanent errors (and we tried several variants), we did observe that a few routing wires aged to become up to 70% slower in just a few days of stressing the chip. This basically means that such an FPGA cannot be rented out to cloud users without risking timing violations. In this talk, we will present how we optimized power wasting, how we measured wire latencies with ps accuracy, how we attacked 100 FPGA cloud instances and how we can protect FPGAs against such DOS attacks.