







As large language models (LLMs) and AI agents become embedded in a growing range of applications, they introduce new privacy risks related to sensitive data exposure, unintended memorization, and information extraction. Our research focuses on identifying and mitigating these risks by developing novel privacy-enhancing techniques tailored to LLM-driven systems and AI agents. If you are passionate about machine learning research and motivated to advance privacy protections in LLMs and AI agents, we invite you to contribute to this effort by joining our team as a PhD student!

Join us as a PhD Student - Privacy Enhancing Technologies for LLMs and AI Agents (m/f/d)

Your mission

* Review the state of the art and remain current with advances in privacy for large language models (LLMs) and AI agents, including emerging attack vectors and defense mechanisms.
* Develop novel methods to enhance privacy in LLM- and agent-based systems, addressing risks such as sensitive data leakage, prompt-based extraction, unintended memorization, and privacy in tool use or multi-agent interactions.
* Implement the proposed methods as proof-of-concept prototypes and evaluate them on public and/or industrial datasets, with an emphasis on realistic deployment scenarios.
* Publish research findings at leading scientific conferences and in journals covering machine learning, security, and privacy.
* Collaborate actively with team members on interdisciplinary topics spanning machine learning, agent architectures, and privacy-preserving technologies.