We invite applications for a fully funded PhD position in Multimodal Artificial Intelligence, focusing on the development of next-generation AI models that integrate diverse data modalities while remaining deployable on resource-constrained hardware platforms. The successful candidate will work on novel multimodal fusion methods, cross-attention architectures, and robust representation learning, with a strong emphasis on hardware awareness, efficient deployment, and model generalization. The project explores how imaging, sensor signals, structured clinical or contextual data, and temporal information can be combined into unified, interpretable, and transferable AI systems. Beyond algorithmic development, the research will investigate how these models behave under domain shift and how they can be distilled into compact architectures suitable for deployment on edge devices.