PostDoc Position – Physical AI Research


As a PostDoc researcher and co-founder of the Physical AI group, you will develop methods that enable robots to learn from
demonstrations, corrections, and autonomous experience, and you will deploy these methods in real-world settings.

  • Vision-Language-Action (VLA) Models: Design and implement VLA architectures integrating vision, language, and action for dexterous manipulation, building on large pre-trained vision-language backbones (e.g., 5B-parameter VLMs).
  • Reinforcement Learning from Experience: Develop RL pipelines (offline RL, advantage-conditioned policies) that enable robots to move beyond pure imitation and achieve human-level or superhuman robustness through autonomous experience.
  • Long-Horizon Task Mastery: Investigate credit assignment across extended tasks via learned value functions, enabling robots to detect and correct compounding errors in complex real-world scenarios.
  • Sim-to-Real Transfer & World Models: Bridge simulation and deployment using world models, self-supervised representations (JEPA, DINOv3), and transfer techniques for robust generalization.
  • Medical & Clinical Robotics: Partner with imaging and clinical groups to apply Physical AI in healthcare robotics, combining LFB’s sensor expertise with embodied intelligence.

Candidate Profile:

  • PhD in Computer Science, Electrical Engineering, Robotics, Physics, or a related field
  • Strong background in deep learning, with experience in reinforcement learning and imitation learning
  • Hands-on experience with PyTorch and large-scale model training; familiarity with VLA or foundation model architectures is a strong advantage
  • Publication record in top-tier venues (NeurIPS, ICML, ICLR, CoRL, ICRA, or equivalent)
  • Drive to work at the intersection of Physical AI, embodied intelligence, and real-world deployment
  • Excellent communication skills in English; German is advantageous but not required

PhD Position: AI for Automated Surgical Planning


Your Research Impact
As a PhD researcher, you will develop AI methods that automate surgical planning.

  • 3D Anatomical Intelligence
    Develop representation learning approaches based on DINOv2 for high-resolution CT and CBCT/DVT data to
    capture complex anatomical structures.
  • Self-Supervised Learning
    Design learning strategies that leverage large volumes of unlabeled medical imaging data.
  • Multimodal Surgical Reasoning
    Combine visual models with multimodal language models to translate clinical descriptions into surgical
    planning instructions.
  • Automated Planning
    Integrate AI representations with geometric algorithms to propose osteotomy planes and surgical plans.

Candidate Profile:

  • Excellent Master’s degree in Computer Science, Physics, Biomedical Engineering, or a related field
  • Strong background in AI/ML and experience with Python/PyTorch
  • Interest in self-supervised learning or vision transformer models
  • Motivation to work on interdisciplinary medical AI challenges

Student Assistant (HiWi) Openings

We are continuously hiring student assistants (HiWis) and would be happy to hear from you; feel free to get in touch at any time!

Student Assistant (m/f/d) in Video Signal Processing
Contact: Mathias Wien