Automatic Spatiotemporal Alignment of Large-Scale 3D+t Point Clouds


Multidimensional fluorescence microscopy has become a common technique in biology labs worldwide, as it is an invaluable tool to study early embryonic development in space and time (3D+t) at cellular resolution. Automatic segmentation and tracking algorithms are used to extract thousands of cell movement trajectories from potentially terabyte-scale 3D+t image data sets, offering the possibility of a detailed analysis of inter-individual differences. A fundamental problem that remains after obtaining such tracked point clouds, however, is the comparison of individual experiments to confirm biological hypotheses across multiple repeats. The lack of fully automated solutions to this 3D+t alignment problem currently limits whole-embryo analyses to simple specimens, early time points, or manual analyses.

Scientific Questions

The aim of this project is the development of new methods for the automated spatiotemporal alignment of large 3D+t point clouds. As complex organisms usually lack one-to-one cell correspondences that could be used for registration, a fundamental part of the project will be the development of generic descriptors to identify various anatomical regions at different developmental stages, using both classical and machine learning-based approaches. These descriptors will then be used to obtain a spatiotemporal registration of the point clouds. Moreover, synthetic training data for the machine learning approaches will be generated using a comprehensive simulation platform that allows mimicking the embryonic development of different specimens at multiple levels of detail.
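The descriptor-based registration idea can be illustrated with a minimal sketch. The example below is not the project's method: it uses simple hand-crafted, rotation-invariant per-point descriptors (distance to the cloud centroid and mean k-nearest-neighbor distance) as a hypothetical stand-in for the learned anatomical descriptors, matches points by descriptor similarity instead of assuming one-to-one cell correspondences, and estimates a rigid spatial transform with the Kabsch algorithm; the temporal dimension of the alignment is omitted here.

```python
import numpy as np

def descriptors(points, k=5):
    """Rotation- and translation-invariant descriptor per point:
    distance to the cloud centroid and mean distance to the k nearest
    neighbors (a hypothetical stand-in for learned descriptors)."""
    centered = points - points.mean(axis=0)
    d_centroid = np.linalg.norm(centered, axis=1)
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    d_knn = np.sort(dists, axis=1)[:, 1:k + 1].mean(axis=1)
    return np.stack([d_centroid, d_knn], axis=1)

def align(source, target):
    """Match points via descriptor similarity, then estimate the rigid
    transform (R, t) with the Kabsch algorithm so that
    source @ R.T + t approximates the matched target points."""
    ds, dt = descriptors(source), descriptors(target)
    # nearest-descriptor matching: no one-to-one correspondence assumed
    match = np.argmin(
        np.linalg.norm(ds[:, None, :] - dt[None, :, :], axis=2), axis=1)
    matched = target[match]
    src_c = source - source.mean(axis=0)
    tgt_c = matched - matched.mean(axis=0)
    # Kabsch: SVD of the cross-covariance, with reflection correction
    U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = matched.mean(axis=0) - source.mean(axis=0) @ R.T
    return R, t
```

In a real pipeline the descriptors would be learned (or engineered to capture anatomy) and matching would need to be robust to noise and missing cells; this sketch only shows how invariant descriptors can replace explicit correspondences.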


New thesis topics in the area of automatic processing of large-scale point cloud data are advertised regularly. In addition to the general overview, there are also numerous topics that have not yet been advertised; we are happy to present these in a personal conversation.


External Funding

  • DFG Research Grant, “Automatic spatiotemporal alignment of large-scale 3D+t point clouds”, project number 432051322






Zhu Chen, Ina Laube, Johannes Stegmaier
Unsupervised Learning for Feature Extraction and Temporal Alignment of 3D+t Point Clouds of Zebrafish Embryos
In: International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI)


Abin Jose, Rijo Roy, Dennis Eschweiler, Ina Laube, Reza Azad, Daniel Moreno-Andres, Johannes Stegmaier
End-to-end classification of cell-cycle stages with center-cell focus tracker using recurrent neural networks
In: International Conference on Acoustics, Speech and Signal Processing (ICASSP)


Abin Jose, Qi Mei, Dennis Eschweiler, Ina Laube, Johannes Stegmaier
Linear Discriminant Analysis Metric Learning Using Siamese Neural Networks
In: International Conference on Image Processing (ICIP)


Dennis Eschweiler, Ina Laube, Johannes Stegmaier
Spatiotemporal Image Generation for Embryomics Applications
In: Biomedical Image Synthesis and Simulation: Methods and Applications


Philipp Gräbel, Ina Laube, Martina Crysandt, Reinhild Herwartz, Melanie Hoffmann, Barbara M. Klinkhammer, Peter Boor, Tim H. Brümmendorf, Dorit Merhof
Surrounding Cell Suppression for Unsupervised Representation Learning in Hematological Cell Classification
In: IEEE International Symposium on Biomedical Imaging (ISBI)


Philipp Gräbel, Ina Laube, Martina Crysandt, Reinhild Herwartz, Melanie Hoffmann, Barbara M. Klinkhammer, Peter Boor, Tim H. Brümmendorf, Dorit Merhof
Rotation invariance for unsupervised cell representation learning
In: Bildverarbeitung für die Medizin (BVM)