The new MorphoSphere project develops an advanced data management architecture for the distributed, smart, and interactive storage and analysis of series of large 3D tomographic datasets. It integrates large-scale imaging facilities (PETRA III at DESY, the KIT Light Source), high-performance computing and data facilities at DESY, KIT, and the University of Heidelberg, and scientific communities to optimize data transfer, processing, and visualization, thereby accelerating scientific discovery across diverse research domains.

The exponential growth of imaging data at large-scale imaging facilities, such as synchrotrons, has created major challenges in data handling, analysis, and accessibility. Traditional methods struggle to process petabyte-scale datasets efficiently, limiting researchers' ability to extract meaningful insights from complex 3D imaging experiments. MorphoSphere addresses this challenge by merging distributed computing, data federation, and artificial intelligence to enable interactive, scalable, and intelligent analysis workflows. This approach reflects a broader shift in scientific research toward data-centric infrastructures that integrate high-performance computing with machine learning, fostering interdisciplinary collaboration and open data practices across scientific communities.
Laboratorium für Applikationen der Synchrotronstrahlung (LAS)
You will develop a modular, flexible, and extensible system for large-scale 3D image processing (around 30 GB of input data per dataset, thousands of datasets per week). The system will compose image processing pipelines in a graph-like manner and ensure that all available resources (GPUs in a multi-GPU server, as well as multiple such servers) are used to execute them efficiently. Both classical image processing algorithms and machine learning-based methods can serve as nodes in such a graph, and the system will be usable from multiple programming languages (C, Python). You will also help implement the nodes, especially algorithms for 3D image reconstruction, including typical pre- and post-processing steps, and you will optimize the system for different hardware platforms. Your work will enable scientists to rapidly design, modify, and execute complex image processing pipelines, making advanced image analysis more accessible and efficient.

Main Tasks

- Develop and assemble a plugin-based system for large-scale image processing pipelines with cross-technology support (CPU, GPU, multi-node systems), focusing on zero-copy data paths, efficient utilization of high-speed interconnects, and optimal scalability across multiple GPUs and GPU nodes
- Integrate and implement image processing algorithms, particularly for 3D imaging
- Optimize system performance for the computing infrastructures at partner institutions
- Implement high-level pipeline creation in Python and a visual programming tool, and integrate them into an online interactive platform
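To illustrate the kind of graph-based pipeline composition described above, here is a minimal sketch in Python. It is not the MorphoSphere API; all class and function names (`Node`, `Pipeline`, the toy processing steps) are hypothetical, and plain numbers stand in for 3D volumes. It only shows the idea of declaring processing steps as nodes with named inputs and executing them in dependency order.

```python
# Hypothetical sketch of a graph-based processing pipeline; not the
# MorphoSphere API. Nodes declare named inputs, and the pipeline runs
# them in dependency order (a simple topological walk).
from typing import Callable, Dict, List


class Node:
    """One processing step: `fn` maps its inputs to one output value."""

    def __init__(self, name: str, fn: Callable, inputs: List[str]):
        self.name, self.fn, self.inputs = name, fn, list(inputs)


class Pipeline:
    """Holds nodes and executes them once all their inputs are available."""

    def __init__(self):
        self.nodes: Dict[str, Node] = {}

    def add(self, name: str, fn: Callable, inputs: List[str] = ()) -> None:
        self.nodes[name] = Node(name, fn, inputs)

    def run(self, **sources):
        results = dict(sources)          # external inputs (e.g. raw data)
        pending = dict(self.nodes)
        while pending:
            # Nodes whose inputs are all computed can run now.
            ready = [n for n in pending.values()
                     if all(i in results for i in n.inputs)]
            if not ready:
                raise ValueError("cycle or missing input in pipeline graph")
            for node in ready:
                results[node.name] = node.fn(*(results[i] for i in node.inputs))
                del pending[node.name]
        return results


# Usage: a toy "preprocess -> reconstruct" chain on a scalar instead of a volume.
p = Pipeline()
p.add("flatfield", lambda x: x - 1, inputs=["raw"])
p.add("reconstruct", lambda x: x * 2, inputs=["flatfield"])
out = p.run(raw=10)
print(out["reconstruct"])  # 18
```

In a production system each node would wrap a CPU or GPU kernel (classical or ML-based) and the scheduler would place ready nodes on available devices; the graph abstraction itself stays this small.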
Salary
Salary category 13 TV-L, depending on the fulfillment of professional and personal requirements.
Contract duration
3 years
Application deadline
03.04.2026
Contact person in line-management
For further information, please contact Prof. Dr. Tilo Baumbach, Email: tilo.baumbach@kit.edu.