Learn2Reg 2024


This year's challenge submission is closed.
Results and winners were announced at our workshop at MICCAI on October 6th, 8:00 am-12:30 pm; have a look at the detailed results here.

ReMIND2Reg

LUMIR

COMULISglobe


Presentations of Winning Methods

ReMIND2Reg - Junyi Wang (University of Electronic Science and Technology of China)

LUMIR - Joel Honkamaa (Aalto University)

https://arxiv.org/abs/2303.10211 & https://github.com/honkamj/SITReg

COMULISglobe - Thilo Sentker, Maximilian Nielsen, Frederic Madesta (VROC) (University Medical Center Hamburg-Eppendorf)

COMULISglobe - Marek Wodzinski (lWM) (AGH University of Kraków / HES-SO Valais)

All tasks - next_gen_nn


We are pleased to host the Learn2Reg Challenge again this year in conjunction with the Workshop on Biomedical Image Registration (WBIR) at MICCAI 2024 in Marrakesh!

In summary, L2R 2024 comprises four exciting new sub-tasks: ReMIND2Reg, LUMIR, and the two COMULISglobe tasks SHG-BF and 3D-CLEM (see details below).

We have outlined the timeline for the computational challenges as follows:

- Early May: Training and validation data (without labels/annotations) is released; see the download links below in the sub-task descriptions

- Mid May: Start of the public validation leaderboard (each phase/task is separate); for this, only displacement fields for a limited number of cases need to be submitted (see the sketch after this timeline)

- June 5th, 3:30 pm CEST: Public kick-off with Q&A (find our slides here)

- July 17th: Second public Q&A focusing on the test submission

- July 31st: Validation Leaderboard evaluation for early acceptance and initial results

- September 8th: Deadline for test/algorithm submission (for winning and late acceptance)
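For the validation leaderboard, a submission essentially consists of one dense displacement field per evaluated pair. As a rough illustration only (array layout, orientation, and file naming are assumptions here and must follow the official submission instructions of each sub-task), such a field could be written in Python as:

    # Hypothetical sketch of saving one displacement field as NIfTI.
    # Shapes and naming are assumptions -- follow the official
    # submission instructions of the respective sub-task.
    import numpy as np
    import nibabel as nib

    H, W, D = 256, 256, 256                          # fixed-image grid (task-dependent)
    disp = np.zeros((H, W, D, 3), dtype=np.float32)  # one 3D displacement vector per voxel

    # ... fill `disp` with the displacements predicted by your method ...

    nib.save(nib.Nifti1Image(disp, affine=np.eye(4)),
             "disp_0011_0012.nii.gz")                # hypothetical case naming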

The challenge is closely aligned with WBIR and will jointly publish peer-reviewed proceedings. The timeline for workshop and publication is as follows:

- June 24th: WBIR deadline for full papers (10-15 pages); notification July 15th. Each accepted paper will get at least a teaser oral plus a poster; submission is highly encouraged for all participating teams.

- If you submit to the validation leaderboard, rank among the top 10 for any subtask, and submit a 4-8 page paper (LNCS template) describing your method by July 31st, you will receive early workshop acceptance to present a poster.

- If participation in person is not possible, pre-recorded videos can be sent for the website.

- Sunday, October 6th, 2024: workshop fully in person (no hybrid or online part)

There will be prizes kindly sponsored by SIGBIR, and each participant will receive a colour-printed certificate:

ReMIND2Reg: $250 1st, $150 2nd, $100 3rd

LUMIR: $250 1st, $150 2nd, $100 3rd

COMULISglobe (SHG-BF & 3D-CLEM): $250 1st, $150 2nd, $100 3rd

Post workshop: we aim to publish recorded videos of all methods (plus GitHub links where possible), and all participants will be invited to prepare a short arXiv paper that will be published in a joint Learn2Reg compendium. There are furthermore plans for each subtask to publish a summary journal paper after the challenge, with options to contribute as a co-author.

To best support you in your image registration research for this and future Learn2Reg editions, and to keep you updated on opportunities to participate in, e.g., joint publications, we encourage you to fill out this short Google form containing 5 survey questions.


ReMIND2Reg: Brain Resection Multimodal Registration

(a) Contrast-enhanced T1 and post-resection intra-operative US; (b) T2 and post-resection intra-operative US.

Warning from 9 July 2024: the ultrasound of case ReMIND2Reg_0048 was empty. Version 2.4 of the dataset on Zenodo corrects this issue.

Warning from 29 May 2024: a first version was released that contained FLAIR scans instead of T2 scans; please download the dataset again if you downloaded V1.

Training/Validation: Download 

Context: Surgical resection is the critical first step for treating most brain tumors, and the extent of resection is the major modifiable determinant of patient outcome. Neuronavigation has helped considerably in providing intraoperative guidance to surgeons, allowing them to visualize the location of their surgical instruments relative to the tumor and critical brain structures visible in preoperative MRI. However, the utility of neuronavigation decreases as surgery progresses due to brain shift, which is caused by brain deformation and tissue resection during surgery, leaving surgeons without guidance. To compensate for brain shift, we propose to perform image registration using 3D intraoperative ultrasound.

Objectives: The goal of the ReMIND2Reg challenge task is to register multi-parametric pre-operative MRI and intra-operative 3D ultrasound images. Specifically, we focus on the challenging problem of pre-operative to post-resection registration, requiring the estimation of large deformations and tissue resections. Pre-operative MRI comprises two structural MRI sequences: contrast-enhanced T1-weighted (ceT1) and native T2-weighted (T2). However, not all sequences will be available for all cases. For this reason, developed methods must have the flexibility to leverage either ceT1 or T2 images at inference time. To tackle this challenging registration task, we provide a large non-annotated training set (N=155 pairs US/MR). Model development is performed on annotated validation sets (N=10 pairs US/MR). The final evaluation will be performed on a private test set using Docker (more details will be provided later).
The task is to find one solution for the registration of two pairs of images per patient:

  1. 3D post-resection iUS (fixed) and ceT1 (moving).
  2. 3D post-resection iUS (fixed) and T2 (moving).

Dataset: The ReMIND2Reg dataset is a pre-processed subset of the ReMIND dataset, which contains pre- and intra-operative data collected on consecutive patients who were surgically treated with image-guided tumor resection between 2018 and 2024 at the Brigham and Women’s Hospital (Boston, USA). The training (N=99) and validation (N=5) cases correspond to a subset of the public version of the ReMIND dataset. Specifically, the training set includes images of 99 patients (99 3D iUS, 93 ceT1, and 62 T2), and the validation set includes images of 5 patients (5 3D iUS, 5 ceT1, and 5 T2). The images are paired as described above, with one or two image pairs per patient, resulting in 155 image pairs for training and 10 image pairs for validation. The test cases are not publicly available and will remain private. For details on the image acquisition (scanner details, etc.), please see https://doi.org/10.1101/2023.09.14.23295596

Number of registration pairs: Training: 155, Validation: 10, Test: 40 (TBC).

Rules: Participants are allowed to use external datasets if they are publicly available. The authors should mention that these datasets were used in their method description, including references and links. However, participants are not allowed to use private datasets. Moreover, they cannot exploit manual annotations that were not made publicly available.

Pre-Processing: All images are converted to NIfTI. When more than one pre-operative MR sequence was available, the ceT1 was affinely co-registered to the T2 using NiftyReg; ultrasound images were resampled into the pre-operative MR space. Images were cropped to the field of view of the iUS, yielding an image size of 256x256x256 with a spacing of 0.5x0.5x0.5 mm.
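Since all pairs share this common grid after pre-processing, loading and sanity-checking a case is straightforward. A minimal sketch with nibabel (the file names are hypothetical; use the actual layout of the downloaded dataset):

    # Load one ReMIND2Reg pair and verify the common pre-processed grid.
    # File names are hypothetical -- adapt them to the released dataset.
    import nibabel as nib

    fixed = nib.load("ReMIND2Reg_0001_us.nii.gz")     # post-resection iUS (fixed)
    moving = nib.load("ReMIND2Reg_0001_ceT1.nii.gz")  # ceT1 or T2 (moving)

    assert fixed.shape == moving.shape == (256, 256, 256)
    print(fixed.header.get_zooms())                   # expected: (0.5, 0.5, 0.5)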

Citation: Juvekar, P., Dorent, R., Kögl, F., Torio, E., Barr, C., Rigolo, L., Galvin, C., Jowkar, N., Kazi, A., Haouchine, N., Cheema, H., Navab, N., Pieper, S., Wells, W. M., Bi, W. L., Golby, A., Frisken, S., & Kapur, T. (2023). The Brain Resection Multimodal Imaging Database (ReMIND). Scientific Data. https://doi.org/10.1101/2023.09.14.23295596


LUMIR - Large Scale Unsupervised Brain MRI Image Registration

Training/Validation: Download; Dataset JSON file: Download

Context: The previous Learn2Reg brain MRI challenge inspired advances in learning-based brain registration but revealed a bias introduced by the anatomical label maps used for weak supervision: networks trained only on label maps often produced high Dice scores but non-smooth, unrealistic deformations. This year, we are shifting to an unsupervised learning approach, excluding label maps during training. We are providing over 4,000 preprocessed images from existing publicly available neuroimaging collections, aiming to take a significant step towards developing a foundational model for brain image registration. The task for this challenge is inter-subject T1-weighted brain MRI registration.

Dataset: The image data released includes the OpenBHB dataset, featuring T1-weighted brain MRI scans from 10 different public datasets. Additionally, a portion of the dataset is sourced from the AFIDs project, developed using the OASIS dataset. In line with our focus on unsupervised image registration, only imaging data is provided for training, allowing participants to freely create inter-subject pairs as they see fit (a minimal pairing sketch follows the list below). All images are converted to NIfTI, resampled, and cropped to the region of interest, resulting in an image size of 160x224x192 with a voxel spacing of 1x1x1 mm. The dataset includes:

  • Training images of 3,384 subjects.
  • Validation images of 40 subjects, including 10 with landmarks manually placed by physicians, resulting in 38 image pairs for validation.
  • Test images of 590 subjects. The annotations for the test data include deep brain anatomical label maps and manually placed anatomical landmarks. 
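Since training is fully unsupervised, inter-subject pairs can be generated on the fly, e.g. by random sampling. A minimal pairing sketch (the JSON keys "training" and "image" are assumptions about the dataset JSON file; adapt them to its actual layout):

    # Sample random inter-subject (fixed, moving) training pairs from the
    # dataset JSON. Key names are assumptions -- check the released file.
    import json
    import random

    with open("LUMIR_dataset.json") as f:
        meta = json.load(f)

    images = [entry["image"] for entry in meta["training"]]  # 3,384 subjects

    def sample_pair():
        # Draw two distinct subjects to form one unsupervised pair.
        fixed, moving = random.sample(images, 2)
        return fixed, moving

    print(sample_pair())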

Dataset references include:

  1. Dufumier, Benoit, et al. "Openbhb: a large-scale multi-site brain mri data-set for age prediction and debiasing." NeuroImage 263 (2022): 119637.
  2. Taha, Alaa, et al. "Magnetic resonance imaging datasets with anatomical fiducials for quality control and registration." Scientific Data 10.1 (2023): 449.
  3. Marcus, Daniel S., et al. "Open Access Series of Imaging Studies (OASIS): cross-sectional MRI data in young, middle aged, nondemented, and demented older adults." Journal of cognitive neuroscience 19.9 (2007): 1498-1507. 

Evaluation Metrics: The evaluation will focus on three aspects of the registration model:

  • Segmentation Accuracy: Measured by computing Dice and HD95 (95th-percentile Hausdorff distance), which broadly assess registration accuracy.
  • Landmark Accuracy: Evaluated using target registration error with manually annotated landmarks.
  • Deformation Smoothness: Quantified by non-diffeomorphic volumes (NDV) [4], addressing inherent errors in finite difference-based Jacobian approximation.
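For intuition, the first two metrics reduce to a few lines of numpy. The sketches below are illustrative only and not the official evaluation code (the official scripts are in the baseline repository linked below):

    # Illustrative metric sketches -- use the official challenge scripts
    # for any reported numbers.
    import numpy as np

    def dice(seg_fixed, seg_warped, label):
        # Dice overlap of one anatomical label after warping.
        a, b = seg_fixed == label, seg_warped == label
        return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

    def tre(lms_fixed, lms_warped, spacing=(1.0, 1.0, 1.0)):
        # Mean target registration error in mm for (N, 3) landmark arrays.
        diff = (lms_fixed - lms_warped) * np.asarray(spacing)
        return np.linalg.norm(diff, axis=1).mean()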

Note: For the LUMIR challenge, we are focusing on benchmarking unsupervised image registration across traditional optimization-based and deep learning-based methods. Direct brain label maps, such as those generated from segmentation algorithms, are not permitted for use in training/optimization. However, indirect label maps, such as synthetic labels or the incorporation of a brain atlas as a form of prior knowledge, are permitted. Participants must provide a statement detailing whether any label maps (direct or indirect) or additional knowledge beyond the provided training images were used, along with a description of their method in their submission.

4. Liu, Yihao, et al. "On finite difference Jacobian computation in deformable image registration." International Journal of Computer Vision (2024): 1-11.

Baseline/Templates: For baseline methods, their pretrained weights, and evaluation scripts, please see https://github.com/JHU-MedImage-Reg/LUMIR_L2R

Video Presentation of the Task: https://cloud.imi.uni-luebeck.de/s/tm9DEFiDa9XH35k/download/Learn2Reg_LUMIR_JChen.mp4  


TASK 3: COMULISglobe SHG-BF

Title: Second-harmonic generation (SHG) microscopy and bright-field (BF) microscopy images of cancer tissue

Illustration: an example SHG image and the corresponding BF image.

Training/Validation/Test: Download

Description:

Second-harmonic generation (SHG) microscopy is a non-invasive imaging technique that does not require the use of exogenous labels. This is particularly beneficial for studying live tissues, allowing for real-time observations without introducing artefacts or potential toxicity associated with staining agents. However, SHG imaging gives only partial information, so co-examination of SHG images and traditional bright-field (BF) images of hematoxylin and eosin (H&E) stained tissue is usually required. H&E staining enables differentiation of tissue components, while SHG imaging is particularly sensitive to collagen fibres.

The dataset consists of SHG and H&E-stained BF microscopy images of human breast and pancreatic cancer tissue. Tissues were formalin-fixed and paraffin-embedded, then cut into 5-micrometer-thick slices, affixed to a slide, and stained with H&E before mounting with a coverslip. BF imaging of the pancreatic samples was done with an Aperio CS2 Digital Pathology Scanner (Leica Biosystems) at 40x magnification, while SHG imaging and BF imaging of the breast samples were done with a custom-built integrated SHG/BF imaging system [1].

Images were acquired at the Laboratory for Optical and Computational Instrumentation, Department of Biomedical Engineering, University of Wisconsin-Madison, Madison, WI 53706, USA. 

Number of cases: Training: 156, Validation: 10, Test: 40

Annotation: Manually annotated landmarks in all datasets will be used for validation and testing.

Pre-processing: To alleviate out-of-focal-plane issues due to the unevenness of the tissue slice, three z-planes were captured per SHG image and then maximum-intensity projected to capture the entire axial field of view. 
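The released SHG images are already projected; for reference, the projection itself is just a voxel-wise maximum over the z-planes, e.g. in numpy:

    # Maximum-intensity projection of a (3, H, W) stack of z-planes,
    # as described above (the released images are already projected).
    import numpy as np

    def mip(z_stack):
        return z_stack.max(axis=0)  # (3, H, W) -> (H, W)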

License: Creative Commons Attribution 4.0 International

Citation: 

[1] A. Keikhosravi, B. Li, Y. Liu, and K. W. Eliceiri. Intensity-based registration of bright-field and second-harmonic generation images of histopathology tissue sections. Biomed. Opt. Express, 11(1):160–173, Jan 2020.

[2] K. Eliceiri, B. Li, A. Keikhosravi. Multimodal Biomedical Dataset for Evaluating Registration Methods (Full-Size TMA Cores); 2021. [Data set]. Zenodo. https://doi.org/10.5281/zenodo.4550300.

https://www.comulis.eu/ 


TASK 4: COMULISglobe 3D-CLEM

Title: Cellular-level volume electron microscopy (EM) – light microscopy (LM)

Illustration:

First row: 2D slices of one of the released datasets (raw EM data, raw LM data, EM rigidly registered on the LM data); second row: 3D views of the same volume.

Training/Validation/Test: Download

Description:

Automatic multimodal 3D microscopy image registration is an unsolved problem in image processing. The aim of this first challenge, organized by the COMULISglobe society, is to lay the basis for a recurring challenge. Electron microscopy (EM) 3D image data -- Focused Ion Beam Scanning Electron Microscopy (FIB-SEM) and Serial Block Face Scanning Electron Microscopy (SBF-SEM) -- were captured on the same cell area as light microscopy (LM) 3D image data (super-resolution fluorescence microscopy). The volumes were acquired with variable sizes (~15000 x ~15000 x ~10000 voxels for the isotropic raw EM data and ~2000 x ~2000 x 100 voxels for the anisotropic raw LM data) and fields of view (approx. 75 x 75 x 50 micrometers³). The voxel resolution was constant across datasets (isotropic voxel size of 0.005 micrometers for EM; 0.035 micrometers in xy and 0.13 micrometers in z for LM).

EM data is non-specific and shows all organelles. LM data is specific and is composed of two color channels: the first showing the mitochondria, the second showing the nuclei of the cells.

Number of cases: The total number of datasets is three: two are used for training/validation, and the third for testing. For this challenge, the datasets were cropped into patches: 40 for training/validation, 20 for testing.

Annotation: Manually annotated landmarks in all datasets will be used for validation and testing.

Pre-processing: Common pre-processing to the same voxel resolutions and spatial dimensions, as well as rigid pre-registration, will be provided to ease the use of learning-based algorithms for participants with little prior experience in image registration.
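The released patches therefore already share a common grid; purely as an illustration of the resampling step, one volume can be mapped onto another's grid with SimpleITK (file names are hypothetical):

    # Resample an LM patch onto the grid of the corresponding EM patch.
    # Illustration only -- the released patches are already resampled
    # and rigidly pre-registered.
    import SimpleITK as sitk

    em = sitk.ReadImage("em_patch.nii.gz")  # hypothetical file names
    lm = sitk.ReadImage("lm_patch.nii.gz")

    lm_on_em = sitk.Resample(lm, em, sitk.Transform(),  # identity transform
                             sitk.sitkLinear, 0.0, lm.GetPixelID())
    sitk.WriteImage(lm_on_em, "lm_resampled.nii.gz")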

License: EM data are released under the CC0 license, and LM data and landmarks under the CC-BY-NC 4.0 license.

Citation:

[1] Daniel Krentzel, Matouš Elphick, Marie-Charlotte Domart, Christopher J. Peddie, Romain F. Laine, Ricardo Henriques, Lucy M. Collinson, Martin L. Jones. CLEM-Reg: An automated point cloud based registration algorithm for correlative light and volume electron microscopy. bioRxiv 2023.05.11.540445; doi: https://doi.org/10.1101/2023.05.11.540445

https://www.comulis.eu/