Learn2Reg 2025¶
We are pleased to host the Learn2Reg Challenge again this year at MICCAI 2025 in Daejeon!
In summary, Learn2Reg 2025 will feature two extended sub-tasks: LUMIR 2025 and ReMIND2Reg 2025. The full timeline is available here.
Important Dates:
- End of April: Release of training and validation datasets
- Mid May: Kick-off meeting with live Q&A
- Mid June: Second virtual Q&A session
- End of July: Validation leaderboard ranking announced for early acceptance consideration
- End of August: Final submission deadline
This year, we are also offering the opportunity for invited participants to submit a full description of their methods to a special paper collection hosted by the MELBA journal.
In addition, prizes will be awarded thanks to generous sponsorship from SIGBIR, and each participant will receive a color-printed certificate:
- ReMIND25: 1st – $250, 2nd – $150, 3rd – $100
- LUMIR25: 1st – $250, 2nd – $150, 3rd – $100
ReMIND2Reg 2025¶
Figure: (a) Contrast-enhanced T1 and post-resection intra-operative US; (b) T2 and post-resection intra-operative US.
Training/Validation: Download
Context: Surgical resection is the critical first step for treating most brain tumors, and the extent of resection is the major modifiable determinant of patient outcome. Neuronavigation has helped considerably in providing intraoperative guidance to surgeons, allowing them to visualize the location of their surgical instruments relative to the tumor and critical brain structures visible in preoperative MRI. However, the utility of neuronavigation decreases as surgery progresses due to brain shift, which is caused by brain deformation and tissue resection during surgery, leaving surgeons without guidance. To compensate for brain shift, we propose to perform image registration using 3D intraoperative ultrasound.
Objectives: The goal of the ReMIND2Reg challenge task is to register multi-parametric pre-operative MRI and intra-operative 3D ultrasound (iUS) images. Specifically, we focus on the challenging problem of pre-operative to post-resection registration, which requires estimating large deformations and handling resected tissue. The pre-operative MRI comprises two structural sequences: contrast-enhanced T1-weighted (ceT1) and native T2-weighted (T2). However, not all sequences are available for all cases, so developed methods must be flexible enough to use either ceT1 or T2 images at inference time (see the sketch below the task list). To tackle this challenging registration task, we provide a large non-annotated training set (N=155 US/MR pairs). Model development is performed on an annotated validation set (N=10 US/MR pairs), and the final evaluation will be performed on a private test set using Docker (more details will be provided later). The task is to find one solution for the registration of two image pairs per patient:
1. 3D post-resection iUS (fixed) and ceT1 (moving)
2. 3D post-resection iUS (fixed) and T2 (moving)
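As a rough illustration of the required flexibility, the following minimal Python sketch shows one possible per-case inference loop that registers the fixed iUS to whichever MR sequence is present. The file names (`iUS.nii.gz`, `ceT1.nii.gz`, `T2.nii.gz`) and the `register` function are hypothetical placeholders, not the official Docker submission interface, which will be specified by the organizers.

```python
# Hypothetical per-case inference loop (file names and interface are assumptions,
# not the official submission format).
from pathlib import Path
import nibabel as nib


def register(fixed_img, moving_img):
    """Placeholder for a participant's registration method.

    Expected to return a displacement field defined on the fixed (iUS) grid.
    """
    raise NotImplementedError


def run_case(case_dir: Path) -> dict:
    fixed_img = nib.load(case_dir / "iUS.nii.gz")  # post-resection 3D iUS (fixed)
    results = {}
    for seq in ("ceT1", "T2"):  # not every case provides both MR sequences
        moving_path = case_dir / f"{seq}.nii.gz"
        if moving_path.exists():
            results[seq] = register(fixed_img, nib.load(moving_path))
    return results
```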
Dataset: The ReMIND2Reg dataset is a pre-processed subset of the ReMIND dataset, which contains pre- and intra-operative data collected from consecutive patients who were surgically treated with image-guided tumor resection between 2018 and 2024 at the Brigham and Women’s Hospital (Boston, USA). The training (N=99) and validation (N=5) cases correspond to a subset of the public version of the ReMIND dataset. Specifically, the training set contains images from 99 patients (99 3D iUS, 93 ceT1, and 62 T2), and the validation set contains images from 5 patients (5 3D iUS, 5 ceT1, and 5 T2). The images are paired as described above, with one or two image pairs per patient, resulting in 155 training pairs and 10 validation pairs. The test cases are not publicly available and will remain private. For details on the image acquisition (scanner details, etc.), please see https://doi.org/10.1101/2023.09.14.23295596
Number of registration pairs: Training: 155, Validation: 10, Test: 40 (TBC).
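For readers who want to double-check these counts, the small sketch below reproduces them from the per-patient sequence availability stated in the dataset description (numbers taken directly from the text above):

```python
# Pair counts follow from per-patient MR availability: every available ceT1 and
# every available T2 forms one pair with that patient's iUS volume.
training = {"iUS": 99, "ceT1": 93, "T2": 62}
validation = {"iUS": 5, "ceT1": 5, "T2": 5}

def n_pairs(seq_counts):
    return seq_counts["ceT1"] + seq_counts["T2"]

print(n_pairs(training))    # 155 training pairs
print(n_pairs(validation))  # 10 validation pairs
```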
Rules: Participants may use external datasets, provided they are publicly available; any such dataset must be reported in the method description, including references and links. Participants are not allowed to use private datasets, and they may not exploit manual annotations that have not been made publicly available.
Pre-Processing: All images were converted to NIfTI. When more than one pre-operative MR sequence was available, the ceT1 was affinely co-registered to the T2 using NiftyReg, and the ultrasound images were resampled to the pre-operative MR space. Images were cropped to the field of view of the iUS, resulting in an image size of 256×256×256 voxels with a spacing of 0.5×0.5×0.5 mm.
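A minimal sketch, assuming SimpleITK is installed, for verifying that a downloaded volume matches this pre-processing; the file name is only an example, not a prescribed naming convention:

```python
# Check that an image has the stated geometry: 256x256x256 voxels, 0.5 mm isotropic.
import SimpleITK as sitk

img = sitk.ReadImage("ReMIND2Reg_0001_0000.nii.gz")  # example file name, not prescriptive
assert img.GetSize() == (256, 256, 256), img.GetSize()
assert all(abs(s - 0.5) < 1e-3 for s in img.GetSpacing()), img.GetSpacing()
print("Geometry matches the challenge pre-processing.")
```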
Citation: Juvekar, P., et al. (2023). The Brain Resection Multimodal Imaging Database (ReMIND). Scientific Data. https://doi.org/10.1101/2023.09.14.23295596
LUMIR 2025¶
GitHub: https://github.com/JHU-MedImage-Reg/LUMIR_L2R
Training/Validation: Download; Dataset JSON file: Download
Context: TBA
Objectives: TBA
Citation: TBA