Learn2Reg 2021

Motivation: Standardised benchmark for the best conventional and learning-based medical registration methods:

  • Analyse accuracy, robustness and speed on complementary tasks for clinical impact. 
  • Remove entry barriers for new teams with expertise in deep learning but not necessarily registration.

Scope: The second edition of the Learn2Reg challenge provides pre-processed data (resampled, cropped, pre-aligned, etc.) for

  1. Intra-patient multimodal abdominal MRI and CT registration (122 scans in total, part of them unpaired) for diagnostic and follow-up.
  2. Intra-patient large deformation lung CT registration (20 training pairs, 10 test pairs, all inspiration / expiration) for lung ventilation estimation.
  3. Inter-patient large scale brain MRI registration (>400 unpaired training scans, ~100 test scans) for shape analysis.

Learn2Reg removes pitfalls for learning and applying transformations by providing:

  • Python evaluation code for voxel displacement fields and open-source code for all evaluation metrics
  • anatomical segmentation labels, manual landmarks, masks and keypoint correspondences for deep learning

Learn2Reg addresses four of the most pressing challenges of medical image registration:

  • learning from relatively small datasets
  • estimating large deformations
  • dealing with multi-modal scans
  • learning from noisy annotations

Evaluation: Comprehensive and fair evaluation criteria that include:

  • Dice / surface distance and TRE to measure accuracy and robustness of transferring anatomical annotations
  • standard deviation and extreme values of Jacobian determinant to promote plausible deformations,
  • low computation time for easier clinical translation evaluated using docker containers on GPUs provided by organisers.

Organisers / Contact: The full list of organisers can be found in the proposal document below. For any practical questions please contact Adrian Dalca, Alessa Hering, Lasse Hansen and Mattias Heinrich at learn2reg@gmail.com. See the full MICCAI Learn2Reg proposal here

       

Timeline

Submission deadlines refer to 12 PM UTC on respective dates.

early May 2021: all training scans available for download and validation submission opens

12th August 2021: 3-5pm CET help session https://us06web.zoom.us/j/84586446599

20th August 2021: Snapshot evaluation on validation data (optional); the best 5 submissions (in bold, + 2 individual task runner-ups) will be invited to give a talk during the virtual Learn2Reg workshop. Further teams may be selected for oral presentation based on the final scores.

(tbc) 6th September 2021: release of test scans

13th September 2021: submission of docker that computes displacement fields for test scans (deadline slightly extended to 13th September AoE!)

20th September 2021: release of the results

31st October 2021: Submission of LNCS paper (up to 4 pages)

Datasets:

The challenge is subdivided into 3 tasks:

  1. CT-MR thorax-abdomen intra-patient registration
  2. CT lung inspiration-expiration registration
  3. MR whole brain

Test submission:

Please contact us at learn2reg@gmail.com

Task 1: CT-MR thorax-abdomen intra-patient registration


Test dataset: Download (fixed images: MR; moving images CT)

Training/Validation: Download TCIA MR/CT  (fixed images: MR; moving images: CT)

with additional auxiliary data: Download BCV CT (Task3 L2R'20) and Download CHAOS MR

coarse ROI masks: Download TCIA, BCV, CHAOS

TCIA Subject IDs Training/Validation: TCGA-B8-5158 (0002), TCGA-B8-5545 (0004), TCGA-B8-5551 (0006), TCGA-BP-5006 (0008), TCGA-DD-A1EI (0010), TCGA-DD-A4NJ (0012), TCGA-G7-7502 (0014), TCGA-G7-A8LC (0016)

Validation Cases: pairs_val.csv

Size: 122 CT/MR scans (16 CT-MR scan pairs (8 Training, 8 Test) + 90 unpaired CT/MR scans)

Source: TCIA, BCV, CHAOS

Challenge: Multimodal registration. Learning from few/noisy annotations. Learning with domain gaps.

Annotation on training data: Manual and automatic segmentations of different organs.

Annotation on test data: Manual segmentations of different organs.

Citation/Licence: Readme TCIA, Readme BCV, Readme CHAOS

Task 2: CT lung inspiration-expiration registration


Test: Download | pairs_val.csv

Training/Validation: Download Images, Download Keypoints (fixed images: expiration; moving images: inspiration)

Validation Cases: pairs_val.csv

Size: 30 3D volumes (20 Training + 10 Test)

Source: Department of Radiology at the Radboud University Medical Center, Nijmegen, The Netherlands.

Challenges: Estimating large breathing motion. The lungs are not fully visible in the expiration scans.

Annotation on training data: automatic lung segmentation + keypoints

Annotation on test data: manual landmarks

Citation: Hering, Alessa, Murphy, Keelin, & van Ginneken, Bram. (2020). Learn2Reg Challenge: CT Lung Registration - Training Data [Data set]. Zenodo. http://doi.org/10.5281/zenodo.3835682;

Task 3: MR whole brain

 

Test: Download, pairs_val.csv

Training/Validation: Full Dataset /  Validation / Validation (skull stripped)      

Validation Cases: pairs_val.csv

Size: 416 3D MR scans

Source: OASIS dataset

Challenge: high-precision alignment of small structures of variable shape and size in mono-modal MR images across different patients.

Annotation on training data: automatic segmentations generated with FreeSurfer and SAMSEG for the neurite package.

Annotation on test data: automatic segmentations generated with FreeSurfer and SAMSEG for the neurite package.

Citation: Open Access Series of Imaging Studies (OASIS): Cross-Sectional MRI Data in Young, Middle Aged, Nondemented, and Demented Older Adults.
Marcus DS, Wang TH, Parker J, Csernansky JG, Morris JC, Buckner RL.
Journal of Cognitive Neuroscience, 19, 1498-1507.

HyperMorph: Amortized Hyperparameter Learning for Image Registration.
Hoopes A, Hoffmann M, Fischl B, Guttag J, Dalca AV.
IPMI 2021.

Detailed Description: These data were prepared by Andrew Hoopes and Adrian V. Dalca for the HyperMorph paper above. If you use this collection, please cite the publications above and refer to the OASIS Data Use Agreement. Evaluation for this challenge is performed on the images resampled into the affinely-aligned, common template space (feel free to use any version, aligned or non-aligned, raw or corrected, for tuning/training your algorithm)!


Submission

The Learn2Reg challenge has an automatic evaluation system for validation scans running on grand-challenge.org. You can submit your deformation fields as a zip file on the individual submission pages, and results for each task will be published on the respective validation leaderboards (note that these do not reflect the final ranking, as test scans are different and ranks will be computed based on significance, weighted scores, etc.). Docker submissions have to be sent as download links to learn2reg@gmail.com. Test set deformation fields can also be sent as download links via mail (note that no results will be published before the challenge deadlines).

Submission Format

Submissions must be uploaded as a zip file containing displacement fields (displacements only; the identity grid is added during evaluation) for all validation pairs within a task. You can find the validation pairs for each task as CSV files on the Datasets page. The displacement field convention follows scipy's map_coordinates() function, i.e. fields are expected in the format [[x, y, z], X, Y, Z], where x, y, z are voxel displacements and X, Y, Z the image dimensions. The evaluation script expects .npz files in half-precision format ('float16') with shapes 3x96x80x96 for task 1, 3x96x96x104 for task 2 and 3x80x96x112 for task 3 (all half resolution). The file structure of individual submissions should look as follows:

Please note that due to our deprecated submission structure you have to add a dummy file to the top level, otherwise your submission might fail.

The first four digits represent the case id of the fixed image (as specified in the corresponding pairs_val.csv) with leading zeros, the second four digits represent the case id of the moving image. For the paired registration tasks the fixed and moving image are defined as MR and CT (task 1) and exhale and inhale scan (task 2) respectively. Note that in conventional lung registration tasks the exhale scan is registered to the inhale scan. However, in this dataset the field-of-view for the exhale scan is partially cropped which leads to missing correspondences in the inhale scan. For task 3 (MR whole brain) evaluation is performed on the images that are resampled into the affinely-aligned, common template space. You may have a look at exemplary submissions (zero deformation fields) here: task 1 (zip), task 2 (zip), task 3 (zip). If you have any problems with your submissions or find errors in the evaluation code (see below), please contact Adrian Dalca, Alessa Hering, Lasse Hansen and Mattias Heinrich at learn2reg@gmail.com.
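As a minimal illustration of the convention above, the sketch below warps a moving image with a displacement field via scipy's map_coordinates() (identity grid plus displacement) and saves a half-resolution, half-precision field. The random image content, the striding-based downsampling and the filename disp_0011_0012.npz are illustrative assumptions; only the shapes and the float16 .npz format follow the text above.

```python
import numpy as np
from scipy.ndimage import map_coordinates

X, Y, Z = 192, 160, 192                              # full resolution (task 1)
moving = np.random.rand(X, Y, Z).astype(np.float32)  # placeholder image
disp = np.zeros((3, X, Y, Z), dtype=np.float32)      # zero field = identity warp

# map_coordinates() samples the moving image at identity grid + displacement
grid = np.mgrid[:X, :Y, :Z].astype(np.float32)
warped = map_coordinates(moving, grid + disp, order=1)

# the evaluation expects half-resolution float16 fields, e.g. 3x96x80x96 (task 1)
disp_half = disp[:, ::2, ::2, ::2].astype(np.float16)
np.savez_compressed('disp_0011_0012.npz', disp_half)  # fixed id 0011, moving id 0012
```

With a zero field the warp reproduces the moving image exactly, which is a quick sanity check for the coordinate convention.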

Note for PyTorch users: When using PyTorch as your deep learning framework you will most likely transform your images with the grid_sample() routine. Please be aware that this function uses a different convention than ours, expecting displacement fields in the format [X, Y, Z, [z, y, x]] and normalized coordinates between -1 and 1. Prior to submission you should therefore convert your displacement fields to match our convention (see above).
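One possible conversion, sketched here in plain NumPy: the hypothetical helper grid_to_submission() un-normalises a grid_sample-style sampling grid and subtracts the identity grid to obtain voxel displacements in the challenge format. It assumes align_corners=True-style normalisation (-1 maps to voxel 0, +1 to voxel size-1); adapt accordingly if you sample with align_corners=False.

```python
import numpy as np

def grid_to_submission(grid_norm):
    """Convert a grid_sample-style grid (shape [X, Y, Z, 3], channels in
    (z, y, x) order, coordinates in [-1, 1]) to challenge format [3, X, Y, Z]
    of voxel displacements. Assumes align_corners=True normalisation."""
    X, Y, Z = grid_norm.shape[:3]
    coords = np.empty((3, X, Y, Z), dtype=np.float32)
    coords[0] = (grid_norm[..., 2] + 1) * (X - 1) / 2  # x: last channel
    coords[1] = (grid_norm[..., 1] + 1) * (Y - 1) / 2  # y: middle channel
    coords[2] = (grid_norm[..., 0] + 1) * (Z - 1) / 2  # z: first channel
    identity = np.mgrid[:X, :Y, :Z].astype(np.float32)
    return coords - identity  # displacements only; identity grid removed
```

Feeding in the identity sampling grid should yield an all-zero displacement field, which is an easy way to verify the axis ordering on your own data.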

Metrics and Evaluation

Since registration is an ill-posed problem, the following metrics will be used to determine per-case ranks between all participants:

  1. TRE: target registration error of landmarks (Task 2)
  2. DSC: dice similarity coefficient of segmentations (Tasks 1, 3)
  3. DSC30: robustness score (30% lowest DSC of all cases)  (Tasks 1, 3)
  4. HD95: 95% percentile of Hausdorff distance of segmentations (Tasks 1, 3)
  5. SDlogJ: standard deviation of log Jacobian determinant of the deformation field (Tasks 1, 2, 3)

DSC measures accuracy; HD95 measures reliability. Outliers are penalised with the robustness score (DSC30: mean of the 30% lowest per-case DSC values). The smoothness of transformations (SD of the log Jacobian determinant) is important in registration, see the references by Kabus and Leow below. For the final evaluation on the test sets, all metrics but robustness (DSC30) use the mean rank per case (ranks are normalised to between 0.1 and 1, higher being better). For multi-label tasks the ranks are computed per structure and then averaged. As done in the Medical Segmentation Decathlon we will employ "significant ranks": http://medicaldecathlon.com/files/MSD-Ranking-scheme.pdf. Across all metrics an overall score is aggregated using the geometric mean, which encourages consistency across criteria. Missing results will be awarded the lowest rank (potentially shared and averaged across teams). For further insight into the metrics and evaluation routines we provide the evaluation scripts for the individual tasks: task 1, task 2, task 3.
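To make the metric definitions concrete, here is a small NumPy sketch of per-label Dice, the DSC30 robustness score and SDlogJ. These are simplified stand-ins written for illustration, not the official evaluation scripts linked above.

```python
import numpy as np

def dice(seg_fixed, seg_warped, label):
    """Dice similarity coefficient for one anatomical label."""
    a, b = seg_fixed == label, seg_warped == label
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def dsc30(case_scores):
    """Robustness score: mean over the 30% lowest per-case Dice scores."""
    s = np.sort(np.asarray(case_scores, dtype=float))
    k = max(1, int(np.ceil(0.3 * s.size)))
    return s[:k].mean()

def sd_log_jacobian(disp):
    """SD of log Jacobian determinant; disp has shape (3, X, Y, Z) in voxels."""
    # finite-difference Jacobian of phi = identity + disp, entry [i, j] = dD_i/dx_j
    J = np.stack([np.stack(np.gradient(disp[i], axis=(0, 1, 2)), axis=0)
                  for i in range(3)], axis=0)
    J = J + np.eye(3)[:, :, None, None, None]              # add identity part
    det = np.linalg.det(np.moveaxis(J, (0, 1), (-2, -1)))  # per-voxel |J|
    return np.log(np.clip(det, 1e-9, None)).std()          # clip guards folding
```

For a zero displacement field the Jacobian determinant is 1 everywhere, so SDlogJ is 0; larger values indicate less smooth (and, for negative determinants, folding) transformations.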

References:

  • AD Leow, et al.: "Statistical properties of Jacobian maps and the realization of unbiased large-deformation nonlinear image registration" TMI 2007
  • S Kabus, et al.: "Evaluation of 4D-CT Lung Registration" MICCAI 2009

Submission (Docker)

For a comparison of registration algorithms regarding inference time/runtime, participants may submit their methods as docker containers (and thus gain computation bonus points). All submitted docker containers will run on the same hardware. We provide a docker container that you can extend or modify to prepare your submission. All necessary files can be downloaded from here. The input and output file structure may also be inferred from the exemplary docker container and should not be altered. To run and test your docker submission locally, you may download the training/validation datasets for the task(s) you work on and extract them together with the (validation) pairs_val.csv files in the corresponding test directories (test/task_01/, test/task_02/, ...). For the test cases we will only alter the pairs_val.csv files and test images. If you only work on specific tasks, your docker should only output displacement fields for those tasks. To build and run the provided container use the following two commands:

docker build -t l2r_submission .
docker run --mount src="$(pwd)/test",target=/l2r/test,type=bind,readonly --mount src="$(pwd)/submission",target=/l2r/submission,type=bind l2r_submission

Runtime computation

The runtime of your algorithm will be measured for each registration pair as the time difference between the first access of the test data (fixed image, moving image, masks, etc.) and the moment the displacement field is written to disk. Keep this in mind when preparing your submission and avoid unnecessary computation (e.g. GPU initialization of your deep learning framework) during this interval.
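The timing rule above suggests keeping all one-off setup (framework and GPU initialisation, model loading) outside the measured interval. A hypothetical sketch of the pattern; register_case() and the model callable are illustrative, not part of the challenge API:

```python
import time
import numpy as np

def register_case(fixed_path, moving_path, out_path, model):
    """Register one pair; the measured runtime starts only at data access."""
    t0 = time.perf_counter()               # clock starts: first access of test data
    fixed = np.load(fixed_path)
    moving = np.load(moving_path)
    disp = model(fixed, moving)            # illustrative registration call
    np.savez_compressed(out_path, disp.astype(np.float16))
    return time.perf_counter() - t0        # runtime charged to this pair
```

Anything expensive that does not depend on the specific pair (loading network weights, warming up CUDA kernels) should happen once, before the first call to a function like this.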

Docker Upload

Please upload your docker or displacement fields here: https://cloud.imi.uni-luebeck.de/s/7M3DJ33rsbsnzsZ with your team name or grand-challenge name in the file name.

Within the L2R Challenge the submitted docker with your algorithm will be evaluated on a local NVIDIA server of the Universität zu Lübeck. The evaluation will only be used to evaluate the Challenge data and will not be used for any other purpose. In particular, it will not be used in, on, or for human beings and/or for therapeutic, diagnostic, or other medical purposes, nor will it be used in connection with or brought to market with any medical device. The submission does not give rise to any liability or warranty claims against the submitter and the challenge organizers.

Workshop

Workshop Schedule

All times are given in UTC+2. Pre-recorded videos are available at MICCAI platform pathable (and as download links in the program below).

The workshop takes place from 11 am to 6 pm on September 27th. (detailed PDF program)

11:00-12:00    Tutorial: Curating and pre-processing your dataset for learning-based medical image registration (Slides PDF)
13:00-13:15    Introduction Learn2Reg challenge 2021 - challenge design, dataset and evaluation criteria
13:15-13:45    Short Orals (Luyi Han Radboud umc / 3idiots, Stephanie Häger Fraunhofer MEVIS, Marek Wodzinski AGH / IWM, Mikael Brudfors UCL / smajjk, Vincent Jaouen LaTIM, Bailiang Jian TUM, Mattias P. Heinrich UzL, Bo Hu University of Science and Technology of China / VIDAR, Gal Lifshitz Tel Aviv University, Lasse Hansen UzL)
13:45-14:45    Long Orals (Tony C. W. Mok The Hong Kong University of Science and Technology,  Wentao Pan THU,  Daniel Grzech Imperial College London, Jinxin Lv Huazhong University of Science and Technology / Driver,  Wei Shao Stanford University / PIMed)

14:45-15:30    Poster session

15:45-16:15    Keynote: Dr. Mark Wielpütz, University Clinic Heidelberg, Germany "Imaging Lung Structure and Function - You can't have one without the other"

16:15-16:45    Keynote: Prof. Dr. Bram van Ginneken, Radboudumc, Nijmegen, The Netherlands "Grand-challenge.org: bridging the gap between challenges and algorithms"

16:45-17:15    Keynote: Dr. Stefan Heldmann, Fraunhofer MEVIS, Germany "Bringing Medical Image Registration to Patients - Translation from Research to Industry at Fraunhofer MEVIS"

17:15-18:00    Closing and panel discussion