News!
  • Task 3 Continuation Update: Please see our Forum Post (and Updates)

Timeline:

  • October 28th: Algorithm Sanity Check Submission (Upload-Link)
  • November 9th: Q&A / sanity check feedback meeting
  • November 28th: Final submission of algorithms


Recordings and Slides

Timeline

Submission deadlines refer to 12 PM UTC on the respective dates.

  • 18th May: Kickoff-Meeting
  • 20th May: Training data available
  • 7th July: NLST Validation available
  • 10th July: Learn2Reg meeting at WBIR (Workshop on Biomedical Image Registration)
  • 20th July: Snapshot evaluation on validation data (optional): the best 5 submissions will be invited to give a talk during the virtual Learn2Reg workshop and will receive a secret prize
  • 09/06/2022 23:59 CET: Submission of Task 1 (NLST) displacement fields on Test Data for final evaluation
  • 09/06/2022 23:59 CET: Submission of Task 3 (self-configuring training and testing) scripts for final evaluation
  • 18th September: Workshop @ MICCAI 2022, Release of Results
  • End of October: Submission of LNCS paper (up to 4 pages)


Learn2Reg 2022

We are delighted to announce this year's Learn2Reg challenge, which comes with some innovations. These are this year's tasks:

  • Task 1: CT Lung Registration (NLST) -  Screening Data
  • Task 2: Continuation of previous year’s tasks
  • Task 3: Universal registration framework

Additionally, we are very happy to launch Learn2Reg-Test once our MICCAI workshop is completed! There, participants and non-participants alike are welcome to test their algorithms (which may be trained on our L2R training data) against our test set. With this benchmarking website, we hope to facilitate the reproducibility of image registration algorithms and to promote fair, easy-to-use and neutral evaluation. Furthermore, it has the capacity to become a repository of registration methods, which can easily be compared and employed.


Task 1: CT Lung Registration (NLST)

This year, we ask you to register longitudinal lung CT images. We provide a substantial amount of image data, as well as automatically generated lung masks and keypoints for deep learning supervision.

Training, Validation and Test Data


Task 2: Continuation of previous year's tasks

Although it may seem a bit boring, our second task of Learn2Reg 2022 is the continuation of our previous tasks. We have three major reasons for doing so:

  1. Throughout the past years, we have released a lot of data for image registration tasks, including inter- and intra-subject, inter- and intra-modal MR, CT and US images, labels, masks and keypoints. We want this repository of registration data to be tackled with newly developed algorithms and are eager to see the improvements on these datasets.
  2. For ease of use, we have restructured our datasets into a unifying architecture. We hope to lower the barrier to train and evaluate on different datasets and thus to fairly compare which algorithm works best for which task. All previous tasks can be evaluated on our test data on Learn2Reg-Test, which will serve as a benchmarking and deployment site for registration algorithms.
  3. If you have adopted the new dataset structure and developed (or adapted) your method, it is only a small step to participate in our most exciting task: the Universal Registration Framework.

Task 3: Universal registration framework

This year, we will feature the first (to our knowledge) Type 3 medical image registration challenge. We seek to find the registration framework that works best on a variety of tasks. Therefore, participants are asked to provide their algorithms to us as Docker containers, which will be trained on our hardware on hidden datasets. We hope that if you have worked on Task 2 and adapted to the new dataset architecture, adopting these methods into a rule-based universal framework will be relatively easy. Additionally, we will provide exemplary algorithms and a template!

L2R 2022 Test Phase Submission

If you'd like to participate in the final stage of our challenge, please download the respective test data and follow the task-specific instructions below:

  1. L2R 2022 Task 1 (NLST):
    • Download the test data and compute the displacement fields (same format as for validation). If you are unsure which cases to use, please see NLST_dataset.json.
    • Upload your zip-compressed results no later than 09/06/2022 23:59 CET to this cloud storage. Please make sure to also include a txt file containing your name, team, contact information and a short description/publication link of your algorithm. Your results are expected to be about 3 GB in size.
    • For a comparison of registration algorithms regarding inference time/runtime, participants may submit their methods as Docker containers (and thus gain computation bonus points). All submitted Docker containers will run on the same hardware. We provide a repo2docker example (https://github.com/MDL-UzL/L2R/tree/main/examples/submission) that you can extend or modify to prepare your submission. If you'd rather use Docker without repo2docker, you may do so. Please make sure to use mounts for the data path (/NLST/NLST_dataset.json) and the output path. We will be running the Docker containers using CUDA Version 11.6; you may change the base image to use GPU acceleration.
    • The runtime of your algorithm will be computed for each registration pair as the time difference between the first access of the test data (fixed image, moving image, masks, etc.) and the moment the displacement field is written to disk. Keep this in mind when preparing your submission and avoid unnecessary computation (e.g. GPU initialization of your deep learning framework) during this time interval; see the sketch after this list.
    • To avoid technical issues, we kindly ask you to also send an email to learn2reg@gmail.com containing the same information as mentioned above. We will confirm that your submission has been uploaded successfully.
  2. L2R 2022 Task 2 (Continuation of previous tasks):
    • If you would like to submit to our previous tasks, please send an email no later than 09/06/2022 23:59 CET to learn2reg@gmail.com. We will send you further instructions on how to upload your results.
  3. L2R 2022 Task 3 (Universal Registration Framework)
    • We have already reached out with submission information to participants who qualified due to outstanding results in our snapshot evaluation. If you have not received any information but would still like to participate, please contact us.
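
For orientation, here is a minimal, purely illustrative sketch of how a Task 1 Docker entry point could keep GPU initialization and model loading outside the timed interval described above. The dataset key registration_test, the /output mount and the file naming are assumptions made for this example, not part of our specification:

import json
from pathlib import Path

import nibabel as nib
import numpy as np
import torch  # only needed for learning-based methods

DATA_ROOT = Path("/NLST")      # mounted data path (contains NLST_dataset.json)
OUTPUT_DIR = Path("/output")   # mounted output path (name is an assumption)


def setup():
    # Do all expensive, case-independent work here: load weights, move the
    # network to the GPU and trigger CUDA initialization before any test
    # data is touched, so none of it falls into the timed interval.
    device = "cuda" if torch.cuda.is_available() else "cpu"
    model = torch.nn.Identity().to(device)  # placeholder for your network
    return model, device


def main():
    model, device = setup()  # untimed setup; used by your registration inside the loop
    dataset = json.loads((DATA_ROOT / "NLST_dataset.json").read_text())
    pairs = dataset["registration_test"]  # assumed key; check the json you downloaded

    for pair in pairs:
        # --- timed interval starts with the first access of the test data ---
        fixed = nib.load(DATA_ROOT / pair["fixed"])
        moving = nib.load(DATA_ROOT / pair["moving"])
        # replace the zero field below with your actual registration
        disp = np.zeros(fixed.shape + (3,), dtype=np.float32)
        fixed_id = Path(pair["fixed"]).name.split("_")[1]   # e.g. "0101" (assumed file naming)
        moving_id = Path(pair["moving"]).name.split("_")[1]
        nib.save(nib.Nifti1Image(disp, fixed.affine),
                 OUTPUT_DIR / f"disp_{fixed_id}_{moving_id}.nii.gz")
        # --- timed interval ends once the displacement field is on disk ---


if __name__ == "__main__":
    main()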

If you have any questions, please do not hesitate to contact us at learn2reg@gmail.com.


Legal notes:

Within the L2R Challenge, the submitted Docker container with your algorithm will be evaluated on a local NVIDIA server of the Universität zu Lübeck. The evaluation will only be used to evaluate the Challenge data and will not be used for any other purpose. In particular, it will not be used in, on, or for human beings and/or for therapeutic, diagnostic, or other medical purposes, nor will it be used in connection with or brought to market with any medical device. The submission does not give rise to any liability or warranty claims against the submitter and the challenge organizers.


L2R 2022 Validation Submission

If you are interested in validating within our previous tasks, please see the detailed descriptions in our archive section (2020/2021). However, we recommend using our newly restructured datasets, which include our validation data. Information about validation pairs is stored in the corresponding dataset.json.

Feel free to validate with your own methods and criteria or use our evaluation methods uploaded to GitHub, including several zero-deformation sample submissions on our GitHub repository. Want to test your method against our test data? You can find everything about that on Learn2Reg-Test (which opens after L2R 2022 finishes at MICCAI)!


Task 1 (NLST): Submission Format

Submissions must be uploaded as a zip file containing displacement fields (displacements only; the identity grid is added during evaluation) for all validation pairs for all tasks (even when only participating in a subset of the tasks; in that case, submit displacement fields of zeros for all remaining tasks). You can find the validation pairs in the NLST_dataset.json. The convention used for displacement fields follows scipy's map_coordinates() function, expecting displacement fields in the format [X, Y, Z, [x, y, z]], where X, Y, Z represent the image dimensions and x, y, z the voxel displacements. The evaluation script expects .nii.gz files in full-precision format with shape 224x192x224x3. Further information can be found here.
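
To illustrate this convention, here is a minimal sketch of how such a field could be applied to a moving image with map_coordinates(); the file names are placeholders and this is not our evaluation code:

import nibabel as nib
import numpy as np
from scipy.ndimage import map_coordinates

# placeholder file names for one validation pair
disp = nib.load("disp_0101_0101.nii.gz").get_fdata().astype(np.float32)  # shape (224, 192, 224, 3)
moving = nib.load("NLST_0101_0001.nii.gz").get_fdata()                   # hypothetical moving image

X, Y, Z = moving.shape
# identity grid in voxel coordinates, shape (3, X, Y, Z)
identity = np.stack(np.meshgrid(np.arange(X), np.arange(Y), np.arange(Z), indexing="ij"))
# add the voxel displacements (x, y, z are stored in the last axis of the field)
coords = identity + np.moveaxis(disp, -1, 0)
# resample the moving image at the displaced positions
warped = map_coordinates(moving, coords, order=1)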

We also provide a sample (zero-deformation-field) submission file here.

The file structure of your submission should look as follows:

folder.zip
└── folder
    ├── disp_0101_0101.nii.gz
    ├── disp_0102_0102.nii.gz
    ├── disp_0103_0103.nii.gz
    ├── disp_0104_0104.nii.gz
    ├── disp_0105_0105.nii.gz
    ├── disp_0106_0106.nii.gz
    ├── disp_0107_0107.nii.gz
    ├── disp_0108_0108.nii.gz
    ├── disp_0109_0109.nii.gz
    └── disp_0110_0110.nii.gz

The first four digits represent the case id of the fixed image (as specified in the corresponding dataset.json) with leading zeros; the last four digits represent the case id of the moving image. If you have any problems with your submission or find errors in the evaluation code (see below), please contact Alessa Hering, Lasse Hansen, Mattias Heinrich and Christoph Großbröhmer at learn2reg@gmail.com.
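
For illustration only, a short sketch that would reproduce the zero-displacement sample submission shown above (the identity affine is a simplification; copying the affine of the corresponding fixed image is the safer choice):

from pathlib import Path

import nibabel as nib
import numpy as np

out = Path("folder")
out.mkdir(exist_ok=True)
for case in range(101, 111):  # validation cases 0101 .. 0110
    disp = np.zeros((224, 192, 224, 3), dtype=np.float32)
    # identity affine used for simplicity only
    nib.save(nib.Nifti1Image(disp, np.eye(4)), out / f"disp_{case:04d}_{case:04d}.nii.gz")
# afterwards, compress the folder, e.g. zip -r folder.zip folder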

Note for PyTorch users: When using PyTorch as your deep learning framework, you will most likely transform your images with the grid_sample() routine. Please be aware that this function uses a different convention than ours, expecting sampling grids in the format [X, Y, Z, [z, y, x]] with normalized coordinates between -1 and 1. Prior to your submission, you should therefore convert your displacement fields to match our convention (see above).
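
As an illustration (not official conversion code), the following sketch converts a sampling grid built for grid_sample() with align_corners=True back to our convention by flipping the channel order, un-normalizing to voxel coordinates and subtracting the identity grid:

import torch


def pytorch_grid_to_l2r_disp(grid: torch.Tensor) -> torch.Tensor:
    """Convert a (1, X, Y, Z, 3) sampling grid (grid_sample convention,
    align_corners=True) into an (X, Y, Z, 3) field of voxel displacements."""
    X, Y, Z = grid.shape[1:4]
    sizes = torch.tensor([X, Y, Z], dtype=grid.dtype, device=grid.device)
    # 1) reorder the last axis from (z, y, x) to (x, y, z)
    grid_xyz = grid[0].flip(-1)
    # 2) un-normalize from [-1, 1] to voxel coordinates (align_corners=True)
    coords = (grid_xyz + 1) / 2 * (sizes - 1)
    # 3) subtract the identity grid so that only the displacements remain
    identity = torch.stack(torch.meshgrid(
        torch.arange(X, dtype=grid.dtype, device=grid.device),
        torch.arange(Y, dtype=grid.dtype, device=grid.device),
        torch.arange(Z, dtype=grid.dtype, device=grid.device),
        indexing="ij"), dim=-1)
    return coords - identity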