Our crossMoDA challenge at MICCAI 2021 is now live!

CAI4CAI members are leading the organization of the Cross-Modality Domain Adaptation for Medical Image Segmentation challenge (crossMoDA), which runs as an official challenge at the Medical Image Computing and Computer Assisted Intervention (MICCAI) 2021 conference.

Domain Adaptation (DA) has recently attracted strong interest in the medical imaging community. By encouraging algorithms to be robust to unseen situations or different input data domains, Domain Adaptation improves the applicability of machine learning approaches to various clinical settings. While a large variety of DA techniques has been proposed for image segmentation, most of these techniques have been validated either on private datasets or on small publicly available datasets. Moreover, these datasets mostly address single-class problems.

[Figure: example scans — Source (contrast-enhanced T1) and Target (high-resolution T2)]

To tackle these limitations, the crossMoDA challenge introduces the first large, multi-class dataset for unsupervised cross-modality Domain Adaptation. The goal of the challenge is to segment two key brain structures involved in the follow-up and treatment planning of vestibular schwannoma (VS): the tumour and the cochlea. Specifically, segmentation of the tumour and of surrounding organs at risk, such as the cochlea, is required for radiosurgery, a common VS treatment. Moreover, tumour volume measurement has also been shown to be the most accurate measurement for evaluating VS growth. While contrast-enhanced T1 (ceT1) Magnetic Resonance Imaging (MRI) scans are commonly used for VS segmentation, recent work has demonstrated that high-resolution T2 (hrT2) imaging could be a reliable, safer, and lower-cost alternative to ceT1. For these reasons, we propose an unsupervised cross-modality challenge (from ceT1 to hrT2) that aims to automatically perform VS and cochlea segmentation on hrT2 scans. The training data consist of unpaired annotated ceT1 scans (source) and non-annotated hrT2 scans (target).
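For readers curious how this setup translates into a training loop, the sketch below shows one possible way to combine a supervised segmentation loss on the annotated ceT1 source scans with a simple entropy-minimisation term on the unannotated hrT2 target scans. This is a minimal illustration, not the organisers' baseline: the toy network, the random stand-in tensors, and the loss weighting are all hypothetical placeholders, and entropy minimisation is only one of several unsupervised DA strategies (image-to-image translation or feature alignment being common alternatives).

```python
# Minimal sketch of an unsupervised cross-modality DA segmentation step.
# Assumptions: 2D slices, one input channel, three classes
# (background, vestibular schwannoma, cochlea). All names are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

N_CLASSES = 3

class TinySegNet(nn.Module):
    """Toy segmentation network standing in for a real U-Net."""
    def __init__(self, n_classes=N_CLASSES):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),
        )

    def forward(self, x):
        return self.body(x)

def entropy_loss(logits):
    """Mean per-pixel Shannon entropy; minimising it pushes the model
    towards confident predictions on the unlabelled target domain."""
    p = F.softmax(logits, dim=1)
    return -(p * torch.log(p + 1e-8)).sum(dim=1).mean()

model = TinySegNet()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)

# Stand-ins for unpaired batches: annotated ceT1 (source) slices with
# labels, and unannotated hrT2 (target) slices.
src_images = torch.randn(4, 1, 64, 64)
src_labels = torch.randint(0, N_CLASSES, (4, 64, 64))
tgt_images = torch.randn(4, 1, 64, 64)

optimiser.zero_grad()
sup_loss = F.cross_entropy(model(src_images), src_labels)   # source supervision
unsup_loss = entropy_loss(model(tgt_images))                # target regularisation
(sup_loss + 0.1 * unsup_loss).backward()
optimiser.step()
```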

This challenge will be the first medical segmentation benchmark for unsupervised DA techniques and will promote the development of new unsupervised domain adaptation solutions for medical image segmentation. It will also contribute to the development of new algorithms for the follow-up and treatment planning of VS using hrT2 scans only.

More information about the challenge here.

Reuben Dorent
Alumni

Reuben is a PhD student supervised by Prof. Tom Vercauteren and Prof. Sebastien Ourselin. Reuben’s research focuses on collaborative learning of joint tasks from various medical centres with different local resources.

Aaron Kujawa
Research Associate

Aaron is a Research Associate working on the automatic segmentation of Vestibular Schwannoma.

Jonathan Shapey
Clinical Academic and Consultant Neurosurgeon

Jonathan’s academic interest focuses on the application of medical technology and artificial intelligence to neurosurgery.

Samuel Joutard
Alumni

Samuel is a PhD student supervised by Dr. Marc Modat and Prof. Tom Vercauteren. His research focuses on learning-based registration, with particular emphasis on tasks involving long-range, complex deformations.

Tom Vercauteren
Professor of Interventional Image Computing

Tom’s research interests include machine learning and computer-assisted interventions.