ReXGrounding Challenge

MICCAI 2026

🏆 About the Challenge

The ReXGrounding Challenge is a MICCAI 2026 challenge designed to evaluate models on localizing unconstrained radiology findings described in natural language to precise 3D segmentation masks in volumetric chest CT.

Unlike prior challenges that focus on category-level lesion or organ segmentation, this benchmark requires models to interpret diverse clinical language — including anatomical descriptors, spatial relations, and morphological attributes — and ground it accurately in volumetric space. The dataset includes both focal and diffuse abnormalities, spans a wide range of radiological patterns, and reflects real-world reporting variability.

The challenge is built upon CT-RATE, a large-scale dataset of non-contrast chest CT scans paired with free-text radiology reports, and is further extended with expert-verified, pixel-level 3D segmentations corresponding to individual report findings. The challenge is hosted on the ReXrank leaderboard.


🔬 Task

Participants are evaluated on one primary task: free-text finding grounding. A model receives a CT volume and a natural-language finding from a radiology report and must output a 3D segmentation mask corresponding to that description.

Findings span 14 categories covering both typically non-focal abnormalities (bronchial wall thickening, bronchiectasis, emphysema, septal thickening, micronodules, and other diffuse abnormalities) and typically focal abnormalities (linear opacities, atelectasis/consolidation, ground-glass opacities, pulmonary nodules/masses, pleural effusion/thickening, honeycombing, pneumothorax, and other focal findings).


📦 Dataset

Split         Cases           Annotations
Training      2,992 CT scans  Partial-instance (up to 3 instances per finding)
Validation    200 CT scans    Exhaustive (all instances segmented by radiologists)
Test          300 CT scans    Exhaustive (all instances segmented by radiologists)

All annotations are pixel-level 3D segmentation masks linked to free-text findings extracted from radiology reports. Validation and test sets are annotated exclusively by board-certified radiologists.


📅 Timeline

Now — May 2026
Pre-registration open — training data already publicly available
June 2026
Challenge launch — registration opens, validation set released
June — September 2026
Development phase — evaluate on validation set, submit multiple runs
September 2026
Submission deadline — final submission evaluated on held-out test set
Late September 2026
Results announced & challenge session at MICCAI 2026

📊 Evaluation Metrics

Ranking metric: Average Dice Similarity Coefficient (DSC) per finding per case.

Overlap-based metrics:

  • Dice (primary): Average DSC computed per finding per case
  • Hit Rate: Proportion of findings where overall Dice ≥ 0.1
  • Instance Precision: TP / (TP + FP), where TP is a predicted instance with Dice ≥ 0.2
  • Instance Recall: TP / (TP + FN)
  • Instance F1: Harmonic mean of Instance Precision and Recall
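The overlap-based metrics above can be sketched in a few lines of numpy/scipy. This is an illustrative sketch, not the official evaluation code: in particular, the exact instance-matching protocol is not fully specified above, so this sketch assumes connected components define instances and compares each predicted instance against the full ground-truth mask.

```python
import numpy as np
from scipy import ndimage


def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:  # both masks empty: treat as perfect agreement
        return 1.0
    return 2.0 * np.logical_and(pred, gt).sum() / denom


def instance_prf(pred: np.ndarray, gt: np.ndarray, thr: float = 0.2):
    """Instance precision/recall/F1 with connected components as instances.

    Assumption: a predicted instance counts as TP if its Dice with the
    full ground-truth mask reaches `thr` (and symmetrically for recall).
    """
    pred_lab, n_pred = ndimage.label(pred)
    gt_lab, n_gt = ndimage.label(gt)
    tp_pred = sum(dice(pred_lab == i, gt) >= thr for i in range(1, n_pred + 1))
    tp_gt = sum(dice(gt_lab == j, pred) >= thr for j in range(1, n_gt + 1))
    prec = tp_pred / n_pred if n_pred else 0.0
    rec = tp_gt / n_gt if n_gt else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```

The Hit Rate then follows directly: count the fraction of findings for which `dice(pred, gt) >= 0.1`.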

Distance-based metrics:

  • Distance Precision: TP / (TP + FP), where TP is a predicted instance with ASSD (non-focal) or centroid distance (focal) ≤ 2× max voxel spacing
  • Distance Recall: TP / (TP + FN), using the same distance matching criterion
  • Distance F1: Harmonic mean of Distance Precision and Recall
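For the focal case, the centroid-distance criterion can be sketched as follows. Again a minimal illustration under stated assumptions: instances are connected components, centroids are computed in physical space from the voxel spacing, and a greedy any-match rule is used; the official protocol additionally uses ASSD for non-focal findings, which this sketch does not implement.

```python
import numpy as np
from scipy import ndimage


def distance_prf_focal(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """Distance precision/recall/F1 for focal findings via centroid distance.

    A predicted instance is TP if its centroid lies within
    2 x max(voxel spacing) of some ground-truth instance centroid.
    """
    spacing = np.asarray(spacing, dtype=float)
    tol = 2.0 * spacing.max()
    pred_lab, n_pred = ndimage.label(pred)
    gt_lab, n_gt = ndimage.label(gt)
    # Centroids of each connected component, scaled into physical units
    pred_c = [np.array(c) * spacing for c in
              ndimage.center_of_mass(pred, pred_lab, range(1, n_pred + 1))]
    gt_c = [np.array(c) * spacing for c in
            ndimage.center_of_mass(gt, gt_lab, range(1, n_gt + 1))]

    def near(c, others):
        return any(np.linalg.norm(c - o) <= tol for o in others)

    tp_pred = sum(near(c, gt_c) for c in pred_c)
    tp_gt = sum(near(c, pred_c) for c in gt_c)
    prec = tp_pred / n_pred if n_pred else 0.0
    rec = tp_gt / n_gt if n_gt else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1
```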

📝 How to Participate

  • Pre-register now using the form below to receive updates when the challenge officially opens in June 2026.
  • Training data is already publicly available — you can start developing your method now.
  • Any publicly available or private training data, including pre-trained models and external datasets, may be used. All external data sources must be described in the method submission.
  • All predictions on the test set must be fully automatic — no manual intervention, post-hoc editing, or case-specific tuning allowed.
  • During the development phase, you can evaluate on the validation set and submit multiple runs. Only the final submission before the deadline is officially ranked.
  • See the submission guidelines for format details.

🏅 Awards & Publication

  • Top 3 performing teams will receive certificates and invited oral/spotlight presentations at the challenge session.
  • Top 3 teams will be recognized on the public leaderboard and in the post-challenge publication.
  • Members of the top 3 teams qualify for co-authorship on the challenge publication (up to 8 authors per team).
  • All teams are free to publish their own results independently with no embargo period.

📬 Pre-Register

Sign up to be notified when the challenge officially opens in June 2026.


👥 Organizers

  • Mohammed Baharoon — Harvard Medical School, USA
  • Pranav Rajpurkar — Harvard Medical School, USA
  • Luyang Luo — Harvard Medical School, USA
  • Xiaoman Zhang — Harvard Medical School, USA
  • Mahmoud Hussain Alabbad — King Fahad Hospital, Saudi Arabia
  • Sungeun Kim — Harvard Medical School, USA


📧 Contact

For questions about the challenge, please contact Mohammed Baharoon.


📚 References

ReXGroundingCT:

@article{baharoon2025rexgroundingct,
  title={ReXGroundingCT: A 3D Chest CT Dataset for Segmentation of Findings from Free-Text Reports},
  author={Baharoon, Mohammed and Luo, Luyang and Moritz, Michael and Kumar, Abhinav and Kim, Sung Eun and Zhang, Xiaoman and Zhu, Miao and Alabbad, Mahmoud Hussain and Alhazmi, Maha Sbayel and Mistry, Neel P and others},
  journal={arXiv preprint arXiv:2507.22030},
  year={2025}
}

CT-RATE:

@article{hamamci2026generalist,
  title={Generalist foundation models from a multimodal dataset for 3D computed tomography},
  author={Hamamci, Ibrahim Ethem and Er, Sezgin and Wang, Chenyu and Almas, Furkan and Simsek, Ayse Gulnihan and Esirgun, Sevval Nil and Dogan, Irem and Durugol, Omer Faruk and Hou, Benjamin and Shit, Suprosanna and others},
  journal={Nature Biomedical Engineering},
  pages={1--19},
  year={2026},
  publisher={Nature Publishing Group UK London}
}