The ReXGrounding Challenge is a MICCAI 2026 challenge that evaluates how well models can ground unconstrained radiology findings, described in natural language, to precise 3D segmentation masks in volumetric chest CT.
Unlike prior challenges that focus on category-level lesion or organ segmentation, this benchmark requires models to interpret diverse clinical language — including anatomical descriptors, spatial relations, and morphological attributes — and ground it accurately in volumetric space. The dataset includes both focal and diffuse abnormalities, spans a wide range of radiological patterns, and reflects real-world reporting variability.
The challenge is built upon CT-RATE, a large-scale dataset of non-contrast chest CT scans paired with free-text radiology reports, and is further extended with expert-verified, voxel-level 3D segmentations corresponding to individual report findings. The challenge is hosted on the ReXrank leaderboard.
Participants are evaluated on one primary task: free-text finding grounding. A model receives a CT volume and a natural-language finding from a radiology report and must output a 3D segmentation mask corresponding to that description.
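The expected input/output contract can be sketched as follows. Everything here is illustrative: the function name, array conventions, and the all-background placeholder are assumptions, not part of any official challenge API.

```python
import numpy as np

def ground_finding(ct_volume: np.ndarray, finding: str) -> np.ndarray:
    """Hypothetical grounding interface (names and conventions are assumptions).

    ct_volume: 3D array of Hounsfield units, shape (depth, height, width).
    finding:   free-text finding from the report, e.g.
               "ground-glass opacity in the right upper lobe".
    Returns a binary mask of the same shape marking the described finding.
    """
    # Placeholder baseline: predict no foreground voxels.
    # A real model would condition jointly on the text and the volume.
    return np.zeros(ct_volume.shape, dtype=np.uint8)
```

A submission would produce one such mask per (CT volume, finding) pair.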
Findings span 14 categories covering both typically non-focal abnormalities (bronchial wall thickening, bronchiectasis, emphysema, septal thickening, micronodules, and other diffuse abnormalities) and typically focal abnormalities (linear opacities, atelectasis/consolidation, ground-glass opacities, pulmonary nodules/masses, pleural effusion/thickening, honeycombing, pneumothorax, and other focal findings).
| Split | Cases | Annotations |
|---|---|---|
| Training | 2,992 CT scans | Partial-instance (up to 3 instances per finding) |
| Validation | 200 CT scans | Exhaustive (all instances segmented by radiologists) |
| Test | 300 CT scans | Exhaustive (all instances segmented by radiologists) |
All annotations are voxel-level 3D segmentation masks linked to free-text findings extracted from radiology reports. Validation and test sets are annotated exclusively by board-certified radiologists.
Ranking metric: Average Dice Similarity Coefficient (DSC) per finding per case.
Secondary evaluation covers both overlap-based and distance-based metrics.
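For reference, the ranking metric can be sketched as below: a minimal NumPy implementation of the Dice Similarity Coefficient on binary 3D masks. The convention of returning 1.0 when both masks are empty is an assumption made here; the challenge's handling of empty masks is not specified in this description.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice Similarity Coefficient between two binary 3D masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        # Both masks empty: this DSC = 1.0 convention is an assumption.
        return 1.0
    intersection = np.logical_and(pred, gt).sum()
    return 2.0 * float(intersection) / float(denom)
```

Per the ranking rule above, such scores would then be averaged over every (finding, case) pair.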
Sign up to be notified when the challenge officially opens in June 2026.
For questions about the challenge, please contact Mohammed Baharoon.
ReXGroundingCT:
@article{baharoon2025rexgroundingct,
title={ReXGroundingCT: A 3D Chest CT Dataset for Segmentation of Findings from Free-Text Reports},
author={Baharoon, Mohammed and Luo, Luyang and Moritz, Michael and Kumar, Abhinav and Kim, Sung Eun and Zhang, Xiaoman and Zhu, Miao and Alabbad, Mahmoud Hussain and Alhazmi, Maha Sbayel and Mistry, Neel P and others},
journal={arXiv preprint arXiv:2507.22030},
year={2025}
}
CT-RATE:
@article{hamamci2026generalist,
title={Generalist foundation models from a multimodal dataset for 3D computed tomography},
author={Hamamci, Ibrahim Ethem and Er, Sezgin and Wang, Chenyu and Almas, Furkan and Simsek, Ayse Gulnihan and Esirgun, Sevval Nil and Dogan, Irem and Durugol, Omer Faruk and Hou, Benjamin and Shit, Suprosanna and others},
journal={Nature Biomedical Engineering},
pages={1--19},
year={2026},
publisher={Nature Publishing Group UK London}
}