ReXrank

Chest X-ray Report Generation Leaderboard

ReXrank Submission Guideline (Round 1)

The ReXrank Challenge is a competition in chest X-ray report generation leveraging ReXGradient, our comprehensive multi-institutional dataset, running from December 1, 2024, to March 15, 2025. The competition welcomes participation from academic institutions, industry professionals, and independent researchers worldwide. We will evaluate model performance along multiple critical dimensions, including clinical accuracy and generalization capability across diverse institutions. A panel of distinguished radiologists will conduct thorough assessments of the highest-performing models. Top-performing participants will be invited to collaborate on future research initiatives and model development.

Getting Started

To help you evaluate your models, we have made available the evaluation script we will use for official evaluation, along with a sample prediction file in the format the script expects as input. To run the evaluation, use python evaluate.py <path_to_data> <path_to_predictions>.
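Before running the official script, it can help to sanity-check your prediction file. The sketch below assumes a simple schema in which the JSON file maps study IDs to generated report strings; the sample prediction file shipped with evaluate.py is the authoritative reference for the actual format.

```python
import json
import sys

def validate_predictions(path):
    """Check that a prediction file maps study IDs to non-empty report strings.

    NOTE: this schema (a flat {study_id: report} object) is an assumption;
    consult the official sample prediction file for the exact format.
    """
    with open(path) as f:
        preds = json.load(f)
    if not isinstance(preds, dict):
        raise ValueError("expected a JSON object keyed by study ID")
    for study_id, report in preds.items():
        if not isinstance(report, str) or not report.strip():
            raise ValueError(f"study {study_id}: report must be a non-empty string")
    return len(preds)

if __name__ == "__main__" and len(sys.argv) > 1:
    n = validate_predictions(sys.argv[1])
    print(f"{n} predictions look well-formed")
```

Running this on your file before submission catches malformed entries early, rather than partway through the official evaluation.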

Submission Guidelines

1. Evaluating on the MIMIC-CXR Test Set

To obtain scores consistent with our leaderboard, please use the official MIMIC-CXR test split, which you can download from here. We evaluate at the study level: if the submitted model accepts multiple images, we will input all images of a study; if it accepts only one image, we will default to the frontal image. We also provide context information such as patient age, patient gender, indication, and comparison. When submitting, you can indicate whether your model uses this information.
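The selection rules above can be sketched as a small helper that assembles the inputs for one study. The field names (study_id, images, view, age, gender, indication, comparison) are assumptions for illustration, not the official schema.

```python
def build_study_input(study, use_context=True, multi_image=True):
    """Assemble model inputs for one study, mirroring the evaluation rules:
    all images for multi-image models, the frontal view otherwise, plus
    optional context fields. All field names here are hypothetical.
    """
    if multi_image:
        images = study["images"]  # every image in the study
    else:
        # single-image models default to the frontal view
        images = [img for img in study["images"] if img.get("view") == "frontal"][:1]
    record = {"study_id": study["study_id"], "images": images}
    if use_context:
        # attach only the context fields the submitter opted to use
        for field in ("age", "gender", "indication", "comparison"):
            if field in study:
                record[field] = study[field]
    return record
```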

2. Model Submission

Your model submission should include the following:

  1. Model Description: This description identifies your submission on the leaderboard: name of the model, institution, paper link, code link, and year.
  2. Conda Environment File: Include an environment.yaml file that supports installation via conda.
  3. Inference Script: The model should support the command: python inference.py <input_json_file> <output_json_file> <img_root_dir>. We provide an example based on MedVersa to illustrate our requirements.
  4. Evaluation Result: Include your evaluation results on the MIMIC-CXR test set. We will contact you if we are unable to replicate them.
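The required inference.py command can be satisfied with a skeleton like the one below. The input/output JSON schema shown (a list of study records in, a {study_id: report} object out) is an assumption; the provided MedVersa example is the authoritative reference, and generate_report is a hypothetical placeholder for your model's forward pass.

```python
import json
import os
import sys

def generate_report(image_paths):
    """Hypothetical placeholder: replace with your model's actual inference."""
    return "FINDINGS: ... IMPRESSION: ..."

def run_inference(input_json, output_json, img_root_dir):
    """Read studies from input_json and write {study_id: report} to output_json.

    NOTE: the schema used here is an assumption made for illustration; see
    the MedVersa example for the format the organizers actually expect.
    """
    with open(input_json) as f:
        studies = json.load(f)
    predictions = {}
    for study in studies:
        # resolve relative image paths against the supplied image root
        paths = [os.path.join(img_root_dir, p) for p in study["images"]]
        predictions[study["study_id"]] = generate_report(paths)
    with open(output_json, "w") as f:
        json.dump(predictions, f, indent=2)

if __name__ == "__main__" and len(sys.argv) == 4:
    run_inference(sys.argv[1], sys.argv[2], sys.argv[3])
```

Keeping the entry point a thin wrapper around run_inference makes the script easy to test without invoking it from the command line.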

Any questions or concerns? Please reach out to us by email.