Image source: Mitsuyama Y, Takita H, Walston SL et al., European Radiology 2025 (CC BY 4.0)
The task is further complicated by variation in how images are captured. A radiograph can be taken anterior-to-posterior or vice versa, and it can also be lateral, inverted, or rotated, further complicating the dataset. In large imaging archives, these minor errors quickly add up to hundreds or thousands of mislabeled images.
AI Models Improve Detection of Mislabeled Radiographs
A research team at Osaka Metropolitan University Graduate School of Medicine, including graduate student Yasuhito Mitsuyama and Professor Daiju Ueda, aimed to improve the detection of mislabeled data by automatically identifying errors before they affect the input data for deep-learning models. The group developed two models: Xp-Bodypart-Checker, which classifies radiographs depending on the body part; and CXp-Projection-Rotation-Checker, which detects the projection and rotation of chest radiographs.
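The checking step these models enable can be sketched in a few lines: run the classifier on each archived image, compare its prediction against the label recorded in the archive, and flag any disagreement for human review before the data reaches a training pipeline. The function and data names below are hypothetical, and a toy dictionary stands in for a trained model:

```python
# Minimal sketch of automated label checking: compare a classifier's
# predicted body part against the label stored in the imaging archive
# and flag disagreements for review. All names are hypothetical.

def flag_mislabeled(records, predict):
    """records: iterable of (image_id, recorded_label) pairs.
    predict: callable mapping image_id -> predicted label.
    Returns a list of (image_id, recorded_label, predicted_label)
    for every image where the two labels disagree."""
    flagged = []
    for image_id, recorded in records:
        predicted = predict(image_id)
        if predicted != recorded:
            flagged.append((image_id, recorded, predicted))
    return flagged

# Toy stand-in for a trained body-part classifier's predictions.
toy_predictions = {"img1": "chest", "img2": "hand", "img3": "chest"}

records = [("img1", "chest"), ("img2", "chest"), ("img3", "chest")]
print(flag_mislabeled(records, toy_predictions.get))
# img2 is flagged: recorded as "chest" but predicted as "hand"
```

In practice the flagged list would be reviewed by a human rather than corrected automatically, since (as the researchers note below) a classifier can also flag correctly labeled images.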
High Accuracy Achieved by New AI Detection Models
Xp‑Bodypart‑Checker achieved an accuracy of 98.5% and CXp‑Projection‑Rotation‑Checker obtained accuracies of 98.5% for projection and 99.3% for rotation. The researchers are optimistic that integrating both into a single model would deliver game-changing performance in clinical settings.
Researchers Plan Further Improvements for Clinical Use
Although the results were outstanding, the team hopes to fine-tune the method further for clinical use. “We plan to retrain the model on radiographs that were flagged despite being correctly labeled, as well as those that were not flagged but were in fact mislabeled, to achieve even greater accuracy,” Mitsuyama said.
Source: Osaka Metropolitan University

