This project focuses on developing a deep learning model for bone age estimation from left-hand radiographs. The primary goal is to build a proof of concept that showcases the capabilities of artificial intelligence (AI) in assisting radiologists with this task. The project is a collaborative effort with a radiologist from the National Institute of Cardiology "Ignacio Chávez" in Mexico.
The model will be trained on a publicly available dataset provided by the RSNA Pediatric Bone Age Challenge (2017). This dataset consists of a diverse range of X-ray images of pediatric patients along with their corresponding bone age labels. It serves as a comprehensive resource for training and evaluating the proposed deep learning model.
Additionally, the project aims to evaluate the model's performance on a private dataset obtained from a hospital. This evaluation will help assess the model's real-world applicability and provide insights into its effectiveness when applied to different data sources.
The primary objective of this project is to develop a deep learning model that can accurately estimate the bone age of pediatric patients. By leveraging AI, the model is intended to assist radiologists by automating and augmenting the bone age assessment process. This collaboration seeks to combine the expertise of a radiologist with the technical skills of an AI student to create a robust and reliable tool.
It is important to note that this project is solely for research purposes and will not be used commercially. The code may contain re-implementations of existing models, with proper references provided to acknowledge the original work.
- A trained deep learning model capable of estimating bone age from X-ray images.
- A comprehensive evaluation of the model's performance on both public and private datasets.
- Documentation detailing the model architecture, training procedure, and evaluation results.
- Code repository containing the source code, dataset processing scripts, and model implementation.
- Proper attribution and references to any external code or models used in the project.
By developing and presenting this proof of concept, the project aims to demonstrate the potential of AI in assisting radiologists and advancing medical imaging practices for bone age estimation.
- The RSNA dataset is first passed through the Segment Anything Model (SAM)
- The SAM output masks are then filtered based on the segmentation confidence score and a weighted count of mask pixels
- The best mask is then post-processed with morphological closing to fill any small holes in the mask
- The mask is then checked to determine whether a vertical flip is required, and in some cases cropped to remove the forearm
- The final mask is then used to remove background pixels from the X-ray scans
- The image is then cropped to the nearest foreground pixel (a code sketch of these post-processing steps follows this list)
- Manual filtering is then performed to remove low-quality images and failed segmentations
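
As a reference for the mask handling steps above, here is a minimal sketch using OpenCV and NumPy. It assumes the output format of SAM's `SamAutomaticMaskGenerator` (a list of dicts with `segmentation` and `predicted_iou` keys); the scoring weights, kernel size, and file name are illustrative placeholders rather than the project's exact values.

```python
import cv2
import numpy as np

def pick_best_mask(masks, score_weight=0.5, area_weight=0.5):
    """Select one mask from SAM's automatic mask generator output.

    `masks` is a list of dicts with boolean 'segmentation' arrays and
    'predicted_iou' scores, as returned by SamAutomaticMaskGenerator.generate().
    The weights here are illustrative placeholders, not the project's values.
    """
    def weighted_score(m):
        foreground_fraction = m["segmentation"].mean()  # fraction of mask pixels
        return score_weight * m["predicted_iou"] + area_weight * foreground_fraction
    return max(masks, key=weighted_score)["segmentation"]

def close_holes(mask, kernel_size=15):
    """Fill small holes in the binary mask via morphological closing."""
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (kernel_size, kernel_size))
    closed = cv2.morphologyEx(mask.astype(np.uint8), cv2.MORPH_CLOSE, kernel)
    return closed.astype(bool)

def apply_mask_and_crop(image, mask):
    """Zero out background pixels, then crop to the nearest foreground pixel."""
    masked = np.where(mask, image, 0)
    ys, xs = np.nonzero(mask)
    return masked[ys.min():ys.max() + 1, xs.min():xs.max() + 1]

# Hypothetical usage on a single grayscale radiograph:
# image = cv2.imread("radiograph.png", cv2.IMREAD_GRAYSCALE)
# masks = mask_generator.generate(cv2.cvtColor(image, cv2.COLOR_GRAY2RGB))
# hand = apply_mask_and_crop(image, close_holes(pick_best_mask(masks)))
```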
- The model is trained on the RSNA dataset
- Pretrained weights from the ImageNet dataset are used to initialize the model
- ResNet-50 is used as the baseline model (a minimal training sketch is shown after this list)
- An InceptionV3 model was trained by referencing the architecture of the first-place solution to the RSNA challenge by Alexander Bilbily
- The final model will be fine-tuned and evaluated on the private dataset
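
A minimal PyTorch sketch of this training setup (ResNet-50 initialized with ImageNet weights and trained with an MAE objective) is shown below; the regression head, optimizer, and learning rate are illustrative assumptions, not the project's actual configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# ResNet-50 with ImageNet-pretrained weights, adapted for bone age regression.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 1)   # single regression output: bone age

criterion = nn.L1Loss()                          # MAE objective
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # illustrative settings

def train_step(images, bone_ages):
    """One optimization step on a batch of (image, bone age) pairs."""
    optimizer.zero_grad()
    preds = model(images).squeeze(1)             # shape: (batch,)
    loss = criterion(preds, bone_ages)
    loss.backward()
    optimizer.step()
    return loss.item()
```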
- The model is evaluated using mean absolute error (MAE), i.e. the mean of the absolute differences between the predicted and actual bone ages (a short example follows this list)
- The model is evaluated on the RSNA dataset and, at a later stage, on the private dataset
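
For concreteness, a small sketch of the MAE computation; the ages in the usage comment are made-up values, not results from this project.

```python
import numpy as np

def mean_absolute_error(predicted_ages, actual_ages):
    """MAE: mean of |predicted bone age - actual bone age| over the evaluation set."""
    predicted_ages = np.asarray(predicted_ages, dtype=float)
    actual_ages = np.asarray(actual_ages, dtype=float)
    return np.abs(predicted_ages - actual_ages).mean()

# e.g. mean_absolute_error([120.0, 96.5], [126.0, 90.0]) -> 6.25
```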