This is a first approach to Deep Learning by a research group from Rosario, Santa Fe, Argentina. We started from another team's solution to get up to speed, and have since modified it to suit our needs and applied improvements where we saw fit.
The order of execution is:
- pickle_dataset
- main
- check
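The execution order above can be sketched as a small driver script. This is only a sketch: it assumes each stage is a plain script run as `python3 <script>.py` with no command-line arguments.

```python
import subprocess
import sys

# Pipeline stages, in the order listed above.
STAGES = ["pickle_dataset.py", "main.py", "check.py"]

def run_pipeline(scripts):
    """Run each script with the current interpreter, stopping on the first failure."""
    for script in scripts:
        print(f"=== running {script} ===")
        subprocess.run([sys.executable, script], check=True)

if __name__ == "__main__":
    run_pipeline(STAGES)
```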
Install on Linux
sudo apt-get install python3-matplotlib
python3 -m pip install --user keras
python3 -m pip install --user opencv-python
python3 -m pip install --user pandas
python3 -m pip install --user numpy
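After installing, a quick way to confirm the dependencies are importable is a check like the one below (note the import names differ from the package names: opencv-python imports as `cv2`):

```python
import importlib.util

# Import names for the packages installed above (opencv-python imports as cv2).
REQUIRED = ["keras", "matplotlib", "cv2", "pandas", "numpy"]

def missing_packages(names):
    """Return the subset of import names that cannot be found."""
    return [n for n in names if importlib.util.find_spec(n) is None]

if __name__ == "__main__":
    missing = missing_packages(REQUIRED)
    if missing:
        print("missing:", ", ".join(missing))
    else:
        print("all dependencies found")
```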
Download the rsna-bone-age dataset
- Generate the API key
Go to your Kaggle account page, https://www.kaggle.com/<username>/account
Click Create New API Token and save the kaggle.json file in your home directory (Linux users); more info: kaggle-api
- Install the kaggle CLI
pip install kaggle
- Move the API key to the kaggle config path
Run kaggle once so the config directory exists, then:
mv ./kaggle.json ~/.kaggle/kaggle.json
or, for the root user:
mv ./kaggle.json /root/.kaggle/kaggle.json
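The same move can be done from Python. This sketch assumes the default paths used above, and also restricts the key file to mode 600, since the Kaggle CLI warns when the key is readable by other users:

```python
import os
import shutil
from pathlib import Path

def install_kaggle_key(src="kaggle.json", config_dir=None):
    """Move the API key into the kaggle config dir and restrict its permissions."""
    config_dir = Path(config_dir) if config_dir else Path.home() / ".kaggle"
    config_dir.mkdir(parents=True, exist_ok=True)
    dest = config_dir / "kaggle.json"
    shutil.move(src, dest)
    os.chmod(dest, 0o600)  # the CLI warns if the key is world-readable
    return dest
```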
- Download the dataset into the repository
kaggle datasets download -d kmader/rsna-bone-age -p ./
- Unzip
unzip boneage-test-dataset.zip
unzip boneage-training-dataset.zip
- Move the CSV files into the dataset folders
mv boneage-training-dataset.csv ./boneage-training-dataset
mv boneage-test-dataset.csv ./boneage-test-dataset
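A small sanity check that each CSV ended up inside its dataset folder. The training column names used here (`id`, `boneage`, `male`) are what the RSNA bone-age CSV ships with, but treat them as an assumption if the dataset changes:

```python
from pathlib import Path
import pandas as pd

def check_dataset(folder, expected_cols):
    """Verify the CSV lives inside the dataset folder and has the expected columns."""
    folder = Path(folder)
    csv_path = folder / f"{folder.name}.csv"  # e.g. boneage-training-dataset/boneage-training-dataset.csv
    if not csv_path.exists():
        return f"missing {csv_path}"
    df = pd.read_csv(csv_path)
    missing = [c for c in expected_cols if c not in df.columns]
    if missing:
        return f"{csv_path} lacks columns: {missing}"
    return f"{csv_path}: {len(df)} rows OK"

if __name__ == "__main__":
    print(check_dataset("boneage-training-dataset", ["id", "boneage", "male"]))
```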
- Result
.
├── attention_model.py
├── boneage-test-dataset
├── boneage-training-dataset
├── check_no_gender.py
├── check.py
├── dataset_sample
├── .git
├── .gitignore
├── main_no_gender.py
├── main.py
├── pickle_dataset_multiprocessing.py
├── pickle_dataset.py
├── prueba.py
├── README.md
...