Experiments with TensorFlow 2.0, GPU support, persistent logging, and a stable Docker environment.
Notebook | Kind | Description | References |
---|---|---|---|
Intro | classification | Introduction to classification, using Logistic Regression and training with Grid Search Cross-validation (sketched after this table) | dataset |
Multi-class | classification | Introduction to multi-class classification, with examples | |
Study of Churn | classification | Study of client churn behavior | |
Study of Facts | classification | Study of a set of true and fake news | |
Churn | regression | Introduction to linear regression | |
Study of Weather WW2 | regression | Initial study of weather information during WW2 | dataset |
cifar10 | fine-tuning classification | CNN fine-tuned from ImageNet to solve Cifar10 | dataset |
Barzinga | fine-tuning classification | Classifying objects from a low-budget webcam | |
best-artworks-of-all-time | fine-tuning classification | Art authorship attribution fine-tuned from ImageNet | BAoAT dataset |
Study of Mapping Challenge | semantic segmentation | Segmentation of buildings in satellite images using U-Net and EfficientNetB4 | Mapping Challenge |
Study of Oxford-IIIT Pet | semantic segmentation | Segmentation of dogs and cats from the oxford_iiit_pet dataset | Oxford-IIIT Pet dataset |
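The Intro row above maps to a short scikit-learn recipe. The sketch below is a hedged illustration, not the notebook's actual code: the toy breast-cancer dataset, the `C` grid, and the train/test split are placeholder assumptions.

```python
# Minimal sketch: Logistic Regression tuned with Grid Search Cross-validation.
# Dataset, grid values and split are illustrative placeholders.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

x, y = load_breast_cancer(return_X_y=True)
x_train, x_test, y_train, y_test = train_test_split(x, y, random_state=42)

# Scale features, then search a small grid of regularization strengths.
model = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
grid = GridSearchCV(
    model,
    param_grid={'logisticregression__C': [0.01, 0.1, 1.0, 10.0]},
    cv=5,
)
grid.fit(x_train, y_train)

print(grid.best_params_, grid.score(x_test, y_test))
```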
Notebook | Kind | Description | References |
---|---|---|---|
gatys | style transfer | Style transfer between two arbitrary images | article |
VaE Cifar10 | variational autoencoder | VAE over Cifar10 and embedding of the dataset into its latent space | tutorial |
VaE Fashion MNIST | variational autoencoder | VAE over Fashion MNIST | tutorial |
Contrastive loss | siamese network | Siamese CNN trained with contrastive loss (sketched after this table) | article |
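The contrastive loss itself is compact enough to show here. This is a minimal TensorFlow sketch, assuming the convention where `y_true = 1` marks similar pairs and `distances` holds Euclidean distances between the twin embeddings; the `margin` default is a placeholder:

```python
import tensorflow as tf

def euclidean_distance(a, b, eps=1e-9):
    # Distance between the twins' embeddings; eps avoids a sqrt(0) gradient.
    return tf.sqrt(tf.reduce_sum(tf.square(a - b), axis=-1) + eps)

def contrastive_loss(y_true, distances, margin=1.0):
    # Pull similar pairs (y_true=1) together; push dissimilar pairs
    # (y_true=0) until they are at least `margin` apart.
    y_true = tf.cast(y_true, distances.dtype)
    positive = y_true * tf.square(distances)
    negative = (1.0 - y_true) * tf.square(tf.maximum(margin - distances, 0.0))
    return tf.reduce_mean(positive + negative)
```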
Notebook | Kind | Description | References |
---|---|---|---|
vanilla | Response-based Distillation | A small network is trained to emulate the output of a larger one (sketched after this table). | article |
FitNet | Activation-based Distillation | A network generalizes better when regularized by the intermediate activations of a larger one. | article |
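The vanilla notebook's core idea fits in one loss function. Below is a hedged TensorFlow sketch of response-based distillation; the `temperature` and `alpha` defaults are illustrative, not the notebook's actual values:

```python
import tensorflow as tf

def distillation_loss(teacher_logits, student_logits, labels,
                      temperature=4.0, alpha=0.1):
    # Soften both logit sets with a temperature and match the student to
    # the teacher via cross-entropy between the softened distributions
    # (equivalent to KL divergence up to the teacher's constant entropy).
    soft_teacher = tf.nn.softmax(teacher_logits / temperature)
    log_soft_student = tf.nn.log_softmax(student_logits / temperature)
    kd = -tf.reduce_sum(soft_teacher * log_soft_student, axis=-1)
    # The T^2 factor keeps soft-target gradients on the same scale as the
    # hard-label term.
    kd = tf.reduce_mean(kd) * temperature ** 2
    ce = tf.reduce_mean(tf.nn.sparse_softmax_cross_entropy_with_logits(
        labels=labels, logits=student_logits))
    return alpha * ce + (1.0 - alpha) * kd
```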
Notebook | Kind | Description | References |
---|---|---|---|
Activation Maximization | optimization | Network concept-representation by gradient ascent over the output value | class notes |
Grad-CAM | CAM | Explaining networks' decisions using gradient info and CAM (sketched after this table) | article |
Grad-CAM++ | CAM | Adjusts Grad-CAM weights to prevent activations from large regions dominating smaller ones | article |
Score-CAM | CAM | CAM based on Increase of Confidence | article |
Gradient Backpropagation | saliency | Gradient-based explaining method using raw input gradients | |
Guided Gradient Backpropagation | saliency | Gradient-based explaining method considering only positive intermediate gradients | article |
Smooth Gradient Backpropagation | saliency | Gradient-based explaining method that reduces gradient noise by averaging over perturbed copies of the input | article |
Full Gradient Representation | saliency | Explaining through function linearization, combining input gradients with bias information | article |
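To make the CAM rows concrete, here is a hedged Grad-CAM sketch in TensorFlow, assuming a Keras functional model; `layer_name` and `class_index` are caller-supplied, and normalization/upsampling of the map to the input size are omitted:

```python
import tensorflow as tf

def grad_cam(model, images, layer_name, class_index):
    # Expose both the chosen conv layer's maps and the final predictions.
    conv_layer = model.get_layer(layer_name)
    grad_model = tf.keras.Model(model.inputs, [conv_layer.output, model.output])

    with tf.GradientTape() as tape:
        conv_maps, predictions = grad_model(images)
        score = predictions[:, class_index]

    # Global-average-pool the gradients to get one weight per channel,
    # then combine the maps and keep only positive contributions.
    grads = tape.gradient(score, conv_maps)
    weights = tf.reduce_mean(grads, axis=(1, 2), keepdims=True)
    return tf.nn.relu(tf.reduce_sum(weights * conv_maps, axis=-1))
```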
Code in this repository is kept inside Jupyter notebooks, so any Jupyter server will do. I added a docker-compose environment to simplify things, which can be used as follows:
```shell
./actions/run.sh                                           # start jupyter notebook
./actions/run.sh {up,down,build}                           # more compose commands
./actions/run.sh exec experiments python path/to/file.py   # any command, really
./actions/run.sh tensorboard                               # start tensorboard
```