Neural Patterns of Visual Abstractions in CNN

This repository contains the code to generate, explore and exploit neural patterns of visual abstractions found in a pre-trained CNN.

To run all the experiments you need:

  • The ImageNet 2012 validation dataset (1,000 classes, 50k images)
  • A text file with the image names and their labeled synset
  • A library to extract the Full Network Embedding. This code uses Tiramisu, but an alternative method could be used.

The code is split into several stages:

  • Identification of the WordNet synsets that satisfy the image size requirements.
  • Data preparation for the FNE extraction process: the directory structure and files to be used.
  • FNE generation.
  • Neural pattern detection on the previously generated FNEs.

First Step

For this step you need a Python environment with NLTK installed.

Input: A file with the names of all the images and their corresponding class (written as an ImageNet synset).

file: step-1/data/imagenet2012_val_synset_codes.txt

Output: The partitions of all the synsets that satisfy the image size requirements.

For each synset, it creates two files under data:

  • step-1/data/synset_partitions/name_imgs.npz
  • step-1/data/synset_partitions/name_hypernims.txt
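The partitioning in this step can be sketched as follows. This is a minimal illustration, not the repository's actual code: the `MIN_IMGS` threshold, the 50/50 split, and the toy file contents are assumptions.

```python
# Hypothetical sketch of the step-1 partitioning: group validation images by
# synset, drop synsets with too few images, and save each partition as an npz.
import os
import tempfile
import numpy as np

MIN_IMGS = 3  # hypothetical size requirement; the real threshold is set by the experiment

def build_partitions(lines):
    """lines: iterable of 'image_name synset_code' strings (one per image)."""
    by_synset = {}
    for line in lines:
        name, synset = line.split()
        by_synset.setdefault(synset, []).append(name)
    partitions = {}
    for synset, imgs in by_synset.items():
        if len(imgs) < MIN_IMGS:
            continue  # synset does not satisfy the image size requirement
        split = len(imgs) // 2  # hypothetical 50/50 train/test split
        partitions[synset] = {"train": imgs[:split], "test": imgs[split:]}
    return partitions

# toy input mimicking imagenet2012_val_synset_codes.txt
parts = build_partitions([
    "ILSVRC2012_val_00000001.JPEG n02084071",
    "ILSVRC2012_val_00000002.JPEG n02084071",
    "ILSVRC2012_val_00000003.JPEG n02084071",
    "ILSVRC2012_val_00000004.JPEG n02084071",
    "ILSVRC2012_val_00000005.JPEG n01440764",  # only 1 image: filtered out
])

out_dir = tempfile.mkdtemp()
for synset, part in parts.items():
    np.savez(os.path.join(out_dir, "%s_imgs.npz" % synset),
             train=part["train"], test=part["test"])
```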

Second Step

For this step you need the files generated in step 1. It generates the necessary directory structure and files to feed the FNE extraction process.

Input: The files generated in step 1, located in the folder step-1/data/synset_partitions/.

Output: The folders with the symbolic links under path_location/imgs for each of the selected synsets.

For example, if we take the synset 'dog', the structure of links generated by this code will be:

  • path_location/imgs/dog/train/dog/images0008.JPEG
  • path_location/imgs/dog/train/no_dog/images00058.JPEG
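The link structure above can be sketched with standard library calls. The function name and arguments are illustrative assumptions; only the resulting directory layout follows the example.

```python
# Hypothetical sketch of the step-2 layout: for a chosen synset, create
# train/<synset> and train/no_<synset> folders under path_location/imgs and
# fill them with symbolic links to the original ImageNet images.
import os
import tempfile

def link_partition(path_location, synset, pos_imgs, neg_imgs, src_dir):
    base = os.path.join(path_location, "imgs", synset, "train")
    pos_dir = os.path.join(base, synset)            # e.g. .../dog/train/dog
    neg_dir = os.path.join(base, "no_" + synset)    # e.g. .../dog/train/no_dog
    os.makedirs(pos_dir, exist_ok=True)
    os.makedirs(neg_dir, exist_ok=True)
    for name in pos_imgs:
        os.symlink(os.path.join(src_dir, name), os.path.join(pos_dir, name))
    for name in neg_imgs:
        os.symlink(os.path.join(src_dir, name), os.path.join(neg_dir, name))

# toy usage with temporary directories standing in for the real dataset
src = tempfile.mkdtemp()
dst = tempfile.mkdtemp()
for name in ("images0008.JPEG", "images00058.JPEG"):
    open(os.path.join(src, name), "w").close()
link_partition(dst, "dog", ["images0008.JPEG"], ["images00058.JPEG"], src)
```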

Once you have created the structure, if you need to run step 3 you have to copy the data/imgs/ folder to tiramisu_semantic_transfer/imgs/.

Third Step

This step creates the FNE for the selected synsets. This version uses the Tiramisu 3.0 interface, but any other feature extraction library should work.

Input: The folders with the symbolic links under path_location/imgs for each of the selected synsets.

Output: One embedding per selected synset.
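This repository delegates the FNE computation to Tiramisu. As background only, here is a minimal sketch of the discretization idea behind a Full Network Embedding: standardize each feature over the dataset, then map activations to {-1, 0, 1} with a threshold. The 0.15 threshold and the toy data are assumptions, not the values Tiramisu uses.

```python
# Background sketch (not the repository's code): discretizing standardized
# CNN activations into a ternary Full Network Embedding.
import numpy as np

def discretize_fne(activations, threshold=0.15):
    """activations: (n_images, n_features) raw CNN activations.
    threshold is a hypothetical value; the real one depends on the setup."""
    mu = activations.mean(axis=0)
    sigma = activations.std(axis=0) + 1e-8  # avoid division by zero
    z = (activations - mu) / sigma          # standardize each feature
    fne = np.zeros_like(z, dtype=np.int8)
    fne[z > threshold] = 1                  # strongly active
    fne[z < -threshold] = -1                # strongly inactive
    return fne

# toy data: feature 0 varies across images, feature 1 is constant
acts = np.array([[0.0, 5.0],
                 [1.0, 5.0],
                 [2.0, 5.0]])
fne = discretize_fne(acts)
# feature 0 maps to [-1, 0, 1]; the constant feature 1 maps to 0 everywhere
```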

Fourth Step

In this step we extract, for each selected synset, the features whose embedding value is 1 in more than 50% of the images. This is the list of neural patterns of visual abstractions.

You need to run this step from MareNostrum.

Input: One embedding per selected synset. The embeddings are loaded from '/gpfs/projects/bsc28/tiramisu_semantic_transfer'.

Output: An npz file with the representative features per synset, saved in 'step-4/data/feature_lists/'. This file also contains a dictionary with the layer architecture.
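The >50% selection rule can be sketched directly from the step description. The function name, the toy embedding, and the layer dictionary contents are illustrative assumptions.

```python
# Hypothetical sketch of step 4: for one synset, keep the features whose
# embedding value is 1 in more than 50% of its images.
import os
import tempfile
import numpy as np

def representative_features(fne, min_fraction=0.5):
    """fne: (n_images, n_features) embedding with values in {-1, 0, 1}."""
    frac_ones = (fne == 1).mean(axis=0)          # fraction of images with value 1
    return np.where(frac_ones > min_fraction)[0]  # indices of representative features

# toy embedding: 3 images x 4 features
fne = np.array([
    [1, 1, -1, 0],
    [1, 0, -1, 1],
    [1, 1,  0, 0],
])
feats = representative_features(fne)
# feature 0 is 1 in 3/3 images and feature 1 in 2/3, so both are kept

out = os.path.join(tempfile.mkdtemp(), "dog_feature_list.npz")
layers = {"conv1": 64, "conv2": 128}  # hypothetical layer-architecture dictionary
np.savez(out, features=feats, layers=layers)
```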
