
SpikingAutoencoder

This repository contains the code for the work presented in the paper "Synthesizing Images from Spatio-Temporal Representations using Spike-based Backpropagation": https://www.frontiersin.org/articles/10.3389/fnins.2019.00621/full

For audio-to-image conversion, two different datasets are used:
Dataset A: one image per class
Dataset B: one image per audio
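
The two pairing schemes above can be sketched as follows. This is a minimal illustration only, not the repository's actual preprocessing (which is done by the MATLAB scripts listed below); the file names, class labels, and mapping helpers here are all hypothetical.

```python
# Hypothetical audio clips tagged with a class label (e.g. spoken digits).
audio_clips = [
    ("zero_speaker1.wav", 0),  # (audio file, digit class)
    ("zero_speaker2.wav", 0),
    ("one_speaker1.wav", 1),
]

# Dataset A: every audio clip of a class is paired with the SAME image
# representing that class (one image per class).
class_image = {0: "zero_canonical.png", 1: "one_canonical.png"}
dataset_a = [(clip, class_image[label]) for clip, label in audio_clips]

# Dataset B: every audio clip is paired with its OWN image
# (one image per audio).
dataset_b = [(clip, clip.replace(".wav", ".png")) for clip, _ in audio_clips]

print(dataset_a)
print(dataset_b)
```

In Dataset A, the two "zero" clips share one target image; in Dataset B, each clip gets a distinct target image.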

The files for Dataset A are:
prepare_imdb.m
ac_test.m
ac_train.m

The corresponding files for Dataset B are:
prepare_imdb_v2.m
ac_test_v2.m
ac_train_v2.m

The preprocessed multimodal datasets can be found here:

Dataset A: https://purdue0-my.sharepoint.com/:u:/g/personal/roy77_purdue_edu/EfXH0MneHTJOo_WySaIXyZABL3GiEVhU6UITmeZckBsXAg?e=3vDciX

Dataset B: https://purdue0-my.sharepoint.com/:u:/g/personal/roy77_purdue_edu/ES5akBNmM1tOmn_c6mFvLBABD5Pf10hH33GkDM8pXpoEIQ?e=BSyuzq
