This is a music visualizer based on the deep dream approach. An audio file is processed by a musically motivated neural network (musicnn), which extracts musical features from the song. These features are connected to the layers of another neural network, an image classifier turned upside down to create deep dream pictures.
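At the core of the deep dream approach is gradient ascent on the input image: the image is repeatedly nudged in the direction that increases a chosen layer's activation. A minimal self-contained sketch of that loop, with a toy differentiable function standing in for a real network layer (the project itself does this with Inception v1 in TensorFlow):

```python
import numpy as np

def activation(img):
    # Toy stand-in for a network layer's total activation.
    return np.sum(np.sin(img))

def activation_grad(img):
    # Analytic gradient of the toy activation above.
    return np.cos(img)

def dream_step(img, lr=0.1):
    g = activation_grad(img)
    g /= np.abs(g).mean() + 1e-8  # normalize the gradient, as deep dream does
    return img + lr * g           # gradient ascent: amplify the activation

img = np.zeros((8, 8))
before = activation(img)
for _ in range(50):
    img = dream_step(img)
# After the loop, activation(img) is larger than before: the "image"
# has been reshaped to excite the layer, which is what produces the
# characteristic deep dream patterns in a real CNN.
```

In the real model the gradient is computed by backpropagation through the image classifier, but the update loop has exactly this shape.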
These instructions use Windows syntax.
- To run neural networks on a GPU (highly recommended), install the required prerequisites for TensorFlow 2.
- Get the source code into your working folder.
- Install the dependencies:
pipenv sync
- Activate the pipenv environment:
pipenv shell
- Tweak the parameters in main.py and run:
python main.py SONG.wav
Musicnn is used to extract features from audio.
Inception v1 by Google is the deep dream model.
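One simple way to wire the two networks together is to turn each frame of audio features into per-layer weights for the dream objective. The sketch below is a hypothetical illustration, not the project's actual mapping: the layer names, feature dimensionality, and pooling scheme are all assumptions.

```python
import numpy as np

# Illustrative Inception v1 mixed-layer names (assumed, see main.py for
# the layers the project actually uses).
LAYERS = ["mixed3a", "mixed4a", "mixed4e", "mixed5b"]

def layer_weights(features):
    """Map one frame of audio features to non-negative per-layer weights."""
    f = np.asarray(features, dtype=float)
    # Pool the feature vector into as many bins as there are layers.
    bins = np.array_split(f, len(LAYERS))
    w = np.clip(np.array([b.mean() for b in bins]), 0.0, None)
    s = w.sum()
    # Normalize so the weights sum to 1; fall back to uniform weights.
    return w / s if s > 0 else np.full(len(LAYERS), 1.0 / len(LAYERS))

# Feature dimensionality here is arbitrary for the demo.
weights = layer_weights(np.random.rand(200))
```

With weights like these, louder or more active passages of the song can emphasize different layers of the classifier, so the dreamed imagery changes with the music.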