For the unfamiliar: generative models are one of the most promising approaches to building models that genuinely understand the data they are trained on. To train a generative model we first collect a large amount of data in some domain (e.g., millions of images, sentences, or sounds) and then train a model to generate data like it. The intuition behind this approach follows a famous quote from Richard Feynman:

> What I cannot create, I do not understand.
The trick is that the neural networks we use as generative models have significantly fewer parameters than the amount of data we train them on, so the models are forced to discover and efficiently internalize the essence of the data in order to generate it.
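As a back-of-the-envelope illustration of that compression argument, here is a quick calculation with made-up sizes (the layer widths and dataset size below are illustrative assumptions, not the models in this repository):

```python
# Compare the size of a hypothetical generator network with the size of
# the dataset it would be trained on. All numbers are illustrative.

def dense_param_count(layer_sizes):
    """Parameter count of a fully connected net: weights plus biases per layer."""
    return sum((n_in + 1) * n_out
               for n_in, n_out in zip(layer_sizes, layer_sizes[1:]))

# Hypothetical generator: 100-dim noise -> 28x28 grayscale image (784 values).
model_params = dense_param_count([100, 256, 512, 784])

# Hypothetical dataset: one million images of 784 pixel values each.
dataset_values = 1_000_000 * 784

print(f"model parameters:  {model_params:,}")
print(f"dataset values:    {dataset_values:,}")
print(f"compression ratio: {dataset_values / model_params:.0f}x")
```

The model has to squeeze hundreds of values of data into each parameter, which is exactly why it cannot simply memorize the training set.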
This repository is dedicated to playing around with neural networks. The idea is to poke at various networks by doing unconventional things with them: teaching a sequence-to-sequence model arithmetic, doing classification with a generative model, and so on. I've wanted to do this for a while but hadn't thought of a good way to compile the experiments, so this will have to do!
My main focus is on implementing various kinds of Generative Adversarial Networks and Variational Autoencoders. Special thanks to sentdex, Kaggle, and eriklindernoren for the material that makes this repository a collection of amazing generative models.
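At their core, GANs reduce to a pair of competing losses. As a sketch of the standard non-saturating GAN objective, computed here in plain Python on made-up discriminator outputs (the scores below are illustrative, not from any model in this repository; VAEs have an analogous objective, the ELBO):

```python
import math

def d_loss(real_scores, fake_scores):
    """Discriminator loss: binary cross-entropy pushing real scores
    toward 1 and fake scores toward 0."""
    real_term = -sum(math.log(s) for s in real_scores) / len(real_scores)
    fake_term = -sum(math.log(1 - s) for s in fake_scores) / len(fake_scores)
    return real_term + fake_term

def g_loss(fake_scores):
    """Non-saturating generator loss: push the discriminator's
    scores on generated samples toward 1."""
    return -sum(math.log(s) for s in fake_scores) / len(fake_scores)

# Made-up discriminator outputs (probability that a sample is real).
real = [0.9, 0.8]   # D is fairly confident these are real
fake = [0.2, 0.3]   # D mostly spots the fakes, so G's loss is high
print("D loss:", d_loss(real, fake))
print("G loss:", g_loss(fake))
```

Training alternates between the two: the discriminator descends `d_loss`, then the generator descends `g_loss`, until the fakes become hard to tell apart from the data.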
A foundation in Machine Learning and Deep Learning, specifically with TensorFlow, is necessary to follow along.
At the very least you will need TensorFlow installed, and Python of course! I am currently using:
* Python 3.6
* TensorFlow 2.0 / TensorFlow 1.7
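A quick sanity check before running anything, written so it works whether or not TensorFlow is installed (`tensorflow` is the pip package name):

```python
# Report whether TensorFlow is importable, and its version if so.
import importlib.util

tf_available = importlib.util.find_spec("tensorflow") is not None
if tf_available:
    import tensorflow as tf
    print("Found TensorFlow", tf.__version__)
else:
    print("TensorFlow not found; install it with: pip install tensorflow")
```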
I've personally always really liked generative models. Despite being much smaller than the data they are trained on, they can produce results remarkably similar to it. They don't appear to have many practical uses yet, but you can do fun things with them, like making art or music.