Neural-Style-Transfer

Implements neural style transfer in TensorFlow 2.0 using the VGG19 network architecture: the content image is recomposed in the style of a reference picture, with both images supplied by the user.

1. Libraries used:

- TensorFlow (code compatible with the 2.0 alpha and 1.1x releases)
- scikit-learn
- pandas
- NumPy
- Matplotlib
- Pillow
- functools (Python standard library)

2. Basic Approach:

Points of importance:
    1. The VGG19 model pretrained on ImageNet is used so that image features do not have to be learned from scratch.
    2. Style and content layers are selected from the VGG19 model and are used to compute the style loss and content loss values respectively.
    3. The content loss is the Euclidean distance between the content-layer feature representations of the generated image and the content image (see the sketch after this list).
    4. The style loss compares the Gram matrices of the style-layer activations of the generated image and the style image.
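
A minimal sketch of these two loss terms in TensorFlow 2.0 Keras is shown below. The layer choices follow the referenced article and may differ from this notebook's (which adds content layers); the function names are illustrative, not the notebook's actual helpers:

```python
import tensorflow as tf

# Layer choices are an assumption based on the referenced article;
# the notebook uses additional content layers.
content_layers = ['block5_conv2']
style_layers = ['block1_conv1', 'block2_conv1', 'block3_conv1',
                'block4_conv1', 'block5_conv1']

def get_feature_model():
    """Map an image to the selected VGG19 activations (style layers first)."""
    vgg = tf.keras.applications.VGG19(include_top=False, weights='imagenet')
    vgg.trainable = False  # pretrained ImageNet features are frozen
    outputs = [vgg.get_layer(name).output for name in style_layers + content_layers]
    return tf.keras.Model(vgg.input, outputs)

def content_loss(generated_features, content_features):
    """Squared Euclidean distance between content-layer activations."""
    return tf.reduce_mean(tf.square(generated_features - content_features))

def gram_matrix(features):
    """Gram matrix of a (1, H, W, C) activation tensor: channel correlations."""
    channels = int(features.shape[-1])
    a = tf.reshape(features, [-1, channels])   # (H*W, C)
    n = tf.shape(a)[0]
    gram = tf.matmul(a, a, transpose_a=True)   # (C, C)
    return gram / tf.cast(n, tf.float32)

def style_loss(generated_features, style_features):
    """Mean squared difference between Gram matrices."""
    return tf.reduce_mean(tf.square(gram_matrix(generated_features) -
                                    gram_matrix(style_features)))
```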

3. Changes from Source Article:

The major changes from the source article (see References below) include:
    - Additional visualizations every 10 epochs (a sketch of such a loop follows this list)
    - Compatibility with TensorFlow 2.0
    - Additional content layers
    - Changes to the helper functions
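
A hedged sketch of an optimization loop with intermediate visualization every 10 epochs, reusing `get_feature_model`, `content_loss`, and `style_loss` from the sketch above. It assumes `content_image` and `style_image` are already loaded as `(1, H, W, 3)` float tensors in the 0-255 range; VGG preprocessing is omitted for brevity, and the weights and learning rate are illustrative, not the notebook's values:

```python
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf

style_weight, content_weight = 1e-2, 1e4  # illustrative assumptions
epochs = 100

extractor = get_feature_model()
style_targets = extractor(style_image)[:len(style_layers)]
content_targets = extractor(content_image)[len(style_layers):]

generated = tf.Variable(content_image)  # optimize pixels, starting from content
optimizer = tf.keras.optimizers.Adam(learning_rate=0.02)

for epoch in range(1, epochs + 1):
    with tf.GradientTape() as tape:
        feats = extractor(generated)
        s_feats = feats[:len(style_layers)]
        c_feats = feats[len(style_layers):]
        loss = style_weight * tf.add_n(
            [style_loss(s, t) for s, t in zip(s_feats, style_targets)])
        loss += content_weight * tf.add_n(
            [content_loss(c, t) for c, t in zip(c_feats, content_targets)])
    grad = tape.gradient(loss, generated)
    optimizer.apply_gradients([(grad, generated)])
    generated.assign(tf.clip_by_value(generated, 0.0, 255.0))

    if epoch % 10 == 0:  # intermediate visualization every 10 epochs
        plt.imshow(np.squeeze(generated.numpy()).astype('uint8'))
        plt.title(f'Epoch {epoch}')
        plt.axis('off')
        plt.show()
```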

4. References used:

- https://medium.com/tensorflow/neural-style-transfer-creating-art-with-deep-learning-using-tf-keras-and-eager-execution-7d541ac31398
- https://github.com/vinayak19th/Neural-Style-Transfer

Outputs:

- Content Image
- Style Image
- Output Image
