TensorFlow Variables are in-memory buffers containing tensors. Learn how to use them to hold and update model parameters during training.
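A minimal sketch of that idea, assuming the TF 1.x-style graph API (written against `tf.compat.v1` so it also runs on modern installs); the variable name `counter` is illustrative:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# A Variable is an in-memory buffer whose value persists across session.run calls,
# which is what lets it hold model parameters that are updated during training.
counter = tf.Variable(0, name="counter")
increment = tf.assign_add(counter, 1)  # an update op, like a gradient step would be

with tf.Session() as sess:
    # Variables must be explicitly initialized before first use.
    sess.run(tf.global_variables_initializer())
    for _ in range(3):
        sess.run(increment)
    value = sess.run(counter)
```

After three runs of the update op, `value` holds 3, because the buffer persisted between calls.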
A step-by-step walkthrough of using TensorFlow infrastructure to train models at scale, using MNIST handwritten digit recognition as a toy example.
TensorBoard is a useful tool for visualizing the training and evaluation of your model(s). This tutorial describes how to build and run TensorBoard, as well as how to add Summary ops to automatically write data to the event files that TensorBoard uses for display.
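A short sketch of the Summary-op pattern, assuming the TF 1.x-style API via `tf.compat.v1`; the log directory `/tmp/example_logs` and the scalar tag `loss` are placeholders chosen for illustration:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

loss = tf.placeholder(tf.float32, name="loss")
tf.summary.scalar("loss", loss)       # a Summary op recording one scalar
merged = tf.summary.merge_all()       # bundle all Summary ops into one

with tf.Session() as sess:
    # The FileWriter appends serialized summaries to the event files
    # that TensorBoard reads for display.
    writer = tf.summary.FileWriter("/tmp/example_logs", sess.graph)
    for step in range(3):
        summary = sess.run(merged, feed_dict={loss: 1.0 / (step + 1)})
        writer.add_summary(summary, step)
    writer.close()
```

Running `tensorboard --logdir /tmp/example_logs` would then plot the recorded scalars over time.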
This tutorial describes how to use the graph visualizer in TensorBoard to help you understand the dataflow graph and debug it.
This tutorial describes the three main methods of getting data into your TensorFlow program: feeding, reading, and preloading.
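The first of those three methods, feeding, can be sketched as follows (assuming the TF 1.x-style graph API via `tf.compat.v1`): a placeholder reserves a spot in the graph, and `feed_dict` supplies its value at run time.

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# A placeholder has no value until one is fed in at session.run time.
x = tf.placeholder(tf.float32, shape=[None])
doubled = x * 2.0

with tf.Session() as sess:
    # feed_dict injects Python data directly into the graph for this run.
    result = sess.run(doubled, feed_dict={x: [1.0, 2.0, 3.0]})
```

Reading (via file readers) and preloading (via constants or Variables) trade this per-run flexibility for throughput on larger datasets.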
This tutorial describes the various constructs implemented by TensorFlow to facilitate asynchronous and concurrent training.
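One of those constructs is the queue, which decouples producers from consumers; a minimal single-threaded sketch, assuming the TF 1.x-style API via `tf.compat.v1` (real input pipelines would drive the queue from multiple threads with a `Coordinator` and `QueueRunner`):

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# A FIFOQueue buffers tensors between graph stages.
queue = tf.FIFOQueue(capacity=10, dtypes=[tf.int32])
enqueue = queue.enqueue_many([[1, 2, 3]])
dequeue = queue.dequeue()

with tf.Session() as sess:
    sess.run(enqueue)
    # Elements come back in first-in, first-out order.
    values = [sess.run(dequeue) for _ in range(3)]
```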
TensorFlow already ships with a large suite of Ops that you can compose in your graph, but here are the details of how to add your own custom Op.
If you have a sizable custom data set, you may want to consider extending TensorFlow to read your data directly in its native format. Here's how.
This tutorial describes how to construct and execute models on GPU(s).
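A small sketch of explicit device placement, assuming the TF 1.x-style API via `tf.compat.v1`; `allow_soft_placement` is used here so the snippet still runs on a machine without a GPU:

```python
import tensorflow.compat.v1 as tf
tf.disable_eager_execution()

# Pin these ops to the first GPU, if one is available.
with tf.device("/gpu:0"):
    a = tf.constant([1.0, 2.0])
    b = tf.constant([3.0, 4.0])
    c = a + b

# Soft placement falls back to CPU when the requested device is absent.
config = tf.ConfigProto(allow_soft_placement=True)
with tf.Session(config=config) as sess:
    result = sess.run(c)
```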
When deploying large models on multiple GPUs, or when unrolling complex LSTMs or RNNs, it is often necessary to access the same Variable objects from different locations in the model-construction code. The "Variable Scope" mechanism is designed to facilitate that.