C++ Inference Module for Generative TensorFlow Models
- Follow the Inference Library installation steps.
- Build and run the module against an existing model (AR, GAN, VAE):
cd modules/
mkdir build
cd build
cmake3 .. -DCMAKE_PREFIX_PATH=<PATH_TO_INFERENCE_LIBRARY_INSTALL_PATH>
make
./EventGeneration
- Save your input/output node names. For example, given a Python model:
# Event Data Inputs
x_sample = tf.placeholder(tf.float32, shape=xs, name='input_cells')
y_sample = tf.placeholder(tf.float32, shape=xs[0], name="input_labels")
# Generated Result
generation = tf.add(a, b, name='output_result')
- Store the graph definition in a .pb file, as well as the latest checkpoint in .ckpt files (.data, .index, .meta).
- Note your input data shape information (both for samples and labels). For example:
modelType = "dcgan"
modelGraph = "../dcgan.pb"
modelRestore = "../model.b32.ckpt"
inputNode = "input_cells"
labelNode = "input_labels"
outputNode = "output_result"
inputShape = {64,8,8,24}
labelShape = {64,100}
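The export step above (a .pb graph definition plus .ckpt checkpoint files) can be sketched with the TF1-style API. This is a minimal, hedged example, not the project's actual training code: the tiny graph, the variable `w`, and the temporary output directory are placeholders, while the node names mirror the README's conventions.

```python
# Sketch: exporting a graph definition (.pb) and a checkpoint (.ckpt)
# using the TF1-style API (tf.compat.v1). The graph here is a toy
# stand-in for a real generative model.
import os
import tempfile
import tensorflow as tf

tf1 = tf.compat.v1
tf1.disable_eager_execution()

graph = tf1.Graph()
with graph.as_default():
    # Input node, named as in the README example.
    x = tf1.placeholder(tf.float32, shape=[None, 8, 8, 24], name='input_cells')
    # A variable so the checkpoint has something to store (placeholder logic).
    w = tf1.get_variable('w', shape=[], dtype=tf.float32,
                         initializer=tf1.zeros_initializer())
    # Output node, named as in the README example.
    out = tf1.add(x, w, name='output_result')
    saver = tf1.train.Saver()

    out_dir = tempfile.mkdtemp()
    with tf1.Session() as sess:
        sess.run(tf1.global_variables_initializer())
        # Serialize the graph structure to a binary .pb file.
        tf1.train.write_graph(sess.graph_def, out_dir, 'model.pb', as_text=False)
        # Save variable values; this produces the .data-*, .index and
        # .meta files next to the given checkpoint prefix.
        ckpt_prefix = saver.save(sess, os.path.join(out_dir, 'model.ckpt'))

exported = sorted(os.listdir(out_dir))
```

After this runs, `out_dir` contains `model.pb` alongside `model.ckpt.data-00000-of-00001`, `model.ckpt.index`, and `model.ckpt.meta`, which is the file set the inference module expects to load.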
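The shape entries in the config determine the size of the flat buffers exchanged with the inference library. A quick sanity check of the flattened element counts, using the dcgan shapes above (`num_elements` is a hypothetical helper, not part of the library):

```python
# Compute the number of float values implied by each configured shape.
from functools import reduce
from operator import mul

input_shape = (64, 8, 8, 24)   # inputShape: batch, height, width, channels
label_shape = (64, 100)        # labelShape: batch, label length

def num_elements(shape):
    """Total element count of a flat buffer with this shape."""
    return reduce(mul, shape, 1)

print(num_elements(input_shape))  # 64*8*8*24 = 98304 floats per batch
print(num_elements(label_shape))  # 64*100 = 6400 floats per batch
```

If the buffers you hand to the module do not match these counts, the graph's placeholders will reject the feed, so this is a cheap check to run before debugging deeper.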