Attempted PyTorch implementation of Active Neural Generative Coding (ANGC) from *Backprop-Free Reinforcement Learning with Active Neural Generative Coding*. This builds on Neural Generative Coding (NGC) from *The neural coding framework for learning generative models* and applies it to RL problems.
My notation differs in spots from the paper. One major difference is that I use a row-major convention (1-d vectors are row vectors instead of column vectors), so that I don't have to worry about transposing things when going from math to implementation.
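For example, under the row-major convention a top-down prediction is a left-multiplication by the state row vector (a minimal illustration; the shapes here are arbitrary):

```python
import torch

W = torch.randn(20, 10)  # maps a 20-dim layer down to a 10-dim layer
z = torch.randn(1, 20)   # a state kept as a row vector
z_bar = z @ W            # prediction comes out as a (1, 10) row vector, no transpose needed
```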
Variable | Description |
---|---|
$L$ | number of layers |
$T$ | number of inference time steps (paper calls this $K$) |
$\gamma$ | inference update leak coefficient |
$\beta_e$ | prediction error coefficient |
$\beta$ | inference state update coefficient |
$\gamma_e$ | error weight update coefficient; "controls the time-scale at which the error synapses are adjusted (usually values in the range of [0.9, 1.0] are used)" |
$J_\ell$ | dimensionality of layer $\ell$ |
$\mathbf{z}^\ell$ | hidden layer state vectors, $\ell \in \{1, \dots, L-1\}$ |
$\mathbf{z}^0$ | bottom sensory vector, clamped to sensory input |
$\mathbf{z}^L$ | top sensory vector, clamped to sensory input |
$\mathbf{W}^\ell$ | (top-down) prediction weights for layer $\ell$ |
$\mathbf{E}^\ell$ | (bottom-up) error weights for layer $\ell$ |
$\phi^\ell$ | activation function for layer $\ell$ |
$g^\ell$ | another activation function for layer $\ell$ |
$\bar{\mathbf{z}}^\ell$ | top-down prediction vector of $\mathbf{z}^\ell$ |
$\mathbf{e}^\ell$ | prediction error vector for layer $\ell$ |
$\mathbf{d}^\ell$ | bottom-up + top-down inference pressure on layer $\ell$ |
$\mathbf{M}_W^\ell$ | generative weights modulation matrix for layer $\ell$ |
$\mathbf{M}_E^\ell$ | error weights modulation matrix for layer $\ell$ |
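A minimal sketch of how these variables might be laid out in PyTorch (the example sizes and the scaled Gaussian initialization are my assumptions, not the paper's):

```python
import torch

dims = [10, 64, 64, 20]  # J_0 ... J_L, bottom (layer 0) to top (layer L)
L = len(dims) - 1        # number of layers

# W[l-1] holds the prediction weights W^l; in row-major form W^l has shape
# (J_l, J_{l-1}) so that phi(z^l) @ W^l predicts layer l-1.
W = [0.05 * torch.randn(dims[l], dims[l - 1]) for l in range(1, L + 1)]
# E[l-1] holds the error weights E^l, shape (J_{l-1}, J_l), carrying the
# error e^{l-1} back up to layer l.
E = [0.05 * torch.randn(dims[l - 1], dims[l]) for l in range(1, L + 1)]
```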
There's an inference phase that iterates, for time steps $t = 1, \dots, T$:

$$\bar{\mathbf{z}}^\ell = g^\ell\!\left(\phi^{\ell+1}(\mathbf{z}^{\ell+1})\,\mathbf{W}^{\ell+1}\right), \quad \ell = 0, \dots, L-1$$

$$\mathbf{e}^\ell = \beta_e \left(\mathbf{z}^\ell - \bar{\mathbf{z}}^\ell\right)$$

$$\mathbf{d}^\ell = \mathbf{e}^{\ell-1}\,\mathbf{E}^\ell - \mathbf{e}^\ell$$

$$\mathbf{z}^\ell \leftarrow \mathbf{z}^\ell + \beta\left(-\gamma\,\mathbf{z}^\ell + \mathbf{d}^\ell\right), \quad \ell = 1, \dots, L-1$$

(where the last equation is slightly simplified to exclude the NGC lateral term, which is not used in ANGC):
In PyTorch, the settling loop might look like the sketch below. It follows my notation above; the hyperparameter defaults, layer sizes, and initialization are illustrative assumptions, not values from the paper.
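```python
import torch

def settle(z, W, E, T=20, beta=0.1, gamma=0.001, beta_e=1.0,
           phi=torch.relu, g=lambda x: x):
    """Run the inference (settling) phase for T steps.

    z: list of row-vector states [z^0, ..., z^L]; z[0] and z[-1] stay clamped.
    W[l-1] / E[l-1]: prediction / error weights for layer l, as in the table.
    Returns the settled states and the final prediction errors.
    """
    L = len(z) - 1
    for _ in range(T):
        # top-down predictions and error units for layers 0 .. L-1
        z_bar = [g(phi(z[l + 1]) @ W[l]) for l in range(L)]
        e = [beta_e * (z[l] - z_bar[l]) for l in range(L)]
        for l in range(1, L):  # hidden states only; z^0 and z^L are clamped
            d = e[l - 1] @ E[l - 1] - e[l]            # bottom-up + top-down pressure
            z[l] = z[l] + beta * (-gamma * z[l] + d)  # leaky state update
    return z, e

# tiny demo with random weights and random clamped sensory vectors
dims = [10, 64, 64, 20]
W = [0.05 * torch.randn(dims[l + 1], dims[l]) for l in range(len(dims) - 1)]
E = [0.05 * torch.randn(dims[l], dims[l + 1]) for l in range(len(dims) - 1)]
z = [torch.zeros(1, d) for d in dims]
z[0] = torch.randn(1, dims[0])    # bottom sensory vector
z[-1] = torch.randn(1, dims[-1])  # top sensory vector
z, e = settle(z, W, E)
print([err.norm().item() for err in e])
```

After settling, the final error vectors `e` are what the weight updates consume (together with the modulation matrices from the table).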