amolk/AGI-experiments


Various experiments related to AGI

How would we build an AGI agent?

  • Bottom-up approaches assemble models, algorithms, and modules with specific capabilities into a composite AGI system.
  • Top-down approaches build the simplest, least capable "minimal" AGI agent first and then iterate to add capabilities.

I am investigating top-down approaches to AGI.

How would we describe such a "minimal" AGI system? What capabilities would it have? How would we recognize it as an AGI? Would there be a single fundamental algorithm? What properties should such an algorithm have? How would that algorithm produce an agent that exhibits representation learning and reinforcement learning?

I have been exploring these questions for a while now. This repository contains some of the investigations.

A few interesting notebooks:

  • Representing values as histograms obviates the need for precision weighting.
  • Can we learn latent variable probability distributions directly? This notebook explores the Quantized Distribution Auto Encoder approach.
  • Attractor learning: Imagine a neural network model that is allowed to settle its activity over time through lateral connection feedback. Can we train the network to reach a settled activity pattern more quickly? This is along the lines of LISSOM, but trained with backprop so we can use the standard DNN toolset.
  • VAE convolution kernel: What would happen if we used VAEs as convolution kernels? This notebook explores building a four-layer network that attempts to build a small top-level latent representation of MNIST digits. Each layer is trained both independently (like a DBN) and with top-down feedback.
  • Domain quantization / Information density normalization: Each input is represented as a histogram. The model then adjusts the bins so that they are closely packed where many data points fall, i.e. precision follows information density. The model transforms the input such that, if the input follows the learned distribution, the output is piecewise linear, which might help downstream layers learn better.
  • What is a good-looking distribution? - This notebook explores metrics to quantify whether a distribution is good-looking, i.e. has a high signal-to-noise ratio.
  • Active dendrite models: This script explores how an active dendrite based model could produce a sparse latent representation of inputs.
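The domain quantization idea above can be sketched with quantile-based binning: placing bin edges at quantiles of the data packs bins tightly where samples are dense, and the resulting transform is piecewise linear. This is a minimal illustration, not the notebook's actual code; the function names and bin count are illustrative.

```python
import numpy as np

def fit_bins(samples, n_bins=16):
    """Place bin edges at quantiles of the data, so dense regions get
    narrower bins -- precision follows information density."""
    qs = np.linspace(0.0, 1.0, n_bins + 1)
    return np.quantile(samples, qs)

def transform(x, edges):
    """Piecewise-linear map: each edge is sent to its quantile level,
    with linear interpolation between edges.  Inputs drawn from the
    fitted distribution come out roughly uniform on [0, 1]."""
    levels = np.linspace(0.0, 1.0, len(edges))
    return np.interp(x, edges, levels)

rng = np.random.default_rng(0)
data = rng.normal(size=10_000)       # dense near 0, sparse in the tails
edges = fit_bins(data, n_bins=16)
y = transform(data, edges)
# dense input regions are stretched out, sparse tails compressed
```

Under this sketch, an input that follows the learned distribution is mapped to an approximately uniform output, which is one way to read the "piecewise linear" normalization described above.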

Notes on various approaches explored

Active dendrite framework

The main idea behind the active dendrite framework is to make neurons more prone to fire when the appropriate context is received. This could be a mechanism for mixing top-down predictions with the bottom-up signal: top-down predictions would make a certain set of neurons prone to firing through active dendrite activation, and the patterns these neurons are sensitive to can then drive network activity. This is equivalent to selectively attending to the predicted parts of the signal, and may be the mechanism behind attention when we try to hear someone speak in a noisy environment.
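The gating described above can be sketched as a layer whose feedforward activations are multiplicatively modulated by how well a top-down context vector matches each neuron's dendritic segments. This is a hedged sketch in the spirit of Numenta-style active dendrites; all shapes, names, and the sigmoid gate are assumptions, not this repository's implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def active_dendrite_layer(x, W, dendrites, context):
    """Feedforward activations gated by dendritic context.

    x         : (n_in,)                   bottom-up input
    W         : (n_neurons, n_in)         feedforward weights
    dendrites : (n_neurons, n_seg, n_ctx) dendritic segment weights
    context   : (n_ctx,)                  top-down context vector

    Each neuron's best-matching segment yields a multiplicative gate in
    (0, 1), so context makes some neurons "prone to fire" without
    driving them directly.
    """
    feedforward = W @ x
    match = (dendrites @ context).max(axis=1)  # best segment per neuron
    gate = sigmoid(match)                      # context modulates, not drives
    return feedforward * gate

rng = np.random.default_rng(1)
n_neurons, n_in, n_seg, n_ctx = 8, 16, 4, 10
x = rng.normal(size=n_in)
W = rng.normal(size=(n_neurons, n_in))
dendrites = rng.normal(size=(n_neurons, n_seg, n_ctx))
context = rng.normal(size=n_ctx)
out = active_dendrite_layer(x, W, dendrites, context)
```

Because the gate only scales activations, neurons whose dendrites match the predicted context keep most of their feedforward drive while poorly matched neurons are suppressed, which is one concrete reading of "selectively attending to predicted parts of the signal."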