
Lissom resettling activity between two inputs. #664

Open
Hima-Mehta opened this issue Oct 13, 2016 · 6 comments

Comments

@Hima-Mehta

Hima-Mehta commented Oct 13, 2016

Hello,
If GCAL resets the activity of V1 at the end of each run, before presenting a new input, where exactly does that happen? (Hint: I am trying to prevent the history of activation, to see direction sensitivity while bypassing the LGN layers!)

@jbednar
Member

jbednar commented Oct 13, 2016

GCAL uses a SettlingCFSheet, which is a subclass of JointNormalizingCFSheet that counts the number of activations and uses that count both for resetting activity and for ensuring that learning is done only once per iteration, rather than at each settling step. There are a couple of ways to avoid the resetting:

(1) You can change GCAL to use a JointNormalizingCFSheet instead, avoiding all counting of activations, including the resetting. If you do that, GCAL will work about the same, except that it will now learn at every single input presentation, which means you'll need to reduce the learning rate by a factor of tsettle or so, and there will be other, more subtle effects caused by learning intermediate patterns.

(2) Or you could subclass SettlingCFSheet, reimplementing just the input_event method so that you skip the resetting while keeping everything else the same. Doing that should have almost no effect on GCAL; the resetting is not crucial for anything.
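A rough sketch of option 2. Note this uses a minimal stand-in stub rather than Topographica's real `topo.sheet.SettlingCFSheet` (so the pattern is runnable on its own); everything about the stub's internals other than the `input_event` name is a hypothetical simplification, not the actual Topographica implementation:

```python
class SettlingCFSheet:
    """Stand-in stub for Topographica's SettlingCFSheet (hypothetical
    internals): it tracks settling steps and, by default, resets the
    activity history whenever a new input arrives."""
    def __init__(self):
        self.activity = [0.0]
        self.activation_count = 0

    def input_event(self, conn=None, data=None):
        # Default behaviour: a new input wipes the activity history.
        self.activation_count = 0
        self.activity = [0.0 for _ in self.activity]


class NonResettingCFSheet(SettlingCFSheet):
    """Option 2 from the comment above: keep everything else the same,
    but skip the activity reset when a new input is presented."""
    def input_event(self, conn=None, data=None):
        # Still reset the settling counter (so learning happens once
        # per iteration), but leave self.activity untouched so the
        # history of activation carries over between inputs.
        self.activation_count = 0
```

The point is only the overriding pattern: subclass, reimplement `input_event`, and leave the resetting step out while preserving the rest of the bookkeeping.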

It is unlikely that either approach will help you develop direction sensitivity, for a variety of reasons, but they will at least let you test it! I believe @jlstevens has an implementation of option 1 ready to use, as part of the PhD thesis he defends in a couple of weeks, and he should be making it public soon afterwards, if that's any help.

@Hima-Mehta
Author

Thanks for your reply. Yes, it doesn't have much significance here, but I want to be sure it doesn't affect the way direction sensitivity is captured. So if I map this onto GCAL, what I think is that I can take two layers in LISSOM, where the first layer computes my sequence response and keeps that response until I present the next frame, while the second layer, with a bigger afferent RF, chooses the best response (the accumulated-input winner). I am not sure how I can map this while not using the LGN?

@jbednar
Member

jbednar commented Oct 17, 2016

I'm not sure what you are proposing here, either what the two layers would be doing or mapping it without using LGN. I'm guessing that you are trying to figure out how to measure direction selectivity without using the "multiple retinas and LGNs" hack that our earliest direction maps used? I don't think you need any of what you are discussing here, then; we've already implemented proper direction maps that don't depend on that hack. Both versions are available, currently controlled by a very ugly mechanism whereby if "_old_motion_model" is defined in the main namespace, it uses the old hacky way, and otherwise it uses the reasonable way (where each image is presented sequentially). Unfortunately I don't remember much about this -- we never got around to publishing the improved direction model, and so it would take some digging for us to remember the final status of any of that code.
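The "very ugly mechanism" described above amounts to a flag check against the main namespace. This is only a hypothetical sketch of that pattern (the variable name `_old_motion_model` is from the comment; the surrounding code is invented for illustration):

```python
# Sketch of the namespace-flag mechanism: a model script checks whether
# "_old_motion_model" was defined in the main namespace before the
# script was loaded, and selects a code path accordingly.
import __main__

if hasattr(__main__, "_old_motion_model"):
    # Old hacky way: multiple retinas and LGNs, one per time step.
    motion_model = "old"
else:
    # Reasonable way: each image of the sequence presented in turn.
    motion_model = "new"
```

Since the flag lives in `__main__` rather than being a parameter, a user switches models by defining the variable at the top level before loading the script, which is exactly why it is described as ugly.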

@Hima-Mehta
Author

Hima-Mehta commented Oct 18, 2016

I'm guessing that you are trying to figure out how to measure direction selectivity without using the "multiple retinas and LGNs" hack

I am exactly looking for this! And yes, I do present images sequentially. It would be great if you could provide some direction on this. I also tried to see whether I can add a delay to particular neurons in V1 to hold the responses; it seems I am missing something! If that works, I can upload the code here so people can use it for less biological applications like mine.

@jbednar
Member

jbednar commented Oct 18, 2016

@jlstevens has also written code to give variable delays to various V1 neurons, which he will also release as part of his thesis. The variable-delay code wasn't developed for GCAL, but it could presumably be combined with it. @jlstevens, can you please make sure that CGCAL is checked in to the repository in its latest version and let us know? I'll also try to track down the improved motion model.

@Hima-Mehta
Author

Thanks a lot, James.
