---
title: ContinualAI Reading Group
layout: page
---
The ContinualAI Reading Group is hosted every Friday in collaboration with MILA and consists of a roughly 60-minute discussion of a particular Continual Learning paper. Occasionally, a speaker is invited to present their work. Join our reading group on Slack! It is a great opportunity to talk with other researchers in this amazing field!
Here is the list of previous RG sessions:
- [June 11th 2021] "ACAE-REMIND for Online Continual Learning with Compressed Feature Replay"
- [June 4th 2021] "Continual Learning for Recurrent Neural Networks: an Empirical Evaluation"
- [May 21st 2021] "Class-Incremental Learning with Generative Classifiers"
- [May 14th 2021] "Psycholinguistics meets CL: Measuring Forgetting in Visual Question Answering"
- [May 7th 2021] "Rehearsal revealed: The limits and merits of revisiting samples in continual learning"
- [April 23rd 2021] "Understanding Continual Learning Settings with Data Distribution Drift Analysis"
- [April 16th 2021] "Towards Continual, Online, Unsupervised Depth"
- [April 9th 2021] "Catastrophic Forgetting in Deep Graph Networks"
- [April 2nd 2021] "Continuum: Simple Management of Complex Continual Learning Scenarios"
- [March 19th 2021] "A Theoretical Analysis of Catastrophic Forgetting through the NTK Overlap Matrix"
- [March 12th 2021] "Adaptation Strategies for Automated Machine Learning on Evolving Data"
- [March 5th 2021] "IIRC: Incremental Implicitly-Refined Classification"
- [February 19th 2021] "EEC: Learning to Encode and Regenerate Images for Continual Learning"
- [February 12th 2021] "Sequoia - Towards a Systematic Organization of Continual Learning Research"
- [February 8th 2021] "Does Continual Learning = Catastrophic Forgetting?"
- [January 22nd 2021] "Linear Mode Connectivity in Multitask and Continual Learning"
- [January 15th 2021] "Efficient Continual Learning with Modular Networks and Task-Driven Priors"
- [January 8th 2021] "Generalisation Guarantees for Continual Learning with Orthogonal Gradient Descent"
- [December 18th 2020] "Energy-Based Models for Continual Learning"
- [December 11th 2020] "Learn more, forget less: Cues from human brain"
- [December 4th 2020] "Continual Learning with Deep Artificial Neurons"
- [November 20th 2020] "Remembering for the Right Reasons: Explanations Reduce Catastrophic Forgetting"
- [November 13th 2020] "Continual Learning in Recurrent Neural Networks"
- [November 6th 2020] "Optimal Continual Learning has Perfect Memory and is NP-hard"
- [October 23rd 2020] "Memory-Efficient Incremental Learning Through Feature Adaptation"
- [October 16th 2020] "Continual Prototype Evolution: Learning Online from Non-Stationary Data Streams"
- [October 9th 2020] "Continual Learning from the Perspective of Compression"
- [October 2nd 2020] "Bookworm Continual Learning: Beyond Zero-Shot Learning and Continual Learning"
- [September 25th 2020] "A Wholistic View of Continual Learning with Deep Neural Networks"
- [September 11th 2020] "GDumb: A Simple Approach that Questions Our Progress in Continual Learning"
- [September 4th 2020] "Online Fast Adaptation and Knowledge Accumulation (OSAKA): a New Approach to Continual Learning"
- [July 24th 2020] "Efficient Continual Learning in Neural Networks with Embedding Regularization"
- [July 16th 2020] "Supermasks in Superposition"
- [July 10th 2020] "Networks naturally learn to learn and forget to forget"
- [July 3rd 2020] "Modeling the Background for Incremental Learning in Semantic Segmentation"
- [June 19th 2020] "Continual Learning of Recurrent Neural Networks by Locally Aligning Distributed Representations"
- [June 12th 2020] "Learning to Recognize Code-switched Speech Without Forgetting"
- [June 5th 2020] "Explaining How Deep Neural Networks Forget by Deep Visualization"
- [May 22nd 2020] "Small-Task Incremental Learning"
- [May 15th 2020] "Generative Feature Replay For Class-Incremental Learning"
- [May 8th 2020] "Defining Benchmarks for Continual Few-Shot Learning"
- [May 1st 2020] "Pseudo Rehearsal Using non Photo-Realistic Images"