Implementation of abstractive summarization using an LSTM encoder-decoder architecture with local attention.
[CVPR 2023] Efficient Semantic Segmentation by Altering Resolutions for Compressed Videos
Implementation of LA_MIL, a local-attention graph-based Transformer for whole-slide images (WSIs), in PyTorch
Neural Machine Translation using Local Attention
LEAP: Linear Explainable Attention in Parallel for causal language modeling with O(1) path length and O(1) inference
Investigating inductive biases in CNNs vs Transformers. Code and report for the Deep Learning Course Project, ETH Zurich, HS 2021.
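The repositories above share one core idea: local attention restricts each query to a fixed-size window of nearby positions rather than attending over the whole sequence. Below is a minimal sketch of the banded (windowed) variant in PyTorch; the function name `local_attention`, the window size, and the tensor shapes are illustrative and not taken from any repository listed here. (Luong-style local attention, as used in the NMT projects, additionally predicts a window centre per decoding step.)

```python
import torch
import torch.nn.functional as F

def local_attention(q, k, v, window: int = 4):
    """Windowed (local) attention: query position t attends only to
    keys in [t - window, t + window]. Shapes: (batch, seq_len, dim).
    Illustrative sketch, not the API of any repo above."""
    batch, seq_len, dim = q.shape
    scores = q @ k.transpose(-2, -1) / dim ** 0.5            # (batch, seq, seq)
    # Band mask: True outside the local window, so those scores are dropped.
    idx = torch.arange(seq_len, device=q.device)
    outside = (idx[None, :] - idx[:, None]).abs() > window   # (seq, seq)
    scores = scores.masked_fill(outside, float("-inf"))
    weights = F.softmax(scores, dim=-1)                      # rows sum to 1
    return weights @ v                                       # (batch, seq, dim)

# Tiny usage example with random tensors (self-attention: q = k = v).
x = torch.randn(2, 16, 32)
out = local_attention(x, x, x, window=4)
print(out.shape)  # torch.Size([2, 16, 32])
```

This masked form is O(seq_len²) in memory for clarity; efficient implementations instead gather only the windowed keys per query, reducing the cost to O(seq_len · window).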