7 results for “topic:local-attention”
Implementation of abstractive summarization using an LSTM encoder-decoder architecture with local attention.
[CVPR 2023] Efficient Semantic Segmentation by Altering Resolutions for Compressed Videos
Implementation of LA_MIL, Local Attention Graph-based Transformer for WSIs, PyTorch
Neural Machine Translation using Local Attention
LEAP: Linear Explainable Attention in Parallel for causal language modeling, with O(1) path length and O(1) inference
🚀 Self-implemented Vision Transformer (ViT) with local attention! Unlike the standard ViT, this version integrates local attention for improved efficiency. Fully customizable, with configurable patch embeddings, attention mechanisms, and transformer layers, and supports mixing global and local attention.
Investigating inductive biases in CNNs vs Transformers. Code and report for the Deep Learning Course Project, ETH Zurich, HS 2021.
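Several of the repositories above implement local (windowed) attention, where each position attends only to a fixed-size neighborhood instead of the full sequence. As a minimal sketch of the idea (not taken from any of the listed repositories; the function name and window size are illustrative), a NumPy version with a banded mask:

```python
import numpy as np

def local_attention(q, k, v, window=2):
    """Single-head local self-attention: each query attends only to keys
    within +/- `window` positions. q, k, v have shape (seq_len, dim)."""
    n, d = q.shape
    scores = q @ k.T / np.sqrt(d)                        # (n, n) similarity
    idx = np.arange(n)
    # Banded mask: True where |i - j| > window (positions to exclude).
    mask = np.abs(idx[None, :] - idx[:, None]) > window
    scores = np.where(mask, -np.inf, scores)             # exp(-inf) -> 0
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)       # rows sum to 1
    return weights @ v, weights

x = np.random.randn(8, 16)
out, w = local_attention(x, x, x, window=2)
```

Masking with a band before the softmax reproduces the local-attention restriction; efficient implementations avoid materializing the full n x n score matrix, but the attention pattern is the same.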