83 results for “topic:positional-encoding”
Cameras as Relative Positional Encoding
Implement Llama 3 inference step by step: grasp the core concepts, work through the derivations, and write the code.
Official implementation for "DyPE: Dynamic Position Extrapolation for Ultra High Resolution Diffusion".
PET-NeuS: Positional Encoding Tri-Planes for Neural Surfaces (CVPR 2023)
Source code for GNN-LSPE (Graph Neural Networks with Learnable Structural and Positional Representations), ICLR 2022
[CVPR 2021] Adversarial Generation of Continuous Images
[CVPR 2023] This is the official PyTorch implementation for "Dynamic Focus-aware Positional Queries for Semantic Segmentation".
Learnable Fourier Features for Multi-Dimensional Spatial Positional Encoding
Continuous Augmented Positional Embeddings (CAPE) implementation for PyTorch
This repository is an AI paper reproduction / from-scratch Transformer implementation. The code follows the module structure of the original paper, covering positional encoding, multi-head attention, feed-forward networks, and the full encoder-decoder, with detailed Chinese walkthrough documents and English comments for learning and further development.
Trading Positional Complexity vs Deepness in Coordinate Networks
"Found in the Middle: How Language Models Use Long Contexts Better via Plug-and-Play Positional Encoding" Zhenyu Zhang, Runjin Chen, Shiwei Liu, Zhewei Yao, Olatunji Ruwase, Beidi Chen, Xiaoxia Wu, Zhangyang Wang.
Developed the ViViT model for medical video classification, enhancing 3D organ image analysis using transformer-based architectures.
This repository offers a comprehensive overview and quantitative benchmarking of positional encoding methods in transformer-based time series models.
Multiresolution Graph Transformers and Wavelet Positional Encoding for Learning Long-Range and Hierarchical Structures
Context-aware Biases for Length Extrapolation
🧮 Algebraic Positional Encodings.
PyTorch implementation of "Attention Is All You Need" by Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Lukasz Kaiser, Illia Polosukhin
A complete implementation of the original Transformer.
A clean, ground-up implementation of the Transformer architecture in PyTorch, including positional encoding, multi-head attention, encoder-decoder layers, and masking. Great for learning or building upon the core model.
[ICML'25] "Rethinking Addressing in Language Models via Contextualized Equivariant Positional Encoding" by Jiajun Zhu, Peihao Wang, Ruisi Cai, Jason D. Lee, Pan Li, Zhangyang Wang
Robust Point Cloud Processing through Positional Embedding
Implementation of Rotary Embeddings, from the Roformer paper, in Tensorflow
Unofficial pytorch implementation of the paper "Learnable Fourier Features for Multi-Dimensional Spatial Positional Encoding", NeurIPS 2021.
Benchmarking PEs for GNNs and Graph Transformers (KDD 2026)
The implementation of transformer as presented in the paper "Attention is all you need" from scratch.
Teaching transformer-based architectures
Official code for NeurIPS 2023 paper "Laplacian Canonization: A Minimalist Approach to Sign and Basis Invariant Spectral Embedding".
Application for training an autoencoder for generating an encoder that can be used as feature extractor for dimensionality and noise reduction, while the decoder can be used for synthetic data generation. Supports dynamic plugin integration, allowing users to extend its capabilities by adding custom encoder and decoder models.
Crate for `Embedings` and `Positional Encoding` (Rust, Q2 2025)
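The common baseline most of the repositories above build on is the fixed sinusoidal positional encoding from "Attention Is All You Need". A minimal NumPy sketch of that scheme (the function name `sinusoidal_pe` is illustrative, not from any repository listed):

```python
import numpy as np

def sinusoidal_pe(seq_len: int, d_model: int) -> np.ndarray:
    """Return the (seq_len, d_model) sinusoidal positional encoding matrix.

    Even dimensions get sin(pos / 10000^(2i/d_model)),
    odd dimensions get the matching cosine, as in Vaswani et al. (2017).
    """
    pos = np.arange(seq_len)[:, None]            # (seq_len, 1)
    i = np.arange(d_model // 2)[None, :]         # (1, d_model/2) frequency index
    angles = pos / (10000.0 ** (2 * i / d_model))
    pe = np.zeros((seq_len, d_model))
    pe[:, 0::2] = np.sin(angles)                 # even columns
    pe[:, 1::2] = np.cos(angles)                 # odd columns
    return pe

pe = sinusoidal_pe(seq_len=128, d_model=64)
# The encoding is added to token embeddings before the first attention layer;
# relative-position schemes (RoPE, ALiBi, CAPE, etc.) instead modify attention itself.
```

Because each dimension oscillates at a different fixed frequency, relative offsets become linear functions of the encodings, which is the property the relative and extrapolation-focused methods above (rotary embeddings, CAPE, context-aware biases) refine in various ways.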