15 results for “topic:token-merging”
[EMNLP 2024 & AAAI 2026] A powerful toolkit for compressing large models including LLMs, VLMs, and video generative models.
📚 Collection of token-level model compression resources.
A paper list on token merging, reduction, resampling, and dropping for MLLMs.
[NeurIPS 2025] HoliTom: Holistic Token Merging for Fast Video Large Language Models
[CVPR 2025] PACT: Pruning and Clustering-Based Token Reduction for Faster Visual Language Models
Official implementation of CVPR 2024 paper "vid-TLDR: Training Free Token merging for Light-weight Video Transformer".
[CVPR'25] MergeVQ: A Unified Framework for Visual Generation and Representation with Token Merging and Quantization
The official implementation of "Learning Compact Vision Tokens for Efficient Large Multimodal Models"
[ICLR 2026] MergeMix: A Unified Augmentation Paradigm for Visual and Multi-Modal Understanding
😎 Awesome papers on token redundancy reduction
DRIP: Dynamic Patch Pooling for Efficient Vision Transformers
[ICLR 2026] Official code of PPE: Positional Preservation Embedding for Token Compression in Multimodal Large Language Models.
Implementation of Vision Transformers (ViT) with a token merging mechanism
Efficient vision-language pre-training toolkit: Token Merging, LoRA/QLoRA fine-tuning, Knowledge Distillation
🚀 Collect and manage tokens effortlessly with this simple, efficient framework for building and maintaining your own token collections.
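Most of the repositories above build on the same core idea: repeatedly fuse the most similar token embeddings to shrink sequence length with minimal information loss. A minimal greedy sketch of that idea (the function name and averaging strategy are illustrative assumptions, not the method of any specific repo listed):

```python
import numpy as np

def merge_tokens(tokens: np.ndarray, target_len: int) -> np.ndarray:
    """Greedy token merging sketch (hypothetical helper): repeatedly
    average-merge the most cosine-similar pair of token embeddings
    until only `target_len` tokens remain."""
    tokens = tokens.astype(float).copy()
    while len(tokens) > target_len:
        # Cosine similarity between all pairs of tokens.
        unit = tokens / np.linalg.norm(tokens, axis=1, keepdims=True)
        sim = unit @ unit.T
        np.fill_diagonal(sim, -np.inf)  # exclude self-similarity
        i, j = np.unravel_index(np.argmax(sim), sim.shape)
        merged = (tokens[i] + tokens[j]) / 2  # fuse the closest pair
        tokens = np.vstack([np.delete(tokens, [i, j], axis=0), merged])
    return tokens
```

Real systems (e.g. ToMe-style bipartite matching used by several projects above) merge many pairs per layer in parallel and weight the average by how many original tokens each merged token represents; this sketch only shows the one-pair-at-a-time core.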