81 results for “topic:parameter-efficient-fine-tuning”
[ICML 2024 (Oral)] Official PyTorch implementation of DoRA: Weight-Decomposed Low-Rank Adaptation
Collection of awesome parameter-efficient fine-tuning resources.
A paper list on large multi-modality models (perception, generation, unification), parameter-efficient fine-tuning, vision-language pretraining, and conventional image-text matching, for preliminary insight.
[SIGIR'24] The official implementation of MOELoRA.
A generalized framework for subspace tuning methods in parameter-efficient fine-tuning.
[COLM 2025] LoRI: Reducing Cross-Task Interference in Multi-Task Low-Rank Adaptation
[ICLR 2024] The repository for the paper "DePT: Decomposed Prompt Tuning for Parameter-Efficient Fine-tuning"
Exploring the potential of fine-tuning Large Language Models (LLMs) such as Llama2 and StableLM for medical entity extraction. This project adapts these models using PEFT, Adapter V2, and LoRA techniques to efficiently and accurately extract drug names and adverse side effects from pharmaceutical texts.
[ICCV 2023 (Oral)] This is the official repository for our paper "Sensitivity-Aware Visual Parameter-Efficient Fine-Tuning".
[CVPR 2024] The code for "UniPT: Universal Parallel Tuning for Transfer Learning with Efficient Parameter and Memory"
[ECCV 2024] Improving Zero-shot Generalization of Learned Prompts via Unsupervised Knowledge Distillation
[ICLR 2025] The official implementation of "PSEC: Skill Expansion and Composition in Parameter Space", a new framework designed to facilitate efficient and flexible skill expansion and composition, iteratively evolving agents' capabilities to efficiently address new challenges.
CorDA: Context-Oriented Decomposition Adaptation of Large Language Models for task-aware parameter-efficient fine-tuning (NeurIPS 2024)
[CVPR 2024] Domain generalization by interpolating original feature styles with styles obtained using random descriptions in natural language
[CVPR'25 (Highlight)] Lessons and Insights from a Unifying Study of Parameter-Efficient Fine-Tuning (PEFT) in Visual Recognition
This is the official repository of the papers "Parameter-Efficient Transfer Learning of Audio Spectrogram Transformers" and "Efficient Fine-tuning of Audio Spectrogram Transformers via Soft Mixture of Adapters".
[WACV 2026] An extremely simple method for validation-free efficient adaptation of CLIP-like VLMs that is robust to the learning rate.
[ICRA 2024] Official Implementation of the paper "Parameter-efficient Prompt Learning for 3D Point Cloud Understanding"
The open-source Mixture-of-Depths code and the official implementation of the paper "Router-Tuning: A Simple and Effective Approach for Enabling Dynamic Depth in Transformers" (EMNLP 2025)
LoRA-One: One-Step Full Gradient Could Suffice for Fine-Tuning Large Language Models, Provably and Efficiently (ICML 2025 Oral)
[NeurIPS'24] Official PyTorch implementation for paper "Knowledge Composition using Task Vectors with Learned Anisotropic Scaling"
[AAAI 2025] CoPEFT: Fast Adaptation Framework for Multi-Agent Collaborative Perception with Parameter-Efficient Fine-Tuning
Replication package of the paper "Exploring Parameter-Efficient Fine-Tuning Techniques for Code Generation with Large Language Models" (TOSEM 2025)
[NeurIPS 2024] This is the official repository for our paper "Expanding Sparse Tuning for Low Memory Usage".
Source code of EMNLP 2022 Findings paper "SparseAdapter: An Easy Approach for Improving the Parameter-Efficiency of Adapters"
[WACV 2024] MACP: Efficient Model Adaptation for Cooperative Perception.
This repository contains the lab work for the Coursera course "Generative AI with Large Language Models".
StelLA: Subspace Learning in Low-rank Adaptation using Stiefel Manifold (NeurIPS 2025 Spotlight)
The code for the paper "Embracing Collaboration Over Competition: Condensing Multiple Prompts for Visual In-Context Learning" (CVPR'25).
[CVPR 2025 Highlight] Meta LoRA / MetaPEFT: Meta-Learning Hyperparameters for Parameter-Efficient Fine-Tuning (LoRA, Adapter, Prompt Tuning) — Automatic hyperparameter optimization via bi-level meta-learning