27 results for “topic:influence-functions”
A PyTorch reimplementation of influence functions from the ICML 2017 best paper, “Understanding Black-box Predictions via Influence Functions” by Pang Wei Koh and Percy Liang.
Influence Functions with (Eigenvalue-corrected) Kronecker-Factored Approximate Curvature
pyDVL is a library of stable implementations of algorithms for data valuation and influence function computation
A simple PyTorch implementation of influence functions.
Supporting code for the paper "Finding Influential Training Samples for Gradient Boosted Decision Trees"
👋 Influenciae is a TensorFlow toolbox for influence functions
Official Implementation of Unweighted Data Subsampling via Influence Function - AAAI 2020
Mapping out the "memory" of neural nets with data attribution
Data-efficient Fine-tuning for LLM-based Recommendation (SIGIR'24)
[CVPR 2023] Regularizing Second-Order Influences for Continual Learning
Intriguing Properties of Data Attribution on Diffusion Models (ICLR 2024)
Influence Estimation for Gradient-Boosted Decision Trees
[EMNLP 2022 Findings] Code for the paper “ProGen: Progressive Zero-shot Dataset Generation via In-context Feedback”.
A simple Jax implementation of influence functions.
Time series data contribution via influence functions
Scalable, GPU-accelerated Python library for modern difference-in-differences.
Tiny Tutorial on https://arxiv.org/abs/1703.04730
An Empirical Study of Memorization in NLP (ACL 2022)
Source code for 'Understanding impacts of human feedback via influence functions'
Official implementation of "Deeper Understanding of Black-box Predictions via Generalized Influence Functions".
An implementation of the paper “Interpreting Twitter User Geolocation”.
A brief notebook on Influence Function (IF) for classical generative models (e.g., k-NN, KDE, GMM)
This repo provides an implementation of the paper Interpreting Twitter User Geolocation.
PyTorch implementation of influence functions: ICML 2017 method, TracIn (NeurIPS 2020) and EmpiricalIF (NeurIPS 2022). Estimate how each training sample affects model predictions without retraining.
Leave One Out
LLM-powered, human-centric Explainable AI (XAI) for skin cancer classification with ResNet34 CNN.
PyTorch implementation of influence functions with K-FAC for MLPs and Transformers. Find which training examples most affect model predictions.