35 results for “topic:feature-attribution”
Model interpretability and understanding for PyTorch
Shapley Interactions and Shapley Values for Machine Learning
Zennit is a high-level Python framework built on PyTorch for explaining and exploring neural networks with attribution methods such as LRP.
Collection of NLP model explanations and accompanying analysis tools
An Open-Source Library for the interpretability of time series classifiers
Explainable AI in Julia.
A set of notebooks as a guide to the process of fine-grained image classification of bird species, using PyTorch-based deep neural networks.
Counterfactual SHAP: a framework for counterfactual feature importance
This article explores the theory behind explainable car pricing using value decomposition, showing how machine learning models can break a predicted price into intuitive components such as brand premium, age depreciation, mileage influence, condition effects, and transmission or fuel-type adjustments.
Similarity-first interpretability studio for breast tumor samples: pick a case, find its closest “twins” (benign/malignant look-alikes), visualize neighborhood structure, compare feature fingerprints, and run minimal-change counterfactual edits toward a target class. Educational demo only, not for diagnosis.
Materials for "Quantifying the Plausibility of Context Reliance in Neural Machine Translation" at ICLR'24 🐑 🐑
Materials for the Lab "Explaining Neural Language Models from Internal Representations to Model Predictions" at AILC LCL 2023 🔍
The official repo for the EACL 2023 paper "Quantifying Context Mixing in Transformers"
Code and data for the ACL 2023 NLReasoning Workshop paper "Saliency Map Verbalization: Comparing Feature Importance Representations from Model-free and Instruction-based Methods" (Feldhus et al., 2023)
Efficient and accurate explanation estimation with distribution compression (ICLR 2025 Spotlight)
⛈️ Code for the paper "End-to-End Prediction of Lightning Events from Geostationary Satellite Images"
Implementation of the Integrated Directional Gradients method for Deep Neural Network model explanations.
Sum-of-Parts: Self-Attributing Neural Networks with End-to-End Learning of Feature Groups
This repository contains the code and material to reproduce the results of the ICML'25 paper "Gradient-based Explanations for Deep Learning Survival Models".
Reproducible code for our paper "Explainable Learning with Gaussian Processes"
Robustness of Global Feature Effect Explanations (ECML PKDD 2024)
Bachelor's thesis for a degree in Economics at HSE University, Saint Petersburg (2022)
Code for the paper "On marginal feature attributions of tree-based models"
Feature attribution methods for neurons and evolution experiments
Feature attribution pipeline for Global Gridded Crop Model (GGCM) simulations
Autonomous Metal is an autonomous AI workflow designed to mimic a quantitative commodity analyst, transforming market data and economic indicators into explainable forecasts and analyst-style insights for LME Aluminum price movements.
Official implementation of 'Bootstrap Wasserstein Alignment for Stable Feature Attribution in Low-Data Regimes'
Attribution-based analysis of French grammatical gender encoding in FlauBERT embeddings using SHAP, LIME, Random Forest, and cross-validation methods across multiple architectures
NO2 Prediction: Performance and Robustness Comparison between Random Forest and Graph Neural Network
🚗 Decode car values using a transparent machine learning system that enhances price understanding through explainable methods.