56 results for “topic:shapley-values”
TimeSHAP explains Recurrent Neural Network predictions.
Fast approximate Shapley values in R
A collection of federated learning papers from conferences and journals, 2019–2021: accepted papers, hot topics, notable research groups, and paper summaries.
An R package for computing asymmetric Shapley values to assess causality in any trained machine learning model
[IJCAI 2024] Redefining Contributions: Shapley-Driven Federated Learning
Counterfactual SHAP: a framework for counterfactual feature importance
Weighted Shapley Values and Weighted Confidence Intervals for Multiple Machine Learning Models and Stacked Ensembles
Source code for the paper Joint Shapley values: a measure of joint feature importance
In this paper we study the accuracy and usability of machine learning models for MMM analyses.
Counterfactual Shapley Additive Explanation: Experiments
Experimental toolbox for quantum Shapley values.
Beyond User Self-Reported Likert Scale Ratings: A Comparison Model for Automatic Dialog Evaluation (ACL 2020)
Reference implementation of the paper Unsupervised Features Ranking via Coalitional Game Theory for Categorical Data
A radiomic interpretation tool based on Shapley values
HERALD: An Annotation Efficient Method to Train User Engagement Predictors in Dialogs (ACL 2021)
This repository is the official implementation of Explainable Prediction of Acute Myocardial Infarction using Machine Learning and Shapley Values published in IEEE Access in November 2020.
Shapley-based decomposition to anatomize the out-of-sample accuracy of time-series forecasting models
Create beautiful, interactive charts for explainable AI using MLFlow
Code and experiments related to SHAPEffects paper: 'A feature selection method based on Shapley values robust to concept shift in regression'
Slides for the "Interpretable SDM with Julia" workshop
A Proxy-Based Algorithm for Explaining Survival Models with SHAP
A Julia package for sensitivity analysis with Shapley effects.
Fair and explainable ML workshop
Shapley values for JSM-method in terms of Concept Lattices (FCA)
Migration networks and housing prices analysis and ML tools
Android malware detection using machine learning.
Reference implementation of the paper Redundancy-aware unsupervised ranking based on game theory - application to gene enrichment analysis
An investigation into the use of Shapley explanations for unsupervised anomaly-detection models
Using SHAP values to explain model features
Holistic Multimodel Domain Analysis: A New Paradigm for Robust, Transparent, and Reliable Exploratory Machine Learning that Considers Cross-Model Variability in Feature Importance Assessment
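All of the repositories above build on the same underlying quantity. As a point of reference (not drawn from any listed project), here is a minimal pure-Python sketch of exact Shapley values for a small cooperative game, computed by brute force over all coalitions; the glove-game characteristic function `v` is an illustrative assumption:

```python
from itertools import combinations
from math import factorial

def shapley_values(players, v):
    """Exact Shapley values by brute force over all coalitions.

    players: list of player ids
    v: characteristic function mapping a frozenset of players to a payoff
    """
    n = len(players)
    phi = {}
    for i in players:
        others = [p for p in players if p != i]
        total = 0.0
        for r in range(n):
            for S in combinations(others, r):
                S = frozenset(S)
                # Coalition weight |S|! (n - |S| - 1)! / n! times i's marginal contribution
                weight = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                total += weight * (v(S | {i}) - v(S))
        phi[i] = total
    return phi

# Example: a two-player glove game — payoff 1 iff the coalition
# holds both a left and a right glove.
def v(S):
    return 1.0 if ("L" in S and "R" in S) else 0.0

print(shapley_values(["L", "R"], v))  # → {'L': 0.5, 'R': 0.5}
```

The exact computation is exponential in the number of players, which is why most of the libraries above rely on sampling or model-specific approximations instead.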