50 results for “topic:shapley-additive-explanations”
Reading list for "The Shapley Value in Machine Learning" (IJCAI 2022)
Assemble an efficient interpretable machine learning workflow.
Weighted Shapley Values and Weighted Confidence Intervals for Multiple Machine Learning Models and Stacked Ensembles
No description provided.
In this repository you will find explainability analyses of machine learning models.
Code for the EACL workshop paper "Can BERT eat RuCoLA? Topological Data Analysis to Explain"
Using a Kaggle dataset, customer personality was analysed on the basis of spending habits, income, education, and family size, with K-Means clustering, XGBoost, and SHAP analysis.
📊🛰️ Data processing scripts, ML models, and Explainable AI results created as part of my Masters Thesis @ Johns Hopkins
This repository is associated with an interpretable/explainable ML model for liquefaction potential assessment of soils, developed using XGBoost and SHAP.
Code for my thesis about SHAP. Implementation of decision trees, SVMs, and BERT on two datasets: IMDb and Argument Mining.
Determining Feature Importance by Integrating Random Forest and SHAP in Python
Predicting NBA game outcomes using schedule-related information. This is an example of supervised learning where an XGBoost model was trained on 20 seasons' worth of NBA games and uses SHAP values for model explainability.
Measuring galaxy environmental distance scales with GNNs and explainable ML models
Gradient-boosted regression and decision-tree models on behavioural animal data (PLOS Computational Biology, doi: https://doi.org/10.1371/journal.pcbi.1011985)
Getting explanations for predictions made by black box models.
Implementation of the algorithm described in the paper "An Imprecise SHAP as a Tool for Explaining the Class Probability Distributions under Limited Training Data"
No-code Machine learning (Pre-alpha)
Android malware detection using machine learning.
Holistic Multimodel Domain Analysis: A New Paradigm for Robust, Transparent, and Reliable Exploratory Machine Learning That Considers Cross-Model Variability in Feature Importance Assessment
In this project we predict credit card defaults using classification models.
An Analysis of Lassa Fever Outbreaks in Nigeria using Machine Learning Models and Shapley Values
Language-Aware Visual Explanations (LAVE) is a framework designed for image classification tasks, particularly focusing on the ImageNet dataset. Unlike conventional methods that necessitate extensive training, LAVE leverages SHAP (SHapley Additive exPlanations) values to provide insightful textual and visual explanations.
credit default prediction app
The goal of SHAP is to explain the prediction of an instance x by computing the contribution of each feature to the prediction. The SHAP explanation method computes Shapley values from coalitional game theory. The feature values of a data instance act as players in a coalition.
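The coalitional computation described here can be written out exactly for a small number of features; this is a minimal self-contained sketch (not the `shap` library's implementation, which approximates this sum because it grows exponentially in the number of features). Absent features are filled in from an assumed background (baseline) instance, one common convention for defining the value function.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, background):
    """Exact Shapley values for model f at instance x.

    Each feature's value is its marginal contribution to f, averaged over
    all coalitions of the remaining features. Features outside a coalition
    take their values from the background instance.
    """
    n = len(x)

    def value(coalition):
        # Build a hybrid instance: coalition members come from x,
        # everyone else from the background point.
        z = [x[i] if i in coalition else background[i] for i in range(n)]
        return f(z)

    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for size in range(n):
            for S in combinations(others, size):
                # Shapley kernel weight |S|! (n - |S| - 1)! / n!
                w = factorial(size) * factorial(n - size - 1) / factorial(n)
                phi[i] += w * (value(set(S) | {i}) - value(set(S)))
    return phi

# Toy linear model: Shapley values recover w_i * (x_i - background_i),
# here approximately [3, 2, -1].
f = lambda z: 3 * z[0] + 2 * z[1] - z[2]
x, bg = [1.0, 1.0, 1.0], [0.0, 0.0, 0.0]
print(shapley_values(f, x, bg))
```

By construction the values satisfy the efficiency axiom: they sum to `f(x) - f(background)`, which is why SHAP attributions decompose a prediction relative to a baseline.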
XGB - SHAP XAI
Frontend for ShapEmotionsCorrectionAPI
Use machine learning to find out what drives sales and predict sales
This project explores an educational dataset to understand factors influencing students' academic performance. The analysis includes descriptive statistics, data visualization, and the development of a predictive model to estimate student outcomes. To enhance model interpretability, SHAP (Shapley Additive Explanations) was applied to explain feature contributions.
ML implementations in a multi-scale model for lignin biosynthesis in Populus trichocarpa
Multimodal AI for Alzheimer's detection. Fuses CSF Proteomics, MRI Volumetrics, and Genetics into an XGBoost model (88% AUC). Features an LLM-powered "AI Neurologist" for clinical interpretation.