339 results for “topic:feature-importance”
Model interpretability and understanding for PyTorch
XAI - An eXplainability toolbox for machine learning
Leave One Feature Out Importance
Shapley Interactions and Shapley Values for Machine Learning
Feature selector based on a self-selected algorithm, loss function, and validation method
ProphitBet is a Machine Learning soccer bet prediction application. It analyzes the form of teams, computes match statistics, and predicts the outcome of a match using advanced Machine Learning (ML) methods. The supported algorithms in this application are Neural Networks, Random Forests & Ensemble Models.
This package can be used for dominance analysis or Shapley Value Regression to find the relative importance of predictors on a given dataset. This library can be used for key driver analysis or marginal resource allocation models.
In this project I apply various predictive maintenance techniques to accurately predict the impending failure of an aircraft turbofan engine.
Using / reproducing ACD from the paper "Hierarchical interpretations for neural network predictions" 🧠 (ICLR 2019)
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584
Beta Machine Learning Toolkit
A Julia package for interpretable machine learning with stochastic Shapley values
An R package for computing asymmetric Shapley values to assess causality in any trained machine learning model
Adding a feature_importances_ property to the sklearn.cluster.KMeans class
Awesome papers on Feature Selection
Official repository of the paper "Interpretable Anomaly Detection with DIFFI: Depth-based Isolation Forest Feature Importance", M. Carletti, M. Terzi, G. A. Susto.
Routines and data structures for using isarn-sketches idiomatically in Apache Spark
Using / reproducing DAC from the paper "Disentangled Attribution Curves for Interpreting Random Forests and Boosted Trees"
Variance-based Feature Importance in Neural Networks
Customer churn prediction with Python using synthetic datasets. Includes data generation, feature engineering, and training with Logistic Regression, Random Forest, and Gradient Boosting. Improved pipeline applies hyperparameter tuning and threshold optimization to boost recall. Outputs metrics, reports, and charts.
CancelOut is a special layer for deep neural networks that can help identify a subset of relevant input features for streaming or static data.
Solid-state synthesis science analyzer. Thermo, features, ML, and more.
Predicted and identified the drivers of Singapore HDB resale prices (2015-2019) with 0.96 R² and $20,000 MAE. Web app deployed with Streamlit for user price prediction.
This repository contains the implementation of SimplEx, a method to explain the latent representations of black-box models with the help of a corpus of examples. For more details, please read our NeurIPS 2021 paper: 'Explaining Latent Representations with a Corpus of Examples'.
An eXplainable AI system to elucidate short-term speed forecasts in traffic networks obtained by Spatio-Temporal Graph Neural Networks.
Developed a churn prediction model using XGBoost, with comprehensive data preprocessing and hyperparameter tuning. Applied SHAP for feature importance analysis, leading to actionable business insights for targeted customer retention.
Counterfactual SHAP: a framework for counterfactual feature importance
A minimal, reproducible explainable-AI demo using SHAP values on tabular data. Trains RandomForest or LogisticRegression models, computes global and local feature importances, and visualizes results through summary and dependence plots, all in under 100 lines of Python.
Significance tests of feature relevance for a black-box learner
Weighted Shapley Values and Weighted Confidence Intervals for Multiple Machine Learning Models and Stacked Ensembles
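Many of the results above revolve around Shapley values for feature importance. As a minimal, library-free illustration of the underlying idea (not the implementation of any repo listed here), exact Shapley values can be computed by brute-force enumeration of all feature coalitions — tractable only for a handful of features; the model, data, and baseline below are hypothetical.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley values for model f at point x, relative to a baseline.

    For each feature i, average its marginal contribution f(S ∪ {i}) - f(S)
    over every coalition S of the remaining features, using the standard
    Shapley weight |S|! * (n - |S| - 1)! / n!.
    Features outside the coalition are replaced by their baseline values.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

# Hypothetical linear model: for f(x) = w · x, the Shapley value of feature i
# reduces to w_i * (x_i - baseline_i), which the brute force recovers
# (up to float rounding): approximately [2.0, -2.0, 1.5] here.
w = [2.0, -1.0, 0.5]
f = lambda v: sum(wi * vi for wi, vi in zip(w, v))
print(shapley_values(f, x=[1.0, 2.0, 3.0], baseline=[0.0, 0.0, 0.0]))
```

Production libraries such as shap avoid this exponential enumeration with model-specific or sampling-based approximations; the brute force above is useful only as a correctness reference.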