386 results for “topic:lime”
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
An open source library for creative expression on the web, desktop, mobile and consoles. Inspired by the classic Flash and AIR APIs.
A foundational Haxe framework for cross-platform development
Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
Python implementation of two low-light image enhancement techniques via illumination map estimation
Lime client built using flutter
Qt-DAB, a general software DAB (DAB+) decoder with a (slight) focus on showing the signal
InterpretDL: Interpretation of Deep Learning Models, a model interpretability algorithm library built on PaddlePaddle (『飞桨』).
Reading list for "The Shapley Value in Machine Learning" (IJCAI 2022)
C# LIME protocol implementation
Application of the LIME algorithm by Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin to the domain of time series classification
Implementation of the paper, "LIME: Low-Light Image Enhancement via Illumination Map Estimation", which is for my graduation thesis.
Adversarial Attacks on Post Hoc Explanation Techniques (LIME/SHAP)
ProjectFNF is a mostly quality-of-life engine for Friday Night Funkin'. It is easy to understand and super flexible.
Short overview over the components used by Lime Scooters fleet
Simple C# Keylogger (Keyboard Layout)
This repository will have all my GNU Radio examples
Overview of different model interpretability libraries.
Unpack, Pack & Re-sign files encrypted with the 1st version of "Lime" encryption.
Local explanations with uncertainty 💐!
Segway Ninebot serial communication analyzer (ESCx such as Voi, Circ / Flash, Bolt, Dott, Jump, Tier...)
Local Interpretable (Model-agnostic) Visual Explanations - model visualization for regression problems and tabular data based on LIME method. Available on CRAN
LIME-SAM aims to create an Explainable Artificial Intelligence (XAI) framework for image classification using LIME (Local Interpretable Model-agnostic Explanations) as the base algorithm, with the super-pixel method replaced by Segment Anything by Meta (SAM).
This repository introduces different Explainable AI approaches and demonstrates how they can be implemented with PyTorch and torchvision. The approaches covered are Class Activation Mappings, LIME, and SHapley Additive exPlanations (SHAP).
Learn how to explain ML models using LIME and SHAP.
In this work, we propose a deterministic version of Local Interpretable Model-Agnostic Explanations (LIME); experimental results on three different medical datasets show the superiority of Deterministic Local Interpretable Model-Agnostic Explanations (DLIME).
Poco M3/Redmi 9T Global Kernel
FastAI Model Interpretation with LIME
General-purpose library for extracting interpretable models from Multi-Agent Reinforcement Learning systems
Techniques & resources for training interpretable ML models, explaining ML models, and debugging ML models.
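Many of the repositories above implement or extend LIME. As background, the core idea — perturb an instance, weight the perturbations by proximity, and fit a weighted linear surrogate — can be sketched in a few lines of NumPy. This is a toy illustration under stated assumptions (binary interpretable features, no intercept), not the `lime` package's actual code; the black box `f` and helper `explain` are hypothetical names:

```python
import numpy as np

def f(X):
    # Hypothetical black-box model: depends mostly on features 0 and 2.
    return 3.0 * X[:, 0] - 2.0 * X[:, 2] + 0.1 * X[:, 1]

def explain(f, x, n_samples=5000, kernel_width=0.75, seed=0):
    """Toy LIME-style explanation of f around instance x (binary features)."""
    rng = np.random.default_rng(seed)
    d = x.shape[0]
    # 1. Perturb: randomly switch features of x off (set them to 0).
    Z = rng.integers(0, 2, size=(n_samples, d)) * x
    # 2. Weight each perturbation by proximity to x (normalized Hamming distance).
    dist = np.abs(Z - x).sum(axis=1) / d
    w = np.exp(-(dist ** 2) / kernel_width ** 2)
    # 3. Fit a weighted linear surrogate g(z) = z @ beta via normal equations,
    #    with a small ridge term for numerical stability.
    A = Z.T @ (Z * w[:, None]) + 1e-6 * np.eye(d)
    beta = np.linalg.solve(A, Z.T @ (w * f(Z)))
    return beta

x = np.ones(4)
beta = explain(f, x)
print(np.round(beta, 2))  # features 0 and 2 dominate, with opposite signs
```

Since the toy black box is exactly linear in the binary features, the surrogate recovers its coefficients almost exactly; with a real model, `beta` is only a local approximation around `x`.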