497 results for “topic:explainability”
A game theoretic approach to explain the output of any machine learning model.
A curated list of awesome open source libraries to deploy, monitor, version and scale your machine learning models.
Fit interpretable models. Explain blackbox machine learning.
🔅 Shapash: User-friendly Explainability and Interpretability to Develop Reliable and Transparent Machine Learning Models
[CVPR 2021] Official PyTorch implementation of Transformer Interpretability Beyond Attention Visualization, a novel method for visualizing classifications by Transformer-based networks.
The Responsible AI Toolbox is a suite of user interfaces and libraries for model and data exploration and assessment, enabling a better understanding of AI systems. These tools empower developers and stakeholders of AI systems to develop and monitor AI more responsibly and to take better data-driven actions.
XAI - An eXplainability toolbox for machine learning
🪄 Interactive Diagrams for Code
[ICCV 2021 - Oral] Official PyTorch implementation of Generic Attention-model Explainability for Interpreting Bi-Modal and Encoder-Decoder Transformers, a novel method to visualize any Transformer-based network. Includes examples for DETR and VQA.
Power Tools for AI Engineers With Deadlines
Papers about explainability of GNNs
Visualization toolkit for neural networks in PyTorch!
Shapley Interactions and Shapley Values for Machine Learning
Making decision trees competitive with neural networks on CIFAR10, CIFAR100, TinyImagenet200, Imagenet
[Pattern Recognition 25] CLIP Surgery for Better Explainability with Enhancement in Open-Vocabulary Tasks
Explainable AI framework for data scientists. Explain and debug any blackbox machine learning model with a single line of code. We are looking for co-authors to take this project forward; reach out at ms8909@nyu.edu.
Official implementation of Score-CAM in PyTorch
An open-source version of the representation engineering framework for stopping harmful outputs or hallucinations at the activation level. 100% free, self-hosted, and open-source.
Neural network visualization toolkit for tf.keras
💡 Adversarial attacks on explanations and how to defend them
CARLA: A Python Library to Benchmark Algorithmic Recourse and Counterfactual Explanation Algorithms
For calculating global feature importance using Shapley values.
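Several results above (shap, shapiq, and this one) revolve around Shapley values: a feature's attribution is its average marginal contribution over all feature coalitions. As an illustrative, self-contained sketch of that definition — not the API of any library listed here — exact Shapley values for a toy model can be computed by brute force:

```python
from itertools import combinations
from math import factorial

def shapley_values(model, baseline, x):
    """Exact Shapley values by enumerating all feature coalitions.

    model: callable taking a feature vector (list of floats) -> float.
    baseline: reference values substituted when a feature is "absent".
    x: the instance to explain.
    """
    n = len(x)
    values = [0.0] * n
    features = list(range(n))
    for i in features:
        others = [j for j in features if j != i]
        for size in range(n):
            for subset in combinations(others, size):
                # Shapley kernel weight: |S|! (n - |S| - 1)! / n!
                weight = factorial(size) * factorial(n - size - 1) / factorial(n)
                with_i = [x[j] if (j in subset or j == i) else baseline[j]
                          for j in features]
                without_i = [x[j] if j in subset else baseline[j]
                             for j in features]
                values[i] += weight * (model(with_i) - model(without_i))
    return values

# Toy linear model: for linear models the Shapley value of feature i
# reduces to w_i * (x_i - baseline_i).
model = lambda v: 2.0 * v[0] + 3.0 * v[1]
print(shapley_values(model, baseline=[0.0, 0.0], x=[1.0, 1.0]))  # → [2.0, 3.0]
```

Brute-force enumeration is exponential in the number of features; the libraries above exist precisely because practical use requires sampling or model-specific approximations (e.g. TreeSHAP, KernelSHAP).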
Training & evaluation library for text-based neural re-ranking and dense retrieval models built with PyTorch
Visualization tool for Graph Neural Networks
OpenXAI : Towards a Transparent Evaluation of Model Explanations
Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP.
Can we use explanations to improve hate speech models? Our paper, accepted at AAAI 2021, explores that question.
GraphXAI: Resource to support the development and evaluation of GNN explainers
🗺️ Data Cleaning and Textual Data Visualization 🗺️
Holds code for our CVPR'23 tutorial: All Things ViTs: Understanding and Interpreting Attention in Vision.