50 results for “topic:captum”
Model explainability that works seamlessly with 🤗 transformers. Explain your transformers model in just 2 lines of code.
Interpretability for sequence generation models 🐛 🔍
Collection of NLP model explanations and accompanying analysis tools
XAI tutorial for the Explainable AI track at the ALPS Winter School 2021
Overview of different model interpretability libraries.
A small repository to test Captum explainable AI with a trained Flair transformer-based text classifier.
Trained neural networks (LSTM, HybridCNN/LSTM, PyramidCNN, Transformers, etc.) and their comparison for the task of hate speech detection on the OLID dataset (tweets).
We introduce XBrainLab, an open-source, user-friendly software package for accelerated interpretation of neural patterns from EEG data, based on cutting-edge computational approaches.
Cyber Security AI Dashboard
End-to-end toxic Russian comment classification
Training a CNN to recognize the current Go position with photorealistic renders
This repository contains the source code for Indoor Scene Detector, a full-stack deep learning computer vision application.
Deep Classiflie is a framework for developing ML models that bolster fact-checking efficiency. As a proof of concept, the initial alpha release of Deep Classiflie generates and analyzes a model that continuously classifies the statements of a single individual (Donald Trump) using a single ground-truth labeling source (The Washington Post). For statements the model deems most likely to be labeled falsehoods, the @DeepClassiflie Twitter bot tweets out a statement analysis and model interpretation "report".
Interpretable graph classification using Graph Convolutional Neural Networks
XAI-Tris
Model interpretability for Explainable Artificial Intelligence
OdoriFy is an open-source tool with multiple prediction engines. This repository contains the source code of the web server.
🔬 Deep-Viz: Unveiling the Black Box of Deep Learning
This is an introduction to PyTorch Geometric, the deep learning library for Graph Neural Networks, and to interpretability tools for analyzing the decision process of a GNN.
🧠 Build deep learning models for computer vision, from setup to production, using Python, PyTorch, and OpenCV to unlock the potential of AI.
Multi-label toxic comment classification using DistilBERT with explainable AI via Captum Integrated Gradients (IG). Trained on the Jigsaw dataset, the model predicts six toxicity categories (toxic, severe toxic, obscene, threat, insult, and identity hate) while highlighting the key words driving each prediction; a hedged sketch of this attribution workflow follows the list below.
"XAI를 위한 Attribution Method 접근법 분석 및 동향 Analysis and Trend of Attribution Methods for XAI" 에서 사용한 코드와 예시를 공개
VisionDriveX is a multi-task autonomous driving perception system that performs traffic sign classification, stop-sign detection, and lane segmentation. Built with PyTorch and explainable AI (Grad-CAM), it delivers real-time, interpretable road understanding for safety-critical ADAS applications.
Collection of associated files for my bachelor thesis
PyTorch Beginner Workshop (Brad Heintz)
🚗 Analyze and visualize decision-making in autonomous driving RL agents using Integrated Gradients for clearer interpretability in complex driving tasks.
Deep_classiflie_db is the backend data system for managing Deep Classiflie metadata, analyzing Deep Classiflie intermediate datasets, and orchestrating Deep Classiflie model training pipelines. Deep_classiflie_db includes data-scraping modules for the initial model data sources. Deep Classiflie depends on deep_classiflie_db for much of its analytical and dataset-generation functionality, but the data system is currently maintained as a separate repository to maximize architectural flexibility. Depending on how Deep Classiflie evolves (e.g., if it adds support for distributed data stores), it may make more sense to integrate deep_classiflie_db back into deep_classiflie. Currently, deep_classiflie_db releases are synchronized with deep_classiflie releases. To learn more, visit deepclassiflie.org.
🔍 Enhance medical imaging with a lightweight CNN model that offers over 91% accuracy and integrated explainability for better clinical trust.
PyTorch CNN image classifier with Grad-CAM interpretability, FGSM adversarial attacks, and Qdrant vector similarity search; a minimal Grad-CAM sketch also follows below.
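The DistilBERT entry above pairs a Jigsaw-trained classifier with Captum Integrated Gradients. As a minimal sketch of that workflow, the snippet below uses Captum's LayerIntegratedGradients to attribute one toxicity logit back to the token embeddings; the checkpoint name, label index, and example text are illustrative assumptions rather than details taken from that repository.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from captum.attr import LayerIntegratedGradients

# Placeholder checkpoint; in practice this would be a Jigsaw-finetuned model.
model_name = "distilbert-base-uncased"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=6)
model.eval()

def forward_func(input_ids, attention_mask):
    # Return per-label logits; IG attributes with respect to one target label.
    return model(input_ids=input_ids, attention_mask=attention_mask).logits

enc = tokenizer("example comment to explain", return_tensors="pt")
input_ids, attention_mask = enc["input_ids"], enc["attention_mask"]

# Baseline: the same sequence with every non-special token replaced by [PAD].
baseline_ids = torch.full_like(input_ids, tokenizer.pad_token_id)
baseline_ids[0, 0] = tokenizer.cls_token_id
baseline_ids[0, -1] = tokenizer.sep_token_id

# Attribute the first label's logit (assumed "toxic") to the embedding layer.
lig = LayerIntegratedGradients(forward_func, model.distilbert.embeddings)
attributions = lig.attribute(
    inputs=input_ids,
    baselines=baseline_ids,
    additional_forward_args=(attention_mask,),
    target=0,
)

# Sum over the embedding dimension to get one attribution score per token.
token_scores = attributions.sum(dim=-1).squeeze(0)
tokens = tokenizer.convert_ids_to_tokens(input_ids[0].tolist())
for tok, score in zip(tokens, token_scores.tolist()):
    print(f"{tok:>12s} {score:+.4f}")
```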
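Likewise, the Grad-CAM entry at the end of the list can be approximated with Captum's LayerGradCam; the ResNet-18 backbone, target layer, and class index below are assumptions for illustration, not the repository's actual configuration.

```python
import torch
from torchvision.models import resnet18
from captum.attr import LayerGradCam, LayerAttribution

model = resnet18(weights=None).eval()  # stand-in for a trained classifier
gradcam = LayerGradCam(model, model.layer4)  # last convolutional block

image = torch.randn(1, 3, 224, 224)  # stand-in for a preprocessed input image
attr = gradcam.attribute(image, target=0)  # coarse heatmap for class index 0

# Upsample the coarse 7x7 map back to input resolution for overlaying.
heatmap = LayerAttribution.interpolate(attr, (224, 224))
print(heatmap.shape)  # torch.Size([1, 1, 224, 224])
```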