13 results for “topic:llm-hallucination”
UQLM (Uncertainty Quantification for Language Models) is a Python package for UQ-based LLM hallucination detection (see the first sketch after this list).
[NeurIPS 2025] SECA: Semantically Equivalent and Coherent Attacks for Eliciting LLM Hallucinations
RAG hallucination detection by LRP.
CRoPS (TMLR)
Build your own open-source REST API endpoint to detect hallucinations in LLM-generated responses (see the endpoint sketch after this list).
Semi-supervised pipeline to detect LLM hallucinations: uses Mistral-7B for zero-shot pseudo-labeling and DeBERTa for efficient classification (pseudo-labeling sketch after this list).
Novel hallucination detection method.
Lecture-RAG is a grounding-aware Video-RAG framework that reduces hallucinations and supports algorithmic reasoning in educational slide-based and blackboard tutorial videos.
A-CSM Research Framework: Documentation, system report, hallucination taxonomy, and synthetic conversation logs (CC-BY-NC-SA-4.0)
No description provided.
This repository contains the codebase for the PoC of LLM package hallucination and associated vulnerabilities.
Source code for the paper "A Hallucination Mitigation Scheme in Security Policy Generation with Large Language Models".
AI Contextual Signal Matrix (A-CSM): A User-Side Independent Detection and Assessment Framework for LLM Contextual Hallucination
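For the UQ-based detection that UQLM implements, here is a minimal sketch of the general black-box idea: sample several responses to the same prompt and score their mutual agreement, where low agreement suggests a possible hallucination. The `generate` stub and the `difflib`-based similarity are placeholders and not UQLM's actual API; a real detector would use an NLI or embedding model for semantic similarity.

```python
# Black-box UQ sketch: low agreement across sampled responses is a
# signal of possible hallucination. `generate` is a hypothetical
# stand-in for any LLM client call.
from itertools import combinations
from difflib import SequenceMatcher

def generate(prompt: str, temperature: float = 0.9) -> str:
    raise NotImplementedError("plug in your LLM client here")

def consistency_score(prompt: str, n_samples: int = 5) -> float:
    """Mean pairwise string similarity across samples (0..1)."""
    samples = [generate(prompt) for _ in range(n_samples)]
    sims = [SequenceMatcher(None, a, b).ratio()
            for a, b in combinations(samples, 2)]
    return sum(sims) / len(sims)
```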
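For the "build your own REST API endpoint" result, a minimal sketch using FastAPI follows; the framework choice, route name, and flag threshold are assumptions for illustration, not the repository's actual design. A detector such as `consistency_score` from the previous sketch could be wired in where the placeholder score sits.

```python
# Minimal hallucination-check endpoint. Framework (FastAPI), route
# name, and the 0.5 threshold are illustrative assumptions.
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

class CheckRequest(BaseModel):
    prompt: str
    response: str

@app.post("/check")
def check(req: CheckRequest) -> dict:
    # Swap in a real detector here (UQ, NLI entailment, etc.);
    # a constant keeps the sketch self-contained and runnable.
    score = 0.0
    return {"hallucination_score": score, "flagged": score > 0.5}
```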
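Finally, the semi-supervised pipeline result pairs an instruction LLM with a small classifier; a sketch of the pseudo-labeling step is below. The model id, prompt wording, and YES/NO parsing are assumptions about the general recipe, not the repository's exact code. The point of the design is cost: the 7B judge runs once over the unlabeled pool, and the small DeBERTa model handles all subsequent inference cheaply.

```python
# Pseudo-labeling sketch: Mistral-7B judges whether an answer is
# grounded in its context; the labels then train a cheap classifier.
# Model id, prompt, and label parsing are illustrative assumptions.
from transformers import pipeline

judge = pipeline("text-generation",
                 model="mistralai/Mistral-7B-Instruct-v0.2")

def pseudo_label(context: str, answer: str) -> int:
    prompt = (f"Context: {context}\nAnswer: {answer}\n"
              "Is the answer fully supported by the context? "
              "Reply with a single word, YES or NO.\nReply:")
    out = judge(prompt, max_new_tokens=3,
                return_full_text=False)[0]["generated_text"]
    return 0 if "YES" in out.upper() else 1  # 1 = hallucination

# The resulting (text, label) pairs would fine-tune e.g.
# microsoft/deberta-v3-base with a sequence-classification head.
```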