Priyanka-AIDS

Retrieval-based hallucination detection using FAISS, FastAPI, and Gradio

Found Apr 13, 2026 at 36 stars
Language: Python

AI Summary

This project is a research tool that detects hallucinations in AI-generated text by measuring how similar the response is to a reliable knowledge base.

How It Works

1. πŸ” Discover the Tool

You find a clever tool online that checks if AI chatbots are inventing facts in their answers.

2. πŸ’» Prepare Your Computer

You follow a simple guide to set up the basic software needed, like creating a ready-to-use workspace on your machine.

3. πŸ“₯ Grab the Smart Files

You download the knowledge-base files that give the tool its reference facts to check answers against.

4. πŸš€ Start the Demo

With one command, you launch the Gradio demo page locally in your browser to try it out.

5. ✨ Test an AI Answer

You paste in a question you asked an AI along with its reply, then hit the analyze button.

6. πŸ“Š Get Your Results

The tool highlights any made-up parts in red, gives a trust score, and explains why, so you instantly know what's reliable.

βœ… Spot Hallucinations Easily

Now you can use AI with more confidence, knowing exactly when it is grounded in facts and when it is making things up.
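The core check behind the steps above can be sketched in a few lines. This is an illustrative NumPy mock-up of retrieval-based hallucination scoring, not the repo's actual code: the real project uses FAISS and learned embeddings, and the threshold and toy vectors here are assumptions.

```python
import numpy as np

def cosine_sim(a, b):
    """Cosine similarity between two vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def hallucination_check(response_vec, kb_vecs, threshold=0.5):
    """Score a response embedding against knowledge-base embeddings.

    Returns (score, flagged): score is the best cosine similarity to
    any KB document; the response is flagged as a likely hallucination
    when no document is similar enough. The threshold is a placeholder.
    """
    sims = [cosine_sim(response_vec, kb) for kb in kb_vecs]
    score = max(sims)
    return score, score < threshold  # True -> likely hallucinated

# Toy 3-d "embeddings" standing in for a real sentence encoder.
kb = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
grounded = np.array([0.9, 0.1, 0.0])    # close to a KB document
ungrounded = np.array([0.0, 0.0, 1.0])  # orthogonal to the whole KB

print(hallucination_check(grounded, kb))    # high score, not flagged
print(hallucination_check(ungrounded, kb))  # score 0.0, flagged
```

A response that echoes the knowledge base scores near 1.0 and passes; one with no support scores near 0.0 and gets flagged.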

AI-Generated Review

What is llm-hallucination-detector?

This Python tool tackles LLM hallucination detection by comparing chatbot responses against a retrieval-based knowledge base powered by FAISS. Feed it a prompt and an LLM output via the Gradio web UI or the FastAPI endpoint, and it returns a hallucination score, a binary label, an explanation, and flagged spans, all based on cosine similarity to real documents. Developers get instant feedback on whether responses stray from grounded facts, with Docker support for easy deployment.
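The "flagged spans" output can be pictured as a per-sentence version of the similarity check. A hedged sketch, where the bag-of-words embedder, the sentence splitting, and the threshold are all stand-ins rather than the repo's actual behavior:

```python
import numpy as np

def bow_vectors(sentences):
    """L2-normalized bag-of-words vectors over a shared vocabulary.
    A toy stand-in for a real sentence-embedding model."""
    vocab = sorted({w for s in sentences for w in s.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    vecs = []
    for s in sentences:
        v = np.zeros(len(vocab))
        for w in s.lower().split():
            v[index[w]] += 1.0
        vecs.append(v / np.linalg.norm(v))
    return vecs

def flag_spans(response, kb_sentences, threshold=0.6):
    """Flag response sentences with no close match in the knowledge
    base. Splitting on '. ' and the 0.6 cutoff are assumptions."""
    sents = response.split(". ")
    vecs = bow_vectors(kb_sentences + sents)
    kb_vecs, sent_vecs = vecs[:len(kb_sentences)], vecs[len(kb_sentences):]
    flagged = []
    for sent, sv in zip(sents, sent_vecs):
        score = max(float(np.dot(sv, kv)) for kv in kb_vecs)
        if score < threshold:
            flagged.append((sent, round(score, 2)))
    return flagged

kb = ["Paris is the capital of France"]
answer = "Paris is the capital of France. The moon is made of cheese"
print(flag_spans(answer, kb))  # only the ungrounded sentence is flagged
```

The grounded sentence matches a KB document exactly and passes; the invented one shares only filler words and falls below the threshold.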

Why is it gaining traction?

It bundles a ready-to-run Gradio demo for quick testing alongside a production FastAPI API with rate limiting and OpenAPI docs, skipping the usual setup hassle. The retrieval-based approach using FAISS delivers fast, interpretable scores without heavy training, making it a lightweight alternative to black-box detectors. For retrieval-based chatbot projects on GitHub, it's a plug-and-play way to add hallucination checks.
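The "fast, interpretable scores" come from nearest-neighbor search: FAISS's `IndexFlatIP` over L2-normalized vectors returns exact inner products, which for unit vectors are cosine similarities. The same math in plain NumPy (the `faiss` calls in the comments reflect the standard FAISS API, but the dimensions and data here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
d, n = 8, 100                       # embedding dim, KB size (toy values)
kb = rng.normal(size=(n, d)).astype("float32")
kb /= np.linalg.norm(kb, axis=1, keepdims=True)   # L2-normalize rows
# FAISS equivalent:
#   index = faiss.IndexFlatIP(d); index.add(kb)

# A query that is a slightly perturbed copy of KB document 42.
query = kb[42] + 0.01 * rng.normal(size=d).astype("float32")
query /= np.linalg.norm(query)

sims = kb @ query                   # inner products == cosine similarities
best = int(np.argmax(sims))
# FAISS equivalent: scores, ids = index.search(query[None, :], k=1)
print(best, float(sims[best]))      # nearest KB document and its score
```

Because both sides are unit vectors, the inner-product search is exactly a cosine-similarity ranking, which is why the scores are directly interpretable without any trained classifier on top.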

Who should use this?

AI engineers building LLM-powered apps like customer support bots or Q&A systems, where ungrounded responses cost trust. Backend devs integrating detection into FastAPI services for real-time monitoring. Early adopters experimenting with retrieval-based hallucination detection before scaling to custom knowledge bases.

Verdict

At 36 stars and a 1.0% credibility score, it's an immature prototype with solid README docs but no tests or advanced span handling: fine for POCs, skip it for production until there's more polish. Worth forking if you need a FAISS-Gradio baseline for your own LLM detector.

