OpenErrata

Crowdsourced, inline LLM investigations of the things you're reading.

Found Feb 26, 2026 at 18 stars.
AI Summary

OpenErrata is a browser extension that uses AI to detect and highlight incorrect claims in posts on LessWrong, X/Twitter, and Substack, providing sources and explanations on hover or click.

How It Works

1. 📰 Discover OpenErrata

Hear about a browser tool that spots incorrect claims in posts on sites like LessWrong, Twitter, and Substack.

2. 📥 Download and install

Grab the extension file from the GitHub releases page and add it to Chrome in a few clicks; no technical skills needed.

3. 🔧 Quick setup

Optionally add your AI service key in settings so the extension can run fact checks on demand.

4. 🌐 Browse normally

Visit your favorite posts, and the extension quietly scans for errors in the background.

5. 🚨 Spot the issues

Incorrect claims get red underlines that draw your attention right away.

6. 💡 Get the facts

Hover for a quick correction summary, or click for the full evidence and sources.
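
The scan-and-underline flow in steps 4 through 6 can be sketched as a small pure function. The shapes below are illustrative, not OpenErrata's actual data model: given a post's text and a list of flagged claims, split the text into segments that a content script could wrap in underlined spans carrying tooltip text.

```typescript
// Hypothetical shapes; the real extension's data model may differ.
interface Flag {
  claim: string;      // exact text of the incorrect claim
  correction: string; // short summary shown in the hover tooltip
}

interface Segment {
  text: string;
  flagged: boolean;
  correction?: string;
}

// Split a post's text into plain and flagged segments, in order,
// so each flagged segment can be rendered with a red underline.
function annotate(post: string, flags: Flag[]): Segment[] {
  const segments: Segment[] = [];
  let rest = post;
  for (const flag of flags) {
    const i = rest.indexOf(flag.claim);
    if (i === -1) continue; // claim text not found verbatim; skip it
    if (i > 0) segments.push({ text: rest.slice(0, i), flagged: false });
    segments.push({ text: flag.claim, flagged: true, correction: flag.correction });
    rest = rest.slice(i + flag.claim.length);
  }
  if (rest) segments.push({ text: rest, flagged: false });
  return segments;
}
```

A content script would then map each flagged segment to a styled `<span>` and attach the correction as the hover tooltip.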

Read with confidence

Now every post you read comes with built-in truth checks for safer browsing.


Star Growth

The repo grew from 18 stars at discovery to 29.
AI-Generated Review

What is OpenErrata?

OpenErrata is a TypeScript browser extension that runs crowdsourced, inline LLM investigations of the content you're reading on sites like LessWrong, X, and Substack. It underlines unambiguously incorrect claims in posts, with hover tooltips showing corrections and sources, pulling results from a public API or triggering new checks via OpenAI. Users install manually from GitHub releases and can optionally add their own API key for on-demand investigations, while the hosted instance checks popular posts hourly.
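
The cache-or-trigger behavior described above can be sketched as a small decision helper. This is a sketch under assumed freshness semantics; the endpoint behavior, freshness window, and result shape are assumptions, not OpenErrata's documented API.

```typescript
// Hypothetical cached-result shape from the public API.
interface CachedResult {
  url: string;
  checkedAt: number; // epoch milliseconds of the last investigation
  flags: string[];
}

// Decide how to obtain results for a page: use the public cache when
// fresh enough, otherwise trigger an on-demand check (only possible
// when the user has configured their own API key).
function plan(
  cached: CachedResult | null,
  userKey: string | null,
  now: number,
  maxAgeMs = 60 * 60 * 1000, // assumed hourly freshness window
): "use-cache" | "trigger-check" | "skip" {
  if (cached && now - cached.checkedAt <= maxAgeMs) return "use-cache";
  if (userKey) return "trigger-check";
  return cached ? "use-cache" : "skip"; // a stale cache beats nothing
}
```

Without a user key the extension can only consume cached results, which matches the split between the hosted hourly instance and on-demand investigations.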

Why is it gaining traction?

Its extremely low false-positive rate (it flags only claims with concrete counter-evidence) builds trust without annoying noise, unlike broader fact-checking tools. Full transparency lets you inspect the prompts, reasoning traces, and sources behind every investigation, and a public GraphQL API exposes results with trust signals such as corroboration counts. Self-hosting via Helm charts keeps it flexible for privacy-focused users.
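
A query against the public GraphQL API might look like the following. The field names here are guesses for illustration only; consult the repo's SPEC.md for the actual schema.

```graphql
# Hypothetical query: fetch investigations for one post, including
# the corroboration-count trust signal mentioned above.
query InvestigationsForPost($url: String!) {
  investigations(postUrl: $url) {
    claim
    verdict
    corroborationCount
    sources {
      title
      url
    }
  }
}
```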

Who should use this?

Rationalist bloggers and LessWrong/Substack readers wanting passive fact-checks during browsing. Twitter power-users tired of unchecked claims in threads. Devs building reading apps who need an embeddable LLM investigation layer via the public API.

Verdict

Promising niche tool for inline reading aids, but at 16 stars and a 1.0% credibility score it's early alpha. The docs are solid, with a full SPEC.md, and local development is straightforward via Docker and pnpm, but expect rough edges in production. Try the extension on your feeds if low false positives matter more to you than broad coverage.

