coolgenerator

Evidence-driven causal reasoning engine — build knowledge graphs, propagate beliefs, and explore what-if scenarios with AI-powered analysis.

Found Mar 16, 2026 at 19 stars
AI Summary

CurioCat transforms unstructured text into interactive causal graphs grounded in web evidence to reveal hidden cause-and-effect relationships for informed decisions.

How It Works

1. 🔍 Discover CurioCat

You hear about a helpful tool that turns news articles, business ideas, or plans into clear maps of why things happen.

2. ⚙️ Get ready quickly

Follow simple steps to connect a smart thinking service and launch the app on your computer.

3. 📝 Paste your text

Type a title and add any article, pitch, or idea you want to understand better.

4. Watch it build

Live updates show claims appearing, links forming between them, and real evidence popping up to back each connection.

5. 🧭 Explore the map

Zoom around the colorful graph, click links to see proof, and adjust strengths to test your hunches.

6. Try what-ifs

📊 Compare views

See exactly where beliefs differ between scenarios.

📤 Save report

Download your insights as a shareable document.

Decide confidently

With transparent causes, strong evidence, and tested scenarios, you make better choices.
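The "Compare views" step ("see exactly where beliefs differ between scenarios") can be pictured as a simple diff over per-claim belief scores; the data shapes, function name, and threshold below are illustrative assumptions, not CurioCat's actual API.

```python
def diff_scenarios(baseline, fork, threshold=0.05):
    """Return (claim_id, before, after) for claims whose belief shifted
    by more than `threshold` between the baseline graph and a what-if fork."""
    diffs = [
        (claim_id, before, fork[claim_id])
        for claim_id, before in baseline.items()
        if claim_id in fork and abs(fork[claim_id] - before) > threshold
    ]
    # Largest shifts first, so the biggest disagreements surface on top.
    return sorted(diffs, key=lambda d: abs(d[2] - d[1]), reverse=True)

baseline = {"c1": 0.80, "c2": 0.55, "c3": 0.30}
fork     = {"c1": 0.78, "c2": 0.20, "c3": 0.65}
print(diff_scenarios(baseline, fork))
```

A threshold keeps tiny numerical drift (like c1 above) out of the comparison so only meaningful belief shifts surface.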


AI-Generated Review

What is CurioCat?

CurioCat is an evidence-driven causal reasoning engine that transforms unstructured text (policy briefs, news articles, or hypotheses) into interactive knowledge graphs. Feed it content, and it extracts atomic claims, infers causal links grounded in web evidence via Brave Search, propagates beliefs with Noisy-OR modulation, and lets you explore what-if scenarios through edge tweaks and forks. The AI-powered analysis streams live via SSE, with React+D3 visualization, a Python FastAPI backend, and PostgreSQL/pgvector storage with PL/pgSQL functions.
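As a rough illustration of the Noisy-OR step, a node's belief can be combined from its parents as sketched below; the function signature and leak term are my assumptions, not the repo's actual code.

```python
def noisy_or(parent_beliefs, edge_strengths, leak=0.0):
    """Noisy-OR combination: the effect stays 'off' only if every cause
    independently fails to trigger it (and the leak term does not fire)."""
    p_off = 1.0 - leak
    for belief, strength in zip(parent_beliefs, edge_strengths):
        # Each parent fires with probability belief * strength.
        p_off *= 1.0 - belief * strength
    return 1.0 - p_off

# Two causes, each fairly believed, with moderately strong edges:
print(round(noisy_or([0.9, 0.6], [0.8, 0.5]), 3))  # → 0.804
```

Note how adding more supported causes only ever raises the belief, which suits an evidence-accumulation model.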

Why is it gaining traction?

It stands out with its anti-hallucination defenses: bias penalties for 8 fallacies, zero-evidence edges blocked outright, and source diversity scoring, delivering trustworthy causal graphs where generic LLMs hallucinate. Users get real-time belief propagation, critical path highlighting, and exportable Markdown/JSON reports, ideal for iterative analysis without rebuilding from scratch. Docker Compose and seeded demos like "AI replacing programmers" hook devs fast.
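The zero-evidence gate and source diversity scoring described above might look something like this sketch; the function names and scoring formula are assumptions on my part, not CurioCat's actual implementation.

```python
from urllib.parse import urlparse

def source_diversity(evidence_urls):
    """Fraction of evidence links that come from distinct domains (0..1);
    many links from a single site still score low."""
    if not evidence_urls:
        return 0.0
    domains = {urlparse(u).netloc for u in evidence_urls}
    return len(domains) / len(evidence_urls)

def accept_edge(evidence_urls):
    """Anti-hallucination rule: block any causal edge with zero evidence."""
    return len(evidence_urls) > 0

print(accept_edge([]))  # → False
print(source_diversity(["https://a.com/1", "https://a.com/2",
                        "https://b.org/3"]))
```

Gating on evidence before an edge ever enters the graph is what keeps the belief propagation from amplifying invented links.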

Who should use this?

Strategy analysts building causal models from reports, researchers validating causal narratives, or PMs exploring scenario diffs for product roadmaps. Suited for teams doing evidence-driven reasoning on complex topics like market shifts or policy impacts, skipping manual diagramming tools.

Verdict

Solid prototype for causal graphs and belief propagation, but 19 stars and 1.0% credibility signal early days: docs are strong and setup is smooth, yet expect rough edges without full test coverage. Fire up the demo; if AI-powered causal analysis fits your workflow, prototype with it now.
