lambdaclass

CommitLLM is a cryptographic commit-and-audit protocol for open-weight LLM inference.

Found Apr 01, 2026 at 25 stars.
AI Analysis

Language: Rust

AI Summary

CommitLLM enables users to obtain cryptographic proofs from LLM providers that they executed the specified open-weight model, configuration, and sampling policy without alteration.

How It Works

1. 🔍 Discover trustworthy AI chats

You hear about a way to chat with powerful AI models while getting proof they used the exact model and settings promised.

2. 🛡️ Prepare your personal verifier

Download the model's public files and create a private checker on your computer to watch for honest answers.

3. 💬 Send your question to the AI

Ask the AI service your question; it responds normally, attaching a tiny receipt proving its work.

4. Spot-check for honesty

Pick a word from the answer to double-check, and ask the service for the detailed math behind it.

5. 📄 Receive the proof details

The service sends back the step-by-step numbers from its computation, ready for your checker.

6. Confirm it's real

Your computer quickly verifies the proof, giving you confidence the AI answered truthfully with the right model.
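The steps above amount to a commit-then-audit round trip. A minimal Rust sketch of the commitment and the final check, with all names hypothetical (not CommitLLM's actual API) and std's `DefaultHasher` standing in for a real cryptographic hash:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Hypothetical sketch: struct and function names are illustrative, not
// CommitLLM's API. DefaultHasher stands in for a cryptographic hash.

/// Hash several byte strings into one digest.
fn digest(parts: &[&[u8]]) -> u64 {
    let mut h = DefaultHasher::new();
    for p in parts {
        p.hash(&mut h);
    }
    h.finish()
}

/// The "tiny receipt" of step 3: one value binding weights, config,
/// prompt, and output together.
struct Receipt {
    commitment: u64,
}

/// Provider side: commit to everything that produced the answer.
fn provider_commit(weights: &[u8], config: &[u8], prompt: &[u8], output: &[u8]) -> Receipt {
    Receipt { commitment: digest(&[weights, config, prompt, output]) }
}

/// Step 6, user side: recompute the commitment from public data and the
/// revealed details, then compare it against the receipt.
fn user_verify(r: &Receipt, weights: &[u8], config: &[u8], prompt: &[u8], output: &[u8]) -> bool {
    r.commitment == digest(&[weights, config, prompt, output])
}
```

Tampering with any bound input, say the weights or the decode config, changes the recomputed digest, so `user_verify` fails.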

AI-Generated Review

What is CommitLLM?

CommitLLM delivers a Rust-based cryptographic commit-and-audit protocol for open-weight LLM inference. Providers run models normally on GPUs and capture a compact receipt binding the trace, weights, config, prompt, and sampling randomness; users then verify outputs on CPU against public checkpoints without trusting the server. It closes the core trust gap: proving exact model execution, decode policy, and untampered results without heavy ZK proofs.
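The per-token audit can be pictured as a CPU replay of one decode step: the provider reveals the logits at the challenged position, and the verifier re-applies the committed sampling policy. A toy sketch, where greedy argmax stands in for the real seeded sampling policy and none of the names are CommitLLM's API:

```rust
// Hypothetical sketch of a single-token audit; not CommitLLM's API.
// Greedy argmax stands in for the committed (seeded) sampling policy.

/// Index of the largest logit, i.e. the greedy-decoded token id.
fn argmax(logits: &[f32]) -> usize {
    logits
        .iter()
        .enumerate()
        .max_by(|a, b| a.1.partial_cmp(b.1).unwrap())
        .map(|(i, _)| i)
        .unwrap()
}

/// Replay one decode step: does the committed policy, applied to the
/// logits the provider revealed, reproduce the token it claimed?
fn audit_token(revealed_logits: &[f32], claimed_token: usize) -> bool {
    argmax(revealed_logits) == claimed_token
}
```

Because only challenged positions are replayed, the verifier never runs the full model, which is what keeps routine audits cheap.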

Why is it gaining traction?

Unlike ZK systems with massive prover overhead, CommitLLM adds just 12-14% tracing latency and verifies challenged tokens in milliseconds (routine audits take 1.3 ms for Llama 70B). Devs love the normal serving path, CPU-only checks via Freivalds and replay, and end-to-end binding of weights, config, and sampling. It's a lightweight drop-in for verifiable inference on quantized open-weight LLMs.
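Freivalds' algorithm is what makes matrix-heavy checks CPU-cheap: rather than recomputing a claimed product C = A·B in O(n³), the verifier tests A·(B·r) = C·r for random vectors r, costing O(n²) per round. A self-contained sketch over integer matrices (a toy LCG replaces a proper RNG, which a real verifier would need):

```rust
// Freivalds' check: verify a claimed matrix product C = A * B in O(n^2)
// per round instead of recomputing the O(n^3) product. Integer matrices
// and a toy LCG keep the sketch dependency-free; real use needs a CSPRNG.

/// Matrix-vector product over i64.
fn mat_vec(m: &[Vec<i64>], v: &[i64]) -> Vec<i64> {
    m.iter()
        .map(|row| row.iter().zip(v).map(|(a, b)| a * b).sum())
        .collect()
}

/// Accept iff A * (B * r) == C * r for `rounds` random 0/1 vectors r.
/// A wrong C slips through one round with probability at most 1/2.
fn freivalds(a: &[Vec<i64>], b: &[Vec<i64>], c: &[Vec<i64>], rounds: u32) -> bool {
    let n = b[0].len();
    let mut state: u64 = 0x9E3779B97F4A7C15; // fixed seed for the sketch
    for _ in 0..rounds {
        // Draw a random 0/1 vector r from a simple LCG.
        let r: Vec<i64> = (0..n)
            .map(|_| {
                state = state
                    .wrapping_mul(6364136223846793005)
                    .wrapping_add(1442695040888963407);
                ((state >> 33) & 1) as i64
            })
            .collect();
        // Two O(n^2) products on each side, never the O(n^3) one.
        if mat_vec(a, &mat_vec(b, &r)) != mat_vec(c, &r) {
            return false;
        }
    }
    true
}
```

Running enough rounds drives the false-accept probability below 2^(-rounds), which is why even millisecond-scale audits can carry real weight.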

Who should use this?

AI service operators offering outsourced LLM inference who need client-side proofs of model integrity. Security researchers auditing open-weight deployments for policy compliance or tampering. Teams integrating verifiable APIs into apps handling sensitive prompts or decisions.

Verdict

Promising prototype for commit-and-audit LLM verification: test it if you're building trustable inference pipelines. At 24 stars it's early (active dev, solid benchmarks, Lean proofs underway), so expect rough edges before production.

