kreuzberg-dev

Universal LLM API client — 142+ providers, 11 native language bindings, powered by Rust core

10 stars · 2 forks · 100% credibility
Found Mar 29, 2026 at 11 stars by GitGems.
AI Summary

liter-llm is a Rust-powered universal client library that gives 11 languages access to 142+ LLM providers, with features like streaming, caching, rate limiting, and tool calling.

How It Works

1. 👀 Discover a simple way to chat with any AI

You hear about liter-llm, a friendly tool that lets you talk to hundreds of AI models from one place, no matter what coding language you use.

2. 📦 Pick your favorite language and add it

Choose Python, JavaScript, or any of the 11 supported languages and install with a single command, like pip or npm.

3. 🔑 Link your AI account

Enter your provider API key once, and it's securely saved for all your chats.

4. 💬 Ask questions to any AI instantly

Write a simple message like 'Hello!' and get replies from models like GPT or Claude, streaming in word by word as they're generated.

5. 🛡️ Turn on helpful extras

Add caching to reuse earlier answers, spending limits to stay within budget, or rate controls to avoid hitting provider limits.
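Step 4's live streaming can be pictured with a small stand-in. liter-llm's real API isn't reproduced here; the generator below only mimics how a streamed reply surfaces token by token while the model is still generating, and every name in it is illustrative.

```python
import time

def fake_stream(reply: str, delay: float = 0.0):
    """Mimic a streamed chat reply: yield the answer one token at a time,
    the way a streaming client surfaces partial output as it arrives."""
    for token in reply.split():
        time.sleep(delay)  # stand-in for network latency between chunks
        yield token + " "

# Consume the stream the way a caller would: collect pieces as they arrive.
chunks = []
for chunk in fake_stream("Hello! How can I help you today?"):
    chunks.append(chunk)

full_reply = "".join(chunks).strip()
```

A real streaming client works the same way from the caller's side: you iterate over partial chunks and render them immediately instead of blocking until the full reply exists.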
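Step 5's extras — reusing old answers and rate control — can be sketched generically. This is not liter-llm's implementation; it is a minimal toy showing the two ideas (an in-memory response cache and a token-bucket limiter) around a stubbed model call, with all names invented for illustration.

```python
import time

class TokenBucket:
    """Simple rate limiter: allow at most `rate` calls per second,
    with short bursts up to `capacity`."""
    def __init__(self, rate: float, capacity: int):
        self.rate, self.capacity = rate, capacity
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill tokens in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

cache: dict[str, str] = {}
calls = 0

def ask_model(prompt: str) -> str:
    """Stubbed model call; a real client would hit a provider API here."""
    global calls
    calls += 1
    return f"echo: {prompt}"

def ask_cached(prompt: str) -> str:
    """Reuse an earlier answer for an identical prompt instead of re-calling."""
    if prompt not in cache:
        cache[prompt] = ask_model(prompt)
    return cache[prompt]

a = ask_cached("What is Rust?")
b = ask_cached("What is Rust?")  # served from cache; no second model call
```

Production clients layer the same two mechanisms (plus budgets and health checks) transparently around every request, which is why the caller's code doesn't change when they are switched on.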

Unlock AI magic across all your projects

Now you chat effortlessly with any AI anywhere, saving time and getting creative results every day.

AI-Generated Review

What is liter-llm?

liter-llm is a universal LLM API client that unifies access to 142+ providers like OpenAI, Anthropic, Groq, and Mistral through a single interface. Powered by a Rust core with native bindings for 11 languages—Rust, Python, Node.js, Go, Java, and more—it lets you route requests via simple provider/model prefixes like "openai/gpt-4o". Developers get consistent chat completions, embeddings, streaming, and tool calling without per-provider code changes.
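The provider/model prefix routing described above can be sketched generically. liter-llm's actual dispatch code isn't shown here, so the registry and client callables below are hypothetical stand-ins for real provider clients; only the "provider/model" string convention comes from the source.

```python
def split_model_id(model_id: str) -> tuple[str, str]:
    """Split a prefixed id like 'openai/gpt-4o' into ('openai', 'gpt-4o')."""
    provider, _, model = model_id.partition("/")
    if not model:
        raise ValueError(f"expected 'provider/model', got {model_id!r}")
    return provider, model

# Hypothetical registry mapping provider names to client callables.
REGISTRY = {
    "openai":    lambda model, prompt: f"[openai:{model}] {prompt}",
    "anthropic": lambda model, prompt: f"[anthropic:{model}] {prompt}",
}

def chat(model_id: str, prompt: str) -> str:
    """One entry point, many providers: route by prefix to the right client."""
    provider, model = split_model_id(model_id)
    return REGISTRY[provider](model, prompt)

reply = chat("openai/gpt-4o", "Hello!")
```

The payoff of this pattern is that switching vendors is a one-string change at the call site; no per-provider client code leaks into application logic.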

Why is it gaining traction?

Unlike Python-heavy alternatives vulnerable to supply chain attacks, its compiled Rust core keeps dependencies minimal and secrets zeroed in memory. Polyglot bindings prevent API drift across teams, while Tower middleware stacks rate limiting, caching (40+ backends), fallbacks, and OpenTelemetry tracing right out of the box. The 142+ provider registry and TOML config make multi-provider routing dead simple.
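The TOML-driven multi-provider routing mentioned above might look roughly like this. The schema is a guess for illustration only, not liter-llm's documented format; section and key names are invented.

```toml
# Hypothetical config sketch: all section and key names are illustrative.
[providers.openai]
api_key_env = "OPENAI_API_KEY"

[providers.anthropic]
api_key_env = "ANTHROPIC_API_KEY"

[middleware.rate_limit]
requests_per_minute = 60

[middleware.cache]
backend = "redis"
ttl_seconds = 300

[routing]
default = "openai/gpt-4o"
fallbacks = ["anthropic/claude-3-5-sonnet"]
```

Keeping routing and middleware in config rather than code is what lets operators retarget providers or tune limits without redeploying the application.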

Who should use this?

Backend engineers juggling LLM vendors in microservices. Polyglot teams building AI pipelines across Python, JS, and Go without reinventing clients. Prod-focused devs needing built-in budgets, health checks, and semantic caching for reliable inference at scale.

Verdict

Promising universal LLM API client for cross-language teams, with strong docs and e2e tests despite only 10 stars and a 1.0% credibility score. Maturity is early; adopt for greenfield projects, but watch for wider adoption before relying on it heavily.

