raphaelmansuy/edgequake-llm

Unified LLM provider abstraction for Rust - support for OpenAI, Anthropic, Gemini, xAI, OpenRouter, and more

27 stars · 2 forks · 100% credibility
Found Feb 18, 2026 at 11 stars
AI Summary

A Rust library that provides a unified interface to multiple AI language model and embedding services, with built-in response caching, rate limiting, and cost tracking.

How It Works

1. 🕵️ Discover EdgeQuake LLM: you stumble upon this handy toolkit while looking for ways to add smart conversations to your project.

2. 📦 Add it to your app: with one easy step, you bring the toolkit into your creation, ready to connect smart helpers.

3. 🔗 Link your smart services: you connect popular AI providers or run local ones, so everything talks the same language.

4. 💬 Chat with AI: you send a message and instantly get clever, helpful replies back.

5. Enjoy speed and savings: repeated questions are answered faster with built-in caching, and you track costs without worry.

Your smart app thrives: now your project chats smoothly, stays affordable, and feels magical to use.
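Step 2 above would normally be a single Cargo.toml entry; a sketch, where the version number is a placeholder assumption (check crates.io for the actual release):

```toml
[dependencies]
# Hypothetical version; pin to whatever the crate actually publishes.
edgequake-llm = "0.1"
```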


AI-Generated Review

What is edgequake-llm?

Edgequake-llm is a Rust crate that provides a unified LLM provider abstraction, letting you swap between OpenAI, Anthropic, Gemini, xAI, OpenRouter, Ollama, LM Studio, and more without rewriting code. It handles the messy parts like API differences, so you get consistent chat completions, embeddings, and streaming across providers. Developers get caching to slash repeat query costs, built-in rate limiting, retry logic, and session cost tracking right out of the box.
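The crate's actual trait names aren't shown on this page, so here is a minimal sketch of the trait-object pattern a unified provider abstraction like this typically uses; `ChatProvider`, `MockOpenAi`, and `MockAnthropic` are hypothetical names, not the crate's API:

```rust
// Unified abstraction sketch: callers depend on one trait, so swapping
// providers never touches call sites.
trait ChatProvider {
    fn name(&self) -> &str;
    fn complete(&self, prompt: &str) -> String;
}

struct MockOpenAi;
impl ChatProvider for MockOpenAi {
    fn name(&self) -> &str { "openai" }
    fn complete(&self, prompt: &str) -> String {
        format!("[openai reply to: {}]", prompt)
    }
}

struct MockAnthropic;
impl ChatProvider for MockAnthropic {
    fn name(&self) -> &str { "anthropic" }
    fn complete(&self, prompt: &str) -> String {
        format!("[anthropic reply to: {}]", prompt)
    }
}

// Application code is written once, against the trait.
fn ask(provider: &dyn ChatProvider, prompt: &str) -> String {
    provider.complete(prompt)
}

fn main() {
    let providers: Vec<Box<dyn ChatProvider>> =
        vec![Box::new(MockOpenAi), Box::new(MockAnthropic)];
    for p in &providers {
        // Same call, different backend.
        println!("{} -> {}", p.name(), ask(p.as_ref(), "hello"));
    }
}
```

Mock implementations like these are also what makes the "mock testing" mentioned below cheap: tests exercise the trait without ever hitting a paid API.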

Why is it gaining traction?

Unlike scattered provider SDKs or Python-focused unified LLM APIs like LiteLLM, this offers a native Rust unified LLM client with production extras: intelligent response caching (in-memory or persistent), OpenTelemetry observability, and reranking for better retrieval. The factory auto-detects your setup from env vars, supports local runs via Ollama/LM Studio, and tracks costs precisely, which is perfect for cost-conscious teams juggling providers. Early adopters praise the clean traits for multi-provider fallbacks and mock testing.
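The caching API itself isn't shown on this page; a minimal sketch of the idea behind response caching, using a plain `HashMap` keyed by prompt (all names hypothetical):

```rust
use std::collections::HashMap;

// Caching layer sketch: identical prompts are answered from memory instead
// of re-calling the provider, which is where repeat-query savings come from.
struct CachedClient {
    cache: HashMap<String, String>,
    calls: u32, // how many times the (simulated) provider was actually hit
}

impl CachedClient {
    fn new() -> Self {
        Self { cache: HashMap::new(), calls: 0 }
    }

    fn complete(&mut self, prompt: &str) -> String {
        if let Some(hit) = self.cache.get(prompt) {
            return hit.clone(); // cache hit: no provider call, no cost
        }
        self.calls += 1; // simulate one billable provider call
        let reply = format!("reply to: {}", prompt);
        self.cache.insert(prompt.to_string(), reply.clone());
        reply
    }
}

fn main() {
    let mut client = CachedClient::new();
    client.complete("What is Rust?");
    client.complete("What is Rust?"); // second ask is served from cache
    println!("provider calls: {}", client.calls); // 1, not 2
}
```

A persistent variant would swap the `HashMap` for an on-disk store; the call-site shape stays the same.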

Who should use this?

Rust backend devs building AI agents, RAG pipelines, or chat apps who switch between cloud LLMs like Anthropic's Claude and local models. Ideal for teams needing a unified LLM gateway to monitor spend across OpenRouter routes or xAI Grok without vendor lock-in. Skip if you're deep in Python ecosystems or single-provider setups.
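The multi-provider fallback called out above can be sketched as trying providers in order until one succeeds; again, every name here is an illustrative assumption, not the crate's API:

```rust
// Fallback sketch: try each provider in order, return the first success.
trait Provider {
    fn complete(&self, prompt: &str) -> Result<String, String>;
}

struct Flaky; // simulates a provider that is currently failing
impl Provider for Flaky {
    fn complete(&self, _prompt: &str) -> Result<String, String> {
        Err("rate limited".to_string())
    }
}

struct Reliable; // simulates a healthy provider
impl Provider for Reliable {
    fn complete(&self, prompt: &str) -> Result<String, String> {
        Ok(format!("ok: {}", prompt))
    }
}

fn complete_with_fallback(chain: &[&dyn Provider], prompt: &str) -> Result<String, String> {
    let mut last_err = "no providers configured".to_string();
    for p in chain {
        match p.complete(prompt) {
            Ok(reply) => return Ok(reply),
            Err(e) => last_err = e, // remember the error, try the next one
        }
    }
    Err(last_err)
}

fn main() {
    let chain: [&dyn Provider; 2] = [&Flaky, &Reliable];
    println!("{:?}", complete_with_fallback(&chain, "hello"));
}
```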

Verdict

Grab it for Rust AI prototypes: solid docs, examples, and tests make it dev-friendly, even though 10 stars and a 1.0% credibility score at review time signal early maturity. Production? Add your own benchmarks first, but the abstraction pays off fast for multi-provider work.
