Devansh-365 / freellm

Public

OpenAI-compatible gateway aggregating 6 free LLM providers with automatic failover, circuit breakers, and smart routing. 25+ models, zero cost.

100% credibility

Found Apr 10, 2026 at 16 stars.
AI Summary (TypeScript)

FreeLLM is an open-source gateway that provides OpenAI-compatible access to free tiers of multiple AI providers with automatic failover, caching, monitoring, and multi-tenant features.
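The failover and circuit-breaker behavior described above can be sketched as follows; the failure threshold, cooldown period, and provider shape are illustrative assumptions, not freellm's actual implementation.

```typescript
// Sketch: try providers in order, skipping any whose circuit breaker has
// tripped after repeated failures. Thresholds/cooldowns are assumptions.
type Provider = { name: string; call: (prompt: string) => Promise<string> };

class Breaker {
  private failures = 0;
  private openUntil = 0;
  constructor(private threshold = 3, private cooldownMs = 30_000) {}
  // While "open", the provider is skipped until the cooldown elapses.
  get open(): boolean {
    return Date.now() < this.openUntil;
  }
  recordSuccess(): void {
    this.failures = 0;
  }
  recordFailure(): void {
    if (++this.failures >= this.threshold) {
      this.openUntil = Date.now() + this.cooldownMs;
      this.failures = 0;
    }
  }
}

async function withFailover(
  providers: Provider[],
  breakers: Map<string, Breaker>,
  prompt: string,
): Promise<string> {
  for (const p of providers) {
    const b = breakers.get(p.name)!;
    if (b.open) continue; // skip tripped providers
    try {
      const out = await p.call(prompt);
      b.recordSuccess();
      return out;
    } catch {
      b.recordFailure(); // fall through to the next provider
    }
  }
  throw new Error("all providers unavailable");
}
```

A failed provider only counts against its own breaker, so one flaky upstream never blocks the healthy ones.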

How It Works

1. 🌟 Discover FreeLLM: You hear about a way to chat with smart AI helpers for free, without needing a credit card.
2. 🚀 Launch your helper: Click a button to start your free AI gateway online, or run it easily on your computer.
3. 🔗 Connect free services: Link accounts from friendly AI companies like Groq or Google so your helper always has options.
4. 💻 Update your app: Change one line in your chat program to point to your new free helper.
5. 🗣️ Start chatting: Type a question and watch your AI respond quickly, switching services if one is busy.
6. 📊 Check the dashboard: Open a simple screen to see which service answered, speeds, and usage stats.

🎉 Free AI magic: Enjoy endless smart conversations and help for your projects, all at zero cost.
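Step 4 above (the one-line change) can be sketched like this; the local host and port are assumptions for a default deployment, and the payload follows the standard OpenAI chat-completions shape.

```typescript
// Minimal sketch: swap the OpenAI base URL for a locally running FreeLLM
// gateway. The host/port is an assumption for a default local deployment.
const GATEWAY_URL = "http://localhost:3000/v1"; // was: "https://api.openai.com/v1"

// Build a standard OpenAI-compatible chat-completions payload.
function buildChatRequest(model: string, prompt: string) {
  return { model, messages: [{ role: "user" as const, content: prompt }] };
}

async function ask(prompt: string): Promise<string> {
  const res = await fetch(`${GATEWAY_URL}/chat/completions`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(buildChatRequest("free-fast", prompt)),
  });
  const data = await res.json();
  return data.choices[0].message.content;
}
```

Because the request body is unchanged, any OpenAI SDK pointed at `GATEWAY_URL` should work the same way.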

AI-Generated Review

What is freellm?

freellm is a TypeScript, OpenAI-compatible API gateway that aggregates six free LLM providers (Groq, Gemini, Mistral, Cerebras, NVIDIA NIM, and Ollama), exposing 25+ models through a single endpoint. Developers swap their OpenAI base URL to get automatic failover, circuit breakers, and meta-models such as `free-fast` for speed or `free-smart` for reasoning, stacking provider keys for roughly 360 free requests per minute without 429 errors. Run it locally with Docker or deploy it to Render or Railway for instant free LLM API access.
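The key-stacking idea above can be sketched as a simple round-robin pool; the class and the per-key rate figures are illustrative assumptions, not freellm's actual code.

```typescript
// Illustrative sketch: rotate requests across several free-tier API keys so
// the aggregate rate limit is the sum of the individual limits (e.g. six
// keys at ~60 RPM each is roughly 360 RPM). Not freellm's actual code.
class KeyPool {
  private next = 0;
  constructor(private readonly keys: string[]) {
    if (keys.length === 0) throw new Error("KeyPool needs at least one key");
  }
  // Pick the next key in round-robin order, wrapping around at the end.
  acquire(): string {
    const key = this.keys[this.next];
    this.next = (this.next + 1) % this.keys.length;
    return key;
  }
}

// Usage: attach the rotated key to each outgoing provider request.
const pool = new KeyPool(["groq-key-1", "gemini-key-1", "mistral-key-1"]);
```

A real gateway would also track per-key usage and skip keys that are rate-limited, but round-robin captures the core idea.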

Why is it gaining traction?

Unlike scattered free LLM APIs, freellm acts as a drop-in OpenAI-compatible gateway, so it works with any AI SDK, LangChain, or Vercel AI setup with no code changes beyond the base URL. Response caching serves duplicate requests in about 23 ms, a real-time dashboard tracks provider health and usage, and browser tokens plus virtual sub-keys enable safe multi-tenant access. It positions itself as the hub for zero-cost LLM routing with smart failover.
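The response caching mentioned above can be sketched as a lookup keyed on the request body; the key scheme (JSON of the full body) is an assumption, and freellm's real cache may differ.

```typescript
// Sketch of response caching for identical chat requests: the first call
// goes upstream, identical repeats are served from memory (the fast path
// behind the ~23 ms figure above). Key derivation is an assumption.
type ChatBody = { model: string; messages: { role: string; content: string }[] };

const cache = new Map<string, string>();

async function cachedComplete(
  body: ChatBody,
  upstream: (b: ChatBody) => Promise<string>, // a real impl calls a provider
): Promise<{ text: string; cached: boolean }> {
  const key = JSON.stringify(body);
  const hit = cache.get(key);
  if (hit !== undefined) return { text: hit, cached: true }; // cache hit
  const text = await upstream(body);
  cache.set(key, text);
  return { text, cached: false };
}
```

A production cache would also bound its size and expire entries, but the hit/miss flow is the same.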

Who should use this?

Indie hackers prototyping AI side projects or MVPs who need free LLM access for coding without OpenAI bills. Frontend devs adding OpenAI-compatible chat to static sites via browser tokens. Backend teams building Dify-like apps or OpenClaw tools that want a reliable hosted free-LLM gateway.
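The virtual sub-keys that make multi-tenant use possible can be sketched like this; the key prefix, fields, and function names are hypothetical, not freellm's actual API.

```typescript
// Hypothetical sketch of virtual sub-keys: each tenant gets its own key that
// the gateway resolves back to real provider credentials, so the real keys
// never reach the client. Names and fields are assumptions.
interface SubKey {
  key: string;
  tenant: string;
  rpmLimit: number; // per-tenant rate limit enforced at the gateway
}

const subKeys = new Map<string, SubKey>();

function issueSubKey(tenant: string, rpmLimit: number): SubKey {
  const key = `vk-${tenant}-${Math.random().toString(36).slice(2, 10)}`;
  const sk: SubKey = { key, tenant, rpmLimit };
  subKeys.set(key, sk);
  return sk;
}

// The gateway resolves an incoming sub-key to its tenant (or rejects it)
// before forwarding the request to an upstream provider.
function resolveSubKey(key: string): SubKey | undefined {
  return subKeys.get(key);
}
```

This is why a browser token can be shipped in a static site: revoking or rate-limiting one tenant's sub-key never exposes or affects the underlying provider keys.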

Verdict

Grab it for free LLM experimentation (the Docker quickstart and docs shine), but with 16 stars and 1.0% credibility, validate failover in staging first. Test coverage is thin; it is ideal for personal use, not high-scale production yet.


