yuseferi/opencode-litellm

OpenCode plugin for LiteLLM proxy support with auto-detection and dynamic model discovery

AI Summary

Drop-in plugin for OpenCode that automatically detects a LiteLLM proxy, discovers its models, and registers them in the app's model picker without manual configuration.

How It Works

1
🔍 Find the Plugin

You hear about a lightweight plugin that lets OpenCode automatically discover every model your LiteLLM proxy serves, with no hand-maintained model list.

2
📥 Install It

You add the plugin to your OpenCode configuration with a single line (see the config sketch after this walkthrough).

3
🔌 Connect Automatically

The plugin probes the usual local ports for a running LiteLLM proxy and connects to it, sending your API key if one is configured.

4
👀 Watch Models Appear

Start your LiteLLM proxy if it isn't already running, then launch OpenCode: every model the proxy serves appears in the model picker, ready to go.

5
🗣️ Chat with Any Model

Browse the full list, pick the model you want, and start asking questions or generating code.

🎉 AI Magic Unlocked

Your assistant now has instant access to every model behind the proxy, with no tedious config edits or restarts between model changes.
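Concretely, the single line from step 2 is the plugin entry in `opencode.json` (quoted in the review below). A minimal sketch — only the `plugin` array matters here; any other keys in your existing config are left as they are:

```json
{
  "plugin": ["opencode-plugin-litellm@latest"]
}
```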

AI-Generated Review

What is opencode-litellm?

opencode-litellm is a TypeScript plugin that adds drop-in LiteLLM proxy support to OpenCode, auto-detecting a running instance on common ports like 4000 or 8000 and dynamically pulling models from `/v1/models`. It solves the pain of manually listing models in `opencode.json`: install via npm, add `"plugin": ["opencode-plugin-litellm@latest"]`, and every model from your LiteLLM `config.yaml` appears in OpenCode's picker with formatted names and inferred modalities. API keys can be supplied via environment variables, and a custom baseURL supports remote proxies.
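To make the mechanics concrete, here is a minimal sketch of what auto-detection plus discovery amounts to, assuming the OpenAI-compatible `GET /v1/models` endpoint LiteLLM exposes; the function names, probing order, and fallback behavior are illustrative, not the plugin's actual API:

```typescript
// Illustrative sketch — not the plugin's real internals. Assumes the
// OpenAI-compatible /v1/models response LiteLLM serves: { data: [{ id, owned_by }] }.

interface ModelEntry {
  id: string; // e.g. "anthropic/claude-3-5-sonnet"
  owned_by?: string;
}

async function discoverModels(baseURL: string, apiKey?: string): Promise<ModelEntry[]> {
  // Non-blocking: abort after 5 seconds so OpenCode startup is never stalled.
  const controller = new AbortController();
  const timer = setTimeout(() => controller.abort(), 5_000);
  try {
    const res = await fetch(`${baseURL}/v1/models`, {
      headers: apiKey ? { Authorization: `Bearer ${apiKey}` } : {},
      signal: controller.signal,
    });
    if (!res.ok) return [];
    const body = (await res.json()) as { data?: ModelEntry[] };
    return body.data ?? [];
  } catch {
    return []; // proxy unreachable: leave manually configured models untouched
  } finally {
    clearTimeout(timer);
  }
}

// Auto-detection: probe the common local ports mentioned above.
async function detectProxy(): Promise<string | undefined> {
  for (const port of [4000, 8000]) {
    const baseURL = `http://localhost:${port}`;
    if ((await discoverModels(baseURL)).length > 0) return baseURL;
  }
  return undefined;
}
```

The 5-second abort mirrors the non-blocking timeout the review notes next; returning an empty list on failure is what lets a merge stay non-destructive.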

Why is it gaining traction?

Unlike a static, hand-maintained model list, it offers zero-config discovery, smart name formatting (e.g., `anthropic/claude-3-5-sonnet` becomes "Claude 3 5 Sonnet"), and automatic routing of reasoning models like o1/o3 through `/v1/responses` to avoid tool-calling errors. Non-blocking 5-second timeouts and non-destructive merges preserve your hand-curated entries, so the integration stays seamless as the proxy's model list changes. The provider also handles auth, model owners, and multi-modal types out of the box.
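The name formatting is easy to picture: drop the provider prefix, split on separators, and title-case the pieces. A minimal sketch that reproduces the example above (the plugin's real rules may handle more cases):

```typescript
// Turn a proxy model id like "anthropic/claude-3-5-sonnet" into a display name.
// Reproduces the example above; the plugin's actual logic may be more elaborate.
function formatModelName(id: string): string {
  const bare = id.includes("/") ? id.slice(id.indexOf("/") + 1) : id;
  return bare
    .split(/[-_.]/)
    .filter(Boolean)
    .map((part) => part.charAt(0).toUpperCase() + part.slice(1))
    .join(" ");
}

console.log(formatModelName("anthropic/claude-3-5-sonnet")); // "Claude 3 5 Sonnet"
```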

Who should use this?

AI engineers building OpenCode agents or apps that route through LiteLLM to providers like Claude, Ollama, or Bedrock. Teams on GitHub Copilot models or enterprise proxies who hate restarting OpenCode after every `model_list` tweak. It's a natural fit for local development and GitHub Actions workflows with mixed chat, embedding, and image models.

Verdict

Grab it for a quick win if LiteLLM is already in your stack: the docs and TypeScript strictness punch well above its 10 stars. It's early-stage, so test in non-production first, but it delivers real time savings today.
