EricXu20266

Codex + Claude → DeepSeek or other LLM local proxy | Agent Protocol Translation Proxy

Found May 10, 2026 at 12 stars
AI Analysis
AI Summary

LLMProxy is a desktop application for Windows and macOS that runs a local protocol-translation proxy service. It lets Codex Desktop (Responses API) and Claude Code (Anthropic Messages API) work seamlessly with DeepSeek and other OpenAI-compatible third-party large-model providers.
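The core of such a proxy is mapping one request schema onto the other. As a minimal sketch (not the repo's actual code), this is roughly what translating an Anthropic Messages request into an OpenAI-style chat-completions request looks like; field names come from the two public APIs, and the flattening of content blocks is a simplifying assumption:

```python
def anthropic_to_openai(body: dict) -> dict:
    """Map an Anthropic Messages request body onto an OpenAI chat-completions body."""
    messages = []
    # Anthropic carries the system prompt as a top-level field;
    # OpenAI expects it as the first chat message.
    if "system" in body:
        messages.append({"role": "system", "content": body["system"]})
    for m in body["messages"]:
        content = m["content"]
        # Anthropic content may be a list of typed blocks; keep only text blocks here.
        if isinstance(content, list):
            content = "".join(
                b.get("text", "") for b in content if b.get("type") == "text"
            )
        messages.append({"role": m["role"], "content": content})
    return {
        "model": body["model"],  # e.g. "deepseek-chat" on the target provider
        "max_tokens": body.get("max_tokens", 1024),
        "messages": messages,
    }
```

A real proxy also has to translate tool-use blocks and streaming events, which is where most of the complexity lives.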

AI-Generated Review

What is llm-proxy?

llm-proxy is a local proxy server that translates the API protocols used by Codex and Claude tooling into requests for DeepSeek or other OpenAI-compatible LLMs, letting you swap expensive proprietary models for cheaper alternatives without changing your workflow. Developers point their Claude Code or Codex clients at the proxy, which handles the agent-protocol conversion on their own machine. It is a lightweight, self-hosted alternative to cloud API-key proxies.
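The conversion runs in both directions: the provider's OpenAI-style response has to be reshaped into what the Claude client expects back. A hedged sketch of the response side, using the documented fields of both APIs (the `stop_reason` mapping covers only the two most common cases):

```python
def openai_to_anthropic(resp: dict) -> dict:
    """Reshape an OpenAI chat-completions response into an Anthropic Messages response."""
    choice = resp["choices"][0]
    # OpenAI finish_reason -> Anthropic stop_reason (partial mapping).
    stop_map = {"stop": "end_turn", "length": "max_tokens"}
    return {
        "type": "message",
        "role": "assistant",
        # Anthropic returns content as a list of typed blocks.
        "content": [{"type": "text", "text": choice["message"]["content"]}],
        "stop_reason": stop_map.get(choice["finish_reason"], "end_turn"),
        "usage": {
            "input_tokens": resp["usage"]["prompt_tokens"],
            "output_tokens": resp["usage"]["completion_tokens"],
        },
    }
```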

Why is it gaining traction?

It stands out by enabling seamless model swaps in tools like Cursor or GitHub integrations, avoiding vendor lock-in while cutting costs on API calls. The hook is its simplicity: no heavy setup, just route your Codex or Claude Code requests through the proxy to open models like DeepSeek. Devs comparing Codex and Claude Code setups notice the convenience of local routing over clunky cloud dependencies.
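Redirecting the clients is typically just a matter of base-URL environment variables. A sketch of the setup, assuming the proxy listens on localhost port 8080 (the actual port and auth token depend on what llm-proxy shows in its UI):

```shell
# Point Claude Code at the local proxy instead of api.anthropic.com.
export ANTHROPIC_BASE_URL="http://localhost:8080"
export ANTHROPIC_AUTH_TOKEN="your-provider-api-key"   # e.g. a DeepSeek key

# Point OpenAI-compatible clients (e.g. Codex) at the same proxy.
export OPENAI_BASE_URL="http://localhost:8080/v1"
```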

Who should use this?

AI engineers building agent workflows who want model flexibility without OpenAI bills. Backend devs automating GitHub issues and code reviews who need an integration that proxies to custom LLMs. Frontend teams using AI assistants in their editors, tired of rate limits on premium models.

Verdict

With 12 stars and a 1.0% credibility score, llm-proxy is early-stage: docs are minimal and no tests are visible, making it risky for production. Worth a quick test as a local-proxy experiment if you're prototyping Claude Code or CLAUDE.md workflows, but wait for more polish before relying on it.


