Lucasmantou/codex-proxy

🚀 Let Codex use any LLM - cost down 30-50x. Support DeepSeek, Zhipu GLM, and any OpenAI-compatible API.

13 stars · 0 forks · 100% credibility
Found May 11, 2026 at 13 stars.
AI Analysis · Primary language: Batchfile

AI Summary

A free, open-source helper that bridges the Codex desktop app to cheaper AI services like DeepSeek, translating between their API protocols to cut costs dramatically while preserving all of Codex's tools and capabilities.

How It Works

1
🔍 Discover the money-saver

You learn about a handy tool that lets your Codex app use low-cost AI models instead of pricey ones, cutting costs 30-50x.

2
📥 Grab the tool

Download the free helper files to your computer from the project page.

3
🛠️ Set it up quickly

Install the simple requirements so the helper is ready to go on your machine.

4
🔗 Link your budget AI

Get access to a low-cost AI service like DeepSeek and connect it to the helper for powerful thinking on the cheap.

5
▶️ Start the bridge

Run the helper with one easy command, and it quietly waits on your computer to pass messages.

6
⚙️ Point Codex to it

Tweak your Codex settings to point at the helper running locally, just like switching to a new provider.

🎉 Code smarter, spend less

Fire up Codex, chat or build as usual, and enjoy the full feature set working as before at a fraction of the cost.
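Steps 5 and 6 boil down to one launch command plus a small config edit. A hedged sketch of the config side, assuming the proxy listens on localhost:9090 as described in the review; the key names here are illustrative, so check the repo's README for the exact settings:

```toml
# Hypothetical fragment of ~/.codex/config.toml -- key names are
# illustrative; the repo's README documents the real ones.
model_provider = "codex-proxy"

[model_providers.codex-proxy]
name = "Local codex-proxy bridge"
base_url = "http://localhost:9090/v1"   # where the local proxy is listening
```

Once Codex's base URL points at the local bridge, every request flows through the proxy instead of the official endpoint.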


AI-Generated Review

What is codex-proxy?

Codex-proxy is a Python-based proxy server on GitHub (Lucasmantou/codex-proxy) that lets the Codex desktop client tap into cheaper LLMs like DeepSeek or Zhipu GLM via any OpenAI-compatible API, cutting costs 30-50x compared to official GPT models. It bridges Codex's Responses API protocol to standard Chat Completions by running locally: you start it with a single Python command, set your API key via environment variables or a .env file, point ~/.codex/config.toml at the proxy (e.g. base_url = "http://localhost:9090/v1"), and Codex works as if nothing changed. This solves the lock-in problem where Codex can't directly swap providers without breaking tools, sandboxing, or context.
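The protocol bridge described above can be pictured as a small translation layer. A minimal sketch, not the repo's actual code: the field names follow the public OpenAI Responses and Chat Completions schemas, and the real proxy also has to handle tool calls, streaming, and state, all omitted here:

```python
def responses_to_chat(request: dict) -> dict:
    """Translate a Responses-API-style payload into a Chat Completions payload.

    Illustrative sketch only: real traffic also carries tool calls,
    streaming deltas, and conversation state.
    """
    messages = []
    # The Responses API carries the system prompt in "instructions".
    if "instructions" in request:
        messages.append({"role": "system", "content": request["instructions"]})
    payload_input = request.get("input", [])
    if isinstance(payload_input, str):
        # Responses allows a bare string as the user turn.
        messages.append({"role": "user", "content": payload_input})
    else:
        # Otherwise "input" is a list of role/content items.
        for item in payload_input:
            messages.append({"role": item.get("role", "user"),
                             "content": item.get("content", "")})
    return {
        # In the real proxy the model name would also be remapped here.
        "model": request.get("model", "deepseek-chat"),
        "messages": messages,
    }
```

The proxy would apply a transformation like this on the way upstream and the inverse on the way back, which is why Codex never notices the provider changed.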

Why is it gaining traction?

Developers like the dead-simple setup -- one-line launch, automatic model mapping (e.g. gpt-5.4 to deepseek-v4-pro), and no latency hit beyond 50-100ms -- while keeping full Codex features like tool calls intact. The cost tables in the docs hit hard: DeepSeek at $0.3-0.6 per million tokens vs. OpenAI's $15-30. Support for arbitrary OpenAI-compatible upstreams, not just the named providers, adds flexibility.
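The model mapping and the savings math above can be sketched in a few lines. The mapping pair and the per-million-token prices are the ones quoted in this review; everything else is illustrative:

```python
# Example of the auto model mapping mentioned above
# (Codex-side name -> budget upstream name).
MODEL_MAP = {"gpt-5.4": "deepseek-v4-pro"}

# Prices quoted in the docs, USD per million tokens: (low, high).
OPENAI_PER_M = (15.0, 30.0)
DEEPSEEK_PER_M = (0.3, 0.6)

def savings_factor(expensive: float, cheap: float) -> float:
    """How many times cheaper the budget provider is at these price points."""
    return expensive / cheap

# Pairing low-with-low and high-with-high gives 50x at both ends;
# other pairings fall between 25x and 100x, bracketing the 30-50x headline.
low_end = savings_factor(OPENAI_PER_M[0], DEEPSEEK_PER_M[0])   # 15 / 0.3
high_end = savings_factor(OPENAI_PER_M[1], DEEPSEEK_PER_M[1])  # 30 / 0.6
```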

Who should use this?

Codex power users grinding through daily code sessions and racking up token bills. AI tinkerers who want to try DeepSeek or GLM in the desktop UI without rebuilding their workflows. Windows devs, who get a BAT script for quick setup.

Verdict

Grab it if you're on Codex and want instant savings -- the docs are thorough, with FAQs and examples, and setup is foolproof. But with only 13 stars, it's early alpha: test in a sandbox and watch for edge cases like state resets.


