
ZhiYi-R / moon-bridge (Public)

moon-bridge is a forwarding layer that converts providers speaking the Anthropic Messages format into the OpenAI Responses API that Codex can consume.

19 stars · 1 fork · 100% credibility
Found Apr 29, 2026 at 19 stars.
AI Analysis

Language: Go

AI Summary

Moon Bridge proxies OpenAI's Responses API to Anthropic-compatible providers like DeepSeek and Kimi, adding caching, web search, visual tools, and model catalog features.

How It Works

1. 📖 Discover Moon Bridge: you stumble upon this handy tool on GitHub that lets different AI chat services work together seamlessly.

2. 🏗️ Set up your bridge: copy the ready-made settings file and add details for your favorite AI accounts to connect them.

3. 🚀 Start your bridge: run one easy command to launch it right on your computer.

4. Pick your setup:
   - 💻 Local fun: keep it personal on your computer for quick tests.
   - ☁️ Online access: make it available anywhere with a simple online launch.

5. 🔗 Link your AI helper: point your coding buddy or chat app to your new bridge.

6. 🧠 Chat with superpowers: enjoy smarter talks with built-in web lookups, picture smarts, and memory for speedy replies.

🎉 AI magic unlocked: your projects get faster, cleverer help from combined AI brains!
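The setup steps above boil down to a provider config file plus a launch command. A minimal sketch of what such a YAML file could look like; every key name here is an illustrative assumption, not the project's documented schema:

```yaml
# Hypothetical moon-bridge config sketch; key names are assumptions,
# not the project's actual schema.
listen: "127.0.0.1:38440"   # default port mentioned in the review

providers:
  - name: deepseek
    base_url: "https://api.deepseek.com/anthropic"   # illustrative
    api_key: "${DEEPSEEK_API_KEY}"

routes:
  - model: "deepseek-v4-pro"
    provider: deepseek

cache:
  mode: hybrid   # the review mentions explicit/hybrid cache modes
  ttl: 300s
```

With a file like this in place, linking your AI helper is just pointing the client's base URL at the bridge's local /v1 endpoint.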

AI-Generated Review

What is moon-bridge?

Moon Bridge is a lightweight Go proxy server that bridges Anthropic Messages API providers—like DeepSeek, Kimi, or Anthropic itself—to the OpenAI Responses API expected by the Codex CLI. Developers configure it via YAML with provider keys, routes, and caching rules, then point Codex at its /v1 endpoint (default: localhost:38440) for seamless model access, including reasoning levels, web search, and tool calls. CLI flags generate Codex config.toml and models_catalog.json automatically, handling conversions without client changes.
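The conversion described above can be pictured with a toy Go sketch: reshaping an Anthropic Messages response body into an OpenAI Responses-style output item. All struct shapes here are simplified illustrations, not moon-bridge's actual types:

```go
// Toy sketch of the Messages -> Responses translation a bridge like
// moon-bridge performs. Field sets are illustrative assumptions.
package main

import (
	"encoding/json"
	"fmt"
)

// contentBlock is a simplified Anthropic content block.
type contentBlock struct {
	Type string `json:"type"`
	Text string `json:"text"`
}

// anthropicMessage is a simplified Anthropic Messages response.
type anthropicMessage struct {
	Role       string         `json:"role"`
	Content    []contentBlock `json:"content"`
	StopReason string         `json:"stop_reason"`
}

// responsesItem is a simplified OpenAI Responses output item.
type responsesItem struct {
	Type    string              `json:"type"` // e.g. "message"
	Role    string              `json:"role"`
	Content []map[string]string `json:"content"`
	Status  string              `json:"status"`
}

// toResponsesItem reshapes the Anthropic payload into a Responses item,
// relabeling assistant text as "output_text" blocks.
func toResponsesItem(m anthropicMessage) responsesItem {
	item := responsesItem{Type: "message", Role: m.Role, Status: "completed"}
	for _, c := range m.Content {
		if c.Type == "text" {
			item.Content = append(item.Content, map[string]string{
				"type": "output_text", "text": c.Text,
			})
		}
	}
	return item
}

func main() {
	raw := `{"role":"assistant","content":[{"type":"text","text":"hello"}],"stop_reason":"end_turn"}`
	var msg anthropicMessage
	if err := json.Unmarshal([]byte(raw), &msg); err != nil {
		panic(err)
	}
	out, _ := json.Marshal(toResponsesItem(msg))
	fmt.Println(string(out))
}
```

The real proxy also has to translate streaming events, tool calls, and reasoning levels in both directions, which is where most of the complexity lives.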

Why is it gaining traction?

It unlocks high-context models like DeepSeek V4 Pro (1M tokens) in Codex workflows, with built-in caching (explicit/hybrid modes, TTLs), web search (Tavily/Firecrawl), and extensions for prompt reinforcement or visual inputs. Unlike direct provider proxies, it emits Codex-native outputs like local_shell_call or reasoning summaries, preserving tool history and streaming fidelity. The Docker Compose setup and 95% test coverage make local testing straightforward.
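The "Codex-native outputs" point can be illustrated with a toy mapping from an Anthropic tool_use block to a local_shell_call item. The local_shell_call type comes from the review; every other name and field below is an assumption made for illustration:

```go
// Toy sketch: a shell-like Anthropic "tool_use" block becomes a
// Codex-native "local_shell_call" item. Field names other than the
// item type are illustrative assumptions.
package main

import "fmt"

// toolUse is a simplified Anthropic tool_use block.
type toolUse struct {
	ID    string
	Name  string            // e.g. a shell tool on the Anthropic side
	Input map[string]string // e.g. {"command": "ls -la"}
}

// localShellCall is a simplified Codex-native output item.
type localShellCall struct {
	Type    string // "local_shell_call"
	CallID  string
	Command []string
	Status  string
}

// toLocalShellCall reshapes a shell-like tool_use into the Codex item,
// wrapping the command for execution via bash.
func toLocalShellCall(t toolUse) localShellCall {
	return localShellCall{
		Type:    "local_shell_call",
		CallID:  t.ID,
		Command: []string{"bash", "-lc", t.Input["command"]},
		Status:  "completed",
	}
}

func main() {
	call := toLocalShellCall(toolUse{
		ID: "tu_1", Name: "bash",
		Input: map[string]string{"command": "ls"},
	})
	fmt.Printf("%+v\n", call)
}
```

Emitting items the client already understands is what lets Codex keep its tool history and streaming behavior intact without any client-side changes.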

Who should use this?

Codex CLI power users experimenting with cost-effective Anthropic-compatible LLMs for agentic coding tasks. Teams building multi-provider setups needing fallback routes, prompt caching, or injected web search during long sessions. Go devs proxying APIs for internal tools without rewriting clients.

Verdict

Try it for Codex if you want DeepSeek-scale context on a budget; it is solid for prototypes despite its 19 stars and 1.0% credibility score. The docs lean on examples rather than reference material, and production scaling is unproven, but the project is early-stage and actively tested.

