opencode-scale

Production-grade orchestration layer for horizontally-scalable OpenCode Server instances with Temporal workflows, K8s sandbox management, and full observability.

Found Mar 27, 2026 at 47 stars
AI Analysis

Language: Go

AI Summary

opencode-scale is a scalable orchestration system for running thousands of AI coding agent sessions in isolated environments with workflow management and monitoring.

How It Works

1
🔍 Discover the tool

You find this helpful system on GitHub that lets you run lots of AI coding helpers at once without hassle.

2
💻 Set up on your computer

You install a few simple things like Go and Docker, then run one easy command to start everything locally.

3
🚀 Everything starts up

With a quick 'make compose-up', your personal AI coding farm comes alive and is ready to use.

4
📝 Try sample tasks

You add test coding jobs like 'sort a list' with 'make seed' to see it work right away.

5
✍️ Submit your own idea

You send a simple request like 'write a sorting function' and watch the AI think and respond live.

6
📊 Check progress anytime

You peek at task status or stream updates to see how your coding job is going step by step.

7

🎉 Get your code results

Your AI delivers working code fast and safely, ready for your projects or team to use at scale.
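The submit-and-poll loop in steps 5 and 6 can be sketched as a minimal Go client. The `/v1/tasks` route, the `Task` JSON fields, and the status values here are illustrative assumptions rather than the project's documented API; a fake in-process server stands in for a running orchestrator.

```go
package main

import (
	"bytes"
	"encoding/json"
	"fmt"
	"net/http"
	"net/http/httptest"
)

// Task mirrors a hypothetical task record; the field names are
// assumptions for illustration, not the project's documented schema.
type Task struct {
	ID     string `json:"id"`
	Prompt string `json:"prompt"`
	Status string `json:"status"`
}

// runDemo stands up a fake gateway and walks the submit-then-poll
// flow, returning the status right after submit and after polling.
func runDemo() (string, string) {
	// Stand-in for a running orchestrator; /v1/tasks is assumed.
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		switch r.Method {
		case http.MethodPost: // submit: accept the prompt, mark running
			var t Task
			json.NewDecoder(r.Body).Decode(&t)
			t.ID, t.Status = "task-1", "running"
			json.NewEncoder(w).Encode(t)
		case http.MethodGet: // poll: pretend the agent finished
			json.NewEncoder(w).Encode(Task{ID: "task-1", Status: "completed"})
		}
	}))
	defer srv.Close()

	// Step 5: submit a prompt.
	body, _ := json.Marshal(Task{Prompt: "write a sorting function"})
	resp, _ := http.Post(srv.URL+"/v1/tasks", "application/json", bytes.NewReader(body))
	var created Task
	json.NewDecoder(resp.Body).Decode(&created)
	resp.Body.Close()

	// Step 6: poll the task's status.
	resp, _ = http.Get(srv.URL + "/v1/tasks/" + created.ID)
	var polled Task
	json.NewDecoder(resp.Body).Decode(&polled)
	resp.Body.Close()

	return created.Status, polled.Status
}

func main() {
	submitted, final := runDemo()
	fmt.Println("submitted:", submitted)
	fmt.Println("final:", final)
}
```

Against a real deployment you would point the client at the gateway's URL instead of the `httptest` stand-in; the request/response shape would follow the project's actual API.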

AI-Generated Review

What is opencode-scale?

opencode-scale is a production-grade orchestration layer for horizontally-scalable OpenCode Server instances, handling hundreds to thousands of concurrent agent sessions for coding tasks. Built in Go, it uses Temporal workflows for reliable task execution, K8s sandbox management with gVisor isolation, and full observability via OpenTelemetry, Prometheus, and Grafana. Users get a simple API to submit prompts, poll status, or stream SSE updates, with automatic pooling, session affinity, and LiteLLM proxying for multi-provider LLMs.
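The "stream SSE updates" part of that API can be sketched as a small Go consumer. The `/v1/tasks/{id}/events` path and the event payloads are illustrative assumptions; an in-process test server plays the role of the orchestrator's event stream.

```go
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"net/http/httptest"
	"strings"
)

// streamEvents reads a Server-Sent Events stream and collects each
// "data:" payload until the server closes the connection.
func streamEvents(url string) ([]string, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var events []string
	sc := bufio.NewScanner(resp.Body)
	for sc.Scan() {
		if line := sc.Text(); strings.HasPrefix(line, "data: ") {
			events = append(events, strings.TrimPrefix(line, "data: "))
		}
	}
	return events, sc.Err()
}

// demo stands up a fake event endpoint; the /v1/tasks/{id}/events
// path and the event payloads are illustrative assumptions.
func demo() []string {
	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
		w.Header().Set("Content-Type", "text/event-stream")
		for _, ev := range []string{"thinking", "writing code", "done"} {
			fmt.Fprintf(w, "data: %s\n\n", ev)
		}
	}))
	defer srv.Close()

	events, _ := streamEvents(srv.URL + "/v1/tasks/task-1/events")
	return events
}

func main() {
	fmt.Println(demo()) // each SSE data payload, in order
}
```

A production client would also handle reconnects and the `event:`/`id:` SSE fields; this sketch only shows the shape of the consume loop.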

Why is it gaining traction?

It stands out as a production-grade agentic AI system on GitHub, delivering Kubernetes container orchestration without the usual ops headache: Helm charts and Kustomize overlays make the dev-to-prod path seamless. Developers notice the warm pools for low-latency sandbox allocation, audit logs, API key auth, and pre-built Grafana dashboards for pool utilization and task latencies. The rate-limiting tests with key rotation appeal to teams scaling LLM-heavy workloads.
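The warm-pool idea mentioned above can be sketched in a few lines of Go: keep pre-provisioned sandboxes in a buffered channel so allocation is a channel receive rather than a cold pod start. The `Sandbox` type, pool sizing, and refill strategy here are illustrative assumptions, not the project's actual implementation.

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// Sandbox stands in for a pre-provisioned, gVisor-isolated pod;
// the type is an assumption for illustration.
type Sandbox struct{ ID int64 }

// WarmPool keeps ready sandboxes in a buffered channel so allocation
// is a cheap channel receive instead of a cold container start.
type WarmPool struct {
	ready chan *Sandbox
	next  atomic.Int64
}

// NewWarmPool pre-provisions `size` sandboxes up front.
func NewWarmPool(size int) *WarmPool {
	p := &WarmPool{ready: make(chan *Sandbox, size)}
	for i := 0; i < size; i++ {
		p.ready <- &Sandbox{ID: p.next.Add(1)}
	}
	return p
}

// Acquire hands out a warm sandbox and provisions a replacement in
// the background so the pool stays full.
func (p *WarmPool) Acquire() *Sandbox {
	sb := <-p.ready
	go func() { p.ready <- &Sandbox{ID: p.next.Add(1)} }()
	return sb
}

func main() {
	pool := NewWarmPool(2)
	a, b := pool.Acquire(), pool.Acquire()
	fmt.Println("allocated sandboxes:", a.ID, b.ID)
}
```

In a real deployment the refill step would schedule a Kubernetes pod (the expensive part this pattern hides from callers); the channel-based handoff is what keeps allocation latency low.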

Who should use this?

AI engineers building production-grade agentic apps or RAG pipelines that need isolated, scalable sandboxes. K8s ops teams managing OpenCode fleets for code-generation services. Devs prototyping horizontally scalable instances before committing to custom orchestration.

Verdict

Worth evaluating for K8s-heavy agentic setups: solid docs, Makefile quickstarts, and race-detected tests show polish despite its modest 47 stars. Early maturity means watch for community growth, but it's a strong base for production-grade scale.


