OpenLoadBalancer

A high-performance, zero-dependency L4/L7 load balancer written in Go: a single binary with a Web UI, clustering, and MCP/AI integration. 8.5K RPS, 39 E2E tests.

Found Mar 17, 2026 at 10 stars.
AI Summary

OpenLoadBalancer is a high-performance load balancer that distributes incoming traffic across multiple backend servers using various algorithms, with security features, monitoring, and easy configuration.

How It Works

1. 🔍 Discover a simple way to balance your website traffic

You hear about OpenLoadBalancer, a tool that spreads visitors evenly across your servers so none get overwhelmed.

2. 📥 Get it with one easy command

Run a quick download script and it's ready on your computer, no complicated setup needed.

3. ✏️ Tell it about your servers

Make a short list of your server addresses and how to share the load, like giving directions to a helpful assistant.
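That short list might look like the YAML sketch below. The key names here are illustrative assumptions, not olb's actual schema; the review further down notes olb accepts YAML, TOML, HCL, or JSON.

```yaml
# olb.yaml -- hypothetical schema, for illustration only
listen: ":8080"
algorithm: round_robin    # olb ships 12 algorithms, e.g. least connections, Maglev
backends:
  - 10.0.0.1:9000
  - 10.0.0.2:9000
  - 10.0.0.3:9000
health_check:
  path: /healthz
  interval: 5s
```

You would then point olb at this file when launching it (the exact flag name may differ).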

4. 🚀 Start your balancer

Launch it with one command and watch your traffic flow smoothly to the right places.

5. 📊 See everything working perfectly

Open the dashboard to watch requests zip evenly to your servers, with built-in protection keeping things safe.

6. 🔄 Add more servers anytime

Update your server list and reload on the fly: no downtime, just more capacity.

Your site handles huge crowds effortlessly

Visitors love the fast, reliable experience while your servers share the work happily.

AI-Generated Review

What is olb?

olb is a Go-based L4/L7 load balancer that ships as a single zero-dependency binary under 10MB, proxying HTTP/HTTPS, WebSockets, gRPC, TCP, and UDP with SNI routing. It balances across backends using 12 algorithms like round-robin, least connections, Maglev, and consistent hashing, plus health checks, circuit breakers, and a 6-layer WAF blocking SQLi, XSS, and bots. Users configure via YAML/TOML/HCL/JSON, reload hot with SIGHUP or CLI, and monitor via Web UI dashboard, TUI (`olb top`), or Prometheus metrics.
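To make the stdlib-only claim concrete, here is what a minimal round-robin L7 proxy looks like in pure Go standard library. This is an illustrative sketch of the technique, not olb's implementation; the addresses and port are made up:

```go
package main

import (
	"log"
	"net/http"
	"net/http/httputil"
	"net/url"
	"sync/atomic"
)

// Illustrative backend pool; a real balancer would load this from config.
var backends = []*url.URL{
	mustParse("http://127.0.0.1:9001"),
	mustParse("http://127.0.0.1:9002"),
}

var counter uint64

func mustParse(raw string) *url.URL {
	u, err := url.Parse(raw)
	if err != nil {
		log.Fatal(err)
	}
	return u
}

// nextBackend picks a target round-robin style: each call advances an
// atomic counter and wraps around the pool.
func nextBackend() *url.URL {
	i := atomic.AddUint64(&counter, 1)
	return backends[(i-1)%uint64(len(backends))]
}

func main() {
	// httputil.ReverseProxy handles the L7 plumbing; the Director just
	// rewrites each request to point at the chosen backend.
	proxy := &httputil.ReverseProxy{
		Director: func(r *http.Request) {
			target := nextBackend()
			r.URL.Scheme = target.Scheme
			r.URL.Host = target.Host
		},
	}
	log.Fatal(http.ListenAndServe(":8080", proxy))
}
```

Algorithms like least connections or Maglev slot into the same shape: only the selection function changes, which is presumably how olb offers 12 of them behind one proxy core.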

Why is it gaining traction?

Zero external deps and a stdlib-only design make it dead simple to deploy (no Docker bloat or vendoring hell) while hitting 15k RPS on consumer hardware with <3% middleware overhead, rivaling high-performance C++ proxies. The 30+ CLI commands (status, drain, validate), Raft clustering, service discovery (Consul/Docker/DNS), and MCP integration for AI-assisted config generation stand out for ops automation. 90% test coverage and 56 E2E tests verify every feature end-to-end.

Who should use this?

DevOps engineers replacing NGINX/HAProxy in microservices or edge proxies who need quick L7 routing without Kubernetes ingress overhead. SREs at startups handling 10k+ RPS spikes with sticky sessions and rate limits. Backend teams prototyping high-performance services who want to avoid Java/Rust dependency chains in lean production deployments.

Verdict

Promising for perf-critical proxies, but too green at 10 stars and 1.0% credibility: docs shine, benchmarks impress, yet real-world battle scars are absent. Try it in staging if zero-dependency performance trumps ecosystem maturity.

