benoitc / hornbeam


WSGI/ASGI HTTP server powered by the BEAM. Mix the best of Python (AI, web apps) with Erlang (distribution, concurrency, resilience).

116 stars · 4 forks · 100% credibility

Found Feb 17, 2026 at 39 stars
AI Analysis
Erlang
AI Summary

Hornbeam runs Python web applications and machine-learning services on the Erlang VM, giving them high scalability and concurrency.

How It Works

1
🌐 Discover Hornbeam

You hear about Hornbeam while searching for a way to make your Python website or AI tool handle way more visitors without crashing.

2
📥 Get it ready

Download Hornbeam and point it to your simple Python code for the website or smart service you want to run.
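The "simple Python code" you point Hornbeam at is an ordinary WSGI callable. A minimal sketch, assuming the conventional `module:callable` layout (the module and function names below are illustrative, not Hornbeam-specific):

```python
# app.py -- a minimal WSGI application (module and callable names are illustrative).
# A WSGI server such as Hornbeam is pointed at a "module:callable" string
# like "app:application" and invokes this function once per request.

def application(environ, start_response):
    """Return a plain-text greeting for every request."""
    body = b"Hello from Python on the BEAM!\n"
    start_response("200 OK", [
        ("Content-Type", "text/plain"),
        ("Content-Length", str(len(body))),
    ])
    return [body]
```

Any existing app exposing a callable with this `(environ, start_response)` signature should plug in unchanged; that is the point of targeting the WSGI standard.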

3
🚀 Launch with one go

Start your service in seconds – it springs to life, ready to serve pages or crunch data super quickly!

4
👥 Handle crowds easily

Serve tons of visitors or requests at once, and the service stays smooth and fast no matter how busy it gets.

5
💾 Share info safely

Keep shared state, like visitor counts or cached model results, that every request sees instantly and consistently.

6
🔗 Team up machines

Link several computers together so your service grows even bigger and shares work across them seamlessly.

🎉 Thriving service

Your website or AI tool now runs lightning-fast, scales to millions, and feels rock-solid reliable.


Star Growth

This repo grew from 39 to 116 stars.
AI-Generated Review

What is hornbeam?

Hornbeam runs WSGI and ASGI Python apps—like FastAPI or Starlette—on the Erlang BEAM VM, mixing Python's AI models and web apps with BEAM's concurrency, distribution, and resilience. It solves Python's GIL limits for high-scale HTTP servers, handling millions of connections via lightweight processes, shared state, and cluster RPC. Start it with a single Erlang command like `hornbeam:start("myapp:application")` and drop in your existing code.
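The ASGI side works the same drop-in way. A raw ASGI callable with no framework, sketched here for illustration (frameworks like FastAPI or Starlette produce an equivalent callable for you):

```python
# A minimal raw ASGI application (no framework; the name is illustrative).
# An ASGI server such as Hornbeam calls this once per connection, passing
# the connection scope plus receive/send channels for ASGI event messages.

async def application(scope, receive, send):
    """Answer every HTTP request with a plain-text greeting."""
    assert scope["type"] == "http"
    await send({
        "type": "http.response.start",
        "status": 200,
        "headers": [(b"content-type", b"text/plain")],
    })
    await send({
        "type": "http.response.body",
        "body": b"Hello from ASGI!\n",
    })
```

Because the contract is just `async (scope, receive, send)`, the same code runs under any ASGI server; switching the server out from under the app is what makes the "same Python code, massive scale" pitch plausible.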

Why is it gaining traction?

Benchmarks show it beating Gunicorn 8-9x on throughput and latency under heavy concurrency, powered by BEAM's no-GIL model—ideal for real-time apps with WebSockets, HTTP/2, and pub/sub. Python devs get Erlang perks like atomic counters, distributed calls to remote nodes, and ML result caching without extra infrastructure. The hook: same Python code, massive scale.

Who should use this?

Python API builders hitting concurrency walls in production web services or chat apps. ML teams deploying inference endpoints across clusters, needing shared caching and RPC to GPU nodes. Distributed system devs mixing Python apps with Erlang for fault-tolerant, hot-reloadable backends.

Verdict

Grab it if the benchmarks match your workload: it's a concurrency beast for Python servers. But the project is still young, so treat it as experimental; the docs and examples are solid, but it needs real-world battle-testing before prime time.


