commaai / miniray

Minimal library for distributed python work. Can efficiently run CPU and GPU tasks across 100s of machines.

112 stars · 2 forks · 100% credibility
Found Feb 04, 2026 at 25 stars (4x growth since)
AI Analysis (Python)

AI Summary

Miniray is a library that lets you run Python computations across many computers as if you were using a simple parallel tool on a single machine.

How It Works

1. 🔍 Hear about Miniray

You learn about Miniray when your heavy calculations take too long on one computer and you want to spread the work across many machines for speed.

2. 📦 Add to your project

You easily include Miniray in your work, just like adding a helpful tool, and define a simple function for what you want to compute.

3. Start your computing team

With one easy step, you launch a shared workspace that connects computers ready to team up on your tasks.

4. 📤 Send out the work

You hand off batches of calculations to the team, and they automatically divide and conquer across available machines.

5. Watch it happen

You relax as the system shows progress, handling everything smoothly without you lifting a finger.

6. 🎉 Get lightning results

All your answers arrive quickly and correctly, letting you finish big jobs in a fraction of the time.
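The steps above follow the shape of Python's standard `concurrent.futures` executors, which the review below says miniray's API mirrors. A minimal local sketch of the same hand-off-work / collect-results flow, using the stdlib `ThreadPoolExecutor` as a stand-in for a miniray worker pool (a sketch of the pattern, not miniray's actual API):

```python
from concurrent.futures import ThreadPoolExecutor

def heavy_task(x: int) -> int:
    # Placeholder for an expensive computation (simulation, feature extraction, ...).
    return x * x

# Step 3: start the "team" (here a local thread pool; miniray would connect remote workers).
with ThreadPoolExecutor(max_workers=4) as pool:
    # Steps 4-6: hand off a batch of inputs; results come back in input order.
    results = list(pool.map(heavy_task, range(10)))

print(results)
```

Swapping the local executor for a distributed one is the whole pitch: the calling code keeps this shape while the work lands on remote machines.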


AI-Generated Review

What is miniray?

Miniray is a minimal Python library for distributing CPU and GPU tasks across hundreds of machines via Redis queues. It mirrors Python's concurrent.futures API, so you submit or map functions over iterables in a context manager, and it handles execution on remote workers with automatic resource limits for CPU threads, memory, GPUs, and timeouts. Perfect for scaling embarrassingly parallel workloads without Ray's overhead.
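A sketch of the `concurrent.futures` pattern the paragraph describes, with `submit` inside a context manager and a per-result timeout; per the review, miniray follows this shape while swapping local threads for Redis-backed remote workers (stdlib code shown, not miniray's own API):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def embarrassingly_parallel(x: float) -> float:
    # Each input is independent, so tasks can run on any worker in any order.
    return x ** 0.5

with ThreadPoolExecutor(max_workers=8) as pool:
    # submit() returns a Future per task; workers pick tasks up as they free up.
    futures = [pool.submit(embarrassingly_parallel, v) for v in [1.0, 4.0, 9.0]]
    # Collect results as they finish; result(timeout=...) bounds each wait.
    values = sorted(f.result(timeout=30) for f in as_completed(futures))

print(values)
```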

Why is it gaining traction?

This minimal API delivers Ray-like power with multiprocessing-level simplicity: no YAML configs or cluster managers needed. Users get GPU-aware scheduling, Triton inference support, and progress logging via tqdm, setting it apart from heavier alternatives like Dask. That minimalism hooks developers who want datacenter scale without bloat.

Who should use this?

ML engineers parallelizing numpy or torch batches across GPU fleets. Data scientists mapping heavy functions such as simulations or feature extraction over 100+ node clusters. Skip it for local jobs or small workloads; use multiprocessing instead.
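For the local, small-workload case the review points to, the stdlib already covers it. A minimal sketch using multiprocessing's Pool API (the thread-backed `ThreadPool` variant here so it runs anywhere without a `__main__` guard; use `multiprocessing.Pool` for CPU-bound work):

```python
from multiprocessing.pool import ThreadPool

def extract_feature(row: list) -> int:
    # Toy stand-in for a per-row feature extraction.
    return sum(row)

rows = [[1, 2], [3, 4], [5, 6]]
with ThreadPool(processes=4) as pool:
    # Same map-over-an-iterable shape, but bounded to one machine.
    features = pool.map(extract_feature, rows)

print(features)
```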

Verdict

Solid for high-scale Python compute if you run your own workers; 97 stars and a 100% credibility score signal early maturity, with only basic README docs. Contact harald@comma.ai to test; it could evolve into a staple for distributed Python work.


