monaccode / astromesh


Multi-model AI agent runtime. Define agents in YAML, connect 6 LLM providers, orchestrate with ReAct/Plan&Execute/Fan-Out/Pipeline/Supervisor/Swarm patterns, and deploy as REST/WebSocket API with RAG, memory, MCP tools, guardrails, and OpenTelemetry observability.

19 stars, 1 fork, 100% credibility
Found Mar 12, 2026 at 12 stars
AI Analysis
Python
AI Summary

Astromesh is an open-source runtime platform for building, orchestrating, and running AI agents with multi-model routing, tools, memory, RAG, and declarative configuration.

Star Growth

The repo grew from 12 to 19 stars after being found.
AI-Generated Review

What is astromesh?

Astromesh is a Python runtime for multi-model AI agents, letting you define agents in YAML and wire them to six LLM providers like Ollama, vLLM, and OpenAI-compatible endpoints. It handles orchestration with ReAct, Plan&Execute, Supervisor, and Swarm patterns, plus built-in memory backends, RAG pipelines, 18+ tools, and deployment as REST/WebSocket APIs with guardrails and OpenTelemetry. Developers get a full agent platform without reinventing model routing, observability, or tool execution.
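To make the YAML-first workflow concrete, here is a rough sketch of what an agent definition could look like. The schema below (field names such as `pattern`, `model`, `tools`, `memory`, `guardrails`) is an illustrative assumption, not astromesh's documented format.

```yaml
# Hypothetical agent definition -- field names are illustrative,
# not astromesh's actual schema.
name: support-bot
pattern: react            # one of the orchestration patterns listed above
model:
  provider: ollama        # one of the six supported provider types
  name: llama3
  fallback:
    provider: openai-compatible
    base_url: http://localhost:8000/v1
tools:
  - web_search
  - document_rag
memory:
  backend: redis
guardrails:
  max_turns: 10
```

The point of such a file is that routing, tools, memory, and safety limits live in configuration rather than application code.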

Why is it gaining traction?

Its declarative YAML config and CLI (`astromeshctl new agent`, `astromeshctl run`) cut the boilerplate that more fragmented frameworks push onto hand-written orchestration loops. The model router auto-selects providers by cost and latency, the mesh layer enables distributed swarms, and the dashboard and CLI surface traces and metrics immediately, which makes it well suited to rapid prototyping of multi-model LLM agents and multi-agent systems.
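The cost/latency routing idea can be sketched generically in Python. This is not astromesh's implementation, just a minimal illustration of the pattern: score candidate providers and pick the cheapest one that fits a latency budget.

```python
from dataclasses import dataclass

@dataclass
class Provider:
    name: str
    cost_per_1k_tokens: float   # USD per 1k tokens
    p95_latency_ms: float       # observed 95th-percentile latency

def route(providers, latency_budget_ms):
    """Pick the cheapest provider whose p95 latency fits the budget.

    A generic sketch of cost/latency-aware routing, not astromesh's
    actual router logic.
    """
    eligible = [p for p in providers if p.p95_latency_ms <= latency_budget_ms]
    if not eligible:
        # Nothing meets the budget: fall back to the fastest provider.
        return min(providers, key=lambda p: p.p95_latency_ms)
    return min(eligible, key=lambda p: p.cost_per_1k_tokens)

providers = [
    Provider("ollama-local", 0.0, 900.0),
    Provider("vllm-cluster", 0.1, 250.0),
    Provider("hosted-api", 0.5, 120.0),
]
print(route(providers, latency_budget_ms=300.0).name)  # vllm-cluster
```

A real router would refresh latency estimates from live telemetry rather than using static numbers, but the selection rule stays the same.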

Who should use this?

AI engineers building copilots, support bots, or document-processing workflows; teams deploying multi-model agents with RAG and tools over WhatsApp or REST APIs. It suits backend developers evaluating full agent runtimes rather than writing one-off scripts.
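For readers new to the ReAct pattern the runtime orchestrates, here is a minimal, self-contained Python sketch with a stubbed model and a single tool. Real frameworks replace the stub with an LLM call and parse its output; the names and logic here are purely illustrative.

```python
# Minimal ReAct-style loop: the "model" alternates between requesting a
# tool call and emitting a final answer. A stubbed illustration of the
# pattern, not astromesh code.

def stub_model(history):
    """Stand-in for an LLM: decide the next action from the transcript."""
    if not any(step.startswith("Observation:") for step in history):
        return ("tool", "lookup", "capital of France")
    return ("final", "The capital of France is Paris.")

TOOLS = {"lookup": lambda q: "Paris" if "France" in q else "unknown"}

def react(question, model, tools, max_turns=5):
    history = [f"Question: {question}"]
    for _ in range(max_turns):
        action = model(history)
        if action[0] == "final":
            return action[1]
        _, tool_name, tool_input = action
        observation = tools[tool_name](tool_input)
        history.append(f"Action: {tool_name}({tool_input})")
        history.append(f"Observation: {observation}")
    return "No answer within turn budget."

print(react("What is the capital of France?", stub_model, TOOLS))
```

The `max_turns` cap mirrors the kind of guardrail a runtime enforces so a confused model cannot loop on tool calls forever.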

Verdict

Promising for multi-model agent experiments, with solid docs and Docker stacks, but at 19 stars it is still early: test in dev before relying on it in production, and pair it with mature providers for reliability.


