
yagna-1 / astragraph

Public

Policy-enforced observability and fail-closed guardrails for MCP/A2A multi-agent systems.

24 stars · 1 fork · 100% credibility
Found Feb 17, 2026 at 22 stars
Language: Rust
AI Summary

AstraGraph intercepts and enforces policies on AI agent communications while building visual maps of their interactions for monitoring.
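To make the interception model concrete, here is a minimal sketch of the idea in Python. It is illustrative only: the function names and policy fields are assumptions, not AstraGraph's actual API; the point is that every tool call passes through a fail-closed policy check before it is forwarded.

```python
# Illustrative sketch of policy-enforced interception (hypothetical names,
# not AstraGraph's real API). A proxy sits between the agent and the MCP
# server: every tools/call is checked against a policy before forwarding.

from dataclasses import dataclass


@dataclass
class Decision:
    allow: bool
    reason: str


# Example policy: deny risky tools unless explicitly allowed (fail-closed).
POLICY = {
    "allowed_tools": {"search_docs", "summarize"},
    "blocked_tools": {"export_customer_data"},
}


def evaluate(tool_name: str, policy: dict) -> Decision:
    """Fail-closed check: anything not explicitly allowed is blocked."""
    if tool_name in policy["blocked_tools"]:
        return Decision(False, f"tool '{tool_name}' is explicitly blocked")
    if tool_name not in policy["allowed_tools"]:
        return Decision(False, f"tool '{tool_name}' is not on the allow list")
    return Decision(True, "allowed by policy")


def forward_to_mcp_server(request: dict) -> dict:
    # Forwarding is left abstract in this sketch.
    return {"result": {"status": "forwarded"}}


def proxy_tools_call(request: dict) -> dict:
    """Intercept an MCP-style tools/call request before it reaches the server."""
    decision = evaluate(request["params"]["name"], POLICY)
    if not decision.allow:
        # Blocked pre-execution; the attempt is still recorded for the audit trail.
        return {"error": {"code": -32000, "message": decision.reason}}
    return forward_to_mcp_server(request)


if __name__ == "__main__":
    call = {"method": "tools/call", "params": {"name": "export_customer_data"}}
    print(proxy_tools_call(call))  # blocked: explicitly blocked tool
```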

How It Works

1. 🔍 Discover safe AI teams

AstraGraph watches your AI agents to keep them within policy and shows you what they do.

2. 🚀 Start everything

Run a single command to launch the monitoring stack on your machine (see the sketch after this list).

3. 🧑‍💻 Test your agents

Connect example AI agents and watch them work while the system checks each action against your rules.

4. 📊 Open the dashboard

Visit the web dashboard to see a map of agent actions and decisions.

5. ⚠️ Spot issues

Review blocked actions and the reasons the agents were stopped.
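For a rough sense of what steps 2 through 5 might look like in practice, here is a hedged sketch. The compose invocation, proxy port, and API paths are assumptions made for illustration; the repository's README is the authority on the real quickstart.

```python
# Hypothetical walk-through of steps 2-5 (URLs and endpoint paths are
# assumptions, not documented AstraGraph endpoints).
#
# Step 2 (assumed): a Docker Compose quickstart such as `docker compose up -d`
# brings up the proxy, verifier, and dashboard locally.

import requests

PROXY_URL = "http://localhost:8080"       # assumed proxy address
API_URL = "http://localhost:8080/api"     # assumed REST API base

# Step 3: send an example tools/call through the proxy instead of directly
# to the MCP server, so the policy engine can see it.
call = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "export_customer_data", "arguments": {}},
}
resp = requests.post(f"{PROXY_URL}/mcp", json=call, timeout=10)
print("proxy response:", resp.json())

# Steps 4-5: the dashboard visualizes the same data; here we pull blocked
# actions and the reasons they were stopped (endpoint name is illustrative).
blocked = requests.get(f"{API_URL}/violations", timeout=10).json()
for violation in blocked:
    print(violation.get("tool"), "->", violation.get("reason"))
```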

Safe workflows

Your AI agents run securely with full visibility into their teamwork.


AI-Generated Review

What is astragraph?

AstraGraph delivers policy-enforced observability and fail-closed guardrails for MCP/A2A multi-agent systems in Rust. It proxies agent tool calls and task handoffs, evaluates actions against YAML policies, and constructs causal graphs with audit trails queryable via REST APIs and a React dashboard. Users get workflow reconstruction, violation timelines, and SLO metrics without agent changes.
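Since the causal graph and audit trail are described as queryable over REST, here is a sketch of what workflow reconstruction could look like from a client's perspective, assuming a hypothetical /graph endpoint and field names rather than AstraGraph's documented schema.

```python
# Hedged sketch of "workflow reconstruction" over a REST API. The /graph
# endpoint and field names are assumptions for illustration only.

import requests

API = "http://localhost:8080/api"   # assumed API base


def reconstruct_workflow(trace_id: str) -> None:
    """Fetch a causal graph for one trace and print handoffs edge by edge."""
    graph = requests.get(f"{API}/graph/{trace_id}", timeout=10).json()
    nodes = {node["id"]: node for node in graph["nodes"]}
    for edge in graph["edges"]:
        src = nodes[edge["from"]]["agent"]
        dst = nodes[edge["to"]]["agent"]
        print(f"{src} -> {dst}: {edge.get('action', 'handoff')}")


reconstruct_workflow("trace-1234")
```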

Why is it gaining traction?

Zero-overhead proxying of MCP tools/call and A2A tasks/send endpoints, with Python connectors for LangGraph, CrewAI, and AutoGen. Fail-closed defaults block unsafe calls before execution, the verifier queues work during outages, and policy rollouts include canary promotions. An end-to-end script exercises the gates, such as blocking calls with missing traces, in a single run.
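The fail-closed default is the behavior worth pausing on: a verifier outage degrades to blocking and queueing, not to silently allowing calls. A minimal sketch of that logic, using a hypothetical verify/guard split that is not the project's real code:

```python
# Sketch of the fail-closed idea (hypothetical code, not the project's
# implementation): if the verifier cannot be reached, the call is NOT allowed
# through by default; it is queued and reported as blocked.

import queue
from typing import Optional

pending: "queue.Queue[dict]" = queue.Queue()  # calls awaiting verification


def verify(call: dict) -> Optional[bool]:
    """Ask the verifier for a decision; return None if it is unreachable."""
    try:
        # Placeholder for a real RPC to the verifier service.
        raise ConnectionError("verifier unreachable")
    except ConnectionError:
        return None


def guard(call: dict) -> dict:
    decision = verify(call)
    if decision is None:
        # Fail closed: an outage means block now and queue for later
        # verification, rather than letting the call run unchecked.
        pending.put(call)
        return {"status": "blocked", "reason": "verifier unavailable, call queued"}
    if not decision:
        return {"status": "blocked", "reason": "policy violation"}
    return {"status": "allowed"}


print(guard({"method": "tasks/send", "params": {"task": "wire_funds"}}))
```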

Who should use this?

Platform engineers deploying MCP/A2A agents in production who need guardrails for risky tools such as data exports; security operations teams auditing multi-agent handoffs in finance or compliance workflows; and DevRel teams demoing observable LLM systems with the quickstart Docker stacks.

Verdict

Prototype it for MCP/A2A observability: Helm charts and a policy simulator ease Kubernetes deployments. 21 stars and 1.0% credibility signal early maturity, but solid evals, CI gates, and docs make it production-ready for small fleets; watch for external database backing.


