shawnpetros / salazar

Public

A harness for invoking long-running agent loops.

17 stars · 1 fork · 100% credibility

Found Apr 02, 2026 at 17 stars
AI Analysis
TypeScript
AI Summary

An AI agent system that autonomously plans, generates, validates, and evaluates code features from natural language specifications, with CLI interface, live dashboard, and support for single or multi-service projects.

How It Works

1. 🔍 Discover the builder: You hear about a friendly tool that turns your app ideas into working code automatically.

2. 🧙 Set up in minutes: Run it the first time and answer a few simple questions to connect the smart helpers that do the thinking.

3. 📝 Describe your app: Write a plain description of what your app should do, including its features and how it works.

4. 🚀 Start the magic: Hit go and watch it plan, build, test, and improve your app step by step.

5. 📊 Follow the progress: See a live dashboard showing features completing, costs accruing, and smart checks passing.

6. 📈 Review past builds: Check the history of your projects anytime to see what worked and reuse successes.

🎉 Your app is ready: Enjoy your fully working app, complete with tests and commits, built just how you imagined.
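The plan-build-test-improve loop in the steps above can be sketched as a staged pipeline. This is a hedged illustration only: the stage names, result shape, and per-stage cost figure below are assumptions for clarity, not salazar's actual API.

```typescript
// Hypothetical sketch of the plan → generate → validate → evaluate loop.
// Stage names, the StageResult shape, and costs are assumptions, not
// salazar's real interface.
type Stage = "plan" | "generate" | "validate" | "evaluate";

interface StageResult {
  stage: Stage;
  passed: boolean;
  costUsd: number; // illustrative per-stage cost tracking
}

function runFeature(spec: string): StageResult[] {
  const stages: Stage[] = ["plan", "generate", "validate", "evaluate"];
  return stages.map((stage) => ({
    stage,
    passed: spec.trim().length > 0, // stand-in for real checks
    costUsd: 0.01, // stand-in for real token accounting
  }));
}

const results = runFeature("Add a /health endpoint returning 200 OK");
const totalCost = results.reduce((sum, r) => sum + r.costUsd, 0);
console.log(results.every((r) => r.passed), totalCost.toFixed(2));
```

The live dashboard in step 5 would then just be a view over these per-stage results and the accumulated cost.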

AI-Generated Review

What is salazar?

Salazar is a TypeScript CLI harness for long-running AI agent loops: it turns markdown feature specs into autonomous code-generation workflows. You feed it a spec file with a command like `salazar run features.md`, and it spins up agents to plan, generate, validate, and evaluate code changes, with real-time TUI progress, cost tracking, and a companion Next.js dashboard fed over SSE and Redis. Developers get a full agent harness on GitHub that handles onboarding, prerequisites such as Python 3.11 and the Claude CLI, and session history without manual orchestration.
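As one illustration of the spec-to-features step, here is a hedged TypeScript sketch that splits a markdown spec into feature entries. The `## ` heading convention and the `parseSpec` helper are assumptions for the example, not salazar's documented format.

```typescript
// Hypothetical spec parser: split a markdown file into per-feature entries.
// The "## " heading convention is an assumption, not salazar's real format.
function parseSpec(markdown: string): { title: string; body: string }[] {
  const features: { title: string; body: string }[] = [];
  // Drop everything before the first "## " heading, then split on headings.
  for (const chunk of markdown.split(/^## /m).slice(1)) {
    const [title, ...rest] = chunk.split("\n");
    features.push({ title: title.trim(), body: rest.join("\n").trim() });
  }
  return features;
}

const spec = `# App spec

## Health endpoint
GET /health returns 200.

## User login
Email + password, session cookie.`;

console.log(parseSpec(spec).map((f) => f.title));
```

Each entry would then seed one pass through the plan/generate/validate/evaluate loop.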

Why is it gaining traction?

It stands out with a polished Ink-based terminal dashboard showing live feature progress, timelines, evaluator scores, and commit feeds, plus seamless GitHub integration hooks like webhooks and status checks for CI triggers. The multi-orchestrator supports brownfield mode for existing codebases, parallel runs, and hardening levels, making agent loops reliable for real projects. Devs love the zero-config start via wizard and history CLI for reviewing past runs with costs and pass rates.
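For the dashboard side, progress updates over SSE are plain text frames (`event:` and `data:` lines terminated by a blank line). The sketch below shows that generic wire format with a hypothetical progress payload; the event name and field names are assumptions, not salazar's actual schema.

```typescript
// Hedged sketch: a progress event shape a dashboard might receive over SSE.
// Field names are assumptions; salazar's actual wire format may differ.
interface ProgressEvent {
  feature: string;
  stage: string;
  evaluatorScore: number; // assumed 0..1 range
  costUsd: number;
}

function formatSse(event: ProgressEvent): string {
  // Per the SSE format, a frame is "event:"/"data:" lines plus a blank line.
  return `event: progress\ndata: ${JSON.stringify(event)}\n\n`;
}

const frame = formatSse({
  feature: "health endpoint",
  stage: "evaluate",
  evaluatorScore: 0.92,
  costUsd: 0.04,
});
console.log(frame);
```

A browser dashboard would consume such frames with the standard `EventSource` API and update its feature timeline as each one arrives.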

Who should use this?

AI workflow engineers prototyping agentic coding pipelines, or backend devs automating feature implementation from specs without babysitting prompts. Ideal for teams using Claude models in brownfield repos needing regression guards, or indie hackers testing autonomous agents before GitHub Actions deployment.

Verdict

Try it for agent experiments—solid CLI and dashboard make loops tangible—but at 17 stars and 1.0% credibility, it's early alpha with thin docs; expect bugs in edge cases until more runs stabilize it.

