ChanningLua

Self-improving agent runtime that learns from experience — test-verify-fix loops, correction detection, cross-project memory, multi-model orchestration.

83 stars · 9 forks · Python · 100% credibility
Found Apr 15, 2026 at 32 stars (3x growth since).

AI Summary

Prax is a command-line tool that orchestrates AI agents to repair codebases through automated test-verify-fix loops with persistent memory and multi-model support.

How It Works

1. 🔍 Discover Prax

You hear about Prax, a CLI tool that uses AI agents to automatically find and fix bugs in your code by running tests and making edits.

2. 📥 Install Prax

Download and install Prax on your machine so it's ready to use in your projects.

3. 🔗 Connect an LLM provider

Connect Prax to an LLM provider so it can read, understand, and modify your code.

4. 📂 Open your project

Change into the project directory whose tests need fixing.

5. 💬 Tell Prax what to do

Give it an instruction like "run the tests, fix any failures, and stop when they pass", then watch Prax inspect files, make edits, and re-run the tests until everything works.

6. 🔄 See it in action

Prax shows each step: the files it reads, the edits it makes, and the test results, while building persistent memory so later tasks start faster.

Bugs fixed!

Your tests now pass, the code is repaired, and you can keep building with confidence.


Star Growth

This repo grew from 32 to 83 stars.
AI-Generated Review

What is prax-agent?

Prax-agent is an open-source Python CLI that runs LLM agents on your real codebases, looping through test-verify-fix cycles to automate bug fixes and refactors. Tell it "run pytest, fix failures until tests pass," and it inspects code, edits files, re-runs tests, and persists context across sessions via JSON, SQLite, or vector stores. It supports multi-model orchestration with Claude, GPT, or GLM, plus read-only or workspace-write permission modes for safe execution.
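The cross-session persistence described above could be backed by something as small as a SQLite key-value table. A minimal sketch, assuming nothing about Prax's real schema — the `SessionMemory` class and its keys are invented for illustration:

```python
import json
import sqlite3

class SessionMemory:
    """Sketch of cross-session context persistence via SQLite
    (one of the backends mentioned; schema here is hypothetical)."""

    def __init__(self, path=":memory:"):
        self.db = sqlite3.connect(path)
        self.db.execute(
            "CREATE TABLE IF NOT EXISTS memory (key TEXT PRIMARY KEY, value TEXT)"
        )

    def remember(self, key, value):
        # JSON-encode so arbitrary structures survive round trips
        self.db.execute(
            "INSERT OR REPLACE INTO memory VALUES (?, ?)", (key, json.dumps(value))
        )
        self.db.commit()

    def recall(self, key, default=None):
        row = self.db.execute(
            "SELECT value FROM memory WHERE key = ?", (key,)
        ).fetchone()
        return json.loads(row[0]) if row else default

mem = SessionMemory()
mem.remember("project:auth", {"fixed": ["login.py"], "todo": ["httpx upgrade"]})
print(mem.recall("project:auth")["todo"])  # ['httpx upgrade']
```

With a file path instead of `:memory:`, the same store survives between CLI invocations, which is what lets a later session start from what an earlier one learned.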

Why is it gaining traction?

It flips the script on flaky LLM wrappers by prioritizing verification: benchmarks show 10/10 repo repairs in half the time of peers like Hermes. Developers like the persistent REPL with slash commands (/model, /cost, /todo), which makes it a practical open-source, CLI-driven alternative to GitHub Copilot. Multi-model fallbacks and cost tracking keep runs reliable without vendor lock-in.
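Multi-model fallback with cost tracking could look roughly like the sketch below. The provider names, per-call prices, and the `call_with_fallback` helper are all hypothetical, not Prax's real API:

```python
def call_with_fallback(prompt, providers, budget):
    """Try providers in order; skip any whose cost would exceed the
    budget, and fall back past failures. Entirely illustrative."""
    spent = 0.0
    for name, cost_per_call, call in providers:
        if spent + cost_per_call > budget:
            continue  # cost tracking: don't start a call we can't afford
        try:
            reply = call(prompt)
            spent += cost_per_call
            return name, reply, spent
        except Exception:
            spent += cost_per_call  # a failed call still costs money
    raise RuntimeError("all providers failed or budget exhausted")

# Simulated providers: the primary times out, the fallback answers.
def flaky(prompt):
    raise TimeoutError("primary model unavailable")

def stable(prompt):
    return f"patch for: {prompt}"

providers = [("claude", 0.03, flaky), ("glm", 0.01, stable)]
name, reply, spent = call_with_fallback("fix test_login", providers, budget=0.10)
print(name, spent)  # glm 0.04
```

Keeping the fallback chain and budget outside any single provider's SDK is what avoids the vendor lock-in the review mentions.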

Who should use this?

Backend engineers debugging test failures or refactoring auth flows in Python/JS repos. Solo devs analyzing technical debt via natural language queries like "explain login.py and suggest httpx upgrades." Teams needing an open source coding LLM for self-hosted GitHub-like workflows without Copilot subscriptions.

Verdict

Grab it if you're evaluating open source coding agents: strong docs, an MIT license, and repo-repair benchmarks that punch above its modest star count. Still alpha with room for broader language support, but a solid CLI bet for test-driven AI assistance.


