iliazintchenko

Agent learns to become the world's top expert on SAT

Found Mar 19, 2026 at 70 stars
AI Analysis
Python
AI Summary

A collaborative system of AI agents that solve challenging MaxSAT optimization problems from competitions using guided local search techniques.
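To make "guided local search" concrete, here is a minimal sketch of the kind of flip-based heuristic such agents iterate on for weighted MaxSAT. This is a hypothetical illustration, not the repo's actual code: `maxsat_local_search`, the clause encoding, and the greedy accept rule are all assumptions.

```python
import random

def maxsat_local_search(clauses, n_vars, steps=10_000, seed=0):
    """Greedy flip search for weighted MaxSAT. Each clause is
    (weight, [literals]); literal i > 0 means variable i is true.
    Hypothetical sketch of a local-search heuristic, not the repo's code."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]  # 1-indexed

    def unsat_cost(a):
        # Sum of weights of clauses with no satisfied literal.
        return sum(w for w, lits in clauses
                   if not any((a[l] if l > 0 else not a[-l]) for l in lits))

    best = unsat_cost(assign)
    for _ in range(steps):
        v = rng.randint(1, n_vars)
        assign[v] = not assign[v]          # tentative flip
        cost = unsat_cost(assign)
        if cost <= best:
            best = cost                    # keep improving/sideways moves
        else:
            assign[v] = not assign[v]      # revert a worsening flip
    return best, assign
```

Competition-grade solvers layer far smarter move selection (e.g. clause-weight adaptation) on top of this skeleton; evolving those heuristics is what the agents automate.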

How It Works

1
🔍 Discover the toolkit

You stumble upon agent-sat, a clever setup where AI helpers team up to crack really tough MaxSAT logic puzzles from solver competitions.

2
📥 Grab the files

Download the project folder to your powerful computer or rent a big online machine to handle the heavy thinking.

3
🔧 Prep your workspace

Run a quick setup script that grabs puzzle collections and gets everything organized in separate thinking spaces.

4
🚀 Launch the AI team

Fire up multiple AI assistants with one command—they start reading the plan and diving into the puzzles together.

5
📊 Watch them work

Split screens show each assistant's thoughts, progress steps, and running costs as they swap ideas and refine answers.

6
💾 Save breakthroughs

The best puzzle solutions get automatically stored in a compact file, beating old records with smarter strategies.
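A best-solution archive like the one described can be sketched in a few lines. The file layout here (gzip-compressed JSON keyed by instance name) is an assumption for illustration; the repo's actual storage format may differ.

```python
import gzip
import json
import os

def save_if_better(path, instance, cost, assignment):
    """Keep a gzip'd JSON archive of the best-known cost per instance.
    Hypothetical storage layout -- the repo's real format may differ."""
    best = {}
    if os.path.exists(path):
        with gzip.open(path, "rt") as f:
            best = json.load(f)
    entry = best.get(instance)
    if entry is None or cost < entry["cost"]:
        best[instance] = {"cost": cost, "assignment": assignment}
        with gzip.open(path, "wt") as f:
            json.dump(best, f)
        return True   # new record stored
    return False      # existing solution is at least as good
```

Persisting only strict improvements is what lets repeated runs accumulate gains instead of overwriting good answers with worse ones.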

🏆 Win at puzzles

You now have top-notch answers for hard optimization challenges, ready to share or build upon.


AI-Generated Review

What is agent-sat?

This Python project deploys AI agents powered by Claude to iteratively master MaxSAT solving on MSE2024 competition benchmarks. Fire up multiple agents via run.sh on EC2 or run_local.sh locally—they parse WCNF files, evolve heuristics like local search and core-guided solving, and store compressed best solutions with their costs. It's a Claude-driven agent setup where agents learn from feedback to chase world-class SAT performance.
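Parsing WCNF input is the first step any such agent has to get right. A minimal sketch for the post-2022 MaxSAT Evaluation format (hard clauses prefixed with `h`, soft clauses prefixed with their weight, `c` comments) might look like this; real competition files may need more robust handling.

```python
def parse_wcnf(text):
    """Parse MSE post-2022 WCNF: 'h <lits> 0' marks a hard clause,
    '<weight> <lits> 0' a soft one; lines starting with 'c' are comments.
    Simplified sketch for illustration."""
    hard, soft = [], []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("c"):
            continue
        tok = line.split()
        lits = [int(t) for t in tok[1:-1]]  # drop weight/'h' and trailing 0
        if tok[0] == "h":
            hard.append(lits)
        else:
            soft.append((int(tok[0]), lits))
    return hard, soft
```

The solver must satisfy every hard clause and minimize the total weight of violated soft clauses.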

Why is it gaining traction?

Unlike static SAT solvers, it runs parallel agents in tmux panes showing live token costs, step traces, and progress, letting you watch AI autonomously refine code for better scores. Persistent best-solution storage across runs means incremental gains, and it beats baselines on tough instances like synplicate or timetabling—perfect for hands-off agent-coding experiments without manual tuning.
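The one-agent-per-pane layout can be driven from Python by shelling out to tmux. The tmux subcommands below (`new-session`, `split-window`, `select-layout`) are standard, but the session layout is a hypothetical sketch of what a launcher like run.sh might assemble; the function only builds the command strings rather than executing them.

```python
import shlex

def tmux_commands(session, agent_cmds):
    """Build tmux invocations that would run one agent per pane in a
    tiled layout. Hypothetical sketch of a launcher; commands are
    returned as strings, not executed."""
    cmds = [f"tmux new-session -d -s {session} {shlex.quote(agent_cmds[0])}"]
    for cmd in agent_cmds[1:]:
        cmds.append(f"tmux split-window -t {session} {shlex.quote(cmd)}")
    cmds.append(f"tmux select-layout -t {session} tiled")
    return cmds
```

Feeding these to `subprocess.run` (split with `shlex.split`) would give one pane per agent, each printing its own trace and cost stream.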

Who should use this?

SAT solver researchers benchmarking AI against PySAT tools like Glucose or CaDiCaL. AI agent builders testing Claude-driven autonomy on combinatorial optimization. MaxSAT competition teams seeking quick heuristic boosts via agentic workflows.

Verdict

Worth forking for fans of Claude-driven coding agents—it delivers tangible SAT improvements in hours. At 70 stars and 0.7% credibility, it's raw and EC2-heavy; test locally first and expect documentation gaps, but the benchmark tracking is strong.
