kunchenguid/superpowers-bench

Can your agent use the right skills for the right tasks?

Found Apr 16, 2026 at 18 stars.
Language: TypeScript

AI Summary

superpowers-bench is a benchmarking tool that measures how effectively various AI coding agents autonomously select and invoke appropriate skills for coding tasks, with or without contextual hints.

How It Works

1. 🔍 Discover the benchmark

You hear about a test that checks whether AI coding helpers pick the right skills for different jobs without being told exactly what to do.

2. 💻 Set it up

You get the tool ready on your computer so it's all set to start testing.

3. 📥 Download skills

You grab the collection of helpful skills that the AI helpers can choose from.

4. Pick your test

🎯 Quick single test: test one specific coding job to see which skills the AI picks.

📈 Full comparison: run tests on lots of jobs at once to compare different AI helpers side by side.

5. 🚀 Run the tests

Hit start and watch as the AI helpers tackle the jobs, picking skills on their own or with gentle hints.

6. 📊 See the scores

Get a clear report showing which AI did best at choosing the right skills, with charts and numbers to compare (a sketch of what one result record might look like follows below).
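For a rough idea of what each scored run boils down to, here is a minimal sketch of one result record; every field name below is a hypothetical assumption for illustration, since the real results.jsonl schema isn't shown here:

```typescript
// Hypothetical shape of one benchmark result record.
// All field names are assumptions; the real results.jsonl
// schema may differ.
interface BenchResult {
  task: string;             // e.g. "fix-flaky-test"
  agent: "claude" | "codex" | "opencode";
  hinted: boolean;          // baseline run vs. run with a subtle hint
  expectedSkills: string[]; // skills the grader expects for this task
  invokedSkills: string[];  // skills the agent actually invoked
  f1: number;               // skill-selection F1 for this run
}
```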


AI-Generated Review

What is superpowers-bench?

Superpowers-bench is a TypeScript CLI benchmark that tests whether coding agents like Claude Code, Codex, or OpenCode pick the right skills from the superpowers repo for a given task, without explicit instructions. You clone it, fetch the skills, run tasks via `npm run bench -- matrix`, and get F1 scores on skill-selection precision and recall, baseline vs. with subtle hints. It clones a sample repo per run, executes agents in isolation, and spits out markdown reports comparing Claude Code, Codex, and OpenCode performance.
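To make the F1 scoring concrete, here is a minimal TypeScript sketch of how skill-selection precision, recall, and F1 could be computed from expected vs. invoked skill sets; the function name and types are illustrative, not the repo's actual API:

```typescript
// Skill-selection F1: compare the skills an agent invoked against
// the skills the grader expected. Illustrative sketch only.
function skillF1(expected: string[], invoked: string[]): number {
  const want = new Set(expected);
  const got = new Set(invoked);
  // True positives: invoked skills that were actually expected.
  const tp = [...got].filter((s) => want.has(s)).length;
  const precision = got.size === 0 ? 0 : tp / got.size;
  const recall = want.size === 0 ? 0 : tp / want.size;
  // F1 is the harmonic mean of precision and recall.
  return precision + recall === 0
    ? 0
    : (2 * precision * recall) / (precision + recall);
}

// Example: one correct skill invoked, one irrelevant one.
console.log(skillF1(["tdd", "debugging"], ["tdd", "refactoring"])); // 0.5
```

Precision penalizes invoking irrelevant skills and recall penalizes missing expected ones, which is why an F1 blend fits "did it discover the right skills" better than a pass/fail check would.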

Why is it gaining traction?

It delivers apples-to-apples evals across agents on the same 20 tasks, focusing purely on skill discovery, not code quality, via programmatic grading. The matrix mode parallelizes runs, auto-handles multi-turn chats, and quantifies hint impact, making it dead simple to swap in GitHub Copilot CLI or a custom agent setup. Devs love the isolated workspaces and repeatable results.jsonl output.
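As a sketch of what consuming that results.jsonl output could look like, assuming one JSON object per line with `agent` and `f1` fields (an assumption, not the documented schema):

```typescript
import { readFileSync } from "node:fs";

// Average skill-selection F1 per agent from a results.jsonl file.
// Assumes one JSON object per line with `agent` and `f1` fields;
// the real schema may differ.
const lines = readFileSync("results.jsonl", "utf8")
  .split("\n")
  .filter((line) => line.trim().length > 0);

const totals = new Map<string, { sum: number; n: number }>();
for (const line of lines) {
  const { agent, f1 } = JSON.parse(line) as { agent: string; f1: number };
  const t = totals.get(agent) ?? { sum: 0, n: 0 };
  totals.set(agent, { sum: t.sum + f1, n: t.n + 1 });
}

for (const [agent, { sum, n }] of totals) {
  console.log(`${agent}: mean F1 = ${(sum / n).toFixed(3)} over ${n} runs`);
}
```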

Who should use this?

AI agent builders tweaking Copilot-style IDE extensions or custom coding agents. Teams evaluating which agent is currently best at repo tasks like debugging or TDD. Researchers comparing Claude Code, Codex, and OpenCode on skill invocation in repeatable benchmarks.

Verdict

Grab it if you're deep into the coding-agent debates or building skill-aware tooling: solid for quick evals, despite only 18 stars signaling early maturity. Docs are crisp, but expect tweaks for production-scale runs; run a matrix first to test your setup.

