HwangTaehyun

Oh My Agentic Score (OMAS) — Measure and visualize AI agent thread-based engineering capabilities. 4-dimension scoring: More (Parallelism), Longer (Autonomy), Thicker (Density), Fewer (Trust).

Found Mar 06, 2026 at 11 stars.
AI Summary

Oh My Agentic Score analyzes logs from AI-assisted coding sessions to score and visualize performance in parallelism, autonomy, density, and trust using an interactive dashboard.

How It Works

1. 🚀 Discover and install easily

You find this tool that measures how well you team up with AI for coding, and set it up with one simple command.

2. 🔍 Scan your coding history

Tell it to look at your past AI coding chats, and it gathers everything automatically.

3. 📊 See your performance scores

Get instant reports showing your strengths across four dimensions: running tasks in parallel, letting the AI work unattended for longer, packing more work into each session, and trusting the AI with fewer check-ins.

4. 🎛️ Launch colorful dashboard

Open a beautiful web view with charts, trends, and breakdowns to explore your progress visually.

5. Share scores or keep private?

👥 Join rankings

Sign in once with GitHub to upload safely and see where you stand among developers.

🔒 Stay local

Everything stays on your computer, fully private with no sharing needed.

📈 Track your growth

Watch your agentic coding skills improve over time with clear charts and tips to level up.
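The exact scoring formulas aren't documented on this page, but the four dimensions from step 3 can be sketched as toy averages over per-session summaries. Everything below (the `Session` fields and the formulas) is a hypothetical illustration, not OMAS's real schema:

```python
from dataclasses import dataclass

@dataclass
class Session:
    """Hypothetical summary of one AI coding session (not OMAS's real schema)."""
    human_messages: int    # prompts you typed
    tool_calls: int        # actions the agent took on its own
    duration_min: float    # wall-clock length of the session
    concurrent_peers: int  # sessions running at the same time

def score(sessions: list[Session]) -> dict[str, float]:
    """Toy versions of the four OMAS dimensions."""
    total_min = sum(s.duration_min for s in sessions)
    total_tools = sum(s.tool_calls for s in sessions)
    return {
        "more": sum(s.concurrent_peers for s in sessions) / len(sessions),  # parallelism
        "longer": total_min / len(sessions),                                # autonomy
        "thicker": total_tools / total_min,                                 # density
        "fewer": total_tools / sum(s.human_messages for s in sessions),     # trust
    }

sessions = [Session(4, 40, 30, 2), Session(2, 50, 60, 1)]
print(score(sessions))  # {'more': 1.5, 'longer': 45.0, 'thicker': 1.0, 'fewer': 15.0}
```

Higher "fewer" here means more agent actions per human check-in, i.e. more trust placed in the agent.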

AI-Generated Review

What is oh-my-agentic-score?

Oh My Agentic Score (OMAS) analyzes Claude Code session logs to produce a four-dimension agentic AI score: More (parallelism), Longer (autonomy), Thicker (density), and Fewer (trust), visualizing your AI agent engineering capabilities. A Python CLI with a Next.js dashboard scans ~/.claude/projects/, scores threads from Base chats up to Z-thread autonomy, and exports trends via `omas scan`, `omas report`, and `omas dashboard`. You can track your agentic scorecard's progress without exposing any code.
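A minimal sketch of what a scan step like `omas scan` might do, assuming Claude Code's layout of one `.jsonl` transcript per session under per-project directories (the layout and the one-count-per-file summary are assumptions; the real parser is certainly more involved):

```python
from pathlib import Path

def scan_projects(root: Path) -> dict[str, int]:
    """Toy stand-in for `omas scan`: count log lines per session transcript.

    Assumes `root` mirrors ~/.claude/projects/, with per-project
    subdirectories each holding one .jsonl file per session.
    """
    counts: dict[str, int] = {}
    for log in root.glob("*/*.jsonl"):
        with log.open() as fh:
            # One JSON object per line; blank lines are skipped.
            counts[log.stem] = sum(1 for line in fh if line.strip())
    return counts
```

A real scanner would parse each line's message type and timestamps rather than just counting, but the traversal pattern is the same.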

Why is it gaining traction?

It hooks developers by turning vague "agentic workflows" into concrete metrics, such as tool calls per human message or concurrent sessions, where generic logging tools stay fuzzy. Radar charts, pie breakdowns, and privacy-safe cloud rankings (project paths are hashed) make agentic coding measurable and shareable. A one-line install via curl, pip, or Homebrew seals the deal.
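Both ideas from this paragraph fit in a few lines: the tool-calls-per-human-message metric, and hashing a project path before upload so rankings never see real paths. The message schema and the use of SHA-256 are assumptions for illustration; OMAS may use a different scheme:

```python
import hashlib

def tool_calls_per_human_message(messages: list[dict]) -> float:
    """Ratio of agent tool calls to human prompts (toy trust/density metric)."""
    tools = sum(1 for m in messages if m["type"] == "tool_call")
    humans = sum(1 for m in messages if m["type"] == "human")
    return tools / humans if humans else 0.0

def anonymize_path(path: str) -> str:
    """Hash a project path before uploading, so leaderboards never see it.

    SHA-256 truncated to 12 hex chars is an assumption, not OMAS's
    documented scheme.
    """
    return hashlib.sha256(path.encode()).hexdigest()[:12]
```

The hash is one-way and stable, so the same project keeps the same ranking identity across uploads without revealing its name.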

Who should use this?

Claude Code users refining their agentic prompts, chat sessions, or agentic SDLC pipelines. Well suited to backend engineers pushing autonomy in long-running agents, or teams benchmarking the density of parallel agent sessions and RAG-backed workflows.

Verdict

A solid alpha (v0.8.7, 11 stars, 1.0% credibility) with polished docs and an MIT license, but low adoption signals early-stage risk; test it on non-critical workflows first. Worth it for heavy Claude Code users chasing agentic gains.

