SkillLens (by AndrewNgGirl)

Open-source self-hosted web tool for evaluating Agent Skills with rubric scores, Deep Review, and improvement suggestions.

AI Summary

SkillLens is an open-source web tool that analyzes AI agent skills from uploaded files, delivering scores across value, market fit, costs, reliability, and documentation quality along with improvement advice.

How It Works

1. 🔍 Discover SkillLens

You hear about SkillLens, a helpful tool that checks if your AI assistant skill is ready to share, and visit the simple web page.

2. 📁 Upload Your Skill

Drag your SKILL.md file, skill folder, or zipped package onto the page, and it loads instantly for review.
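
A rough idea of how such an upload might be classified client-side; the type and function names below are illustrative assumptions, not SkillLens's actual code:

```typescript
// Sketch: distinguishing the three upload shapes the page mentions.
// Names and logic are hypothetical; only the accepted shapes
// (SKILL.md file, folder, zip) come from the page itself.
type UploadKind = "skill-md" | "folder" | "zip";

function classifyUpload(files: File[]): UploadKind {
  if (files.length === 1 && files[0].name.toLowerCase().endsWith(".zip")) {
    return "zip";
  }
  if (files.length === 1 && files[0].name.toUpperCase() === "SKILL.MD") {
    return "skill-md";
  }
  // A dropped folder arrives as multiple File entries in the browser.
  return "folder";
}
```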

3. 📊 See Quick Scores

Right away, you get a preview of basic checks like structure and costs, with a colorful radar chart showing strengths and gaps.
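
As a sketch of what that preview might contain, here is one plausible shape for the quick-check result and its mapping to radar-chart axes. Only the five pillar names come from the page; the field layout and scale are assumptions:

```typescript
// Hypothetical preview result; only the pillar names are from the page.
interface PreviewScores {
  value: number;          // e.g. 0-20 per pillar if weighted equally
  marketFit: number;      // (equal weighting is an assumption)
  runtimeCost: number;
  reliability: number;
  writeupQuality: number;
}

// A radar chart takes one axis per pillar, so the preview maps directly.
function toRadarAxes(s: PreviewScores): { axis: string; score: number }[] {
  return Object.entries(s).map(([axis, score]) => ({ axis, score }));
}
```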

4. 🎯 Tune Your Review

Adjust the importance of different areas like value or reliability to match what matters most to you.
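
One plausible reading of weight tuning, sketched below: renormalize the per-pillar weights so the overall result stays on the 100-point scale the page describes. This illustrates the idea, not SkillLens's documented formula:

```typescript
// Sketch: user-tunable pillar weights applied to raw 0-1 pillar scores.
// Weights are renormalized so the total stays on a 100-point scale.
type Pillar = "value" | "marketFit" | "runtimeCost" | "reliability" | "writeupQuality";

function weightedScore(
  raw: Record<Pillar, number>,      // each pillar scored in [0, 1]
  weights: Record<Pillar, number>,  // arbitrary non-negative importances
): number {
  const total = Object.values(weights).reduce((a, b) => a + b, 0);
  let score = 0;
  for (const pillar of Object.keys(raw) as Pillar[]) {
    score += raw[pillar] * (weights[pillar] / total);
  }
  return Math.round(score * 100);
}

// Example: a user who cares most about reliability.
const example = weightedScore(
  { value: 0.8, marketFit: 0.6, runtimeCost: 0.9, reliability: 0.5, writeupQuality: 0.7 },
  { value: 1, marketFit: 1, runtimeCost: 1, reliability: 3, writeupQuality: 1 },
); // ≈ 64, pulled down by the heavily weighted reliability pillar
```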

5. 🚀 Run Deep Review

Click to launch the full smart analysis, which thoughtfully reviews usefulness, market fit, and stability using AI insights.
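
The page confirms only that the deep pass uses an LLM (preview mode needs no API key). A minimal sketch of what the request might look like, with a hypothetical endpoint and payload:

```typescript
// Sketch only: the route and payload shape below are hypothetical.
async function runDeepReview(skillMarkdown: string): Promise<unknown> {
  const res = await fetch("/api/deep-review", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      skill: skillMarkdown,
      pillars: ["value", "marketFit", "runtimeCost", "reliability", "writeupQuality"],
    }),
  });
  if (!res.ok) throw new Error(`Deep review failed: ${res.status}`);
  return res.json(); // pillar scores, evidence, and suggestions per the report step
}
```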

6. 💡 Get Actionable Report

Enjoy a detailed breakdown with pillar scores, evidence, top suggestions, and even similar projects from the web.
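
For a sense of what an exported JSON report could contain, here is an assumed shape covering the items the page names (pillar scores, evidence, suggestions, similar projects):

```typescript
// Hypothetical export shape; field names are assumptions drawn from the
// report contents listed above, not the repo's actual schema.
interface SkillReport {
  overallScore: number;                                           // 0-100
  pillars: Record<string, { score: number; evidence: string[] }>;
  suggestions: { priority: "high" | "medium" | "low"; fix: string }[];
  similarProjects: { name: string; url: string }[];
  generatedAt: string;                                            // ISO timestamp
}
```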

Improve and Share

Follow the clear fixes to polish your skill, export the report, and confidently publish it knowing it's solid and valuable.

AI-Generated Review

What is SkillLens?

SkillLens is an open-source self-hosted web tool that evaluates Agent Skills for ecosystems like Cursor, Claude, and OpenClaw. Upload a SKILL.md file, skill folder, or zip package, and it delivers a 100-point rubric score across five pillars—value, market fit, runtime cost, reliability, and writeup quality—plus LLM-powered deep reviews, GitHub market signals on competitors, and prioritized improvement suggestions. Built in TypeScript for the web UI with a Python CLI option, it runs locally without needing API keys in preview mode.
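
"GitHub market signals on competitors" presumably involves querying GitHub for comparable repos. A minimal sketch using GitHub's public search API; the endpoint and response fields are real, but how SkillLens actually builds its queries is an assumption:

```typescript
// Sketch: pulling rough market signals for competing skills via the
// public GitHub search API.
async function marketSignals(keywords: string) {
  const q = encodeURIComponent(`${keywords} in:name,description`);
  const res = await fetch(
    `https://api.github.com/search/repositories?q=${q}&sort=stars&per_page=5`,
    { headers: { Accept: "application/vnd.github+json" } },
  );
  const data = await res.json();
  return data.items.map((r: any) => ({
    name: r.full_name,
    stars: r.stargazers_count,
    updated: r.updated_at,
  }));
}
```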

Why is it gaining traction?

It goes beyond basic format checks to assess real-world viability: does the skill save time, outperform a generic LLM or GitHub Copilot at the same job, stay cheap to run, and handle edge cases reliably? Customizable weights, bilingual English/Chinese support, radar charts, and exportable PDF/JSON reports make iteration fast. As an open-source, self-hosted tool for agent skill auditing, it fills a gap: structured feedback before marketplace submission or open-sourcing.

Who should use this?

Agent skill authors prepping Cursor rules, Claude artifacts, or OpenClaw packages for release. Teams building internal AI workflows who need consistent quality gates on metadata, prompts, scripts, and costs. Devs adapting existing tools, such as project trackers or ticketing systems, into agent skills.

Verdict

Early days: just 10 stars and a short track record. Docs are solid (bilingual READMEs, quickstart), but expect light testing and occasional rough edges. Worth self-hosting now if you work on agent skills; fork and contribute to help this open-source auditing tool mature.
