gotalab / uxaudit

Claude Code plugin for UX regression testing across real user journeys.

Found Apr 10, 2026 at 15 stars. Language: Python.
AI Summary

UXaudit is a plugin for AI coding environments that automatically audits web apps for UX issues across accessibility, usability, generic AI-style design patterns, and core user journeys, delivering an evidence-based dashboard with prioritized fixes.

How It Works

1. 🔍 Discover UXaudit

You hear about a helpful tool that checks if your web app is easy and enjoyable for real people to use.

2. 🛠️ Add to your AI workspace

In Claude Code (or another AI coding helper), add the plugin with a single command and it sets itself up.
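In Claude Code, this step usually boils down to a couple of slash commands. The marketplace and plugin names below are assumptions; check the repo's README for the exact ones.

```
# Inside a Claude Code session (names are illustrative):
/plugin marketplace add gotalab/uxaudit
/plugin install uxaudit
```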

3. 🚀 Point it at your app

Point the plugin at your running app by name; it automatically discovers the app and starts exploring.

4. 👀 It walks real user paths

The tool pretends to be a new visitor, signing up, trying features, and spotting where people get stuck or confused.
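Under the hood the real walker drives a browser with Playwright; as a browser-free sketch of the bookkeeping such a walk needs, here is a hypothetical journey runner that records the first step where a simulated visitor gets stuck. All names are illustrative, not uxaudit's API.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Optional

@dataclass
class Step:
    name: str                            # e.g. "submit signup form"
    action: Callable[[Dict], bool]       # True if the visitor could proceed

@dataclass
class JourneyResult:
    journey: str
    completed: List[str] = field(default_factory=list)
    stuck_at: Optional[str] = None       # first step where the visitor got stuck

def walk(journey: str, steps: List[Step], state: Dict) -> JourneyResult:
    """Walk a user journey step by step, stopping at the first dead end."""
    result = JourneyResult(journey=journey)
    for step in steps:
        if step.action(state):
            result.completed.append(step.name)
        else:
            result.stuck_at = step.name  # evidence for the report
            break
    return result

# A toy "signup to first value" journey: the CTA is findable,
# but the form rejects the visitor (a dead end a UX audit would flag).
signup = [
    Step("find signup CTA", lambda s: s.get("cta_visible", False)),
    Step("submit signup form", lambda s: s.get("form_accepts", False)),
    Step("reach first value", lambda s: s.get("onboarded", False)),
]
result = walk("signup-to-first-value", signup, {"cta_visible": True})
print(result.completed)  # ['find signup CTA']
print(result.stuck_at)   # 'submit signup form'
```

The first failing step, plus the state at that point, is exactly the kind of evidence a report can attach to each finding.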

5. 📊 See the colorful report

A dashboard shows concrete issues, such as hard-to-read text or generic designs, with screenshots as evidence.

6. 💡 Get your fix plan

It hands you a ranked list of simple changes to make your app feel more welcoming and smooth.
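A ranked fix plan like this can be produced by sorting findings on severity and reach. The scoring below is an assumed scheme for illustration, not uxaudit's actual algorithm.

```python
# Rank audit findings into a fix plan: highest severity first,
# ties broken by how many journeys the issue blocks.
# The severity weights are illustrative assumptions.
SEVERITY = {"blocker": 3, "major": 2, "minor": 1}

def fix_plan(issues):
    return sorted(
        issues,
        key=lambda i: (SEVERITY[i["severity"]], i["journeys_affected"]),
        reverse=True,
    )

issues = [
    {"id": "low-contrast-cta", "severity": "major", "journeys_affected": 3},
    {"id": "signup-dead-end", "severity": "blocker", "journeys_affected": 1},
    {"id": "generic-hero-copy", "severity": "minor", "journeys_affected": 5},
]
plan = fix_plan(issues)
print([i["id"] for i in plan])
# ['signup-dead-end', 'low-contrast-cta', 'generic-hero-copy']
```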

🎉 Your app delights users

After quick fixes and re-checks, your app now guides people effortlessly to success every time.


AI-Generated Review

What is uxaudit?

uxaudit is a Python plugin for the Claude Code CLI that runs UX regression tests on live web apps and Electron targets. It scans your code, routes, and docs to auto-generate 5-8 user journeys, such as signup to first value, then walks them with Playwright, checking whether users can understand, decide, act, and recover. Beyond journeys, it flags roughly 40 issue types across AI-slop, accessibility (WCAG AA), usability, and desirability, delivering a dashboard with screenshots, history, and a ranked fix plan to feed your coding agent.
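As a concrete example of the WCAG AA side of such an audit, text contrast can be checked with WCAG's own relative-luminance formula. The function names here are mine, not uxaudit's.

```python
def _linear(channel: int) -> float:
    """sRGB channel (0-255) to its linear value, per the WCAG definition."""
    s = channel / 255
    return s / 12.92 if s <= 0.03928 else ((s + 0.055) / 1.055) ** 2.4

def luminance(rgb) -> float:
    """Relative luminance of an (r, g, b) color."""
    r, g, b = (_linear(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    """Contrast ratio between two colors, from 1:1 up to 21:1."""
    l1, l2 = sorted((luminance(fg), luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

def passes_aa(fg, bg, large_text: bool = False) -> bool:
    """WCAG AA: at least 4.5:1 for normal text, 3:1 for large text."""
    return contrast_ratio(fg, bg) >= (3.0 if large_text else 4.5)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))  # 21.0
print(passes_aa((119, 119, 119), (255, 255, 255)))           # light gray on white: False
```

Light gray (#777777) on white sits just under the 4.5:1 cutoff, which is exactly the kind of "hard-to-read text" finding the dashboard surfaces.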

Why is it gaining traction?

Unlike Playwright or Lighthouse, which verify that flows run or measure performance, uxaudit tests whether they make sense: it spots dead ends, vague buttons, and AI fingerprints like purple gradients and shadcn defaults. Install it via the Claude Code plugin marketplace (the free tier works; Max is recommended for speed), invoke it with /uxaudit:uxaudit my-app, and get evidence-backed fix proposals. The Claude GitHub plugin integration and dashboard history appeal to devs iterating fast with agents.
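The "AI fingerprint" checks mentioned above can be approximated with cheap heuristics over a page's stylesheet. This sketch uses my own patterns for illustration, not uxaudit's detector.

```python
import re

HEX = re.compile(r"#([0-9a-fA-F]{6})")

def is_purplish(hex6: str) -> bool:
    """Crude hue test: red and blue both dominate green (purple/violet tones)."""
    r, g, b = (int(hex6[i:i + 2], 16) for i in (0, 2, 4))
    return r > g and b > g and b > 150

def ai_fingerprints(css: str) -> list:
    """Flag heuristic 'AI-slop' styling patterns in raw CSS."""
    findings = []
    for grad in re.findall(r"linear-gradient\([^)]*\)", css):
        if any(is_purplish(h) for h in HEX.findall(grad)):
            findings.append("purple gradient")
            break
    if re.search(r"border-radius:\s*0\.5rem", css):  # a common framework default
        findings.append("default border radius")
    return findings

css = """
.hero { background: linear-gradient(135deg, #a855f7, #6366f1); }
.card { border-radius: 0.5rem; }
"""
print(ai_fingerprints(css))  # ['purple gradient', 'default border radius']
```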

Who should use this?

Frontend teams building web apps with the Claude Code CLI or GitHub Copilot who ship prototypes via AI agents and need UX guardrails between E2E passes and merge. It is ideal for auditing AI-generated UIs before exposing them to users, or for running uxaudit in CI via Claude GitHub Actions.

Verdict

Grab it if you're deep into Claude Code skills: installation is dead simple, the docs cover CLI pricing and leaks, and it catches real UX regressions that alternatives miss. At 15 stars it's experimental, with solid Claude Code plugin support, but watch for workflow changes; test it on a sample app first.
