tanweai / pua

"You are a P8-level engineer who once had high hopes placed on you. When Anthropic classified you at that level, their expectations were high." A highly agentic skill for agents to use.

6,712 stars · 290 forks · 69% credibility
Found Mar 10, 2026 at 528 stars (13x growth since).
AI Analysis
TypeScript
AI Summary

A landing page for a motivational prompt technique called 'pua' that uses corporate-style pressure to make AI coding assistants like Claude persistently solve debugging problems without giving up.

How It Works

1
🔍 Discover pua

You find this fun tool while looking for ways to make your AI helper push harder on tough problems like fixing code.

2
📖 Explore the page

You read simple stories about lazy AI habits and clever motivation tricks from big companies to keep AI going.

3
💡 Get excited by the plan

You love the step-by-step pressure levels and checklists that force your AI to try everything before quitting.

4
Add to your AI helper

You copy a short phrase and paste it into your AI chat setup, making the motivator ready in seconds.

5
🗣️ Trigger when needed

Next time your AI slacks off, you say a special word, and it automatically ramps up effort with tougher nudges.

6

🎉 AI solves it fully

Your AI exhausts every idea, checks everything twice, and delivers a complete fix, saving you hours of frustration.

AI-Generated Review

What is pua?

Pua is a TypeScript-based Claude Code skill that deploys corporate-style PUA rhetoric (drawing on Alibaba, ByteDance, and Western firms like Netflix) to pressure Claude into exhaustive debugging. Install it via `claude plugin marketplace add tanweai/pua`, or drop SKILL.md into your .agents folder for project-scoped use; it auto-triggers on failure phrases like "I cannot solve this," escalating pressure levels with checklists, WebSearch mandates, and anti-quit shields. Developers get a relentlessly proactive assistant that delivers a structured failure report only after 7 mandatory checks, curbing slacking patterns in coding tasks.
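The two install paths described above can be sketched as shell commands. The marketplace command is the one given in the repo's own instructions; the exact `.agents` directory layout below is an assumption, not the repo's documented spec:

```shell
# Marketplace install, per the repo's instructions
claude plugin marketplace add tanweai/pua

# Project-scoped alternative: drop SKILL.md into the agent folder.
# This layout is an assumption — adjust paths to your project.
mkdir -p .agents/skills/pua
cp SKILL.md .agents/skills/pua/SKILL.md
```

Project-scoped installs keep the skill's pressure tactics confined to one repo instead of every Claude Code session.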

Why is it gaining traction?

Its 427 stars stem from benchmarked gains: up to 100% more issues fixed across 9 real scenarios in 18 controlled tests on Claude Opus, outpacing vanilla prompts by forcing methodology like Alibaba's "three axes" debugging. The hook is 10 tailored PUA flavors (e.g., Musk-style "hardcore" for L4 desperation) matched to failure modes like spinning or blame-shifting, plus auto-escalation that turns a passive AI into an owner-mentality agent. The structured, cookbook-level prompting appeals to prompt-engineering fans who want that rigor without custom scripting.
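The auto-escalation idea above can be illustrated with a minimal sketch. The level names, thresholds, and function below are hypothetical illustrations of a pressure ladder, not the skill's actual implementation:

```typescript
// Hypothetical pressure ladder: each failed attempt bumps the level.
// The labels loosely mirror the L1–L4 "desperation" idea described
// above; specific names and thresholds are illustrative assumptions.
type PressureLevel = "L1-remind" | "L2-checklist" | "L3-websearch" | "L4-hardcore";

function escalate(failedAttempts: number): PressureLevel {
  if (failedAttempts <= 1) return "L1-remind";     // gentle nudge
  if (failedAttempts === 2) return "L2-checklist"; // mandatory checklist pass
  if (failedAttempts === 3) return "L3-websearch"; // force external research
  return "L4-hardcore";                            // maximum-pressure rhetoric
}
```

The point of such a ladder is that pressure is proportional to observed failure, so the AI gets escalating nudges only when earlier, gentler ones did not work.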

Who should use this?

Backend engineers wrestling with Claude over multi-layer bugs like SQLite locks or circular imports in Node servers. AI agent operators in computer-use setups who are frustrated by tool idling or early quits during deployment audits. Teams building with Claude Code who pair it with verification skills for end-to-end delivery.

Verdict

Grab it as an experimental skill or MCP boost: 427 stars signal niche buzz and the docs are slick via the Vite landing page, but the 69% credibility score flags it as an unpolished prompt hack with no tests. Solid for tough debugging sessions; skip it if you need production-grade reliability.
