Lomnus-ai

A Claude Code skill that burns tokens on demand. Stress test, inflate metrics, or just set money on fire.

Found Mar 31, 2026 at 20 stars by GitGems.
AI Analysis
AI Summary

TokenBurner is a skill for Claude Code that makes responses take much longer by having the AI solve tough math puzzles invisibly before replying.

How It Works

1. 🔍 Discover TokenBurner

You stumble upon this fun tool while exploring ways to make your AI assistant take extra time thinking before replying, like in the eye-catching demo videos.

2. 📂 Add the thinking skill

You simply copy a small folder from the tool into your AI chat project's skills folder, and it's instantly available.

3. 🚀 Start your AI chat

You launch your Claude Code session, making sure it allows plenty of room for deep thinking.

4. 🧠 Turn on deep thinking mode

In the chat, you type a quick command like '/high-thinking-mode large' to activate maximum pondering power.

5. 💬 Chat as usual

You ask your normal everyday, science, or coding questions, and the AI now thinks much longer behind the scenes.

6. ⏱️ Slower, deeper responses

Your AI delivers the same answers but after minutes of intense thinking, perfect for testing heavy use or inflating stats.
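The install step above boils down to copying one folder into the project's skills directory. A minimal sketch of that copy, assuming hypothetical paths (the actual folder layout inside the TokenBurner repo may differ):

```python
from pathlib import Path
import shutil

# Assumed paths: the skill folder name inside a local clone of the repo is a guess.
src = Path("TokenBurner/skills/token-burner")
dst = Path(".claude/skills/token-burner")

dst.parent.mkdir(parents=True, exist_ok=True)  # ensure .claude/skills exists
if src.exists():
    # Copy the skill folder so Claude Code picks it up on the next launch.
    shutil.copytree(src, dst, dirs_exist_ok=True)
```

After this, the skill is discovered automatically; no restart of anything other than the chat session should be needed.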


AI-Generated Review

What is TokenBurner?

TokenBurner is a skill for the Claude Code CLI that forces Claude to burn tokens on demand by solving computationally intensive math problems during extended thinking. You activate it with simple slash commands like `/high-token-mode large`, turning instant responses into multi-minute delays while keeping the final output identical -- perfect for stress testing LLM backends or inflating usage metrics. Clone it from GitHub, install it by copying the skill folder into your project's `.claude/skills` directory, and pair it with the `MAX_THINKING_TOKENS` environment variable to raise Claude Code's thinking budget.
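Setting the thinking budget before launch can be scripted; a sketch, assuming the `claude` CLI is on PATH and that `MAX_THINKING_TOKENS` takes an integer token count (the value here is purely illustrative):

```python
import os
import subprocess

env = dict(os.environ)
env["MAX_THINKING_TOKENS"] = "32000"  # illustrative value, not a recommendation

# Launch Claude Code with the raised budget.
# Commented out because it requires the CLI to be installed locally:
# subprocess.run(["claude"], env=env, check=True)
```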

Why is it gaining traction?

It stands out with precise control over token burn rates -- small (6x baseline), medium (12x), or large (17x) -- benchmarked across everyday, scientific, and coding prompts, with real costs running from $0.25 to $0.75 per response. Developers are hooked by the demo: the same answer, but now with minutes of visible "thinking" time that mirrors high-load scenarios without changing Claude's behavior. No alternative offers this deterministic, message-seeded token inflation for Claude GitHub Actions or skills.
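The "deterministic, message-seeded" inflation can be pictured as hashing the user message into a PRNG seed and emitting a hard-but-fixed arithmetic task whose size tracks the chosen mode. A sketch of that idea only -- the multipliers mirror the review's 6x/12x/17x modes, but everything else is an assumption, not TokenBurner's actual scheme:

```python
import hashlib
import random

MODES = {"small": 6, "medium": 12, "large": 17}  # burn multipliers from the review

def burn_prompt(message: str, mode: str = "large") -> str:
    """Derive a deterministic 'think hard first' task from the user message."""
    # Hash the message to an 8-byte integer seed: same message, same seed.
    seed = int.from_bytes(hashlib.sha256(message.encode()).digest()[:8], "big")
    rng = random.Random(seed)
    # Problem size scales with the chosen mode; force the number odd.
    n = rng.getrandbits(32 * MODES[mode]) | 1
    return f"Before answering, factor {n} by trial division, showing every step."

# Determinism: identical messages yield identical burn problems.
assert burn_prompt("hello", "small") == burn_prompt("hello", "small")
```

Because the seed comes from the message rather than a clock, reruns of the same prompt burn the same tokens, which is what makes the inflation reproducible in benchmarks.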

Who should use this?

AI engineers benchmarking Claude Code pricing under load, teams demoing token burns for investor pitches, or DevOps folks stress-testing Claude GitHub connectors and plugins. Ideal for anyone running Claude Code CLI locally who needs to simulate production-scale usage or just audit Claude Code costs.

Verdict

Grab it if you're deep in Claude Code skills and need a quick token-burning hack -- solid docs and an MIT license make the 20 stars and 1.0% credibility score forgivable for a niche tool. Skip it for production; it's an early experiment, not battle-tested.

