MUST-panxiao/claude-hud-context-fix

Fix the 0% context display issue in Claude-HUD when using non-standard models (e.g., GLM, DeepSeek via Anthropic-compatible APIs).

18 stars · 0 forks · 100% credibility
Found Apr 29, 2026 at 18 stars.
AI Summary

This tool adds an accurate, color-coded progress bar to the Claude Code status bar to show real conversation context usage when using third-party AI models.

How It Works

1
🤔 Spot the stuck meter

While using your AI coding assistant with a third-party model, you notice the conversation context usage always reads zero percent.

2
🔍 Discover the helpful fix

You look online and find this tool, which calculates and shows real context usage in the status bar.

3
Pick your style
Enhance current display

Adds an accurate context readout to your existing claude-hud status display.

🛡️
Simple standalone

Shows only the context bar, with no other add-ons required.

4
📥 One-click setup

Download the tool and run the installer, which backs up your settings before making any changes.

5
🔄 Restart and watch magic

Close and reopen your AI assistant, and instantly see a colorful progress bar in the status area.

6
📊 Track context easily

As you chat, the bar fills with green, yellow, or red blocks, always showing how much context remains.

😊 Chat smarter, longer

Now you manage conversations confidently, avoiding surprises when context runs low, and you can remove the tool cleanly anytime.
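The color-coded bar the steps above describe (green when usage is low, yellow as it climbs, red near the limit) can be sketched as a small shell function. This is an illustrative sketch, not the tool's actual code; the bar width, glyphs, and function name are assumptions.

```shell
#!/bin/sh
# Sketch only: render a context-usage bar, colored green under 50%,
# yellow up to 75%, and red beyond, per the thresholds this tool uses.
render_bar() {
  pct=$1
  width=10
  filled=$(( pct * width / 100 ))
  [ "$filled" -gt "$width" ] && filled=$width
  if [ "$pct" -lt 50 ]; then
    color='\033[32m'          # green: plenty of room
  elif [ "$pct" -le 75 ]; then
    color='\033[33m'          # yellow: getting full
  else
    color='\033[31m'          # red: near the limit
  fi
  bar=''
  i=0
  while [ "$i" -lt "$width" ]; do
    if [ "$i" -lt "$filled" ]; then bar="${bar}#"; else bar="${bar}-"; fi
    i=$(( i + 1 ))
  done
  printf '%b%s\033[0m %d%%\n' "$color" "$bar" "$pct"
}

render_bar 42
```

A status-line script would call something like this once per refresh, feeding it the estimated usage percentage.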


AI-Generated Review

What is claude-hud-context-fix?

This Shell-based fix tackles the Claude-HUD status bar glitch where context usage sticks at 0% for non-standard models like GLM or DeepSeek routed through Anthropic-compatible APIs. It estimates real token burn from conversation transcripts and swaps in a color-coded progress bar (green under 50%, yellow to 75%, red beyond) that tracks against your model's context window (default 200k tokens, customizable via environment variables). Run the one-click install script in HUD wrapper mode (enhances an existing claude-hud) or standalone mode, then restart the Claude Code CLI on macOS/Linux.
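The estimation step described above can be sketched under stated assumptions: the bytes-divided-by-four token heuristic and the `CONTEXT_WINDOW` variable name below are illustrative, not necessarily what this tool actually uses.

```shell
#!/bin/sh
# Sketch only: estimate context usage from a conversation transcript.
# The bytes/4 heuristic and the CONTEXT_WINDOW variable name are
# assumptions for illustration, not this tool's real implementation.
estimate_context_pct() {
  transcript=$1
  window=${CONTEXT_WINDOW:-200000}   # default 200k-token window
  bytes=$(wc -c < "$transcript")
  tokens=$(( bytes / 4 ))            # rough chars-per-token approximation
  pct=$(( tokens * 100 / window ))
  [ "$pct" -gt 100 ] && pct=100
  echo "$pct"
}
```

Called from a status-line script, the returned percentage would then drive the color-coded bar.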

Why is it gaining traction?

Unlike claude-hud's broken context readouts on proxy APIs, this delivers accurate percentages without digging into logs, plus one-click install/uninstall that backs up and restores your settings.json. Dual modes mean no Node.js hassle in standalone, and it auto-detects transcripts for seamless status line integration. Devs grab it to work around the GitHub-reported issue that HUD's context display breaks on these setups, keeping workflows smooth.
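The backup-and-restore behavior described here is a standard pattern. A minimal sketch of the idea, where the function names and the example path are assumptions (Claude Code keeps its settings in `~/.claude/settings.json`, but this tool's exact handling may differ):

```shell
#!/bin/sh
# Sketch of the backup-before-modify pattern the installer is described
# as using; function names and the .bak suffix are illustrative.
backup_settings() {
  settings=$1                          # e.g. "$HOME/.claude/settings.json"
  if [ -f "$settings" ]; then
    cp "$settings" "${settings}.bak"   # uninstall can restore this copy
  fi
}

restore_settings() {
  settings=$1
  if [ -f "${settings}.bak" ]; then
    cp "${settings}.bak" "$settings"   # put the original settings back
  fi
}
```

Making the backup before writing anything is what lets the advertised "clean uninstall" return the user to their pre-install state.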

Who should use this?

Claude Code users proxying models via Anthropic-compatible APIs, especially AI tinkerers hitting the 0% context bug with GLM-5.1 or DeepSeek-V3. Backend devs managing long sessions need the progress bar to avoid token overflows in CLI-heavy environments. Skip it if you're on native Claude Sonnet/Opus; HUD works fine there.

Verdict

Grab it if you're in the proxy API crowd: at 18 stars it's early days, but a 100% credibility score, thorough bilingual docs, MIT license, and clean uninstall make it low-risk. Test standalone mode first for proof of maturity.


