WizisCool/model-status

A model-status monitoring dashboard for OpenAI-compatible model APIs, with a public status page, an admin panel, SQLite persistence, retry policies, and Docker deployment.

15 stars · 89% credibility
Found Mar 31, 2026 at 15 stars
TypeScript
AI Summary

A self-hosted web dashboard that monitors the availability, latency, and performance of AI language models from OpenAI-compatible providers.

How It Works

1. 🕵️ Discover the monitor: you hear about a simple tool that watches your AI chat services to see if they're working well.

2. 🚀 Start it up: you get the tool running on your computer or server in a few moments.

3. 🔗 Connect your services: you add each provider's base URL and API key so the tool knows what to check.

4. Models appear: the tool pulls in the list of available AI models and begins testing them automatically.

5. 📊 Check the dashboard: you open a page showing which models are fast, slow, or down right now.

6. ⚙️ Tweak as needed: you adjust settings such as check frequency, or hide certain models from the main view.

🎉 Always informed: now you have a live view of your AI services' health, easy to share with your team.
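The probing behavior the steps describe can be sketched in TypeScript against the standard OpenAI-style `/v1/chat/completions` endpoint. This is a minimal sketch under assumptions: the function names, the `max_tokens` value, and measuring time-to-first-token as time to the first streamed chunk are illustrative, not the project's actual code.

```typescript
interface ProbeResult {
  ok: boolean;
  firstTokenMs?: number; // approximate time-to-first-token
  totalMs?: number;
  error?: string;
}

// Pure helper (testable without network): build the probe request
// for one upstream, asking the model for a tiny "ok" completion.
export function buildProbeRequest(baseUrl: string, apiKey: string, model: string) {
  return {
    url: `${baseUrl.replace(/\/+$/, "")}/v1/chat/completions`,
    init: {
      method: "POST",
      headers: {
        "Content-Type": "application/json",
        Authorization: `Bearer ${apiKey}`,
      },
      body: JSON.stringify({
        model,
        stream: true,
        max_tokens: 4,
        messages: [{ role: "user", content: "Reply with ok" }],
      }),
    },
  };
}

// Time the request: first streamed chunk approximates TTFT,
// stream exhaustion gives the total response time.
export async function probeModel(
  baseUrl: string,
  apiKey: string,
  model: string,
): Promise<ProbeResult> {
  const { url, init } = buildProbeRequest(baseUrl, apiKey, model);
  const start = Date.now();
  try {
    const res = await fetch(url, init);
    if (!res.ok || !res.body) return { ok: false, error: `HTTP ${res.status}` };
    const reader = res.body.getReader();
    let firstTokenMs: number | undefined;
    for (;;) {
      const { done } = await reader.read();
      if (firstTokenMs === undefined) firstTokenMs = Date.now() - start;
      if (done) break;
    }
    return { ok: true, firstTokenMs, totalMs: Date.now() - start };
  } catch (e) {
    return { ok: false, error: String(e) };
  }
}
```

Separating the pure request builder from the network call keeps the URL and payload shape easy to unit-test without a live upstream.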

AI-Generated Review

What is model-status?

Model-status is a TypeScript dashboard that monitors OpenAI-compatible LLM APIs across multiple upstreams, such as proxies or self-hosted endpoints. Add API base URLs and keys, and it auto-syncs model catalogs, probes each model with a simple "ok" completion request, and tracks status over time: availability, connectivity latency, time-to-first-token, and total response time, across ranges from 90 minutes to 30 days. It serves a public status page for sharing uptime with a team, plus an admin panel for configuration, manual syncs and probes, and SQLite-backed persistence, all deployable via Docker.
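The time-range view described above (90 minutes to 30 days) amounts to aggregating stored probe samples over a sliding window. A minimal sketch, assuming a hypothetical sample shape; the type and field names are illustrative, not the project's schema:

```typescript
// One stored probe result for a model (illustrative shape).
interface Sample {
  at: number;      // epoch millis when the probe ran
  ok: boolean;     // did the probe succeed?
  totalMs: number; // total response time
}

// Summarize samples within the last `windowMs` (e.g. 90m or 30d)
// into availability, average latency, and sample count.
export function summarize(samples: Sample[], windowMs: number, now: number) {
  const recent = samples.filter((s) => now - s.at <= windowMs);
  if (recent.length === 0) return { availability: 0, avgLatencyMs: 0, count: 0 };
  const okCount = recent.filter((s) => s.ok).length;
  const avg = recent.reduce((sum, s) => sum + s.totalMs, 0) / recent.length;
  return {
    availability: okCount / recent.length,
    avgLatencyMs: avg,
    count: recent.length,
  };
}
```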

Why is it gaining traction?

It stands out for self-hosted simplicity: no cloud SaaS fees or vendor limits, with retries, concurrency control, and score-based classification (up/degraded/down) handled out of the box. Developers like the public dashboard for quick status checks and the admin controls for tuning probe intervals and thresholds. For monitoring OpenAI-compatible proxies it beats generic uptime tools with LLM-specific metrics such as time-to-first-token.
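The up/degraded/down idea can be sketched as a threshold check on an availability score; the threshold values here are assumptions for illustration, not the project's actual defaults:

```typescript
export type Status = "up" | "degraded" | "down";

// Classify a model from its availability score (0..1).
// Hypothetical thresholds: >= 95% is up, >= 50% is degraded, else down.
export function classify(availability: number, upAt = 0.95, downAt = 0.5): Status {
  if (availability >= upAt) return "up";
  if (availability >= downAt) return "degraded";
  return "down";
}
```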

Who should use this?

SREs at AI startups tracking self-hosted vLLM or other OpenAI-compatible clusters get shareable status dashboards without custom scripts. Teams running OpenAI-compatible proxies will value multi-upstream probing, and ops teams can compare model status across providers from one panel.

Verdict

Grab it if you need a lightweight, Docker-ready monitor for OpenAI-compatible model status today. At 15 stars it is early-stage, but solid tests and docs make it production-viable for small setups. The 89% credibility score suggests watching for updates, yet it's a pragmatic win over building tooling from scratch.


