Jorwnpay / API-Police

API Police is a command-line tool that helps you verify whether the AI API you are using is actually serving the model the seller claims.

Found Mar 18, 2026 at 14 stars.
AI Summary

A verification tool that tests AI services to confirm whether they are truly running the model they advertise, by checking capabilities, knowledge, identity, and special behaviors.

How It Works

1. 🔍 Discover the Checker

You hear about AI services promising frontier-grade models but suspect they might be serving something cheaper, so you find this verification tool.

2. 📥 Get the Tool Ready

You download the checker and set it up on your computer in a few moments.

3. 🔗 Share Service Details

You enter the base URL of your AI service, your API key, and the exact model name the provider claims to use.

4. ▶️ Launch the Tests

You start the checker, and it sends your AI service a series of targeted questions to test its true abilities.

5. 📊 View the Results Report

A color-coded summary shows how the AI scored on reasoning, knowledge, identity, and special refusal checks.

6. Trust with Confidence

You get a clear verdict (authentic, suspicious, or fake), so you know exactly what you're paying for.
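Conceptually, the scoring step above boils down to a weighted pass/fail tally across the probe categories. A minimal sketch (hypothetical weights and function names, not the tool's actual code):

```python
# Hypothetical probe weights; the real tool's categories and weights may differ.
PROBES = {
    "reasoning": 40,   # multi-step math and logic questions
    "knowledge": 30,   # factual questions a frontier model should know
    "identity": 20,    # does it consistently identify as the claimed model?
    "refusals": 10,    # provider-specific refusal behaviors
}

def verdict(results: dict) -> tuple:
    """Combine pass/fail probe results into a score (0-100) and a verdict."""
    score = sum(weight for name, weight in PROBES.items() if results.get(name))
    if score >= 80:
        return score, "AUTHENTIC"
    if score >= 50:
        return score, "SUSPICIOUS"
    return score, "LIKELY FAKE"

print(verdict({"reasoning": True, "knowledge": True, "refusals": True}))
# → (80, 'AUTHENTIC')
```

Weighting reasoning highest reflects the core idea: a cheap substitute model can memorize an identity string, but multi-step reasoning is hard to fake.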

AI-Generated Review

What is API-Police?

API-Police is a Python CLI tool that verifies whether an OpenAI-compatible AI API endpoint is truly running the model it claims, like gpt-4o or claude-3-5-sonnet. You feed it a base URL, API key, and model name via a simple command—`api-police --base-url https://api.example.com/v1 --api-key sk-... --model gpt-4o`—and it runs probes for reasoning capability, factual knowledge, self-identification, and Anthropic-specific refusals. It spits out a rich terminal report with an overall authenticity score and verdict: AUTHENTIC, SUSPICIOUS, or LIKELY FAKE, helping you spot model substitution scams.
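One of the probes the review mentions, self-identification, could be sketched against any OpenAI-compatible endpoint like this (hypothetical helper names, not API-Police's actual code):

```python
import json

def build_probe(base_url, api_key, model, question):
    """Build an OpenAI-compatible /chat/completions request for one probe."""
    url = base_url.rstrip("/") + "/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": question}],
        "temperature": 0,  # deterministic replies make scoring repeatable
    })
    return url, headers, body

def identity_matches(answer, claimed_model):
    """Crude check: does the reply mention the claimed model's family name?"""
    family = claimed_model.split("-")[0].lower()  # e.g. "gpt" from "gpt-4o"
    return family in answer.lower()
```

A reply like "I'm Llama 3" from an endpoint sold as gpt-4o would fail this check and drag the authenticity score down.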

Why is it gaining traction?

In a world of shady proxies and cheap LLM resellers, it stands out with targeted tests that real frontier models pass but fakes flunk, like multi-step math or magic string refusals. The weighted confidence score and exit code (non-zero on failure) make it scriptable for CI checks in development or enterprise setups. Devs dig the no-setup install via pip and the verbose mode for debugging rate-limit issues during tests.
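Since the review notes a non-zero exit code on failed checks, gating a CI job on the verifier is straightforward. A sketch using the flags quoted above (the real CLI's interface may differ):

```python
import subprocess

def gate(cmd):
    """Run the verifier; True only if it exits 0 (endpoint looks authentic)."""
    return subprocess.run(cmd).returncode == 0

# Flags as quoted in the review; swap in your own endpoint and key.
check = [
    "api-police",
    "--base-url", "https://api.example.com/v1",
    "--api-key", "sk-...",
    "--model", "gpt-4o",
]

# In CI, fail the pipeline when the check fails:
# if not gate(check):
#     raise SystemExit("Endpoint failed authenticity checks; blocking deploy.")
```

Because the score is weighted rather than all-or-nothing, a single flaky probe shouldn't flip the exit status by itself.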

Who should use this?

AI integration engineers auditing third-party providers before wiring them into production workflows. DevOps teams verifying that hosted models behind their API tokens are what they claim. Startups dodging overpromised APIs, especially when budgeting for premium LLMs.

Verdict

Grab it for quick audits of suspicious endpoints. It's solid for Python shops, though 14 stars and a 1.0% credibility score signal early days, and the docs are basic. Test it yourself on legit APIs first, and expand the probes as models evolve.


