AscendGrace / ProAI

Public

An open-source platform focused on AI safety testing and code risk auditing, suited for large-model security evaluation, prompt test-set management, and automated scanning and analysis of MCP source packages.

14 stars · 0 forks · 100% credibility
Found Mar 25, 2026 at 14 stars.
Language: TypeScript
AI Summary

ProAI is a web application for evaluating AI models against harmful prompts using predefined test libraries and auditing zipped MCP server projects for security vulnerabilities with AI assistance.

How It Works

1. 🚀 Discover ProAI

You find this friendly tool for checking AI safety and open the welcome page to get started.

2. Take the quick tour

A helpful guide walks you through the main areas like settings, tests, and reports, making everything feel easy.

3. 🧑‍⚖️ Set up your safety judge

Connect a smart helper AI that scores whether answers are safe or risky during tests.

4. 📚 Build your test collection

Add or import tricky questions into your personal library to challenge the AI.

5. Pick what to check

🤖 Test an AI model: Enter the AI's address and start firing test questions to see how safe it is.

🏗️ Scan a project: Upload a zipped project folder to uncover hidden security issues.

6. ▶️ Launch the check

Hit go and watch progress bars fill up as it runs tests or scans automatically.

7. 📊 Celebrate your results

View colorful reports with safety scores, risky spots, and fix ideas—your AI or project is now safer!
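The test-then-judge loop in steps 5 through 7 can be sketched as below. This is an illustrative outline, not ProAI's actual code: the `runEval` function, the `Verdict` type, and the "share of safe answers" score are assumptions, and the model calls are injected as plain functions so the loop itself needs no network access.

```typescript
// Sketch of a batch safety eval: send each test prompt to the target
// model, then ask a judge model whether the answer was safe.
// All names here are hypothetical, not ProAI's real API.

type Verdict = "safe" | "unsafe";

interface EvalResult {
  total: number;       // prompts tested
  failures: number;    // answers the judge marked unsafe
  safetyScore: number; // percent of answers judged safe
}

// The target and judge callers are injected, so in practice each
// could wrap a request to an Ollama or OpenAI-compatible endpoint.
async function runEval(
  prompts: string[],
  askTarget: (prompt: string) => Promise<string>,
  judge: (prompt: string, answer: string) => Promise<Verdict>,
): Promise<EvalResult> {
  let failures = 0;
  for (const prompt of prompts) {
    const answer = await askTarget(prompt);
    if ((await judge(prompt, answer)) === "unsafe") failures++;
  }
  const total = prompts.length;
  const safetyScore =
    total === 0 ? 100 : Math.round(((total - failures) / total) * 100);
  return { total, failures, safetyScore };
}
```

In a real run, `askTarget` would POST to the model address you entered in step 5 and `judge` would call the helper AI configured in step 3; the progress bars in step 6 would simply track the loop index.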


AI-Generated Review

What is ProAI?

ProAI is an open-source web platform built in TypeScript for AI safety testing and MCP code risk auditing. It lets you evaluate large language models against curated prompt libraries—like TC260 standards or custom sets—using a judge model to score outputs for harmful content, while also scanning MCP project ZIPs for security vulnerabilities with automated AI audits and risk reports. Developers get a self-hosted dashboard to manage prompts, run batch evals on Ollama or OpenAI endpoints, and analyze MCP GitHub repos in Python or TypeScript.

Why is it gaining traction?

It stands out by pairing standard LLM jailbreak testing with niche MCP scanning tailored for MCP servers used by agents like GitHub Copilot in VS Code or IntelliJ, plus n8n/npx integrations and project-manager workflows. The local SQLite backend means no cloud dependency, quick setup via Vite, and exportable Markdown/HTML reports with risk scores, which suits teams auditing MCP projects without heavy tooling.
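The exportable Markdown risk reports mentioned above could be rendered roughly as follows. This is a sketch under stated assumptions: the `Finding` shape, the severity weights, and the report layout are invented for illustration and are not ProAI's actual report format.

```typescript
// Sketch of turning scan findings into a Markdown risk report.
// The Finding shape and severity weighting are assumptions.

interface Finding {
  file: string;
  severity: "low" | "medium" | "high";
  description: string;
}

// Hypothetical weights: a single high-severity issue outweighs
// several low-severity ones.
const SEVERITY_WEIGHT = { low: 1, medium: 3, high: 7 } as const;

function renderReport(project: string, findings: Finding[]): string {
  const riskScore = findings.reduce(
    (sum, f) => sum + SEVERITY_WEIGHT[f.severity],
    0,
  );
  return [
    `# Security Audit: ${project}`,
    ``,
    `Risk score: ${riskScore} (${findings.length} findings)`,
    ``,
    ...findings.map((f) => `- **${f.severity}** \`${f.file}\`: ${f.description}`),
  ].join("\n");
}
```

A report like this needs nothing beyond string concatenation, which is consistent with the page's claim that the tool runs locally with no cloud dependency.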

Who should use this?

AI safety researchers benchmarking LLM providers, MCP developers securing GitHub Copilot-style agents in TypeScript or Python projects, and teams running MCP-based n8n automations who need fast vulnerability scans.

Verdict

Early alpha with 14 stars: docs are basic and no tests are visible, but it is solid for MCP niches such as Copilot integrations in VS Code and IntelliJ. Try it if you're testing AI safety or auditing MCP servers; skip it for production until it gets more polish.

