fastino-ai

fastino-ai / GLiGuard

Public

Fastino's LLM guardrail

14
2
100% credibility
Found May 14, 2026 at 15 stars -- GitGems finds repos before they trend. Get early access to the next one.
Sign Up Free
AI Analysis
AI Summary

GLiGuard is a lightweight AI model that evaluates user prompts and AI responses for safety, toxicity, refusals, and jailbreak attempts using a flexible schema in a single efficient pass.

How It Works

1
🔍 Discover GLiGuard

You hear about a handy safety tool that checks messages to keep AI chats from going off the rails with harmful or tricky content.

2
📥 Get the Safety Checker

You easily download and set up this compact guard on your computer to start protecting your AI conversations.

3
🛡️ Activate Your Guard

You wake up the safety checker, and it's ready to scan prompts or replies in one quick go, feeling super efficient.

4
💬 Enter Text to Check

You paste in a user question or AI answer, tell it what kinds of issues to watch for like toxicity or sneaky tricks, and hit go.

5
📊 See Clear Results

Right away, you get simple labels like 'safe' or 'unsafe', plus details on harms or jailbreak attempts, making moderation a breeze.

6
🔄 Handle More at Once

For busier days, you feed it batches of messages and get results for everything together, saving tons of time.

🎉 Safer AI Chats

Now your AI interactions are guarded against risks, letting you chat confidently with peace of mind.

Sign up to see the full architecture

5 more

Sign Up Free

Star Growth

See how this repo grew from 15 to 14 stars Sign Up Free
Repurpose This Repo

Repurpose is a Pro feature

Generate ready-to-use prompts for X threads, LinkedIn posts, blog posts, YouTube scripts, and more -- with full repo context baked in.

Unlock Repurpose
AI-Generated Review

What is GLiGuard?

GLiGuard, from Fastino AI on GitHub, is a compact 0.3B-parameter guardrail for LLMs that scans prompts and responses for safety issues like toxicity, jailbreaks, and refusals—all in a single fast encoder pass via the GLiNER2 Python library. You pip install gliner2, load the fastino/gliguard-LLMGuardrails-300M checkpoint, and classify text with schema-driven calls like classify_text("prompt", {"prompt_safety": ["safe", "unsafe"]}). It solves the bloat of heavy decoder-based guards by delivering multi-task verdicts without autoregressive generation.

Why is it gaining traction?

Developers dig its 23x-90x smaller footprint and 16x throughput gains over 7B+ rivals like LlamaGuard, plus 87%+ F1 on benchmarks for prompt/response harm. The schema lets you mix tasks (safety + toxicity + jailbreak) in one API call with adjustable thresholds, batching for production, and easy prefixing like "Response: {text}"—no prompt engineering hassles. Fastino's GLiGuard stands out for local inference speed on consumer hardware.

Who should use this?

LLM app builders integrating safety into chatbots or APIs, like backend devs filtering user prompts before generation or response-side checks in RAG pipelines. Prod teams at startups needing quick jailbreak detection without cloud costs, or AI safety researchers prototyping guardrails for custom fine-tunes.

Verdict

Promising for lightweight LLM guardrails, with solid docs and examples, but at 14 stars and 1.0% credibility, it's early-stage—test thoroughly before prod. Worth a spin if you need fast, schema-flexible moderation over bloated alternatives.

(178 words)

Sign up to read the full AI review Sign Up Free

Similar repos coming soon.