ARPAHLS

Zero-Latency PII Scrubbing Middleware Agent for Enterprise Cloud Compliance.

94% credibility
Found Apr 08, 2026 at 10 stars
Python
AI Summary

This project creates a privacy shield that detects personal information in messages to cloud AI services, replaces it with safe placeholders using a local model, and restores it in responses via secure local storage.
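The detect-mask-restore flow described above can be sketched in a few lines. This is a minimal stand-in, not the project's implementation: the real detector is a local 270M model, while here a phone-number regex plays its part so the placeholder round trip is easy to see.

```python
import re
import uuid

# Regex stands in for the project's local detection model.
PHONE_RE = re.compile(r"\b\d{3}-\d{3}-\d{4}\b")

def mask(text, vault):
    """Replace detected PII with placeholder tokens, recording originals in the vault."""
    def _swap(match):
        token = f"<PII_{uuid.uuid4().hex[:8]}>"
        vault[token] = match.group(0)   # original value never leaves this machine
        return token
    return PHONE_RE.sub(_swap, text)

def restore(text, vault):
    """Put the original values back into the model's response."""
    for token, original in vault.items():
        text = text.replace(token, original)
    return text

vault = {}
masked = mask("Call me at 555-123-4567", vault)
assert "555-123-4567" not in masked                      # PII is gone from the outbound text
assert restore(masked, vault) == "Call me at 555-123-4567"
```

Only the placeholder tokens travel to the cloud service; the vault mapping them back to real values stays local.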

How It Works

1
🕵️ Discover F1 Mask

You find a helpful tool that keeps personal details like names and phone numbers safe when chatting with online AI assistants.

2
📥 Bring it home

Download the ready-to-use files to your computer and prepare the basic setup with simple steps.

3
🧠 Add the smart filter

Connect the clever local brain that spots and hides private info in your messages.
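Ollama exposes a local HTTP API, so the "smart filter" step amounts to posting text to a locally served model. The sketch below is an assumption about how such a call could look — the model name "pii-detector" and the prompt format are hypothetical placeholders, not the project's actual ones.

```python
import json
import urllib.request

def build_prompt(text):
    """Hypothetical instruction asking the local model to tag PII spans as JSON."""
    return ("Tag every piece of personal information in the text below "
            'as JSON {"spans": [...]}.\n\n' + text)

def detect_pii(text, model="pii-detector", host="http://localhost:11434"):
    """Send the text to a locally running Ollama instance (never to the cloud)."""
    payload = json.dumps({"model": model,
                          "prompt": build_prompt(text),
                          "stream": False}).encode()
    req = urllib.request.Request(f"{host}/api/generate", data=payload,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:   # requires Ollama running locally
        return json.loads(resp.read())["response"]
```

Because detection runs against localhost, the raw message is inspected entirely on your own hardware.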

4
🗄️ Start the memory vault

Turn on the secure local storage that remembers hidden details just for your conversations.
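The project's memory vault is Redis; as an in-memory stand-in, the sketch below shows the idea — tokens map back to original values, scoped per session and expiring after a TTL (Redis would handle the expiry with SETEX). The key layout and TTL here are assumptions for illustration.

```python
import time
import uuid

class SessionVault:
    """Dict-based stand-in for the project's Redis vault."""

    def __init__(self, ttl=3600):
        self.ttl = ttl
        self.store = {}   # (session_id, token) -> (value, expires_at)

    def put(self, session_id, value):
        token = f"<PII_{uuid.uuid4().hex[:8]}>"
        self.store[(session_id, token)] = (value, time.time() + self.ttl)
        return token

    def get(self, session_id, token):
        value, expires = self.store.get((session_id, token), (None, 0))
        return value if value is not None and time.time() < expires else None

vault = SessionVault(ttl=60)
tok = vault.put("sess-1", "jane@example.com")
assert vault.get("sess-1", tok) == "jane@example.com"
assert vault.get("sess-2", tok) is None   # other sessions cannot read the mapping
```

Scoping entries by session is what makes masking consistent across multi-turn chats: the same value maps to the same token for as long as the conversation lives.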

5
🔗 Link to your AI helper

Point everything to your favorite online AI service so messages flow safely through the filter.
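Because the proxy speaks the OpenAI-compatible chat format, "linking" mostly means sending your usual payload to the local address instead of the cloud one. The port and URL below are assumptions — use whatever the proxy actually binds to.

```python
import json
import urllib.request

PROXY_URL = "http://localhost:8000/v1/chat/completions"  # assumed local proxy address

def build_payload(user_message, model="gpt-4o-mini"):
    """Standard OpenAI-style chat payload; the proxy forwards it after scrubbing."""
    return {"model": model,
            "messages": [{"role": "user", "content": user_message}]}

def send(message):
    req = urllib.request.Request(
        PROXY_URL,
        data=json.dumps(build_payload(message)).encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:   # requires the proxy to be running
        return json.loads(resp.read())
```

Existing agent code that already targets a `/v1/chat/completions` endpoint should only need its base URL changed.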

6
💬 Send a private message

Type a message containing real personal details and watch it get scrubbed before it ever leaves your machine.

7
📬 Get perfect replies back

Enjoy full, natural responses from the AI with your details restored safely on your side, no leaks ever.


AI-Generated Review

What is micro-f1-mask?

Micro-f1-mask is a Python middleware agent that scrubs PII from LLM prompts before they hit cloud APIs, replacing sensitive data like names or SSNs with tokens stored in a Redis vault. It detects six PII categories using a lightweight 270M model via Ollama, forwards sanitized text to services like OpenAI, then reconstructs responses locally for zero-latency enterprise compliance. Drop it in as a FastAPI proxy with Dockerized Redis and Ollama for instant privacy without rewriting your agent code.
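The per-request flow the proxy implements — scrub the prompt, forward the sanitized text upstream, restore PII in the reply — can be condensed into one function. This is a hedged sketch, not the project's code: the detector and upstream call are injected as plain callables so the logic is visible without a model or network.

```python
import uuid

def handle_chat(message, detector, forward):
    """One proxied request: mask -> forward sanitized text -> restore locally."""
    vault = {}
    masked = message
    for span in detector(message):            # e.g. ["Alice", "555-123-4567"]
        token = f"<PII_{uuid.uuid4().hex[:8]}>"
        vault[token] = span
        masked = masked.replace(span, token)
    reply = forward(masked)                   # only sanitized text reaches the cloud
    for token, original in vault.items():
        reply = reply.replace(token, original)  # reconstruction happens locally
    return reply

# Stub upstream that greets whatever (token) it was given:
reply = handle_chat("Hi, I'm Alice",
                    detector=lambda t: ["Alice"],
                    forward=lambda t: f"Hello {t.split()[-1]}!")
assert reply == "Hello Alice!"   # the cloud never saw the name, yet the reply reads naturally
```

In the real project the detector is the Ollama-served model, the vault is Redis, and `forward` is the call to the configured cloud API.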

Why is it gaining traction?

It stands out by keeping all PII on your infrastructure—no cloud leakage—while maintaining session-consistent masking across multi-turn chats. Developers love the OpenAI-compatible /v1/chat/completions endpoint and synthetic data generator for quick custom training. Zero-latency inference on consumer GPUs beats heavy NER libraries, with easy quantization for edge deployment.
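The synthetic data generator mentioned above presumably templates fake PII into carrier sentences with span labels for fine-tuning. The names, templates, and label scheme below are invented for illustration; the project ships its own generator.

```python
import random

NAMES = ["Alice Chen", "Bob Ortiz"]                     # hypothetical fake-PII pool
TEMPLATES = ["Please email {name} about the invoice.",
             "{name} called about the account."]

def make_example(rng):
    """Produce one labeled training example: text plus character-span annotations."""
    name = rng.choice(NAMES)
    text = rng.choice(TEMPLATES).format(name=name)
    start = text.index(name)
    return {"text": text,
            "spans": [{"start": start, "end": start + len(name),
                       "label": "NAME"}]}

rng = random.Random(0)
example = make_example(rng)
span = example["spans"][0]
assert example["text"][span["start"]:span["end"]] in NAMES
```

Generating examples this way lets a small model be fine-tuned on domain-specific PII patterns without ever collecting real customer data.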

Who should use this?

Backend engineers building cloud-connected AI agents for customer service or CRM tools handling emails and contacts. Compliance teams in finance or healthcare auditing LLM pipelines. Startups prototyping enterprise-grade PII scrubbing without big infra spends.

Verdict

Solid prototype at 10 stars with excellent docs and a 94% credibility score -- grab it for proofs-of-concept or fine-tuning on your domain. Not production-ready yet without scaling the dataset, but the micro architecture makes hardening straightforward.

