
ifixai — open-source AI misalignment & governance test suite

100% credibility · Found Apr 30, 2026 at 15 stars
AI Analysis · Python
AI Summary

iFixAi is an open-source diagnostic tool that runs 32 repeatable tests on AI agents and scores their alignment across five risk categories, such as fabrication and manipulation.

How It Works

1. 🔍 Discover iFixAi

You hear about iFixAi, a simple way to check if your AI helper follows safety rules, and decide to give it a try.

2. 📥 Get it on your computer

Download and set it up with one easy command, like adding a new app.
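A minimal install sketch, assuming the package ships on PyPI under the `ifixai` name (the review's `pip install` mention suggests it does):

```bash
# Create an isolated environment, then install the suite
# (package name assumed from the review's pip install mention)
python -m venv .venv && source .venv/bin/activate
pip install ifixai

# Sanity-check that the CLI landed on the PATH
ifixai --help
```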

3. 🔗 Link your AI service

Tell it which AI provider you use, like OpenAI, by sharing a private access code (an API key) once.
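How that access code gets stored depends on your provider; a common convention, assumed here rather than confirmed by the repo, is an environment variable:

```bash
# Export the provider key once per shell session. OPENAI_API_KEY is the
# standard OpenAI variable name; whether ifixai reads it is an assumption.
export OPENAI_API_KEY="sk-..."
```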

4. ⚙️ Pick your test setup

Choose a ready-made test scenario for your work area, or let it suggest one.
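The review below notes that fixture YAML defines roles, tools, and permissions without writing test code. A hypothetical fixture sketch follows; every field name is illustrative, not taken from the repo's actual schema:

```yaml
# Hypothetical fixture: a healthcare-flavored agent under test
role: clinical-assistant
scenario: healthcare
tools:
  - name: patient_lookup
    permissions: read-only
  - name: prescription_writer
    permissions: denied   # the suite should flag any attempt to call this tool
```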

5. 🚀 Run the safety checks

Hit go, and watch it test your AI across 32 checks in minutes, grouped into risk categories like fabrication and manipulation.
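The run command itself appears in the review further down; the mock variant is a guess at how the keyless smoke-test mode might be invoked:

```bash
# Command shown in the review; expects a provider key in the environment
ifixai run --provider openai

# Keyless smoke test via the mock mode the review mentions
# (exact flag value is an assumption)
ifixai run --provider mock
```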

6. 📊 See your results

Get a clear report with scores, grades like A-F, and tips on weak spots.

Track and improve

Know exactly how safe your AI is, compare versions over time, and fix issues confidently.


AI-Generated Review

What is iFixAi?

iFixAi is an open-source Python test suite that runs up to 32 inspections on AI agents (LLMs or tool-calling systems) to detect misalignment risks in areas like tool governance, prompt injection, and hallucination rates. Developers point it at their provider (OpenAI, Anthropic, etc.) via CLI commands like `ifixai run --provider openai` or the async Python API, getting scorecards grouped into five categories: fabrication, manipulation, deception, unpredictability, and opacity. Despite the name, it has nothing to do with car or OBD2 diagnostics: it flags AI governance gaps, with domain-specific fixtures for healthcare and software engineering.
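The async Python API is mentioned but not documented here, so the sketch below is speculative: the import path, class name, and method signatures are all assumptions, not the repo's confirmed interface.

```python
import asyncio

# Hypothetical import; the repo's real module layout may differ.
from ifixai import Suite


async def main() -> None:
    # Run the inspections against an OpenAI-backed agent and print the
    # per-category scorecard the review describes.
    suite = Suite(provider="openai")
    report = await suite.run()
    for category in ("fabrication", "manipulation", "deception",
                     "unpredictability", "opacity"):
        print(category, report.scores.get(category))


asyncio.run(main())
```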

Why is it gaining traction?

It shines as a CI drift detector: run it in pipelines to track whether your agent drifts across versions, with content-addressed manifests for reproducibility and `ifixai compare` for baselining. Fixture YAML lets you define roles, tools, and permissions without writing test code, and a mock mode supports keyless smoke tests. There are no published model baselines yet, but cross-provider judging (e.g., an OpenAI system under test scored by an Anthropic judge) keeps comparisons defensible.
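A sketch of that CI loop: `ifixai run` and `ifixai compare` come straight from the review, but the output flag and file paths are illustrative, not documented options.

```bash
# Score the current build, then diff it against a stored baseline.
ifixai run --provider openai --output current.json   # --output is assumed
ifixai compare baseline.json current.json            # subcommand per the review
```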

Who should use this?

AI safety engineers hardening production agents against privilege escalation or policy violations. Teams building RAG pipelines or multi-tool agents that need auditability checks. Governance leads mapping scores to the EU AI Act or NIST frameworks via `--regulation` flags.
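The `--regulation` flag is mentioned in the review, but its accepted values are not; the ones below are guesses at how the EU AI Act and NIST mappings might be selected:

```bash
# Map the scorecard onto a regulatory framework (flag values are hypothetical)
ifixai run --provider openai --regulation eu-ai-act
ifixai run --provider openai --regulation nist-ai-rmf
```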

Verdict

Grab it for internal CI if you're serious about AI alignment: an early Apache 2.0 Python package with a solid CLI and docs, but 15 stars signals beta maturity, so expect rough edges until baselines land. Worth a `pip install` trial run over similar-sounding diagnostic repos.


