qelos-io / testai

Public

The testing framework for skills, MCPs, commands, subagents, and LLM models! Docs at https://testingai.ai

14 stars · 0 forks · 100% credibility
Found May 14, 2026 at 16 stars.
AI Analysis

Language: TypeScript

AI Summary

A TypeScript library for testing AI agents, skills, and developer projects using Claude's SDK, with mock servers for services like Datadog, Slack, Figma, and Linear, plus preset templates for common stacks.

How It Works

1. 📚 Discover TestAI

You hear about a helpful tool that makes testing AI assistants super easy for everyday developers.

2. 🛠️ Add the tools

You bring the testing kit into your own project with a quick setup.

3. Pick your project

📁 Use your folder: point it at the files you already have.

New sample: get a ready-made example, like a simple website or backend.

4. 🔌 Connect pretend services

Hook up fake versions of real tools like chat apps or design software so your AI can practice safely.

5. 🚀 Run the AI test

Give your AI a task on the project and watch it think, edit files, and respond just like in real use.

6. 👀 Check the results

Review what files changed and what your AI said, and confirm everything worked as expected (a code sketch of the whole flow follows this walkthrough).

AI tested successfully

You now know your assistant behaves as expected and is ready for the real world.
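
To make the walkthrough concrete, here is a minimal end-to-end sketch of steps 2 through 6 as a single test. It is illustrative only: the package name testai and every identifier in it (createWorkspace, mockSlackServer, query, changedFiles, toolCalls, cleanup) are assumptions about the API, not confirmed exports.

```ts
// Hypothetical end-to-end test; all identifiers are assumed, not documented.
import { createWorkspace, mockSlackServer } from "testai";

async function run() {
  // Step 3: start from a preset project instead of a local folder.
  const workspace = await createWorkspace({
    preset: "react",                 // assumed preset name
    mcpServers: [mockSlackServer()], // step 4: a pretend Slack, no real API hit
  });

  // Step 5: give the AI a task and let it work in the isolated workspace.
  const result = await workspace.query(
    "Add a /health endpoint and post a summary to #deploys"
  );

  // Step 6: review what changed and what the AI said.
  console.log(result.changedFiles); // files the agent edited
  console.log(result.toolCalls);    // which mock tools it called
  console.log(result.response);     // its final answer

  await workspace.cleanup();        // assumed teardown helper
}

run().catch(console.error);
```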

AI-Generated Review

What is testai?

Testai is a TypeScript testing framework for AI agents, skills, MCP tools, commands, and LLM models that lets you simulate Claude Code sessions in isolated workspaces. Spin up a testing environment from a local project or from presets for React, Nuxt, or FastAPI, wire in mock MCP servers for Datadog, Slack, Figma, or Linear, then run prompts and capture file changes, tool calls, agent responses, and traces. It handles git diffs, retries flaky LLM calls, and supports third-party Anthropic gateways, making it a dependable base for testing TypeScript, JavaScript, or LLM workflows.
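
A rough sketch of that surface follows. Every option name in it (projectDir, mcpServers, anthropic.baseUrl, retries, gitDiff) is a guess for illustration, not documented testai API:

```ts
// Illustrative configuration sketch; every option name is an assumption.
import { createWorkspace, mockDatadogServer, mockLinearServer } from "testai";

const workspace = await createWorkspace({
  projectDir: "./my-app",                                    // local project instead of a preset
  mcpServers: [mockDatadogServer(), mockLinearServer()],     // no real Datadog/Linear calls
  anthropic: { baseUrl: process.env.ANTHROPIC_GATEWAY_URL }, // third-party gateway support
  retries: 3,                                                // retry flaky LLM calls
});

const result = await workspace.query(
  "Triage the latest Datadog alert into a Linear issue"
);
console.log(result.gitDiff); // file changes reported as a git diff
```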

Why is it gaining traction?

Ready-made MCP stubs mirror exact vendor tool names (like Slack's search_messages or Figma's generate_design) without real API hits, covering the agentic flows that generic unit-testing frameworks leave out. Git worktrees and file snapshots give precise change reports, while query timeouts and env vars like CLAUDE_MODEL make it CI-friendly via GitHub Actions. Devs like the low-boilerplate setup for agent testing across stacks.
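
For instance, a stub that mirrors Slack's search_messages tool name, plus the CI knobs mentioned above, might look like the following sketch. Only CLAUDE_MODEL comes from the text; defineMockServer, the handler shape, the model id, and timeoutMs are invented for illustration.

```ts
// Vendor-faithful stub sketch; identifiers other than CLAUDE_MODEL are assumed.
import { createWorkspace, defineMockServer } from "testai";

const slack = defineMockServer("slack", {
  // Same tool name the real Slack MCP server exposes, so the agent
  // exercises the exact call it would make in production.
  search_messages: async ({ query }: { query: string }) => ({
    messages: [{ channel: "#support", text: `stub result for "${query}"` }],
  }),
});

// CI-friendly knobs: pin the model via env and bound each query.
process.env.CLAUDE_MODEL ??= "claude-sonnet-4-5"; // placeholder model id

const workspace = await createWorkspace({ projectDir: ".", mcpServers: [slack] });
const result = await workspace.query("Find last week's outage thread", {
  timeoutMs: 120_000, // assumed per-query timeout option
});
console.log(result.toolCalls); // should show the search_messages stub firing
```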

Who should use this?

AI engineers building Claude Code skills or subagents for tools like Linear issue tracking. Frontend devs testing Nuxt or React projects with Figma MCPs for design-to-code work. Backend teams validating FastAPI endpoints via Datadog mocks, or anyone prototyping LLM agents for tasks like restaurant recommendations.

Verdict

Promising niche player for LLM testing (14 stars); the low star count signals early maturity, but the docs at testingai.ai are clear, though presets lack full cloning. Try it for MCP-heavy agents; skip it if you need a broad, general-purpose testing framework.
