CopilotKit / llmock

Deterministic mock LLM server for testing *across processes* — fixture-based routing with SSE streaming

89 stars · 100% credibility · TypeScript
Found Mar 11, 2026 at 57 stars.

AI Summary

A lightweight mock server that imitates OpenAI, Anthropic Claude, and Google Gemini AI APIs to enable deterministic, fixture-driven testing of AI applications across multiple processes.

How It Works

1. 🔍 Discover the testing helper -- llmock lets you test your app's AI chat features without calling real AI services, so tests run fast and behave the same every time.
2. 📦 Add it to your project -- install the package as a dev dependency so it's ready to use.
3. ✍️ Create pretend replies -- write fixtures that spell out exactly what the fake AI should say or do for specific prompts, like scripting a conversation.
4. 🚀 Start the pretend AI -- one command launches a local server that mimics the real provider APIs.
5. 🔗 Point your app at it -- swap your app's base URL so requests go to the mock instead of the real service.
6. 🧪 Run your tests -- they complete with instant, repeatable results and a journal of every request that was made.

The result: deterministic AI tests that save time, money, and hassle.
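The fixture-and-routing idea behind the steps above can be sketched in a few lines of TypeScript. This is an illustrative model only: the fixture shape and the regex matching rule below are assumptions for the sake of the example, not llmock's actual fixture schema.

```typescript
// Hypothetical fixture shape: map a prompt pattern to a canned reply.
// llmock's real JSON fixture format may differ; this only illustrates the idea.
interface Fixture {
  match: RegExp; // which incoming prompts this fixture handles
  reply: string; // the deterministic text the "AI" should return
}

const fixtures: Fixture[] = [
  { match: /weather/i, reply: "It is always sunny in the mock." },
  { match: /.*/, reply: "Default scripted reply." },
];

// Route a prompt to the first matching fixture, like scripting a conversation.
function route(prompt: string): string {
  const hit = fixtures.find((f) => f.match.test(prompt));
  return hit ? hit.reply : "";
}

console.log(route("What's the weather like?")); // first fixture wins
console.log(route("Tell me a joke")); // falls through to the catch-all
```

Because every prompt deterministically maps to a scripted reply, the same test input always produces the same output, which is what makes the test results repeatable.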

Star Growth

The repo grew from 57 stars at discovery to 89.
AI-Generated Review

What is llmock?

llmock is a TypeScript mock LLM server that delivers deterministic responses for testing across processes using fixture-based routing and SSE streaming. Point your OpenAI, Claude, or Gemini clients at it via base URL env vars like OPENAI_BASE_URL, and it streams real API formats—text, tool calls, even errors—from simple JSON fixtures. Zero runtime deps means instant, reproducible LLM mocks without flakiness in multi-service setups.
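Redirecting a client is just an environment-variable change. A minimal sketch, assuming the mock listens on port 5555 (the port used in the CLI example below) and serves the standard OpenAI `/v1` path prefix:

```shell
# Point the OpenAI SDK at the local mock instead of api.openai.com.
# Port 5555 matches the CLI example (llmock -p 5555); the /v1 prefix is the
# standard OpenAI API path and is assumed here.
export OPENAI_BASE_URL="http://localhost:5555/v1"
export OPENAI_API_KEY="test-key" # any placeholder; a mock doesn't verify it
echo "$OPENAI_BASE_URL"
```

No application code changes are needed: official OpenAI SDKs read `OPENAI_BASE_URL` from the environment automatically.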

Why is it gaining traction?

Unlike MSW's in-process patching, llmock runs a real HTTP server that catches calls from child processes, browsers, or agents -- ideal for E2E testing. Built-in SSE streaming for all three providers saves manual event crafting, and JSON fixtures plus a CLI (`llmock -p 5555 -f fixtures`) make setups easy to version and share. Developers also get a request journal for debugging and one-shot error injection for edge cases.
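To make "built-in SSE" concrete, here is the OpenAI-style `chat.completion.chunk` wire format such a mock streams for a fixture's text: one `data:` event per delta, terminated by the literal `data: [DONE]` sentinel. This sketch reproduces only the public wire format; it is not llmock's implementation.

```typescript
// Build the OpenAI-style SSE events a mock server would stream for one
// fixture reply. Each word becomes a `chat.completion.chunk` delta event.
function sseEvents(reply: string): string[] {
  const events = reply.split(" ").map(
    (word, i) =>
      "data: " +
      JSON.stringify({
        object: "chat.completion.chunk",
        choices: [{ index: 0, delta: { content: (i === 0 ? "" : " ") + word } }],
      })
  );
  events.push("data: [DONE]"); // stream terminator per the OpenAI API
  return events;
}

console.log(sseEvents("Hello world").join("\n"));
```

Hand-writing these events for three providers' formats is exactly the busywork the review says the server takes off your plate.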

Who should use this?

E2E testers with Playwright or Vitest running LLM agents like CopilotKit, LangGraph, or Mastra in Next.js apps with worker processes. Agent framework devs needing cross-provider mocks without SDK tweaks. Anyone mocking LLM APIs in microservices or browser-controlled flows where single-process tools fail.

Verdict

Grab it for E2E LLM testing if you hit MSW's limits -- the docs and examples shine, the tests pass, and the CLI works well. Still early at 89 stars; watch for broader adoption before relying on it in production pipelines.
