CyrilPeng

A local long-term memory core for AIRP, AI games, AI desktop pets, and OpenAI-compatible clients

34 stars · 89% credibility
Found May 02, 2026 at 18 stars
AI Analysis
Python
AI Summary

KokoroMemo is a local proxy that adds persistent long-term memory and dynamic conversation state to AI role-playing applications by injecting relevant context into chat requests.

How It Works

1
🔍 Discover endless roleplay adventures

You find KokoroMemo, the magic helper that makes your AI chats remember everything forever, turning short talks into epic stories.

2
🚀 Launch your memory companion

Download and start the friendly app on your computer—it opens a simple dashboard ready to connect.

3
🧠 Connect your AI thinking partner

Link your favorite AI service like a smart brain, so it can recall past chats and stay in character.

4
🎮 Hook up your roleplay game

Point your roleplay app (like a chat buddy) to the local helper with one easy address change.

5
💭 Chat and feel the memory magic

Start talking—watch as old promises, scenes, and feelings come alive automatically in every reply!

6
📝 Review and shape memories

Peek at new memory ideas in the inbox, approve the gems, and tweak your story state anytime.
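The steps above boil down to one core trick: before a chat request reaches the model, relevant memories are injected as extra context. A minimal sketch of that injection step, assuming memories arrive as plain strings and are prepended as a system message (the function name and message format here are illustrative, not KokoroMemo's actual internals):

```python
def inject_memories(request_body: dict, memories: list[str]) -> dict:
    """Prepend retrieved memories as a system message (illustrative sketch)."""
    if not memories:
        return request_body
    memory_block = "Relevant memories:\n" + "\n".join(f"- {m}" for m in memories)
    injected = dict(request_body)
    injected["messages"] = (
        [{"role": "system", "content": memory_block}] + request_body["messages"]
    )
    return injected

# Example: a client request plus one stored memory
req = {
    "model": "deepseek-chat",
    "messages": [{"role": "user", "content": "Do you remember my promise?"}],
}
out = inject_memories(req, ["User promised to visit the lighthouse."])
```

The client never sees this step; it just gets replies that appear to remember.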

Epic stories that never forget

Your AI roleplay now flows endlessly with perfect recall, relationships that deepen, and worlds that live on.

Star Growth

This repo grew from 18 to 34 stars since it was found.
AI-Generated Review

What is KokoroMemo?

KokoroMemo is a Python-based local proxy server that adds persistent long-term memory to OpenAI-compatible AI clients, like those for role-playing games, desktop pets, or AIRP apps. It intercepts chat completions requests, automatically extracts key facts from conversations into a searchable memory store, and injects relevant memories back into prompts for consistent AI behavior over sessions. Developers get a drop-in OpenAI API endpoint with SQLite/LanceDB storage and a web dashboard for browsing/editing memories.
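Because the proxy speaks the OpenAI wire format, "drop-in" means pointing any client at the local endpoint instead of the provider. A hedged sketch using only the standard library, with the port taken from the review's `localhost:14514/v1` endpoint (the request is only constructed here, not sent, and the auth behavior is an assumption):

```python
import json
import urllib.request

# KokoroMemo exposes an OpenAI-compatible endpoint; point any client here.
PROXY_URL = "http://localhost:14514/v1/chat/completions"

payload = {
    "model": "deepseek-chat",  # whatever backend the proxy is configured for
    "messages": [{"role": "user", "content": "Where did we leave off?"}],
}
req = urllib.request.Request(
    PROXY_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        # Assumption: the proxy forwards credentials to the upstream provider.
        "Authorization": "Bearer YOUR_KEY",
    },
)
# urllib.request.urlopen(req)  # uncomment with the proxy running locally
```

Clients that let you override the API base URL (SillyTavern, the OpenAI SDKs) need only that one address change.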

Why is it gaining traction?

It stands out by handling the full memory lifecycle—extraction, vector search, injection, and state tracking—without touching your LLM provider, supporting any OpenAI-compatible backend like DeepSeek or local models. The Tauri-powered GUI lets you mount memory libraries per conversation, import SillyTavern logs, and tweak policies like retrieval gating or auto-approval, saving hours of custom RAG plumbing. Early adopters praise the config-driven setup and real-time WebSocket updates for inbox reviews.
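"Retrieval gating" as described above plausibly means injecting only memories similar enough to the current message. A sketch of one such policy using cosine similarity over embedding vectors; the threshold, function names, and embedding source are all placeholders, not KokoroMemo's actual implementation:

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    """Cosine similarity of two vectors; 0.0 for zero-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def gate_memories(query_vec, memory_vecs, threshold=0.75):
    """Return indices of memories similar enough to the query (illustrative policy)."""
    return [i for i, v in enumerate(memory_vecs) if cosine(query_vec, v) >= threshold]

# Toy 2-D embeddings: the first memory aligns with the query, the second does not.
hits = gate_memories([1.0, 0.0], [[0.9, 0.1], [0.0, 1.0]])
```

Gating like this keeps prompts lean: off-topic memories stay out of the context window instead of crowding the character's persona.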

Who should use this?

AI roleplay scripters building persistent characters in SillyTavern or custom bots, game devs needing NPC memory for quests/relationships, and indie makers of AI companions who want local, private recall without API costs. Skip if you're doing one-off chats or need enterprise-scale vector DBs.

Verdict

Solid for prototyping memory-aware AI apps: install via pip, point your client at localhost:14514/v1, and it just works. With 34 stars and an 89% credibility score, it's early, so expect more polish on docs and tests as it matures. Try it if local persistence fits your stack.


