OmniDimen / omemo

Public

AI memory system

19 stars · 4 forks · 100% credibility
Found Mar 08, 2026 at 19 stars
AI Analysis
Python
AI Summary

Omni Memory (omemo) is a proxy service that gives AI chat models long-term memory, enabling them to retain and recall conversation details across sessions through an OpenAI-compatible interface that works with standard chat applications.

How It Works

1
🕵️ Discover Omni Memory

You hear about a handy tool that makes AI chatbots remember conversations forever, like a friend who never forgets your name or favorite topics.

2
📥 Get it set up

Download the files and start the service on your computer with one easy script, just like launching any simple app.
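
A plausible local setup, assuming the repo is hosted at github.com/OmniDimen/omemo (inferred from the page header) and uses the run.sh launcher mentioned later in the verdict:

```shell
# Assumed repo URL based on the page header; adjust if the repo moved.
git clone https://github.com/OmniDimen/omemo.git
cd omemo
./run.sh   # starts the proxy locally, per the review's quickstart
```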

3
🔗 Connect your AI helper

Tell it which AI service to use by adding a simple connection, so your chats get supercharged with smarts.
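
The AI review notes that upstream providers are configured in endpoints.json. The exact schema isn't shown on this page, so every field name below is illustrative only:

```json
{
  "endpoints": [
    {
      "name": "openai",
      "base_url": "https://api.openai.com/v1",
      "api_key": "sk-..."
    },
    {
      "name": "anthropic",
      "base_url": "https://api.anthropic.com/v1",
      "api_key": "sk-ant-..."
    }
  ]
}
```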

4
⚙️ Pick your memory style

Choose if the AI adds memories automatically or uses a side helper to summarize chats, fitting how you want it to learn.
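
The review names a memory_settings.json file and two modes (builtin tagging vs. an external summarizer run every N turns, with optional RAG injection). A hypothetical config sketch — all keys are assumptions, not the project's actual schema:

```json
{
  "mode": "builtin",
  "injection": "rag",
  "external": {
    "model": "gpt-4o-mini",
    "summarize_every_n_turns": 10
  }
}
```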

5
🚀 Turn it on

Click to launch, and now your AI lives at a web address ready for action, feeling instant and effortless.
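
Per the review, the proxy serves an OpenAI-style /v1/models route, so a quick reachability check is possible. A minimal sketch, assuming the default address http://localhost:8080:

```python
import urllib.request

OMEMO_BASE = "http://localhost:8080/v1"  # assumed default address

def models_url(base=OMEMO_BASE):
    """URL of the proxy's model-listing route."""
    return f"{base}/models"

# With the proxy running, this would list the configured models:
# with urllib.request.urlopen(models_url()) as resp:
#     print(resp.read().decode())
```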

6
💬 Chat with lasting memory

Send messages through your apps or the built-in web page, and watch the AI recall details from chats long ago, like magic!
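
A sketch of sending a chat turn through the proxy instead of the provider directly. The endpoint path and base URL come from the AI review below; the model name is an assumption:

```python
import json
import urllib.request

OMEMO_BASE = "http://localhost:8080/v1"

def chat_request(messages, model="gpt-4o-mini"):
    """Build the OpenAI-style request the proxy forwards upstream
    after injecting stored memories into the prompt."""
    body = json.dumps({"model": model, "messages": messages}).encode()
    return urllib.request.Request(
        f"{OMEMO_BASE}/chat/completions",
        data=body,
        headers={"Content-Type": "application/json"},
    )

req = chat_request([{"role": "user", "content": "What did we discuss last week?"}])
# urllib.request.urlopen(req) would return the completion once the proxy runs.
```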

7
🧠 Manage what it remembers

Use the friendly web dashboard to add, edit, or erase memories anytime, keeping everything personal and up to date.
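
An illustrative model of the add/edit/erase operations the dashboard exposes. The memory-line format "- [YYYY-MM-DD] fact" comes from the AI review; the class and its API are invented here for illustration, not omemo's code:

```python
from datetime import date

class MemoryStore:
    """Toy in-memory stand-in for the dashboard's memory CRUD."""

    def __init__(self):
        self.entries = []  # list of (iso_date, fact) tuples

    def add(self, fact, day=None):
        self.entries.append((day or date.today().isoformat(), fact))

    def edit(self, index, fact):
        day, _ = self.entries[index]       # keep the original date
        self.entries[index] = (day, fact)

    def erase(self, index):
        del self.entries[index]

    def render(self):
        """Render entries in the review's memory-line format."""
        return "\n".join(f"- [{d}] {f}" for d, f in self.entries)

store = MemoryStore()
store.add("user likes Python", "2026-03-08")
store.edit(0, "user prefers Python over Go")
# store.render() → "- [2026-03-08] user prefers Python over Go"
```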

🎉 Your smart companion

Now your AI truly knows you, making every conversation deeper and more helpful, just like having a perfect memory buddy.

AI-Generated Review

What is omemo?

Omemo is a Python proxy server that adds long-term memory to LLMs, turning stateless chat APIs into persistent companions that recall user details across sessions. Swap your OpenAI client's base_url to http://localhost:8080/v1, configure upstream providers like Anthropic or OpenAI in endpoints.json, and it auto-injects memories—full history or RAG-selected—into prompts. No app changes needed; it proxies /v1/chat/completions and /v1/models while offering a web UI at / for memory CRUD.

Why is it gaining traction?

It stands out with dual memory modes: builtin lets the model tag memories via blocks in its responses (e.g., "- [YYYY-MM-DD] user likes Python"), while external uses a separate LLM every N turns to summarize the conversation. RAG injection pulls only the relevant stored facts into the prompt, avoiding token bloat, and model aliases dodge naming conflicts across providers. Devs hook it in for zero-effort continuity, like a persistent notebook for their AI chats.
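
A sketch of the builtin-mode idea: the model emits dated memory lines in its response that the proxy extracts and stores. The line format is quoted from the review; the regex and function are illustrative, not omemo's actual parser:

```python
import re

# Matches lines like "- [2026-03-08] user likes Python"
MEMORY_LINE = re.compile(r"^- \[(\d{4}-\d{2}-\d{2})\] (.+)$", re.MULTILINE)

def extract_memories(response_text):
    """Return (date, fact) pairs found in a model response."""
    return MEMORY_LINE.findall(response_text)

reply = (
    "Sure, I can help with that!\n"
    "- [2026-03-08] user likes Python\n"
    "- [2026-03-08] user is building a chatbot\n"
)
# extract_memories(reply) → [("2026-03-08", "user likes Python"),
#                            ("2026-03-08", "user is building a chatbot")]
```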

Who should use this?

AI backend devs building chatbots that remember user preferences, like personalized tutors tracking progress or customer support bots recalling past issues. Python teams proxying Qwen/Claude via OpenAI SDKs for agents that need history without prompt hacks. Skip it if you only run one-shot queries or need enterprise scale.

Verdict

Solid for local prototyping: spin up with ./run.sh, tweak memory_settings.json, and get persistent LLM chats in minutes. At 18 stars and 1.0% credibility, it's early (thin tests, file-based storage), but crisp docs and a web UI make it worth a trial run before betting big.


