JinHo-von-Choi

Fragment-Based Memory MCP Server — AI Mid- to Long-Term Memory System

47
9
100% credibility
Found Feb 28, 2026 at 20 stars
AI Analysis
JavaScript
AI Summary

Memento MCP provides a persistent memory system for AI agents by storing knowledge as small, categorized fragments with smart search and automatic cleanup.

How It Works

1. 📰 Discover Memento

You hear about Memento, a simple way to give your AI assistant a real memory that lasts between chats, so it remembers facts, mistakes, and your preferences.

2. 📦 Set up storage

You prepare two places to keep memories: a quick cache (Redis) for recent thoughts and a sturdy database (Postgres) for long-term knowledge.

3. 🚀 Start the memory keeper

With one launch command, your memory server comes alive on the web, ready to serve your AI over HTTP.

4. 🤝 Link your AI assistant

You connect your AI to the memory keeper using a private OAuth token, so it can save and pull up memories anytime.

5. 💭 Chat and build memories

As you talk to your AI, it automatically saves key facts, fixes errors, notes your likes, and learns new steps to follow.

🎉 AI remembers forever

Your AI now recalls past lessons, avoids repeating old mistakes, and personalizes its help, feeling like a true companion that grows with you.
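The remember-and-recall loop described above can be pictured as a tiny in-memory memory keeper. This is a toy stand-in, not Memento's implementation: the fragment shape (`type`, `content`) is assumed from the summary, and a plain array replaces the real Redis/Postgres stores.

```javascript
// Toy in-memory stand-in for the memory keeper described above.
// Fragment fields (type, content) are assumptions based on the summary;
// the real server persists fragments to Redis and Postgres, not an array.
class MemoryKeeper {
  constructor() {
    this.fragments = [];
    this.nextId = 1;
  }

  // "remember": save one small, typed fragment of knowledge.
  remember(type, content) {
    const fragment = { id: this.nextId++, type, content, createdAt: Date.now() };
    this.fragments.push(fragment);
    return fragment;
  }

  // "recall": pull back fragments matching a type and/or keyword.
  recall({ type, keyword } = {}) {
    return this.fragments.filter(
      (f) =>
        (!type || f.type === type) &&
        (!keyword || f.content.toLowerCase().includes(keyword.toLowerCase()))
    );
  }
}

// The assistant saves facts, preferences, and mistakes during a chat...
const keeper = new MemoryKeeper();
keeper.remember('fact', 'User deploys with Docker Compose');
keeper.remember('preference', 'User prefers TypeScript over plain JavaScript');
keeper.remember('error', 'Build failed when NODE_ENV was unset');

// ...and later recalls just the relevant pieces, not the whole history.
console.log(keeper.recall({ type: 'preference' }).length); // 1
console.log(keeper.recall({ keyword: 'docker' })[0].type); // 'fact'
```

The point of the fragment shape is that recall can be selective: the agent asks for preferences or errors, rather than replaying an entire transcript into the prompt.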


Star Growth

This repo grew from 20 to 47 stars.
AI-Generated Review

What is memento-mcp?

Memento-mcp is a JavaScript MCP server that delivers fragment-based persistent memory to stateless language-model agents via the Model Context Protocol (MCP). It breaks agent knowledge into atomic, typed fragments (facts, decisions, errors, preferences) stored across sessions in Postgres, with optional Redis caching, letting agents recall relevant context without cramming full histories into prompts. It runs on port 56332 and exposes MCP tools such as remember, recall, link, and reflect over HTTP or SSE.
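Since the server speaks MCP over HTTP/SSE, a remember call ultimately travels as a JSON-RPC `tools/call` request. Below is a sketch of what such a payload might look like; the JSON-RPC framing and `tools/call` method are standard MCP, but the argument names (`type`, `content`) are assumptions, not the server's documented schema.

```javascript
// Hypothetical MCP tools/call payload for the `remember` tool.
// JSON-RPC 2.0 framing and "tools/call" are standard MCP; the tool's
// argument shape here is an assumption for illustration only.
function buildRememberRequest(id, fragmentType, content) {
  return {
    jsonrpc: '2.0',
    id,
    method: 'tools/call',
    params: {
      name: 'remember',
      arguments: { type: fragmentType, content },
    },
  };
}

const req = buildRememberRequest(1, 'preference', 'User likes concise answers');
// An MCP client would POST this JSON to the server, e.g. on port 56332.
console.log(req.method); // prints 'tools/call'
```

A `recall` call would use the same framing with a different tool name, which is why MCP-aware clients can adopt the server without custom wire code.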

Why is it gaining traction?

Its three-tier search (Redis keyword lookup, Postgres filters, pgvector semantic search) delivers fast, precise retrieval without wasting tokens, and auto-linking plus consolidation detect contradictions via NLI and Gemini. Background workers handle evaluation, decay, and session-end reflection, building a graph of related memories that agents can explore. Multi-version MCP support and OAuth make it plug-and-play for agent servers.
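The three tiers can be pictured as a cascade: try the fast cache first, then structured filters, then vector similarity as a semantic fallback. A simplified sketch, with an in-memory `Map`, an array filter, and a toy 2-D cosine similarity standing in for Redis, Postgres, and pgvector respectively:

```javascript
// Simplified three-tier recall: cache -> structured filter -> vector fallback.
// All three stores are toy in-memory stand-ins for Redis/Postgres/pgvector.
const cache = new Map(); // tier 1: previously answered keyword queries
const fragments = [
  { content: 'User prefers dark mode', type: 'preference', vec: [1, 0] },
  { content: 'Deploy failed on Friday', type: 'error', vec: [0, 1] },
];

// Toy cosine similarity over 2-D embeddings.
function cosine(a, b) {
  const dot = a[0] * b[0] + a[1] * b[1];
  const norm = (v) => Math.hypot(v[0], v[1]);
  return dot / (norm(a) * norm(b));
}

function recall(keyword, type, queryVec) {
  // Tier 1: cached keyword lookup (Redis in the real system).
  if (cache.has(keyword)) return cache.get(keyword);

  // Tier 2: structured filter by type plus keyword (Postgres in the real system).
  const typed = fragments.filter((f) => f.type === type);
  const exact = typed.filter((f) => f.content.toLowerCase().includes(keyword));
  if (exact.length > 0) {
    cache.set(keyword, exact); // warm the cache for next time
    return exact;
  }

  // Tier 3: semantic fallback via vector similarity (pgvector in the real system).
  return [...typed]
    .sort((a, b) => cosine(b.vec, queryVec) - cosine(a.vec, queryVec))
    .slice(0, 1);
}

console.log(recall('dark', 'preference', [1, 0])[0].content); // 'User prefers dark mode'
```

The design point is graceful degradation: exact matches are cheap and cached, while the vector tier only runs when keywords come up empty, which is how the real system avoids token waste.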

Who should use this?

AI agent developers wiring MCP-compatible tools into Node.js apps, especially those tracking user preferences, debugging histories, or workflows across chats. It suits backend teams building long-running LLM assistants that need to "learn" from past interactions without stateless resets.

Verdict

Grab it for MCP prototypes: robust docs and features outweigh the modest star count and early credibility score, but watch dependencies like OpenAI embeddings and test the Redis fallbacks. The architecture is production-ready; the polish is still early.


