vineetkishore01

🧠 A Memory System for AI Agents - Rust-based semantic memory with embeddings, LLM consolidation, and MCP support

100% credibility · Found Mar 15, 2026 at 13 stars
AI Summary

Chetna is a standalone memory manager for AI agents that stores categorized experiences with semantic search, emotional scoring, automatic decay, and context building for smarter responses.

How It Works

1
💡 Find your AI's memory helper

You discover Chetna, a smart tool that helps AI assistants remember facts, likes, and experiences just like a real brain.

2
🚀 Start it up easily

Download the ready-to-go package and launch it – a friendly web page opens in your browser at a local address.

3
🔗 Connect a thinking helper

Point it to your local AI service so it can understand the meaning behind words and ideas.

4
🧠 Save special memories

Add notes like 'You love dark mode' or 'Pizza is your favorite' – it scores them by importance and tags them smartly.

5
🔍 Search by meaning

Type a question like 'user likes?' and get matching memories with summaries ready for your AI chats.

6
⚙️ Tune the forgetting curve

Pin forever-memories, let unimportant ones fade naturally, or clean up with one click.

🎉 Your AI knows you deeply

Now your assistant recalls preferences, rules, and stories perfectly, making conversations feel personal and smart.
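The forgetting-curve step above can be sketched as a simple Ebbinghaus-style retention model: each memory's retention decays over time, importance slows the decay, pinned memories never fade, and anything below a threshold is pruned. This is an illustrative model only; the class, function, and parameter names are hypothetical, not Chetna's actual implementation.

```python
import math
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    importance: float     # 0.0-1.0; higher importance decays more slowly
    age_days: float       # days since the memory was last recalled
    pinned: bool = False  # pinned "forever-memories" never decay

def retention(mem: Memory, base_strength: float = 5.0) -> float:
    """Ebbinghaus curve R = e^(-t/S); importance boosts memory strength S."""
    if mem.pinned:
        return 1.0
    strength = base_strength * (0.5 + mem.importance)
    return math.exp(-mem.age_days / strength)

def prune(memories, threshold: float = 0.2):
    """Keep only memories whose retention is still above the threshold."""
    return [m for m in memories if retention(m) >= threshold]

mems = [
    Memory("User loves dark mode", importance=0.9, age_days=30, pinned=True),
    Memory("Asked about the weather", importance=0.1, age_days=30),
]
kept = prune(mems)  # the pinned memory survives; the trivial one fades out
```

The design choice here mirrors the step above: tuning `base_strength` and `threshold` is the "forgetting curve" knob, while `pinned=True` opts a memory out of decay entirely.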

AI-Generated Review

What is Chetna?

Chetna is a standalone Rust server that gives AI agents a semantic memory system, storing facts, preferences, and experiences with automatic embeddings from Ollama, OpenAI, or Gemini. It handles semantic search, LLM-based importance re-scoring, Ebbinghaus decay for forgetting low-value memories, and context building for prompts—all via HTTP API, web dashboard, or MCP protocol. In short, it is a memory manager for agents that solves the problem of stateless LLMs forgetting user context across sessions.
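Semantic search over stored memories boils down to embedding the query and ranking memories by vector similarity. The sketch below uses tiny hand-made vectors in place of a real embedding model; the names and vectors are illustrative assumptions, not Chetna's API (in Chetna, the embeddings would come from Ollama, OpenAI, or Gemini).

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "embeddings" standing in for real model output.
memories = {
    "User likes dark mode":    [0.9, 0.1, 0.0],
    "User's dog is named Rex": [0.0, 0.2, 0.9],
}

def search(query_vec, top_k: int = 1):
    """Return the top_k memory texts closest to the query vector."""
    ranked = sorted(memories,
                    key=lambda m: cosine(query_vec, memories[m]),
                    reverse=True)
    return ranked[:top_k]

print(search([0.8, 0.2, 0.1]))  # closest to the dark-mode memory
```

A query vector near the "dark mode" embedding retrieves that memory even though the query shares no exact keywords with it, which is the point of searching by meaning rather than by text match.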

Why is it gaining traction?

Its Rust implementation scales semantic search to millions of memories without OOM issues, batching embeddings and using LRU session caching for 90% hit rates. Developers can hook it in easily via the Python SDK (`memory.remember("User likes dark mode")`), Docker Compose for Ollama integration, and MCP tools for agents like Wolverine or OpenClaw. The web UI with toast notifications, category filters, and deleted-memory recovery beats clunky vector DB wrappers.
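The session-caching claim can be illustrated with a minimal LRU cache: embeddings for repeated queries within a session are served from memory instead of being recomputed, and the least-recently-used entry is evicted when capacity is reached. This is a generic sketch of the technique, not Chetna's internals; the `embed` stand-in is a hypothetical placeholder for a real embedding call.

```python
from collections import OrderedDict

class LRUCache:
    """Evicts the least-recently-used entry once capacity is exceeded."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.data = OrderedDict()
        self.hits = 0
        self.misses = 0

    def get_or_compute(self, key, compute):
        if key in self.data:
            self.data.move_to_end(key)  # mark as most recently used
            self.hits += 1
            return self.data[key]
        self.misses += 1
        value = compute(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict the oldest entry
        return value

cache = LRUCache(capacity=2)
embed = lambda text: [float(len(text))]  # stand-in for an embedding request
for q in ["user likes?", "user likes?", "dog name?", "user likes?"]:
    cache.get_or_compute(q, embed)
# Repeated queries hit the cache; only the first sight of each key misses.
```

In a real deployment the `compute` callback would be the expensive network round-trip to the embedding provider, which is exactly what a high hit rate avoids.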

Who should use this?

AI agent builders integrating LLM or MCP memory into prototypes, like chatbot devs needing persistent user prefs without Pinecone costs, and local LLM tinkerers running Ollama who want an optimizer for prompt context.

Verdict

Early alpha with 13 stars: docs are solid with curl examples, but test coverage is light and skills/procedures were cut for focus. Grab it for agent experiments if you need a memory system that's dead simple to deploy; skip it for production until more battle-testing.
