Rangle2 / mda

Memory system for LLMs that remembers everything you teach it during conversation. No reindexing, no context window limits. CPU by default, GPU optional.

14 stars · 3 forks · 69% credibility
Found May 03, 2026 at 14 stars.
Language: Python

AI Summary

MDA is a token-free associative memory system that enables large language models to learn and retain knowledge online during inference without backpropagation or retraining.
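"Online learning without backpropagation" has a classic analogue: a linear associative memory, where each new key-value pair is written with a single outer-product update and read back with one matrix-vector product. The sketch below is purely illustrative of that general idea; it is not mda's actual mechanism.

```python
import numpy as np

# Illustrative only: a classic linear associative memory. New facts are
# written online by accumulating key-value outer products -- no gradient
# descent, no retraining, and old associations survive new writes.

dim = 64
rng = np.random.default_rng(0)
memory = np.zeros((dim, dim))

def random_unit(rng, dim):
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

# Hypothetical keys/values for two facts about "Solaris Station".
keys = {name: random_unit(rng, dim) for name in ["solaris_founder", "solaris_year"]}
values = {name: random_unit(rng, dim) for name in ["mira_voss", "y2041"]}

# Write: one outer-product update per fact.
memory += np.outer(values["mira_voss"], keys["solaris_founder"])
memory += np.outer(values["y2041"], keys["solaris_year"])

# Read: multiply by the query key, then score against known value vectors.
recalled = memory @ keys["solaris_founder"]
scores = {name: float(v @ recalled) for name, v in values.items()}
best = max(scores, key=scores.get)
print(best)  # the matching value should score highest
```

With near-orthogonal random keys, cross-talk between stored facts stays small, which is why the correct value wins the score comparison.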

How It Works

1. 🔍 Discover MDA

You hear about a smart memory tool that helps AI chatbots remember facts and learn during conversations, like giving your assistant a real brain.

2. 📦 Install Easily

With one simple command, you add MDA to your computer, and it's ready to boost any AI helper you use.

3. 🤖 Connect Your AI

Link MDA to your favorite AI chatbot, like a local one or online service, so it can share memories instantly.

4. 📚 Teach Facts

Tell MDA simple facts, like 'Solaris Station was founded by Dr. Mira Voss in 2041,' and it learns them right away without forgetting old ones.

5. 💭 Chat Smarter

Ask questions like 'Who founded Solaris?' and MDA pulls up the exact facts with confidence scores, making answers accurate and connected.

🎉 AI Remembers Forever

Your AI now reasons across long talks, connects ideas automatically, and gets smarter with every chat—no more repeating yourself!
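The teach-and-recall loop above can be sketched as a toy fact store that keeps an entity network with confidence scores. Every name here is hypothetical; this is not mda's real API, just an illustration of the flow.

```python
# Toy sketch of the "teach facts" / "chat smarter" loop.
# ToyFactStore and its methods are invented for illustration only.

from collections import defaultdict

class ToyFactStore:
    def __init__(self):
        # entity -> list of (relation, object, confidence) triples
        self.facts = defaultdict(list)

    def teach(self, subject, relation, obj, confidence=1.0):
        # Learning is a plain append: no reindexing, old facts survive.
        self.facts[subject].append((relation, obj, confidence))

    def recall(self, subject):
        return self.facts.get(subject, [])

store = ToyFactStore()
store.teach("Solaris Station", "founded_by", "Dr. Mira Voss", 0.97)
store.teach("Solaris Station", "founded_in", "2041", 0.95)

for relation, obj, conf in store.recall("Solaris Station"):
    print(f"{relation}: {obj} (confidence {conf:.2f})")
```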

AI-Generated Review

What is mda?

MDA is a Python memory system for LLMs that stores everything you teach it mid-conversation as entity networks, bypassing context window limits and reindexing. Plug it into Ollama, OpenAI, or Anthropic models via a simple API or CLI like `mda --model qwen3:4b`, and it injects relevant facts with confidence scores into prompts. It's a persistent layer on top of RAG, handling online learning without backprop or tokenizers.
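The fact-injection step might look roughly like this; the prompt layout and helper name below are assumptions for illustration, not mda's documented interface.

```python
# Sketch of injecting recalled facts (with confidence scores) into a
# prompt before the model call. build_prompt and the layout are
# hypothetical; consult mda's docs for the real interface.

def build_prompt(question, facts):
    # facts: list of (text, confidence) pairs recalled from memory
    lines = [f"- {text} (confidence {conf:.2f})" for text, conf in facts]
    context = "\n".join(lines)
    return f"Known facts:\n{context}\n\nQuestion: {question}\nAnswer:"

facts = [
    ("Solaris Station was founded by Dr. Mira Voss.", 0.97),
    ("Solaris Station was founded in 2041.", 0.95),
]
prompt = build_prompt("Who founded Solaris Station?", facts)
print(prompt)
```

The point of the design is that the model itself stays frozen; only the injected context changes as memory grows.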

Why is it gaining traction?

It crushes RAG baselines in benchmarks for multi-hop reasoning (+20%) and incremental learning (+60%), using 3x less context while retaining 92% accuracy over 200 turns. Developers dig the CPU-first design with optional GPU speedup, plus seamless Open WebUI integration—no server needed. As a memory layer that tackles context-window limits in long chats, it feels like a lightweight memory manager that actually evolves.

Who should use this?

AI engineers building conversational agents or multi-turn apps where LLMs forget details, like customer support bots or code assistants (think Claude Code) that need persistent memory. Ideal for indie devs prototyping with Ollama who hit RAG's update pains, or teams wanting a drop-in memory enhancer for Copilot-style tools without vector DB overhead.

Verdict

Grab it for experiments if you're prototyping LLM memory: the benchmarks impress despite the 14 stars and a 0.7% credibility score signaling early days. Docs are solid with quickstarts, but expect tweaks as it's pre-1.0; the SSPL license suits research over enterprise use. Worth a spin for memory-testing scenarios.
