toby-bridges

LLM-powered agent memory with 6-category classification and L0/L1/L2 tiered structure

63
7
100% credibility
Found Feb 19, 2026 at 15 stars (4x growth since)
AI Analysis
TypeScript
AI Summary

A plugin that enables AI agents to automatically capture, organize into categories, deduplicate, and recall long-term memories from user conversations.

How It Works

1
🧠 Discover smart memory for your AI helper

You find a helpful tool that lets your AI assistant remember important details from chats forever.

2
📦 Add the memory tool to your assistant

You easily include this memory feature into your AI setup so it can start learning.

3
🔗 Link your AI thinking service

You connect an LLM provider that the tool uses to understand, classify, and organize memories securely.

4
🚀 Launch your remembering assistant

With one simple start, your AI is now ready to chat and build a memory bank.

5
💬 Have natural conversations

You talk to your AI about your life, preferences, projects, and experiences.

6
💾 It quietly saves key memories

Behind the scenes, it picks out the most useful info and stores it neatly by type.

7
✨ Get personalized recall

Next time you chat, it brings back just the right past details to make responses spot-on.

🎉 Your AI feels like an old friend

Now your assistant remembers who you are, what you like, and your shared history perfectly.
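The steps above boil down to a small save-and-recall loop. The sketch below is illustrative only: `MemoryStore`, `Memory`, and `Category` are invented names, not epro-memory's actual API, and a plain in-memory array stands in for the real LanceDB vector store.

```typescript
// The six categories the plugin files memories under.
type Category =
  | "profile" | "preferences" | "entities"
  | "events" | "cases" | "patterns";

interface Memory {
  category: Category;
  abstract: string; // a one-sentence summary of the captured detail
}

class MemoryStore {
  private items: Memory[] = [];

  // Step 6: quietly save a key memory, tagged by type.
  save(memory: Memory): void {
    this.items.push(memory);
  }

  // Step 7: bring back past details of a given type for the next chat.
  recall(category: Category, limit = 3): Memory[] {
    return this.items
      .filter((m) => m.category === category)
      .slice(0, limit);
  }
}

const store = new MemoryStore();
store.save({ category: "preferences", abstract: "User prefers dark mode." });
store.save({ category: "events", abstract: "User shipped v1.0 last week." });

console.log(store.recall("preferences")[0].abstract);
```

In the real plugin the save step runs automatically after each agent turn rather than being called by hand.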


Star Growth

This repo grew from 15 to 63 stars since being found.
AI-Generated Review

What is epro-memory?

epro-memory is a TypeScript plugin that adds persistent, LLM-powered memory to agents in frameworks like Clawbot. It auto-extracts insights from conversations, classifies them into 6 categories (profile, preferences, entities, events, cases, patterns) at three tiers (L0: one-sentence abstracts for quick recall; L1: structured summaries; L2: full narratives), and stores them in LanceDB for vector search. Developers get stateful agents that remember user preferences, past events, and reusable patterns without manual logging.
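A tiered record along the lines described above might be shaped like this. The field names (`l0`, `l1`, `l2`, `embedding`) are assumptions for illustration, not epro-memory's documented schema:

```typescript
// Illustrative shape of a classified, tiered memory record.
type Category =
  | "profile" | "preferences" | "entities"
  | "events" | "cases" | "patterns";

interface TieredMemory {
  category: Category;
  l0: string;                 // one-sentence abstract for quick recall
  l1: Record<string, string>; // structured summary of key facts
  l2: string;                 // full narrative
  embedding?: number[];       // vector used for similarity search
}

const example: TieredMemory = {
  category: "preferences",
  l0: "User prefers TypeScript with strict mode enabled.",
  l1: { language: "TypeScript", strictMode: "true" },
  l2: "Across several sessions the user consistently asked for strict " +
      "TypeScript settings and rejected plain JavaScript suggestions.",
};

console.log(example.l0);
```

The point of the tiers is that recall can inject just the cheap L0 line and only expand to L1/L2 when the conversation needs detail.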

Why is it gaining traction?

Among LLM memory tools on GitHub, it stands out for automatic capture via agent_end hooks, LLM-driven deduplication (create/merge/skip), and smart recall that injects category-grouped context before each query. The embedded LanceDB store means no server is needed, and configurable recall limits and similarity thresholds help agents feel smarter across sessions. Solid bilingual docs and 106 tests across the extraction pipeline build trust for autonomous agent systems.
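The create/merge/skip decision can be sketched as a similarity check against existing memories. The thresholds and the `cosineSimilarity` helper below are assumptions for illustration; in epro-memory the comparison runs against LanceDB vectors and an LLM makes the final call:

```typescript
type DedupAction = "create" | "merge" | "skip";

// Plain cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, na = 0, nb = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    na += a[i] * a[i];
    nb += b[i] * b[i];
  }
  return dot / (Math.sqrt(na) * Math.sqrt(nb));
}

// Hypothetical thresholds: near-duplicates are skipped, closely related
// memories merged, everything else stored as new.
function decide(newVec: number[], existingVec: number[]): DedupAction {
  const sim = cosineSimilarity(newVec, existingVec);
  if (sim > 0.95) return "skip";
  if (sim > 0.80) return "merge";
  return "create";
}

console.log(decide([1, 0], [1, 0])); // identical vectors -> "skip"
console.log(decide([1, 0], [0, 1])); // orthogonal vectors -> "create"
```

Putting an LLM behind `decide` instead of fixed cutoffs is what lets the real plugin merge memories that are semantically, not just geometrically, close.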

Who should use this?

Agent-framework users building AI agents for industry apps, such as multi-agent collaborative frameworks or autonomous agents navigating complex tasks (video-editing assistance, historical cadastre exploration). It is also a fit for backend devs integrating memory into GUI agents or phone-automation bots that need long-term user context.

Verdict

Grab it if you're prototyping LLM-powered agents: the features deliver real statefulness fast via pnpm install and a JSON config. Still early (63 stars at last count) but mature in docs and tests; monitor adoption before relying on it in production.
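For a sense of the "JSON config" setup, a hypothetical config object covering the knobs the review mentions (recall limit, similarity threshold) might look like this. Every key name here is invented for illustration and is not epro-memory's documented schema:

```typescript
// Hypothetical plugin configuration; key names are illustrative only.
const memoryConfig = {
  provider: "openai",       // which LLM backs extraction and classification
  recallLimit: 5,           // max memories injected before each query
  similarityThreshold: 0.8, // vector-match cutoff used for deduplication
};

console.log(JSON.stringify(memoryConfig, null, 2));
```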
