QingGo / engram-peft
🚀 Engram-PEFT: an unofficial implementation of DeepSeek Engram. Injects high-capacity conditional memory into LLMs via sparse-retrieval parameter-efficient fine-tuning (PEFT), with sparse updates and no increase in inference FLOPs.
Engram-PEFT is an open-source Python library that implements a parameter-efficient method to add scalable, sparse memory retrieval to transformer language models, closely following the DeepSeek Engram research paper.
How It Works
Engram-PEFT adds a large, sparsely retrieved memory bank to an existing model so it can recall more facts without slowing down.
You install the library and pick a pretrained base model, such as a small causal language model.
You insert memory layers at selected transformer blocks so the model can retrieve stored facts on demand.
You fine-tune on your data, updating either only the memory parameters or the full model.
Because only a handful of memory slots are retrieved per token, recall scales with memory size while inference cost stays flat.
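The retrieval step above can be sketched as a toy sparse key-value memory layer. This is a minimal illustration of the idea, not Engram-PEFT's actual API: the class name, shapes, and top-k lookup are all assumptions made for the example.

```python
# Toy sketch of a sparse key-value memory layer in the spirit of
# Engram-style conditional memory. Names and shapes are illustrative
# assumptions, not the library's real interface.
import numpy as np

class SparseMemory:
    def __init__(self, num_slots, d_model, top_k=4, seed=0):
        rng = np.random.default_rng(seed)
        # Large trainable memory bank: keys for retrieval, values added
        # back into the residual stream. Values start at zero, so the
        # layer is a no-op until it is trained.
        self.keys = rng.standard_normal((num_slots, d_model)) / np.sqrt(d_model)
        self.values = np.zeros((num_slots, d_model))
        self.top_k = top_k

    def __call__(self, h):
        # h: (seq_len, d_model) hidden states from the host model.
        scores = h @ self.keys.T  # (seq_len, num_slots)
        # Indices of the top_k highest-scoring slots per token.
        top = np.argpartition(-scores, self.top_k, axis=-1)[:, :self.top_k]
        top_scores = np.take_along_axis(scores, top, axis=-1)
        # Softmax over only the retrieved slots: per-token cost depends on
        # top_k, not num_slots, so inference stays cheap as memory grows.
        w = np.exp(top_scores - top_scores.max(-1, keepdims=True))
        w /= w.sum(-1, keepdims=True)
        retrieved = (w[..., None] * self.values[top]).sum(axis=1)
        return h + retrieved  # residual update

mem = SparseMemory(num_slots=1024, d_model=64)
out = mem(np.zeros((8, 64)))
print(out.shape)  # (8, 64)
```

Training only `self.keys` and `self.values` while freezing the host model is what makes the approach parameter-efficient: the memory bank can be very large, but each token touches only `top_k` slots.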