EvanZhouDev / umr

Public

The Unified Model Registry for all your local AI apps.

16 stars · 5 forks · 100% credibility
Found Apr 10, 2026 at 16 stars.
AI Analysis
TypeScript
AI Summary

UMR provides a centralized registry for managing and sharing a single copy of local AI models across multiple desktop applications to save disk space and simplify usage.

How It Works

1. 🔍 Discover shared AI models

You learn how to keep a single shared copy of large AI models across all your local AI apps, saving precious disk space.

2. 📥 Get the helper tool

You install the free helper tool on your computer with a single command.

3. Add your first model

You bring in an AI model from online storage or your own files, and it safely joins your central collection.

4. Choose how to add

🌐 From online

Search for a model online and let it download to your shared spot.

💾 From your files

Point to a model file you already have on your computer.

5. 👀 See your collection

Check your list of shared models, their sizes, and which apps use them.

6. 🔗 Connect to apps

Link a model to your favorite AI apps like a magic shortcut, and it appears ready to use instantly.

🎉 Share and save space

Your apps now all use the same model copies, freeing up gigabytes of space while everything works smoothly.
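The steps above boil down to a handful of commands. A minimal sketch, using only commands the AI review on this page mentions (`repo-name` and `model-name` are placeholders, not real identifiers); it assumes Node.js is available and skips gracefully if umr is not installed:

```shell
# Workflow sketch -- commands taken from the review text on this page.
# repo-name / model-name are placeholders for a real HF repo and model.
if command -v umr >/dev/null 2>&1; then
  umr add hf repo-name        # pull a GGUF model into the shared registry
  umr link ollama model-name  # share the registry copy with Ollama
  umr check --fix             # verify registry integrity, repairing links
else
  echo "umr not installed; install with: npm i -g umr-cli"
fi
```

The same `umr link` step can target other supported apps (LM Studio, Jan), and umr can hand you raw file paths for runtimes like llama.cpp.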

AI-Generated Review

What is umr?

umr is a TypeScript CLI that acts as a unified model registry for local AI apps, letting you add GGUF models from Hugging Face repos or local files once and link them instantly to tools like LM Studio, Ollama, or Jan. It eliminates duplicate downloads across apps, saving disk space while centralizing management—run `umr add hf repo-name` to grab a model, then `umr link ollama model-name` to share it everywhere. You get raw paths for other runtimes too, like llama.cpp.

Why is it gaining traction?

In a world of fragmented local AI setups, umr stands out with near-instant hardlink-based sharing that leverages your existing HF cache, plus built-in integrity checks via `umr check --fix`. Developers dig the no-fuss workflow: one registry, zero extra copies, and progress bars for adds. It's a lightweight, unified solution for juggling models without app-specific headaches.
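The hardlink trick that makes sharing "near-instant" can be demonstrated in plain shell. The paths below are illustrative only (a hypothetical `/tmp/umr-demo` layout, not umr's actual directory structure):

```shell
# Two directory entries pointing at one copy of the data on disk --
# the same mechanism that lets a registry and an app share a model file.
rm -rf /tmp/umr-demo
mkdir -p /tmp/umr-demo/registry /tmp/umr-demo/ollama
printf 'fake-gguf-bytes' > /tmp/umr-demo/registry/model.gguf

# "Link" the model into the app's directory: instant, zero extra bytes.
ln /tmp/umr-demo/registry/model.gguf /tmp/umr-demo/ollama/model.gguf

# Both names now share one inode; the link count reports 2.
stat -c '%h' /tmp/umr-demo/registry/model.gguf   # prints 2
```

Because both paths reference the same inode, a 10GB model linked into three apps still occupies 10GB, not 30GB.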

Who should use this?

Local AI tinkerers running Ollama alongside LM Studio or Jan, who hate redundant 10GB+ downloads. Experimenters pulling GGUF quants from HF and scripting with llama.cpp paths. Teams testing unified model workflows before scaling to cloud.

Verdict

Worth installing via `npm i -g umr-cli` if you run multiple local AI apps -- solid CLI, thorough docs, and CLI tests show promise despite only 16 stars (and a 100% credibility score). Early maturity means watch for edge cases, but it solves a real pain point today.

