uncSoft

Local LLM Testing & Benchmarking for Apple Silicon

47 stars · 4 forks · 100% credibility
Found Feb 11, 2026 at 9 stars (5x growth since)

Language: Swift
AI Summary

Anubis is a native macOS app for benchmarking, comparing, and managing local large language models on Apple Silicon with real-time hardware telemetry.

How It Works

1
👀 Discover Anubis

You hear about a handy Mac app that lets you test and compare AI models running locally on your computer, showing exactly how they perform with live stats.

2
📥 Get the app

Download the open-source app from GitHub and open it on your Apple Silicon Mac – it launches smoothly like any other app.

3
🔌 Link your AI helper

Connect to your local AI service like Ollama so the app can talk to your downloaded models.
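Under the hood, backends like Ollama expose a local HTTP API (port 11434 by default, with `/api/tags` listing pulled models). A minimal Python sketch of what a client-side connection looks like — this is illustrative, not Anubis's actual Swift code, and it only builds the request and parses a sample response so no live server is needed:

```python
import json
from urllib.request import Request

# Ollama serves a local HTTP API on port 11434 by default; /api/tags lists
# the models you have pulled.
OLLAMA_URL = "http://localhost:11434/api/tags"

def list_models_request() -> Request:
    """Build the GET request a client would send to enumerate local models."""
    return Request(OLLAMA_URL, method="GET")

def parse_models(raw: str) -> list[str]:
    """Extract model names from an /api/tags JSON response."""
    return [m["name"] for m in json.loads(raw).get("models", [])]

# Abridged example of the response shape:
sample = '{"models": [{"name": "llama3:8b"}, {"name": "qwen2.5:7b"}]}'
print(parse_models(sample))  # ['llama3:8b', 'qwen2.5:7b']
```

Any OpenAI-compatible server works the same way in principle: a known local URL, a request, a JSON response.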

4
🚀 Run your first test

Pick a model, type a question or use a ready prompt, hit run, and watch live charts light up with speed, power use, and hardware stats in real time.
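The headline number in any run like this is tokens per second. A small sketch of how streaming throughput is typically computed from per-token arrival timestamps (a generic convention, not Anubis's internal code — generation speed is usually measured from first to last token, excluding prompt-processing latency):

```python
def tokens_per_second(token_times: list[float]) -> float:
    """Throughput from per-token arrival timestamps (in seconds).

    Measured from the first token to the last, so the latency before
    the first token (prompt processing) is excluded.
    """
    if len(token_times) < 2:
        return 0.0
    elapsed = token_times[-1] - token_times[0]
    return (len(token_times) - 1) / elapsed

# 11 tokens arriving every 50 ms -> 20 tokens/sec
stamps = [i * 0.05 for i in range(11)]
print(round(tokens_per_second(stamps), 1))  # 20.0
```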

5
Test one or compare two

📊 Deep benchmark

Focus on one model's full performance dashboard, with charts and saved run history.

🏆 Side-by-side battle

Run two models at once, vote on the winner, and see which thinks faster and smarter.
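The battle flow boils down to: generate from each model, then record a vote. A toy Python harness showing the shape of such an A/B arena — `generate` and `judge` are stand-ins for the real backend call and the human vote, not anything from the app:

```python
def arena(models, prompt, generate, judge):
    """Run each model on the prompt and record a vote.

    `generate(model, prompt)` produces an answer; `judge(outputs)`
    picks the winning model name. Both are stubs here.
    """
    outputs = {m: generate(m, prompt) for m in models}
    return outputs, judge(outputs)

# Stubbed demo: model "b" always answers at greater length, and the
# judge votes for the longer answer.
gen = lambda m, p: p.upper() if m == "a" else p.upper() + "!"
judge = lambda outs: max(outs, key=lambda m: len(outs[m]))

outs, winner = arena(["a", "b"], "hello", gen, judge)
print(winner)  # b
```

Running the two models sequentially (as sketched) keeps the hardware telemetry for each run clean; running them in parallel trades that for speed.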

6
📦 Manage your collection

Browse, inspect details, pull new models, or unload ones hogging memory right from the app.
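With Ollama as the backend, pulling is a POST to `/api/pull`, and a loaded model can be evicted from memory by sending a request with `"keep_alive": 0`. A sketch of the JSON payloads such a manager would send — payload construction only, no server contacted, and the field names follow Ollama's API rather than anything Anubis-specific:

```python
import json

def pull_payload(name: str) -> str:
    """Body for POST /api/pull to download a model."""
    return json.dumps({"name": name, "stream": False})

def unload_payload(name: str) -> str:
    """An empty prompt with keep_alive=0 asks Ollama to unload the
    model from memory immediately."""
    return json.dumps({"model": name, "prompt": "", "keep_alive": 0})

print(unload_payload("llama3:8b"))
```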

7
🎉 Export your insights

Save beautiful charts, reports, or raw data to share your AI performance discoveries with friends or online.
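The raw-data path is essentially benchmark rows serialized to CSV. A minimal sketch with hypothetical field names (illustrative, not Anubis's actual export schema):

```python
import csv
import io

# Hypothetical rows of the kind a benchmark run might produce.
rows = [
    {"model": "llama3:8b", "tokens_per_sec": 41.2, "gpu_power_w": 18.4},
    {"model": "qwen2.5:7b", "tokens_per_sec": 47.9, "gpu_power_w": 16.1},
]

def to_csv(rows) -> str:
    """Serialize benchmark rows to a CSV string, header row first."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=list(rows[0]))
    writer.writeheader()
    writer.writerows(rows)
    return buf.getvalue()

print(to_csv(rows).splitlines()[0])  # model,tokens_per_sec,gpu_power_w
```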

AI-Generated Review

What is anubis-oss?

Anubis OSS is a native macOS app built in Swift for Apple Silicon that lets you benchmark, compare, and manage local LLMs from backends like Ollama, MLX, LM Studio, or any OpenAI-compatible server. It streams responses with real-time hardware telemetry—tracking GPU/CPU/ANE/DRAM power, memory, thermals, and tokens/sec—so you see exactly how models perform on your M-series chip. Export charts, CSV data, or Markdown reports directly, no screenshots needed.

Why is it gaining traction?

Unlike CLI tools like asitop or chat wrappers that ignore hardware context, Anubis correlates inference speed with power draw and utilization in a clean SwiftUI dashboard. The arena mode runs A/B model battles sequentially or in parallel, with voting and history; the vault aggregates models across backends for easy inspection and unloading. Developers love pulling Ollama models in-app and getting retina-ready exports for sharing local LLM benchmarks on Reddit or in reports.

Who should use this?

Apple Silicon users tuning local LLMs for coding assistants, home assistant integrations, or edge inference—think ML engineers comparing Q4 vs Q8 quantizations, or indie devs optimizing models for local hardware before deployment. It's ideal if you're experimenting with local LLM models on M1-M4 Macs and need precise metrics without terminal hassle.

Verdict

Grab it if you're on Apple Silicon and serious about local LLM benchmarking—early promise in a polished native app, despite 11 stars and 1.0% credibility signaling it's fresh from source. Build from Xcode for full features; watch for stability as it matures.
