Ssenseii / ariana (Public)

πŸ€– Find out which AI models your hardware can run

24 stars · 3 forks · 100% credibility
Found Feb 02, 2026 at 18 stars
Language: Python

AI Summary

A tool that scans your computer's hardware and matches it against hundreds of AI models to recommend which ones you can run locally.

How It Works

1
πŸ” Find the AI Checker

You hear about a handy free tool that checks which AI models your everyday computer can run right at home.

2
πŸ“₯ Bring It Home

Download the tool to your computer and follow a few easy steps to get it ready to use.

3
πŸš€ Kick Off the Check

Open the tool and press start, letting it peek at your computer's insides.

4
πŸ’» Feel Your Computer's Power

It measures your processor speed, memory amount, graphics strength, and storage space to understand what you have.

5
🧠 Learn About AI Helpers

The tool gathers info on over 200 different AI models and what each one needs to run.

6
πŸ“Š See Perfect Matches

It compares everything and sorts out which AI helpers will run smoothly on your setup.

7
πŸ“„ Get Your Personal Guide

Receive a clear report with top recommendations, tips to start small, and ways to speed things up.
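The hardware-scan step above can be sketched with Python's standard library alone (function and field names here are illustrative, not ariana's actual internals; a real cross-platform tool would likely use psutil and add GPU detection):

```python
import os
import shutil

def scan_hardware():
    """Collect a rough hardware profile using only the standard
    library (field names are illustrative, not ariana's actual ones)."""
    profile = {
        "cpu_cores": os.cpu_count() or 1,
        "disk_free_gb": shutil.disk_usage("/").free / 1e9,
    }
    try:
        # POSIX-only RAM query; a portable tool would use psutil instead.
        profile["ram_gb"] = (
            os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 1e9
        )
    except (AttributeError, ValueError, OSError):
        profile["ram_gb"] = None  # unknown on this platform
    return profile

print(scan_hardware())
```

Each model's requirements can then be checked against this profile to decide what runs locally.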

AI-Generated Review

What is ariana?

Ariana scans your system's CPU cores, RAM, GPU VRAM (NVIDIA or AMD), and disk space, then cross-references them against 200+ AI models from Ollama's library (families like Llama, Mistral, and Qwen) to flag which ones can run locally. Run `python main.py` to get a text report (`ai_capability_report.txt`) with compatibility scores, runnable counts (e.g., 158/217), bottlenecks, and optimization tips. It replaces endless trial-and-error downloads onto mismatched hardware by estimating the needs of quantized variants like Q4 and Q5.
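As a rough illustration of why quantization matters here (a common rule of thumb, not necessarily ariana's exact formula): a quantized model needs about parameters × bits/8 bytes of memory, plus overhead for the KV cache and runtime buffers.

```python
def estimated_ram_gb(params_billions: float, quant_bits: int = 4,
                     overhead: float = 1.2) -> float:
    """Rule-of-thumb memory estimate for a quantized model:
    parameters × bits/8, inflated ~20% for KV cache and buffers.
    The overhead factor is an assumption, not a measured value."""
    return params_billions * (quant_bits / 8) * overhead

# A 7B model under this rule of thumb:
print(round(estimated_ram_gb(7, 4), 2))  # 4.2 GB at Q4
print(round(estimated_ram_gb(7, 5), 2))  # 5.25 GB at Q5
```

This is why the same 7B model can be out of reach at full precision yet comfortable at Q4 on an 8 GB machine.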

Why is it gaining traction?

Unlike static charts or manual calcs, Ariana fetches live model data with fallbacks, scores fits via RAM/VRAM ratios, and sorts recommendations by performance tiers (excellent to marginal). Devs dig the quick CLI output and hardware-specific advice, like GPU offloading or quantization tweaks, without digging through Ollama docs or guessing GitHub repos for model specs.
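The ratio-based scoring and tier sorting described above can be sketched as follows (tier cutoffs and per-model memory figures are illustrative assumptions, not ariana's actual values):

```python
def fit_score(required_gb: float, available_gb: float) -> float:
    """Memory-fit ratio: values above 1.0 mean the model fits with headroom."""
    return available_gb / required_gb if required_gb else 0.0

def tier(score: float) -> str:
    """Map a fit score onto a performance tier (hypothetical cutoffs)."""
    if score >= 1.5:
        return "excellent"
    if score >= 1.0:
        return "good"
    if score >= 0.8:
        return "marginal"
    return "insufficient"

# Illustrative quantized-model memory needs in GB (not real Ollama specs).
models = {"llama3-8b-q4": 4.8, "mistral-7b-q4": 4.2, "qwen-14b-q4": 8.4}
available = 8.0  # free RAM/VRAM on this machine, in GB

# Sort recommendations best-fit first, as a report like ariana's would.
ranked = sorted(models, key=lambda m: fit_score(models[m], available), reverse=True)
for name in ranked:
    print(name, tier(fit_score(models[name], available)))
```

With 8 GB available, the two 7B/8B Q4 models land in the top tier while the 14B model scores as marginal.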

Who should use this?

AI hobbyists testing Ollama on desktops before local inference. ML devs evaluating laptops for edge deployment, like running Phi-2 on integrated graphics. Teams auditing hardware for tools like CodeLlama without enterprise GitHub setups.

Verdict

Grab it for fast hardware-model matchingβ€”docs are thorough, MIT-licensed, cross-platform Python. But 22 stars and 1.0% credibility signal early maturity; treat estimates as starting points, not gospel, and verify with real runs.


