noosed / NTTuner

GUI tool to QLoRA/LoRA-fine-tune LLMs and deploy to Ollama. Broad GPU support (NVIDIA/AMD/Intel/Apple) + CPU fallback.

Found Feb 07, 2026 at 14 stars.
AI Analysis
Python
AI Summary

NTTuner is a graphical desktop tool for fine-tuning AI language models with user-provided data and automatically adding them to the local Ollama library.

How It Works

1
🔍 Discover NTTuner

You find this friendly desktop tool that lets everyday people customize AI chatbots using their own examples, with no coding required.

2
📥 Set Up Your Computer

Download the program and install the free Ollama app so you can run AI models right on your machine.

3
🚀 Open the App

Launch it and the tool automatically checks your computer's graphics setup to use the fastest training method available.

4
🤖 Choose a Base AI

Pick from a handy list of popular AI models that match what you want to customize.

5
📝 Add Your Data

Simply drag and drop a file of your own conversations or examples, like questions and helpful answers.

6
▶️ Start Customizing

Tweak a few easy settings like how much to learn, then hit start and watch the live progress as it trains.

7
🎉 Enjoy Your Personal AI

Your one-of-a-kind AI model pops up in Ollama, ready to chat exactly how you trained it with your style.
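The final deploy step above can be sketched in Python: write an Ollama Modelfile next to the fine-tuned GGUF weights and register it with `ollama create`. The file paths, model name, and helper functions here are illustrative assumptions, not NTTuner's actual code.

```python
import pathlib
import subprocess


def build_modelfile(gguf_path: str, system_prompt: str) -> str:
    """Minimal Ollama Modelfile: base weights plus a system prompt."""
    return f'FROM {gguf_path}\nSYSTEM """{system_prompt}"""\n'


def deploy_to_ollama(gguf_path: str, model_name: str, system_prompt: str) -> None:
    """Write the Modelfile beside the GGUF and add the model to the local library."""
    modelfile = pathlib.Path(gguf_path).with_name("Modelfile")
    modelfile.write_text(build_modelfile(gguf_path, system_prompt))
    # `ollama create` reads the Modelfile and registers the model locally.
    subprocess.run(["ollama", "create", model_name, "-f", str(modelfile)], check=True)
```

After `deploy_to_ollama("out/model.q4_k_m.gguf", "my-custom-model", "You answer in my style.")`, the model would appear in `ollama list` and be available to chat with.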


AI-Generated Review

What is NTTuner?

NTTuner is a Python desktop GUI app that lets you LoRA-fine-tune large language models locally and deploy them straight to Ollama. It handles dataset loading via drag-and-drop, config saving as JSON, real-time training logs, and automatic GGUF conversion with quantization options like q4_k_m. With broad GPU support for NVIDIA, AMD, Intel, and Apple Silicon, plus CPU fallback, it simplifies local model customization without CLI wrestling.
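The "config saving as JSON" feature can be sketched as a save/load pair that merges over defaults. The field names and default values below are hypothetical; NTTuner's actual config schema may differ.

```python
import json
import pathlib

# Hypothetical field names and defaults; the real schema may differ.
DEFAULT_CONFIG = {
    "base_model": "llama3:8b",
    "dataset_path": "data/train.jsonl",
    "learning_rate": 2e-4,
    "batch_size": 4,
    "lora_rank": 16,
    "quantization": "q4_k_m",
}


def save_config(cfg: dict, path: str) -> None:
    """Persist the user's training settings as pretty-printed JSON."""
    pathlib.Path(path).write_text(json.dumps(cfg, indent=2))


def load_config(path: str) -> dict:
    """Merge a saved config over defaults, so files missing newer keys still load."""
    return {**DEFAULT_CONFIG, **json.loads(pathlib.Path(path).read_text())}
```

Merging over defaults is a common pattern for GUI tools: old config files keep working as new settings are added.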

Why is it gaining traction?

It stands out with automatic backend detection (picking CUDA, ROCm, MPS, Vulkan, or OpenCL based on your hardware), making fine-tuning accessible beyond NVIDIA users. The DearPyGui toolkit delivers a responsive interface with background training, model discovery from HuggingFace or Ollama, and one-click imports, saving hours on setup. Developers appreciate the no-fuss workflow on any OS, with guidance for Intel and AMD GPU tweaks.
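The backend-detection idea can be sketched with a stdlib-only probe that checks for vendor tooling and falls back to CPU. This is an approximation for illustration: real detection (e.g. via PyTorch's `torch.cuda.is_available()`) would query the compute libraries directly, and NTTuner's actual logic is not shown here.

```python
import platform
import shutil


def detect_backend() -> str:
    """Pick the fastest plausible accelerator backend, falling back to CPU.

    Stdlib-only approximation: probes for vendor CLI tools rather than
    querying compute libraries, so treat the result as a rough guess.
    """
    if shutil.which("nvidia-smi"):
        return "cuda"    # NVIDIA driver tooling present
    if shutil.which("rocm-smi") or shutil.which("rocminfo"):
        return "rocm"    # AMD ROCm stack present
    if platform.system() == "Darwin" and platform.machine() == "arm64":
        return "mps"     # Apple Silicon Metal backend
    if shutil.which("vulkaninfo"):
        return "vulkan"  # generic GPU path for Intel and others
    return "cpu"         # universal fallback
```

Ordering matters: the check list runs from the typically fastest backend down to the universal CPU fallback, mirroring the "fastest training method available" behavior described above.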

Who should use this?

ML tinkerers on AMD or Intel GPUs tired of cloud dependency, indie devs building custom chatbots with 7B models, or researchers testing LoRA on Apple Silicon without Unsloth lock-in. It's for anyone needing quick local fine-tuning on datasets in JSONL format, especially with limited VRAM via batch tweaks or CPU mode.
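The JSONL dataset format mentioned above is one JSON object per line. A minimal loader, with hypothetical `instruction`/`response` field names (the tool's expected schema may differ), might look like:

```python
import json


def load_jsonl_dataset(path: str) -> list[dict]:
    """Read one JSON object per line; blank lines are skipped."""
    with open(path, encoding="utf-8") as f:
        return [json.loads(line) for line in f if line.strip()]


# One instruction/response pair per line is a common fine-tuning layout:
EXAMPLE_LINE = json.dumps({
    "instruction": "What is LoRA?",
    "response": "A low-rank adapter method for efficient fine-tuning.",
})
```

A training file is then just these lines concatenated, one example per line, which keeps large datasets streamable without loading a single giant JSON array.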

Verdict

Try it if you want broad hardware flexibility in a Python GUI tool, but temper expectations: a 1.0% credibility score and 14 stars signal an early-stage project with solid docs but unproven stability. Solid for experiments; skip it for production until it gets more battle-testing.


