orailnoor

Run uncensored AI models locally on your Android phone. One-script setup via Termux + llama.cpp. No root, no cloud, no internet needed.

Found Apr 18, 2026 at 11 stars.
AI Summary

This repository provides a one-script installer to run local, offline AI language models on Android devices via Termux.

How It Works

1. 📱 Discover phone AI

You hear about a cool way to run your own smart AI helper right on your Android phone, completely private and offline.

2. 🛒 Get Termux app

Download the free Termux app from F-Droid or the app store – it's like opening a little computer window on your phone.
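Once Termux is open, the usual first step is refreshing its package list and pulling in the basic build tools. A minimal sketch (these are standard Termux packages; the repo's installer may pick its own set):

```shell
# Refresh Termux's package index and upgrade what's installed.
pkg update -y && pkg upgrade -y

# Typical build prerequisites for compiling llama.cpp on-device.
pkg install -y git cmake clang wget
```

These commands only work inside Termux itself, not a regular desktop shell.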

3. 🧠 Add AI brains

Make a folder named AI_Models in your phone's storage and drop in a downloaded AI model file, or let the setup grab one for you.
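Creating that folder from Termux takes two commands. A sketch, assuming the /sdcard/AI_Models/ path mentioned in the review below:

```shell
# Grant Termux access to shared storage (pops an Android permission dialog).
termux-setup-storage

# Create the folder the installer scans for models.
mkdir -p /sdcard/AI_Models

# Copy or download a .gguf model file into it, e.g. with a file manager.
```

You can also make the folder with any Android file manager; the name just has to match what the setup script expects.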

4. Run easy setup

🗂️ Pick your model

Choose from models you already saved, if any are there.

📥 Grab a free one

Select a recommended uncensored model to download automatically.
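If you'd rather fetch a model by hand, a download is just one command. A sketch with a placeholder URL – the installer chooses the actual download links, and <MODEL_URL> here stands in for a real GGUF file link (e.g. from Hugging Face):

```shell
# Manually fetch a quantized GGUF model into the folder the setup scans.
# <MODEL_URL> is a placeholder, not a real address.
wget -O /sdcard/AI_Models/model.gguf "<MODEL_URL>"
```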

5. 🚀 Setup finishes

Sit back as it builds your AI engine – it takes 10-30 minutes the first time, then you're ready to go.
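Under the hood, that long first-time build is roughly a llama.cpp compile. A sketch of what the installer automates for you (the repo's exact flags may differ):

```shell
# Fetch and build llama.cpp from source -- this is the slow part.
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build -j"$(nproc)"   # 10-30 min on a phone CPU
```

Subsequent launches reuse the compiled binaries, which is why only the first run is slow.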

6. ▶️ Start chatting

Type one quick command in Termux to launch, then open your phone's web browser and visit the local chat page.
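That "one quick command" likely wraps llama.cpp's bundled server. A sketch using llama-server's standard flags, assuming the model path and port from this page (the repo's launch script may differ):

```shell
# Serve the model locally; the web chat UI lives at the same address.
./build/bin/llama-server \
  -m /sdcard/AI_Models/model.gguf \
  --host 127.0.0.1 --port 8080

# Then open http://127.0.0.1:8080 in the phone's browser.
```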

💬 Your private AI

Enjoy talking to your smart, uncensored AI companion anytime – everything stays on your phone, no internet needed.

AI-Generated Review

What is termux-llm?

termux-llm lets you run uncensored local LLMs like Gemma-2-2B or Phi-3.5 on your Android phone via a single shell script in Termux, powered by llama.cpp. Drop a GGUF model in /sdcard/AI_Models/, run the installer, and chat via browser at 127.0.0.1:8080 or hit the OpenAI-compatible API at /v1 – no root, cloud, or internet needed after setup. It solves offline AI access on mobile, keeping everything on-device for privacy.
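Because the endpoint is OpenAI-compatible, any standard chat-completions request should work against it. A sketch, assuming the server from the setup above is running on port 8080:

```shell
# Query the local OpenAI-compatible endpoint exposed by llama-server.
curl -s http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Hello"}]}'
```

This is also the hook that lets front-ends like SillyTavern talk to the phone-local model.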

Why is it gaining traction?

A one-command setup skips the usual hassle of getting MLC LLM running in Termux: it auto-builds llama.cpp and serves a web UI plus an API that apps like SillyTavern can use. Uncensored "abliterated" models refuse nothing, and it auto-detects RAM to suggest models that fit, delivering 2-10 tokens/sec on mid-range phones. Developers like the no-fuss path to a local LLM in Termux without compiling anything from scratch.
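The RAM auto-detection is simple to picture in shell. A minimal illustrative sketch – the thresholds and model names here are assumptions, not the repo's actual logic:

```shell
# Read total memory (in kB) from /proc/meminfo and convert to whole GB.
ram_gb=$(awk '/MemTotal/ {printf "%d", $2 / 1024 / 1024}' /proc/meminfo)

# Illustrative tiers only; the real script's cutoffs may differ.
if [ "$ram_gb" -ge 8 ]; then
  echo "Suggest: Phi-3.5-mini (Q4 quant)"
elif [ "$ram_gb" -ge 4 ]; then
  echo "Suggest: Gemma-2-2B (Q4 quant)"
else
  echo "Suggest: a sub-2B model"
fi
```

Quantized (Q4) GGUF files need roughly 0.5-0.7 GB of RAM per billion parameters, which is why a 2B model is the safe pick on a 4 GB phone.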

Who should use this?

Android tinkerers who want to run an LLM in Termux for offline coding help or note-taking on the go. Privacy-focused users ditching cloud LLMs for on-phone inference. Field devs in low-connectivity spots needing quick AI queries without data leaks.

Verdict

Grab it if you're experimenting with LLMs in Termux on Android – solid docs and easy model swaps make it beginner-friendly despite its 11 stars and 1.0% credibility score. Still early; test on non-critical setups, as it's single-script simple but unproven at scale.

