stampby / bleeding-edge
halo-ai CORE bleeding edge — MLX Engine ROCm on Strix Halo. The fastest local LLM inference on consumer hardware.
Bleeding-edge provides experimental, high-performance software for running large language models locally on AMD Strix Halo systems with unified memory.
How It Works
Discover the project, which targets fast local LLM inference on AMD hardware.
Verify that your machine matches the target platform: an AMD Strix Halo system with a large pool of unified memory.
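The hardware-check step above can be sketched in Python. The repo doesn't publish an exact memory requirement, so the 64 GiB threshold below is an assumption standing in for "lots of unified memory"; the parser itself just reads the standard `MemTotal` line from Linux's `/proc/meminfo`.

```python
import re

# Assumption: the project states no exact figure, so 64 GiB stands in for
# "lots of unified memory" on a Strix Halo box. Adjust to your configuration.
STRIX_HALO_MIN_GIB = 64

def memtotal_gib(meminfo_text: str) -> float:
    """Parse the MemTotal line of /proc/meminfo (reported in kB) into GiB."""
    match = re.search(r"^MemTotal:\s+(\d+)\s*kB", meminfo_text, re.MULTILINE)
    if match is None:
        raise ValueError("MemTotal not found in /proc/meminfo output")
    return int(match.group(1)) / (1024 * 1024)

def meets_threshold(meminfo_text: str, min_gib: float = STRIX_HALO_MIN_GIB) -> bool:
    """True if the system's total memory clears the assumed minimum."""
    return memtotal_gib(meminfo_text) >= min_gib

# On a real machine: meets_threshold(open("/proc/meminfo").read())
```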
Run a single setup command to download and prepare the engine and model weights.
Open the chat interface and converse with locally hosted models at low latency.
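Many local inference engines expose an OpenAI-compatible HTTP endpoint for chat; whether this project does is an assumption, as are the example endpoint path and model name below. This sketch only shows how such a chat request body is typically assembled:

```python
import json

def build_chat_payload(model: str, user_message: str,
                       temperature: float = 0.7) -> dict:
    """Assemble an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_message}],
        "temperature": temperature,
        "stream": True,  # stream tokens back as they are generated
    }

if __name__ == "__main__":
    payload = build_chat_payload("local-model", "Hello!")
    print(json.dumps(payload, indent=2))
    # POST this to the engine's endpoint, e.g.
    # http://localhost:8080/v1/chat/completions
    # (host, port, and path are assumptions; check the project's docs)
```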
Run the included benchmarks to measure throughput on your own machine.
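The benchmark numbers local-inference projects report are usually tokens per second. A minimal timing sketch follows; the `generate()` stub is a hypothetical placeholder for the real engine call, not this project's API:

```python
import time

def tokens_per_second(token_count: int, elapsed_s: float) -> float:
    """Throughput metric most local-inference benchmarks report."""
    if elapsed_s <= 0:
        raise ValueError("elapsed time must be positive")
    return token_count / elapsed_s

if __name__ == "__main__":
    # Hypothetical stub standing in for the real engine's generate call.
    def generate(prompt: str) -> list[str]:
        return prompt.split() * 50  # placeholder "tokens"

    start = time.perf_counter()
    tokens = generate("benchmark prompt for throughput measurement")
    elapsed = time.perf_counter() - start
    print(f"{tokens_per_second(len(tokens), elapsed):.1f} tok/s")
```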
Configure the engine to start automatically at boot so inference is available on demand.
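One common way to do this on Linux is a systemd user service. The unit below is a sketch only: the `halo-serve` binary name and install path are placeholders, not the project's actual CLI.

```ini
# ~/.config/systemd/user/halo-ai.service
# (service name, binary name, and path are placeholders)
[Unit]
Description=Local LLM inference engine
After=network.target

[Service]
ExecStart=%h/.local/bin/halo-serve
Restart=on-failure

[Install]
WantedBy=default.target
```

Enable it with `systemctl --user enable --now halo-ai.service`.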
The result is a fast, private LLM assistant running entirely on your own hardware, with no network round-trips.