tinyBigGAMES / VindexLLM
VindexLLM is a pure Delphi, GPU-powered LLM inference engine that uses Vulkan compute shaders to run GGUF models entirely on the GPU. It performs full transformer inference without relying on Python, CUDA, or other external runtimes, requiring only vulkan-1.dll, which is typically included with modern GPU drivers.
In plain terms: VindexLLM is a Windows program that runs AI language models from standard GGUF files directly on your graphics hardware, generating text quickly without Python or special toolkits.
How It Works
VindexLLM lets you chat with an AI model locally on your Windows machine, using your GPU's built-in Vulkan support instead of extra runtimes or toolkits.
Download one of the pre-tested GGUF model files from the links provided.
Add the source to your Delphi project, set the path to your model file, and build once.
Run the program and enter a prompt such as "Explain how a computer works".
The model streams its response token by token, generated entirely on your GPU, and you can keep the conversation going as long as you like.
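The steps above might look roughly like this in Delphi. Note that every identifier here (TVindexLLM, LoadModel, OnToken, Generate, and the model path) is an illustrative assumption, not VindexLLM's documented API; check the repo's examples for the real interface.

```pascal
// Hypothetical sketch of the workflow described above.
// TVindexLLM, LoadModel, OnToken, and Generate are assumed names
// for illustration only; consult the repo's examples for the actual API.
var
  LLM: TVindexLLM;
begin
  LLM := TVindexLLM.Create;
  try
    // 1. Point at the GGUF model file you downloaded.
    if LLM.LoadModel('C:\models\model.gguf') then
    begin
      // 2. Receive tokens as they are generated on the GPU.
      LLM.OnToken :=
        procedure(const AToken: string)
        begin
          Write(AToken);  // stream the response word by word
        end;
      // 3. Run a prompt.
      LLM.Generate('Explain how a computer works');
    end;
  finally
    LLM.Free;
  end;
end.
```

The streaming callback is what makes responses appear word by word rather than all at once; the conversation loop would simply call the generate step again with the next user prompt.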