Percepta-Core

Compile programs directly into transformer weights. Includes a 2D convex-hull KV cache with O(log n) inference.

47 stars · Python · Found Mar 26, 2026

AI Summary

Transformer VM is a transformer model with analytically computed weights that exactly simulates a WebAssembly virtual machine to execute arbitrary compiled C programs.
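The repo's actual weight construction isn't detailed in this summary. As a minimal sketch of the "analytically computed weights" idea, the code below hand-builds a matrix (no training involved) whose matrix-vector product deterministically steps a tiny three-state machine over one-hot state vectors; this is the same principle, scaled up enormously, that lets fixed transformer weights implement a VM step function. All names here are this sketch's own, not the repo's API.

```python
# Minimal illustration (not the repo's construction): a "weight" matrix with
# analytically chosen entries, applied to a one-hot state vector, computes the
# next state of a deterministic machine exactly, with no training.

NUM_STATES = 3

# Desired transition function: 0 -> 1 -> 2 -> 0
transition = [1, 2, 0]

# Build W analytically: W[next][cur] = 1.0 encodes the transition table.
W = [[0.0] * NUM_STATES for _ in range(NUM_STATES)]
for cur, nxt in enumerate(transition):
    W[nxt][cur] = 1.0

def one_hot(state: int) -> list:
    v = [0.0] * NUM_STATES
    v[state] = 1.0
    return v

def step(state_vec: list) -> list:
    # Plain matrix-vector product: exactly one 1.0 lands on the next state.
    return [sum(W[i][j] * state_vec[j] for j in range(NUM_STATES))
            for i in range(NUM_STATES)]

state = one_hot(0)
trace = [state.index(max(state))]
for _ in range(5):
    state = step(state)
    trace.append(state.index(max(state)))

print(trace)  # deterministic cycle: [0, 1, 2, 0, 1, 2]
```

Because the weights are constructed rather than learned, execution is exact and repeatable, which is the property the summary above is claiming at VM scale.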

How It Works

1
🔍 Discover Transformer VM

Discover transformer-vm, a project that turns a transformer model into a small computer capable of running compiled programs.

2
📦 Get ready

Install Python and the project's command-line tools to set up your environment.

3
⚙️ Install with one click

Run a single install command to bring the transformer VM to life on your machine.

4
🚀 Watch it run examples

Run the bundled examples and watch it work through arithmetic, hello world, and even a step-by-step Sudoku solver.

5
✏️ Try your own program

Write a tiny C program, feed it in, and watch the AI execute it perfectly.

🎉 AI becomes a computer

Your programs run exactly inside the model, showing that a transformer with analytically constructed weights can compute like a real machine.


AI-Generated Review

What is transformer-vm?

Transformer-vm compiles C programs directly into transformer model weights that simulate a WebAssembly virtual machine. Its Python CLI tools handle each stage: `wasm-compile` turns C into tokens, `wasm-build` produces the weights, and `wasm-run` performs inference at roughly 30K tok/s. The model executes arbitrary WASM bytecode autoregressively, including loops, conditionals, and memory operations. You can run a universal model, or specialize one for a single program to cut its input size.
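The repo's actual token format isn't shown on this page, so as a conceptual sketch of "executing bytecode autoregressively," the toy interpreter below emits one complete machine state per decode step for a miniature stack bytecode, mirroring how a transformer VM produces the next machine state at each inference step. The `PUSH`/`ADD`/`HALT` opcodes and `next_state` are invented for illustration, not the repo's instruction set.

```python
# Toy sketch of autoregressive execution: each "decode step" maps the current
# machine state (program counter, stack) to the next one, the way a
# transformer VM emits the next state token by token.

PUSH, ADD, HALT = "PUSH", "ADD", "HALT"

# A tiny program: push 2, push 40, add, halt.
program = [(PUSH, 2), (PUSH, 40), (ADD, None), (HALT, None)]

def next_state(state):
    """One deterministic decode step: (pc, stack) -> (pc, stack)."""
    pc, stack = state
    op, arg = program[pc]
    if op == PUSH:
        return pc + 1, stack + [arg]
    if op == ADD:
        return pc + 1, stack[:-2] + [stack[-2] + stack[-1]]
    return pc, stack  # HALT is a fixed point

state = (0, [])
trace = [state]
while program[state[0]][0] != HALT:
    state = next_state(state)
    trace.append(state)

print(state[1][-1])  # top of stack after the program halts: 42
```

The full state trace (`[(0, []), (1, [2]), (2, [2, 40]), (3, [42])]`) is the analogue of the token stream the model generates; in the real system that trace can run to millions of tokens, which is why the KV cache design below matters.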

Why is it gaining traction?

It demonstrates that transformers can act as deterministic computers, not just predictors: the analytically computed weights give exact WASM execution with no training needed. The O(log n) convex-hull KV cache keeps million-token traces (e.g., the Sudoku solver) from slowing down, and MILP-based layer scheduling minimizes parameter count. Developers also like the one-command workflow for compiling a program and running it through a transformer VM.
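The internals of the repo's 2D convex-hull KV cache aren't reproduced on this page. As an illustration of how a convex hull enables O(log n) queries at all, here is the classic "convex hull trick" in Python: keep only the lines on the upper envelope, then answer maximum queries by binary-searching the envelope's breakpoints. The function names (`build_upper_envelope`, `query`) belong to this sketch, not to the repo.

```python
import bisect

def build_upper_envelope(lines):
    """lines: (slope, intercept) pairs. Keep only lines that are maximal
    somewhere, i.e. the upper envelope, in increasing-slope order."""
    best = {}
    for m, b in lines:                 # for equal slopes, keep the higher line
        if m not in best or b > best[m]:
            best[m] = b
    hull = []
    for m, b in sorted(best.items()):  # increasing slope
        while len(hull) >= 2:
            (m1, b1), (m2, b2) = hull[-2], hull[-1]
            # hull[-1] is never the maximum once (m, b) exists if (m, b)
            # overtakes hull[-2] no later than hull[-1] did.
            if (b1 - b) * (m2 - m1) <= (b1 - b2) * (m - m1):
                hull.pop()
            else:
                break
        hull.append((m, b))
    # x-coordinates where the maximizing line changes (strictly increasing).
    breaks = [(hull[i + 1][1] - hull[i][1]) / (hull[i][0] - hull[i + 1][0])
              for i in range(len(hull) - 1)]
    return hull, breaks

def query(hull, breaks, x):
    """Return max_i (m_i * x + b_i) in O(log n) via binary search."""
    m, b = hull[bisect.bisect_left(breaks, x)]
    return m * x + b
```

The payoff is that a query touches O(log n) entries instead of scanning every stored item, which is the property a long-running execution trace needs from its cache.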

Who should use this?

AI researchers probing transformer limits, whether as compilers or for mechanistic interpretability. Embedded developers compiling WiringPi programs or C safety checks into neural runtimes. Hobbyists curious about running compiled C programs through transformer weights.

Verdict

Grab it for proofs of concept: the docs shine with examples and the tests cover end-to-end runs, but at 47 stars this is early alpha. Solid for research; wait for production hardening if scaling matters.


