kavishka-dot

Minimal real-time OS for STM32H7, built from scratch for edge ML inference and on-device training

Found May 15, 2026 at 10 stars.
AI Analysis
C
AI Summary

Vulcan OS is a lightweight real-time operating system built from scratch for STM32H7 microcontrollers to enable running and training machine learning models directly on edge hardware.

How It Works

1
Discover Vulcan OS

You hear about Vulcan OS, a tiny operating system made for running smart AI models on small computer boards like the Nucleo-H743ZI2.

2
Get Your Hardware

You order a Nucleo-H743ZI2 board, a powerful little device perfect for edge AI projects.

3
Prepare on Your Computer

You install the build tools (CMake and the arm-none-eabi-gcc cross-compiler) on your computer to build the OS and flash it to your board.

4
Build and Launch

With a few simple steps, you build the OS and load it onto your board.

5
Connect and Power Up

You plug the board into your computer via USB, open a serial terminal on the virtual COM port, and see startup messages and blinking LEDs.

6
Watch It Work

You see tasks handling sensors, running simple AI inferences, and reporting stats, while the board's green, yellow, and red LEDs blink as status indicators.

Edge AI Running

Your tiny board now runs machine learning models smoothly on the edge, ready for real-world smart projects.

AI-Generated Review

What is vulcan-os?

Vulcan OS is a minimal real-time operating system in C for STM32H7 microcontrollers, built from scratch to run edge ML inference and on-device training without FreeRTOS, vendor HALs, or TFLite. It delivers preemptive multitasking, continuous ADC sampling via DMA for sensors, SPI for flash or SDR, UART logging, and tensor arenas, all clocked at 480 MHz on Cortex-M7. Developers deploy it via CMake and arm-none-eabi-gcc to Nucleo-H743ZI2 boards, getting debug output over virtual COM for tasks like sensor processing and mock inference.

Why is it gaining traction?

It stands out with a tiny footprint and full control over STM32H7 peripherals, skipping bloated libraries in favor of a small, purpose-built kernel. The hook is its ML-ready allocators and DMA pipelines that feed data straight to planned INT8 ops like matmul and conv2d, plus a clean CMake build and a minimal GitHub Actions workflow. No vendor lock-in means predictable latency for edge AI, unlike heavier stacks.

Who should use this?

Embedded engineers targeting STM32H7 for battery-powered ML sensors, like anomaly detection in IMUs or audio classifiers. Ideal for teams prototyping on-device training with SGD/Adam, needing GPIO toggles, SPI loopback, and arena resets without HAL overhead. Suited for those extending to model loading from external flash.

Verdict

Promising kernel and HAL for niche edge ML on STM32H7, but at 10 stars it's early: phase 2 is complete, with ML ops still planned. Try the demo build if you want a minimal RTOS baseline; skip it for production until tests and docs mature.
