Incept5 / gemma4-benchmark

MLX benchmark: Gemma 4 + Qwen 3.5 on Apple Silicon with TurboQuant KV cache

AI Summary

A tool for testing the speed and memory efficiency of Gemma 4 and Qwen 3.5 AI models on Apple Silicon Macs across various text lengths, producing detailed charts and reports.

How It Works

1
🔍 Discover the benchmark

You hear about a handy tool that tests how fast new AI thinking models run on Apple Mac computers.

2
📥 Gather AI models

You download the specific AI models like Gemma and Qwen to a folder on your Mac.

3
✏️ Set your preferences

You update a simple settings file to tell the tool exactly where your AI models are saved (see the config sketch after this list).

4
🚀 Launch the speed tests

You start the tests, and it measures how quickly each model processes long texts and remembers details.

5
📊 Create visual reports

You generate easy-to-read charts and tables showing speeds and memory use for different text lengths.

6

💡 Get clear insights

You now know which AI model performs best on your Mac for quick thinking and handling big conversations.
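
Per the review below, the settings file from step 3 is a YAML config covering models, context lengths, and sampling. A minimal sketch of what it might look like follows; every key name here is an illustrative assumption, not the repo's actual schema:

```yaml
# Hypothetical config sketch: key names are illustrative assumptions,
# not the repo's actual schema. Point the model paths at your LM Studio
# model directory, as described in step 3.
models:
  - name: gemma-4-12b-4bit
    path: ~/.lmstudio/models/mlx-community/gemma-4-12b-4bit   # adjust to your setup
  - name: qwen-3.5-14b-8bit
    path: ~/.lmstudio/models/mlx-community/qwen-3.5-14b-8bit

context_lengths: [4096, 32768, 131072, 262144]   # 4k to 256k tokens

kv_cache:
  turboquant: true    # toggle 2.5-bit KV cache compression on/off

sampling:
  temperature: 0.0
  max_new_tokens: 256
```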

AI-Generated Review

What is gemma4-benchmark?

This Python tool benchmarks Gemma 4 and Qwen 3.5 vision-language models via mlx-vlm on Apple Silicon, measuring prefill and decode throughput at context lengths from 4k to 256k tokens. It quantifies the effect of the TurboQuant KV cache: 2.5-bit compression that boosts decode speeds by up to 19% on long contexts without extra memory. After a quick pip install and pointing the tool at the models in your LM Studio directory, you get interactive HTML charts, Markdown tables, and raw JSON data.
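
As a concrete illustration of the reporting step, here is a minimal sketch that turns raw JSON results into an interactive HTML chart with Plotly. The file name (results.json) and the JSON schema (model, context_length, decode_tps) are assumptions for illustration; the repo's actual output format may differ:

```python
# Minimal sketch: plot decode throughput vs. context length per model
# from the benchmark's raw JSON output. File name and schema are assumed.
import json

import plotly.graph_objects as go

# Assumed schema: a list of runs, each with a model name, context length,
# and decode throughput in tokens/second.
with open("results.json") as f:  # hypothetical output file
    runs = json.load(f)

fig = go.Figure()
for model in sorted({r["model"] for r in runs}):
    points = sorted(
        (r for r in runs if r["model"] == model),
        key=lambda r: r["context_length"],
    )
    fig.add_trace(
        go.Scatter(
            x=[r["context_length"] for r in points],
            y=[r["decode_tps"] for r in points],
            mode="lines+markers",
            name=model,
        )
    )

fig.update_layout(
    title="Decode throughput vs. context length",
    xaxis_title="Context length (tokens)",
    yaxis_title="Decode throughput (tokens/s)",
    xaxis_type="log",
)
fig.write_html("decode_throughput.html")  # self-contained interactive chart
```

Opening decode_throughput.html in a browser then gives a zoomable comparison across models, in the spirit of the charts the tool produces.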

Why is it gaining traction?

Within the MLX community, it stands out for LLM benchmarks tailored to recent Apple hardware such as the M5 Max, comparing 4-bit, 8-bit, and bf16 quantizations as well as TurboQuant versus standard runs, which makes it useful for MLX-vs-GGUF or MLX-vs-PyTorch comparisons. A YAML config lets you tweak models, context lengths, and sampling, and each benchmark runs in an isolated subprocess so peak memory stats are precise. Developers grab it for hard numbers on long-context LLM performance, beyond the usual short-prompt benchmarks.
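
The subprocess-isolation point is worth a sketch: running each configuration in a fresh process keeps one model's allocations from inflating the next run's peak-memory reading. The worker script name (bench_worker.py) and its JSON-on-stdout contract below are hypothetical, not the repo's actual interface:

```python
# Minimal sketch of subprocess isolation for benchmarking: each config
# runs in its own Python process, so its memory footprint dies with it.
import json
import subprocess
import sys

def run_isolated(model_path: str, context_length: int) -> dict:
    """Run one benchmark config in a fresh process and parse its JSON result."""
    proc = subprocess.run(
        [sys.executable, "bench_worker.py",   # hypothetical worker script
         "--model", model_path,
         "--context-length", str(context_length)],
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(proc.stdout)

# Inside the hypothetical worker, peak memory can be read after the run
# (e.g. via mlx.core.get_peak_memory() on Apple Silicon) and included in
# the JSON it prints before the process, and all its allocations, exit.
```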

Who should use this?

Apple ML engineers tuning Gemma or Qwen models in MLX workflows, especially anyone hitting KV cache bottlenecks at 128k+ tokens. MLX-LM enthusiasts validating TurboQuant before production use, researchers comparing quantized against denser setups, and vision-language developers on M5 hardware who need hard numbers before deploying to an OpenAI-compatible MLX server or Swift apps.

Verdict

A solid starting point for MLX LLM benchmarking on Apple hardware: you can run your own tests in minutes and get clean reports. At 17 stars and a 1.0% credibility score, though, it's early days; fork and contribute if you need broader model support or more tests. Worth it for M5 owners eyeing Gemma 4 efficiency.
