aikoschurmann / zog

⚡ A blisteringly fast, zero-allocation JSONL search engine in Zig. Query and aggregate massive datasets at 4.0 GB/s using SIMD-accelerated "Blind Scanning." Up to 50x faster than jq.

Found Feb 26, 2026 at 22 stars
AI Summary

Zog is a high-performance command-line tool for rapidly querying, filtering, extracting fields from, and aggregating data in large JSONL files.

How It Works

1
🕵️‍♀️ Discover zog

You hear about a super-fast tool that can search huge files of records almost instantly, perfect for digging through big logs without waiting forever.

2
📥 Grab the tool

Download the simple ready-to-run program for your computer and place it where you can easily use it.

3
📁 Pick your data file

Choose the big file full of records you want to explore, like server logs or customer data.

4
❓ Ask your question

Tell it what to find, like 'show people over age 30 with positive balance', and watch it blast through the file at amazing speed.

5
📊 Get exactly what you need

Pick specific details to pull out, or get smart summaries like counts or totals from matching records.

6
🚀 See instant results

Your screen fills with perfectly filtered data or quick stats, way faster than any other tool you've tried.

🎉 Master huge datasets

Now you can analyze massive files in seconds, spotting patterns and insights effortlessly.
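The steps above boil down to a filter-then-aggregate pass over a JSONL file. As a rough illustration only (zog itself is a Zig binary with its own query syntax; the records and field names here are hypothetical), the same workflow looks like this in Python:

```python
import io
import json

# Hypothetical JSONL input: one JSON record per line.
data = io.StringIO(
    '{"name": "Ada", "age": 36, "balance": 120.5}\n'
    '{"name": "Bob", "age": 28, "balance": -3.0}\n'
    '{"name": "Cy", "age": 41, "balance": 7.25}\n'
)

# Step 4: "show people over age 30 with positive balance".
matches = [
    rec for rec in map(json.loads, data)
    if rec["age"] > 30 and rec["balance"] > 0
]

# Step 5: pull out specific fields, or get a quick summary.
names = [rec["name"] for rec in matches]
total = sum(rec["balance"] for rec in matches)

print(names)  # ['Ada', 'Cy']
print(total)  # 127.75
```

The difference is that zog performs this pass over raw bytes at gigabytes per second instead of parsing every record into objects.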

AI-Generated Review

What is zog?

zog is a Zig-built CLI tool for querying JSONL datasets with simple conditions like `age eq 30` or `balance gt 100`, extracting fields via SELECT, or aggregating with count, sum, min, and max, all at speeds up to 3.8 GB/s. It reads from pipes or files, outputs TSV/CSV/JSON, and skips full JSON parsing for grep-like throughput on massive logs. Think jq meets ripgrep, but up to 50x faster for structured extraction.
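The condition syntax quoted above (`age eq 30`, `balance gt 100`) maps onto simple per-field comparisons. A minimal Python sketch of that evaluation, where the operator set and record handling are modeled on the examples rather than taken from zog's actual grammar:

```python
import json
import operator

# Assumed operator names, modeled on the `age eq 30` / `balance gt 100` examples.
OPS = {"eq": operator.eq, "ne": operator.ne,
       "gt": operator.gt, "lt": operator.lt}

def matches(line: str, field: str, op: str, value) -> bool:
    """Parse one JSONL line and test a single condition against it."""
    rec = json.loads(line)
    return field in rec and OPS[op](rec[field], value)

print(matches('{"age": 30, "balance": 150}', "age", "eq", 30))       # True
print(matches('{"age": 30, "balance": 150}', "balance", "gt", 100))  # True
print(matches('{"age": 25}', "balance", "gt", 100))                  # False
```

zog's speed comes from answering the same question without the `json.loads` step at all.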

Why is it gaining traction?

It crushes jq (20-50x faster) and beats ripgrep on complex logic and aggregations, processing data at 1.9-3.8 GB/s with zero allocations via blind scanning. Devs love the Unix-friendly pipelining for live tails or chains with sort and wc, plus type hints like `n:100` for precise numeric matches. No bloat: install a single static binary and query billions of lines instantly.

Who should use this?

SREs filtering production logs for errors or 5xx responses, security analysts hunting through audit trails, and data engineers pre-filtering terabytes of JSONL before loading a warehouse. Ideal for CLI workflows on flat-ish JSONL like access logs or metrics exports; not for deep nesting or schema validation.

Verdict

Grab it for high-volume JSONL work if speed trumps full JSON smarts; the benchmarks back the 50x claims. With 21 stars and 1.0% credibility it's early: solid docs but light tests, so watch for nested-field fixes before trusting it in production.
