getbeton / dryfit

Public

Python project for generating synthetic analytics databases and hidden benchmark truth so agentic systems can be tested on product-signal discovery tasks

27 stars · 100% credibility
Found Apr 15, 2026 at 19 stars.
AI Analysis
Python
AI Summary

Dryfit generates synthetic product analytics event data with embedded positive and negative signal paths, plus ground truth benchmarks, for testing AI agents on discovery tasks.

How It Works

1. 🔍 Discover dryfit

You find the repo on GitHub: a tool that generates realistic synthetic customer-activity data for testing analytics agents.

2. 📥 Get it set up

You install the project locally (the CLI runs via `uv`) so it is ready to use.

3. 📋 Pick a scenario

You choose a prebuilt YAML config matching a real-world business model, such as seat-based team growth or transaction-volume tracking.

4. ⚙️ Generate test data

A single CLI run produces a batch of lifelike events with the positive and negative signal paths hidden inside.

5. 📊 Explore the data

You browse the generated events in a dashboard, inspecting timelines and event details.

6. 🔓 Reveal the secrets

You open the hidden ground-truth file to see exactly which signal paths are the winners and which are the losers.

7. 🎉 Test your agents

Your AI agents can now practice discovering those embedded patterns, and you can score them against the ground truth.
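The flow above can be sketched end to end. Below is a minimal, hypothetical generator: the event names, the 20% signal rate, and the `ground_truth.json` layout are illustrative assumptions, not dryfit's actual schema.

```python
import json
import random

# Hypothetical positive signal path ending in a success event.
POSITIVE_PATH = ["signup", "invite_teammate", "upgrade"]
NOISE_EVENTS = ["page_view", "settings_open", "logout"]

def generate_user_events(user_id, rng, follows_signal):
    """Emit one user's event stream; optionally weave the signal path in, in order."""
    events = []
    for step in (POSITIVE_PATH if follows_signal else []):
        # Some noise, then the next signal step, so path order is preserved.
        for _ in range(rng.randint(0, 3)):
            events.append({"user_id": user_id, "event": rng.choice(NOISE_EVENTS)})
        events.append({"user_id": user_id, "event": step})
    for _ in range(rng.randint(1, 4)):
        events.append({"user_id": user_id, "event": rng.choice(NOISE_EVENTS)})
    return events

rng = random.Random(42)
dataset, truth = [], {"positive_path": POSITIVE_PATH, "signal_users": []}
for uid in range(100):
    follows = rng.random() < 0.2            # ~20% of users follow the hidden path
    if follows:
        truth["signal_users"].append(uid)
    dataset.extend(generate_user_events(uid, rng, follows))

# The "hidden answers" file an agent must not see until evaluation time.
with open("ground_truth.json", "w") as f:
    json.dump(truth, f, indent=2)
```

A real run (`uv run dryfit -c config.yaml`) writes events into PostgreSQL tables rather than an in-memory list, per the README.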

AI-Generated Review

What is dryfit?

Dryfit is a Python tool that generates synthetic analytics event data in PostgreSQL tables, complete with a hidden ground-truth JSON for benchmarking AI agents on discovering product signals amid noise. You pick from prebuilt YAML configs modeling SaaS scenarios like seat-based pricing, usage metering, or transaction volume: think PostHog-style events with positive and negative signal paths ending in success events. Run the CLI with `uv run dryfit -c config.yaml` for instant datasets, artifacts, and optional Grafana dashboards via Docker Compose, with none of the hassle of wrangling real data.
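The hidden ground truth is a plain JSON artifact that agents are scored against after their discovery pass. A guessed shape is sketched below; the field names are assumptions, and dryfit's real artifact may differ.

```python
import json

# Hypothetical layout of the hidden-answers file (illustrative, not dryfit's schema).
truth = json.loads("""
{
  "positive_paths": [["signup", "invite_teammate", "upgrade"]],
  "negative_paths": [["signup", "billing_error", "churn"]],
  "success_event": "upgrade"
}
""")

winners = {tuple(p) for p in truth["positive_paths"]}  # paths that lead to success
losers = {tuple(p) for p in truth["negative_paths"]}   # paths that do not
```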

Why is it gaining traction?

It stands out with ready-to-run business-model scenarios (hybrid seat-plus-usage pricing, credit/token metering) that proxy real metrics like seat growth or GMV trends, plus noise injection for realism, putting it well beyond generic faker libraries. Developers are hooked by the zero-setup local Postgres scripts, the Docker inspection stack, and a GitHub Actions-friendly layout for reproducible benchmarks. No more mocking data by hand; you get signal-discovery tests in minutes.
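The noise-injection idea can be illustrated with a metric proxy: a deterministic trend (e.g. seat growth) with random jitter layered on top. A minimal sketch under assumed parameters, not dryfit's actual model:

```python
import random

def seat_count_series(days, base=5.0, growth=0.15, noise=0.3, seed=7):
    """Daily seat counts: a linear growth trend plus Gaussian noise each day.
    All parameter values are illustrative assumptions."""
    rng = random.Random(seed)
    seats, series = base, []
    for _ in range(days):
        seats += growth + rng.gauss(0, noise)   # trend plus jitter
        series.append(max(1, round(seats)))     # counts stay positive integers
    return series
```

With `noise=0` the series reduces to the pure trend; raising `noise` buries the signal deeper, which is exactly what makes the discovery task harder.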

Who should use this?

AI engineers building agentic systems for product analytics, such as testing LLMs on PostHog event streams or Telegram chat signals. Product teams validating freemium funnels, and ML researchers who need reproducible synthetic datasets for agent evals. The clean, conventional Python project layout also makes it an easy codebase to prototype against.
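For evals, the generic scoring step is just a set comparison against the ground truth. A minimal harness sketch follows; the interface is an assumption, not dryfit's API.

```python
def score_discovery(reported, truth):
    """Precision/recall/F1 of an agent's reported signal set vs. hidden truth."""
    reported, truth = set(reported), set(truth)
    tp = len(reported & truth)                        # true positives
    precision = tp / len(reported) if reported else 0.0
    recall = tp / len(truth) if truth else 0.0
    denom = precision + recall
    f1 = 2 * precision * recall / denom if denom else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

# Example: the agent reported users {2, 3, 5}; the hidden truth says {2, 3, 4}.
scores = score_discovery({2, 3, 5}, {2, 3, 4})
```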

Verdict

Early alpha: the documentation shines via clear README workflows, but expect tweaks before production use. Grab it if agent benchmarks are your jam; being fully synthetic, it skips real PII risks.


