foodvision-bench

Open reproducible benchmarks for food-image recognition models and APIs.

AI Summary

Foodvision Bench is an open-source Python package that benchmarks food recognition apps and systems for calorie estimation accuracy against a standardized set of 180 USDA-weighed meals, producing leaderboards for photo-based and manual-entry tiers.

How It Works

1. 🔍 Discover Foodvision Bench: You hear about this free tool that fairly tests how well food photo apps guess calories in meals.

2. 📥 Get the tool ready: You download and set up the simple benchmark tool on your computer in just a few minutes.

3. 📱 Pick a food app to test: You choose from popular photo apps or manual trackers like PlateLens or MyFitnessPal to see how they do.

4. 🍽️ Run the meal test: The tool checks the app against 180 real weighed meals from different cuisines to measure accuracy (see the sketch after this list).

5. 📊 See your results: You get a clear score showing how close the calorie guesses are, like 1.4% error for top apps.

6. 🏆 View the leaderboard: You compare your app's score to others in photo or manual categories to find the best one.
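For developers, here is a minimal sketch of what a run might look like from Python. It assumes installing the package puts a `foodvision-bench` executable on your PATH and that the tool prints its score summary to stdout; the `evaluate` command and its flags are taken from the review below, and the output format is not documented here.

```python
# Minimal sketch: invoke the benchmark CLI from Python and print its report.
# Assumption: `foodvision-bench` is installed and available on PATH.
import subprocess

result = subprocess.run(
    [
        "foodvision-bench", "evaluate",
        "--system", "clip-vit-l",   # one of the open-source baselines
        "--test-set", "mini-180",   # the 180-meal USDA-weighed test set
    ],
    capture_output=True,
    text=True,
    check=True,
)
print(result.stdout)  # score summary for the chosen system
```

Because the mini-180 test set is fixed, rerunning the same command on another machine should reproduce the same numbers.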

Track meals smarter

Now you know which app gives the most accurate calorie info for your healthy eating journey!

AI-Generated Review

What is foodvision-bench?

Foodvision-bench is a Python package for running reproducible benchmarks on food-image recognition models and APIs, using a fixed 180-meal USDA-weighed test set called mini-180. It scores systems on mean absolute percentage error for calorie estimates and on top-1 category accuracy, with CLI commands like `foodvision-bench evaluate --system clip-vit-l --test-set mini-180` delivering results with 95% confidence intervals. Developers get reproducible research tooling out of the box, complete with leaderboards that split photo-based systems from manual-entry apps.
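The metrics themselves are standard. The following is a rough illustration of how the calorie MAPE, top-1 accuracy, and a 95% confidence interval could be computed over the 180 meals; it is not the package's own code, and the percentile bootstrap is an assumption about how the intervals are derived.

```python
# Illustrative metric sketch, not foodvision-bench internals.
import numpy as np

def calorie_mape(true_kcal, pred_kcal):
    """Mean absolute percentage error of calorie estimates, in percent."""
    t = np.asarray(true_kcal, dtype=float)
    p = np.asarray(pred_kcal, dtype=float)
    return float(np.mean(np.abs(p - t) / t) * 100.0)

def top1_accuracy(true_labels, pred_labels):
    """Fraction of meals whose predicted food category matches the label."""
    return float(np.mean(np.asarray(true_labels) == np.asarray(pred_labels)))

def mape_ci95(true_kcal, pred_kcal, n_boot=2000, seed=0):
    """Percentile bootstrap 95% CI for MAPE (assumed method, not confirmed)."""
    rng = np.random.default_rng(seed)
    t = np.asarray(true_kcal, dtype=float)
    p = np.asarray(pred_kcal, dtype=float)
    samples = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(t), size=len(t))
        samples.append(calorie_mape(t[idx], p[idx]))
    return tuple(np.percentile(samples, [2.5, 97.5]))
```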

Why is it gaining traction?

It stands out with independent replications of vendor claims, so you no longer have to trust unverified API benchmarks, plus open-source baselines like CLIP and SigLIP for fair apples-to-apples comparisons across cuisines. The tiered leaderboards keep photo-based systems from being ranked against easier manual-entry workflows, and adding new models or APIs is straightforward via a simple interface (a hypothetical sketch follows). Users also get JSON result snapshots that keep runs reproducible, so results match across machines.
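The page does not show that extension interface itself, so the sketch below is hypothetical: the real class names, method signatures, and registration mechanism in foodvision-bench may differ.

```python
# Hypothetical plug-in sketch; names and signatures are illustrative only.
from dataclasses import dataclass

@dataclass
class MealEstimate:
    calories_kcal: float   # predicted energy for the whole plated meal
    category: str          # predicted top-1 food category

class MyApiSystem:
    """Wraps an in-house model or a third-party API behind one predict call."""

    name = "my-api-system"

    def predict(self, image_path: str) -> MealEstimate:
        # Send the photo to your model or vendor API and map the response
        # onto a calorie figure plus a single food category.
        raise NotImplementedError
```

The benchmark would then presumably call something like `predict` once per mini-180 image and aggregate the errors into that system's leaderboard entry.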

Who should use this?

Computer vision engineers tuning food-recognition models against real-world USDA ground truth. Nutrition app developers benchmarking third-party APIs like Foodvisor or PlateLens before integration. Researchers building reproducible evaluations for dietary AI who need per-cuisine breakdowns on Western, East Asian, and Mediterranean meals.

Verdict

Grab it if you need reliable, reproducible food-vision benchmarks in Python: docs are thorough, CI passes, and the mini-180 set is solid. At 14 stars and a 100% credibility score, it's an early beta with room to grow, but perfect for teams prioritizing verifiable results over polish.

