
serycjon / WOFTSAM


The official implementation of WOFTSAM and SAM-H from the "Accurate Planar Tracking With Robust Re-Detection" paper.

24 stars · 2 forks · 100% credibility
Found Feb 28, 2026 at 20 stars.
AI Analysis (Python)

AI Summary

WOFTSAM is a research tool for accurately tracking planar objects (flat shapes such as signs or posters) in video, combining SAM-based segmentation with robust optical-flow motion estimation to score highly on challenging benchmarks.

How It Works

1
🔍 Discover WOFTSAM

You stumble upon this clever video tracking tool that follows flat shapes like signs or cards perfectly, even if they twist, turn, or vanish briefly.

2
🛠️ Set up easily

Follow friendly steps to prepare your computer, like creating a simple workspace.

3
🧠 Add the smart model

Download a ready-made AI brain that powers the shape-following magic.

4
🎥 Try the demo

Run a quick sample video and watch it track smoothly in just minutes, creating an output clip.

5
📂 Use your videos

Point it to your own video clips and mark the four corners of the flat shape at the start.

6
🚀 Launch tracking

Press go, and it follows the shape frame by frame, handling spins, scales, and losses.

7

🏆 Perfect results

Enjoy spot-on tracking paths you can measure for accuracy, ready for your projects or analysis.
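The steps above can be sketched as a minimal track-with-fallback loop. Everything below is an illustrative stand-in, not the repo's actual API: WOFTSAM's real tracker uses RAFT optical flow and SAM 2.1 masks, whereas these toy helpers just demonstrate the control flow of "track, and re-detect on loss":

```python
import numpy as np

def track_with_flow(prev_corners, frame):
    """Hypothetical flow tracker: returns updated 4x2 corners, or None on failure."""
    # Stand-in: pretend the object did not move. A real tracker would warp the
    # corners by the homography estimated between consecutive frames.
    return prev_corners if frame.get("visible", True) else None

def redetect(init_corners, frame):
    """Hypothetical re-detection: recover corners after a tracking loss."""
    # Stand-in: re-detect at the initial location.
    return init_corners

def run_tracker(init_corners, frames):
    """Follow the shape frame by frame, falling back to re-detection on loss."""
    corners = init_corners
    trajectory = []
    for frame in frames:
        result = track_with_flow(corners, frame)
        if result is None:  # tracking lost (occlusion, blur, ...)
            result = redetect(init_corners, frame)
        corners = result
        trajectory.append(corners.copy())
    return trajectory

# Toy run: three frames, the middle one "occluded".
init = np.array([[0, 0], [100, 0], [100, 100], [0, 100]], dtype=float)
frames = [{"visible": True}, {"visible": False}, {"visible": True}]
traj = run_tracker(init, frames)
```

The point of the structure is the fallback branch: a pure flow tracker drifts or dies after occlusion, so a separate re-detection path restores the track.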


Star Growth

Grew from 20 stars at discovery to 24.
AI-Generated Review

What is WOFTSAM?

WOFTSAM tracks planar objects like posters or billboards in videos with high accuracy, even after occlusions or motion blur, by combining segmentation with optical flow-based pose estimation. Developers get Python scripts to run state-of-the-art trackers on POT-210 and PlanarTrack benchmarks, plus a quick demo.py that outputs tracked videos from initial corner points. As the official implementation of the "Accurate Planar Tracking With Robust Re-Detection" paper, it delivers SOTA results using SAM 2.1 for masks and RAFT for flow.
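To ground the "pose from initial corner points" idea: a planar track is, per frame, just a homography mapping the template's four corners into the image. A minimal direct-linear-transform (DLT) sketch in NumPy, independent of the repo's actual code, which estimates that homography from four point correspondences:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (four point pairs, DLT)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        # Each correspondence contributes two linear constraints on H's entries.
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)       # null vector of A = homography entries
    return H / H[2, 2]             # fix the scale ambiguity

def apply_homography(H, pts):
    """Map an (n, 2) array of points through H, with perspective division."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Unit square mapped to a skewed quadrilateral, as in corner-based tracking.
src = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
dst = np.array([[10, 20], [110, 25], [115, 130], [5, 125]], dtype=float)
H = homography_from_points(src, dst)
```

With exactly four correspondences the system has an exact solution; a tracker like WOFTSAM effectively re-estimates this mapping every frame from dense flow rather than four hand-picked points.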

Why is it gaining traction?

It outperforms prior trackers on planar benchmarks via the hybrid WOFTSAM (flow with a SAM fallback) and SAM-H configurations, with robust re-detection that handles tracking losses better than plain SAM. The hook: editable configs for tweaking thresholds, one-command evaluation on P@5/P@15 metrics, and re-annotated PlanarTrack data for fair comparisons, with no benchmark fiddling required.
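The P@5/P@15 numbers are precision-at-threshold metrics: the fraction of frames tracked within 5 or 15 pixels of ground truth. A rough sketch of that computation over corner trajectories (the benchmarks' exact error definition, e.g. alignment error over the four corners, may differ in detail):

```python
import numpy as np

def precision_at(pred, gt, tau):
    """Fraction of frames whose mean corner error is below tau pixels.

    pred, gt: arrays of shape (n_frames, 4, 2) holding predicted and
    ground-truth corner positions per frame.
    """
    # Per-frame error: Euclidean distance per corner, averaged over 4 corners.
    err = np.linalg.norm(pred - gt, axis=-1).mean(axis=-1)
    return float((err < tau).mean())

# Synthetic check: 4 frames, corners at the origin in ground truth.
gt = np.zeros((4, 4, 2))
pred = gt.copy()
pred[1] += np.array([6.0, 0.0])    # frame 1: 6 px off (fails P@5, passes P@15)
pred[2] += 20.0                    # frame 2: ~28 px off (lost track)
```

Here `precision_at(pred, gt, 5)` gives 0.5 and `precision_at(pred, gt, 15)` gives 0.75, showing how the two thresholds separate "precise" from merely "not lost".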

Who should use this?

CV researchers benchmarking trackers on POT-210/PlanarTrack, AR devs needing stable planar pose from video, or robotics engineers tracking flat markers under real-world blur/occlusion.

Verdict

Grab it for planar tracking experiments: solid docs, easy evaluation, and the official implementation shines on benchmarks. But at 19 stars when reviewed, this is still an early-stage project; test thoroughly before relying on it in production.


