ljx1002 / TAPFormer

TAPFormer is a model that fuses images and events for high-frame-rate tracking of any point (pixel).

AI Summary

TAPFormer is a research tool for accurately tracking arbitrary points in videos by fusing regular frames with high-speed event data.

How It Works

1
🔍 Discover TAPFormer

You hear about a cool tool that tracks tiny moving spots in videos super accurately, even in fast or dark scenes.

2
📥 Grab the files

Download the code and pretrained weights from the project page to your computer.

3
🛠️ Ready your setup

Follow the setup steps to install the dependencies so everything runs smoothly.

4
📹 Add your videos

Drop your video clips and event-camera data into the folders where the tool expects them.

5
✨ Magic prep moment

In one pass, convert the raw event data into event representations: special maps that capture every tiny movement (a sketch follows this list).

6
⚙️ Tweak preferences

Choose simple options, like which clips to evaluate and whether to save visualization videos (a sample config is sketched after this list).

7
▶️ Hit go

Click run and watch as it starts tracking points across your videos.

8
🎉 Perfect tracks appear

Enjoy watching colorful trails follow every point, with scores showing how spot-on the tracking is.
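
The "special maps" from step 5 are event representations: the raw event stream of (x, y, timestamp, polarity) tuples is binned into tensors a network can read. Below is a minimal sketch of one common representation, a voxel grid; the function name and array layout are generic assumptions, not TAPFormer's actual generator.

```python
import numpy as np

def events_to_voxel_grid(events, num_bins, height, width):
    """Bin raw events into a (num_bins, height, width) voxel grid.

    events: (N, 4) array of (x, y, timestamp, polarity) rows.
    Generic sketch of a common event representation; TAPFormer's own
    generators may normalize and interpolate differently.
    """
    voxel = np.zeros((num_bins, height, width), dtype=np.float32)
    if len(events) == 0:
        return voxel

    x = events[:, 0].astype(np.int64)
    y = events[:, 1].astype(np.int64)
    t = events[:, 2].astype(np.float64)
    p = np.where(events[:, 3] > 0, 1.0, -1.0)  # polarity {0,1} -> {-1,+1}

    # Spread timestamps over [0, num_bins - 1] so each event picks a bin.
    span = max(t.max() - t.min(), 1e-9)
    bin_idx = ((t - t.min()) / span * (num_bins - 1)).astype(np.int64)

    # Accumulate signed polarity at each (bin, y, x) cell.
    np.add.at(voxel, (bin_idx, y, x), p)
    return voxel
```

With events loaded as an (N, 4) array, something like `events_to_voxel_grid(ev, num_bins=5, height=480, width=640)` yields one input tensor per frame interval.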
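
The preferences from step 6 would typically live in a small YAML file, and step 7 then reduces to one command. The snippet below is a hypothetical illustration of that workflow; every key, path, and script name is assumed rather than taken from the repo.

```python
import yaml  # pip install pyyaml

# Hypothetical evaluation config -- the key names are illustrative,
# not TAPFormer's actual schema.
config_text = """
dataset: eds                           # eds | ec | inivtap | drivtap
sequences: [seq_00, seq_01]            # which clips to evaluate
checkpoint: checkpoints/tapformer.pth  # pretrained weights
save_visualizations: true              # write tracking videos
output_dir: outputs/
"""

config = yaml.safe_load(config_text)
print(config["dataset"], config["sequences"])
```

A run would then look something like `python evaluate.py --config configs/eds.yaml`, with the script and flag names again assumed.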

AI-Generated Review

What is TAPFormer?

TAPFormer is a Python model that fuses images and events for high-frame-rate tracking of any pixel or point, delivering robust trajectories even in low-light or fast-motion scenarios. It processes synchronized frame-event data to predict point paths over time, with pretrained weights and evaluation scripts for real-world benchmarks. Users get visualization videos, trajectory files, and metrics like mean error via simple YAML configs and CLI runs.
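
For context on the "mean error" metric: in TAP benchmarks this is usually the average L2 distance between predicted and ground-truth point positions over the frames where a point is visible. A generic sketch of that computation, not the repo's actual evaluation code:

```python
import numpy as np

def mean_trajectory_error(pred, gt, visible):
    """Average L2 distance between predicted and ground-truth tracks.

    pred, gt: (num_points, num_frames, 2) arrays of (x, y) positions.
    visible:  (num_points, num_frames) boolean mask of valid ground truth.
    Generic TAP-style metric; TAPFormer's evaluation may differ in details.
    """
    dist = np.linalg.norm(pred - gt, axis=-1)  # (num_points, num_frames)
    return float(dist[visible].mean())
```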

Why is it gaining traction?

It stands out by modeling temporal continuity between frames and events through asynchronous fusion, achieving state-of-the-art results on TAP and feature tracking datasets without needing custom hardware tweaks. Developers appreciate the plug-and-play event representation generators and support for datasets like EDS, EC, InivTAP, and DrivTAP, enabling quick high-speed tracking tests.
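
"Asynchronous fusion" here means that frame features (arriving at camera rate) and event features (arriving far more often) are combined on a shared timeline rather than resampled to a single rate. As a toy illustration of that idea only, and not TAPFormer's actual architecture, one could interleave the two token streams by timestamp before a temporal model consumes them:

```python
def interleave_by_time(frame_tokens, frame_ts, event_tokens, event_ts):
    """Merge two (token, timestamp) streams into one time-ordered sequence.

    Toy sketch of the asynchronous-fusion idea; the real fusion module
    is more sophisticated than plain interleaving.
    """
    tagged = [(t, tok) for t, tok in zip(frame_ts, frame_tokens)]
    tagged += [(t, tok) for t, tok in zip(event_ts, event_tokens)]
    tagged.sort(key=lambda pair: pair[0])  # stable, ordered by timestamp
    return [tok for _, tok in tagged]
```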

Who should use this?

Computer vision engineers building robotics SLAM or AR systems where standard cameras fail at high speeds. ML researchers evaluating event-based tracking on challenging real-world sequences. Robotics devs integrating event cameras for pixel-precise motion estimation in dynamic environments.

Verdict

Worth trying for event-image fusion research: the SOTA claims hold up on the provided benchmarks. But at 19 stars it's still early-stage with basic docs, so expect some data-prep tweaks before production use.
