JJLibra / SALAD-Pan

🤗 Official implementation for "SALAD-Pan: Sensor-Agnostic Latent Adaptive Diffusion for Pan-Sharpening" https://arxiv.org/abs/2602.04473

Found Feb 04, 2026 at 19 stars.
AI Analysis
Python
AI Summary

SALAD-Pan is a research tool that uses diffusion models to pan-sharpen satellite images by fusing high-resolution panchromatic and low-resolution multispectral inputs into detailed high-resolution multispectral outputs.

How It Works

1
📰 Discover sharper satellite views

You find SALAD-Pan, a tool that blends blurry color satellite photos with sharp black-and-white ones to create stunning high-detail images.

2
📦 Get ready to play

Download the project and install the dependencies it needs to run on your computer.

3
Grab smart starters

Download the pretrained starting models from a trusted model hub to speed things up.

4
🎓 Train the color matcher

Feed it example image pairs so it learns to match colors perfectly.

5
🔬 Train the sharpener

Show it more examples to master blending details into full-color views.

6
🌍 See crystal-clear Earth

Your satellite images pop with vivid colors and pinpoint details, ready for maps or studies.
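The fusion the steps above describe can be illustrated with a classical component-substitution baseline, the Brovey transform. This is not SALAD-Pan's diffusion method — just a minimal sketch of what pan-sharpening computes: injecting the panchromatic image's spatial detail into upsampled multispectral bands.

```python
import numpy as np

def brovey_pansharpen(lrms_up, pan, eps=1e-8):
    """Classical Brovey transform: rescale each upsampled multispectral band
    by the ratio of the panchromatic image to a crude intensity estimate.

    lrms_up: (H, W, C) multispectral image, already upsampled to PAN resolution.
    pan:     (H, W)    high-resolution panchromatic image.
    """
    intensity = lrms_up.mean(axis=2)      # band-average intensity estimate
    ratio = pan / (intensity + eps)       # spatial-detail injection ratio
    return lrms_up * ratio[..., None]     # per-band rescaling

# Toy example: a 4x4 scene with 3 bands.
rng = np.random.default_rng(0)
lrms_up = rng.uniform(0.2, 0.8, size=(4, 4, 3))
pan = lrms_up.mean(axis=2) * 1.5          # pretend PAN carries extra detail
hrms = brovey_pansharpen(lrms_up, pan)    # sharpened multispectral output
```

By construction, the band-averaged intensity of the output matches the PAN image, which is the core idea diffusion-based methods like SALAD-Pan refine with learned priors instead of a fixed ratio.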

AI-Generated Review

What is SALAD-Pan?

SALAD-Pan fuses low-resolution multispectral satellite images with high-resolution panchromatic ones to produce sharp, detailed high-resolution multispectral outputs, tackling pan-sharpening across different sensors. This official repository provides the Python implementation, built on Hugging Face Diffusers, of a two-stage process: VAE training followed by latent diffusion with adapters on Stable Diffusion. Developers get configs for training on custom datasets, with Gradio demos planned for quick testing.
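The two-stage recipe can be sketched abstractly. The snippet below is a numpy-only illustration (not the repo's code): a stand-in average-pooling "encoder" plays the role of the stage-1 VAE, and the DDPM forward (noising) process shows what the stage-2 latent diffusion model is trained to invert.

```python
import numpy as np

def encode(x):
    """Stand-in for the stage-1 VAE encoder: 2x2 average pooling
    to compress the image into a lower-resolution latent."""
    h, w, c = x.shape
    return x.reshape(h // 2, 2, w // 2, 2, c).mean(axis=(1, 3))

def add_noise(z0, t, alphas_cumprod, rng):
    """DDPM forward process: z_t = sqrt(a_bar_t) * z0 + sqrt(1 - a_bar_t) * eps."""
    a_bar = alphas_cumprod[t]
    noise = rng.standard_normal(z0.shape)
    return np.sqrt(a_bar) * z0 + np.sqrt(1.0 - a_bar) * noise, noise

# Standard linear beta schedule from the DDPM paper.
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_cumprod = np.cumprod(1.0 - betas)

rng = np.random.default_rng(0)
ms_image = rng.uniform(size=(8, 8, 4))   # toy 4-band multispectral patch
z0 = encode(ms_image)                    # stage 1: compress to latent space
zt, noise = add_noise(z0, 500, alphas_cumprod, rng)
# Stage 2 would train a PAN-conditioned denoiser to predict `noise` from `zt`.
```

In the real pipeline the encoder is a trained VAE and the denoiser is a Stable Diffusion U-Net with adapters, but the latent noising/denoising objective has this shape.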

Why is it gaining traction?

It stands out for sensor-agnostic performance, delivering top visual quality and speed (3.36 s latency per image on an RTX 4090, reportedly ~100x faster than rivals like PanDiff) without sensor-specific tweaks. The release ties directly to a fresh arXiv paper, with benchmarks and Docker support via GitHub Actions, appealing to those eyeing production-ready diffusion for geospatial tasks. Early adopters praise the adaptive fine-tuning that leverages pre-trained models efficiently.

Who should use this?

Remote sensing engineers processing WorldView-3 or QuickBird datasets for agriculture monitoring or urban mapping. Satellite imagery analysts needing full-res HRMS without hardware-specific models. Researchers in diffusion-based super-resolution experimenting with PAN-LRMS pairs.

Verdict

Promising for pan-sharpening fans, but skip for now: 21 stars, mostly placeholder code (training/inference scripts are empty), and a 69% credibility score signal immaturity despite solid paper results. Watch the repository for the full code release.
