BoyangGuo1789 / PLKR (Public)

[TMM] Prompt Learning With Knowledge Regularization for Pre-Trained Vision-Language Models

19 stars · 100% credibility · Found Apr 11, 2026
Language: Python

AI Summary

An open-source toolkit for adapting pre-trained vision-language models via prompt learning with knowledge regularization, improving few-shot image classification and transfer across datasets.

How It Works

1
🔍 Discover PLKR

You stumble upon this GitHub project promising smarter image recognition with just a handful of examples, perfect for quick AI experiments.

2
💻 Prepare your setup

Install the free software it needs, like PyTorch (the deep-learning library it's built on), so everything runs smoothly on your computer.

3
📁 Organize your images

Download picture collections of animals, cars, foods, or scenes and sort them into simple folders by category.

4
🚀 Teach with examples

Pick a few photos per category and run a quick session to train the AI on recognizing patterns.

5
🧪 Test on new sights

Challenge it with unseen images from the same or different collections to see its magic in action.

6
📊 Review your scores

Check simple reports showing accuracy percentages across tests, like a report card for your AI.

🎉 Master few-shot magic

Your AI now identifies objects accurately with minimal examples, ready for real-world photo sorting or research wins!
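The steps above boil down to: sample a few labeled images per class, fit a lightweight classifier, and score accuracy on held-out images. Here is a minimal sketch of that loop using a nearest-centroid classifier on random stand-in features; the real repo learns CLIP prompts instead, so treat every name here as illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(n_classes=5, per_class=20, dim=32):
    """Stand-in for image features: one Gaussian cluster per category."""
    feats, labels = [], []
    for c in range(n_classes):
        center = rng.normal(size=dim) * 3
        feats.append(center + rng.normal(size=(per_class, dim)))
        labels.append(np.full(per_class, c))
    return np.vstack(feats), np.concatenate(labels)

def few_shot_accuracy(feats, labels, shots=4):
    """'Train' on `shots` examples per class, test on the rest."""
    classes = np.unique(labels)
    train_idx = np.concatenate(
        [np.flatnonzero(labels == c)[:shots] for c in classes])
    test_idx = np.setdiff1d(np.arange(len(labels)), train_idx)
    # Training: one centroid per class from the few shots.
    centroids = np.stack(
        [feats[train_idx][labels[train_idx] == c].mean(0) for c in classes])
    # Testing: nearest centroid wins.
    dists = np.linalg.norm(feats[test_idx, None] - centroids[None], axis=-1)
    preds = classes[dists.argmin(1)]
    return (preds == labels[test_idx]).mean()

feats, labels = make_dataset()
acc = few_shot_accuracy(feats, labels, shots=4)
print(f"4-shot accuracy: {acc:.1%}")
```

The same shape applies to PLKR's scripts: swap the random features for CLIP image embeddings and the centroids for learned prompts.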

AI-Generated Review

What is PLKR?

PLKR lets you train lightweight prompts on pre-trained vision-language models like CLIP to boost few-shot image classification and domain generalization. It tackles overfitting in prompt learning by adding knowledge regularization, improving generalization from base to novel classes and across datasets (e.g., ImageNet to Oxford Pets). Built in Python on PyTorch, it ships simple bash scripts for base-to-new training, few-shot evaluation (1 to 16 shots), and cross-dataset tests.
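Concretely, prompt learning keeps CLIP frozen and optimizes only a few context vectors that are prepended to each class name before text encoding. A toy illustration of the inference side, with random vectors standing in for CLIP's encoders (names like `ctx` are illustrative, not the repo's API):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, n_ctx, n_classes = 64, 4, 3

# Learnable context vectors, shared across classes (CoOp-style).
ctx = rng.normal(size=(n_ctx, dim))
# Frozen token embeddings for each class name, e.g. "cat", "dog", "car".
class_tokens = rng.normal(size=(n_classes, 1, dim))

def text_features(ctx, class_tokens):
    """Prepend the learned context to each class name, then pool.
    (A real pipeline runs the full prompt through CLIP's text encoder.)"""
    prompts = np.concatenate(
        [np.broadcast_to(ctx, (n_classes, n_ctx, dim)), class_tokens], axis=1)
    pooled = prompts.mean(axis=1)
    return pooled / np.linalg.norm(pooled, axis=-1, keepdims=True)

def classify(image_feat, ctx, class_tokens):
    """Cosine similarity between the image and each prompted class."""
    img = image_feat / np.linalg.norm(image_feat)
    return text_features(ctx, class_tokens) @ img  # one logit per class

logits = classify(rng.normal(size=dim), ctx, class_tokens)
pred = int(logits.argmax())
```

Only `ctx` would receive gradients during training; everything else stays frozen, which is why prompt learning is cheap enough for resource-limited hardware.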

Why is it gaining traction?

Unlike plain CoOp or CoCoOp, PLKR's graph-based regularization enforces neighborhood consistency in prompt embeddings, yielding better novel-class accuracy without tuning the full model. Developers grab it for quick baselines on 14+ datasets (ImageNet variants, Flowers, Cars), with configs for rivals like MaPLe and IVLP. The TMM paper's reproducible results and MIT license make it a fast GitHub starter for prompt experiments.
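The "neighborhood consistency" idea can be pictured as a graph penalty: classes that the frozen zero-shot model sees as similar should keep similar prompt embeddings after tuning. A hedged sketch of such a Laplacian-style term (the paper's exact formulation may differ):

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, dim = 5, 16

# Frozen zero-shot class embeddings define the neighborhood graph.
zs = rng.normal(size=(n_classes, dim))
zs /= np.linalg.norm(zs, axis=1, keepdims=True)
W = np.exp(zs @ zs.T)   # positive affinities from cosine similarity
np.fill_diagonal(W, 0)  # no self-edges

def graph_consistency_loss(embeddings, W):
    """Sum over pairs of W_ij * ||e_i - e_j||^2: penalizes tuned prompt
    embeddings that tear apart classes the frozen model sees as similar."""
    diff = embeddings[:, None, :] - embeddings[None, :, :]
    return float((W * (diff ** 2).sum(-1)).sum())

# Tuned prompt embeddings that stay near the zero-shot ones pay little...
loss_near = graph_consistency_loss(zs + 0.1 * rng.normal(size=zs.shape), W)
# ...and the penalty grows as they drift away.
loss_far = graph_consistency_loss(zs + 2.0 * rng.normal(size=zs.shape), W)
```

Added to the usual cross-entropy objective, a term like this is what discourages the prompts from overfitting the base classes.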

Who should use this?

ML engineers adapting CLIP for few-shot classification on custom image datasets, especially base-to-novel splits. Researchers benchmarking prompt methods for domain generalization, like transferring from ImageNet to sketches or textures. Vision teams avoiding heavy fine-tuning on resource-limited hardware.

Verdict

Solid pick for prompt learning prototypes: plug in your data and the scripts handle the rest. With only 19 stars, though, treat it as research code, not production-ready, and the README examples could use polish for broader adoption.
