Ammmob / PixelSmile

Public

PixelSmile: Fine-grained facial expression editing with continuous control, reduced semantic entanglement, and strong identity preservation.

89 stars · 1 fork · 100% credibility
Found Mar 30, 2026 at 89 stars.
Python
AI Summary

PixelSmile is an open-source project for editing facial expressions in images of people or anime characters with fine control over intensity.

How It Works

1. 🌐 Discover PixelSmile: You stumble upon this exciting tool that magically changes facial expressions in photos, like turning a frown into a big smile.
2. 💻 Get it on your computer: Download the free tool and follow simple steps to set it up on your machine.
3. 📥 Download the magic ingredients: Grab the special AI files that make the expression editing possible.
4. Pick your photo style:
   - 👤 Real person: Works great on everyday photos of people.
   - 🎨 Anime character: Perfect for cartoon faces and drawings.
5. 🖼️ Choose photo and feeling: Select your picture and pick a new emotion like happy, surprised, or shy.
6. ⚙️ Tune the intensity: Slide to adjust how strong the new expression looks, from gentle to bold.
7. 😄 See the smiles appear: Your photos transform with the exact expressions you wanted, ready to share!
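The steps above reduce to a handful of choices: photo style, target emotion, and intensity. A minimal sketch of one such edit request plus a tiny validator, where every option name is an assumption for illustration (the 0 to 1.5 intensity range comes from the review below, but none of this is PixelSmile's actual interface):

```python
# The workflow above, condensed into a plain request dict plus a validator.
# All option names here are illustrative assumptions, not PixelSmile's API.

VALID_STYLES = {"real", "anime"}                    # step 4: photo style
VALID_EXPRESSIONS = {"happy", "surprised", "shy"}   # step 5: target emotion

def validate_request(req: dict) -> bool:
    """Check one expression-edit request against the steps above."""
    return (
        req.get("style") in VALID_STYLES
        and req.get("expression") in VALID_EXPRESSIONS
        and 0.0 <= req.get("intensity", -1.0) <= 1.5  # step 6: gentle to bold
    )

request = {
    "image": "portrait.jpg",   # step 5: your photo
    "style": "real",           # step 4
    "expression": "happy",     # step 5
    "intensity": 0.8,          # step 6
}
print(validate_request(request))  # True
```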

AI-Generated Review

What is PixelSmile?

PixelSmile lets you edit facial expressions in images with fine-grained, continuous control over intensity, like dialing a "happy" expression from subtle to extreme. It delivers strong identity preservation and reduced semantic entanglement, so only the expression changes, without warping backgrounds or unrelated details. It's built in Python on top of diffusion models; you run inference via a simple CLI script after grabbing the LoRA weights from Hugging Face.
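To make that flow concrete, here is a minimal sketch that assembles a CLI-style inference call. The script name, flag names, and paths are all assumptions for illustration, not the project's documented interface:

```python
import shlex

def build_inference_command(image: str, expression: str,
                            scale: float, weights_dir: str) -> str:
    """Assemble a hypothetical inference invocation as a shell-safe string."""
    args = [
        "python", "inference.py",     # assumed entry-point name
        "--input", image,
        "--expression", expression,   # e.g. "happy", "surprised"
        "--scale", str(scale),        # continuous intensity control
        "--lora", weights_dir,        # LoRA weights fetched from Hugging Face
    ]
    return shlex.join(args)

print(build_inference_command("face.png", "happy", 1.2, "./lora_weights"))
```

`shlex.join` quotes each argument only when needed, so the sketch stays copy-pasteable into a terminal.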

Why is it gaining traction?

It stands out with precise, continuous expression adjustments via intensity scales (e.g., 0 to 1.5), beating generic editors that over-edit or distort the subject's face. The Hugging Face demo and ComfyUI plugin make testing instant, while the arXiv paper backs its claims on expression-editing quality. Developers dig the quick setup for human or anime faces without retraining.
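The continuous scale is the interesting part. Adapter-style edits such as LoRA typically apply a learned residual that can be multiplied by a strength factor, which is what makes a 0-to-1.5 dial possible. A toy sketch of that idea (pure illustration, not PixelSmile's actual code):

```python
def apply_expression(features, edit_direction, scale):
    """Blend a learned expression edit into face features.

    scale = 0.0 leaves the face untouched; larger values (the review
    cites a range of roughly 0 to 1.5) push the expression further
    while the base features, i.e. identity, stay fixed.
    """
    return [f + scale * e for f, e in zip(features, edit_direction)]

neutral = [0.0, 0.0, 0.0]
smile = [1.0, 0.5, -0.2]   # toy "smile" direction in feature space

print(apply_expression(neutral, smile, 0.3))   # subtle edit
print(apply_expression(neutral, smile, 1.5))   # bold edit, same direction
```

Because every intensity reuses the same direction, the edit stays semantically consistent as you turn the dial, which is the behavior the review describes.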

Who should use this?

ML engineers building avatar customizers or AR filters that need expression control. Game devs prototyping character emotions from photos. Photo app makers wanting identity-safe facial edits for user uploads.

Verdict

Grab it for inference if you need controllable facial expression editing today: the CLI works out of the box with preview weights. At 100% credibility but only 89 stars, it's early (training code pending), so test the HF demo before committing.
