RUCBM / G-OPD

Public

Official repository for the paper "Learning beyond Teacher: Generalized On-Policy Distillation with Reward Extrapolation"

31 stars · 2 forks · 100% credibility

Found Feb 17, 2026 at 21 stars.
AI Analysis

Language: Python

AI Summary

An academic evaluation toolkit for testing AI model improvements in math reasoning and code generation using established benchmarks.

How It Works

1. 🔍 Discover G-OPD

You find the tool on GitHub while looking for a fair way to test AI models on math problems and code generation.

2. 📖 Read the guide

The instructions show how the researchers evaluated their improved models on math and coding challenges.

3. 🧮 Test math skills

You check how well your AI solves math problems using the ready-made benchmark scripts included in the repo.

4. 💻 Evaluate code writing

You run quick checks on the code your AI generates and see exactly how it performs on real problems.

5. Get trustworthy results

You end up with solid scores showing your AI's true strengths in math and coding, ready to guide further improvement; a minimal sketch of the pass@1 scoring behind these checks follows below.
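To make those scores concrete, here is a minimal, self-contained sketch of the pass@1 metric that benchmarks like HumanEval+ and MBPP+ report. It is illustrative only: `check_solution` and `pass_at_1` are hypothetical names, and the real harnesses (EvalPlus, LiveCodeBench) run candidate code in sandboxed subprocesses with timeouts rather than a bare `exec`.

```python
# Illustrative sketch of pass@1 scoring -- not code from this repo.
from typing import List

def check_solution(code: str, tests: List[str]) -> bool:
    """Return True if the generated code passes every test.
    WARNING: exec() on untrusted model output is unsafe; real
    harnesses sandbox this in a subprocess with a timeout."""
    scope: dict = {}
    try:
        exec(code, scope)      # define the candidate function
        for t in tests:
            exec(t, scope)     # each test is an assert statement
        return True
    except Exception:
        return False

def pass_at_1(samples: List[str], tests: List[str]) -> float:
    """Fraction of single generations that pass all tests."""
    return sum(check_solution(s, tests) for s in samples) / len(samples)

# Toy example: two model generations for "add two numbers".
gens = [
    "def add(a, b):\n    return a + b",
    "def add(a, b):\n    return a - b",  # buggy generation
]
print(pass_at_1(gens, ["assert add(2, 3) == 5"]))  # -> 0.5
```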

AI-Generated Review

What is G-OPD?

G-OPD is a Python framework for evaluating advanced distillation techniques in language models, introducing generalized on-policy distillation with reward scaling and flexible reference models. It powers ExOPD, which extrapolates rewards to outperform standard methods in same-size and strong-to-weak setups. Users get ready-to-run scripts for benchmarking distilled models on math reasoning and code generation tasks using EvalPlus and LiveCodeBench.
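The exact objective is defined in the paper; purely as an illustration of what "reward scaling with a flexible reference model" can look like, here is a hedged sketch of a generalized OPD-style per-token reward. The function name, signature, and the specific form `beta * (log p_teacher - log p_ref)` are assumptions for exposition, not the authors' definition of ExOPD.

```python
# Hedged sketch (assumption, not the paper's code): a generalized
# OPD-style per-token reward with a scale beta and a swappable
# reference model. Intuitively, beta > 1 pushes ("extrapolates")
# rewards beyond what matching the teacher alone would give.
import torch

def gopd_token_reward(
    teacher_logprobs: torch.Tensor,  # log p_teacher(y_t | y_<t), shape [T]
    ref_logprobs: torch.Tensor,      # log p_ref(y_t | y_<t), shape [T]
    beta: float = 1.0,               # reward scale; assumed hyperparameter
) -> torch.Tensor:
    """r_t = beta * (log p_teacher - log p_ref), an assumed form."""
    return beta * (teacher_logprobs - ref_logprobs)

# With the student itself as the reference and beta = 1, this reduces
# to the usual teacher/student log-prob gap scored on the student's
# own samples -- the on-policy distillation signal.
t = torch.tensor([-1.0, -0.5, -2.0])
s = torch.tensor([-1.5, -0.4, -2.5])
print(gopd_token_reward(t, s, beta=1.5))  # tensor([ 0.7500, -0.1500,  0.7500])
```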

Why is it gaining traction?

As the official GitHub repository for the arXiv paper, G-OPD stands out by focusing on evaluation first—plug in your model path and run bash scripts for HumanEval+, MBPP+, or LiveCodeBench without setup hassle. Developers appreciate the Docker support and vLLM/OpenAI backends for quick tests, plus ties to verl for scalable RL training (coming soon). It's a low-friction way to validate distillation gains over baselines like OPD.
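For a sense of the workflow, here is a hypothetical Python driver around EvalPlus's one-command interface. The `--model`, `--dataset`, `--backend`, and `--greedy` flags come from EvalPlus's documented CLI; the model path and the dataset loop are placeholders, and G-OPD's own bash scripts may pass different arguments.

```python
# Hypothetical driver -- not a script from this repo. It shells out to
# EvalPlus's documented one-command evaluator for HumanEval+ and MBPP+.
import subprocess

MODEL_PATH = "path/to/your/distilled-model"  # placeholder: your model

for dataset in ("humaneval", "mbpp"):        # HumanEval+ / MBPP+
    subprocess.run(
        [
            "evalplus.evaluate",
            "--model", MODEL_PATH,
            "--dataset", dataset,
            "--backend", "vllm",             # or "openai", per the review
            "--greedy",
        ],
        check=True,
    )
```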

Who should use this?

RLHF researchers distilling code or math models from teachers like Qwen. Teams evaluating code efficiency via EvalPerf on performance-exercising inputs. Anyone benchmarking LLMs on standard math and coding suites before fine-tuning.

Verdict

Promising for distillation eval, but at 31 stars it's early: the docs are paper-focused and there's no training code yet. Grab it if you're running on-policy distillation experiments; otherwise, watch the repo.
