WillowHe

A set of solutions built on OpenPangu-7B as the base model for fine-tuning and applying large language models (LLMs) to operations research optimization tasks.

AI Summary

A toolkit that fine-tunes a language model to automate creating math models, adding constraints, and pruning variables for operations research optimization problems.

How It Works

1
🔍 Discover EvoOpt-LLM

You hear about a helpful tool that turns everyday optimization puzzles into ready-to-solve math models using simple words.

2
📥 Get the Tool Ready

Download the project files and prepare your computer with the needed setup instructions from the guide.

3
🧠 Teach Your Assistant

Feed it examples of optimization problems so it learns to understand and handle your specific challenges.

4
💬 Describe Your Problem

Type a natural description of your optimization task, like scheduling or resource allocation, and watch it create the math model.

5
Add More Details

Ask it to generate new rules, or simplify the model by spotting unnecessary parts so everything runs more smoothly.

6
Solve and Celebrate

Run your optimized model to get quick, efficient answers to tough problems, saving time and effort.
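The describe-then-model loop above can be sketched in a few lines of Python. Everything here is a hypothetical illustration: the `mock_llm` function, the prompt text, and the dictionary model format are assumptions standing in for the repo's actual OpenPangu-7B inference interface.

```python
# Hypothetical sketch of the describe-then-model loop: a natural-language
# task goes to the fine-tuned model, which returns a structured LP.
# mock_llm stands in for the real OpenPangu-7B inference call.

def mock_llm(prompt: str) -> dict:
    """Pretend LLM call: returns a tiny production-planning LP."""
    return {
        "objective": {"x": 3.0, "y": 5.0},   # maximize 3x + 5y
        "constraints": [
            ({"x": 1.0, "y": 2.0}, 14.0),    # x + 2y <= 14
            ({"x": 3.0, "y": -1.0}, 0.0),    # 3x - y <= 0
        ],
        "sense": "max",
    }

def is_feasible(model: dict, point: dict) -> bool:
    """Check a candidate solution against every <= constraint."""
    for coeffs, rhs in model["constraints"]:
        lhs = sum(c * point.get(v, 0.0) for v, c in coeffs.items())
        if lhs > rhs + 1e-9:
            return False
    return True

model = mock_llm("Schedule two products to maximize profit ...")
print(is_feasible(model, {"x": 2.0, "y": 6.0}))  # True: both constraints hold
```

In the real toolchain the structured model would be written out as an LP file and handed to a solver; the feasibility check here just shows what the structured output enables.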

AI-Generated Review

What is EvoOpt_oppangu_optimization_model?

This Python project fine-tunes Huawei's OpenPangu-7B LLM for operations research tasks: turning natural-language descriptions into linear programming models, generating new constraints for existing formulations, and pruning zero-valued variables to slim down problems. It provides a complete set of solutions for end-to-end modeling, constraint extension, and model compression, optimized for Huawei Ascend NPU hardware. Users get scripts to fine-tune the base model with LoRA, run inference on LP files, and evaluate outputs against ground truth.
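The variable-pruning idea can be illustrated with a small sketch. The solution format and the fixing rule here are assumptions for illustration, not the repo's actual pipeline: variables at (near) zero in a solved relaxation get fixed to zero, shrinking the model handed to the solver.

```python
# Hypothetical sketch of zero-variable pruning: given an (assumed)
# mapping of variable names to values from a solved LP relaxation,
# fix near-zero variables to 0 so the compressed model has fewer
# free variables for the solver to work on.

def prune_zero_variables(solution: dict, tol: float = 1e-6):
    """Split variables into ones fixed at zero and ones kept free."""
    fixed = {v for v, val in solution.items() if abs(val) <= tol}
    free = {v: val for v, val in solution.items() if v not in fixed}
    return fixed, free

relaxation = {"x1": 4.2, "x2": 0.0, "x3": 1e-9, "x4": 7.5}
fixed, free = prune_zero_variables(relaxation)
print(sorted(fixed))  # ['x2', 'x3']
print(sorted(free))   # ['x1', 'x4']
```

Whether such fixing is "safe" (optimality-preserving) depends on the problem structure; the repo's claim is that its learned pruning avoids cutting off the optimum.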

Why is it gaining traction?

It stands out by specializing OpenPangu-7B for OR workflows, delivering automated math modeling from text prompts and safe variable fixing that speeds up solvers without sacrificing optimality. Developers get faster prototyping for supply chain or scheduling apps, plus bash scripts for quick tasks such as generating constraints or pruning results. The NPU focus appeals to Huawei ecosystem users seeking LLM acceleration beyond generic tools.

Who should use this?

OR engineers building optimization apps, like supply chain planners extending LP models with resource constraints or analysts pruning large MILPs for faster solves. Ideal for teams on Ascend hardware fine-tuning LLMs for domain-specific tasks, or researchers evaluating LLM accuracy on numerical benchmarks with built-in execution and majority voting.
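The majority-voting evaluation mentioned above can be sketched as follows; the sampling interface and answer format are assumptions, and the repo's built-in evaluation may differ in detail.

```python
from collections import Counter

# Hypothetical majority-voting sketch: sample the model several times
# on the same benchmark question and keep the most common answer.

def majority_vote(answers):
    """Return the most frequent answer among the sampled outputs."""
    counts = Counter(answers)
    answer, _ = counts.most_common(1)[0]
    return answer

# e.g. five sampled objective values for one benchmark instance
samples = [42.0, 42.0, 41.5, 42.0, 40.0]
print(majority_vote(samples))  # 42.0
```

Voting over executed outputs (rather than raw text) sidesteps formatting noise: two differently worded models that solve to the same objective count as the same answer.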

Verdict

Promising niche tool for OR LLM experiments, but 1.0% credibility and 46 stars signal early maturity: docs are README-heavy with example scripts, and broad tests are lacking. Try it for Ascend setups; skip it if you need a polished, GPU-agnostic alternative.
