Cardinal-Operations/ORLM
ORLM: Training Large Language Models for Optimization Modeling
This project helps operations research professionals and industrial engineers translate real-world business problems into precise mathematical optimization models and executable Python code. You input a natural language description of an optimization problem, and it outputs a structured mathematical model along with a `coptpy` Python script to solve it. This is ideal for those who need to quickly prototype and solve complex resource allocation, scheduling, or logistics challenges.
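To make the output format concrete, here is a hedged sketch of the kind of `coptpy` script ORLM generates from a prompt such as "maximize profit from two products under machine and labor limits." The problem data (coefficients, names) are invented for illustration and are not taken from the repository; only the general `coptpy` API pattern (environment, model, variables, constraints, solve) is assumed.

```python
# Toy illustration of an ORLM-style coptpy script (hypothetical problem data).
# Requires the COPT solver and the coptpy package to be installed.
import coptpy as cp
from coptpy import COPT

env = cp.Envr()
model = env.createModel("production_plan")

# Decision variables: units of product A and B to produce.
x = model.addVar(lb=0.0, name="x")
y = model.addVar(lb=0.0, name="y")

# Resource constraints (illustrative coefficients).
model.addConstr(2 * x + y <= 100, name="machine_hours")
model.addConstr(x + 3 * y <= 90, name="labor_hours")

# Maximize profit.
model.setObjective(40 * x + 30 * y, sense=COPT.MAXIMIZE)
model.solve()

if model.status == COPT.OPTIMAL:
    print("objective:", model.objval)
    print("x =", x.x, "y =", y.x)
```

In practice ORLM would also emit the mathematical formulation (variables, objective, constraints) alongside a script of this shape; running it requires a COPT license.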
237 stars. No commits in the last 6 months.
Use this if you need to transform business requirements into mathematical optimization models and Python code efficiently, without manually writing every formulation from scratch.
Not ideal if you prefer to use solvers other than COPT or require extremely fine-grained control over model formulation that goes beyond what an LLM can provide.
Stars: 237
Forks: 37
Language: Python
License: Apache-2.0
Category:
Last pushed: Sep 18, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Cardinal-Operations/ORLM"
Open to everyone: 100 requests/day with no key required. A free API key raises the limit to 1,000 requests/day.
Higher-rated alternatives
jncraton/languagemodels
Explore large language models in 512MB of RAM
microsoft/unilm
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
haizelabs/verdict
Inference-time scaling for LLMs-as-a-judge.
albertan017/LLM4Decompile
Reverse Engineering: Decompiling Binary Code with Large Language Models
bytedance/Sa2VA
Official Repo For Pixel-LLM Codebase