microsoft/unilm
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
This project offers a collection of large-scale, self-supervised pre-trained AI models that understand and generate content across formats such as text, images, and audio, and across many languages. It helps researchers and machine learning engineers turn raw data (text, images, speech) into highly capable, specialized foundation models.
Use this if you are an AI researcher or engineer working on developing or advancing large-scale AI models for various tasks and data types.
Not ideal if you are looking for a ready-to-use application or a simple tool for basic data analysis, as this focuses on foundational AI model development.
Stars: 22,042
Forks: 2,692
Language: Python
License: MIT
Category:
Last pushed: Jan 23, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/microsoft/unilm"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
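The same endpoint can also be called from a script. Below is a minimal Python sketch assuming the requests library is installed; the response schema is not documented on this page, so the field names in the returned JSON are unknown and the example simply prints the payload.

import requests

# Same endpoint as the curl example above.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/microsoft/unilm"

resp = requests.get(url, timeout=10)
resp.raise_for_status()

# The response schema is not documented here, so print the raw payload
# to see which fields (stars, forks, etc.) the API actually returns.
print(resp.json())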
Related models
jncraton/languagemodels
Explore large language models in 512MB of RAM
haizelabs/verdict
Inference-time scaling for LLMs-as-a-judge.
albertan017/LLM4Decompile
Reverse Engineering: Decompiling Binary Code with Large Language Models
bytedance/Sa2VA
Official Repo For Pixel-LLM Codebase
Cardinal-Operations/ORLM
ORLM: Training Large Language Models for Optimization Modeling