hplt-project/monolingual-multilingual-instruction-tuning
Monolingual or Multilingual Instruction Tuning: Which Makes a Better Alpaca
This project helps researchers and developers study how to build large language models (LLMs) that work well across many languages. It provides code and pre-translated datasets for comparing instruction-tuning strategies, showing what happens when you fine-tune a model on data in a single language versus data in several languages. Anyone building LLMs for multiple language markets, or studying how training-data language affects model behavior, would find this useful.
Use this if you are a machine learning researcher or developer exploring how to create more effective multilingual large language models.
Not ideal if you are a non-technical user looking for a ready-to-use translated AI model or a simple translation tool.
Stars
9
Forks
1
Language
Python
License
—
Category
transformers
Last pushed
Feb 16, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/hplt-project/monolingual-multilingual-instruction-tuning"
Open to everyone: 100 requests/day with no API key. Get a free key for 1,000 requests/day.
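The same endpoint can be called programmatically. Below is a minimal Python sketch using the requests library; the X-API-Key header name is an assumption for illustration, so check the API documentation for the actual authentication scheme.

import requests

# Endpoint taken from the curl example above.
URL = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "transformers/hplt-project/monolingual-multilingual-instruction-tuning"
)

def fetch_quality(api_key: str | None = None) -> dict:
    """Fetch the quality record for this repository.

    Anonymous calls count against the 100/day limit; passing a free
    key (header name assumed here) raises the limit to 1,000/day.
    """
    headers = {"X-API-Key": api_key} if api_key else {}
    resp = requests.get(URL, headers=headers, timeout=10)
    resp.raise_for_status()  # surface 4xx/5xx (e.g. rate-limit) errors
    return resp.json()

if __name__ == "__main__":
    print(fetch_quality())  # anonymous request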
Higher-rated alternatives
DaoD/INTERS
This is the repository for our paper "INTERS: Unlocking the Power of Large Language Models in...
declare-lab/instruct-eval
This repository contains code to quantitatively evaluate instruction-tuned models such as Alpaca...
Haiyang-W/TokenFormer
[ICLR2025 Spotlight🔥] Official Implementation of TokenFormer: Rethinking Transformer Scaling...
hkust-nlp/deita
Deita: Data-Efficient Instruction Tuning for Alignment [ICLR2024]
kehanlu/DeSTA2
Code and model for ICASSP 2025 Paper "Developing Instruction-Following Speech Language Model...