bigscience-workshop/xmtf
Crosslingual Generalization through Multitask Finetuning
This project provides pre-trained language models that understand and generate text across many languages. It targets developers building applications that process or produce multilingual text, offering models for tasks such as summarization and question answering with inputs and outputs in different languages. Its end users are AI/ML engineers and researchers who integrate the models into larger systems.
537 stars. No commits in the last 6 months.
Use this if you are a machine learning engineer or researcher looking for pre-trained, cross-lingual language models to integrate into your natural language processing applications or for further research.
Not ideal if you are an end-user without programming experience seeking a ready-to-use, off-the-shelf application for translation or content generation.
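For the engineers and researchers this listing targets, a minimal usage sketch may help. It assumes the checkpoints from this release are on the Hugging Face Hub under the `bigscience` organization and that `bigscience/mt0-small` is a valid model id (neither is stated in this listing), and that `transformers` and `torch` are installed:

```python
# Sketch: prompting a small cross-lingual checkpoint assumed to come
# from this release. Only `build_prompt` runs without extra packages;
# the model code under the __main__ guard needs `transformers`/`torch`
# and a network connection to download weights.

def build_prompt(text: str, target_lang: str) -> str:
    # The models are instruction-tuned, so a plain natural-language
    # instruction is used as the input.
    return f"Translate to {target_lang}: {text}"

if __name__ == "__main__":
    # Heavy imports kept here so the helper above stays dependency-free.
    from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

    checkpoint = "bigscience/mt0-small"  # assumed model id
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

    inputs = tokenizer(build_prompt("I love you.", "French"),
                       return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=40)
    print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

The prompt helper is separated out so the instruction format can be reused or adapted without loading any model.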
Stars: 537
Forks: 43
Language: Jupyter Notebook
License: Apache-2.0
Category:
Last pushed: Sep 22, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/bigscience-workshop/xmtf"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
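The same endpoint can be called from code. This is a sketch using only the Python standard library; the response schema is not documented here, so the JSON is printed as-is rather than parsed into specific fields:

```python
# Sketch: fetching the quality data for a repo programmatically.
# The endpoint shape is taken from the curl example above; the
# response format is an assumption (JSON is expected but unverified).
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"

def quality_url(owner: str, repo: str) -> str:
    # Build the per-repo endpoint URL.
    return f"{BASE}/{owner}/{repo}"

if __name__ == "__main__":
    # Network call kept under the guard; subject to the 100 requests/day
    # anonymous limit noted above.
    with urlopen(quality_url("bigscience-workshop", "xmtf")) as resp:
        print(json.dumps(json.load(resp), indent=2))
```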
Higher-rated alternatives
OptimalScale/LMFlow
An Extensible Toolkit for Finetuning and Inference of Large Foundation Models. Large Models for All.
adithya-s-k/AI-Engineering.academy
Mastering Applied AI, One Concept at a Time
jax-ml/jax-llm-examples
Minimal yet performant LLM examples in pure JAX
young-geng/scalax
A simple library for scaling up JAX programs
riyanshibohra/TuneKit
Upload your data → Get a fine-tuned SLM. Free.