bigscience-workshop/xmtf

Crosslingual Generalization through Multitask Finetuning

41 / 100 (Emerging)

This project provides pre-trained language models that understand and generate text across many languages. It helps developers building applications that process or create multilingual text, offering models that perform tasks such as summarization and question answering across different input and output languages. The end users are AI/ML engineers and researchers who integrate these models into larger systems.

537 stars. No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher looking for pre-trained, cross-lingual language models to integrate into your natural language processing applications or for further research.

Not ideal if you are an end-user without programming experience seeking a ready-to-use, off-the-shelf application for translation or content generation.
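For the engineer audience described above, a minimal sketch of trying one of the project's released checkpoints from the Hugging Face Hub. It assumes the `transformers` and `torch` packages are installed and uses `bigscience/mt0-small`, the smallest mT0 variant; the prompt is purely illustrative.

```python
# Hedged sketch: load an xmtf-released checkpoint and run one prompt.
# "bigscience/mt0-small" is assumed to fit in local memory.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "bigscience/mt0-small"

def generate_answer(prompt: str) -> str:
    """Tokenize a prompt, generate a short completion, and decode it."""
    tokenizer = AutoTokenizer.from_pretrained(checkpoint)
    model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=20)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)

print(generate_answer("Translate to English: Je t'aime."))
```

Swap in a larger checkpoint (e.g. a bigger mT0 or BLOOMZ variant) for better quality once the pipeline works end to end.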

natural-language-processing machine-learning-engineering cross-lingual-ai large-language-models multilingual-text-generation
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 15 / 25


Stars: 537
Forks: 43
Language: Jupyter Notebook
License: Apache-2.0
Category: llm-fine-tuning
Last pushed: Sep 22, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/bigscience-workshop/xmtf"

Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000 requests/day.
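The curl command above can also be issued from code. A minimal Python sketch using only the standard library is below; the endpoint URL comes from this page, but the JSON response schema is not documented here, so the helper simply returns whatever dict the API sends back (an assumption).

```python
import json
import urllib.request

# Endpoint shown on this page; no API key is needed for up to 100 requests/day.
API_URL = (
    "https://pt-edge.onrender.com/api/v1/quality/"
    "transformers/bigscience-workshop/xmtf"
)

def fetch_quality(url: str = API_URL) -> dict:
    """Fetch the repo's quality record and parse it as JSON."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)
```

Call `fetch_quality()` to retrieve the same scores displayed above as machine-readable data.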