microsoft/encoder-decoder-slm

Efficient encoder-decoder architecture for small language models (≤1B parameters) with cross-architecture knowledge distillation and vision-language capabilities

Score: 32 / 100 (Emerging)

This project helps machine learning engineers and researchers deploy more efficient and performant small language models (SLMs) for tasks such as question answering and summarization. It accepts text, or text plus images, and outputs generated text. The end user is a machine learning practitioner who needs to optimize the performance and efficiency of language models under 1 billion parameters, especially for deployment on edge devices.

No commits in the last 6 months.

Use this if you are developing or deploying small language models (under 1 billion parameters) and need to maximize their performance, throughput, and memory efficiency, particularly for applications on resource-constrained devices.

Not ideal if you are working with very large language models (over 1 billion parameters) or if your primary goal is to compete with the latest state-of-the-art SLMs without architectural optimization concerns.

small-language-models edge-ai model-optimization natural-language-processing computer-vision
Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 9 / 25

How are scores calculated?

Stars: 32
Forks: 3
Language: Python
License: MIT
Last pushed: Feb 07, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/microsoft/encoder-decoder-slm"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
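The same request can be made programmatically. A minimal Python sketch, assuming the endpoint returns a JSON body (the response schema is not documented here, and the `quality_url` / `fetch_quality` helper names are illustrative):

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry: str, repo: str) -> str:
    """Build the quality-endpoint URL, e.g. for transformers/microsoft/encoder-decoder-slm."""
    return f"{API_BASE}/{registry}/{repo}"

def fetch_quality(registry: str, repo: str) -> dict:
    """Fetch the quality report for a repo; assumes a JSON response body."""
    with urllib.request.urlopen(quality_url(registry, repo)) as resp:
        return json.load(resp)

# Equivalent to the curl example above:
# fetch_quality("transformers", "microsoft/encoder-decoder-slm")
```

No API key is sent here, which fits the anonymous 100 requests/day tier; how a key would be attached (header or query parameter) is not specified on this page.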