Haiyang-W/TokenFormer

[ICLR 2025 Spotlight šŸ”„] Official Implementation of TokenFormer: Rethinking Transformer Scaling with Tokenized Model Parameters

Overall score: 41 / 100 (Emerging)

This project offers a novel way to build large-scale AI models, useful for tasks such as language modeling and image classification. It processes raw text or images with a flexible attention mechanism that operates not only over the input tokens but also between inputs and the model's own parameters, which are themselves represented as tokens. Because capacity is added by appending parameter tokens, a trained model can be scaled up incrementally rather than retrained from scratch. It is aimed at AI researchers and engineers who develop and train foundation models.
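The core idea, token-parameter attention, can be illustrated with a minimal PyTorch sketch. This is an illustration only, assuming plain softmax normalization (the paper uses a modified, GeLU-based variant); the class and attribute names here are hypothetical and are not the repo's actual API.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PattentionSketch(nn.Module):
    """Sketch of token-parameter attention: input tokens attend to
    learnable key/value parameter tokens instead of passing through a
    fixed linear projection. Scaling the model up means appending more
    parameter tokens, leaving existing ones in place."""

    def __init__(self, dim: int, num_param_tokens: int):
        super().__init__()
        # Learnable "parameter tokens" playing the role of keys and values.
        self.key_params = nn.Parameter(torch.randn(num_param_tokens, dim))
        self.value_params = nn.Parameter(torch.randn(num_param_tokens, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, dim); inputs act as queries.
        scores = x @ self.key_params.t()      # (batch, seq_len, num_param_tokens)
        weights = F.softmax(scores, dim=-1)   # paper uses a modified normalization
        return weights @ self.value_params    # (batch, seq_len, dim)

# Usage: layer = PattentionSketch(dim=64, num_param_tokens=128)
#        out = layer(torch.randn(2, 16, 64))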

588 stars. No commits in the last 6 months.

Use this if you are developing large AI models and need a highly flexible and scalable architecture that can be incrementally improved without retraining from scratch.

Not ideal if you are looking for an off-the-shelf solution for a specific application without delving into core AI model architecture.

foundation-models large-language-models computer-vision neural-network-architecture model-scaling
Stale (6 months) Ā· No Package Ā· No Dependents
Maintenance: 0 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 15 / 25

How are scores calculated?
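The four subscores above appear to sum directly to the overall score: 0 + 10 + 16 + 15 = 41 out of a possible 100 (4 Ɨ 25).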

Stars: 588
Forks: 43
Language: Python
License: Apache-2.0
Last pushed: Feb 11, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Haiyang-W/TokenFormer"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
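If you prefer Python over curl, a minimal standard-library sketch follows. The response schema is not documented on this page, and neither is how an API key is passed, so the commented-out Authorization header is a hypothetical placeholder.

import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/Haiyang-W/TokenFormer")

req = urllib.request.Request(URL)
# How a free key is supplied is not documented here; this header
# is a hypothetical placeholder, not the API's confirmed mechanism.
# req.add_header("Authorization", "Bearer YOUR_API_KEY")

with urllib.request.urlopen(req) as resp:
    data = json.load(resp)

# Inspect the raw payload first; field names are whatever the API returns.
print(json.dumps(data, indent=2))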