HenryNdubuaku/pete

Parameter-efficient transformer embeddings replace learned embeddings with hardware-aware polynomial expansions of token IDs.

22 / 100 · Experimental

This project helps machine learning engineers and researchers build more efficient Transformer models. It replaces the largest single block of parameters in a Transformer, the learned embedding table, with a polynomial expansion of token IDs. You provide your training data and get a Transformer model that trains faster and uses fewer computational resources while maintaining strong performance, especially on tasks such as sentence-similarity comparison.
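The page does not document the project's exact formulation, so the sketch below only illustrates the general idea: replace the learned vocabulary-by-dimension embedding table with a fixed polynomial basis over normalized token IDs, followed by a small trainable projection. The Chebyshev basis, the class name PolynomialTokenEmbedding, and all sizes are assumptions for illustration, not the repository's actual API.

import torch
import torch.nn as nn

class PolynomialTokenEmbedding(nn.Module):
    # Illustrative sketch: deterministic polynomial features of token IDs stand in
    # for a learned vocab_size x dim embedding table. Only the small projection
    # below carries trainable parameters.
    def __init__(self, vocab_size: int, dim: int, degree: int = 16):
        super().__init__()
        self.vocab_size = vocab_size
        self.degree = degree
        self.proj = nn.Linear(degree, dim, bias=False)  # degree x dim trainable weights

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        # Map IDs to [-1, 1] so the polynomial basis stays numerically well scaled.
        x = 2.0 * token_ids.float() / (self.vocab_size - 1) - 1.0
        # Chebyshev polynomials T_0..T_{degree-1} via T_{k+1}(x) = 2x*T_k(x) - T_{k-1}(x).
        feats = [torch.ones_like(x), x]
        for _ in range(self.degree - 2):
            feats.append(2.0 * x * feats[-1] - feats[-2])
        basis = torch.stack(feats[:self.degree], dim=-1)   # (..., degree)
        return self.proj(basis)                            # (..., dim)

emb = PolynomialTokenEmbedding(vocab_size=50_000, dim=256)
tokens = torch.randint(0, 50_000, (2, 8))
print(emb(tokens).shape)  # torch.Size([2, 8, 256])

Under these assumed sizes the embedding holds degree x dim = 16 x 256 = 4,096 trainable weights instead of vocab_size x dim = 50,000 x 256 = 12.8 million, which is where the parameter and memory savings would come from.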

Use this if you are a machine learning engineer working with Transformer models and need to reduce their memory footprint or speed up training times, especially when dealing with large vocabularies.

Not ideal if you are not working with Transformer models or if your primary goal is to achieve state-of-the-art performance without any efficiency constraints.

natural-language-processing deep-learning-optimization model-efficiency text-embeddings transformer-architecture
No license · No package · No dependents

Maintenance: 10 / 25
Adoption: 4 / 25
Maturity: 8 / 25
Community: 0 / 25

Stars: 8
Forks:
Language: Python
License: none
Last pushed: Feb 06, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/HenryNdubuaku/pete"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
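
For programmatic access, a minimal Python sketch of the same request might look like the following. The response schema is not documented on this page, so the script simply prints whatever JSON the endpoint returns; how an API key would be passed is also not shown here, so it is omitted.

import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/embeddings/HenryNdubuaku/pete"

# Anonymous access is rate limited to 100 requests per day, per the note above.
resp = requests.get(URL, timeout=30)
resp.raise_for_status()
print(resp.json())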