HenryNdubuaku/pete
Parameter-efficient transformer embeddings replace learned embeddings with hardware-aware polynomial expansions of token IDs.
This project helps machine learning engineers and researchers build more efficient Transformer models. It replaces the learned embedding layer, often the largest parameter block in a Transformer with a large vocabulary, with a deterministic, hardware-aware polynomial expansion of token IDs. You provide your training data and get a Transformer that trains faster and uses fewer parameters and less memory while maintaining strong performance, especially on sentence-similarity tasks.
Use this if you are a machine learning engineer working with Transformer models and need to reduce their memory footprint or speed up training times, especially when dealing with large vocabularies.
Not ideal if you are not working with Transformer models or if your primary goal is to achieve state-of-the-art performance without any efficiency constraints.
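To make the idea concrete, here is a minimal PyTorch sketch of a polynomial token embedding. It assumes a Chebyshev-style basis over normalized token IDs followed by a small learned projection; the class name, the degree parameter, and the choice of basis are illustrative assumptions, not the actual pete implementation.

import torch
import torch.nn as nn

class PolynomialEmbedding(nn.Module):
    """Illustrative sketch only: maps token IDs to vectors through a fixed
    Chebyshev polynomial basis instead of a learned vocab_size x d_model table.
    The only trainable parameters are a degree x d_model projection."""

    def __init__(self, vocab_size: int, d_model: int, degree: int = 16):
        super().__init__()
        self.vocab_size = vocab_size
        self.degree = degree
        # Learned mixing matrix whose size is independent of the vocabulary.
        self.proj = nn.Linear(degree, d_model, bias=False)

    def forward(self, token_ids: torch.LongTensor) -> torch.Tensor:
        # Normalize IDs into [-1, 1] so the Chebyshev polynomials stay bounded.
        x = 2.0 * token_ids.float() / (self.vocab_size - 1) - 1.0
        # Build T_0 .. T_{degree-1} with the recurrence T_{n+1} = 2x*T_n - T_{n-1}.
        basis = [torch.ones_like(x), x]
        for _ in range(2, self.degree):
            basis.append(2.0 * x * basis[-1] - basis[-2])
        features = torch.stack(basis[: self.degree], dim=-1)  # (..., degree)
        return self.proj(features)                            # (..., d_model)

# Roughly degree * d_model parameters instead of vocab_size * d_model.
emb = PolynomialEmbedding(vocab_size=50_000, d_model=512, degree=16)
vectors = emb(torch.randint(0, 50_000, (2, 128)))  # shape (2, 128, 512)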
Stars: 8
Forks: —
Language: Python
License: —
Category: —
Last pushed: Feb 06, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/HenryNdubuaku/pete"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
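The same endpoint can be called from Python; this is a minimal sketch mirroring the curl command above. The Authorization header in the comment is an assumption about how an API key would be supplied for the 1,000/day tier.

import requests

# Fetch the quality card for HenryNdubuaku/pete (same endpoint as the curl example).
resp = requests.get(
    "https://pt-edge.onrender.com/api/v1/quality/embeddings/HenryNdubuaku/pete",
    # headers={"Authorization": "Bearer YOUR_KEY"},  # assumed header for the keyed tier
    timeout=10,
)
resp.raise_for_status()
print(resp.json())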
Higher-rated alternatives
MinishLab/model2vec
Fast State-of-the-Art Static Embeddings
AnswerDotAI/ModernBERT
Bringing BERT into modernity via both architecture changes and scaling
tensorflow/hub
A library for transfer learning by reusing parts of TensorFlow models.
Embedding/Chinese-Word-Vectors
100+ pretrained Chinese word vectors
twang2218/vocab-coverage
Analysis of language models' Chinese comprehension capabilities