model-architectures/GRAPE

[ICLR 2026] GRAPE: Group Representational Position Encoding (https://arxiv.org/abs/2512.07805)

Score: 38 / 100 · Emerging

This project offers a framework for researchers and engineers developing advanced large language models (LLMs). It integrates positional information into sequence models, which is crucial for modeling word order and context. By providing a unified treatment of relative positioning, it aims to make Transformer-based models more efficient and capable (an illustrative sketch of this class of mechanism appears below).

Use this if you are a researcher or engineer working on the core architectures of Transformer-based models and need to implement or experiment with state-of-the-art positional encoding mechanisms.

Not ideal if you are an end-user looking for an out-of-the-box LLM application or a data scientist who primarily uses existing model APIs without delving into architectural modifications.
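
GRAPE's exact construction is detailed in the linked paper. As a rough illustration of the class of mechanism it belongs to (relative positional encodings applied inside attention), below is a minimal NumPy sketch of rotary position embedding (RoPE), a widely used relative encoding. This is not GRAPE's algorithm; the function name, the half-split rotation layout, and the base of 10000 are illustrative choices only.

import numpy as np

def rope(x: np.ndarray, base: float = 10000.0) -> np.ndarray:
    # Rotate each channel pair of x (shape: seq_len x dim) by a
    # position-dependent angle. After rotation, the dot product between
    # a query at position m and a key at position n depends only on the
    # relative offset m - n, which is the property relative positional
    # encodings are designed to provide.
    seq_len, dim = x.shape
    assert dim % 2 == 0, "dim must be even to form channel pairs"
    half = dim // 2
    freqs = base ** (-np.arange(half) / half)            # one frequency per pair
    angles = np.arange(seq_len)[:, None] * freqs[None]   # (seq_len, half)
    cos, sin = np.cos(angles), np.sin(angles)
    x1, x2 = x[:, :half], x[:, half:]
    # Standard 2-D rotation applied to every (x1, x2) pair.
    return np.concatenate([x1 * cos - x2 * sin, x1 * sin + x2 * cos], axis=-1)

rng = np.random.default_rng(0)
q = rng.normal(size=(8, 64))   # 8 positions, 64-dim heads
k = rng.normal(size=(8, 64))
scores = rope(q) @ rope(k).T   # attention logits with relative position baked in
print(scores.shape)            # (8, 8)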

natural-language-processing large-language-models deep-learning-research transformer-architecture ai-model-development
No Package · No Dependents
Maintenance 10 / 25
Adoption 9 / 25
Maturity 13 / 25
Community 6 / 25

Stars: 79
Forks: 3
Language: Python
License: Apache-2.0
Last pushed: Mar 10, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/model-architectures/GRAPE"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
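
A minimal Python equivalent of the curl command, assuming the endpoint returns JSON; the response schema is not documented on this page, so the sketch just prints the raw payload.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/embeddings/model-architectures/GRAPE"
resp = requests.get(url, timeout=10)  # anonymous access: 100 requests/day
resp.raise_for_status()               # fail loudly on HTTP errors
print(resp.json())                    # print the quality-data payload as-is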