DatarConsulting/Vashista-Sparse-Attention
Reproducibility notebook for Vashista Sparse Attention: constant-in-context sparse decoding with exponential guarantees.
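This page does not excerpt the notebook itself, so purely as a rough illustration: "constant-in-context" sparse decoding generally means each decode step attends to a fixed-size subset of the KV cache, so per-token cost does not grow with context length. The sketch below uses a generic top-k selection rule; the function name, the selection rule, and all parameters are assumptions, not the repo's actual algorithm.

import numpy as np

def sparse_decode_step(q, K, V, top_k=64):
    # One decode step attending to only the top_k highest-scoring cached
    # keys. Because top_k is fixed, per-token attention cost is O(top_k),
    # i.e. constant in context length n. Generic sketch, not the repo's
    # algorithm.
    scores = K @ q / np.sqrt(q.shape[-1])           # (n,) scaled dot products
    top_k = min(top_k, scores.shape[0])
    idx = np.argpartition(scores, -top_k)[-top_k:]  # indices of top_k keys
    s = scores[idx]
    w = np.exp(s - s.max())
    w /= w.sum()                                    # softmax over selected keys
    return w @ V[idx]                               # (d,) attended output

# Toy usage: 4096 cached tokens with 64-dim heads.
rng = np.random.default_rng(0)
n, d = 4096, 64
K, V = rng.normal(size=(n, d)), rng.normal(size=(n, d))
q = rng.normal(size=d)
print(sparse_decode_step(q, K, V).shape)  # (64,)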
Stars: —
Forks: —
Language: Jupyter Notebook
License: —
Category: —
Last pushed: Feb 14, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/DatarConsulting/Vashista-Sparse-Attention"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
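For scripted access, the same endpoint can be fetched from Python using only the standard library. A minimal sketch: the response schema is not documented on this page, so the example simply pretty-prints whatever JSON comes back, and how an API key would be supplied (header vs. query parameter) is not shown here, so the keyless 100/day tier is assumed.

import json
import urllib.request

# Quality-data endpoint for this repo, taken from the curl example above.
URL = ("https://pt-edge.onrender.com/api/v1/quality/transformers/"
       "DatarConsulting/Vashista-Sparse-Attention")

req = urllib.request.Request(URL, headers={"User-Agent": "example-client"})
with urllib.request.urlopen(req) as resp:
    data = json.load(resp)

# Schema is undocumented here, so just pretty-print the raw response.
print(json.dumps(data, indent=2))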
Higher-rated alternatives
sandseb123/local-lora-cookbook
Fine-tune a local LLM on your own app's data in 15 minutes. Runs entirely on-device, zero...
avnlp/llm-blender
LLM-Blender: Ensembling framework that maximizes LLM performance via pairwise ranking. Employs...
RufelleEmmanuelPactol/Mixture-of-Experts-Transcript-Evaluator
A mixture-of-experts-inspired transcript evaluator using LLM fine-tuning. Contains a routing...
gulabpatel/LLMs
Alpaca, Bloom, DeciLM, Falcon, Vicuna, Llama2, Zephyr, Mistral(MoE), RAG, Reranking, Langchain,...
abhisheksingh-7/cotrend
Extending Decoders with an Integrated Encoder, as Part of Llama-3 Hackathon