allenai/EmbeddingRecycling
Embedding Recycling for Language Models
This project helps machine learning researchers evaluate a technique called "Embedding Recycling" for language models. It provides the necessary scripts and datasets to replicate experimental results for tasks like text classification, named-entity recognition, and question answering. Researchers focused on natural language processing and model efficiency would use this to understand and validate new approaches.
No commits in the last 6 months.
Use this if you are an NLP researcher looking to reproduce or build upon state-of-the-art results in language model efficiency, especially for classification, entity recognition, or question answering tasks.
Not ideal if you are an end-user seeking a ready-to-use application for text analysis or if you are not familiar with machine learning experimentation and Python environments.
Stars
38
Forks
5
Language
Python
License
Apache-2.0
Category
Last pushed
Jul 11, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/embeddings/allenai/EmbeddingRecycling"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
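For scripted use, the curl command above can be wrapped in a small Python helper. This is a minimal sketch: the endpoint path is taken from the curl example, but the response JSON shape and the `X-API-Key` header name are assumptions, not documented here.

```python
# Sketch: query the pt-edge quality API for a repo's metadata.
# The endpoint path comes from the curl example above; the response
# schema and the API-key header name are assumptions.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/embeddings"


def quality_url(owner: str, repo: str) -> str:
    """Build the API URL for a given GitHub repo."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str, api_key: str = "") -> dict:
    # A key is optional (100 requests/day without one, per the note above).
    # "X-API-Key" is a hypothetical header name; check the API docs.
    req = urllib.request.Request(quality_url(owner, repo))
    if api_key:
        req.add_header("X-API-Key", api_key)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


print(quality_url("allenai", "EmbeddingRecycling"))
```

Calling `fetch_quality("allenai", "EmbeddingRecycling")` then performs the same request as the curl example and parses the JSON payload.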
Higher-rated alternatives
shibing624/similarities
Similarities: a toolkit for similarity calculation and semantic search...
explosion/sense2vec
🦆 Contextually-keyed word vectors
chakki-works/chakin
Simple downloader for pre-trained word vectors
sebischair/Lbl2Vec
Lbl2Vec learns jointly embedded label, document and word vectors to retrieve documents with...
pdrm83/sent2vec
How to encode sentences in a high-dimensional vector space, a.k.a., sentence embedding.