apartresearch/specificityplus
👩‍💻 Code for the ACL paper "Detecting Edit Failures in LLMs: An Improved Specificity Benchmark"
This project evaluates how well large language models (LLMs) can be edited to correct specific factual errors without introducing new, incorrect information. It takes a trained LLM and a set of desired factual corrections, then measures whether the edits are specific, i.e., whether they avoid "overcorrecting" by altering unrelated facts. It is intended for LLM researchers and engineers working on the reliability and accuracy of AI models.
No commits in the last 6 months.
Use this if you are a developer or researcher focused on enhancing the precision and reliability of factual updates within large language models and need a robust way to benchmark those edits.
Not ideal if you are an end-user looking to simply apply an LLM for content generation or data analysis, as this tool is for evaluating the underlying model's editing capabilities.
Stars: 20
Forks: 4
Language: Python
License: —
Category: NLP
Last pushed: Jan 19, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/apartresearch/specificityplus"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
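For reference, a minimal Python sketch of the same request using the requests library; the X-API-Key header name used for the keyed 1,000/day tier is an assumption, and the response schema is not documented on this page:

import requests

# Same endpoint as the curl example above.
URL = "https://pt-edge.onrender.com/api/v1/quality/nlp/apartresearch/specificityplus"

def fetch_quality(api_key=None):
    """Fetch the repository's quality record; pass a key for the higher-rate tier."""
    # Assumed header name for authentication; no key is required for 100 requests/day.
    headers = {"X-API-Key": api_key} if api_key else {}
    response = requests.get(URL, headers=headers, timeout=10)
    response.raise_for_status()
    return response.json()

print(fetch_quality())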
Higher-rated alternatives
microsoft/NeMoEval
A Benchmark Tool for Natural Language-based Network Management
FudanSELab/ClassEval
Benchmark ClassEval for class-level code generation.
claws-lab/XLingEval
Code and Resources for the paper, "Better to Ask in English: Cross-Lingual Evaluation of Large...
HICAI-ZJU/SciKnowEval
SciKnowEval: Evaluating Multi-level Scientific Knowledge of Large Language Models
nicolay-r/RuSentRel-Leaderboard
This is an official Leaderboard for the RuSentRel-1.1 dataset originally described in paper...