voidism/DoLa
Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models"
DoLa improves the factual accuracy of text generated by large language models. You supply a pre-trained LLaMA model and your prompts or dataset, and it decodes text that is less prone to hallucinations and factual errors. It is aimed at researchers, AI developers, and anyone building applications that depend on truthful output from LLMs.
544 stars. No commits in the last 6 months.
Use this if you need your large language models to generate more factually correct information without extensive fine-tuning or external knowledge retrieval.
Not ideal if you are working with models other than the LLaMA family or if your primary concern is not factual accuracy but rather creativity or style.
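The core idea named in the paper's title is to contrast the next-token distributions of a late (mature) transformer layer and an earlier (premature) one, so that tokens whose probability grows as factual knowledge accumulates through the layers get boosted. A minimal, self-contained sketch of that contrast step is below; the function name, the fixed choice of early layer, and the `alpha` plausibility cutoff are illustrative assumptions, not the repo's actual API.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over a 1-D array of logits.
    z = logits - logits.max()
    e = np.exp(z)
    return e / e.sum()

def dola_contrast(final_logits, early_logits, alpha=0.1):
    """Toy sketch of layer-contrast decoding (hypothetical helper, not DoLa's API).

    Scores each token by the difference between the final layer's and an
    early layer's log-probabilities, restricted to tokens the final layer
    already finds plausible, and returns the index of the best token.
    """
    p_final = softmax(np.asarray(final_logits, dtype=float))
    p_early = softmax(np.asarray(early_logits, dtype=float))
    # Plausibility constraint: keep only tokens whose final-layer probability
    # is within a factor alpha of the most likely token's probability.
    mask = p_final >= alpha * p_final.max()
    scores = np.where(mask, np.log(p_final) - np.log(p_early), -np.inf)
    return int(np.argmax(scores))

# If the early layer is uninformative (uniform), the contrast reduces to
# picking the final layer's top token.
print(dola_contrast([1.0, 2.0, 3.0], [0.0, 0.0, 0.0]))  # → 2
```

The paper additionally selects the contrasted early layer dynamically per decoding step; the sketch fixes it for brevity.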
Stars: 544
Forks: 68
Language: Python
License: —
Category:
Last pushed: Jan 17, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/voidism/DoLa"
Open to everyone: 100 requests/day with no API key. A free key raises the limit to 1,000 requests/day.
Higher-rated alternatives
THU-BPM/MarkLLM
MarkLLM: An Open-Source Toolkit for LLM Watermarking. (EMNLP 2024 System Demonstration)
git-disl/Vaccine
This is the official code for the paper "Vaccine: Perturbation-aware Alignment for Large...
zjunlp/Deco
[ICLR 2025] MLLM can see? Dynamic Correction Decoding for Hallucination Mitigation
HillZhang1999/ICD
Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced...
kaist-cvml/I-HallA-v1.0
[AAAI 2025] Official Implementation of I-HallA v1.0