voidism/DoLa

Official implementation for the paper "DoLa: Decoding by Contrasting Layers Improves Factuality in Large Language Models"

Quality score: 37 / 100 (Emerging)

When working with large language models, this project helps improve the factual accuracy of the text they generate. Given a pre-trained LLaMA model and your dataset, it produces output that is less prone to hallucinations and factual errors. This is useful for researchers, AI developers, or anyone building applications that depend on truthful content from LLMs.
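The core idea from the paper can be illustrated in a few lines. The sketch below is not the repository's implementation; it only shows the contrastive-decoding principle (score each token by how much its log-probability grows between an earlier "premature" layer and the final "mature" layer), using made-up toy logits:

```python
import math

def log_softmax(xs):
    """Numerically stable log-softmax over a list of logits."""
    m = max(xs)
    lse = m + math.log(sum(math.exp(x - m) for x in xs))
    return [x - lse for x in xs]

def dola_contrast(final_logits, early_logits):
    """Score tokens by how much the final layer boosts them over an early layer.

    This mirrors the paper's contrast log p_final(x) - log p_early(x);
    the real method also selects the early layer dynamically and masks
    low-probability tokens, which is omitted here for brevity.
    """
    f = log_softmax(final_logits)
    e = log_softmax(early_logits)
    return [a - b for a, b in zip(f, e)]

# Toy 4-token vocabulary: the early layer prefers token 0, while the
# final layer has shifted probability mass toward token 2.
early = [3.0, 1.0, 0.5, 0.2]
final = [3.1, 1.0, 2.5, 0.2]
scores = dola_contrast(final, early)
print(scores.index(max(scores)))  # → 2
```

Token 2 wins because its logit grew the most across layers, even though token 0 still has the highest raw probability at the final layer, which is the intuition behind contrasting layers to surface "factual" knowledge that matures late in the network.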

544 stars. No commits in the last 6 months.

Use this if you need your large language models to generate more factually correct information without extensive fine-tuning or external knowledge retrieval.

Not ideal if you are working with models other than the LLaMA family or if your primary concern is not factual accuracy but rather creativity or style.

AI Development · Natural Language Processing · Factual Accuracy · LLM Evaluation · Content Generation
No License · Stale 6m · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 8 / 25
Community 19 / 25


Stars: 544
Forks: 68
Language: Python
License: None
Last pushed: Jan 17, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/voidism/DoLa"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
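For scripted use, the same endpoint can be called from Python. The URL pattern below is taken from the curl example above; the helper names and the JSON response shape are assumptions, since the page does not document a schema:

```python
import json
import urllib.request
from urllib.parse import quote

# Base path taken from the curl example on this page.
API = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry, owner, repo):
    """Build the quality-API URL for a repository (helper name is hypothetical)."""
    return f"{API}/{quote(registry)}/{quote(owner)}/{quote(repo)}"

def fetch_quality(registry, owner, repo, timeout=10):
    """Fetch the quality record as parsed JSON.

    Anonymous access is rate-limited to 100 requests/day per the note above;
    the response fields are whatever the service returns (undocumented here).
    """
    with urllib.request.urlopen(quality_url(registry, owner, repo), timeout=timeout) as r:
        return json.load(r)

print(quality_url("transformers", "voidism", "DoLa"))
# → https://pt-edge.onrender.com/api/v1/quality/transformers/voidism/DoLa
```

With a free API key (1,000 requests/day), you would presumably pass it as a header or query parameter; check the service's own docs for the exact mechanism.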