HillZhang1999/ICD

Code & Data for our Paper "Alleviating Hallucinations of Large Language Models through Induced Hallucinations"

Quality score: 38 / 100 (Emerging)

This project helps AI developers and researchers improve the factual accuracy of their large language models (LLMs). It implements Induce-then-Contrast Decoding (ICD): a deliberately hallucination-prone model is first induced, and its predictions are then penalized during decoding so the base LLM is steered away from factual errors. The result is more truthful generation, suitable for applications where accuracy is critical.
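For intuition, here is a minimal sketch of this contrast-style decoding. It is not the repository's actual code: the model paths, the penalty weight alpha, the exact logit combination, and the greedy loop are illustrative assumptions.

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

base_name = "meta-llama/Llama-2-7b-chat-hf"        # factually stronger base model
evil_name = "path/to/hallucination-induced-model"  # hypothetical induced model

tokenizer = AutoTokenizer.from_pretrained(base_name)
base = AutoModelForCausalLM.from_pretrained(base_name)
evil = AutoModelForCausalLM.from_pretrained(evil_name)

alpha = 0.5  # assumed contrast strength; higher penalizes hallucinations more

@torch.no_grad()
def contrastive_generate(prompt: str, max_new_tokens: int = 64) -> str:
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    for _ in range(max_new_tokens):
        base_logits = evil_logits = None
        base_logits = base(ids).logits[:, -1, :]
        evil_logits = evil(ids).logits[:, -1, :]
        # Amplify the base model and subtract the hallucination-prone model,
        # so tokens the induced model favors are down-weighted.
        logits = (1 + alpha) * base_logits - alpha * evil_logits
        next_id = logits.argmax(dim=-1, keepdim=True)  # greedy for simplicity
        if next_id.item() == tokenizer.eos_token_id:
            break
        ids = torch.cat([ids, next_id], dim=-1)
    return tokenizer.decode(ids[0], skip_special_tokens=True)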

No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher trying to reduce factual errors in the text generated by your large language models, especially open-source models like Llama2-7B-Chat or Mistral-7B-Instruct.

Not ideal if you are working with models other than LLMs, or if you're not comfortable modifying decoding strategies and running command-line scripts.

Tags: AI model development, Natural Language Processing, LLM fine-tuning, Factuality improvement, Generative AI research
Badges: Stale (6 months), No Package, No Dependents

Maintenance: 0 / 25
Adoption: 8 / 25
Maturity: 16 / 25
Community: 14 / 25

Stars: 69
Forks: 10
Language: Python
License: MIT
Last pushed: Feb 27, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/HillZhang1999/ICD"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.
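The same endpoint can be called from Python. This is a hedged example using the requests library; only the no-key tier is shown, since how an API key is passed is not documented on this page, and the shape of the JSON response is assumed.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/HillZhang1999/ICD"
resp = requests.get(url, timeout=30)
resp.raise_for_status()
print(resp.json())  # quality scores and repo metadata as JSON (assumed shape)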