d-f/llm-summarization

LoRA supervised fine-tuning, RLHF (PPO), and RAG with Llama-3-8B on the TL;DR summarization dataset

Score: 19 / 100 (Experimental)

This project helps developers and researchers evaluate different prompt engineering strategies for summarizing text. You input raw text documents and various prompt templates, and it outputs automatically generated summaries along with ROUGE metrics, allowing you to compare which prompts produce the best summaries against human-written examples. The primary users are machine learning engineers or data scientists working on text summarization tasks.

No commits in the last 6 months.

Use this if you are a machine learning engineer or data scientist evaluating the performance of different prompts for text summarization using Llama 3 models.

Not ideal if you are an end-user simply looking for a ready-to-use summarization tool without deep technical involvement in model training or prompt evaluation.

natural-language-processing large-language-models prompt-engineering text-summarization model-evaluation
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 6 / 25


Stars: 14
Forks: 1
Language: Python
License: None
Last pushed: Feb 02, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/d-f/llm-summarization"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
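The same endpoint can be called from code. A minimal Python sketch, assuming only the URL pattern shown in the curl example above (the fields of the JSON response are not documented here and are left to the caller to inspect):

```python
# Sketch: fetch quality data for a project from the pt-edge API.
# The URL pattern is taken from the curl example; the response
# schema is an assumption and should be inspected by the caller.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"


def quality_url(ecosystem: str, owner_repo: str) -> str:
    """Build the API URL for a project (pattern from the curl example)."""
    return f"{BASE}/{ecosystem}/{owner_repo}"


def fetch_quality(ecosystem: str, owner_repo: str) -> dict:
    """GET the endpoint and decode the JSON body."""
    with urllib.request.urlopen(quality_url(ecosystem, owner_repo)) as resp:
        return json.load(resp)


# Example (performs a live network request):
# data = fetch_quality("transformers", "d-f/llm-summarization")
```

Without an API key this stays within the 100-requests/day anonymous limit; a free key raises that to 1,000/day.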