DanielSc4/RewardLM

Reward a Language Model with pancakes 🥞

13 / 100 (Experimental)

This project helps machine-learning engineers and researchers refine the behavior of large language models. It takes a pre-trained generative language model and task-specific datasets as input, and supports fine-tuning or reinforcement learning to steer the model toward desired outputs. It can also assess the toxicity of the model's generated responses, providing metrics for understanding and improving safety.

No commits in the last 6 months.

Use this if you need to adapt a generative language model to specific tasks or content guidelines without extensive human-feedback loops, and want to measure the toxicity of its output.

Not ideal if you are looking for a pre-packaged, ready-to-deploy solution for end users, or if you have no experience with language-model training workflows.

generative-AI-fine-tuning language-model-safety AI-content-moderation machine-learning-engineering natural-language-processing-research
No License · Stale (6m) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 5 / 25
Maturity: 8 / 25
Community: 0 / 25


Stars: 12
Forks:
Language: Jupyter Notebook
License:
Last pushed: Sep 28, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/DanielSc4/RewardLM"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
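The same endpoint can also be called from Python. A minimal sketch, assuming only the URL shown above: the `quality_url` helper is illustrative, and the shape of the JSON response is not documented here.

```python
import urllib.request
import json

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry: str, repo: str) -> str:
    """Build the API URL for a registry (e.g. 'transformers') and a repo slug."""
    return f"{BASE}/{registry}/{repo}"

url = quality_url("transformers", "DanielSc4/RewardLM")
# Uncomment to fetch live data (counts against the 100 requests/day limit):
# data = json.load(urllib.request.urlopen(url))
```

With a free API key, requests would presumably carry the key as a header or query parameter; check the API's own documentation for the exact mechanism.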