shreyansh26/LLM-Sampling

A collection of various LLM sampling methods implemented in pure PyTorch

24
/ 100
Experimental

This tool helps machine learning engineers and researchers control how large language models generate text. You input a prompt and select a sampling method, and it outputs text generated by a Hugging Face model, letting you experiment with different sampling strategies to trade off creativity against coherence. It's designed for those who want precise control over text generation for specific applications.
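To illustrate the kind of sampling strategy the repo implements, here is a minimal, dependency-free sketch of top-p (nucleus) sampling with temperature. This is a generic illustration, not code from the repository; the function name and signature are hypothetical, and the repo's actual implementations operate on PyTorch tensors.

```python
import math
import random

def top_p_sample(logits, p=0.9, temperature=1.0, rng=random):
    """Sample a token index from `logits` using nucleus (top-p) sampling.

    Hypothetical helper for illustration only; the real repo works on
    torch tensors rather than Python lists.
    """
    # Temperature-scaled softmax (subtract the max for numerical stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(l - m) for l in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Keep the smallest set of highest-probability tokens whose mass >= p.
    order = sorted(range(len(probs)), key=lambda i: probs[i], reverse=True)
    kept, mass = [], 0.0
    for i in order:
        kept.append(i)
        mass += probs[i]
        if mass >= p:
            break

    # Renormalize over the kept tokens and draw one index.
    kept_mass = sum(probs[i] for i in kept)
    r = rng.random() * kept_mass
    for i in kept:
        r -= probs[i]
        if r <= 0:
            return i
    return kept[-1]
```

Lower `p` shrinks the candidate set (at `p=0` it degenerates to greedy argmax decoding), while higher `temperature` flattens the distribution before truncation.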

No commits in the last 6 months.

Use this if you are a machine learning engineer or researcher who needs to experiment with different text generation strategies for large language models to achieve specific output characteristics like creativity, factual accuracy, or JSON adherence.

Not ideal if you are looking for a no-code solution or simply want to use an LLM without needing to understand or control the underlying sampling mechanisms.

LLM-fine-tuning text-generation AI-research model-experimentation prompt-engineering
No License Stale 6m No Package No Dependents
Maintenance 0 / 25
Adoption 7 / 25
Maturity 8 / 25
Community 9 / 25

How are scores calculated?

Stars

28

Forks

3

Language

Python

License

None
Last pushed

Dec 09, 2024

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/shreyansh26/LLM-Sampling"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.