zjunlp/EasyInstruct
[ACL 2024] An Easy-to-use Instruction Processing Framework for LLMs.
This tool helps researchers and AI practitioners systematically experiment with different ways to instruct Large Language Models (LLMs). It takes a seed set of instructions, then applies techniques such as Self-Instruct and Evol-Instruct to generate new instructions, evaluates their quality, and builds optimized prompts. The output is a refined set of instructions and prompts designed to get better results from LLMs such as GPT-4 or Claude.
409 stars. No commits in the last 6 months. Available on PyPI.
Use this if you are developing or fine-tuning LLMs and need a structured way to create, evaluate, and select high-quality instructions for training or prompting.
Not ideal if you are an end-user simply looking to interact with an LLM for daily tasks, as this is a framework for developing with LLMs.
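For a sense of the workflow, here is a minimal sketch of generating and selecting instructions with EasyInstruct, based on the usage pattern in the project README; the exact class names and arguments are assumptions and may differ across versions.

# Minimal sketch of the EasyInstruct pipeline (pip install easyinstruct).
# Class names and arguments follow the project's README examples and are
# assumptions; they may differ across versions.
from easyinstruct import SelfInstructGenerator, GPTScoreSelector
from easyinstruct.utils.api import set_openai_key

# The generators call the OpenAI API, so a key is required.
set_openai_key("YOUR-OPENAI-KEY")

# Generate new instructions from a small set of seed instructions
# using the Self-Instruct technique.
generator = SelfInstructGenerator(num_instructions_to_generate=10)
generator.generate()

# Score the generated instructions with an LLM judge and keep only
# the high-quality ones.
selector = GPTScoreSelector()
selector.process()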
Stars: 409
Forks: 37
Language: Python
License: MIT
Category:
Last pushed: Dec 23, 2024
Commits (30d): 0
Dependencies: 21
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/zjunlp/EasyInstruct"
The API is open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000 requests/day.
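The same data can be fetched from a script; below is a minimal sketch using Python's standard library. The response schema is not documented here, so the sketch prints the raw payload rather than assuming field names.

import json
import urllib.request

# Fetch the quality data for zjunlp/EasyInstruct; no API key is
# needed for up to 100 requests/day.
url = "https://pt-edge.onrender.com/api/v1/quality/transformers/zjunlp/EasyInstruct"
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)

# Field names are not documented above; inspect the payload to see
# the actual schema.
print(json.dumps(data, indent=2))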
Related repositories
TinyLLaVA/TinyLLaVA_Factory
A Framework of Small-scale Large Multimodal Models
rese1f/MovieChat
[CVPR 2024] MovieChat: From Dense Token to Sparse Memory for Long Video Understanding
haotian-liu/LLaVA
[NeurIPS'23 Oral] Visual Instruction Tuning (LLaVA) built towards GPT-4V level capabilities and beyond.
NVlabs/Eagle
Eagle: Frontier Vision-Language Models with Data-Centric Strategies
DAMO-NLP-SG/Video-LLaMA
[EMNLP 2023 Demo] Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding