pratyushasharma/laser
The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction
This project helps machine learning engineers and researchers improve the reasoning capabilities of large language models (LLMs) without extensive retraining. By applying a technique called Layer-Selective Rank Reduction (LASER), you select specific layers of an existing LLM and replace their weight matrices with low-rank approximations. The output is a modified LLM that performs better on question-answering tasks and benchmarks, which is valuable for anyone working with or deploying LLMs for complex reasoning.
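The core operation behind LASER is truncating a weight matrix to its top singular components. Below is a minimal NumPy sketch of that idea; the function name `low_rank_approx` and the rank-fraction parameter `rho` are illustrative choices, not this repository's actual API.

```python
import numpy as np

def low_rank_approx(W: np.ndarray, rho: float) -> np.ndarray:
    """Keep only the top-k singular components of W, where
    k = max(1, int(rho * min(W.shape))). In LASER, rho and the
    choice of which layer's matrix to reduce are the knobs that
    get searched over."""
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    k = max(1, int(rho * min(W.shape)))
    return U[:, :k] @ np.diag(S[:k]) @ Vt[:k, :]

# Reduce a random 8x8 matrix to 25% of its full rank.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))
W_reduced = low_rank_approx(W, 0.25)
print(W_reduced.shape)                      # (8, 8) — same shape, lower rank
print(np.linalg.matrix_rank(W_reduced))     # 2
```

The shape is unchanged, so the reduced matrix drops into the model in place of the original; only its effective rank shrinks.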
390 stars. No commits in the last 6 months.
Use this if you are a machine learning researcher or engineer looking to boost the performance of pre-trained large language models on specific reasoning tasks and benchmarks.
Not ideal if you are an end-user simply looking to use an improved language model without needing to modify its underlying architecture.
Stars
390
Forks
34
Language
Python
License
MIT
Last pushed
Jul 09, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/pratyushasharma/laser"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
agentscope-ai/Trinity-RFT
Trinity-RFT is a general-purpose, flexible and scalable framework designed for reinforcement...
OpenRLHF/OpenRLHF
An Easy-to-use, Scalable and High-performance Agentic RL Framework based on Ray (PPO & DAPO &...
zjunlp/EasyEdit
[ACL 2024] An Easy-to-use Knowledge Editing Framework for LLMs.
huggingface/alignment-handbook
Robust recipes to align language models with human and AI preferences
hyunwoongko/nanoRLHF
nanoRLHF: from-scratch journey into how LLMs and RLHF really work.