pratyushasharma/laser

The Truth Is In There: Improving Reasoning in Language Models with Layer-Selective Rank Reduction

Score: 41/100 (Emerging)

This project helps machine learning engineers and researchers improve the reasoning capabilities of large language models (LLMs) without extensive retraining. The technique, Layer-Selective Rank Reduction (LASER), takes an existing LLM and replaces the weight matrices of selected layers with low-rank approximations, with no additional training. The result is a modified LLM that performs better on question-answering tasks and benchmarks, which is valuable for anyone working with or deploying LLMs for complex reasoning.
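The core operation behind LASER is truncated SVD: keep only the top singular components of a weight matrix and discard the rest. The sketch below shows that idea on a plain NumPy matrix; it is an illustration of the general technique, not the repo's actual API, and the function name and `rank_fraction` parameter are this sketch's own choices.

```python
import numpy as np

def low_rank_approx(W, rank_fraction=0.1):
    """Return a rank-reduced copy of W, keeping only the top singular components.

    rank_fraction: fraction of the full rank to retain (illustrative parameter;
    the LASER paper searches over layers and retained-rank fractions).
    """
    U, S, Vt = np.linalg.svd(W, full_matrices=False)
    k = max(1, int(rank_fraction * min(W.shape)))
    # Reconstruct W from only the k largest singular values/vectors.
    return (U[:, :k] * S[:k]) @ Vt[:k, :]

rng = np.random.default_rng(0)
W = rng.standard_normal((64, 64))      # stand-in for one layer's weight matrix
W_reduced = low_rank_approx(W, rank_fraction=0.1)
print(np.linalg.matrix_rank(W_reduced))  # rank drops from 64 to 6
```

In LASER this replacement is applied only to specific matrices in specific layers (selected empirically), which is what the "layer-selective" part of the name refers to.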

390 stars. No commits in the last 6 months.

Use this if you are a machine learning researcher or engineer looking to boost the performance of pre-trained large language models on specific reasoning tasks and benchmarks.

Not ideal if you are an end-user simply looking to use an improved language model without needing to modify its underlying architecture.

large-language-models LLM-optimization natural-language-processing model-fine-tuning AI-reasoning
Stale 6m No Package No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 15 / 25


Stars: 390
Forks: 34
Language: Python
License: MIT
Last pushed: Jul 09, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/pratyushasharma/laser"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000/day.