nuochenpku/LLaMA_Analysis

This is the official repository for our paper: Is Bigger and Deeper Always Better? Probing LLaMA Across Scales and Layers

Score: 26 / 100 (Experimental)

This project offers an in-depth look into how Large Language Models like LLaMA actually 'think' and process information internally. Instead of just looking at what the model generates, it uses specially designed multiple-choice questions to test LLaMA's core abilities in areas like calculation, reasoning, and factual knowledge. Researchers and AI developers can use the insights to better understand model strengths and weaknesses, informing the design of future LLMs.

No commits in the last 6 months.

Use this if you are an AI researcher or developer trying to understand the intrinsic capabilities of large language models like LLaMA, beyond just their final outputs.

Not ideal if you are looking for a tool to directly improve or fine-tune an LLM for a specific application, as this is primarily an analytical and research-oriented project.

Tags: LLM-research, natural-language-processing, model-analysis, AI-development, computational-linguistics
No License · Stale (6 months) · No Package · No Dependents
Maintenance: 0 / 25
Adoption: 7 / 25
Maturity: 8 / 25
Community: 11 / 25


Stars: 31
Forks: 4
Language: Python
License: None
Last pushed: Jan 13, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/nuochenpku/LLaMA_Analysis"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
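If you prefer to call the endpoint from code rather than curl, a minimal Python sketch follows. The URL structure is taken directly from the curl example above; the helper name `quality_url` and the `registry`/`repo` parameter split are assumptions for illustration, and the shape of the JSON response is not documented here, so any parsing of it would also be an assumption.

```python
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry: str, repo: str) -> str:
    """Build the quality-endpoint URL for a repo.

    The path layout (registry, then "owner/name") mirrors the curl
    example above. `quote` percent-encodes unusual characters while
    leaving the "/" between owner and repo name intact.
    """
    return f"{BASE}/{registry}/{quote(repo, safe='/')}"

url = quality_url("transformers", "nuochenpku/LLaMA_Analysis")
print(url)
# → https://pt-edge.onrender.com/api/v1/quality/transformers/nuochenpku/LLaMA_Analysis

# To actually fetch the data (no key needed within the free tier):
# import json, urllib.request
# data = json.load(urllib.request.urlopen(url))
```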