nuochenpku/LLaMA_Analysis
This is the official repository for our paper: Is Bigger and Deeper Always Better? Probing LLaMA Across Scales and Layers
This project offers an in-depth look at how large language models like LLaMA 'think' and process information internally. Rather than examining only what the model generates, it uses specially designed multiple-choice questions to probe LLaMA's core abilities in areas such as calculation, reasoning, and factual knowledge. Researchers and AI developers can use these insights to better understand model strengths and weaknesses, informing the design of future LLMs.
No commits in the last 6 months.
Use this if you are an AI researcher or developer trying to understand the intrinsic capabilities of large language models like LLaMA, beyond just their final outputs.
Not ideal if you are looking for a tool to directly improve or fine-tune an LLM for a specific application, as this is primarily an analytical and research-oriented project.
Stars
31
Forks
4
Language
Python
License
—
Category
Last pushed
Jan 13, 2024
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/nuochenpku/LLaMA_Analysis"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
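The curl command above can also be wrapped in a short script. The sketch below, a minimal example using only the Python standard library, builds the endpoint URL from the ecosystem/owner/repo path shown in the curl example; the JSON response schema is not documented on this page, so the fetch helper returns the parsed response as-is without assuming any particular fields.

```python
import json
import urllib.request

# Base URL taken from the curl example above.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"


def build_url(ecosystem: str, owner: str, repo: str) -> str:
    """Construct the per-repository quality endpoint URL."""
    return f"{API_BASE}/{ecosystem}/{owner}/{repo}"


def fetch_quality(ecosystem: str, owner: str, repo: str) -> dict:
    """Fetch and parse the quality record for one repository (requires network).

    The response fields are not documented here, so no schema is assumed.
    """
    with urllib.request.urlopen(build_url(ecosystem, owner, repo), timeout=10) as resp:
        return json.load(resp)


# Example usage (makes a network call, subject to the 100 requests/day limit):
#   record = fetch_quality("transformers", "nuochenpku", "LLaMA_Analysis")
```

With a free API key, the same request can be sent with an authorization header via `urllib.request.Request`; the exact header name is not specified on this page.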
Higher-rated alternatives
hkproj/pytorch-llama
LLaMA 2 implemented from scratch in PyTorch
4AI/LS-LLaMA
A Simple but Powerful SOTA NER Model | Official Code For Label Supervised LLaMA Finetuning
luchangli03/export_llama_to_onnx
export llama to onnx
ayaka14732/llama-2-jax
JAX implementation of the Llama 2 model
harleyszhang/lite_llama
A lightweight LLaMA-like LLM inference framework based on Triton kernels.