eminorhan/llm-memory
Memory experiments with LLMs
This project helps AI/NLP researchers study how large language models (LLMs) learn and retain information, analogous to human memory. By supplying specific text data, you can train or finetune an LLM and then evaluate its ability to recognize or recall previously seen information. It is aimed primarily at researchers interested in the cognitive-science aspects of LLMs.
No commits in the last 6 months.
Use this if you are an AI researcher or cognitive scientist looking to conduct experiments on the memory capabilities (recognition, recall, retention) of large language models using few-shot learning.
Not ideal if you are looking for a tool that applies LLMs to general content generation, classification, or other typical NLP tasks outside of memory research.
Stars
10
Forks
1
Language
Python
License
—
Category
Last pushed
Mar 31, 2023
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/eminorhan/llm-memory"
Open to everyone: 100 requests/day with no key needed, or get a free key for 1,000/day.
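The same data can be fetched programmatically. Below is a minimal Python sketch built from the curl example above: the base URL and the `transformers`/`eminorhan/llm-memory` path segments come from that example, while the `Authorization: Bearer` header for keyed access is an assumption — consult the API's own documentation for the actual key mechanism.

```python
# Sketch of querying the repo-quality API shown in the curl example.
# Endpoint path is taken from the example; the key header is assumed.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the endpoint URL for a repository's quality data."""
    return f"{BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str, api_key: str = None):
    """Fetch and decode the JSON payload (100 req/day without a key)."""
    req = urllib.request.Request(quality_url(category, owner, repo))
    if api_key:  # header name is hypothetical; check the API docs
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

if __name__ == "__main__":
    # Prints the URL used in the curl example above.
    print(quality_url("transformers", "eminorhan", "llm-memory"))
```

The URL builder is kept separate from the network call so the request can be inspected or reused with other HTTP clients.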
Higher-rated alternatives
jncraton/languagemodels
Explore large language models in 512MB of RAM
microsoft/unilm
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
haizelabs/verdict
Inference-time scaling for LLMs-as-a-judge.
albertan017/LLM4Decompile
Reverse Engineering: Decompiling Binary Code with Large Language Models
bytedance/Sa2VA
Official Repo For Pixel-LLM Codebase