eminorhan/llm-memory

Memory experiments with LLMs

Score: 20 / 100 · Experimental

This project helps researchers studying large language models (LLMs) understand how these models learn and retain information, analogous to human memory. By supplying specific text data, you can train or finetune an LLM and then evaluate its ability to recognize or recall previously seen information. It is aimed primarily at AI/NLP researchers interested in the cognitive-science aspects of LLMs.

No commits in the last 6 months.

Use this if you are an AI researcher or cognitive scientist looking to run experiments on the memory capabilities of large language models (recognition, recall, retention) using few-shot learning.

Not ideal if you are looking for a tool to apply LLMs to general content generation, classification, or other typical NLP tasks outside of memory research.

LLM-research cognitive-modeling few-shot-learning AI-experimentation natural-language-processing-research
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 7 / 25

How are scores calculated?

Stars: 10
Forks: 1
Language: Python
License: none
Last pushed: Mar 31, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/eminorhan/llm-memory"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000/day.
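The same endpoint can be queried programmatically. A minimal Python sketch is below; the URL path is taken verbatim from the curl command above, but the shape of the JSON response is an assumption, so inspect the actual payload before depending on specific fields.

```python
# Minimal sketch of fetching this page's quality data via the public API.
# The endpoint path comes from the curl example above; the response's JSON
# field names are NOT documented here, so this only parses and prints it.
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(section: str, repo: str) -> str:
    """Build the API URL, e.g. quality_url('transformers', 'eminorhan/llm-memory')."""
    return f"{BASE}/{section}/{repo}"

def fetch_quality(section: str, repo: str) -> dict:
    """GET the quality record as parsed JSON (100 requests/day without a key)."""
    with urllib.request.urlopen(quality_url(section, repo)) as resp:
        return json.load(resp)

print(quality_url("transformers", "eminorhan/llm-memory"))
# Network call, uncomment to run:
# print(fetch_quality("transformers", "eminorhan/llm-memory"))
```

If you have an API key, it can typically be supplied per the service's docs; the snippet above uses only the keyless tier described on this page.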