IAAR-Shanghai/FastMem

Fast Memorization of Prompt Improves Context Awareness of Large Language Models (Findings of EMNLP 2024)

22 / 100
Experimental

This tool helps researchers and AI practitioners improve how Large Language Models (LLMs) understand and use contextual information. By efficiently fine-tuning a small part of the LLM, it helps the model "memorize" prompt details without overfitting. The result is an LLM that is better at responding accurately based on the given context in tasks like Q&A and summarization.
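The core idea, freezing most weights and updating only a small subset on the prompt itself, can be illustrated with a toy sketch. This is a plain-Python stand-in, not FastMem's actual code; the parameter names, the trainable subset, and the single-step update rule are purely illustrative:

```python
# Toy sketch of targeted fine-tuning: keep most "parameters" frozen
# and apply a gradient step only to a small named subset, mimicking
# prompt memorization before answering. All names are illustrative.

def memorize_prompt(params, trainable, grads, lr=0.1):
    """One update step that touches only the trainable subset."""
    return {
        name: (value - lr * grads.get(name, 0.0)) if name in trainable else value
        for name, value in params.items()
    }

# Scalar stand-ins for parameter groups of a real model.
params = {"embed": 1.0, "attn": 2.0, "ffn_last": 3.0}
updated = memorize_prompt(
    params,
    trainable={"ffn_last"},          # only this subset is updated
    grads={"embed": 0.5, "ffn_last": 0.5},
)
# Frozen groups are untouched; only ffn_last moves.
```

In a real setting the frozen/trainable split would be expressed by disabling gradients on most modules, but the pattern is the same: the prompt drives updates to a small slice of the model, limiting the risk of overfitting.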

No commits in the last 6 months.

Use this if you are working with Large Language Models and need to significantly boost their ability to comprehend and accurately follow context from prompts in tasks like question answering or summarization.

Not ideal if you need a general-purpose fine-tuning solution that updates the entire model; this project focuses narrowly on context awareness through targeted prompt memorization.

Large Language Models, Natural Language Processing, Contextual AI, Generative AI, Model Fine-tuning
Stale (6m), No Package, No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 0 / 25


Stars: 24
Forks:
Language: Python
License: Apache-2.0
Last pushed: Oct 22, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/IAAR-Shanghai/FastMem"

Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
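The same data can be fetched programmatically. Below is a minimal Python sketch using only the standard library; the endpoint path comes from the curl example above, but the response schema is not documented here, so the sketch simply returns the parsed JSON without assuming any field names:

```python
# Minimal sketch of calling the quality API with Python's stdlib.
# Endpoint taken from the curl example; the JSON structure of the
# response is an assumption (undocumented on this page).
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(registry, owner, repo):
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{registry}/{owner}/{repo}"

def fetch_quality(registry, owner, repo):
    """Fetch and parse the quality data for one repository."""
    with urllib.request.urlopen(quality_url(registry, owner, repo)) as resp:
        return json.load(resp)

url = quality_url("transformers", "IAAR-Shanghai", "FastMem")
# data = fetch_quality("transformers", "IAAR-Shanghai", "FastMem")
```

The fetch call is left commented out so the sketch runs without network access; uncomment it to hit the live endpoint, keeping the 100 requests/day limit in mind.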