joeljang/continual-knowledge-learning
[ICLR 2022] Towards Continual Knowledge Learning of Language Models
This repository provides code to reproduce the experimental results of 'Towards Continual Knowledge Learning of Language Models' (ICLR 2022). It lets researchers and ML engineers download the paper's datasets and pre-trained model checkpoints; the output is a trained language model that can be evaluated on several benchmarks to demonstrate continual knowledge learning.
No commits in the last 6 months.
Use this if you are a researcher or ML engineer interested in continually updating large language models with new information without forgetting old knowledge.
Not ideal if you want a plug-and-play application, or a library to drop into an existing system without hands-on experience in model training.
Stars
91
Forks
7
Language
Python
License
—
Category
Last pushed
Oct 11, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/joeljang/continual-knowledge-learning"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
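The same endpoint can be queried from Python. A minimal sketch using only the standard library, assuming the endpoint returns JSON (the response schema is not documented here, so the result is returned as an untyped dict; how an API key is passed is not specified above, so this sketch uses the keyless free tier):

```python
import json
import urllib.request

# Endpoint shown in the curl example above.
URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/joeljang/continual-knowledge-learning")

def fetch_quality(url: str = URL) -> dict:
    """Fetch the repo-quality record and parse it as JSON.

    Raises urllib.error.URLError / HTTPError on network or rate-limit
    failures (the free tier allows 100 requests/day).
    """
    with urllib.request.urlopen(url, timeout=10) as resp:
        return json.load(resp)
```

Usage: `print(json.dumps(fetch_quality(), indent=2))` pretty-prints whatever record the service returns.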
Higher-rated alternatives
ExtensityAI/symbolicai
A neurosymbolic perspective on LLMs
TIGER-AI-Lab/MMLU-Pro
The code and data for "MMLU-Pro: A More Robust and Challenging Multi-Task Language Understanding...
deep-symbolic-mathematics/LLM-SR
[ICLR 2025 Oral] This is the official repo for the paper "LLM-SR" on Scientific Equation...
microsoft/interwhen
A framework for verifiable reasoning with language models.
zhudotexe/fanoutqa
Companion code for FanOutQA: Multi-Hop, Multi-Document Question Answering for Large Language...