UIC-Liu-Lab/CPT
[EMNLP 2022] Continual Training of Language Models for Few-Shot Learning
This project helps researchers and engineers continually update language models with new domain-specific knowledge without losing existing capabilities. Given an existing language model and a sequence of unlabeled text corpora from different domains, it produces an enhanced model ready for few-shot learning on downstream tasks. It suits machine learning practitioners who work with evolving datasets or need to adapt models to new, specialized fields.
No commits in the last 6 months.
Use this if you need to incrementally add knowledge to a pre-trained language model from new datasets while maintaining its performance on previously learned tasks, especially for improving few-shot learning.
Not ideal if you are looking for a pre-trained model for a single, static task without the need for incremental domain adaptation or continual learning.
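As a rough illustration of the workflow described above, here is a generic continual post-training loop. This is not CPT's actual method or training script: it simply continues masked-LM training of one model over a sequence of domain corpora using Hugging Face transformers and datasets. The model name, corpus file names, and hyperparameters are all assumptions, and this plain loop deliberately omits the forgetting-prevention machinery that is the paper's actual contribution.

# Minimal sketch of sequential domain-adaptive MLM training (NOT CPT itself).
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)
from datasets import load_dataset

model_name = "roberta-base"  # assumption: a RoBERTa-style base model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

# Hypothetical sequence of unlabeled domain corpora (plain-text files).
domain_corpora = ["restaurant.txt", "acl_papers.txt", "ai_papers.txt"]

collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

for i, corpus in enumerate(domain_corpora):
    ds = load_dataset("text", data_files=corpus)["train"]
    ds = ds.map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=128),
        batched=True,
        remove_columns=["text"],
    )
    args = TrainingArguments(
        output_dir=f"cpt_domain_{i}",
        per_device_train_batch_size=16,
        num_train_epochs=1,
        learning_rate=1e-4,
        report_to="none",
    )
    # Keep training the SAME model on each new domain in turn; the real CPT
    # method adds mechanisms to avoid catastrophic forgetting, which this
    # plain loop does not.
    Trainer(model=model, args=args, data_collator=collator, train_dataset=ds).train()
    model.save_pretrained(f"cpt_domain_{i}")

After the final domain, the saved checkpoint is what you would then fine-tune or prompt for few-shot tasks; with this naive loop, unlike CPT, performance on earlier domains may degrade.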
Stars: 44
Forks: 1
Language: Python
License: —
Category: —
Last pushed: Feb 13, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/UIC-Liu-Lab/CPT"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
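If you prefer Python to curl, here is a minimal sketch of the same request using the requests library. The endpoint URL is taken from the command above; the response schema is not documented here, so the code only prints the raw JSON rather than assuming any fields.

# Sketch: fetch the same quality data from Python (schema is an assumption).
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/transformers/UIC-Liu-Lab/CPT"
resp = requests.get(url, timeout=10)  # no key needed at the free tier
resp.raise_for_status()
print(resp.json())  # inspect the payload to see what fields are available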
Higher-rated alternatives
galilai-group/stable-pretraining
Reliable, minimal and scalable library for pretraining foundation and world models
CognitiveAISystems/MAPF-GPT
[AAAI-2025] This repository contains MAPF-GPT, a deep learning-based model for solving MAPF...
UKPLab/gpl
Powerful unsupervised domain adaptation method for dense retrieval. Requires only unlabeled...
larslorch/avici
Amortized Inference for Causal Structure Learning, NeurIPS 2022
svdrecbd/mhc-mlx
MLX + Metal implementation of mHC: Manifold-Constrained Hyper-Connections by DeepSeek-AI.