jaehyun513/P2T
Official implementation of Tabular Transfer Learning via Prompting LLMs (COLM 2024).
This project helps data scientists and machine learning engineers transfer knowledge from an existing tabular dataset to improve predictions on a new, related one. Given your prior tabular data and the new dataset, it uses large language models (LLMs) to produce better predictive models for the new data. This is useful for anyone working with structured data who needs accurate predictive models quickly, even when the new dataset is small.
No commits in the last 6 months.
Use this if you need to build a predictive model for a new dataset, but you have similar data from a past project that could help improve performance.
Not ideal if you primarily work with unstructured data like text or images, or if you don't have any related prior datasets to leverage.
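The core idea of prompting LLMs with tabular data can be sketched as follows. This is a minimal, hypothetical illustration of serializing rows into natural-language prompts so an LLM can carry knowledge from a source table to a related target table; the column names, values, and prompt format are invented for illustration and are not the repository's actual prompting scheme.

```python
# Hypothetical sketch: serialize tabular rows as "column is value" text so
# source-dataset rows can serve as in-context examples for a target row.
# All names and values here are illustrative assumptions.

def row_to_prompt(columns, row, label=None):
    """Serialize one tabular row as a 'column is value' sentence."""
    parts = [f"{c} is {v}" for c, v in zip(columns, row)]
    text = ", ".join(parts) + "."
    if label is not None:
        text += f" Outcome: {label}."
    return text

# A labeled source row becomes a demonstration; the target row is the query.
source_cols = ["age", "income"]
demo = row_to_prompt(source_cols, [42, "high"], label="approved")
query = row_to_prompt(source_cols, [31, "low"])
prompt = f"{demo}\n{query} Outcome:"
print(prompt)
```

The serialized prompt would then be sent to an LLM, which completes the outcome for the target row; see the repository for the method actually used.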
Stars: 13
Forks: —
Language: Jupyter Notebook
License: —
Category: —
Last pushed: Aug 06, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/jaehyun513/P2T"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000/day.
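The curl call above can also be scripted. Below is a minimal Python sketch that builds the per-repository endpoint URL (taken from the curl example) and fetches the JSON payload; the response schema and the API-key header name are assumptions, not documented here.

```python
# Sketch of calling the directory's quality API for a repository.
# The endpoint path comes from the curl example; the "Authorization"
# header and the response's JSON structure are assumptions.
import json
from urllib.request import Request, urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner, repo):
    """Build the per-repository endpoint URL."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner, repo, api_key=None):
    """Fetch and decode the JSON payload (schema not documented here)."""
    req = Request(quality_url(owner, repo))
    if api_key:  # header name is a guess; check the API docs
        req.add_header("Authorization", f"Bearer {api_key}")
    with urlopen(req, timeout=10) as resp:
        return json.load(resp)

print(quality_url("jaehyun513", "P2T"))
```

Without a key this stays within the 100 requests/day limit; pass `api_key` once you have a free key for the higher quota.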
Higher-rated alternatives
genlm/genlm-control
Controlled text generation with programmable constraints
Intelligent-CAT-Lab/AlphaTrans
Artifact repository for the paper "AlphaTrans: A Neuro-Symbolic Compositional Approach for...
madaan/self-refine
LLMs can generate feedback on their work, use it to improve the output, and repeat this process...
PCI-ORG/PCI-Personnel
Policy Change Index for Personnel (PCI-Personnel)
hemangjoshi37a/o1-meta-prompt
This project aims to emulate some of the advanced reasoning capabilities seen in models like...