Xiaohao-Yang/LLM-ITL
[ACL 2025 Main] Neural Topic Modeling with Large Language Models in the Loop
This project helps researchers and data analysts improve the interpretability of topics discovered in large collections of text documents. It takes raw text data, uses a neural topic model to extract core themes, and then brings large language models into the loop to refine those themes into more coherent, human-readable topics. This is ideal for anyone who needs to make sense of vast amounts of unstructured text and communicate clear insights, such as social scientists, market researchers, or content strategists.
No commits in the last 6 months.
Use this if you need to extract clear, human-understandable themes from large text datasets and want to leverage the power of LLMs to make those themes more meaningful.
Not ideal if your primary goal is only document classification or basic keyword extraction, as this tool is specifically designed for enhanced topic interpretability.
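The workflow described above, neural topic extraction followed by LLM refinement, can be sketched at its simplest as prompting an LLM with a topic's top words. This is an illustrative sketch only: `build_refinement_prompt` is a hypothetical helper, not the prompt actually used by LLM-ITL.

```python
def build_refinement_prompt(top_words: list[str]) -> str:
    """Format a topic's top words into an instruction for an LLM.

    Hypothetical helper for illustration; LLM-ITL's real prompting
    and refinement logic may differ.
    """
    word_list = ", ".join(top_words)
    return (
        "The following words describe a single topic discovered by a "
        f"topic model: {word_list}.\n"
        "Suggest a short, human-readable label for this topic and "
        "remove any words that do not fit the theme."
    )

# Example: top words for a sports-related topic, with one intruder word.
prompt = build_refinement_prompt(["game", "team", "season", "player", "banana"])
```

The LLM's reply (a label plus a cleaned word list) would then be fed back to guide the topic model, which is what "in the loop" refers to.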
Stars: 11
Forks: 1
Language: Python
License: —
Category:
Last pushed: Jun 24, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/Xiaohao-Yang/LLM-ITL"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
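The same endpoint can be queried from Python using only the standard library. A minimal sketch, assuming the response body is JSON (the schema is not documented here, so `fetch_quality` simply returns the parsed payload as-is):

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def build_url(owner: str, repo: str) -> str:
    """Construct the quality-endpoint URL for a given GitHub repo."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record, assuming a JSON response body."""
    with urllib.request.urlopen(build_url(owner, repo)) as resp:
        return json.load(resp)


# Usage (performs a network request, subject to the 100/day limit):
#     data = fetch_quality("Xiaohao-Yang", "LLM-ITL")
#     print(json.dumps(data, indent=2))
```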
Higher-rated alternatives
jncraton/languagemodels
Explore large language models in 512MB of RAM
microsoft/unilm
Large-scale Self-supervised Pre-training Across Tasks, Languages, and Modalities
haizelabs/verdict
Inference-time scaling for LLMs-as-a-judge.
albertan017/LLM4Decompile
Reverse Engineering: Decompiling Binary Code with Large Language Models
bytedance/Sa2VA
Official Repo For Pixel-LLM Codebase