zzxslp/RadBERT
Code and models for the paper "RadBERT: Adapting Transformer-based Language Models to Radiology"
This project provides language models specialized for processing and understanding radiology reports. Given unstructured radiology report text, the models capture radiology-specific language better than general biomedical models. Radiologists, medical researchers, and AI developers in healthcare can use them to extract meaningful insights from diagnostic imaging text.
No commits in the last 6 months.
Use this if you need to analyze large volumes of radiology reports to identify key medical findings, conditions, or procedures with higher accuracy than general biomedical language models.
Not ideal if your primary focus is on medical texts outside of radiology, such as general patient notes or scientific literature in other biomedical fields.
Stars: 8
Forks: —
Language: —
License: —
Category: —
Last pushed: Oct 18, 2022
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/nlp/zzxslp/RadBERT"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
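A minimal Python sketch of calling this endpoint. The `{owner}/{repo}` URL pattern is inferred from the single curl example above, and the JSON response schema is not documented here, so the actual fetch is left as a comment.

```python
import json
import urllib.request

# Base endpoint as shown in the curl example above; the owner/repo
# path suffix is an assumption based on that one example.
BASE = "https://pt-edge.onrender.com/api/v1/quality/nlp"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-data URL for a given GitHub repository."""
    return f"{BASE}/{owner}/{repo}"

url = quality_url("zzxslp", "RadBERT")
print(url)

# To actually fetch (network call, subject to the 100 requests/day limit):
# data = json.load(urllib.request.urlopen(url))
```

With a free API key, the same URL can be called 1,000 times per day; how the key is passed (header vs. query parameter) is not specified on this page.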
Higher-rated alternatives
fidelity/textwiser
[AAAI 2021] TextWiser: Text Featurization Library
RandolphVI/Multi-Label-Text-Classification
Multi-label text classification based on neural networks.
ThilinaRajapakse/pytorch-transformers-classification
Based on the Pytorch-Transformers library by HuggingFace. To be used as a starting point for...
ntumlgroup/LibMultiLabel
A library for multi-class and multi-label classification
xuyige/BERT4doc-Classification
Code and source for paper ``How to Fine-Tune BERT for Text Classification?``