CaseDrive/publaynet-models
Trained Detectron2 object detection models for document layout analysis, based on the PubLayNet dataset
This project offers pre-trained models that can automatically analyze the layout of research papers and articles. You input an image of a document page, and it identifies and outlines distinct elements like text blocks, lists, figures, and tables. This is ideal for researchers, librarians, or data scientists working with large collections of academic papers who need to extract or categorize content based on its visual structure.
No commits in the last 6 months.
Use this if you need to automatically identify and categorize different layout elements (text, figures, lists) within scanned or digitized research papers.
Not ideal if your primary goal is to extract the actual text content (OCR) without needing to understand the document's visual structure.
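PubLayNet's COCO-style annotations cover five region types: text, title, list, table, and figure, with category ids 1 through 5. A minimal sketch of mapping predicted category ids back to layout labels is below; note that a Detectron2 predictor may emit 0-indexed class ids depending on how the dataset was registered during training, so the 1-indexed mapping here is an assumption to check against the model's own metadata.

```python
# PubLayNet category ids as used in the dataset's COCO-style annotations.
# Assumption: the model reports these 1-indexed ids; some Detectron2 setups
# re-index classes from 0, in which case add 1 to each prediction first.
PUBLAYNET_CLASSES = {1: "text", 2: "title", 3: "list", 4: "table", 5: "figure"}

def label_regions(class_ids):
    """Map predicted category ids to PubLayNet layout labels."""
    return [PUBLAYNET_CLASSES.get(cid, "unknown") for cid in class_ids]

if __name__ == "__main__":
    # e.g. a page with a text block, a table, and a figure
    print(label_regions([1, 4, 5]))
```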
Stars: 29
Forks: 2
Language: Python
License: —
Category:
Last pushed: Apr 16, 2023
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/CaseDrive/publaynet-models"
Open to everyone: 100 requests/day with no key. A free key raises the limit to 1,000/day.
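The same request can be made from Python with only the standard library. This is a sketch assuming the endpoint returns JSON; the response schema is not documented here, so the code prints whatever fields come back rather than assuming any.

```python
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"

def quality_url(owner, repo):
    """Build the per-repository quality endpoint URL used by the curl example."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner, repo, timeout=10):
    """Fetch the quality record as a dict (assumes a JSON response body)."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=timeout) as resp:
        return json.load(resp)

if __name__ == "__main__":
    data = fetch_quality("CaseDrive", "publaynet-models")
    print(json.dumps(data, indent=2))
```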
Higher-rated alternatives
Psarpei/Multi-Type-TD-TSR
Extracting Tables from Document Images using a Multi-stage Pipeline for Table Detection and...
Layout-Parser/layout-parser
A Unified Toolkit for Deep Learning Based Document Image Analysis
Sudhanshu1304/table-transformer
🔍 Table Extraction Tool: A powerful open-source solution combining OCR and computer vision for...
asagar60/TableNet-pytorch
Pytorch Implementation of TableNet
ses4255/Versatile-OCR-Program
Multi-modal OCR pipeline optimized for ML training (text, figure, math, tables, diagrams)