jeremyarancio/VLM-Batch-Deployment
Batch Deployment for Document Parsing with AWS Batch & Qwen-2.5-VL
This project helps operations managers and data-processing teams automatically extract structured information from large volumes of document images, such as reports and invoices. You provide a collection of digital document images, and it outputs the key fields extracted from each document as structured data, typically in a JSONL file.
No commits in the last 6 months.
Use this if you need to automatically extract structured data from a high volume of digital document images, such as invoices or reports, and want to leverage powerful vision language models for accuracy.
Not ideal if you need to process unstructured text documents without images, or if your primary goal is real-time, single-document processing rather than batch processing.
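The JSONL output mentioned above (one JSON object per line, one per document) can be sketched as follows. The field names (`invoice_number`, `total`) are hypothetical illustrations, not the project's actual extraction schema:

```python
import json

# Hypothetical extracted records -- field names are illustrative only,
# not the schema this project actually emits.
records = [
    {"file": "invoice_001.png", "invoice_number": "INV-2025-001", "total": "129.90"},
    {"file": "invoice_002.png", "invoice_number": "INV-2025-002", "total": "54.00"},
]

# JSONL = one JSON object per line; easy to stream, append, and split
# across batch workers.
with open("extractions.jsonl", "w", encoding="utf-8") as f:
    for record in records:
        f.write(json.dumps(record) + "\n")

# Reading it back line by line:
with open("extractions.jsonl", encoding="utf-8") as f:
    parsed = [json.loads(line) for line in f]

print(parsed[0]["invoice_number"])
```

The line-per-record layout is what makes JSONL a good fit for batch jobs: each worker can emit lines independently and the outputs can be concatenated without any merge step.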
Stars: 49
Forks: 20
Language: Jupyter Notebook
License: —
Category: —
Last pushed: Apr 28, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/mlops/jeremyarancio/VLM-Batch-Deployment"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
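The curl command above can also be called from Python with the standard library alone. This is a minimal sketch: the URL pattern is taken from the command shown, but the response schema is not documented here, so the code simply prints the raw JSON:

```python
import json
import urllib.request

# Base endpoint taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/mlops"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record for a repo; response fields are undocumented
    here, so callers should inspect the returned dict."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    data = fetch_quality("jeremyarancio", "VLM-Batch-Deployment")
    print(json.dumps(data, indent=2))
```

At 100 unauthenticated requests per day, a script like this is fine for spot checks; for anything heavier, get the free key mentioned above.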
Higher-rated alternatives
mosecorg/mosec
A high-performance ML model serving framework, offers dynamic batching and CPU/GPU pipelines to...
SemiAnalysisAI/InferenceX-app
Dashboard for InferenceX™, Open Source Continuous Inference
amanparuthi8/gpu-llm-india-2026
Should you buy a DGX Spark or rent H100s? Run on Mac Mini or TAALAS cluster? Full cost &...
DunaSpice/JetsonMind
Production-ready AI inference system for NVIDIA Jetson devices with MCP integration, Docker...