patil-suraj/onnx_transformers

Accelerated NLP pipelines for fast inference on CPU. Built with Transformers and ONNX runtime.

Score: 46 / 100 (Emerging)

This tool speeds up applications that use natural language processing (NLP), especially on standard CPUs. You provide text for tasks such as sentiment analysis or question answering, and it returns results with significantly reduced processing time. It's aimed at developers integrating text understanding into their software.
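A minimal usage sketch: the project mirrors the Hugging Face `pipeline` API and adds an `onnx` flag that runs the model through ONNX Runtime for faster CPU inference. The exact call signature should be checked against the project's README; the example text and printed result shape are illustrative assumptions.

```python
# Hedged sketch of accelerated inference with onnx_transformers.
# Assumes `pip install onnx_transformers` per the project README.

def main():
    from onnx_transformers import pipeline

    # The first call exports the model to an ONNX graph and caches it;
    # subsequent calls reuse the cached graph for faster CPU inference.
    nlp = pipeline("sentiment-analysis", onnx=True)

    result = nlp("ONNX Runtime makes Transformers fast on CPU!")
    print(result)  # e.g. a list of {'label': ..., 'score': ...} dicts

if __name__ == "__main__":
    main()
```

Dropping `onnx=True` falls back to the plain PyTorch pipeline, which makes it easy to benchmark the speedup on your own hardware.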

127 stars. No commits in the last 6 months.

Use this if you are a developer looking to accelerate the performance of your text-based AI applications, such as those performing sentiment analysis or question answering, on CPU hardware.

Not ideal if you are not a developer, or if you need to deploy your NLP models on hardware other than CPUs, since this tool is optimized specifically for CPU inference.

natural-language-processing text-analytics machine-learning-deployment performance-optimization
Flags: Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 10 / 25
Maturity 16 / 25
Community 20 / 25


Stars: 127
Forks: 27
Language: Jupyter Notebook
License: Apache-2.0
Last pushed: Dec 05, 2020
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/patil-suraj/onnx_transformers"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
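The curl call above can also be made from Python with only the standard library. The endpoint path is taken from this page; the JSON field names returned by the service are not documented here, so the fetch helper simply returns the parsed response as-is.

```python
# Hedged sketch: query the quality-score API from Python.
# Only the endpoint URL is taken from the page; response fields are unknown.
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(catalog: str, owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{catalog}/{owner}/{repo}"

def fetch_quality(catalog: str, owner: str, repo: str) -> dict:
    # Anonymous access allows 100 requests/day; a free key raises it to 1,000.
    with urlopen(quality_url(catalog, owner, repo)) as resp:
        return json.load(resp)

print(quality_url("transformers", "patil-suraj", "onnx_transformers"))
```

Keeping URL construction in its own function makes it easy to batch-check several repositories while staying under the daily request limit.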