iwatake2222/play_with_tflite
Sample projects for TensorFlow Lite in C++ with delegates such as GPU, EdgeTPU, XNNPACK, NNAPI
This project offers sample C++ code for running TensorFlow Lite machine learning models efficiently across platforms such as Linux, Windows, and Android. It helps embedded-systems developers and IoT engineers integrate pre-trained AI models into their applications: you provide an image, video, or camera feed as input, and the models perform tasks such as image classification, with results output directly by the application.
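The workflow the project's samples wrap (load a `.tflite` model, build an interpreter, fill the input tensor, invoke, read the output) can be sketched with the stock TensorFlow Lite C++ API. This is a minimal illustration only, assuming a single float32 input and output and a model file at `model.tflite`; the project's own helper classes and preprocessing differ:

```cpp
#include <cstdio>
#include <memory>

#include "tensorflow/lite/interpreter.h"
#include "tensorflow/lite/kernels/register.h"
#include "tensorflow/lite/model.h"

int main() {
  // Load the model from disk (the path is an assumption for illustration).
  auto model = tflite::FlatBufferModel::BuildFromFile("model.tflite");
  if (!model) return 1;

  // Build an interpreter using the built-in op resolver.
  tflite::ops::builtin::BuiltinOpResolver resolver;
  std::unique_ptr<tflite::Interpreter> interpreter;
  tflite::InterpreterBuilder(*model, resolver)(&interpreter);
  if (!interpreter || interpreter->AllocateTensors() != kTfLiteOk) return 1;

  // Fill the first input tensor (assumes a float32 model).
  float* input = interpreter->typed_input_tensor<float>(0);
  // ... copy preprocessed image pixels into `input` here ...
  (void)input;

  // Run inference and read the first output tensor.
  if (interpreter->Invoke() != kTfLiteOk) return 1;
  const float* output = interpreter->typed_output_tensor<float>(0);
  std::printf("first output value: %f\n", output[0]);
  return 0;
}
```

Hardware delegates such as GPU or XNNPACK are attached to the interpreter via `Interpreter::ModifyGraphWithDelegate` before running inference, which is the acceleration path this repository's samples demonstrate.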
381 stars. No commits in the last 6 months.
Use this if you are a C++ developer working on embedded systems, IoT devices, or mobile applications and need to deploy TensorFlow Lite models with optimized hardware acceleration.
Not ideal if you are looking for a high-level Python library or a ready-to-use application, as this project requires C++ development and compilation.
Stars
381
Forks
79
Language
C++
License
Apache-2.0
Category
ml-frameworks
Last pushed
Jul 18, 2022
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/iwatake2222/play_with_tflite"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000/day.
Higher-rated alternatives
microsoft/onnxruntime
ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
onnx/onnx
Open standard for machine learning interoperability
PINTO0309/onnx2tf
Self-Created Tools to convert ONNX files (NCHW) to TensorFlow/TFLite/Keras format (NHWC). The...
NVIDIA/TensorRT
NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This...
onnx/onnxmltools
ONNXMLTools enables conversion of models to ONNX