surajiitd/NVIDIA_Jetson_Inference
This repo contains model compression (using TensorRT) and documentation for running various deep learning models on NVIDIA Jetson Orin and Jetson Nano (aarch64 architecture) devices.
This project helps you take advanced AI models, like those for understanding images or mapping environments, and run them efficiently on small, specialized NVIDIA Jetson computers. It shows you how to optimize these models so they can perform complex tasks, such as robotic navigation or real-time object detection, directly on devices like drones or smart cameras. Operations engineers, robotics developers, or researchers working with edge AI devices would use this.
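The optimization step described above typically means converting a trained model into a TensorRT engine on the Jetson itself. A minimal sketch, assuming a model already exported to ONNX and the `trtexec` tool that ships with TensorRT/JetPack (the file names and flags here are illustrative, not taken from the repo):

```shell
# Build an FP16 TensorRT engine from an ONNX model on the target Jetson.
# model.onnx is a hypothetical exported model; the engine file it produces
# is specific to the TensorRT version and GPU it was built on.
trtexec --onnx=model.onnx \
        --saveEngine=model_fp16.engine \
        --fp16
```

Engines are built per-device, so this command is run on the Jetson that will serve inference, not on a desktop build machine.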
No commits in the last 6 months.
Use this if you need to deploy sophisticated deep learning models onto compact, energy-efficient NVIDIA Jetson hardware for real-time applications.
Not ideal if you are looking to train deep learning models or run them on cloud servers or standard desktop GPUs.
Stars: 9
Forks: 3
Language: Makefile
License: —
Category:
Last pushed: May 26, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/surajiitd/NVIDIA_Jetson_Inference"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
roboflow/inference
Turn any computer or edge device into a command center for your computer vision projects.
roboflow/roboflow-python
The official Roboflow Python package. Manage your datasets, models, and deployments. Roboflow...
dusty-nv/jetson-inference
Hello AI World guide to deploying deep-learning inference networks and deep vision primitives...
hailo-ai/tappas
High-performance, optimized pre-trained template AI application pipelines for systems using Hailo devices
Apra-Labs/ApraPipes
A pipeline framework for developing video and image processing applications. Supports multiple...