dusty-nv/jetson-inference
Hello AI World guide to deploying deep-learning inference networks and deep vision primitives with TensorRT and NVIDIA Jetson.
This guide helps robotics engineers and embedded system developers deploy deep learning models on NVIDIA Jetson devices for real-time vision tasks. It takes pre-trained deep learning models or models you've trained yourself and optimizes them to run efficiently on Jetson GPUs. The output is a highly performant AI application capable of tasks like object detection, image classification, or pose estimation from live camera feeds or other sensor data.
Use this if you need to integrate and optimize AI vision capabilities, such as identifying objects or classifying images, directly onto an NVIDIA Jetson embedded device for applications like robotics, smart cameras, or industrial automation.
Not ideal if you are looking for a cloud-based AI solution or if your target hardware is not an NVIDIA Jetson device.
Stars: 8,750
Forks: 3,091
Language: C++
License: MIT
Category: ML frameworks
Last pushed: Oct 16, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/dusty-nv/jetson-inference"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
Related frameworks
roboflow/inference
Turn any computer or edge device into a command center for your computer vision projects.
roboflow/roboflow-python
The official Roboflow Python package. Manage your datasets, models, and deployments. Roboflow...
hailo-ai/tappas
High-performance, optimized pre-trained template AI application pipelines for systems using Hailo devices
Apra-Labs/ApraPipes
A pipeline framework for developing video and image processing applications. Supports multiple...
open-edge-platform/geti
Build computer vision models in a fraction of the time and with less data.