surajiitd/NVIDIA_Jetson_Inference

This repo contains model compression (using TensorRT) and documentation for running various deep learning models on the NVIDIA Jetson Orin and Jetson Nano (aarch64 architecture).

27 / 100 · Experimental

This project helps you take advanced AI models, such as those for image understanding or environment mapping, and run them efficiently on small, specialized NVIDIA Jetson computers. It shows how to optimize these models so they can perform complex tasks, such as robotic navigation or real-time object detection, directly on devices like drones or smart cameras. It is aimed at operations engineers, robotics developers, and researchers working with edge AI devices.

No commits in the last 6 months.

Use this if you need to deploy sophisticated deep learning models onto compact, energy-efficient NVIDIA Jetson hardware for real-time applications.

Not ideal if you are looking to train deep learning models or run them on cloud servers or standard desktop GPUs.

edge-ai robotics embedded-vision deep-learning-deployment real-time-inference
No License · Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 8 / 25
Community 14 / 25


Stars: 9
Forks: 3
Language: Makefile
License: None
Last pushed: May 26, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/surajiitd/NVIDIA_Jetson_Inference"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.