Unity-Technologies/com.unity.perception
Perception toolkit for sim2real training and validation in Unity
This toolkit helps computer vision engineers and researchers create large, diverse synthetic datasets for training and validating AI models. It renders virtual environments and objects in Unity and outputs annotated image datasets, which can then be used to improve how AI systems recognize objects and scenes in the real world. It is aimed at professionals building and testing computer vision applications.
990 stars. No commits in the last 6 months.
Use this if you need vast amounts of labeled image data to train computer vision models, but collecting and annotating real-world data is too costly, time-consuming, or impractical.
Not ideal if you primarily work with real-world image data and do not need simulated environments or objects.
Stars: 990
Forks: 186
Language: C#
License: —
Category: —
Last pushed: Nov 08, 2024
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/Unity-Technologies/com.unity.perception"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
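For programmatic access, the curl call above can be wrapped in a small client. This is a minimal sketch using only the Python standard library; the `quality_url` and `fetch_quality` helpers and the `Authorization: Bearer` header are assumptions for illustration, since the page documents only the endpoint URL and rate limits, not the response schema or the key-passing convention.

```python
import json
import urllib.request

# Base endpoint taken from the curl example on this page.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the API URL for a given category and repository (hypothetical helper)."""
    return f"{API_BASE}/{category}/{owner}/{repo}"

def fetch_quality(category: str, owner: str, repo: str, api_key: str = "") -> dict:
    """Fetch the quality record as parsed JSON.

    Passing a key via a Bearer header is an assumption; consult the API's
    own documentation for the actual authentication mechanism.
    """
    req = urllib.request.Request(quality_url(category, owner, repo))
    if api_key:
        req.add_header("Authorization", f"Bearer {api_key}")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)

# Reproduces the URL from the curl example above.
print(quality_url("computer-vision", "Unity-Technologies", "com.unity.perception"))
```

Without a key this stays within the 100 requests/day anonymous limit, so a cache or backoff is advisable if you poll many repositories.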
Related tools
stereolabs/zed-unity
ZED SDK Unity plugin
CMU-Perceptual-Computing-Lab/openpose_unity_plugin
OpenPose's Unity Plugin for Unity users
evo-biomech/replicAnt
replicAnt - generating annotated images of animals in complex environments with Unreal Engine
Unity-Technologies/SynthDet
SynthDet - An end-to-end object detection pipeline using synthetic data
wtct-hungary/UnityVision-iOS
This native plugin enables Unity to take advantage of specific features of Core-ML and Vision...