aliencaocao/TIL-2022

Champion at Brainhack TIL 2022: Team 8000SGD_CAT

Score: 21 / 100 (Experimental)

This project provides a solution for operating autonomous mobile robots in specific environments. It processes real-time sensor data, such as audio and video feeds, to perform tasks like detecting human states (standing or fallen) and recognizing emotions in speech, and uses the results to drive robot navigation and task execution. It is useful for robotics enthusiasts, researchers, and anyone developing intelligent robot applications.

No commits in the last 6 months.

Use this if you are developing autonomous robots that need to navigate complex spaces and react to environmental cues, such as detecting human states or understanding speech emotions.

Not ideal if your project focuses purely on theoretical AI research without a robotics implementation or if you need a solution for static image/audio classification without real-time robot control.

robotics autonomous-navigation computer-vision speech-emotion-recognition AI-robot-control
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 5 / 25
Maturity 16 / 25
Community 0 / 25


Stars: 13
Forks:
Language: Python
License: AGPL-3.0
Last pushed: Apr 06, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/aliencaocao/TIL-2022"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
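The same request can be made from Python. A minimal sketch using only the standard library, assuming nothing beyond the URL pattern shown in the curl command above (the shape of the JSON response is not documented here, so the fetch is left commented out):

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL for a repository, mirroring the curl example."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("computer-vision", "aliencaocao", "TIL-2022")
print(url)

# Requires network access; uncomment to call the live API:
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
#     print(data)
```

Building the URL separately from the fetch keeps the example testable offline and makes it easy to swap in a keyed request once you have obtained a free API key.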