google-research/pangea

Panoramic Graph Environment Annotation toolkit, for collecting audio and text annotations in panoramic graph environments such as Matterport3D and StreetLearn.

Score: 38 / 100 (Emerging)

This toolkit helps researchers and data scientists collect detailed human annotations for navigating virtual 3D environments. It lets you define a path through a panoramic virtual space such as Matterport3D, then record an annotator's voice as they describe how to follow that path, together with their movements. It outputs the voice recordings, manual transcriptions, and precise records of the annotator's virtual camera poses.

No commits in the last 6 months.

Use this if you need to generate high-quality datasets of human navigation instructions and corresponding movements within realistic 3D virtual environments for AI research.

Not ideal if you're looking for an off-the-shelf solution for simple 2D map annotations or if you don't have access to panoramic 3D environment datasets.

AI-training-data 3D-environment-simulation human-computer-interaction-research navigation-instruction-collection multimodal-data-annotation
Stale (6 months) · No package · No dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 16 / 25


Stars: 20
Forks: 6
Language: JavaScript
License: Apache-2.0
Last pushed: Mar 05, 2021
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/nlp/google-research/pangea"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
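The endpoint above appears to follow a `/api/v1/quality/<category>/<owner>/<repo>` path pattern. As a minimal sketch (the pattern is inferred from the single example curl command, and the response schema is not documented here), you could build such URLs programmatically:

```python
# Sketch: construct the quality-API URL shown above.
# Assumption: the path pattern /api/v1/quality/<category>/<owner>/<repo>
# is inferred from the example curl command and may not cover all endpoints.
BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Return the API URL for a repository's quality data."""
    return f"{BASE}/{category}/{owner}/{repo}"

print(quality_url("nlp", "google-research", "pangea"))
# -> https://pt-edge.onrender.com/api/v1/quality/nlp/google-research/pangea
```

The URL can then be fetched with any HTTP client (e.g. `curl` as shown above), subject to the per-day request limits.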