xmartlabs/spoter-embeddings

Create embeddings from sign pose videos using Transformers

Quality score: 32/100 (Emerging)

This project helps sign language researchers and educators analyze and understand sign language by converting videos of sign poses into numerical 'embedding' vectors. These vectors capture the essence of a sign, making it easy to compare different signs or identify similar ones. The input is skeletal keypoint data extracted from sign language videos, and the output is a compact numerical representation of the sign, enabling tasks like classification or similarity searches for individual words or phrases in sign languages globally.

No commits in the last 6 months.

Use this if you need to compare, classify, or find similarities between different sign language gestures from video data, especially for few-shot learning tasks on new datasets.

Not ideal if you need a direct, out-of-the-box sign language translation system without further model development or analysis.
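The comparison and similarity-search tasks described above typically come down to measuring the distance between embedding vectors, most commonly with cosine similarity. A minimal sketch in Python; the vectors below are made-up placeholders (real embeddings come from the model and are much higher-dimensional):

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors (1.0 = identical direction)."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical embeddings for three signs (illustrative values only)
emb_hello = np.array([0.10, 0.80, 0.30])
emb_hi = np.array([0.12, 0.79, 0.28])
emb_tree = np.array([-0.70, 0.10, 0.60])

print(cosine_similarity(emb_hello, emb_hi))    # near 1: similar signs
print(cosine_similarity(emb_hello, emb_tree))  # much lower: dissimilar signs
```

The same measure underpins few-shot classification: a new sign is assigned to whichever labeled example its embedding is closest to.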

sign-language-research gesture-analysis linguistics education pose-estimation
Stale (6m) · No Package · No Dependents
Maintenance: 0/25
Adoption: 7/25
Maturity: 16/25
Community: 9/25


Stars: 32
Forks: 3
Language: Python
License: Apache-2.0
Last pushed: Oct 28, 2024
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/xmartlabs/spoter-embeddings"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.