articulateinstruments/DeepLabCut-for-Speech-Production

Trained deep neural-network models for estimating articulatory keypoints from midsagittal ultrasound tongue videos and front-view lip camera videos using DeepLabCut. Based on research by Wrench, A. and Balch-Tomes, J. (2022), Sensors 22(3), 1133 (https://doi.org/10.3390/s22031133).

Score: 35 / 100 (Emerging)

This project helps speech scientists and linguists automatically and precisely track tongue and lip movements during speech from video. Given midsagittal ultrasound videos of the tongue and front-view lip camera videos as input, it outputs estimated positions of keypoints on the tongue surface, hyoid, mandible, and lips. It is designed for researchers studying speech articulation.
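For orientation, here is a minimal sketch of how trained models like these are typically applied with DeepLabCut's standard Python API. The config and video paths are placeholders, not files documented by this repository:

import deeplabcut

# Placeholder paths; substitute the trained model's config and your recordings.
config_path = "DeepLabCut-for-Speech-Production/model/config.yaml"
videos = ["subject01_midsagittal_ultrasound.mp4"]

# Estimate keypoint positions frame by frame; DeepLabCut writes results
# next to each video as .h5, plus a CSV copy when save_as_csv=True.
deeplabcut.analyze_videos(config_path, videos, save_as_csv=True)

# Optional: render a labeled video to visually inspect tracking quality.
deeplabcut.create_labeled_video(config_path, videos)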

No commits in the last 6 months.

Use this if you need to perform markerless pose estimation of speech articulators to analyze speech production from ultrasound and lip videos.

Not ideal if you require real-time analysis for clinical biofeedback or if your video data is not focused on speech articulation.

speech-pathology linguistics phonetics ultrasound-imaging articulatory-phonetics
Stale (6m) · No Package · No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 13 / 25


Stars: 24
Forks: 4
Language: Batchfile
License: GPL-3.0
Last pushed: Jun 13, 2023
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/articulateinstruments/DeepLabCut-for-Speech-Production"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
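A minimal sketch of fetching the same data in Python; it assumes only that the endpoint returns JSON, and prints whatever fields come back rather than guessing at the response schema:

import json
import urllib.request

url = ("https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/"
       "articulateinstruments/DeepLabCut-for-Speech-Production")

# Fetch and pretty-print the quality data; no API key needed at the free tier.
with urllib.request.urlopen(url) as resp:
    data = json.load(resp)
print(json.dumps(data, indent=2))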