flodelaplace/lab-camera-dynamic-calibrator

A fully automated, markerless extrinsic camera calibration pipeline using human motion. Features RTMPose, 3D lifting, and real-world metric scaling.

Overall score: 34 / 100 (Emerging)

This project helps biomechanics researchers, sports scientists, and movement analysts accurately determine the real-world positions of multiple video cameras in a lab setting. By simply having a person move naturally within the cameras' shared view, it processes synchronized video streams and outputs a standard calibration file describing each camera's location and orientation in 3D space, scaled to real-world dimensions.

Use this if you need to calibrate multiple synchronized cameras for 3D motion analysis without using traditional checkerboards or markers, and want the output scaled to real-world measurements.

Not ideal if you are working with static scenes, need to calibrate single cameras, or prefer a GUI-based calibration tool.
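For context on what the output encodes: an extrinsic calibration gives each camera a rotation and translation relative to a shared world frame, and the metric-scaling step rescales an up-to-scale reconstruction using a known real-world length. A minimal sketch of those two ideas (the function names are hypothetical, not this project's API):

```python
# Sketch of what an extrinsic calibration encodes (hypothetical helpers,
# not this project's API): each camera has a 3x3 rotation R and a
# translation t mapping world coordinates to camera coordinates,
# X_cam = R @ X_world + t.

def world_to_camera(R, t, X):
    """Apply a rotation (list of rows) and translation to a 3D point."""
    return [sum(R[i][j] * X[j] for j in range(3)) + t[i] for i in range(3)]

def metric_scale(known_length_m, estimated_length):
    """Scale factor mapping an up-to-scale reconstruction to metres,
    e.g. derived from a subject's known segment length."""
    return known_length_m / estimated_length

# Identity rotation, camera placed 2 m behind the world origin along Z:
R = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
t = [0.0, 0.0, 2.0]
print(world_to_camera(R, t, [1.0, 1.0, 0.0]))  # [1.0, 1.0, 2.0]

# A 1.75 m subject reconstructed as 3.5 units tall -> scale factor 0.5
print(metric_scale(1.75, 3.5))  # 0.5
```

The real pipeline estimates R and t per camera from detected 2D keypoints (RTMPose) lifted to 3D, but the output file ultimately boils down to per-camera pose data of this shape.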

biomechanics sports-science motion-capture gait-analysis human-kinematics
No package · No dependents
Maintenance: 13 / 25
Adoption: 4 / 25
Maturity: 9 / 25
Community: 8 / 25


Stars: 8
Forks: 1
Language: Python
License: MIT
Last pushed: Mar 23, 2026
Commits (30d): 0

Get this data via the API:

curl "https://pt-edge.onrender.com/api/v1/quality/computer-vision/flodelaplace/lab-camera-dynamic-calibrator"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
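The endpoint path appears to follow the pattern /api/v1/quality/&lt;category&gt;/&lt;owner&gt;/&lt;repo&gt;. A small sketch that builds the URL and fetches it with the standard library; note the response's JSON field names are not documented on this page, so inspect the payload before relying on them:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Build the quality-API URL, assuming the path pattern
    /api/v1/quality/<category>/<owner>/<repo> shown above."""
    return f"{BASE}/{category}/{owner}/{repo}"

url = quality_url("computer-vision", "flodelaplace",
                  "lab-camera-dynamic-calibrator")
print(url)

# Uncomment to fetch (no API key needed, 100 requests/day):
# with urllib.request.urlopen(url) as resp:
#     data = json.load(resp)
#     print(json.dumps(data, indent=2))
```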