cruiseresearchgroup/SensorLLM
[EMNLP 2025] Official implementation of "SensorLLM: Aligning Large Language Models with Motion Sensors for Human Activity Recognition"
This project helps researchers and practitioners classify human activities from motion-sensor data. It translates raw sensor time series into human-readable text, which a large language model then interprets to identify activities such as walking, running, or sleeping. It is useful for anyone working with wearable-sensor data in health monitoring, sports science, or behavioral studies.
Use this if you need to accurately identify human activities from diverse motion sensor data and want to leverage large language models for better interpretation.
Not ideal if your data does not come from motion sensors, or if you need a real-time, ultra-low-latency activity-recognition system without the overhead of a language model.
Stars
83
Forks
17
Language
Python
License
MIT
Category
Last pushed
Nov 28, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/cruiseresearchgroup/SensorLLM"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
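For programmatic access beyond a one-off curl call, the endpoint above can be wrapped in a small helper. This is a minimal sketch using only the Python standard library; the `X-API-Key` header name is an assumption (check the service's docs for its actual auth scheme), and the JSON response shape is not documented here, so the helper simply returns the parsed payload as-is.

```python
import json
import urllib.request

# Endpoint pattern taken from the curl example above; the "transformers"
# path segment is the category shown in that URL and may differ per repo.
BASE = "https://pt-edge.onrender.com/api/v1/quality/transformers"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str, api_key=None) -> dict:
    """Fetch the quality record for a repo and return the parsed JSON.

    Anonymous calls are limited to 100 requests/day; passing a key
    raises the limit to 1,000/day. NOTE: the 'X-API-Key' header name
    is hypothetical -- consult the service documentation.
    """
    req = urllib.request.Request(quality_url(owner, repo))
    if api_key:
        req.add_header("X-API-Key", api_key)  # hypothetical header name
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)


# Usage (anonymous, within the free daily limit):
# data = fetch_quality("cruiseresearchgroup", "SensorLLM")
```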
Related models
KimMeen/Time-LLM
[ICLR 2024] Official implementation of " 🦙 Time-LLM: Time Series Forecasting by Reprogramming...
om-ai-lab/VLM-R1
Solve Visual Understanding with Reinforced VLMs
bytedance/SALMONN
SALMONN family: A suite of advanced multi-modal LLMs
NVlabs/OmniVinci
OmniVinci is an omni-modal LLM for joint understanding of vision, audio, and language.
fixie-ai/ultravox
A fast multimodal LLM for real-time voice