ihp-lab/Face-LLaVA
[WACV 2026] Face-LLaVA: Facial Expression and Attribute Understanding through Instruction Tuning
This project helps professionals in fields such as marketing, psychology, and human-computer interaction automatically interpret facial expressions and attributes in images. You provide an image containing faces, and it generates a natural language description of their expressions (e.g., happy, sad) and attributes (e.g., age, gender, beard). It's designed for researchers and practitioners who need detailed, text-based insights into facial cues for analysis or automated systems.
Use this if you need to automatically analyze facial expressions and attributes from images and receive detailed, human-readable text descriptions for reasoning or further processing.
Not ideal if you only need simple face detection or basic classification without detailed natural language output.
Stars: 11
Forks: 2
Language: Python
License: —
Category: —
Last pushed: Mar 19, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/ihp-lab/Face-LLaVA"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
jingyaogong/minimind-v
🚀 Train a 26M-parameter vision multimodal VLM ("large model") from scratch in just 1 hour! 🌏
SkyworkAI/Skywork-R1V
Skywork-R1V is an advanced multimodal AI model series developed by Skywork AI, specializing in...
roboflow/vision-ai-checkup
Take your LLM to the optometrist.
zai-org/GLM-TTS
GLM-TTS: Controllable & Emotion-Expressive Zero-shot TTS with Multi-Reward Reinforcement Learning
NExT-GPT/NExT-GPT
Code and models for ICML 2024 paper, NExT-GPT: Any-to-Any Multimodal Large Language Model