Jakobovski/decoupled-multimodal-learning

A decoupled, generative, unsupervised, multimodal neural architecture.

Score: 33 / 100 (Emerging)

This project helps autonomous agents (such as robots or AI assistants) learn about their environment by connecting different types of sensory information, such as images and sounds. It takes in raw, unlabeled data from various sensors and learns to classify it and to discover how the different senses relate to each other, much as a baby learns. The primary users are researchers or engineers developing unsupervised learning systems for multimodal data.

No commits in the last 6 months.

Use this if you need an AI system to learn from diverse, unlabeled sensory inputs and understand their relationships without explicit instruction, like classifying images based on associated sounds.

Not ideal if you have well-labeled datasets and require a supervised learning approach, or if your system only deals with a single type of data.

robotics-perception unsupervised-learning multimodal-AI generative-models AI-agents
No License · Stale (6 months) · No Package · No Dependents
Maintenance 0 / 25
Adoption 8 / 25
Maturity 8 / 25
Community 17 / 25

How are scores calculated?

Stars: 44

Forks: 10

Language: Python

License: None

Last pushed: Dec 08, 2018

Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Jakobovski/decoupled-multimodal-learning"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
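The same data can be fetched programmatically. As a minimal sketch, the snippet below only builds the request URL by following the pattern visible in the curl command above; the API's response schema and any authentication header name are not documented here, so no assumptions are made about them:

```python
# Sketch: construct the quality-API URL for any repository,
# following the URL pattern shown in the curl example above.
from urllib.parse import quote

BASE = "https://pt-edge.onrender.com/api/v1/quality"

def quality_url(category: str, owner: str, repo: str) -> str:
    """Return the quality-card API URL for a repository."""
    # quote() percent-encodes any characters unsafe in a URL path segment.
    return "/".join([BASE, quote(category), quote(owner), quote(repo)])

print(quality_url("ml-frameworks", "Jakobovski", "decoupled-multimodal-learning"))
# → https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Jakobovski/decoupled-multimodal-learning
```

The URL could then be passed to any HTTP client (e.g. `urllib.request.urlopen` or `requests.get`) to retrieve the JSON quality data.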