Abdulhamid97Mousa/mosaic

MOSAIC: A Unified Platform for Cross-Paradigm Comparison and Evaluation of Homogeneous and Heterogeneous Multi-Agent RL, LLM, VLM, and Human Decision-Makers

Score: 42 / 100 (Emerging)

This platform helps researchers test and compare how different types of decision-makers (humans, reinforcement learning agents, and large language or vision-language models) perform in the same interactive environment. You supply agents and scenarios as input, and it outputs visual comparisons and detailed performance logs. It is aimed at AI researchers and practitioners who need to evaluate and understand complex multi-agent interactions.

Use this if you need a unified, visual way to set up, run, and compare the performance of human and diverse AI agents (RL, LLM, VLM) within the same simulated environment.

Not ideal if you are looking for a simple tool to train a single reinforcement learning agent in isolation, or if your focus is solely on theoretical model development without practical simulation.

Tags: AI evaluation, multi-agent systems, human-AI interaction, decision-making research, simulated environments
No package · No dependents
Maintenance: 10 / 25
Adoption: 6 / 25
Maturity: 11 / 25
Community: 15 / 25


Stars: 20
Forks: 5
Language: Python
License: MIT
Last pushed: Mar 10, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Abdulhamid97Mousa/mosaic"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
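For programmatic use, the curl call above can be wrapped in a small Python helper. This is a minimal sketch: the endpoint URL comes from the page above, but the shape of the JSON response (field names, nesting) is not documented here, so inspect the real response before relying on any keys.

```python
# Minimal sketch of querying the quality API shown above.
# Only the endpoint URL is taken from the page; the response
# structure is an assumption and should be inspected first.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{API_BASE}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the quality report (requires network access)."""
    with urllib.request.urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Same request as the curl example above.
    print(quality_url("Abdulhamid97Mousa", "mosaic"))
```

At the free tier no API key is needed; with a key, the documentation above suggests adding it per their instructions to raise the daily limit.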