Abdulhamid97Mousa/mosaic
MOSAIC: A Unified Platform for Cross-Paradigm Comparison and Evaluation of Homogeneous and Heterogeneous Multi-Agent RL, LLM, VLM, and Human Decision-Makers
This platform helps researchers test and compare how different types of decision-makers, including humans, reinforcement-learning (RL) agents, and large language or vision models (LLMs/VLMs), perform in the same interactive environment. You supply agents and scenarios, and it outputs visual comparisons and detailed performance logs. It is aimed at AI researchers and practitioners who need to evaluate and understand complex multi-agent interactions.
Use this if you need a unified, visual way to set up, run, and compare the performance of human and diverse AI agents (RL, LLM, VLM) within the same simulated environment.
Not ideal if you are looking for a simple tool to train a single reinforcement learning agent in isolation, or if your focus is solely on theoretical model development without practical simulation.
Stars
20
Forks
5
Language
Python
License
MIT
Category
Last pushed
Mar 10, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Abdulhamid97Mousa/mosaic"
Open to everyone: 100 requests/day with no key required. Get a free key for 1,000 requests/day.
Higher-rated alternatives
mitdbg/palimpzest
A System for Optimized Semantic Computation
SamurAIGPT/GPT-Agent
🚀 Introducing 🐪 CAMEL: a game-changing role-playing approach for LLMs and auto-agents like...
bubbuild/republic
Build LLM workflows like normal Python while keeping a full audit trail by default.
lwcsrf/netflux
Minimalist framework for authoring custom agentic applications in python; emphasizes task...
dlMARiA/Syzygy-of-thoughts
Syzygy-of-thoughts