swarm-ai-safety/swarm
SWARM: System-Wide Assessment of Risk in Multi-agent environments
This framework helps AI safety researchers and ML engineers who build agent systems identify and measure systemic risks that emerge when many AI agents interact. It takes descriptions of multi-agent environments and governance rules, simulates the agent interactions, and outputs metrics such as 'illusion delta' and 'quality gap' that show how consistent and safe the system actually is. It's designed for anyone who needs to stress-test multi-agent AI systems for emergent failures before deployment.
Use this if you are developing or researching multi-agent AI systems and need to empirically measure emergent risks and test safety interventions.
Not ideal if you are focused on the safety of single AI agents or traditional machine learning models without complex, interacting components.
Stars
16
Forks
4
Language
Python
License
MIT
Category
Last pushed
Mar 12, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/swarm-ai-safety/swarm"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
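The curl call above can also be wrapped in a few lines of Python. This is a minimal sketch: only the endpoint URL comes from the listing; that the response is JSON, and any field names inside it, are assumptions.

```python
"""Sketch of querying the repo-quality API (endpoint from the listing;
JSON response format is an unconfirmed assumption)."""
import json
import urllib.request

BASE_URL = "https://pt-edge.onrender.com/api/v1/quality/agents"


def build_url(owner: str, repo: str) -> str:
    """Construct the per-repo quality endpoint URL."""
    return f"{BASE_URL}/{owner}/{repo}"


def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the payload, assuming the endpoint returns JSON."""
    with urllib.request.urlopen(build_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Same request as the curl example above.
    print(build_url("swarm-ai-safety", "swarm"))
```

Within the free tier (100 requests/day without a key) this can be called directly; `fetch_quality` raises `urllib.error.HTTPError` once the rate limit is exceeded.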
Higher-rated alternatives
The-Swarm-Corporation/swarms-tools
Swarms Tools provides a vast array of pre-built tools for your agents, MCP servers, and...
swarmzero/swarmzero
SwarmZero's SDK for building AI agents, swarms of agents and much more.
Mintplex-Labs/openai-assistant-swarm
Introducing the Assistant Swarm. An extension to the OpenAI Node SDK to automatically delegate...
The-Swarm-Corporation/AI-CoScientist
A simple, reliable, and minimal implementation of the AI CoScientist Paper from Google "Towards...
metauto-ai/GPTSwarm
🐝 The First Self-Improving Agentic Solution