swarm-ai-safety/swarm

SWARM: System-Wide Assessment of Risk in Multi-agent environments

Score: 42/100 (Emerging)

This framework helps AI safety researchers and ML engineers who build agent systems identify and measure systemic risks that emerge when many AI agents interact. It takes descriptions of multi-agent environments and governance rules, simulates the agent interactions, and outputs metrics such as 'illusion delta' and 'quality gap' that quantify how consistent and safe the system actually is. It is designed for anyone who needs to stress-test multi-agent AI systems for emergent failures before deployment; a rough sketch of this workflow appears below.

Use this if you are developing or researching multi-agent AI systems and need to empirically measure emergent risks and test safety interventions.

Not ideal if you are focused on the safety of single AI agents or traditional machine learning models without complex, interacting components.
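
A minimal sketch of the workflow the description implies, in Python (the repository's language). Every name here (Environment, GovernanceRules, run_simulation, the metric keys) is a hypothetical stand-in for illustration, not the repository's actual API:

from dataclasses import dataclass

@dataclass
class Environment:
    # Description of the multi-agent environment under test.
    name: str
    num_agents: int

@dataclass
class GovernanceRules:
    # Governance constraints applied to agent interactions.
    rate_limits: bool = True
    audit_sampling_rate: float = 0.1

def run_simulation(env: Environment, rules: GovernanceRules, steps: int = 1000) -> dict:
    # Stand-in for the simulation loop: run the agents for `steps`
    # rounds under the given governance rules, then report
    # emergent-risk metrics. Only the metric names mentioned in
    # the listing above are stubbed here.
    return {
        "illusion_delta": 0.0,  # gap between apparent and actual consistency
        "quality_gap": 0.0,     # quality lost (or gained) under governance
    }

env = Environment(name="content-moderation-swarm", num_agents=50)
metrics = run_simulation(env, GovernanceRules(audit_sampling_rate=0.2))
print(metrics)  # e.g. {'illusion_delta': 0.0, 'quality_gap': 0.0}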

Tags: AI safety research, multi-agent systems, AI governance, risk assessment, AI red-teaming
No package published · No dependents
Score breakdown (each category out of 25; the four sum to the overall 42/100):
Maintenance: 10/25
Adoption: 6/25
Maturity: 11/25
Community: 15/25


Stars: 16
Forks: 4
Language: Python
License: MIT
Last pushed: Mar 12, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/swarm-ai-safety/swarm"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
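
The same lookup from Python, as a minimal sketch using the requests library (assumes the endpoint returns JSON; the response schema is not documented in this listing):

import requests

# Fetch the quality record for swarm-ai-safety/swarm from the public API.
# No key is needed at the 100 requests/day tier.
resp = requests.get(
    "https://pt-edge.onrender.com/api/v1/quality/agents/swarm-ai-safety/swarm",
    timeout=10,
)
resp.raise_for_status()  # surface HTTP errors (e.g. rate limiting) early
print(resp.json())       # schema unknown here, so just dump the payload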