horizon-rl/strands-env
Standardizing environment infrastructure with Strands Agents — step, observe, reward.
This project helps AI developers standardize how they build and evaluate environments for training large language models (LLMs) with agentic capabilities. You define an environment by specifying the tools your LLM agent can use and how it receives rewards. This allows for consistent training and benchmarking of agentic LLMs.
Use this if you are developing or training LLM agents and need a structured way to define their interaction environments, including tool use and reward mechanisms.
Not ideal if you are a non-developer looking for an end-user application or if you are working with non-LLM models.
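To make the step/observe/reward pattern concrete, here is a minimal sketch of a tool-use environment in that style. This is illustrative only: the class and method names below are assumptions, not the actual strands-env API, which you should take from the project's own documentation.

```python
# Hypothetical sketch of the step/observe/reward loop described above.
# Names (ToolEnv, Observation, step, reset) are illustrative, NOT the
# real strands-env API.
from dataclasses import dataclass


@dataclass
class Observation:
    text: str
    done: bool = False


class ToolEnv:
    """Minimal tool-use environment: the agent invokes named tools and
    earns a reward when a tool call produces the target value."""

    def __init__(self, tools, target):
        self.tools = tools      # mapping: tool name -> callable
        self.target = target    # result that earns the reward
        self.history = []

    def reset(self):
        self.history.clear()
        return Observation("Task: compute the target value using the tools.")

    def step(self, action):
        # action is a (tool_name, args) pair chosen by the LLM agent
        name, args = action
        result = self.tools[name](*args)
        self.history.append((name, args, result))
        reward = 1.0 if result == self.target else 0.0
        obs = Observation(f"{name}{args} -> {result}", done=reward > 0)
        return obs, reward


# Usage: an "agent" that solves the task with a single calculator tool call
env = ToolEnv(tools={"add": lambda a, b: a + b}, target=5)
obs = env.reset()
obs, reward = env.step(("add", (2, 3)))
print(reward)  # 1.0
```

A gym-style reset/step interface like this is what makes environments interchangeable for training and benchmarking: the trainer only needs the loop, not the internals of any particular task.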
Stars: 43
Forks: 7
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 13, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/horizon-rl/strands-env"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
awslabs/agent-squad
Flexible and powerful framework for managing multiple AI agents and handling complex conversations
jeremiah-k/agor
AgentOrchestrator - Multi-agent development coordination platform. Transform AI assistants into...
microsoft/multi-agent-reference-architecture
Guide for designing adaptive, scalable, and secure enterprise multi-agent systems
rodmena-limited/stabilize
Queue-Based State Machine - A lightweight workflow execution engine with DAG-based stage...
aws-solutions-library-samples/guidance-for-multi-agent-orchestration-on-aws
Enables developers to build, deploy, and manage multiple specialized agents that work together...