horizon-rl/strands-env

Standardizing environment infrastructure with Strands Agents — step, observe, reward.

Quality score: 43 / 100 (Emerging)

This project helps AI developers standardize how they build and evaluate environments for training large language models (LLMs) with agentic capabilities. You define an environment by specifying the tools your LLM agent can use and how it receives rewards. This allows for consistent training and benchmarking of agentic LLMs.

Use this if you are developing or training LLM agents and need a structured way to define their interaction environments, including tool use and reward mechanisms.

Not ideal if you are a non-developer looking for an end-user application or if you are working with non-LLM models.

Tags: LLM development, AI agent training, reinforcement learning, model evaluation, agentic AI

No package · No dependents

Maintenance: 10 / 25
Adoption: 8 / 25
Maturity: 11 / 25
Community: 14 / 25


Stars: 43
Forks: 7
Language: Python
License: Apache-2.0
Last pushed: Mar 13, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/horizon-rl/strands-env"

Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
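The same endpoint can be called from Python. Below is a minimal sketch using only the standard library; it assumes the endpoint returns JSON (the response schema is not documented here), and the helper names are illustrative:

```python
import json
from urllib.parse import quote
from urllib.request import urlopen

# Base path taken from the curl example above.
BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"

def quality_url(owner: str, repo: str) -> str:
    """Build the per-repository quality endpoint URL."""
    return f"{BASE}/{quote(owner)}/{quote(repo)}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the payload (assumed to be JSON)."""
    with urlopen(quality_url(owner, repo), timeout=10) as resp:
        return json.load(resp)

print(quality_url("horizon-rl", "strands-env"))
# → https://pt-edge.onrender.com/api/v1/quality/agents/horizon-rl/strands-env
```

Keeping URL construction separate from the network call makes the path logic easy to test without hitting the rate-limited API.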