Future-House/aviary

A language agent gym with challenging scientific tasks

Score: 52 / 100 (Established)

This project helps researchers and developers evaluate how well language agents (such as chatbot- or assistant-style AI systems) perform on challenging scientific and mathematical tasks. You define a task, and aviary wraps it in a structured environment that an agent can interact with and attempt to solve. It is aimed at AI researchers and practitioners who are building and testing language agents.
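To make the "gym" pattern concrete, here is a minimal, library-agnostic sketch of the define-a-task / reset / step loop described above. All class, method, and variable names below are illustrative assumptions, not aviary's actual API; consult the repository README for the real interface.

    # Library-agnostic sketch of a gym-style environment loop.
    # Names (MathEnv, reset, step, dummy_agent) are illustrative
    # assumptions, NOT aviary's actual API.
    from dataclasses import dataclass, field

    @dataclass
    class MathEnv:
        """A toy environment posing one arithmetic task."""
        question: str = "What is 17 * 24?"
        answer: str = "408"
        done: bool = field(default=False, init=False)

        def reset(self) -> str:
            """Return the initial observation (the task prompt)."""
            self.done = False
            return self.question

        def step(self, action: str) -> tuple[str, float, bool]:
            """Score the agent's action and end the episode."""
            reward = 1.0 if action.strip() == self.answer else 0.0
            self.done = True
            return "episode over", reward, self.done

    def dummy_agent(observation: str) -> str:
        """Stand-in for a language agent; a real one would call an LLM."""
        return "408"

    env = MathEnv()
    obs = env.reset()
    action = dummy_agent(obs)
    obs, reward, done = env.step(action)
    print(f"reward={reward}, done={done}")

The point of the pattern is the separation of concerns: the environment owns the task and the scoring, while the agent only sees observations and returns actions, which is what makes benchmarking different agents on the same tasks possible.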


Use this if you need a standardized, customizable 'gym' to benchmark and train language agents on complex problems like solving math equations, answering scientific literature questions, or predicting protein stability.

Not ideal if you are looking for a pre-built language agent to use directly, as this tool focuses on creating evaluation environments, not the agents themselves.

Tags: AI evaluation, language agent development, scientific problem solving, AI research, model benchmarking
No package published · No dependents
Maintenance: 10 / 25
Adoption: 10 / 25
Maturity: 16 / 25
Community: 16 / 25


Stars: 246
Forks: 30
Language: Python
License: Apache-2.0
Last pushed: Feb 18, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/agents/Future-House/aviary"

Open to everyone: 100 requests/day, no key needed. Get a free key for 1,000 requests/day.
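If you prefer Python over curl, a minimal sketch using the requests library is below. It calls the same endpoint as the curl example above; the response is assumed to be JSON, its keys are not documented here, and the page does not say how an API key is supplied, so none is sent.

    # Fetch the same quality data in Python. Assumes a JSON response;
    # the payload's keys are undocumented here, so we just print it.
    import requests

    URL = "https://pt-edge.onrender.com/api/v1/quality/agents/Future-House/aviary"

    resp = requests.get(URL, timeout=10)
    resp.raise_for_status()  # raise on 4xx/5xx (e.g. past the 100/day limit)
    data = resp.json()
    print(data)  # inspect the payload to see which fields are available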