Future-House/aviary
A language agent gym with challenging scientific tasks
This project helps researchers and developers evaluate how well language agents (such as chatbots or AI assistants) perform on challenging scientific and mathematical tasks. You define a problem or scenario, and the framework wraps it in a structured environment that an agent can interact with and attempt to solve. It is aimed at AI researchers and practitioners who are building and testing the capabilities of language agents.
Use this if you need a standardized, customizable 'gym' to benchmark and train language agents on complex problems like solving math equations, answering scientific literature questions, or predicting protein stability.
Not ideal if you are looking for a pre-built language agent to use directly, as this tool focuses on creating evaluation environments, not the agents themselves.
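To make the "structured environment" idea concrete, here is a minimal gym-style sketch of the reset/step loop such a framework provides. The class and method names are hypothetical illustrations, not aviary's actual API.

```python
# Toy gym-style environment loop. All names here are illustrative
# assumptions, NOT aviary's real interface.
from dataclasses import dataclass


@dataclass
class Observation:
    text: str
    done: bool = False


class CountdownEnv:
    """Toy task: the agent must count a value down to zero."""

    def __init__(self, start: int = 3):
        self.start = start

    def reset(self) -> Observation:
        # Start a fresh episode and return the initial observation.
        self.value = self.start
        return Observation(f"Current value: {self.value}. Say 'decrement'.")

    def step(self, action: str) -> tuple[Observation, float]:
        # Apply the agent's action, then report new state and reward.
        if action == "decrement":
            self.value -= 1
        done = self.value == 0
        reward = 1.0 if done else 0.0
        return Observation(f"Current value: {self.value}.", done), reward


# A trivial scripted "agent" driving the loop; a language agent would
# choose actions from the observation text instead.
env = CountdownEnv()
obs = env.reset()
total = 0.0
while not obs.done:
    obs, reward = env.step("decrement")
    total += reward
print(total)  # 1.0
```

In a real benchmark the environment would grade answers to scientific questions or math problems rather than a counter, but the agent-facing loop has the same shape.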
Stars: 246
Forks: 30
Language: Python
License: Apache-2.0
Category:
Last pushed: Feb 18, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/agents/Future-House/aviary"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
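The same call can be scripted with Python's standard library. The endpoint URL is taken from the curl example above; the JSON shape of the response is an assumption, as it is not documented here.

```python
# Minimal sketch of calling the quality API from Python (stdlib only).
# Endpoint taken from the curl example; response fields are assumed.
import json
import urllib.request

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/agents"


def agent_url(owner: str, repo: str) -> str:
    # Build the per-repository endpoint URL.
    return f"{API_BASE}/{owner}/{repo}"


def fetch_agent(owner: str, repo: str) -> dict:
    # Fetch and decode the JSON payload for one repository.
    with urllib.request.urlopen(agent_url(owner, repo)) as resp:
        return json.load(resp)


print(agent_url("Future-House", "aviary"))
# → https://pt-edge.onrender.com/api/v1/quality/agents/Future-House/aviary
```

`fetch_agent("Future-House", "aviary")` would return the repo's quality data as a dict, subject to the 100 requests/day anonymous limit noted above.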
Related agents
strakam/generals-bots
Develop your agent for generals.io!
inspirai/wilderness-scavenger
A platform for intelligent agent learning based on a 3D open-world FPS game developed by Inspir.AI.
ngoxuanphong/ENV
Reinforcement Learning System
jlin816/homegrid
A minimal home grid world environment to evaluate language understanding in interactive agents.
i01000101/Q-Learning-Visualizer
An AI that learns to solve mazes with Q-Learning algorithm ðŸ§