OceanGPT/OceanGym
OceanGym: A Benchmark Environment for Underwater Embodied Agents
OceanGym is a virtual underwater environment designed to help researchers test and refine autonomous underwater vehicles (AUVs) and robotic systems. It simulates realistic ocean conditions, including currents, hydrodynamics, and depth-dependent lighting. Researchers can plug in different agent control policies and perception models to evaluate how well their designs navigate, make decisions, and recognize objects in challenging underwater scenarios.
Use this if you are developing or benchmarking autonomous underwater agents and need a realistic, customizable simulation environment to test their navigation, perception, and decision-making capabilities.
Not ideal if you are looking for a simple, off-the-shelf solution for general underwater data analysis or need a tool for physical hardware testing without a simulation layer.
Stars: 100
Forks: 8
Language: Python
License: —
Category: —
Last pushed: Jan 29, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/OceanGPT/OceanGym"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
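The same endpoint can be called from Python. A minimal sketch, assuming only the URL pattern shown in the curl example above (the `quality_url` and `fetch_quality` helpers and the response schema are illustrative, not part of a documented client):

```python
import json
import urllib.request

# Base path taken from the curl example above; assumed stable.
API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-API endpoint for a GitHub-style owner/repo slug."""
    return f"{API_BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch the repo's quality metadata as a dict (requires network access)."""
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)

print(quality_url("OceanGPT", "OceanGym"))
```

Without an API key this stays within the 100 requests/day limit; the JSON field names are not documented here, so inspect the response before relying on any particular key.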
Higher-rated alternatives
sierra-research/tau2-bench
τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment
xlang-ai/OSWorld
[NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
bigcode-project/bigcodebench
[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
scicode-bench/SciCode
A benchmark that challenges language models to code solutions for scientific problems
THUDM/AgentBench
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)