Habitante/gta-benchmark

GTA (Guess The Algorithm) Benchmark - A tool for testing AI reasoning capabilities

20 / 100
Experimental

This tool helps AI researchers and developers evaluate the algorithmic reasoning abilities of their AI models. The system presents input-output examples produced by a hidden data transformation; you submit a Python function that you believe reproduces that transformation's logic. The output is an immediate score indicating how closely your model's proposed function matches the actual underlying algorithm.
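As a sketch of the workflow, a minimal self-scoring loop might look like the following. The example pairs, the hidden rule, and the scoring formula here are hypothetical illustrations; the repository's actual challenge format and scoring may differ.

```python
# Hypothetical example pairs shown by the benchmark, produced by a
# hidden transformation (here, secretly x**2 + 1).
examples = [(1, 2), (2, 5), (3, 10)]

def candidate(x):
    # Your model's proposed guess at the hidden transformation.
    return x * x + 1

# Fraction of example pairs the candidate reproduces exactly.
score = sum(candidate(i) == o for i, o in examples) / len(examples)
print(f"score: {score:.0%}")  # prints "score: 100%"
```

A real submission would be judged against held-out inputs as well, so matching the visible examples does not guarantee a perfect score.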

No commits in the last 6 months.

Use this if you need to benchmark and improve your AI models' ability to infer complex logical rules from observed data patterns, essentially 'guessing the algorithm' that produced certain outputs from given inputs.

Not ideal if you are looking for a general-purpose AI development framework or a tool for training AI models; this is specifically designed for evaluating a very particular aspect of AI reasoning.

AI-benchmarking algorithmic-reasoning model-evaluation AI-research pattern-recognition
Stale 6m · No Package · No Dependents
Maintenance 0 / 25
Adoption 4 / 25
Maturity 16 / 25
Community 0 / 25

How are scores calculated?

Stars

7

Forks

Language

Python

License

MIT

Category

ml-frameworks

Last pushed

Jan 12, 2025

Commits (30d)

0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/Habitante/gta-benchmark"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.