jingtaozhan/IntelligenceTest

An evaluation framework that tests AI through a trial-and-error process; it is a simplified Natural Selection test.

Overall score: 22 / 100 (Experimental)

This framework helps AI researchers and developers evaluate how effectively their AI systems can independently solve problems through trial and error. You provide an AI model and a task for it to solve, and the framework outputs a quantitative measure of the model's 'intelligence level' based on how many failures occur before a correct solution is found. It is aimed at researchers developing and refining AI models.
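
The repository does not document its interface on this page, so the following is only a minimal sketch of the trial-and-error loop described above, assuming hypothetical propose and check callables standing in for the model and the task; fewer failed attempts before success translates into a higher score.

# Hypothetical sketch, not the framework's actual interface.
from typing import Callable

def evaluate_trial_and_error(
    propose: Callable[[list[str]], str],  # model: sees past failed attempts, returns a new candidate
    check: Callable[[str], bool],         # task: returns True when the candidate solves it
    max_attempts: int = 100,
) -> dict:
    failures: list[str] = []
    for attempt in range(1, max_attempts + 1):
        candidate = propose(failures)
        if check(candidate):
            # Illustrative scoring: fewer failures before success means a higher score.
            return {"solved": True, "failures": len(failures), "score": 1.0 / attempt}
        failures.append(candidate)
    return {"solved": False, "failures": len(failures), "score": 0.0}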

No commits in the last 6 months.

Use this if you need to objectively assess an AI model's ability to autonomously find solutions in complex scenarios, beyond just its final accuracy.

Not ideal if you are looking for a framework to evaluate traditional performance metrics like accuracy or F1-score on static datasets.

Tags: AI evaluation, machine learning, research, model robustness, autonomous systems, AI development
Stale (6 months), No Package, No Dependents
Maintenance 0 / 25
Adoption 6 / 25
Maturity 16 / 25
Community 0 / 25

Stars: 22
Forks:
Language: Jupyter Notebook
License: MIT
Last pushed: Mar 11, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/jingtaozhan/IntelligenceTest"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
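
A minimal sketch of the same request in Python, assuming only that the endpoint returns JSON; the exact response schema is not documented here, so the code simply prints whatever comes back.

import requests

url = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/jingtaozhan/IntelligenceTest"
resp = requests.get(url, timeout=10)  # no API key needed for up to 100 requests/day
resp.raise_for_status()               # fail loudly on HTTP errors
data = resp.json()                    # parsed quality data for the repository
print(data)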