jingtaozhan/IntelligenceTest
An evaluation framework that tests AI through a trial-and-error process; it can be viewed as a simplified natural-selection test.
This framework helps AI researchers and developers evaluate how effectively their systems can solve problems independently through trial and error. You supply an AI model and a task for it to solve, and the framework outputs a quantitative measure of the model's 'intelligence level' based on how many failures occur before a correct solution is found. It is aimed at researchers developing and refining AI models.
No commits in the last 6 months.
Use this if you need to objectively assess an AI model's ability to autonomously find solutions in complex scenarios, beyond just its final accuracy.
Not ideal if you are looking for a framework to evaluate traditional performance metrics like accuracy or F1-score for static datasets.
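The failure-counting evaluation described above can be sketched as follows. This is a hypothetical illustration of the general idea, not the repository's actual metric or API; the function name, the scoring formula `1 / (1 + failures)`, and the toy model are all assumptions.

```python
import random


def trial_and_error_score(solve_attempt, max_attempts: int = 100, seed: int = 0) -> float:
    """Run solve_attempt(rng) until it returns True, counting failures.

    Returns a score in (0, 1]: fewer failures before the first success
    yield a higher score. Hypothetical sketch, not IntelligenceTest's metric.
    """
    rng = random.Random(seed)
    failures = 0
    for _ in range(max_attempts):
        if solve_attempt(rng):
            # Score decays with the number of failed attempts.
            return 1.0 / (1.0 + failures)
        failures += 1
    return 0.0  # never solved within the attempt budget


# Toy "model": succeeds when it guesses the hidden answer 7 from 0-9.
score = trial_and_error_score(lambda rng: rng.randrange(10) == 7)
print(score)
```

A model that solves the task on its first attempt scores 1.0, and one that never solves it scores 0.0, so the score ranks models by how quickly they converge on a working solution rather than by static accuracy.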
Stars: 22
Forks: —
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Mar 11, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks/jingtaozhan/IntelligenceTest"
Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
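The same endpoint can be queried from Python with the standard library. A minimal sketch, assuming only the URL pattern shown in the curl command above; the JSON field names returned by the service are not documented here, so callers should inspect the returned dict.

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/ml-frameworks"


def repo_quality_url(owner: str, repo: str) -> str:
    """Build the endpoint URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"


def fetch_repo_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record as a dict (field names unverified here)."""
    with urllib.request.urlopen(repo_quality_url(owner, repo)) as resp:
        return json.load(resp)


# Matches the curl example above without making a network call.
print(repo_quality_url("jingtaozhan", "IntelligenceTest"))
```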
Higher-rated alternatives
graphbrain/graphbrain
Language, Knowledge, Cognition
cmekik/pyClarion
Experimental Python implementation of the Clarion cognitive architecture
marcelwa/aigverse
A Python library for working with logic networks, synthesis, and optimization.
ronniross/emergence-engine
A machine learning dataset and research module about the nature of consciousness and emergence phenomena.
mksunny1/general-intelligence
A framework for building self-organizing, reactive knowledge systems that learn, identify, and...