the-crypt-keeper/can-ai-code
Self-evaluating interview for AI coders
This project helps AI developers and researchers evaluate how well large language models (LLMs) actually write code. It interviews a model with a suite of human-prepared coding questions, executes the generated Python or JavaScript answers in a Docker-based sandbox, and scores them against test cases. Because the same interview can be run against many models, runtimes, prompts, and quantization levels, the results let you compare practical coding ability across configurations rather than relying on anecdotes.
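To make that loop concrete, here is a minimal sketch of an interview-style evaluation: prompt a model, run its answer in a separate interpreter, and score it against test cases. The model_complete() stub, the question format, and the scoring are illustrative assumptions, not can-ai-code's actual scripts or API.

# Conceptual sketch of an interview-evaluate loop (illustrative only;
# not can-ai-code's actual API).
import os
import subprocess
import sys
import tempfile

QUESTION = {
    "prompt": "Write a Python function fib(n) returning the n-th Fibonacci number.",
    "tests": [("fib(0)", 0), ("fib(1)", 1), ("fib(10)", 55)],
}

def model_complete(prompt: str) -> str:
    # Stand-in for a real LLM call; returns a canned answer here.
    return (
        "def fib(n):\n"
        "    a, b = 0, 1\n"
        "    for _ in range(n):\n"
        "        a, b = b, a + b\n"
        "    return a\n"
    )

def run_sandboxed(code: str, expr: str) -> str:
    # Execute untrusted code in a child interpreter and capture its output.
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code + f"\nprint(repr({expr}))\n")
        path = f.name
    try:
        out = subprocess.run([sys.executable, path],
                             capture_output=True, text=True, timeout=10)
        return out.stdout.strip()
    finally:
        os.unlink(path)

answer = model_complete(QUESTION["prompt"])
passed = sum(run_sandboxed(answer, expr) == repr(expected)
             for expr, expected in QUESTION["tests"])
print(f"{passed}/{len(QUESTION['tests'])} tests passed")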
602 stars. No commits in the last 6 months.
Use this if you are a machine learning researcher or AI developer who needs to compare the practical coding ability of different LLMs, runtimes, or quantization levels using an executable, self-scoring test suite.
Not ideal if you need a broad reasoning or general-knowledge benchmark; this suite focuses specifically on small, self-contained coding tasks.
Stars: 602
Forks: 34
Language: Python
License: MIT
Category:
Last pushed: Jun 21, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/transformers/the-crypt-keeper/can-ai-code"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
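If you are working in Python rather than curl, a minimal sketch of the same unauthenticated request follows; the response schema is not documented on this page, so it simply pretty-prints whatever JSON the endpoint returns.

# Fetch and pretty-print the quality data for this repo.
import json
import urllib.request

URL = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/the-crypt-keeper/can-ai-code")

with urllib.request.urlopen(URL, timeout=30) as resp:
    print(json.dumps(json.load(resp), indent=2))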
Higher-rated alternatives
balisujohn/localwriter
A LibreOffice Writer extension that adds local-inference generative AI features.
ChanithaAbey/AI-Agent-for-Stock-Prediction
An AI agent for stock data analysis, news retrieval, and prediction; powered by yfinance,...
its-kumar-yash/deep-study-ai-agent
DeepStudy AI automates research, refines queries dynamically, and generates high-quality...
fmueller/scribae
CLI to turn Markdown notes into SEO briefs, drafts, metadata, and translations using LLMs.
hemangjoshi37a/hjAlgos
AI-based algorithmic trading platform for Zerodha users