Strong-AI-Lab/Logical-and-abstract-reasoning
Evaluation on Logical Reasoning and Abstract Reasoning Challenges
This tool helps AI researchers and practitioners evaluate and fine-tune Large Language Models (LLMs) on logical and abstract reasoning challenges. It takes existing LLM configurations and various reasoning datasets as input, then writes performance metrics to a CSV file, showing how well each model understands and applies logic. Users can also fine-tune HuggingFace models on specific reasoning datasets to improve their performance.
No commits in the last 6 months.
Use this if you are an AI researcher or machine learning engineer looking to rigorously test and improve the logical and abstract reasoning capabilities of Large Language Models.
Not ideal if you are a general user looking for a pre-trained LLM for day-to-day tasks or if you are not comfortable with command-line interfaces and model configuration files.
Stars: 29
Forks: 6
Language: Python
License: MIT
Category:
Last pushed: Apr 21, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Strong-AI-Lab/Logical-and-abstract-reasoning"
Open to everyone: 100 requests/day with no key needed. A free key raises the limit to 1,000 requests/day.
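The same endpoint can be called from Python instead of curl. A minimal sketch using only the standard library, assuming the URL pattern shown above (`/api/v1/quality/llm-tools/<owner>/<repo>`); the response schema is not documented here, so the example simply parses whatever JSON comes back:

```python
import json
import urllib.request

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def tool_quality_url(owner: str, repo: str) -> str:
    """Build the endpoint URL for a given GitHub owner/repo pair."""
    return f"{BASE}/{owner}/{repo}"


def fetch_tool_quality(owner: str, repo: str) -> dict:
    """Fetch the quality record as a dict (schema undocumented here)."""
    with urllib.request.urlopen(tool_quality_url(owner, repo)) as resp:
        return json.load(resp)


if __name__ == "__main__":
    # Prints the raw record for this repository (requires network access).
    print(fetch_tool_quality("Strong-AI-Lab", "Logical-and-abstract-reasoning"))
```

Note that unauthenticated calls count against the shared 100-requests/day limit, so cache results rather than polling.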
Higher-rated alternatives
open-thought/reasoning-gym
[NeurIPS 2025 Spotlight] Reasoning Environments for Reinforcement Learning with Verifiable Rewards
Hmbown/Hegelion
Dialectical reasoning architecture for LLMs (Thesis → Antithesis → Synthesis)
LLM360/Reasoning360
A repo for open research on building large reasoning models
bowang-lab/BioReason
BioReason: Incentivizing Multimodal Biological Reasoning within a DNA-LLM Model | NeurIPS '25
TsinghuaC3I/Awesome-RL-for-LRMs
A Survey of Reinforcement Learning for Large Reasoning Models