artificial-scientist-lab/SciMuse
Interesting Scientific Idea Generation Using Knowledge Graphs and LLMs: Evaluations with 100 Research Group Leaders
This project helps research group leaders and scientists evaluate the potential of AI to generate highly interesting, personalized scientific research ideas. It takes existing scientific ideas and research papers as input and measures how well different AI models can rank those ideas by scientific interest for individual researchers. The primary output is a benchmark score (AUC) that indicates how well an AI model predicts human expert judgments of research idea quality, helping researchers identify which models are best at spotting promising new research directions.
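To make the benchmark score concrete, here is a minimal sketch of how an AUC over idea rankings can be computed; the data and the use of sklearn are illustrative assumptions, not the project's actual pipeline:

from sklearn.metrics import roc_auc_score

# Hypothetical: 1 = expert rated the idea highly interesting, 0 = not
expert_labels = [1, 0, 1, 1, 0, 0, 1, 0]
# Hypothetical interest scores an LLM assigned to the same ideas
model_scores = [0.91, 0.35, 0.78, 0.46, 0.42, 0.15, 0.83, 0.55]

auc = roc_auc_score(expert_labels, model_scores)
print(f"AUC: {auc:.3f}")  # 0.5 = chance ranking, 1.0 = perfect ranking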
No commits in the last 6 months.
Use this if you are a research group leader, academic, or R&D professional interested in leveraging AI to discover groundbreaking scientific ideas or validate the quality of AI-generated research proposals.
Not ideal if you are looking for a tool that generates research ideas directly; this project benchmarks how well AI models *evaluate* ideas, not how well they generate them.
Stars
32
Forks
3
Language
Python
License
MIT
Category
llm-tools
Last pushed
Feb 03, 2025
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/artificial-scientist-lab/SciMuse"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
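For programmatic access, a minimal Python sketch using the requests library; the response schema is not documented here, so the fields named in the comment are assumptions and you should inspect the returned JSON first:

import requests

# Same endpoint as the curl command above; no key needed up to 100 requests/day
url = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/artificial-scientist-lab/SciMuse"
resp = requests.get(url, timeout=10)
resp.raise_for_status()
print(resp.json())  # e.g., stars, forks, last-pushed date (assumed fields)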
Higher-rated alternatives
MMMU-Benchmark/MMMU
This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal...
pat-jj/DeepRetrieval
[COLM’25] DeepRetrieval — 🔥 Training Search Agent by RLVR with Retrieval Outcome
lupantech/MathVista
MathVista: data, code, and evaluation for Mathematical Reasoning in Visual Contexts
x66ccff/liveideabench
[Nature Communications] 🤖💡 LiveIdeaBench: Evaluating LLMs' Scientific Creativity and Idea...
ise-uiuc/magicoder
[ICML'24] Magicoder: Empowering Code Generation with OSS-Instruct