x66ccff/liveideabench
[Nature Communications] 🤖💡 LiveIdeaBench: Evaluating LLMs' Scientific Creativity and Idea Generation with Minimal Context
This project helps researchers assess how well large language models (LLMs) can generate new scientific ideas: you provide a scientific keyword or topic, and it evaluates how creative, diverse, and relevant the LLM's generated ideas are. It is aimed at researchers and AI practitioners who are evaluating or developing LLMs for scientific discovery and innovation.
Use this if you need to objectively measure the scientific creativity and idea generation capabilities of different large language models under minimal contextual input.
Not ideal if you are looking for a tool to generate specific research hypotheses or directly assist with writing scientific papers.
Stars: 23
Forks: 4
Language: Jupyter Notebook
License: MIT
Last pushed: Mar 08, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/x66ccff/liveideabench"
Open to everyone: 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
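For programmatic use, the same endpoint can be built and fetched from a script. A minimal sketch, assuming only the URL pattern shown in the curl command above (the response format and any other endpoints are not documented here):

```python
# Build the per-repository quality endpoint URL used in the curl example above.
# The path pattern .../quality/llm-tools/{owner}/{repo} is taken from the
# listing; everything else here is illustrative.
from urllib.parse import quote

API_BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Return the quality-data URL for a given GitHub owner/repo pair."""
    return f"{API_BASE}/{quote(owner)}/{quote(repo)}"

url = quality_url("x66ccff", "liveideabench")
# Fetch it with e.g. urllib.request.urlopen(url) or requests.get(url);
# anonymous access is rate-limited to 100 requests/day.
```

Percent-encoding the path segments keeps the URL valid even if an owner or repository name contains characters that are unsafe in a URL path.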
Higher-rated alternatives
MMMU-Benchmark/MMMU
This repo contains evaluation code for the paper "MMMU: A Massive Multi-discipline Multimodal...
pat-jj/DeepRetrieval
[COLM’25] DeepRetrieval — 🔥 Training Search Agent by RLVR with Retrieval Outcome
lupantech/MathVista
MathVista: data, code, and evaluation for Mathematical Reasoning in Visual Contexts
ise-uiuc/magicoder
[ICML'24] Magicoder: Empowering Code Generation with OSS-Instruct
sherryzyh/physical_reasoning_toolkit
A Python toolkit for physical reasoning in LLMs and VLMs. This toolkit streamlines access to...