zjunlp/InnoEval
InnoEval: On Research Idea Evaluation as a Knowledge-Grounded, Multi-Perspective Reasoning Problem
This project helps researchers, academics, and R&D professionals evaluate new research ideas or innovation proposals. You input a research idea as text or a PDF URL, and it outputs a detailed evaluation report. The report assesses the idea's novelty, feasibility, and significance, drawing on multiple simulated reviewer perspectives and on evidence gathered from web pages, code repositories, and academic papers.
Use this if you need an automated, comprehensive, and multi-faceted assessment of a single research idea, or if you want to evaluate and compare a large set of ideas against each other to streamline your review process.
Not ideal if you prefer manual, human-only review processes or require evaluations in highly niche fields where automated information retrieval might be insufficient.
Stars: 16
Forks: 1
Language: Python
License: MIT
Last pushed: Feb 17, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/rag/zjunlp/InnoEval"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
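For scripted access, the same endpoint can be queried from Python. Below is a minimal sketch using the requests library; it assumes the endpoint returns JSON (the response schema is not documented on this page), so it simply prints the raw payload.

# Fetch the quality data for zjunlp/InnoEval from the public API.
# Assumes a JSON response; adjust the parsing once the schema is known.
import requests

url = "https://pt-edge.onrender.com/api/v1/quality/rag/zjunlp/InnoEval"
resp = requests.get(url, timeout=10)
resp.raise_for_status()  # fails loudly if rate-limited (100 requests/day without a key)
print(resp.json())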
Higher-rated alternatives
modelscope/evalscope
A streamlined and customizable framework for efficient large model (LLM, VLM, AIGC) evaluation...
izam-mohammed/ragrank
🎯 Your free LLM evaluation toolkit helps you assess the accuracy of facts, how well it...
Kareem-Rashed/rubric-eval
Independent framework to test, benchmark, and evaluate LLMs & AI agents locally.
justplus/llm-eval
A large language model evaluation platform supporting multiple evaluation benchmarks, custom datasets, and performance testing. Also supports RAG evaluation on custom datasets.
relari-ai/continuous-eval
Data-Driven Evaluation for LLM-Powered Applications