neo4j-contrib/grape
Graph Retriever Analysis and Performance Evaluation
This framework helps you evaluate how accurately large language models (LLMs) can query knowledge graphs, specifically those exposed through Model Context Protocol (MCP) servers. It takes your Neo4j database as input, generates a dataset of questions and answers, and then uses an LLM judge to score how well different MCP server implementations retrieve information. It is aimed at researchers and engineers building and deploying LLM applications that interact with knowledge graphs.
No commits in the last 6 months.
Use this if you need to objectively measure and compare the performance of different systems that enable LLMs to extract information from knowledge graphs.
Not ideal if you are looking for a tool to build or train LLMs, or if you only need to perform basic queries on a knowledge graph without LLM involvement.
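The generate-then-judge loop described above can be sketched in Python. This is a hypothetical illustration of the workflow, not GRAPE's actual API: `generate_dataset`, `judge`, and `evaluate` are stand-in names, and the exact-match judge is a toy substitute for an LLM judge.

```python
# Hypothetical sketch of the evaluation loop: generate a Q&A dataset,
# run a retriever, and score it with a judge. Names are illustrative.
from dataclasses import dataclass


@dataclass
class QAPair:
    question: str
    expected_answer: str


def generate_dataset(graph_facts: dict) -> list[QAPair]:
    """Stand-in for Q&A generation from a Neo4j knowledge graph."""
    return [QAPair(q, a) for q, a in graph_facts.items()]


def judge(expected: str, retrieved: str) -> float:
    """Toy judge: 1.0 on exact match, else 0.0 (GRAPE uses an LLM judge)."""
    return 1.0 if expected.strip().lower() == retrieved.strip().lower() else 0.0


def evaluate(dataset: list[QAPair], retrieve) -> float:
    """Average judge score for one retriever (e.g. one MCP server) over the dataset."""
    scores = [judge(p.expected_answer, retrieve(p.question)) for p in dataset]
    return sum(scores) / len(scores)


# Toy run: a "retriever" that answers one of two questions correctly.
facts = {"Who directed Alien?": "Ridley Scott", "Capital of France?": "Paris"}
dataset = generate_dataset(facts)
accuracy = evaluate(dataset, lambda q: "Paris" if "France" in q else "unknown")
print(accuracy)  # 0.5
```

Running `evaluate` with several retrievers over the same generated dataset is what makes the comparison between MCP server implementations objective.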
Stars: 31
Forks: 4
Language: Jupyter Notebook
License: MIT
Category:
Last pushed: Sep 08, 2025
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/neo4j-contrib/grape"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
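The same endpoint can be called from Python. A minimal standard-library sketch, assuming the URL pattern shown in the curl example above; the response schema is not documented here, so the network call is left commented out (it counts against the 100 requests/day limit).

```python
# Build the quality-API URL for any repository, following the pattern
# shown in the curl example above.
import json
from urllib.request import urlopen

BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"


def quality_url(owner: str, repo: str) -> str:
    """Per-repository quality endpoint URL."""
    return f"{BASE}/{owner}/{repo}"


url = quality_url("neo4j-contrib", "grape")
print(url)
# → https://pt-edge.onrender.com/api/v1/quality/llm-tools/neo4j-contrib/grape

# Uncomment to fetch the data (no key needed, 100 requests/day):
# with urlopen(url, timeout=10) as resp:
#     data = json.load(resp)
```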
Higher-rated alternatives
langchain-ai/langchain-google
🦜🔗 LangChain interfaces to Google's suite of AI products (Gemini & Vertex AI)
nicolay-r/bulk-chain
A no-string API framework for deploying schema-based reasoning into third-party apps
MIDORIBIN/langchain-gpt4free
LangChain x gpt4free
doccano/doccano-mini
Annotation meets Large Language Models (ChatGPT, GPT-3 and alike).
dataprofessor/langchain-quickstart
Build your first LLM powered app with Langchain and Streamlit.