neo4j-contrib/grape

Graph Retriever Analysis and Performance Evaluation

Score: 35 / 100 (Emerging)

This framework helps you evaluate how accurately large language models (LLMs) can query knowledge graphs through Model Context Protocol (MCP) servers. It takes your Neo4j database as input, generates a question-and-answer dataset from it, and then uses an LLM judge to score how well different MCP server implementations retrieve the answers. It is aimed at researchers and engineers building and deploying LLM applications that interact with knowledge graphs.

No commits in the last 6 months.

Use this if you need to objectively measure and compare the performance of different systems that enable LLMs to extract information from knowledge graphs.

Not ideal if you are looking for a tool to build or train LLMs, or if you only need to perform basic queries on a knowledge graph without LLM involvement.

Tags: LLM-evaluation, knowledge-graph-querying, NLP-benchmarking, AI-system-performance, Neo4j-applications
Badges: Stale (6m), No Package, No Dependents

Maintenance: 2 / 25
Adoption: 7 / 25
Maturity: 15 / 25
Community: 11 / 25


Stars: 31
Forks: 4
Language: Jupyter Notebook
License: MIT
Last pushed: Sep 08, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/neo4j-contrib/grape"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.
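The same endpoint can be called from code. A minimal Python sketch, using only the URL shown above; the response fields and the existence of a JSON body are assumptions, and the helper names are hypothetical:

```python
import json
import urllib.request

# Base path taken from the curl example above; everything after it is owner/repo.
BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools"

def quality_url(owner: str, repo: str) -> str:
    """Build the quality-data URL for a given GitHub repository."""
    return f"{BASE}/{owner}/{repo}"

def fetch_quality(owner: str, repo: str) -> dict:
    """Fetch and decode the quality data (assumes a JSON response body)."""
    # No key needed up to 100 requests/day per the note above.
    with urllib.request.urlopen(quality_url(owner, repo)) as resp:
        return json.load(resp)
```

For example, `fetch_quality("neo4j-contrib", "grape")` would request the same URL as the curl command above.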