flexpa/llm-fhir-eval
Benchmarking Large Language Models for FHIR
This tool helps healthcare IT professionals and medical data scientists assess how well large language models (LLMs) can handle healthcare data. It takes LLM outputs for tasks like generating patient records or extracting data from clinical notes, and measures their accuracy against FHIR standards. The result is a benchmark report showing how accurately different LLMs perform these critical healthcare data operations.
Use this if you need to reliably evaluate and compare different large language models for their ability to generate, validate, or extract data in the FHIR healthcare standard.
Not ideal if you're looking for a tool to deploy an LLM directly into a production healthcare system, as this is an evaluation framework, not an implementation solution.
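To make the idea concrete, here is a minimal sketch of the kind of structural check a FHIR benchmark could score an LLM output against. This is an illustration only, not llm-fhir-eval's actual API; the function names and checks are assumptions.

```typescript
// Hypothetical sketch (NOT llm-fhir-eval's real interface): score an
// LLM-generated FHIR Patient resource against basic structural checks.

type CheckResult = { check: string; passed: boolean };

// Minimal structural checks for a FHIR R4 Patient resource.
function checkPatient(resource: unknown): CheckResult[] {
  const r = resource as Record<string, unknown> | null;
  return [
    { check: "is an object", passed: typeof r === "object" && r !== null },
    { check: "resourceType is Patient", passed: r?.["resourceType"] === "Patient" },
    { check: "has a string id", passed: typeof r?.["id"] === "string" },
    { check: "name is an array", passed: Array.isArray(r?.["name"]) },
  ];
}

// Benchmark-style score: fraction of checks passed.
function score(results: CheckResult[]): number {
  return results.filter((c) => c.passed).length / results.length;
}

// Example: an LLM output that omits the `name` field.
const llmOutput = { resourceType: "Patient", id: "example" };
console.log(score(checkPatient(llmOutput))); // 0.75
```

A real FHIR validator would go much further (profiles, cardinality, terminology bindings); the point here is only the shape of the evaluation loop: run checks, aggregate a score, compare models.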
Stars: 42
Forks: 8
Language: TypeScript
License: —
Category: —
Last pushed: Feb 04, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/flexpa/llm-fhir-eval"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
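The same endpoint can be called from TypeScript. The URL is the one given above; the JSON field names below (stars, forks) are assumptions about the response shape, which is not documented here.

```typescript
// Sketch of calling the quality API from TypeScript. The response fields
// (stars, forks) are assumed, not documented, so parsing is defensive.

type RepoQuality = { stars?: number; forks?: number };

// Summarize the fields we care about, tolerating missing keys.
function summarize(data: RepoQuality): string {
  return `stars=${data.stars ?? "?"} forks=${data.forks ?? "?"}`;
}

async function fetchQuality(): Promise<string> {
  const res = await fetch(
    "https://pt-edge.onrender.com/api/v1/quality/llm-tools/flexpa/llm-fhir-eval",
  );
  if (!res.ok) throw new Error(`HTTP ${res.status}`);
  return summarize((await res.json()) as RepoQuality);
}

// Offline example with a sample payload (no network needed):
console.log(summarize({ stars: 42, forks: 8 })); // "stars=42 forks=8"
```

Keeping the parsing in a separate pure function (summarize) makes the network call easy to swap out or mock when the unauthenticated 100-requests/day quota matters.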
Higher-rated alternatives
EvolvingLMMs-Lab/lmms-eval: One-for-All Multimodal Evaluation Toolkit Across Text, Image, Video, and Audio Tasks
vibrantlabsai/ragas: Supercharge Your LLM Application Evaluations 🚀
open-compass/VLMEvalKit: Open-source evaluation toolkit for large multi-modality models (LMMs), supporting 220+ LMMs and 80+ benchmarks
EuroEval/EuroEval: The robust European language model benchmark.
Giskard-AI/giskard-oss: 🐢 Open-Source Evaluation & Testing library for LLM Agents