olaflaitinen/llm-proteomics-hallucination

Systematic evaluation of hallucination risks in Large Language Models (GPT-4, Claude 3, Gemini Pro) for clinical proteomics and mass spectrometry interpretation. Production-ready detection framework with comprehensive benchmarks.

Score: 37 / 100 (Emerging)

This project helps clinical researchers and medical professionals understand the risks of using large language models (LLMs) to interpret clinical proteomics and mass spectrometry data. It takes LLM responses to specialized queries about proteins and their modifications and outputs a detailed accuracy evaluation, highlighting hallucination rates and risk factors. It is aimed at medical researchers, lab directors, and clinicians considering AI for diagnostic support in proteomics.
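As a rough illustration of the kind of evaluation described above, the sketch below computes a per-model hallucination rate from expert-labelled responses. The class, function, and field names here are hypothetical assumptions for illustration only, not the repository's actual API.

from dataclasses import dataclass

@dataclass
class GradedResponse:
    model: str               # e.g. "gpt-4", "claude-3", "gemini-pro"
    query: str               # proteomics question posed to the model
    is_hallucination: bool   # expert-assigned label: response contains a factual error

def hallucination_rate(responses: list[GradedResponse], model: str) -> float:
    """Fraction of a model's graded responses judged to be hallucinations."""
    graded = [r for r in responses if r.model == model]
    # Avoid dividing by zero when a model has no graded responses
    return sum(r.is_hallucination for r in graded) / len(graded) if graded else 0.0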

Use this if you are a clinical proteomics expert concerned about the reliability of AI-generated insights for patient care and need to quantify hallucination risks.

Not ideal if you are looking to integrate an LLM directly into a clinical workflow without rigorous validation or human oversight; the project's results demonstrate significant safety concerns with that approach.

Clinical Proteomics · Mass Spectrometry · Diagnostic Accuracy · AI in Medicine · Patient Safety
No package · No dependents
Maintenance 6 / 25
Adoption 5 / 25
Maturity 13 / 25
Community 13 / 25

How are scores calculated?
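From the numbers shown here, the four component scores, each out of 25, appear to sum to the overall score: 6 + 5 + 13 + 13 = 37 / 100.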

Stars: 9
Forks: 2
Language: Python
License: MIT
Last pushed: Nov 11, 2025
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/transformers/olaflaitinen/llm-proteomics-hallucination"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
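The same data can be fetched from Python; a minimal sketch using the requests library is below. The endpoint returns JSON, but the exact response schema is an assumption here, not documented API output.

import requests

url = ("https://pt-edge.onrender.com/api/v1/quality/"
       "transformers/olaflaitinen/llm-proteomics-hallucination")
resp = requests.get(url, timeout=10)
resp.raise_for_status()          # fail loudly on HTTP errors or rate limiting
data = resp.json()
print(data)  # schema unverified; likely includes the overall score and component breakdown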