Arize-ai/openinference
OpenTelemetry Instrumentation for AI Observability
This project helps machine learning engineers and AI developers understand the internal workings and performance of their AI applications, especially those using Large Language Models (LLMs). It provides a way to trace the steps and data flow within an AI application, from inputs like user queries to outputs generated by LLMs or external tools. The output is detailed observability data that helps debug, optimize, and monitor AI systems.
886 stars. Actively maintained with 61 commits in the last 30 days.
Use this if you need deep visibility into how your LLM-powered applications are performing, including interactions with vector stores or search engines.
Not ideal if you are looking for a general-purpose application monitoring tool that doesn't focus specifically on AI model inference.
Stars: 886
Forks: 200
Language: Python
License: Apache-2.0
Category:
Last pushed: Mar 12, 2026
Commits (30d): 61
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/Arize-ai/openinference"
Open to everyone: 100 requests/day with no key required. A free key raises the limit to 1,000 requests/day.
Related tools
- vndee/llm-sandbox: Lightweight, portable LLM sandbox runtime (code interpreter) Python library.
- apache/hertzbeat: An AI-powered, next-generation open-source real-time observability system.
- traceloop/openllmetry: Open-source observability for your GenAI or LLM application, based on OpenTelemetry.
- utkuozdemir/nvidia_gpu_exporter: Nvidia GPU exporter for Prometheus using the nvidia-smi binary.
- Dynatrace/obslab-llm-observability: Search for a holiday and get destination advice from an LLM; observability by Dynatrace.