Langfuse and Agenta
These are direct competitors offering overlapping LLMOps functionality (observability, prompt management, evaluation, playground). Langfuse has significantly broader integration support and adoption, while Agenta appears to be a more self-contained, all-in-one platform.
About Langfuse
langfuse/langfuse
🪢 Open source LLM engineering platform: LLM Observability, metrics, evals, prompt management, playground, datasets. Integrates with OpenTelemetry, Langchain, OpenAI SDK, LiteLLM, and more. 🍊YC W23
This platform helps AI application developers build, test, and improve their large language model (LLM)-powered products. It ingests usage data from your LLM application and provides tools for debugging, evaluating performance, and managing prompts. The end users are developers, machine learning engineers, and product managers working on AI applications.
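The core idea behind this kind of observability can be sketched as a decorator that records each LLM call's inputs, output, and latency as a trace. This is a minimal, self-contained illustration, not Langfuse's actual SDK; the `observe` and `TRACES` names here are hypothetical stand-ins (a real platform persists traces to a backend for dashboards and debugging).

```python
import time
from functools import wraps

# Hypothetical in-memory trace store; a real observability platform
# would send these records to a backend instead.
TRACES: list[dict] = []

def observe(fn):
    """Record each call's name, inputs, output, and latency as a trace."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        output = fn(*args, **kwargs)
        TRACES.append({
            "name": fn.__name__,
            "input": {"args": args, "kwargs": kwargs},
            "output": output,
            "latency_s": time.perf_counter() - start,
        })
        return output
    return wrapper

@observe
def answer(question: str) -> str:
    # Stand-in for a real LLM call (e.g. via the OpenAI SDK).
    return f"Echo: {question}"

answer("What is LLM observability?")
```

Collected traces are what power the debugging and evaluation views: each record ties a specific input to the output and cost/latency it produced.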
About Agenta
Agenta-AI/agenta
The open-source LLMOps platform: prompt playground, prompt management, LLM evaluation, and LLM observability all in one place.
This platform helps product and engineering teams build reliable applications powered by large language models (LLMs). It provides tools to refine the prompts that guide LLMs, test their performance against various inputs, and monitor how they behave once deployed. You can supply different prompts and test cases, then analyze the LLM's responses and performance metrics.
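The evaluate-prompts-against-test-cases workflow described above can be sketched in a few lines: render a prompt template for each test case, call the model, and score each output. Everything here is a hypothetical illustration (the `fake_llm` model and `contains_expected` evaluator are invented for the sketch), not Agenta's actual API.

```python
def fake_llm(prompt: str) -> str:
    # Stand-in for a real model call; just uppercases the last word.
    return prompt.rsplit(" ", 1)[-1].upper()

def contains_expected(output: str, expected: str) -> bool:
    # Simplest possible evaluator: substring match against a reference.
    return expected in output

template = "Shout the word: {word}"
test_cases = [
    {"word": "hello", "expected": "HELLO"},
    {"word": "agenta", "expected": "AGENTA"},
]

# Run every test case through the prompt and score the outputs.
results = []
for case in test_cases:
    output = fake_llm(template.format(word=case["word"]))
    results.append({
        "input": case["word"],
        "output": output,
        "passed": contains_expected(output, case["expected"]),
    })

pass_rate = sum(r["passed"] for r in results) / len(results)
```

Comparing `pass_rate` across prompt variants is the basic loop these platforms automate, alongside richer evaluators (LLM-as-judge, similarity metrics) and result dashboards.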