msu-denver/bili-core
bili-core is an open-source framework for LLM benchmarking built on LangChain, LangGraph, Streamlit, and Flask. It enables side-by-side LLM comparisons, Retrieval-Augmented Generation (RAG), and customizable decision workflows. Part of MSU Denver’s Sustainability Hub, bili-core promotes data democracy and transparent, reproducible AI research. 🚀
This tool helps researchers and AI practitioners compare the performance of different Large Language Models (LLMs) and tune how those models retrieve information through RAG. You supply the LLMs, custom prompts, and external tools; in return you get benchmark results and a customized RAG implementation. Anyone who needs to rigorously test and optimize LLM applications for specific tasks will find this valuable.
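To make the workflow concrete, here is a minimal sketch of the kind of side-by-side comparison bili-core streamlines, written directly against LangChain. This is illustrative only, not bili-core's actual API; the model names, the prompt, and the manual loop are all placeholder assumptions.

```python
# Illustrative sketch (NOT bili-core's API): run one prompt against two
# candidate models via LangChain and print both answers for comparison.
# Model names below are example assumptions.
from langchain_openai import ChatOpenAI
from langchain_anthropic import ChatAnthropic

PROMPT = "Summarize the benefits of retrieval-augmented generation in two sentences."

# Candidate models under comparison.
candidates = {
    "gpt-4o-mini": ChatOpenAI(model="gpt-4o-mini", temperature=0),
    "claude-3-5-haiku": ChatAnthropic(model="claude-3-5-haiku-latest", temperature=0),
}

for name, llm in candidates.items():
    response = llm.invoke(PROMPT)
    print(f"--- {name} ---")
    print(response.content)
```

A framework like bili-core wraps this comparison loop in a UI and adds configurable RAG and scoring on top; the sketch only shows the raw step it automates.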
Use this if you need to systematically evaluate different LLMs, customize how they find and use information, and build complex conversational agents without running models locally.
Not ideal if you're looking for a simple, pre-configured chatbot solution without any need for benchmarking or deep customization of RAG parameters.
Stars: 9
Forks: 1
Language: Python
License: MIT
Last pushed: Mar 13, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/msu-denver/bili-core"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
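For programmatic access, here is a minimal Python sketch using `requests`. The endpoint URL comes from the curl example above; the shape of the JSON payload is not documented here, so the code prints whatever keys come back rather than assuming specific field names.

```python
# Minimal sketch: fetch the quality data for bili-core from the pt-edge API.
# The endpoint is taken from the curl example above; the response's JSON
# field names are not documented here, so none are hard-coded.
import requests

URL = "https://pt-edge.onrender.com/api/v1/quality/llm-tools/msu-denver/bili-core"

resp = requests.get(URL, timeout=10)
resp.raise_for_status()
data = resp.json()

# Inspect whatever the API returns; keys are not guaranteed.
for key, value in data.items():
    print(f"{key}: {value}")
```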
Higher-rated alternatives
sierra-research/tau2-bench
τ²-Bench: Evaluating Conversational Agents in a Dual-Control Environment
xlang-ai/OSWorld
[NeurIPS 2024] OSWorld: Benchmarking Multimodal Agents for Open-Ended Tasks in Real Computer Environments
bigcode-project/bigcodebench
[ICLR'25] BigCodeBench: Benchmarking Code Generation Towards AGI
THUDM/AgentBench
A Comprehensive Benchmark to Evaluate LLMs as Agents (ICLR'24)
scicode-bench/SciCode
A benchmark that challenges language models to code solutions for scientific problems