mokksy/ai-mocks
AI-Mocks is a Kotlin-based mock-server toolkit that brings service virtualization to both plain HTTP/SSE and LLM APIs: think WireMock meets local OpenAI/Anthropic/Gemini/A2A testing, with real streaming and Server-Sent Events support.
AI-Mocks helps developers reliably test applications that use Large Language Models (LLMs) such as OpenAI, Anthropic, or Google Gemini. It lets you define predictable, simulated LLM responses, including streamed Server-Sent Events, so you can verify how your application handles different LLM behaviors without depending on external services.
Use this if you are a software developer building applications that integrate with LLMs and need a reliable, controlled environment for testing different API responses and streaming behaviors.
Not ideal if you are an end user looking to interact with or manage live LLM services directly, rather than test applications against them.
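The streaming behavior AI-Mocks simulates is delivered over Server-Sent Events in the OpenAI chunked-completion style. As a minimal, runnable illustration of that wire format (Python here purely for demonstration; this is not AI-Mocks code, and the chunk schema shown is the standard OpenAI streaming shape, not anything specific to this project):

```python
import json

def sse_event(payload: dict) -> str:
    """Frame one chunk as a Server-Sent Events message: 'data: <json>' plus a blank line."""
    return f"data: {json.dumps(payload)}\n\n"

def stream_completion(chunks):
    """Yield OpenAI-style streaming deltas, then the [DONE] sentinel that ends the stream."""
    for text in chunks:
        yield sse_event({"choices": [{"delta": {"content": text}}]})
    yield "data: [DONE]\n\n"

# A client under test would reassemble "Hel" + "lo" from these events.
events = list(stream_completion(["Hel", "lo"]))
```

A mock server (or a tool like AI-Mocks) emits frames like these so the application's SSE parsing and incremental-rendering logic can be exercised without a live provider.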
Stars: 39
Forks: 4
Language: Kotlin
License: MIT
Category:
Last pushed: Mar 13, 2026
Commits (30d): 0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/mokksy/ai-mocks"
Open to everyone: 100 requests/day with no key required; a free key raises the limit to 1,000/day.
Higher-rated alternatives
lmstudio-ai/lmstudio-js
LM Studio TypeScript SDK
lmstudio-ai/lms
LM Studio CLI
samestrin/llm-interface
A simple NPM interface for seamlessly interacting with 36 Large Language Model (LLM) providers,...
nbonamy/multi-llm-ts
A Typescript library to use LLM providers APIs in a unified way.
token-js/token.js
Integrate 200+ LLMs with one TypeScript SDK using OpenAI's format.