mokksy/ai-mocks

AI-Mocks is a Kotlin-based mock server toolkit that brings service virtualization to both HTTP/SSE and LLM APIs — think WireMock meets local OpenAI/Anthropic/Gemini/A2A testing, but with real streaming and Server-Sent Events support.

Quality score: 43 / 100 (Emerging)

AI-Mocks helps developers reliably test applications that use Large Language Models (LLMs) like OpenAI, Anthropic, or Google Gemini. It lets you create predictable, simulated LLM responses, including streaming and Server-Sent Events, so you can test how your application handles different LLM behaviors without relying on external services. This is ideal for software engineers building and testing AI-powered features.

Use this if you are a software developer building applications that integrate with LLMs and need a reliable, controlled environment for testing different API responses and streaming behaviors.
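To make the pattern concrete, here is a minimal sketch of what "service virtualization for an LLM API" means in practice: a throwaway local server that answers an OpenAI-style chat-completions request with a canned Server-Sent Events stream, so the application under test can be pointed at it instead of a live provider. This is not ai-mocks' actual API — the class name `LlmMockSketch` is hypothetical, and only the JDK's built-in `HttpServer` and `HttpClient` are used; the endpoint path and chunk JSON merely mirror OpenAI's SSE wire format.

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.util.List;

public class LlmMockSketch {
    // Build an OpenAI-style SSE payload from a list of JSON chunk strings,
    // terminated by the conventional "[DONE]" sentinel event.
    static String sseBody(List<String> chunks) {
        StringBuilder sb = new StringBuilder();
        for (String c : chunks) sb.append("data: ").append(c).append("\n\n");
        return sb.append("data: [DONE]\n\n").toString();
    }

    public static void main(String[] args) throws Exception {
        // Throwaway server standing in for a live LLM endpoint.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/v1/chat/completions", exchange -> {
            byte[] body = sseBody(List.of(
                "{\"choices\":[{\"delta\":{\"content\":\"Hel\"}}]}",
                "{\"choices\":[{\"delta\":{\"content\":\"lo\"}}]}"
            )).getBytes();
            exchange.getResponseHeaders().add("Content-Type", "text/event-stream");
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();

        // The application under test would point its LLM base URL here.
        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
            .uri(URI.create("http://localhost:" + server.getAddress().getPort()
                + "/v1/chat/completions"))
            .POST(HttpRequest.BodyPublishers.ofString("{}"))
            .build();
        HttpResponse<String> response =
            client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body().endsWith("data: [DONE]\n\n")); // prints true
        server.stop(0);
    }
}
```

A library like AI-Mocks wraps this kind of plumbing behind a declarative API with request matching and provider-specific response builders, but the underlying test setup is the same: a local endpoint that replays predictable streamed responses.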

Not ideal if you are an end-user looking for a way to interact with or manage live LLM services directly, or if you're not a developer.

Tags: software-development, API-testing, LLM-integration, AI-application-testing, service-virtualization
No package published · No dependents

Maintenance: 10 / 25
Adoption: 7 / 25
Maturity: 16 / 25
Community: 10 / 25


Stars: 39
Forks: 4
Language: Kotlin
License: MIT
Last pushed: Mar 13, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/mokksy/ai-mocks"

Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.