teamchong/vectorjson

O(n) streaming JSON parser for LLM tool calls. Agents act sooner, abort bad outputs early. WASM SIMD, up to 2000× faster than stock AI SDK parsers.

Score: 35 / 100 (Emerging)

When an AI agent calls a tool that returns a large payload, such as generated code or detailed instructions, the result typically arrives as a streaming JSON response. Your application or user interface needs to process that stream efficiently: show parts of the output immediately, render content character by character, or skip irrelevant sections. This tool is for developers building AI agents who want to significantly speed up how their applications handle streaming JSON output from large language models, making agents feel faster and more responsive.
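To see why streaming parsing matters, consider the naive approach it replaces: accumulate chunks and re-run JSON.parse on the whole buffer after every chunk, which does O(n²) total work over the stream. The sketch below is illustrative only (the chunking and callback names are this example's own, not vectorjson's API):

```javascript
// Naive chunk handling: accumulate and retry JSON.parse on every chunk.
// Total work is O(n^2) over the stream length -- the cost an O(n)
// streaming parser avoids.
function makeNaiveConsumer(onComplete) {
  let buffer = "";
  return function onChunk(chunk) {
    buffer += chunk;
    try {
      // Re-parses everything received so far, on every chunk.
      onComplete(JSON.parse(buffer));
      return true; // payload complete
    } catch {
      return false; // still a partial document; wait for more chunks
    }
  };
}

// Simulated LLM tool-call output arriving in fragments.
const chunks = ['{"tool":"write_file","args":{"pa', 'th":"a.txt","content":"hi"}}'];
let result = null;
const feed = makeNaiveConsumer((obj) => { result = obj; });
for (const c of chunks) feed(c);
console.log(result.tool); // "write_file"
```

A streaming parser instead scans each byte once and can surface fields (like `tool` above) before the closing brace arrives.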

Available on npm.

Use this if you are building an AI agent that frequently makes tool calls and streams large JSON payloads, and you need to process these streams with minimal delay and resource usage.
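The "abort bad outputs early" benefit from the tagline can be sketched as a prefix check: reject a stream as soon as what has arrived can no longer be valid JSON, rather than buffering the whole payload first. This is a minimal illustration of the idea, not vectorjson's actual API:

```javascript
// Returns true while the accumulated prefix could still grow into a
// valid JSON document (very coarse check for illustration).
function isPlausibleJsonPrefix(prefix) {
  const s = prefix.trimStart();
  return s === "" || s.startsWith("{") || s.startsWith("[");
}

// Consume a chunked stream, aborting on the first chunk that rules
// out a JSON payload (e.g. the model replied with prose instead).
function consumeOrAbort(chunks) {
  let buffer = "";
  for (const chunk of chunks) {
    buffer += chunk;
    if (!isPlausibleJsonPrefix(buffer)) {
      return { ok: false, reason: "not JSON; aborted early" };
    }
  }
  return { ok: true, value: JSON.parse(buffer) };
}

console.log(consumeOrAbort(["I cannot", " call that tool"]).ok); // false
console.log(consumeOrAbort(['{"tool":"ls"', ',"args":{}}']).ok); // true
```

A real streaming parser validates much more than the first character, but the shape is the same: the bad stream is rejected after one chunk instead of after the full payload.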

Not ideal if your application only deals with small, single-shot JSON objects, as standard JSON parsing methods will likely be faster in those specific cases.

Tags: AI agent development, LLM application performance, real-time data processing, streaming data optimization
No Dependents
Maintenance: 10 / 25
Adoption: 5 / 25
Maturity: 20 / 25
Community: 0 / 25


Stars: 14
Forks:
Language: JavaScript
License: Apache-2.0
Last pushed: Mar 10, 2026
Commits (30d): 0

Get this data via API

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/teamchong/vectorjson"

Open to everyone: 100 requests/day with no key needed. Get a free key for 1,000 requests/day.