m96-chan/0xBitNet

Run BitNet b1.58 ternary LLMs with WebGPU — in browsers and native apps

Score: 40/100 (Emerging)

This project allows developers to integrate small, efficient AI language models directly into web applications or desktop software. It takes pre-trained BitNet b1.58 or Falcon-E models as input and outputs generated text, enabling features like real-time chat or content summarization without needing a dedicated server. This is for software developers creating applications that require on-device AI text generation.

Available on npm.

Use this if you are a developer looking to add fast, locally-run AI language model capabilities to your browser-based or native applications.

Not ideal if you need to run large, complex AI models or require extensive customization beyond what's offered by the supported BitNet architecture.

Tags: AI application development, on-device inference, edge AI, browser AI, text generation

Dependents: none

Maintenance: 10/25
Adoption: 10/25
Maturity: 20/25
Community: 0/25


Stars: 10
Forks:
Language: TypeScript
License: MIT
Last pushed: Mar 08, 2026
Monthly downloads: 206
Commits (30d): 0

Get this data via the API:

curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/m96-chan/0xBitNet"

Open to everyone at 100 requests/day with no key needed; a free key raises the limit to 1,000/day.
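The endpoint above follows an owner/repo pattern, so the same call can be made for any listed project. A minimal TypeScript sketch of building the request URL; the `qualityUrl` helper and its parameter names are illustrative, not part of the API's documentation:

```typescript
// Base endpoint for the quality API, taken from the curl example above.
const BASE = "https://pt-edge.onrender.com/api/v1/quality/llm-tools";

// Build the per-repository URL, escaping each path segment defensively.
function qualityUrl(owner: string, repo: string): string {
  return `${BASE}/${encodeURIComponent(owner)}/${encodeURIComponent(repo)}`;
}

// Reproduces the URL shown in the curl example for this project.
console.log(qualityUrl("m96-chan", "0xBitNet"));
```

The resulting URL can then be fetched with `fetch` or `curl` exactly as shown above; the shape of the JSON response is not documented here, so inspect it before relying on specific fields.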