tryAGI/Tiktoken
This project implements token calculation for OpenAI's GPT-4 and GPT-3.5-turbo models, specifically using the `cl100k_base` encoding.
This tool helps developers accurately count the tokens in text or chat messages for OpenAI's GPT models (such as GPT-4 and GPT-3.5-turbo), or for any model using a HuggingFace tokenizer. You pass in text or structured chat messages, and it returns the precise token count or the tokens themselves. This is crucial for managing API costs and ensuring your prompts fit within model limits when building applications on top of large language models.
Use this if you are a developer building applications with OpenAI's GPT models or other BPE-based language models, and you need a fast, accurate, and memory-efficient way to count or encode text tokens in .NET.
Not ideal if you are an end-user needing to count tokens for a one-off task without any programming, or if you are working with models that do not use OpenAI's encodings or standard HuggingFace BPE tokenizers.
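The counting and encoding workflow described above can be sketched in C#. Note that the type and method names here (`ModelToEncoder`, `Encode`, `Decode`, `CountTokens`) are assumptions based on the project's documented API surface and may differ between versions; treat this as illustrative rather than authoritative:

```csharp
// Sketch only: ModelToEncoder and the Encoder methods below are assumed
// from the project's README-style API and may vary by package version.
using System;
using Tiktoken;

class TokenCountDemo
{
    static void Main()
    {
        // Resolve the cl100k_base encoder for a GPT-4-class model.
        var encoder = ModelToEncoder.For("gpt-4");

        var text = "hello world";
        var tokens = encoder.Encode(text);      // BPE token ids
        var count = encoder.CountTokens(text);  // number of tokens

        Console.WriteLine($"{count} tokens: [{string.Join(", ", tokens)}]");
    }
}
```

Counting before sending a request lets you reject or truncate prompts that would exceed a model's context window, instead of paying for a failed API call.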
Stars
82
Forks
7
Language
C#
License
MIT
Category
Last pushed
Mar 09, 2026
Commits (30d)
0
Get this data via API
curl "https://pt-edge.onrender.com/api/v1/quality/llm-tools/tryAGI/Tiktoken"
Open to everyone — 100 requests/day, no key needed. Get a free key for 1,000/day.
Higher-rated alternatives
aiqinxuancai/TiktokenSharp
Token calculation for OpenAI models, supporting the `o200k_base`, `cl100k_base`, and `p50k_base` encodings.
pkoukk/tiktoken-go
Go version of tiktoken.
dqbd/tiktokenizer
Online playground for OpenAI tokenizers.
microsoft/Tokenizer
Typescript and .NET implementation of BPE tokenizer for OpenAI LLMs.
lenML/tokenizers
A lightweight, no-dependency fork of transformers.js (tokenizers only).