LLM Token Counter
100% local – nothing leaves your browser
// how token counting works
Tokens are the basic units LLMs operate on. A token is roughly 4 characters in English, or about ¾ of a word. Common words like "the" or "cat" are one token each. Longer or rare words may be split into multiple tokens.
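The two rules of thumb above (about 4 characters per token, about ¾ of a word per token) can be combined into a quick estimator. This is a sketch under the stated approximations, not a real tokenizer; the function name and the choice to take the larger of the two estimates are my own.

```python
def estimate_tokens(text: str) -> int:
    """Rough token estimate: ~4 chars per token, or ~3/4 of a word per token.

    Takes the larger of the two estimates as a conservative guess.
    """
    chars = len(text)
    words = len(text.split())
    return max(round(chars / 4), round(words / 0.75))

print(estimate_tokens("The cat sat on the mat."))
```

For short English sentences this lands close to real tokenizer output, but it drifts for code, non-English text, and rare words, which is why the tool below only claims ~5% accuracy.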
This tool uses a BPE approximation – it's accurate to within ~5% for English text. For exact counts, use the model provider's own tokenizer (the tiktoken library for OpenAI models; for Anthropic models, the Claude API exposes a token-counting endpoint).
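An exact count with tiktoken looks like the sketch below. The `"cl100k_base"` encoding is the one used by GPT-3.5/GPT-4-era models; the heuristic fallback (chars ÷ 4, from the rule of thumb above) is an assumption for when the library isn't installed.

```python
def count_tokens(text: str) -> int:
    """Exact token count via tiktoken when available, else a ~4-chars heuristic."""
    try:
        import tiktoken
        enc = tiktoken.get_encoding("cl100k_base")
        return len(enc.encode(text))
    except ImportError:
        # Fallback: the ~4-characters-per-token rule of thumb.
        return max(1, len(text) // 4)

print(count_tokens("Common words are usually one token each."))
```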
Context window = the maximum number of tokens a model can process in one call (input + output combined). If your prompt uses 80% or more of the window, very little space is left for the response.
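Because input and output share the window, the response budget is just the window size minus the prompt's token count. A minimal sketch of that bookkeeping follows; the model names and window sizes here are made-up examples, not a list of real model limits.

```python
# Hypothetical window sizes for illustration only.
CONTEXT_WINDOWS = {"example-8k": 8_192, "example-128k": 128_000}

def remaining_budget(model: str, prompt_tokens: int) -> int:
    """Tokens left for the response (input + output share the window)."""
    return max(0, CONTEXT_WINDOWS[model] - prompt_tokens)

def prompt_usage(model: str, prompt_tokens: int) -> float:
    """Fraction of the window consumed by the prompt."""
    return prompt_tokens / CONTEXT_WINDOWS[model]

print(remaining_budget("example-8k", 6_800))               # 1392 tokens left
print(f"{prompt_usage('example-8k', 6_800):.0%} used")     # 83% used
```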