LLM token calculator
Estimate prompt tokens before the API bill arrives.
Compare providers from one wide workspace: OpenAI, Claude, Gemini, DeepSeek, Mistral, and Grok.
Input tokens include the system prompt, prompt text, and user message. Output tokens come from the assistant response box.
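The input/output split above can be sketched as follows. The `estimateTokens` helper and its 4-characters-per-token ratio are assumptions for illustration, not the calculator's published constant.

```typescript
// Hypothetical estimator; ~4 chars per token is an assumed heuristic ratio.
const estimateTokens = (text: string): number => Math.ceil(text.length / 4);

// Input tokens: system prompt + prompt text + user message.
function estimateInputTokens(system: string, prompt: string, user: string): number {
  return [system, prompt, user].map(estimateTokens).reduce((a, b) => a + b, 0);
}

// Output tokens come from the assistant response box alone.
const estimateOutputTokens = (assistant: string): number => estimateTokens(assistant);
```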
Planning estimate: provider tokenizers differ, so verify production counts with each provider's token counting or usage APIs.
Pricing checked 2026-05-09. GPT-5.5: input $5/1M, cached input $0.50/1M, output $30/1M tokens.
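To sanity-check a budget against the rates above, per-request cost is tokens × rate ÷ 1,000,000. A minimal sketch using the listed GPT-5.5 prices; the request shape in the example is hypothetical.

```typescript
// Per-million-token rates from the pricing line above (GPT-5.5).
const INPUT_RATE = 5.0;        // $ per 1M input tokens
const CACHED_INPUT_RATE = 0.5; // $ per 1M cached input tokens
const OUTPUT_RATE = 30.0;      // $ per 1M output tokens

// Dollar cost for one request; cachedTokens is the portion of input
// already covered by the provider's prompt cache.
function requestCostUSD(
  inputTokens: number,
  cachedTokens: number,
  outputTokens: number,
): number {
  const freshInput = inputTokens - cachedTokens;
  return (
    (freshInput * INPUT_RATE +
      cachedTokens * CACHED_INPUT_RATE +
      outputTokens * OUTPUT_RATE) /
    1_000_000
  );
}

// Example: 10k input tokens (2k of them cached) and 1k output tokens
// → (8000·5 + 2000·0.5 + 1000·30) / 1e6 = $0.071
```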
Built as a planning tool, not an exact tokenizer.
Token estimates use a transparent character-per-token heuristic for speed and privacy. No prompt text leaves the browser, no API key is requested, and exact provider tokenizer behavior should be checked before committing a production budget.
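The heuristic itself can be sketched in a few lines. The exact ratio is not stated here; 4 characters per token is a common rule of thumb for English text and is an assumption in this sketch.

```typescript
// Assumed ratio: ~4 characters per token, a common English-text rule of
// thumb; the calculator's actual constant may differ.
const CHARS_PER_TOKEN = 4;

// Operates only on the string in memory: no prompt text is sent anywhere,
// which is why no API key is requested.
function estimateTokens(text: string): number {
  return Math.ceil(text.length / CHARS_PER_TOKEN);
}
```

Because the ratio varies by language and content (code and non-English text often tokenize less efficiently), the result is a planning figure, not a billable count.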