Tokenize helper
Estimate token usage before sending a request.
Endpoint
POST /api/v1/tokenize
Provide the same high-level inputs you would send to the chat endpoints. You can send either `text` (a string) or `messages` (an array), and optionally `system` and `model`.
curl -X POST "https://api.kushrouter.com/api/v1/tokenize" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "messages": [
      { "role": "user", "content": "Draft a customer onboarding email." }
    ]
  }'
Example response:
{
  "success": true,
  "model": "gpt-4o-mini",
  "tokens": 56
}
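You can also estimate a plain string instead of a message array. Assuming the `text`, `system`, and `model` fields combine in a single payload as described above, a minimal variant (with an illustrative system prompt) looks like this:

curl -X POST "https://api.kushrouter.com/api/v1/tokenize" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-4o-mini",
    "system": "You are a concise assistant.",
    "text": "Draft a customer onboarding email."
  }'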
Typical workflow
- Call `/api/v1/tokenize` with your intended prompt.
- Inspect `tokens` and adjust prompts or models as needed.
- Send the final request to `/api/v1/messages` or the OpenAI/Anthropic endpoints (see the sketch after this list).
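A minimal shell sketch of that loop, assuming `jq` is installed; the 8,000-token budget and the reuse of the same payload against `/api/v1/messages` are illustrative choices, not requirements of the API:

#!/usr/bin/env bash
# Estimate tokens first, then send the live request only if it fits a budget.
set -euo pipefail

BUDGET=8000   # illustrative limit; pick one appropriate for your model
PAYLOAD='{
  "model": "gpt-4o-mini",
  "messages": [
    { "role": "user", "content": "Draft a customer onboarding email." }
  ]
}'

# 1. Estimate token usage without executing the request.
TOKENS=$(curl -s -X POST "https://api.kushrouter.com/api/v1/tokenize" \
  -H "Authorization: Bearer $API_KEY" \
  -H "Content-Type: application/json" \
  -d "$PAYLOAD" | jq -r '.tokens')

echo "Estimated tokens: $TOKENS"

# 2. Send the real request only if the estimate fits the budget.
if [ "$TOKENS" -le "$BUDGET" ]; then
  curl -s -X POST "https://api.kushrouter.com/api/v1/messages" \
    -H "Authorization: Bearer $API_KEY" \
    -H "Content-Type: application/json" \
    -d "$PAYLOAD"
else
  echo "Prompt too large; trim the messages or switch models." >&2
fi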
Notes
- Supports the same validation rules as the live endpoints; invalid payloads return HTTP 400.
- Works with OpenAI- and Anthropic-style messages, respecting provider-specific parameter enforcement.
- Subject to rate limiting; excessive calls may return HTTP 429.
- This endpoint estimates tokens only and does not execute the request.