Custom tokenizer depending on provider

Token estimates seem to depend heavily on the OpenAI API.

As far as I know, each model provider (OpenRouter, Gemini, OpenAI, …) has its own method for retrieving the token counts for a request. For example, OpenRouter uses `await fetch("https://openrouter.ai/api/v1/generation?id=$GENERATION_ID", { headers })`.
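A minimal sketch of what a per-provider tokenizer hook could look like: a dispatcher that describes how to obtain token counts for each provider. Only the OpenRouter endpoint comes from this post; the provider names and the shape of the dispatcher are assumptions for illustration.

```typescript
// Hypothetical sketch: describe, per provider, how token counts are obtained.
// Only the OpenRouter generation endpoint is taken from the post above;
// everything else is an illustrative assumption.
type Provider = "openrouter" | "openai";

interface TokenCountRequest {
  url: string;
  method: string;
}

function tokenCountRequest(provider: Provider, generationId: string): TokenCountRequest {
  switch (provider) {
    case "openrouter":
      // OpenRouter exposes usage stats for a finished generation via a
      // separate lookup endpoint, queried by generation id.
      return {
        url: `https://openrouter.ai/api/v1/generation?id=${generationId}`,
        method: "GET",
      };
    case "openai":
      // OpenAI returns usage inline in the completion response itself,
      // so there is no separate lookup; callers read `response.usage`.
      throw new Error("openai: read `usage` from the completion response instead");
  }
}
```

The caller would then `fetch` the returned URL with its auth headers for providers that use a lookup endpoint, and fall back to the inline `usage` field for those that do not.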

Status: Closed
Board: 💡 Feature Request
Date: About 2 years ago
