Support cost estimation for custom models (e.g., models added via OpenRouter)

I use GPT-4-32k through OpenRouter a lot, and the lack of cost tracking is a minor annoyance, especially given how quickly token costs add up for that model.

I know it would be hard to get the exact cost of each prompt, but an estimate would be great.

Would it be possible to display the tokens used next to each prompt/reply, along with the estimated cost?
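For reference, the kind of estimate I mean could be as simple as multiplying token counts by the model's published per-token rates. A minimal sketch (the model ID and rates below are illustrative, not actual OpenRouter pricing):

```python
# Hypothetical pricing table; rates are illustrative placeholders,
# not authoritative OpenRouter prices.
PRICING_PER_1K_TOKENS = {
    # model: (prompt rate, completion rate) in USD per 1,000 tokens
    "openai/gpt-4-32k": (0.06, 0.12),
}

def estimate_cost(model: str, prompt_tokens: int, completion_tokens: int) -> float:
    """Return an estimated USD cost for one prompt/reply pair."""
    prompt_rate, completion_rate = PRICING_PER_1K_TOKENS[model]
    return (prompt_tokens / 1000) * prompt_rate \
         + (completion_tokens / 1000) * completion_rate

# Example: a 2,000-token prompt with an 800-token reply
print(f"${estimate_cost('openai/gpt-4-32k', 2000, 800):.4f}")
```

Even a rough figure like this next to each message would make per-conversation spend much easier to track.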

Status

Completed

Board
💡 Feature Request

Tags

Chat Management/Interactions

Date

Over 2 years ago
