Allow 128k tokens in Setapp's GPT-4 model

This issue only occurs with the Setapp models. Messages still appear to be subject to the old 8k-token limit rather than the adjusted 128k-token limit. If I send a message with an input longer than 8k tokens, I get the error: "Sorry, Custom has rejected your request. Here is the error message from Custom: Sorry, this request is too long. I’m better at processing shorter texts. Can you type your request in fewer words?" With my own API key this isn't an issue. Switching to the Setapp GPT-3.5 16k model allows the message to send, as does the Setapp GPT-3.5 4k model, so the issue is specific to Setapp GPT-4. It does not appear related to "error 429".
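The symptom is consistent with a per-model context-limit table where the Setapp GPT-4 entry was never bumped from 8k to 128k. A minimal sketch of that kind of check; the model names, limits, and function are illustrative assumptions, not the app's actual code:

```python
# Hypothetical per-model context limits (values are assumptions for illustration).
MODEL_CONTEXT_LIMITS = {
    "setapp-gpt-3.5-4k": 4_096,
    "setapp-gpt-3.5-16k": 16_384,
    "setapp-gpt-4": 128_000,  # the reported bug behaves as if this were still 8_192
}

def check_request(model: str, prompt_tokens: int) -> str:
    """Reject a request client-side if it exceeds the model's context limit."""
    limit = MODEL_CONTEXT_LIMITS.get(model, 8_192)  # stale fallback would reproduce the bug
    if prompt_tokens > limit:
        return "rejected: request too long"
    return "accepted"
```

Under this sketch, a ~10k-token prompt would pass on Setapp GPT-4 and GPT-3.5 16k but fail on GPT-3.5 4k; the report describes it failing on GPT-4 as well, which is what suggests a stale limit.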


Status: Closed
Board: 💡 Feature Request
Date: About 2 years ago
