Add user prompt/instructions for a finalizing reply

so that we can tell the finalizing LLM to do a deep synthesis of the parallel replies.

I constantly find that finalizing replies are too light; they basically summarize the parallel replies in a cursory way. A symptom of this is that, even when I use a reasoning model for finalizing, it applies very light reasoning or none at all (much lighter than in a normal reply).

Because I do math-heavy science, I constantly want to tell the finalizing LLM something like: "Check each parallel reply with independent and critical eyes, evaluate each thoroughly, point out their errors, and distill the core insight."

Status: Open
Board: 💡 Feature Request
Date: 5 months ago
