so that we can tell the finalizing LLM to produce a deep synthesis of the parallel replies.
I constantly find that the finalizing replies are too light; they basically summarize the parallel replies in a cursory way. A symptom of this is that, even when I use a reasoning model for finalizing, it applies very light reasoning or none at all (far less than in a normal reply).
Because I do math-heavy science, I constantly want to tell the finalizing LLM something like: "Check each parallel reply with independent and critical eyes, evaluate each thoroughly, point out their errors, and distill the core insight."
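To make the request concrete, here is a rough sketch of the kind of hook I have in mind, written against a generic OpenAI-style chat client. Everything here is hypothetical and not the app's actual API: the `finalize` function, the `FINALIZER_INSTRUCTIONS` constant (which would be the user-configurable setting), and the model id are all placeholders.

```python
# A minimal sketch, assuming a generic OpenAI-style client.
# All names below are hypothetical, not the app's real API.
from openai import OpenAI

client = OpenAI()

# The user-configurable finalizing instruction this request is asking for.
FINALIZER_INSTRUCTIONS = (
    "Check each parallel reply with independent and critical eyes, "
    "evaluate each thoroughly, point out their errors, and distill "
    "the core insight."
)

def finalize(question: str, parallel_replies: list[str]) -> str:
    """Ask the finalizing model for a deep synthesis of the parallel replies."""
    replies_block = "\n\n".join(
        f"--- Reply {i + 1} ---\n{reply}"
        for i, reply in enumerate(parallel_replies)
    )
    response = client.chat.completions.create(
        model="o3",  # a reasoning model; placeholder id
        messages=[
            {"role": "system", "content": FINALIZER_INSTRUCTIONS},
            {
                "role": "user",
                "content": f"Question:\n{question}\n\nParallel replies:\n{replies_block}",
            },
        ],
    )
    return response.choices[0].message.content
```

The point is just that the system message for the finalizing call should be user-editable, rather than a fixed built-in summary prompt.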