I love using TypingMind — it's my go-to app when I need to use an LLM. It would be great if you could implement a "/beam"-style command that collects responses from multiple models and then aggregates them into one final response. That's probably hard to do, but it would be useful. Also, please consider auto-setting the max tokens to the model's maximum. Maybe a checkbox feature?
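For anyone curious, the flow I'm imagining is roughly this. A rough Python sketch — `call_model`, the model names, and the merge prompt are all placeholders I made up, not TypingMind's actual API:

```python
# Hypothetical "/beam" flow: fan one prompt out to several models,
# then ask one aggregator model to merge the drafts into a final answer.

def call_model(model: str, prompt: str) -> str:
    # Placeholder: a real client would call the provider's API here.
    return f"[{model}] draft answer to: {prompt}"

def beam(prompt: str, models: list[str], aggregator: str) -> str:
    # Step 1: collect one candidate answer per model.
    drafts = [call_model(m, prompt) for m in models]
    # Step 2: have the aggregator model merge the candidates.
    merge_prompt = (
        "Merge the following candidate answers into one final response:\n\n"
        + "\n\n".join(drafts)
    )
    return call_model(aggregator, merge_prompt)

final = beam("Explain beam aggregation", ["model-a", "model-b"], "model-c")
print(final)
```

The nice part of this shape is that the aggregation step is just another model call, so it could reuse the existing per-model settings UI.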
Closed · Feature Request · UX/UI Improvement · Over 1 year ago