Currently, DeepSeek-R1's reasoning process is handled differently depending on the API provider:
Azure Foundry and Fireworks: model responses include the reasoning inline, wrapped in `<think>` tags, in the main response body
Need to parse these tags out and separate their contents into a dedicated 'Thinking…' block
OpenRouter: `<think>` tags are not included in responses
Need an alternative method to capture and display reasoning tokens in the 'Thinking…' block
Implement proper parsing and display of DeepSeek-R1's reasoning process in a dedicated 'Thinking…' block, with provider-specific handling:
Parse `<think>` tags from Azure Foundry and Fireworks API responses
Develop an alternative method to capture reasoning from OpenRouter responses
This would create a consistent user experience for viewing model reasoning across all providers, similar to how DeepSeek's reasoning is displayed.
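For the providers that return reasoning inline, the tag handling could look something like the sketch below. It is a minimal illustration, not TypingMind's actual implementation; the function name `split_reasoning` is hypothetical, and it assumes the full (non-streamed) response text with a single `<think>...</think>` span.

```python
import re

# DeepSeek-R1 wraps its chain of thought in <think>...</think>;
# DOTALL lets the reasoning span multiple lines.
THINK_RE = re.compile(r"<think>(.*?)</think>", re.DOTALL)

def split_reasoning(response_text):
    """Split a response into (thinking, answer).

    Assumes the provider (e.g. Azure Foundry or Fireworks) returns the
    reasoning inline in the main response body. Hypothetical helper,
    shown for illustration only.
    """
    match = THINK_RE.search(response_text)
    if not match:
        # No tags found: everything is the final answer.
        return "", response_text.strip()
    thinking = match.group(1).strip()
    # Remove the tagged span; what remains is the final answer.
    answer = THINK_RE.sub("", response_text, count=1).strip()
    return thinking, answer
```

The 'Thinking…' block would render the first element of the tuple, and the normal message bubble the second. A streaming implementation would additionally need to buffer partial tags that arrive split across chunks.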
Suggested Implementation Priority:
Azure Foundry: Highest priority due to free API access, which would encourage more users to try DeepSeek-R1 through TypingMind
Fireworks API: Second priority due to superior token throughput, providing better user experience
OpenRouter: Lower priority due to slower token generation speed and more complex implementation requirements
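For OpenRouter, where no `<think>` tags appear in the content, one plausible alternative is to read reasoning from a separate field on the streamed deltas. The delta shape below (a `reasoning` key alongside `content`) is an assumption for illustration, and `accumulate_openrouter` is a hypothetical helper:

```python
def accumulate_openrouter(deltas):
    """Collect reasoning and answer text from streamed response deltas.

    Assumes each delta is a dict that may carry a separate "reasoning"
    field alongside the usual "content" field (an assumed shape, since
    OpenRouter does not embed <think> tags in the content stream).
    """
    thinking_parts, answer_parts = [], []
    for delta in deltas:
        if delta.get("reasoning"):
            thinking_parts.append(delta["reasoning"])
        if delta.get("content"):
            answer_parts.append(delta["content"])
    # The UI would stream thinking_parts into the 'Thinking…' block
    # and answer_parts into the normal message bubble.
    return "".join(thinking_parts), "".join(answer_parts)
```

Because the reasoning arrives in its own field rather than inline, this path needs no tag parsing, but it does require provider-specific plumbing in the streaming handler, which is part of why it is ranked as more complex above.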

Completed
Feature Request
AI Models
About 1 year ago