This can be used to add live information to the AI or to implement Retrieval-Augmented Generation (RAG).
To add the right live information or gather the right RAG injection content, more context about the current conversation would be useful.
For example, the lastUserMessage might be just "look it up", with no way to tell what "it" refers to. Providing a larger history (with both user and system messages) would solve this.
Use case 1: better context for RAG
Use case 2: TM does not support plugins for "local" models such as Ollama-served llama3. However, with this larger history instead of only the last user message, plugins (such as Google search) could be run in the backend and their results injected as RAG. This would effectively enable tooling for local models.
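To make the request concrete, here is a minimal sketch of the difference. The payload shapes, field names (`lastUserMessage`, `history`), and the `build_retrieval_query` helper are all hypothetical illustrations, not TM's actual API:

```python
# Hypothetical payload shapes. Today the backend only sees lastUserMessage;
# the request is to also receive a larger conversation history.
current_payload = {"lastUserMessage": "look it up"}

proposed_payload = {
    "lastUserMessage": "look it up",
    "history": [
        {"role": "user", "content": "What is the latest llama3 release?"},
        {"role": "assistant", "content": "As of my training data, ..."},
        {"role": "user", "content": "look it up"},
    ],
}

def build_retrieval_query(payload, max_turns=4):
    """Build a search/RAG query from recent turns instead of only the last message."""
    history = payload.get("history")
    if not history:
        # Fallback: today's behavior, which leaves "it" unresolved.
        return payload["lastUserMessage"]
    recent = history[-max_turns:]
    return " ".join(m["content"] for m in recent if m["role"] == "user")

print(build_retrieval_query(current_payload))   # ambiguous: "look it up"
print(build_retrieval_query(proposed_payload))  # includes the earlier question
```

With the history available, even a naive concatenation of recent user turns gives a backend plugin (e.g. a Google search step) enough context to resolve "it" and fetch relevant content to inject as RAG.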

Open
Feature Request
Almost 2 years ago