Help us make TypingMind better!

Tell us how we could make TypingMind more useful to you by upvoting an existing post or creating a new post below. Thank you!

Memory/Context in between chats

Hey! I absolutely love the product and have been using it for a few months. A feature I would really, really love, one that OpenAI, Gemini, and Claude have on their websites, is saving memory automatically between chats, or simply giving every chat in an AI agent some reference to what's going on in the other chats, so that every time I create a new chat I don't have to provide the context again or put it in system instructions. The MCP way to do this feels extremely tedious, since it requires setting up your own server. I would suggest adding an option to simply summarize what's going on in a chat and give every new chat access to that summary. How to implement it is obviously up to you guys; one approach would be, every time a new chat is created, to summarize two or three of the recent chats (say, the last 10 messages in each) and plug that in as part of the prompt. Thank you so much, this would be so appreciated!
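For what it's worth, the suggested behavior (pull the tail of a few recent chats into each new chat's prompt) can be sketched in a few lines. This is only an illustration; the data shapes and the `build_context_preamble` / `start_new_chat` helpers are hypothetical, not TypingMind's actual internals:

```python
# Hypothetical sketch: when a new chat starts, take the last few messages
# from the most recent chats and prepend them as a context block.
# The chat dict shape here is illustrative, not TypingMind's real schema.

def build_context_preamble(recent_chats, max_chats=3, max_messages=10):
    """Condense the tail of the most recent chats into one context block."""
    lines = []
    for chat in recent_chats[:max_chats]:
        tail = chat["messages"][-max_messages:]
        lines.append(f"## From chat: {chat['title']}")
        for msg in tail:
            lines.append(f"{msg['role']}: {msg['content']}")
    return "\n".join(lines)

def start_new_chat(recent_chats, system_instructions):
    """Seed a new chat with cross-chat context plus the usual instructions."""
    preamble = build_context_preamble(recent_chats)
    return [
        {"role": "system", "content": system_instructions},
        {"role": "system", "content": "Context from recent chats:\n" + preamble},
    ]
```

In practice each chat's tail would likely be run through a cheap summarization call rather than pasted verbatim, to keep the injected preamble short.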

💡

Feature Request

5 days ago

Store and Auto-fill OpenRouter API Key for Future Imports

When importing models from OpenRouter, I need to re-enter my OpenRouter API key every time I want to add a new model. However, OpenRouter only shows the API key once when it is generated, and if I haven't saved it somewhere, I can't recover it to use in the future. This makes the process difficult if I forget to store the key, since TypingMind does not remember it. I would like TypingMind to securely save my OpenRouter API key after I enter it for the first time, and automatically use (or pre-fill) the same key whenever I import more models from OpenRouter. This would prevent me from losing access to my models if I lose the OpenRouter key, and make the import process much smoother. Thanks for considering this improvement!

💡

Feature Request

2 months ago

2

Store variables for reference in Prompts

Currently we can store prompts as templates, and these can reference variables in them. I’m looking for the opposite: store variables and reference them when I type out my prompts.

The workflow I’m looking for is:
- save a named text block once, e.g. rules, style, brand_voice, disclaimer
- reference it while typing any prompt
- have TypingMind expand it inline before sending

Example:
```
I need you to research frogs.
Follow these rules: {{rules}}
Using that research, create a Word document with this style guide: {{style}}
Using that research, create a PowerPoint with this style guide: {{style}}
```

Why this would help:
- avoids repeating large blocks of text
- keeps prompts readable
- lets one shared block be updated once and reused everywhere
- reduces the need to maintain many near-duplicate prompt templates

Current workarounds like full templates, agents, or system instructions are helpful, but they don’t solve the “define once, reference anywhere inline” workflow.

Possible UX:
- a Snippet Library / Global Variables section
- invoke with {{name}}, /snippet, or a keyboard shortcut
- optional preview before send
- warning if a referenced snippet doesn’t exist

This would complement Prompt Templates rather than replace them.
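The expansion step described above is mechanically simple. As a rough illustration (the names and the dict-backed library are hypothetical, not a proposed API), the snippet store could be a mapping and {{name}} references expanded with a regex before sending, collecting warnings for unknown names:

```python
import re

# Hypothetical snippet expansion: replace {{name}} with the stored block,
# and collect the names of any snippets that don't exist in the library.

def expand_snippets(prompt, library):
    missing = []

    def substitute(match):
        name = match.group(1)
        if name in library:
            return library[name]
        missing.append(name)
        return match.group(0)  # leave unknown references untouched

    expanded = re.sub(r"\{\{(\w+)\}\}", substitute, prompt)
    return expanded, missing
```

The returned `missing` list is what a "warning if a referenced snippet doesn't exist" UI could surface, while an "optional preview before send" would simply show `expanded`.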

💡

Feature Request

8 days ago

adding api endpoint from AI Prime Store

Hi Gerhard,

Thanks for reaching out and for the clear explanation!

Currently, the TypingMind Proxy only supports a limited number of official or widely used API endpoints (e.g., OpenAI, Anthropic, OpenRouter). Unfortunately, third-party providers like aiprime.store are not yet supported through the TypingMind Proxy, which is why you see the “TypingMind Proxy does not support the endpoint https://aiprime.store/v1/messages yet.” error.

What you can do right now:
- You can attempt to connect directly to the aiprime.store API from TypingMind without enabling the TypingMind Proxy. (Sometimes this works, but due to browser CORS restrictions it is often only possible with self-hosted/PWA or desktop app versions.)
- If CORS or security issues block the direct connection, then unfortunately there is no workaround within the current TypingMind Proxy system, as it does not whitelist custom endpoints like aiprime.store.

Feature request: We understand this could be a useful addition for users of alternate Claude API providers! I recommend submitting a feature request here so our team can consider adding support for additional proxy endpoints in future updates. I'll also forward this feedback to our dev team.

If you need more direct help, feel free to reply here or contact support@typingmind.com. Let me know if you have any other TypingMind questions!

💡

Feature Request

9 days ago

Add GROUP-LEVEL Admin Permissions

I have a strategic feature request that would significantly expand the use case for TypingMind Teams: Group-Level Admin Permissions.

My vision is to allow leaders on our team instance to act as 'Sub-Admins' for their specific teams. They need the ability to onboard and manage their own users only within their assigned group, without seeing global settings or other leaders' data. I see that much of the groundwork is already laid out in the Roles & Permissions section; the key missing link is making these RBAC (Role-Based Access Control) settings dependent on specific Groups.

To start, this could even be a limited feature set, for example providing 'View Only' access at the group level (analytics, chat logs, etc.), before eventually expanding to full group-level feature management.

Is this on Tony's roadmap? Implementing this kind of multi-tenant RBAC would turn TypingMind into a massive engine for coaching, consulting, education, and training organizations. We could scale seat sales exponentially by bringing entire coaching cohorts onto the platform while managing the top-level support ourselves. It would be a transformative update for those of us building coaching, educational, and training ecosystems.

Best, Stef
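To make the ask concrete, the group-scoped permission check at the heart of this request might look like the sketch below. The role names and data shapes are illustrative only, not TypingMind's actual permission model:

```python
from dataclasses import dataclass

# Illustrative multi-tenant RBAC check: a sub-admin may only manage
# users inside their own group; a global admin may manage anyone.

@dataclass
class User:
    name: str
    role: str    # "admin", "sub_admin", or "member" (hypothetical roles)
    group: str

def can_manage(actor: User, target: User) -> bool:
    """Return True if `actor` is allowed to manage `target`."""
    if actor.role == "admin":
        return True  # global admins see and manage everything
    if actor.role == "sub_admin":
        return actor.group == target.group  # scoped to their own group
    return False
```

A 'View Only' variant would replace the boolean with per-capability flags (e.g. view analytics vs. edit users), still keyed on the actor's group.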

💡

Feature Request

23 days ago

Support JSON arrays in custom model body parameters for OpenAI Responses API providers

When using a custom model with API Type = "OpenAI Responses API (New)" against xAI's /v1/responses endpoint, TypingMind currently appears to serialize custom body parameters in a way that prevents arrays from being passed correctly.

Use case:
- xAI model: grok-4.20-multi-agent-beta-0309
- Endpoint: https://api.x.ai/v1/responses
- This model supports built-in tools such as: [{"type":"web_search"}, {"type":"x_search"}]
- The request must send tools as a real JSON array.

Problem: In the custom model UI, bodyRows do not seem to allow passing tools as an actual JSON array. When trying to pass [{"type":"web_search"},{"type":"x_search"}], TypingMind sends it as a string instead of a sequence/array.

Observed server error: Failed to deserialize the JSON body into the target type: tools: invalid type: string "[{"type":"web_search"},{"type":"x_search"}]", expected a sequence

Impact:
- The custom model itself works
- The Responses API works
- But provider-native built-in tools cannot be used
- This blocks full compatibility with xAI multi-agent / responses-based models

Requested improvement: Please allow true JSON arrays in custom model body parameters for Responses API models, or provide an advanced raw JSON body editor for custom models. This would enable compatibility with providers/models that require arrays in request bodies, such as xAI Responses API tools.

This is not just a new feature request but also a compatibility issue: TypingMind can already call the model successfully, but cannot pass provider-native tool arrays correctly through the custom model body UI.
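The serialization difference is easy to demonstrate in plain Python (no TypingMind internals involved). The first body below mirrors what the UI appears to produce today (tools as a string containing JSON) and triggers the "invalid type: string ... expected a sequence" class of error; the second is what the endpoint expects. The `coerce_json_value` helper is one hypothetical fix: try to parse each bodyRow value as JSON before building the request:

```python
import json

# What appears to be sent today: the tools value is a *string* of JSON,
# so the server deserializes a string where it expects a sequence.
broken_body = {
    "model": "grok-4.20-multi-agent-beta-0309",
    "tools": '[{"type":"web_search"},{"type":"x_search"}]',
}

# What the /v1/responses endpoint requires: tools as a real JSON array.
correct_body = {
    "model": "grok-4.20-multi-agent-beta-0309",
    "tools": [{"type": "web_search"}, {"type": "x_search"}],
}

def coerce_json_value(raw):
    """Hypothetical fix: parse a bodyRow value as JSON if possible,
    falling back to the raw string for plain-text values."""
    try:
        return json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return raw
```

After serialization, `json.dumps(broken_body)` wraps the tools array in quotes with escaped inner quotes, while `json.dumps(correct_body)` emits a bare array, which is the shape the deserializer demands.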

💡

Feature Request

about 1 month ago

Enable Google Drive KB integration for individual license users

Hi team, I’m an individual user with a solo start-up business. I got an individual license for TypingMind and was delighted to integrate it with my Google Drive for some commonly used resources. I was disappointed and frustrated today to be advised that "Google Drive integration was offered for a limited time as a beta feature for some personal (License) accounts. Recently, this feature has been restricted to TypingMind Team (business) accounts only, due to integration changes and backend support decisions." I can’t afford or justify the Team account. Can you enable an option for personal License accounts to have access to this integration, please?

💡

Feature Request

about 1 month ago

TypingMind: Agentic Mode & Local Knowledge Base Integration

FEATURE REQUEST: TypingMind Agentic Mode & Local Knowledge Base Integration

A strategic case for why TypingMind must evolve from a chat frontend into a local-aware, agentic platform — or risk being made obsolete by tools that already do.

1. Executive Summary

The AI productivity landscape shifted fundamentally in 2025. The question is no longer which chat interface presents model responses most elegantly. It is which tool can act on your behalf — reading files, executing tasks, building knowledge — without requiring the user to copy-paste context manually between apps.

Tools like Claude Code, Google Antigravity, and Obsidian with MCP integrations now operate directly on the local machine. They read, write, and reason over your actual files. TypingMind, despite being a polished and reliable chat frontend, currently sits on the wrong side of this divide: it is a browser-based chat wrapper that cannot touch the local filesystem, cannot take action autonomously, and cannot build a persistent, growing knowledge base that improves with every interaction.

The Core Ask: Add a native Agentic Mode to TypingMind that (1) supports a persistent local knowledge base the AI can build and query autonomously, (2) gives users the ability to delegate multi-step tasks — not just single prompts — and (3) integrates with the local filesystem and MCP ecosystem without requiring a separate proxy setup.

2. The Problem — What TypingMind Cannot Currently Do

2.1 It Cannot Touch Your Computer

At its architectural core, TypingMind is a static web application. The API key, chat history, and knowledge base live in the browser's local storage. This means:

• No access to local files, folders, or drives
• No ability to read, write, or search documents on the machine
• No way to execute actions in other applications
• No persistent memory that survives beyond what the user manually uploads

By contrast, Claude Code runs directly in the terminal and has full filesystem access.
Google Antigravity operates as a desktop IDE-class agent that can spawn parallel sub-agents, browse the web, and execute shell commands. Obsidian, when connected to Claude via MCP, becomes a live vault that the AI reads and writes in real time.

2.2 Its Knowledge Base Is Passive, Not Agentic

TypingMind's Knowledge Base (KB) is a manual upload system backed by RAG. The user uploads a document, TypingMind chunks and embeds it, and the model retrieves relevant chunks at query time. This is useful — but it is frozen in time. The knowledge base does not:

• Grow based on your conversations
• Automatically index files from folders on your machine
• Allow the AI to create new entries when it discovers something noteworthy
• Connect to a living file system the way Obsidian or Claude Code does

In an agentic workflow, the AI is not a passive question-answering tool. It is an active participant that can read a document, extract the key points, write a summary back to the KB, and then use that summary in future conversations — without the user doing any of that work manually.

2.3 It Cannot Delegate — It Can Only Respond

TypingMind operates on a synchronous, one-prompt-one-response model. Every task requires the user to frame it, submit it, review the result, re-prompt, and repeat. Tools like Claude Code and Antigravity operate on a task-oriented model where you describe an outcome and the agent plans the steps, executes them in sequence, and reports back — often without further human input.

The gap in plain terms. TypingMind: "Here is a prompt. Give me a response." Agentic tools: "Here is a goal. Go and complete it, tell me when you're done."

3. The Competitive Landscape — What Has Already Moved On

3.1 Claude Code

Anthropic's own CLI-based agent has full local machine access, reads and modifies entire codebases, executes shell commands, manages files, and can chain multi-step operations autonomously. It works inside the terminal — not a browser.
The implications extend beyond coding: users are increasingly using Claude Code for writing, research workflows, and any task that involves reading existing files and producing output.

3.2 Google Antigravity

Launched in November 2025, Antigravity is a desktop-class agentic development platform. Its 'Manager Surface' lets users dispatch multiple independent agents simultaneously, each working on a different task across the editor, terminal, and browser. Agents produce Artifacts — screenshots, implementation plans, walkthroughs — as verifiable deliverables. Critically, Antigravity supports a learning layer that allows agents to save useful context and code snippets to a knowledge base to improve future tasks. This is exactly the capability TypingMind's KB lacks.

3.3 Obsidian + MCP + Claude

The combination of Obsidian as a local Markdown vault, the Model Context Protocol (MCP), and Claude Code has created a powerful open architecture. The vault is a live knowledge base — plain files on disk — that any MCP-compatible agent can read, search, and write to. Claude Code connected to an Obsidian vault can cross-reference hundreds of notes, create new entries that fit the existing structure, and operate on the knowledge base as an active workspace. This requires zero upload steps. The files are simply there, on disk, as they always were.

3.4 TypingMind with Filesystem MCP (Partial Solution)

TypingMind does support MCP servers — but only in the Personal edition, not Teams, and only via a separately running local MCP bridge process. This is a technically demanding setup that most users will not configure correctly. More importantly, it is bolted on as an afterthought rather than a first-class, integrated capability. The result is fragile, undiscoverable, and not available in the product tier many professional users are on.

4. Feature Requests — What Needs to Be Built

The following requests are listed in priority order.
Together they would transform TypingMind from a premium chat frontend into a platform competitive with the agentic tools that are currently eclipsing it.

| Feature Request | What It Means | Comparable In |
| --- | --- | --- |
| Local KB Sync | Automatically index files from a specified folder on disk into the Knowledge Base — no manual upload | Obsidian Smart Connections, Claude Code + Vault |
| Agentic Task Mode | Submit a goal, not just a prompt; the AI plans and executes multi-step sequences and reports back | Claude Code, Google Antigravity Manager Surface |
| AI-Written KB Entries | Allow the AI to create and update KB entries during a conversation — persistent learning | Google Antigravity knowledge save layer |
| Native MCP (No Bridge) | Built-in MCP support without a separate proxy process — works out of the box on macOS/Windows | Cursor IDE, Antigravity native integrations |
| MCP in Teams/All Tiers | Filesystem and memory MCP servers available in Teams edition, not just Personal | n/a — currently blocked by product tier |
| Conversation Memory | Persistent cross-session memory the AI retrieves at the start of each conversation automatically | Claude Projects, ChatGPT Memory, Mem.ai |
| Agent Skills API Support | Anthropic's Agent Skills / Files API with proper container and beta header configuration | Claude Code tool use, Antigravity tool layer |
| Background Task Execution | Long-running tasks that execute asynchronously — user is notified when complete, not forced to wait | Google Antigravity async agents, n8n workflows |

5. Priority #1 Deep Dive — Local Knowledge Base Sync

5.1 What It Should Do

The user specifies one or more local folders. TypingMind watches those folders and automatically indexes their contents into the Knowledge Base using the same RAG pipeline already in place. Changes to files are reflected incrementally — no manual re-upload required. The AI should also be permitted to write back to a designated output folder: summaries, research notes, extracted key points.
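The watch-and-index loop described above can be sketched in a few lines. This is a minimal polling illustration only: a real companion process would use OS file-watching APIs as section 5.3 notes, and `index_file` stands in for a hypothetical call into the existing KB pipeline:

```python
import os

# Minimal mtime-based change detection for a watched folder.
# A production watcher would use OS facilities (FSEvents, inotify,
# ReadDirectoryChangesW) instead of polling like this.

def scan_folder(folder, seen):
    """Return files that are new or modified since the last scan.

    `seen` maps path -> last known mtime and is updated in place.
    """
    changed = []
    for root, _dirs, files in os.walk(folder):
        for name in files:
            path = os.path.join(root, name)
            mtime = os.path.getmtime(path)
            if seen.get(path) != mtime:
                seen[path] = mtime
                changed.append(path)
    return changed

def sync_once(folder, seen, index_file):
    """One watch cycle: push every new or changed file to the KB indexer."""
    for path in scan_folder(folder, seen):
        index_file(path)  # hypothetical: chunk + embed via the existing KB pipeline
```

Running `sync_once` on a schedule gives the incremental behavior described: only files whose modification time changed since the last pass are re-indexed.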
Over time, the knowledge base becomes richer with every interaction — not because the user is doing extra work, but because the AI is.

5.2 Why This Is Different From the Current KB

• Current KB: static, manual, upload-triggered
• Proposed KB: live, automatic, bidirectional
• Current KB: isolated from the actual files you work with
• Proposed KB: a mirror of your real working documents
• Current KB: AI can only read from it, not write to it
• Proposed KB: AI can read and create entries, building institutional memory

5.3 Technical Path

This is achievable without a fundamental re-architecture. The required additions are:

• A desktop companion process (lightweight, native macOS/Windows) that watches folders using the OS file-watching API and pushes changes to the TypingMind KB via the existing API
• A write-back API endpoint that allows agents to create new KB entries
• UI for specifying watched folders, reviewing AI-written entries, and setting KB write permissions

The filesystem MCP server already handles the underlying file-watching capability. The work is integrating it into the TypingMind product layer as a first-class feature, not a power-user configuration.

6. The Strategic Risk — What Happens If This Is Not Built

TypingMind has succeeded by being a thoughtfully designed, API-key-friendly chat interface at a one-time price. That proposition was compelling when the alternative was ChatGPT's subscription wall. In 2026, that is no longer the comparison being made. Users who start using Claude Code, Antigravity, or Cursor do not come back to chat frontends for serious work. The workflow shift is irreversible: once you experience an AI that operates on your actual files and executes tasks autonomously, a browser-based prompt box feels like a regression. TypingMind risks becoming the tool people keep open for quick casual prompts while doing all meaningful work elsewhere.
The Threat Is Not Another Chat Frontend

The threat is a generation of tools that do not compete on UI polish or model selection. They compete on what the AI can do to and with your computer. TypingMind currently has no answer to this question.

The good news is that TypingMind already has most of the underlying infrastructure: a KB pipeline, MCP support, an agent system, and a plugin architecture. The gap is integration — pulling these threads together into a coherent agentic experience that works without advanced configuration. That is a product design problem, not a build-from-scratch engineering problem.

7. Summary of Requests

Listed by business impact:

| Request | Impact | Without It |
| --- | --- | --- |
| Local KB Sync (folder watch + write-back) | Critical | Users migrate to Obsidian + Claude MCP for knowledge work |
| Agentic Task Mode (goal-based, multi-step) | Critical | Users migrate to Claude Code or Antigravity for complex tasks |
| Native MCP — no bridge process required | High | Setup friction keeps most users on manual workflows |
| MCP available in Teams edition | High | Enterprise/team users locked out of the only existing local access path |
| AI-written KB entries (persistent learning) | High | KB stays static; does not compound value over time |
| Conversation memory across sessions | Medium | Users must re-explain context on every new chat |
| Agent Skills API / Anthropic beta headers | Medium | Power users hit token and capability ceilings specific to TypingMind |
| Background / async task execution | Medium | Long tasks block the interface; agentic delegation impossible |

8. Closing Statement

TypingMind has built genuine goodwill with power users who appreciate its clean design, one-time pricing, and flexibility across model providers. That foundation is worth protecting — but it will not protect itself. The shift to agentic AI is not a feature trend. It is a redefinition of what AI tools are for. A chat interface that cannot act on the user's computer is, by definition, only half a tool in 2026.
The requests in this document are not about adding features for the sake of a changelog. They are about whether TypingMind remains relevant to the users who currently like it most — the technically literate, heavy users who will be the first to leave when a better option fully matures.

💡

Feature Request

about 1 month ago