Help us make TypingMind better!

Tell us how we could make TypingMind more useful to you by upvoting an existing post or creating a new post below. Thank you!

Support Anthropic Claude "Agent Skills" Integration via Custom Model Parameters

Description: Please add support for Anthropic's new Agent Skills (Skills API) in TypingMind's custom model configuration. This would allow users to specify the container parameter, custom Skill IDs, and the required beta headers (such as code-execution-2025-08-25 and skills-2025-10-02) when configuring Claude models, enabling full use of Anthropic's Skills within the TypingMind chat interface.

Key Features Needed:
- Ability to set the container (with skills) parameter in custom model advanced settings
- Support for the required beta headers for code execution and Skills
- Option to select and manage custom Skills (skill IDs) for use in Claude-powered chats
- UI support for Skill versioning and multi-Skill workflows
- (Optional) Files API integration for downloading/uploading generated files

Benefits:
- Unlocks advanced automation: use pre-built and custom Anthropic Skills for Excel, PowerPoint, PDF, scripting, and more, directly from the TypingMind interface.
- Empowers agents and workflow building: enable automations, code execution, custom document generation, and business process integration with Claude's secure code sandbox.
- No need for external scripting: use Anthropic's Python client features from within TypingMind, with no extra coding or external scripts required.
- Keeps TypingMind competitive: stay at the leading edge of LLM platform capabilities as Anthropic rolls out more enterprise copilot and tool-automation use cases.
- Scales across teams: organizations can centrally manage and evolve custom Skill libraries for all Claude-powered agents organization-wide.

Thanks for considering this high-impact upgrade!
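To make the request concrete, here is a minimal sketch of the raw HTTP request TypingMind would need to emit for a Skills-enabled Claude call. The beta header names come from this post; the exact shape of the `container`/`skills` field is an assumption for illustration, not a confirmed Anthropic schema, and the model name and skill ID are placeholders.

```python
import json

def build_skills_request(model, user_text, skill_ids):
    """Sketch the headers and JSON body for a Skills-enabled Messages call."""
    headers = {
        "content-type": "application/json",
        "x-api-key": "<ANTHROPIC_API_KEY>",  # placeholder, not a real key
        "anthropic-version": "2023-06-01",
        # Beta flags named in this feature request:
        "anthropic-beta": "code-execution-2025-08-25,skills-2025-10-02",
    }
    body = {
        "model": model,
        "max_tokens": 1024,
        "messages": [{"role": "user", "content": user_text}],
        # Hypothetical container/skills shape; verify against Anthropic docs:
        "container": {"skills": [{"skill_id": s} for s in skill_ids]},
    }
    return headers, json.dumps(body)

headers, body = build_skills_request(
    "claude-sonnet-4-5", "Create a quarterly report in Excel", ["xlsx"]
)
```

In TypingMind terms, the ask is simply that the custom model advanced settings expose the `anthropic-beta` header line and the `container` body field shown above.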

💡

Feature Request

About 2 months ago

Implement an "LLM Council" in multiple-LLM chat

Based on karpathy's idea: https://github.com/karpathy/llm-council

The idea is that instead of asking a question to a single LLM provider (e.g. OpenAI GPT 5.1, Google Gemini 3.0 Pro, Anthropic Claude Sonnet 4.5, xAI Grok 4, etc.), you can group them into your "LLM Council". A query is sent to multiple LLMs, which are then asked to review and rank each other's work, and finally a Chairman LLM produces the final response. In a bit more detail, here is what would happen when you submit a query:

- Stage 1: First opinions. The user query is given to all LLMs individually, and the responses are collected. The individual responses are shown in a "tab view" so the user can inspect them one by one.
- Stage 2: Review. Each LLM is given the responses of the other LLMs. Under the hood, the LLM identities are anonymized so that no LLM can play favorites when judging the outputs. Each LLM is asked to rank the responses on accuracy and insight.
- Stage 3: Final response. The designated Chairman of the LLM Council takes all of the models' responses and compiles them into a single final answer that is presented to the user.
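The three stages above can be sketched as a small pipeline. Stand-in lambdas replace real API calls, and all function names here are illustrative, not an existing TypingMind or llm-council API.

```python
import random

def run_council(query, models, chairman):
    """models: dict of name -> callable(prompt) -> str; chairman: callable."""
    # Stage 1: first opinions, collected per model for the tab view.
    opinions = {name: fn(query) for name, fn in models.items()}

    # Stage 2: anonymize the answers, then have each model rank them.
    labels = list(opinions)
    random.shuffle(labels)  # hide which model wrote which answer
    anon = {f"Response {i + 1}": opinions[name] for i, name in enumerate(labels)}
    rankings = {
        name: fn(f"Rank these answers on accuracy and insight:\n{anon}")
        for name, fn in models.items()
    }

    # Stage 3: the Chairman compiles everything into one final answer.
    final = chairman(
        f"Question: {query}\nAnswers: {anon}\nRankings: {rankings}\n"
        "Write the single best final response."
    )
    return opinions, rankings, final

# Toy usage with stub "models":
models = {
    "gpt": lambda q: f"gpt answer to: {q}",
    "claude": lambda q: f"claude answer to: {q}",
}
opinions, rankings, final = run_council("What is 2+2?", models, lambda prompt: "4")
```

In TypingMind this would map naturally onto the existing multiple-LLM chat: Stage 1 fills the tabs, Stages 2 and 3 run behind the scenes before the final message is rendered.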

💡

Feature Request

3 months ago

5

API Request: Models, Agents, Plugins & Knowledge Base

1. Model Configuration & API Key Management

We require the ability to manage model configurations and rotate API keys programmatically to meet security and compliance standards (e.g., ISO 27001). Currently, API v1.2.0 does not support model or API key management.

Requested endpoints:
- GET /v2/models – List configured models
- POST /v2/models – Add a new model configuration
- PATCH /v2/models/{id} – Update model details (e.g., rotate API keys)
- DELETE /v2/models/{id} – Remove a model configuration

Business justification: Automated API key rotation is critical for enterprise security compliance. Manual configuration creates operational risk and does not scale.

2. Agent Creation & Management

Currently, the API only provides read-only access (GET /v1/ai-characters), which limits automation.

Requested endpoints:
- POST /v2/agents – Create a new agent
- PATCH /v2/agents/{id} – Update agent configuration (system prompt, model, access control, etc.)
- DELETE /v2/agents/{id} – Remove an agent

Business justification: This capability is required to support Agent-as-a-Service workflows, allowing us to deploy and manage agents automatically through CI/CD pipelines without manual Admin Panel interaction.

3. Plugin Installation & Configuration

We need to standardize plugin deployment across multiple TypingMind instances to ensure consistent functionality.

Requested endpoints:
- GET /v2/plugins – List installed plugins
- POST /v2/plugins – Install and configure plugins
- PATCH /v2/plugins/{id} – Update plugin settings
- DELETE /v2/plugins/{id} – Remove plugins

Business justification: The lack of programmatic plugin management prevents Infrastructure-as-Code workflows, introduces configuration drift, and increases operational overhead.

4. Knowledge Base Management

Currently, uploading and managing knowledge documents is a manual process.

Requested endpoints:
- GET /v2/knowledge – List knowledge bases and documents
- POST /v2/knowledge – Upload documents and create knowledge base collections
- DELETE /v2/knowledge/{id} – Remove documents or collections

Business justification: Automated knowledge ingestion is essential for scalable enterprise onboarding. Manual uploads do not scale for high-volume or multi-tenant deployments.
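As an example of the key-rotation flow from item 1, here is a sketch using only the Python standard library. The /v2/models endpoints do not exist yet; the base URL, bearer-token auth, and payload field names are illustrative assumptions about how such an API might look.

```python
import json
import urllib.request

def rotate_key_request(base_url, model_id, new_key, admin_token):
    """Build (but do not send) a PATCH request rotating a model's API key.

    Hypothetical endpoint shape: PATCH {base_url}/v2/models/{model_id}
    """
    return urllib.request.Request(
        url=f"{base_url}/v2/models/{model_id}",
        method="PATCH",
        headers={
            "Authorization": f"Bearer {admin_token}",  # assumed auth scheme
            "Content-Type": "application/json",
        },
        data=json.dumps({"api_key": new_key}).encode(),
    )

req = rotate_key_request(
    "https://api.typingmind.example", "m_123", "sk-new-key", "admin-token"
)
# A CI/CD job would then call urllib.request.urlopen(req) on a schedule.
```

The same pattern (build request, send from a pipeline job) covers the agent, plugin, and knowledge endpoints requested above.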

💡

Feature Request

About 1 month ago

Feature suggestion: Project-wide sharing and referencing of chat content within a folder

Description: Please enable chats within a project/folder to access the content (or marked summaries) of other chats in the same project. This would allow, for example, key insights, notes, or results from different threads to be visible and usable by all colleagues/agents in the project, without time-consuming copy & paste or manual uploading of files/markdown.

It would be desirable to have:
- The ability to mark selected chat messages (e.g., summaries or important steps) as "project knowledge."
- This marked content available in other chats in the same project as contextual information (e.g., via a search function or automatic integration into system instructions/dynamic prompts).
- Optional: a project-wide search function across all chats and/or a display of selected "shared notes" or "project summaries."

Added value: This makes it easier to work on larger projects divided into sub-threads, saves repetition, reduces errors caused by lost information, and speeds up knowledge transfer within the team.
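The mark-and-search behavior described above can be sketched as a tiny in-memory store. The class and method names are illustrative only; nothing here corresponds to an existing TypingMind API.

```python
class ProjectKnowledge:
    """Toy model: messages marked as shared become searchable project-wide."""

    def __init__(self):
        self._notes = {}  # project_id -> list of (chat_id, text)

    def mark(self, project_id, chat_id, text):
        """Mark one chat message as shared project knowledge."""
        self._notes.setdefault(project_id, []).append((chat_id, text))

    def search(self, project_id, term):
        """Return shared notes from any chat in the project matching term."""
        return [
            text
            for _, text in self._notes.get(project_id, [])
            if term.lower() in text.lower()
        ]

kb = ProjectKnowledge()
kb.mark("proj-1", "chat-A", "Decision: use PostgreSQL for storage")
kb.mark("proj-1", "chat-B", "Benchmark results are in this thread")
```

A chat in `proj-1` could then surface `kb.search("proj-1", "postgresql")` as context, either through an explicit search UI or by injecting matches into the system prompt.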

💡

Feature Request

About 1 month ago