
Using AI assistants

Enabling the AI assistants

Before you can use the AI assistants of HDevelopEVO, you need to enable them in your settings. Go to File > Preferences > Settings > AI Assistants, select Enable AI, and accept the disclaimer.

LLM set-up

To use the AI assistants, you need to provide access to at least one LLM, for example from OpenAI or DeepSeek.

Anthropic

To enable Anthropic’s AI models, create an API key in your Anthropic API account and enter it under Settings > AI Assistants > Anthropic.

Important

When using this preference, the API key is stored in clear text. To set the key securely, use the ANTHROPIC_API_KEY environment variable instead.

Configure the available models in the settings under Settings > AI Assistants > Anthropic Models. The default list includes models such as claude-3-5-sonnet-latest.
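As a sketch, the model list could be adjusted in settings.json as follows. Note that the setting identifier ai-features.anthropic.models is an assumption based on the naming pattern of the other ai-features settings in this document, and the second model name is a placeholder; verify the exact ID in the settings UI:

```json
{
  "ai-features.anthropic.models": [
    "claude-3-5-sonnet-latest",
    "claude-3-5-haiku-latest"
  ]
}
```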

Azure

All models hosted on Azure that are compatible with the OpenAI API are accessible via the provider for OpenAI-compatible models. Note that some models hosted on Azure may require different settings for the system message.

DeepSeek

Since DeepSeek provides an OpenAI-compatible API, integrating it into HDevelopEVO is straightforward. Alternatively, you can host it yourself, for example via Ollama.

This is the configuration you can use to add DeepSeek Chat on the DeepSeek Platform as a custom OpenAI model to your settings.json file:

DeepSeek configuration
{
    "ai-features.openAiCustom.customOpenAiModels": [
        {
            "model": "deepseek-chat",
            "url": "https://api.deepseek.com",
            "id": "deepseek-chat",
            "apiKey": "<yourApiKey>",
            "enableStreaming": true,
            "supportsDeveloperMessage": false,
            "developerMessageSettings": "system"
        }
    ],
    "ai-features.agentSettings": {
        "Code Completion": {
            "languageModelRequirements": [
                { "purpose": "code-completion", "identifier": "deepseek-chat" }
            ]
        },
        "Architect": {
            "languageModelRequirements": [
                { "purpose": "chat", "identifier": "deepseek-chat" }
            ]
        },
        "Orchestrator": {
            "languageModelRequirements": [
                { "purpose": "agent-selection", "identifier": "deepseek-chat" }
            ]
        },
        "Universal": {
            "languageModelRequirements": [
                { "purpose": "chat", "identifier": "deepseek-chat" }
            ]
        },
        "Command": {
            "languageModelRequirements": [
                { "purpose": "command", "identifier": "deepseek-chat" }
            ]
        },
        "Coder": {
            "languageModelRequirements": [
                { "purpose": "chat", "identifier": "deepseek-chat" }
            ]
        }
    }
}

Set "supportsDeveloperMessage": false, as DeepSeek’s API does not yet support the newer “developer” message, which replaced the “system” role in most OpenAI models.

Alternatively, you can adapt the agent settings in your AI Configuration. To get there, go to View > AI Configuration.

Google AI

To enable Google AI models, create an API key in your Google AI account and enter it under Settings > AI Assistants > Google AI.

Important

When using this preference, the API key is stored in clear text. To set the key securely, use the GOOGLE_API_KEY environment variable instead.

Configure the available models in the settings under AI Assistants > Google AI Models.

Hugging Face

Many hosting options and models on Hugging Face support an OpenAI-compatible API. In this case, we recommend using the HDevelopEVO provider for OpenAI-compatible models. For models that are not compatible with the OpenAI API, the Hugging Face provider currently only supports text generation.

To enable Hugging Face as an AI provider, create an API key in your Hugging Face account and enter it under Settings > AI Assistants > Hugging Face.

Important

When using this preference, the API key is stored in clear text. To set the key securely, use the HUGGINGFACE_API_KEY environment variable instead.

Note

Hugging Face offers both paid and free-tier options (including “serverless”), and usage limits vary. Monitor your usage carefully to avoid unexpected costs, especially when using high-demand models.

Add or remove the desired Hugging Face models from the list of available models.

LlamaFile Models

To configure a LlamaFile LLM in HDevelopEVO, add the necessary settings to your configuration:

LlamaFile configuration
{
    "ai-features.llamafile.llamafiles": [
        {
            "name": "modelname", // choose a name for your model
            "uri": "file:///home/.../YourModel.llamafile",
            "port": 30000 // choose a port to be used by llamafile
        }
    ]
}

Replace name, uri, and port with your specific LlamaFile details.

HDevelopEVO also offers convenience commands to start and stop your LlamaFiles:

To start a LlamaFile:
Use the command Start Llamafile, then select the model you want to start.
To stop a LlamaFile:
Use the Stop Llamafile command, then select the running Llamafile which you want to terminate.

Ensure that your LlamaFiles are executable. For more details on LlamaFiles, including a quickstart, see the official Mozilla LlamaFile documentation.

Mistral Models

Mistral models, including those on “La Plateforme”, can be used via the OpenAI API and support the same feature set.

Here is an example configuration:

Mistral models configuration
"ai-features.openAiCustom.customOpenAiModels": [
    {
        "model": "mistral-large-latest",
        "url": "https://api.mistral.ai/v1",
        "id": "Mistral",
        "apiKey": "YourAPIKey",
        "developerMessageSettings": "system"
    },
    {
        "model": "codestral-latest",
        "url": "https://codestral.mistral.ai/v1",
        "id": "Codestral",
        "apiKey": "YourAPIKey",
        "developerMessageSettings": "system"
    }
]

Ollama

To connect to models hosted via Ollama, enter the corresponding URL, along with the available models, in your settings.json. Some models on Ollama also support an OpenAI-compatible API.

The key fields are:

  • modelId: The unique identifier of the model.
  • requestSettings: Provider-specific options, such as token limits or stopping criteria.
  • providerId: Optionally specifies the provider the settings apply to, for example, Huggingface, Ollama, or OpenAi. If omitted, the settings apply to all providers that match the modelId.
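A minimal sketch of what such an Ollama configuration could look like in settings.json. The setting identifiers ai-features.ollama.ollamaHost and ai-features.ollama.ollamaModels are assumptions modeled on the other ai-features settings in this document, and the model names are placeholders; verify the exact IDs in the settings UI:

```json
{
  "ai-features.ollama.ollamaHost": "http://localhost:11434",
  "ai-features.ollama.ollamaModels": [
    "llama3",
    "qwen3:14b"
  ]
}
```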

OpenAI (hosted by OpenAI)

To enable the use of OpenAI, create an API key in your OpenAI account and enter it in the settings under File > Preferences > Settings > AI Assistants > OpenAI Official > OpenAI Api Key.

Important

When using this preference, the API key is stored in clear text. To set the key securely, use the OPENAI_API_KEY environment variable instead.

Creating an API key requires a paid subscription, and using these models may incur additional costs. Be sure to monitor your usage carefully to avoid unexpected charges. We have not yet optimized the AI assistants for token usage.

OpenAI compatible models (e.g., via vLLM)

As an alternative to using an official OpenAI account, HDevelopEVO also supports arbitrary models compatible with the OpenAI API, for example hosted via vLLM. This enables you to connect to self-hosted models with ease. To add a custom model, click the link in the settings section and add a configuration like the following; check the Readme for all configuration options:

OpenAI compatible models configuration
{
    "ai-features.openAiCustom.customOpenAiModels": [
        {
            "model": "your-model-name",
            "url": "your-URL",
            "id": "your-unique-id",
            "apiKey": "your-api-key",
            "developerMessageSettings": "system"
        }
    ]
}
  • id: If not provided, the model name is used as the ID.

  • apiKey: Use true to apply the global OpenAI API key.

  • developerMessageSettings: Controls how the system message is handled. user, system, and developer are used as the message role; mergeWithFollowingUserMessage prefixes the following user message with the system message, or converts the system message to a user message if the next message is not a user message; skip removes the system message entirely. Defaults to developer.
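For example, a self-hosted model that reuses the global OpenAI API key and does not accept a system role could be declared like this (a sketch; the model name and URL are placeholders):

```json
{
  "ai-features.openAiCustom.customOpenAiModels": [
    {
      "model": "your-model-name",
      "url": "http://localhost:8000/v1",
      "id": "local-model",
      "apiKey": true,
      "developerMessageSettings": "mergeWithFollowingUserMessage"
    }
  ]
}
```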

Vercel AI

The Vercel AI provider offers a unified way of communicating with LLMs through the Vercel AI SDK framework. It serves as an alternative to other providers and currently supports OpenAI and Anthropic APIs.

If you already have your OpenAI or Anthropic API keys set as environment variables (OPENAI_API_KEY or ANTHROPIC_API_KEY), no additional configuration is required for the Vercel provider.

If you configure your API keys through the settings, you need to explicitly set the API keys for the Vercel provider under File > Preferences > Settings > AI Assistants > Vercel AI.

Vercel AI: official models configuration

The Vercel provider includes the most common OpenAI and Anthropic models by default. To add new official models, configure them in your settings.json:

Vercel AI configuration
{
  "ai-features.vercelAi.officialModels": [
    {
      "id": "vercel/openai/new-gpt",
      "model": "new-gpt",
      "provider": "openai"
    }
  ]
}

Vercel AI: custom models configuration

The Vercel provider supports custom models compatible with the Vercel AI SDK. Configure custom endpoints in your settings.json:

Vercel AI custom models configuration
{
  "ai-features.vercelAi.customModels": [
    {
      "model": "custom-model-name",
      "url": "https://api.example.com/v1",
      "id": "my-custom-model",
      "apiKey": "your-api-key",
      "provider": "openai",
      "supportsStructuredOutput": true,
      "enableStreaming": true
    },
    {
      "model": "local-llama",
      "url": "http://localhost:8000",
      "id": "local-llama-model",
      "apiKey": true,
      "provider": "openai",
      "supportsStructuredOutput": false,
      "enableStreaming": false
    }
  ]
}

Restriction

Keep in mind that the Vercel AI provider is currently experimental.

Custom request settings

You can define custom request settings for specific language models to tailor how models handle requests, based on their provider.

Add the settings in your settings.json.

Custom request configuration
"ai-features.modelSettings.requestSettings": [
    {
        "scope": {
            "providerId": "ollama",
            "modelId": "qwen3:14b"
        },
        "requestSettings": { "num_ctx": 40960 },
        "clientSettings": {
            "keepToolCalls": true,
            "keepThinking": false
        }
    },
    {
        "scope": {
            "providerId": "huggingface",
            "modelId": "Qwen/Qwen2.5-Coder-32B-Instruct"
        },
        "requestSettings": { "max_new_tokens": 2048 }
    }
]
  • scope: Any combination of providerId, agentId, and modelId, describing the model(s) the settings apply to. Models are matched by specificity (agent: 100, model: 10, provider: 1 points). This way you can target all Ollama models (only providerId given) or a single one (providerId and modelId given).
  • clientSettings: Controls whether reasoning and/or tool-call messages are retained in the chat context. For example, if keepThinking is set to true, the reasoning is kept in the context for follow-up chat messages; otherwise, earlier reasoning messages are removed (potentially saving input tokens).
  • modelId: Unique identifier of the model.
  • requestSettings: Provider-specific options, such as token limits or stopping criteria.
  • providerId: Optionally specifies the provider the settings apply to, for example, Huggingface, Ollama, or OpenAi. If omitted, the settings apply to all providers that match the modelId.

In addition to global custom request settings, HDevelopEVO supports an experimental feature that allows you to define custom request settings per individual chat session. This adds flexibility by enabling adjustments on-the-fly within a single conversation.
Click the icon in the top-right corner of a chat window to access this functionality. The settings must currently be entered manually in the settings.json file.

Example

To make the language model more or less creative for a particular session, adjust the temperature parameter:

{
  "temperature": 1
}

Thinking Mode

HDevelopEVO provides support for Claude’s “Thinking Mode” when using Sonnet-3.7. By setting a custom request parameter, either globally or for a specific chat session, you can instruct the model to “think more.” This is particularly useful for difficult questions and shows its strengths when using agents like the Architect or Coder on complex coding tasks.

To enable Thinking Mode, you need to add the following custom request setting:

"thinking": { "type": "enabled", "budget_tokens": 8192 }
You can configure this setting either:
  • Globally through the model settings
  • For a specific chat session by using the chat-specific settings icon in the chat window
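Globally, the same parameter can be attached to a model via the custom request settings described above. A sketch, where the scope’s modelId (here claude-3-7-sonnet-latest) is a placeholder for the ID of your configured Sonnet-3.7 model:

```json
"ai-features.modelSettings.requestSettings": [
    {
        "scope": { "modelId": "claude-3-7-sonnet-latest" },
        "requestSettings": {
            "thinking": { "type": "enabled", "budget_tokens": 8192 }
        }
    }
]
```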

Preview restriction

Keep in mind that the UI for chat-specific settings is currently experimental.

Code completion

Code completion can be used in manual or automatic mode.

By default, automatic code completion is enabled: continuous requests are sent to the underlying LLM while you are coding, providing suggestions on the go as you type. In manual mode, suggestions are only displayed when you press Ctrl+Alt+Space, which gives you greater control over when AI suggestions appear; pending requests are canceled when you move the cursor.

Code completion can be enabled in your settings: File > Preferences > Settings > AI Assistants > Chat > Automatic Code Completion.

There are two prompt variants available for code completion. You can select them under Code Completion > Prompt Templates and adapt the prompt template to your personal preferences or to the LLM you want to use.

You can also specify Excluded File Extensions, for which the AI-powered code completion will be deactivated.

The setting Strip Backticks will remove surrounding backticks that some LLMs might produce, depending on the prompt.

The setting Max Context Lines allows you to configure the maximum number of lines used for AI code completion context. This setting can be adjusted to customize the size of the context provided to the model, which is especially useful when using smaller models with limited token capacity.
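As a sketch, these code completion options might appear in settings.json as follows. The setting identifiers below are assumptions derived from the option names above, and the excluded extensions are placeholders; verify the exact IDs in the settings UI:

```json
{
  "ai-features.codeCompletion.automaticCodeCompletion": true,
  "ai-features.codeCompletion.excludedFileExtensions": [".md", ".txt"],
  "ai-features.codeCompletion.stripBackticks": true,
  "ai-features.codeCompletion.maxContextLines": 100
}
```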

Chat

HDevelopEVO provides a global chat interface where users can interact with a chat agent.

The context of the chat includes information about the current editor state, such as the selected range or the cursor location, which helps the AI provide more relevant responses. This approach is particularly useful when you need assistance with specific code segments.

Starting a chat

You can initiate a chat session directly from the editor context. To start a session, right-click anywhere in a file, either at the cursor position or with an active selection, and choose the Ask AI option.

Alternatively, to open the chat, go to View > Chat or press Ctrl+Alt+I.

The chat panel opens on the right-hand side. Use the send button to submit a request to the designated agent, and use the attach button to add files that you have a question about to the chat.

Your chat query is sent to a chat agent, which then provides the corresponding answer. By default, the Orchestrator chat agent is set, and it chooses the best-fitting agent. You can also send your query directly to a specific agent by typing @<name-of-agent> into your chat window. For example, type @coder for queries around coding, or type @command to find the right command among all commands that can be executed at that stage.

Agent pinning

Agent pinning reduces the need for repeated agent references.
When you mention an agent in a prompt and no agent is pinned, the mentioned agent is automatically pinned. If an agent is already pinned, mentioning a different agent will not change the pinned agent. Instead, the newly mentioned agent will be used only for that specific prompt. You can manually unpin an agent through the chat interface if needed.

Image support

HDevelopEVO supports adding images to chat sessions, which is especially useful when visual context is needed to solve problems or explain issues.

You can add images to chat sessions in several ways:

  • Click the icon in the chat input area.
  • Drag and drop images directly into the chat.
  • Copy and paste images from your clipboard.

When an image is included in your request, it will be sent to the LLM along with your text, if the selected model supports image inputs. This enables you to provide visual context that can help the AI understand and address your questions more effectively.

Context variables

You can augment your requests in the chat with context by using variables.

Do one of the following:

  • To refer to the currently selected text, use #selectedText in your request.
  • To further specify the scope of your request, pass context files into the chat. You can also drag and drop a file into the chat view.
  • To use the auto-completion, type #file or directly type #<file-name>.

Here are some of the most frequently used variables:

#file:src/my-code.ts
Is replaced in the user message by the workspace-relative path, and the file is attached to the context. This allows you to add the file content to the context and refer to the file in the chat input text in one go.
#file:filePath
Inserts the path to the specified file relative to the workspace root. After typing #file:, auto-completed suggestions will help you to specify a file. The suggestions are based on the recently opened files and on the file name you’re typing. After typing # followed by the file name you can directly start your search for the file you want to add and reference in your message.
#filePath
Is the shortcut for #file:filePath.
#currentRelativeFilePath
Is used for the relative file path of the currently selected file in the editor or explorer.
#currentRelativeDirPath
Is used for the directory path of the currently selected file.
#selectedText
Use this for the currently highlighted text in the editor. This does not include the information from which file the selected text originates.

You can see the full list of available variables when typing # in the chat input field.

Task context

Task Context introduces a structured, reproducible way to work with AI agents, ensuring clearer intent, better planning, and more accurate results. This feature transforms how you work with AI agents by externalizing your intent into dedicated files that serve as persistent, editable records of what you want the AI to accomplish.

Setting up task context

Task contexts are stored as Markdown files. You can set the storage directory in the settings.

Setting ID: ai-features.promptTemplates.taskContextStorageDirectory
The default is .prompts/task-contexts.
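For example, to store task contexts in a different folder, set the directory in settings.json (a sketch; the folder name is a placeholder, and the setting ID assumes the full name taskContextStorageDirectory):

```json
{
  "ai-features.promptTemplates.taskContextStorageDirectory": "docs/task-contexts"
}
```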

Manually creating a task context file

Instead of starting with a chat prompt, create a dedicated task context file that externalizes your requirements:

  1. Create a new file, for example, my_task.md, in a dedicated directory, by default .prompts/task-contexts.
  2. Write your initial requirement in this file, like “add a reset button to the token usage view”.
  3. Initiate a session with this file using the command Task Context: Initiate Session.
  4. Select your desired agent to link the chat session to your externalized prompt file.
  5. Press Enter to start the request in the chat.

This approach makes your prompt reproducible and allows you to refine it before sending it to the LLM.

Planning with the architect agent

For complex tasks, it’s highly beneficial to use a planning agent before a coding agent:

  1. Select the “Architect” agent when initiating your chat session and describe your task.
    ⤷ The Architect will analyze your workspace and create a detailed plan of what should be coded.
  2. Use the Summarize this session as a task for coder button in the chat.

The system will send the plan to an underlying LLM, which summarizes it into a structured format and creates a task context file. This structured task context includes comprehensive details such as:

  • Problem description and scope
  • Detailed design and implementation steps with specific files
  • Testing strategy, both automated and manual
  • Deliverables and PR description

Note

You can adapt this template by modifying the prompt “architect-tasksummary”.

Implementing with the coder agent

After reviewing and refining the task context, do the following:

  • Review the plan and make any necessary adjustments.
  • If modifications are needed, return to the planning agent and provide feedback.
  • Use the “Update Task Context” action to incorporate changes.
  • When the plan is finalized, trigger the Coder agent with the updated task context.

Because the Coder agent is now working from a detailed and verified plan, it produces results of much higher quality.

AI configuration

The AI Configuration View allows you to review and adapt agent-specific settings. Select an agent on the left side and review its properties on the right:

  • Enable Agent: Disabled agents will no longer be available in the chat or UI elements. Disabled agents also won’t make any requests to LLMs.
  • Edit Prompts: Click Edit to open the prompt template editor, where you can customize the agent’s prompts. Reset will revert the prompt to its default.
  • Language Model: Select which language model the agent sends its requests to. Some agents have multiple “purposes,” allowing you to select a model for each purpose.
  • Variables and Functions: Review the variables and functions used by an agent. Global variables are shared across agents, and they are listed in the second tab of the AI Configuration View. Agent-specific variables are declared and used exclusively by one agent.

View and modify prompts

You can open and edit prompts for all agents from the AI Configuration View. Prompts are shown in a text editor. Changes saved in the prompt editor will take effect with the next request made to the corresponding agent.

You can reset a prompt to its default using the Reset button in the AI configuration view or the Revert toolbar item in the prompt editor.

Note that some agents come with several prompt variants; you can choose the active variant in the drop-down box. To create user-defined variants, browse to the prompt templates directory and create or copy a new file starting with the same ID as the default prompt of an agent.

Variables and functions can be used in prompts. Variables are replaced with context-specific information at the time of the request (for example, the currently selected text), while functions can trigger actions or retrieve additional information. You can find an overview of all global variables in the Variables tab of the AI Configuration View and agent-specific variables in the agent’s configuration.

Variables are used with the following syntax:

{{variableName}}

Tool functions are used with the following syntax:

~{functionName}

Prompt template and fragment locations

By default, custom prompts, prompt variants, and prompt fragments are created and read from user-wide local directories that can be configured in the settings under AI Assistants > Prompt Templates. This setting is valid for all of your projects. In addition, you can configure workspace-specific directories and files that are available as prompts and prompt fragments, to introduce project-specific adaptations and additions.

Under AI Assistants > Prompt Templates, you can specify workspace-relative directories, settings for individual files, and the relevant file extensions for prompt templates and fragments. Workspace-specific prompts have priority, so you can override the prompts of the available agents in a workspace-specific way. Furthermore, these workspace-specific templates are accessible via the prompt fragment variable #prompt:filename in both the chat interface and agent prompt editors.

Prompt fragments enable you to define reusable parts of prompts for recurring instructions given to the AI. These fragments can be referenced both in the chat interface, for one-time usage, and within the prompt templates of agents, to customize agents with reusable fragments. For example, you can define a prompt fragment that specifies a task, provides workspace context or coding guidelines, and then reuse it across multiple AI requests without having to repeat the full text.

To support this functionality, HDevelopEVO includes a special variable #prompt:promptFragmentID that takes the ID of a prompt fragment as an argument.

Agents

The following agents are available by default:

Architect
This agent can access your workspace: it can get a list of all available files and folders and retrieve their content, but it cannot modify files. It can therefore answer questions about the current project, its files, and the source code in the workspace, such as how to build the project, where to put source code, or where to find specific code or configurations.
Coder
This agent can access your workspace: it can get a list of all available files and folders and retrieve their content. Furthermore, it can suggest modifications to files. It can therefore assist with coding tasks or other tasks involving file changes. For more information, see Coder: AI-powered development.
Command
This agent is aware of all commands that you can execute within HDevelopEVO. Based on the request, it can find the right command and then let you execute it.
Orchestrator
This agent analyzes your request against the description of all available chat agents and selects the best fitting agent to answer the request. Your request will then be directly delegated to the selected agent without further confirmation. This is the default agent.
Terminal Assistant
This agent provides assistance in writing and executing arbitrary terminal commands. Based on your request, it suggests commands and allows you to paste and execute them directly in the terminal. It accesses the current directory, environment, and recent output of the terminal session to provide context-aware assistance.
Universal
This agent provides answers to general programming and software development questions. It is also the fall-back for any generic questions you might ask. The Universal agent currently does not have any context by default; that is, it cannot access the current user context or the workspace.

Coder: AI-powered development

Coder is an AI-powered coding chat agent designed to assist with structured code modifications directly within HDevelopEVO. It can browse the workspace, retrieve relevant context, and propose code changes that you can review and apply seamlessly.

Coder can be used for the following:

  • Text retrieval

    Coder can browse the current workspace to find and read the content of relevant code files. As a user, you can augment your queries by mentioning or attaching specific files as context information to your chat messages to get faster and more accurate responses.

  • Propose changes

    Coder provides structured code modifications that can be reviewed and applied automatically.

  • Fix file issues

    Coder can automatically detect and fix issues in files by analyzing diagnostics reported by language servers, linters, and other tools.

  • Agent mode

    Coder can operate as a fully autonomous agent that plans, writes, tests, iterates, and fixes code without requiring manual intervention at each step.

  • Task context

    For complex development tasks, Coder works with task context to provide a structured, reproducible approach with clear planning before implementation.

To interact, type @Coder followed by your request into the chat input field. This also pins @Coder for the ongoing chat session, so you don’t need to mention @Coder again in the following messages.

Coder modes

Coder operates in two distinct modes that offer different levels of autonomy:

Edit Mode
Edit mode is the default interaction model that gives you full control over file changes. The agent proposes changes through structured prompt-based interaction. You review and approve each file modification before it’s committed. Changes are presented as diffs for easy review. Ideal for targeted code modifications where careful review is essential.
Agent Mode
Agent Mode transforms Coder into a fully autonomous developer that can write and modify files without requiring user approval for each change, compile and test the generated code, and interpret results from tests and builds. It can also fix its own errors and iterate until everything works correctly.

How to switch to agent mode:

  1. Navigate to the AI Configuration and select Coder as the active agent.
  2. Choose the agent-mode prompt.
  3. Optionally, switch to a more powerful LLM like Sonnet-4, GPT-4.1 or Gemini-Pro for better results.
  4. Enable notifications to get updates when long-running tasks complete.

Once active, the agent operates differently. It directly writes to the workspace, executes code, and even starts the application when complete.

Tip

To use Coder effectively, describe your programming task in clear, detailed, and natural prompts. Coder will search your workspace for relevant code, but you can improve efficiency by specifying key locations, such as code files that need to be modified or supporting files that contribute to understanding the task, like interface definitions or similar implementations.
Agent mode preserves traceability of all changes via AI’s changeset feature.

Automatic issue detection and fixing

Coder can identify and automatically fix problems in your code files. To use this feature, ask Coder to fix issues in a specific file. You can use context variables like #currentRelativeFilePath or #file:path to specify which file needs fixing.

Example

@Coder Fix all issues in #_f

When triggered, Coder will:

  1. Open the specified file in an editor.
  2. Collect all issues reported in the problem view including the diagnostics from language servers, linters, and spell checkers.
  3. Propose automated fixes for the identified issues.

Ways to specify relevant context

There are multiple ways to help Coder find the right files efficiently. Keep in mind that Coder operates with file paths relative to the workspace root:

  • Using context variables

    Coder supports predefined variables that dynamically provide relevant context. You can just type them in your request in the chat input field. This allows you to not just add the file as context but also describe why the file is relevant, for example “Look at #file:src/api.ts as a reference for generating an implementation…”

    Here are some examples of the most frequently used variables. You can see the full list of available variables when typing # in the chat input field:

    #file:filePath
    Inserts the path to the specified file relative to the workspace root. After typing #file:, auto-completed suggestions will help you specify a file. The suggestions are based on recently opened files and on the file name you type.
    #filePath
    Shortcut for #file:filePath; after typing # followed by a file name, you can directly search for the file you want to add and reference in your message.
    #currentRelativeFilePath (shortcut #_f)
    The relative path of the currently selected file in the editor or explorer.
    #currentRelativeDirPath
    The directory path of the currently selected file.
    #selectedText
    The currently highlighted text in the editor.

    Note that this does not include information about which file the selected text comes from. All files added to the context via variables also appear in the context overview below the chat input field.

  • Adding files directly to the context

    You can drag and drop files from the file explorer, or use the + button below the chat input field to directly add files to the context of a conversation. In contrast to using variables, however, you cannot describe why these files are relevant.

  • Natural language

    You can describe the location in plain text, such as: “In the … package under src/browser.” While Coder will still need to search, this helps it focus on the right area of your workspace. If you already know the correct location of files, prefer context variables that point to them directly; this leads to faster and more accurate results.
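Example

Context variables and natural language can be combined in a single request. The file paths below are placeholders; adapt them to your workspace:

@Coder Implement the interface defined in #file:src/api.ts in a new class. Use #file:src/impl/base.ts as a reference for the coding style.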

Reviewing and applying code changes🔗

Based on your task and the provided context, Coder suggests code changes. This process may take some time. For transparency, you can observe which files Coder accesses in the chat. You can expand function calls such as getFileContent to see which files are accessed. While Coder generates file changes, you can also observe the code generation by expanding the function call arguments.

In the default prompt, Coder uses two functions to suggest changes; you can also review the prompt template yourself.

changeSet_writeChangeToFile
Will rewrite the full file with a new, changed version. This is usually very robust, but might take a while to complete.
changeSet_replaceContentInFile
Will only replace specific segments of text within a file. This is faster but may require multiple attempts if the content to be replaced is ambiguous or if there are many similar patterns in the code.

Coder will usually select the better of the two functions above, based on the proposed change. If you experience persistent issues, specifically if the changeSet_replaceContentInFile function repeatedly fails, you can experiment with changing the default prompt or switch to the prompt variant coder-rewrite, which only rewrites files.

Custom agents🔗

Custom agents enable you to define new chat agents with custom prompts on the fly. These agents are then immediately available in the default chat.

To define a new custom agent, navigate to the AI Configuration and click on Add Custom Agent. This opens the .yml file where all available custom agents are defined.

Custom agent configuration
id: obfuscator
name: Obfuscator
description: This is an example agent. Please adapt the properties to fit your needs.
prompt: Obfuscate the following code so that no human can understand it anymore. Preserve the functionality.
defaultLLM: openai/gpt-4o

The key fields are:

  • id: Unique identifier for the agent
  • name: Display name of the agent
  • description: Brief explanation of what the agent does
  • prompt: Default prompt that the agent will use for processing requests
  • defaultLLM: Language model used by default
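As a further sketch, a custom agent for writing commit messages could look like this (the id, prompt, and model below are illustrative examples; adapt them to your needs):

Custom agent configuration
id: commit-writer
name: Commit Writer
description: Suggests a concise commit message for the provided changes.
prompt: Write a concise commit message summarizing the following changes.
defaultLLM: openai/gpt-4o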

Agent-to-agent delegation🔗

Agent-to-agent delegation is a powerful feature in HDevelopEVO that enables one AI agent to delegate specific tasks to another specialized agent. This creates multi-agent workflows where each AI agent can focus on its dedicated responsibility, leading to better automation and specialization.

The delegation system allows agents to:

  • Delegate specialized tasks: One agent can hand off specific work to another agent that’s better suited for the task.
  • Chain workflows: Create complex, multi-step processes by connecting different agents.
  • Maintain context: The delegating agent can pass along necessary context and continue its work after delegation.
  • Automate repetitive tasks: Set up workflows where routine tasks are automatically handled by specialized agents.

MCP integration🔗

HDevelopEVO integrates the Model Context Protocol (MCP), enabling you to configure and utilize external services in your AI workflows.

Preview restriction

While this integration does not yet include MCP servers in any standard prompts, it already allows you to explore the MCP ecosystem and discover interesting new use cases.

To learn more about MCP, see the official announcement from Anthropic. For a list of available MCP servers, visit the MCP Servers Repository.

Configuring MCP servers🔗

To configure MCP servers, open the Preferences and add entries to the MCP Servers Configuration section. Each server requires a unique identifier, for example, brave-search or filesystem, and configuration details such as the command, arguments, optional environment variables and autostart.

autostart, true by default, will automatically start the respective MCP server whenever you restart HDevelopEVO. In your current session, however, you still need to start it manually using the MCP: Start MCP Server command.

MCP example configuration
{
  "brave-search": {
    "command": "npx",
    "args": [
      "-y",
      "@modelcontextprotocol/server-brave-search"
    ],
    "env": {
      "BRAVE_API_KEY": "YOUR_API_KEY"
    },
    "autostart": false
  },
  "filesystem": {
    "command": "npx",
    "args": [
      "-y",
      "@modelcontextprotocol/server-filesystem",
      "/Users/YOUR_USERNAME/Desktop"
    ],
    "env": {
      "CUSTOM_ENV_VAR": "custom-value"
    }
  },
  "git": {
    "command": "uv",
    "args": [
      "--directory",
      "/path/to/repo",
      "run",
      "mcp-server-git"
    ]
  },
  "git2": {
    "command": "uvx",
    "args": [
      "mcp-server-git",
      "--repository",
      "/path/to/otherrepo"
    ]
  }
}

Note

uvx comes preinstalled with uv and does not need to be installed manually. Running pip install uvx installs a deprecated tool unrelated to uv.

The configuration options include:

  • command: Executable used to start the server (for example, npx)
  • args: Array of arguments passed to the command
  • env: Optional set of environment variables for the server

Windows users

On Windows, you need to start a command interpreter (for example, cmd.exe) as the server command in order for path lookups to work as expected. The effective command line is then passed as an argument.

For example:

"filesystem": {
  "command": "cmd",
  "args": ["/C", "npx -y @modelcontextprotocol/server-filesystem /Users/YOUR_USERNAME/Desktop"],
  "env": {
    "CUSTOM_ENV_VAR": "custom-value"
  }
}

Starting and stopping MCP servers🔗

HDevelopEVO provides commands to manage MCP servers:

Start MCP Server
Use the command MCP: Start MCP Server to start a server. The system displays a list of available servers to select from.
Stop MCP Server
Use the command MCP: Stop MCP Server to stop a running server.

When a server starts, a notification is displayed confirming the operation and listing the functions made available. You can also set an MCP server to autostart in the settings (true by default).

Note

In a browser deployment, MCP servers are scoped per connection; that is, if you start them manually, you need to start them once per browser tab.

Using MCP server functions🔗

Once a server is running, its functions can be invoked in prompts using the following syntax:

~{mcp_<server-name>_<function-name>}

  • mcp: Prefix for all MCP commands
  • <server-name>: Unique identifier of the server (for example, brave-search)
  • <function-name>: Specific function exposed by the server (for example, brave_web_search)

Example

To use the brave_web_search function of the brave-search server, you can write:

~{mcp_brave-search_brave_web_search}

This allows you to seamlessly integrate external services into your AI workflows.
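For example, since MCP functions are referenced in prompts, you can embed one in a custom agent prompt. The agent below is a sketch only, assuming the brave-search server from the configuration example above is configured and running:

Custom agent configuration
id: web-researcher
name: Web Researcher
description: Answers questions using web search results.
prompt: Answer the user's question. Use ~{mcp_brave-search_brave_web_search} to look up current information when needed.
defaultLLM: openai/gpt-4o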

SCANOSS🔗

The SCANOSS integration is a code scanner powered by SCANOSS, enabling you to analyze generated code for open-source compliance and licensing. This helps you understand potential licensing implications when using generated code.

Note

This feature sends a hash of suggested code snippets to the SCANOSS service hosted by the Software Transparency Foundation for analysis. While the service is free to use, very high usage may trigger rate limiting. Additionally, neither MVTec nor SCANOSS can guarantee that no license implications exist, even if no issues are detected during the scan.

To configure SCANOSS, open Settings > AI Assistants > SCANOSS Mode and select the desired mode:

  • Off: Disables SCANOSS completely
  • Manual: Allows you to trigger scans manually via the SCANOSS button on code generated via the Coder Agent or directly in the Chat view
  • Automatic: Automatically scans generated code snippets in the Chat view

SCANOSS Integration: Open source compliance scanning🔗

Coder integrates with SCANOSS to help you identify potential licensing implications in your AI-generated code. This allows you to scan code changes proposed by Coder for open-source compliance and licensing concerns before applying them to your codebase.

Manual scanning🔗

To manually scan a code snippet:

  1. Generate code in the AI Chat view or via the Coder Agent.
  2. Click the SCANOSS button in the toolbar of the code renderer embedded in the Chat view or above the changeset.
  3. As a result, one of the following icons appears:

    • A warning icon if a match is found
    • A green check mark if no matches are found
  4. If a warning icon is displayed, click the SCANOSS button again to view detailed scan results in a popup window.

Automatic scanning🔗

In Automatic mode, SCANOSS scans code in the background whenever it is generated in the Chat view or with the Coder Agent. Results are displayed immediately, indicating whether any matches are found.

Understanding SCANOSS results🔗

After a scan is completed, SCANOSS provides a detailed summary, including:

  • Match Percentage: Degree of similarity between the scanned snippet and the code in the database
  • Matched File: File or project containing the matched code
  • Licenses: List of licenses associated with the matched code, including links to detailed license terms

AI History🔗

The AI History view allows you to review all communications between agents and underlying LLMs. Select the corresponding agent at the top to see all its requests in the section below.