
AI Assistant

Overview

The AI Assistant integrates multiple large language models, allowing for quick switching and use of different language models through simple configuration. While fully leveraging the advantages of large language models, the AI Assistant also supports deep interaction with the system through custom plugins, enabling more complex and practical assistants that fit actual usage scenarios.

AI Agent

Automation processes can include steps that call the AI Assistant. Combining automation with the AI Assistant makes it possible to build complex AI Agents.

Configuration

Basic Settings

On the Basic Settings page, you can name the AI Assistant, select an icon and icon color, and write a description.

  • Name: Set the name of the AI Assistant. For example: "GPT"
  • Icon: Choose an icon to represent your AI Assistant.
  • Icon Color: Select the background color of the icon to distinguish between different models.
  • Description: Briefly describe the functions and purpose of the AI Assistant. For example: "I am a large language model developed by OpenAI, and I can help you answer questions."

Scenario Guidance Questions

You can preset common questions and language model response strategies to help users quickly understand and use the model.

  • Input Box Placeholder: Enter placeholder text, for example: "Enter your question"
  • Add Question: Click the "Add" button to add common questions and their brief descriptions. For example:
    • "How to plan life"
    • "Help me write a job resume"


Q&A Logic Settings

In the Persona and Response Logic settings, you can set the model's response strategy.
For example: "You are an office assistant and need to answer questions based on user queries. When users ask questions that require you to think, think step by step, and then call external tools after you have clarified your thoughts."
In this way, based on the configured [Persona and Response Logic], the AI Assistant will act as an office assistant and think through the question before calling external tools.

Writing Suggestions

To ensure a better experience with the AI Assistant, it is recommended to include the following content when writing prompts:

  • Define Persona: Describe the role or responsibilities of the AI Assistant and its response style.
    Example: You are a project management assistant and need to answer user questions accurately.
  • Describe Functions and Workflow: Describe the functions and workflow of the AI Assistant, specifying how the AI Assistant should answer user questions in different scenarios.
    Example: When users query tasks by number, call the "get_task_by_code" tool to query the task.
    Although the AI Assistant automatically selects plugins based on the prompt content, it is still recommended to state explicitly, in natural language, which tools to call in which scenarios. This strengthens the constraint on the AI Assistant so that it selects the expected plugins and responds more accurately.
    Example: When users ask about the latest unfinished tasks of ongoing projects, call "get_projects" to search for ongoing projects, then call "get_tasks" to query the unfinished tasks of ongoing projects, and finally organize all data for the user.
    Additionally, you can provide examples of response formats for the AI Assistant. The AI Assistant will imitate the provided response format to reply to users.
    Example:
```md
Please respond according to the following format:
**Task Name**
Start Time: yyyy-MM-dd hh:mm
End Time: yyyy-MM-dd hh:mm
Task Description: Task description within 20 characters
```
  • Instruct AI Assistant to Answer Within Specified Range: If you want to limit the response range, directly tell the AI Assistant what should be answered and what should not be answered.
    Example: Refuse to answer topics unrelated to project management; if no results are found, tell users that you did not find relevant data instead of fabricating content.
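Putting the suggestions above together, a complete [Persona and Response Logic] prompt might look like the following sketch. The tool names are the illustrative ones used in the examples above, not a fixed requirement:

```md
You are a project management assistant and need to answer user questions accurately.
When users query tasks by number, call the "get_task_by_code" tool to query the task.
When users ask about the latest unfinished tasks of ongoing projects, call "get_projects" to search for ongoing projects, then call "get_tasks" to query their unfinished tasks, and organize all data for the user.
Please respond according to the following format:
**Task Name**
Start Time: yyyy-MM-dd hh:mm
End Time: yyyy-MM-dd hh:mm
Refuse to answer topics unrelated to project management; if no results are found, tell users that you did not find relevant data instead of fabricating content.
```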

Language Model Settings

In this section, you can select the language model provider and specific model version, as well as configure model parameters.

| Setting Item | Description |
| --- | --- |
| Model Provider | Select or input the model provider, such as OpenAI |
| Language Model | Select or input the specific language model, such as gpt-4-turbo |
| API Key | The API Key required to call the model interface |
| API Address | The address of the language model API, such as https://api.openai.com/v1/chat/completions |
| Randomness | A value between 0 and 2. The higher the temperature value, the more random the response; the lower the value, the more certain the result. As the value approaches zero, the model becomes deterministic and repetitive. |
| Maximum Token Count for Response | 0 means unlimited. The maximum number of tokens in the prompt and response combined. Different models have different token limits. Specifying a maximum length prevents overly long or irrelevant responses and controls costs. |
| Topic Freshness | The larger the presence_penalty value, the more likely the model is to expand to new topics |
| Frequency Penalty | The larger the frequency_penalty value, the more the model reduces repeated words |
| Number of Dialogue Turns Carried | The number of context messages the model remembers for each query |
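As a concrete illustration, the settings above correspond to the standard parameters of an OpenAI-compatible chat completions request. The sketch below builds such a request body; the parameter names (temperature, presence_penalty, frequency_penalty, max_tokens) are the public OpenAI API names the table refers to, while how Informat forwards them internally is an assumption:

```python
import json

# Sketch: mapping the Language Model Settings onto an OpenAI-compatible
# /v1/chat/completions request body. Parameter names follow the public
# OpenAI API; Informat's internal handling is an assumption.
def build_chat_request(model, messages, randomness=1.0, max_tokens=0,
                       topic_freshness=0.0, frequency_penalty=0.0):
    payload = {
        "model": model,                       # Language Model, e.g. gpt-4-turbo
        "messages": messages,                 # carried dialogue turns + new query
        "temperature": randomness,            # Randomness: a value between 0 and 2
        "presence_penalty": topic_freshness,  # Topic Freshness
        "frequency_penalty": frequency_penalty,
    }
    if max_tokens > 0:                        # 0 means unlimited -> omit the field
        payload["max_tokens"] = max_tokens
    return payload

request = build_chat_request(
    "gpt-4-turbo",
    [{"role": "user", "content": "List my unfinished tasks"}],
    randomness=0.2,
)
print(json.dumps(request, indent=2))
```

Lower `randomness` values such as 0.2 are typical for assistants that must answer factually; higher values suit creative tasks.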

Supported Models List

| Model Name | Official Documentation Address | API Key Application Address |
| --- | --- | --- |
| DeepSeek | https://deepseek.com/ | https://platform.deepseek.com/api_keys |
| VolcEngine Ark Model | https://www.volcengine.com/ | https://console.volcengine.com/ark/region:ark+cn-beijing/apiKey |
| Tongyi Qianwen | https://tongyi.aliyun.com/ | https://bailian.console.aliyun.com/?spm=5176.29619931.J_SEsSjsNv72yRuRFS2VknO.2.74cd10d7DbAknf&tab=app#/api-key |
| Zhipu ChatGLM | https://chatglm.cn/ | https://chatglm.cn/developersPanel/apiSet |
| Ollama Local Model | https://ollama.com/ | - |
| OpenAI | https://openai.com/ | https://platform.openai.com/account/api-keys |
| Claude | https://claude.ai/ | https://console.anthropic.com/account/api-keys |

How to Define Plugins

If the language model is the brain of AI, then plugins are its organs, giving the AI additional input and output capabilities. Informat provides one-click system plugins that give the AI Assistant basic capabilities, and also supports custom plugins so designers can build richer and more diverse AI capabilities.

Plugin Definition

  • Name: The name of the plugin.
  • Identifier: The unique identifier of the plugin.
  • Enabled: Control whether the plugin is enabled in the AI Assistant.
  • Call Instructions: Description of the plugin's functions.
  • Call Automation: The automation executed when the AI Assistant calls the plugin.

System Plugins

System plugins provide the AI Assistant with the ability to deeply interact with the system, implementing core functions such as time retrieval and data operations. The specific plugin list is as follows:

| Name | Identifier | Call Instructions |
| --- | --- | --- |
| Get Current Time | _get_current_time | Get the current time; the AI model needs to call this method before a query |
| Query Available Data Tables | _query_all_table_list | Get the list and basic information of all data tables in the current application |
| Query Specified Data Table Configuration | _query_table_define | Get the complete structure definition of the specified data table (including all field information) |
| Query Data Table Record List | _query_table_record_list | Query data table records by condition, supporting filtering, pagination, sorting, etc. |
| Query Data Table Record List Count | _query_table_record_list_count | Count the number of data table records that meet the conditions |
| Send System Email | _send_system_email | Use system email to send emails |
| Query Application Member Account Basic Information | _query_app_user_list | Get basic member account information (including account ID, email, etc.) |
| Internet Access Capability | _web_content | Get web page content through a URL, allowing access to network resources |
| Code Executor | _javascript_eval | Execute JavaScript code, suitable for complex calculations and data processing |
| Render HTML | _render_html | Render HTML code as a visual interface, supporting interactive content display |
| Insert Specified Data Table Records | _table_record_batch_insert | Batch insert records into the specified data table |
| Edit Specified Data Table Records | _table_record_batch_update | Modify record content in the specified data table |
| Delete Specified Data Table Records | _table_record_batch_delete | Delete records in the specified data table according to conditions |
| Query All Search Engine Modules | _query_all_textindex_list | Get all search engine module information (including data source configuration) |
| Call Search Engine to Search and Retrieve Data | _textindex_search | Call the search engine to perform data retrieval operations |
| Send Notification | _send_notification | Call a script to send notifications to web, WeCom, DingTalk, Feishu, and other platforms |
| Read Office Files | _read_office_file | Supports reading content from PDF and Word files |
| Generate Informat Script | _generation_informat_script | Generate an Informat script based on context |
| Query Informat Script File List | _query_informat_script_list | Get the list of Informat script files |
| Query Specified Informat Script File Content | _query_informat_script_content | Get script content by script ID |
| Execute Informat Script | _execute_informat_script | Execute an Informat script by script ID and function name |
| Query Designer Data Table Definition | _designer_query_table_define | Query data table definitions in unpublished applications |
| Create Data Table Module | _create_table_module | Create a data table module, supporting creation of related records and related record fields |
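To give a sense of how a system plugin is presented to the language model, the sketch below expresses `_query_table_record_list` as a tool in the widely used OpenAI function-calling schema. The parameter fields (`table_id`, `filter`, `page`, `page_size`) are illustrative assumptions, not Informat's actual schema:

```python
# Sketch: a system plugin exposed to the model as a function-calling tool.
# The "description" text is the plugin's Call Instructions from the table
# above; the parameter fields are illustrative assumptions.
query_records_tool = {
    "type": "function",
    "function": {
        "name": "_query_table_record_list",
        "description": "Query data table records by condition, "
                       "supporting filtering, pagination, sorting, etc.",
        "parameters": {
            "type": "object",
            "properties": {
                "table_id": {"type": "string", "description": "Target data table"},
                "filter": {"type": "object", "description": "Filter conditions"},
                "page": {"type": "integer", "description": "Page number"},
                "page_size": {"type": "integer", "description": "Records per page"},
            },
            "required": ["table_id"],
        },
    },
}
```

The model reads the `description` text to decide when to call the tool, which is why the Call Instructions should state the plugin's purpose clearly.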

Custom Plugins

Call Automation
When a custom plugin is called, it will execute a preset automation process to meet personalized needs.

Plugin Definition Example

Suppose you want to define a plugin to help create tasks, the definition is as follows:

  1. Plugin Definition: define the "Create Task" plugin.
  2. Call Automation: configure the automation that is executed when the plugin is called.
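Expressed as a plain mapping of the five fields listed under "Plugin Definition", the "Create Task" plugin might look like the sketch below. The field keys, identifier, and automation name are illustrative assumptions:

```python
# Sketch of the "Create Task" custom plugin, mirroring the five
# definition fields (Name, Identifier, Enabled, Call Instructions,
# Call Automation). All concrete values here are hypothetical.
create_task_plugin = {
    "name": "Create Task",
    "identifier": "create_task",
    "enabled": True,
    "call_instructions": "Create a new task when the user asks to add a task. "
                         "Requires a task name; start and end times are optional.",
    "call_automation": "automation_create_task",  # automation executed on call
}
```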

Enable MCP Server Service

Deploy the AI Assistant as an MCP Server with one click, allowing it to be called across different models and tools.

MCP Server Address: ${host}/web0/aiagent/${appId}/${moduleId}/event

X-INFORMAT-APIKEY: supports both the application's default apiKey and a custom apiKey

Example: adding the MCP Server in TRAE
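The address template and header above can be assembled by a client as in the sketch below. The host, appId, and moduleId values are placeholders to be substituted with your own; no request is actually sent here:

```python
# Sketch: assembling the MCP Server request address and auth header
# from the template ${host}/web0/aiagent/${appId}/${moduleId}/event.
# All concrete values below are placeholders.
def build_mcp_endpoint(host, app_id, module_id, api_key):
    url = f"{host}/web0/aiagent/{app_id}/{module_id}/event"
    headers = {
        "X-INFORMAT-APIKEY": api_key,  # default application apiKey or a custom one
        "Content-Type": "application/json",
    }
    return url, headers

url, headers = build_mcp_endpoint(
    "https://example.informat.host", "my_app", "my_module", "sk-xxxx")
print(url)
```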

AI Assistant Debugging

During the construction of the AI Assistant, you need to continuously optimize and iterate prompts based on the actual performance of the AI Assistant to achieve the expected experience.

In the Preview and Debug area, test the actual performance of the AI Assistant. If it does not meet expectations, analyze the causes against the AI Assistant's goals and continuously adjust and optimize the response logic.
