AI-Service documentation

The OX AI-Service integrates third-party AI providers into App Suite UI: currently ChatGPT from OpenAI, Anthropic Claude on AWS Bedrock, or LocalAI-compatible models.

Configuration

As the AI Service delivers the App Suite UI plugin, it needs to be added to the baseUrls of the UI Middleware.

Add the AI Service URL (the K8s service URL) to the list of baseUrls in the UI Middleware configuration. This is needed to ensure that the AI integration's code is sent to the user's browser.

Example config map

# hostnames may differ in your setup
baseUrls:
  - http://core-ui
  - http://ai-service
  - http://switchboard

Further reading: UI middleware Config map

Capability

Important: Starting with AI-Service 2.0 and Core UI 8.28, the required capability was renamed.

Capability: ai-service (was open-ai)

Feature toggle

Important: Starting with AI-Service 2.0 and Core UI 8.28, the feature toggle was renamed.

Feature toggle: ai (was open-ai)

Requirements

Ensure the following prerequisites are met to run the AI-Service App Suite plugin:

  1. A running Switchboard in your stack. Switchboard provides the service-to-service authentication the AI Service relies on.
  2. The Provider Edition (PE) of OX App Suite. This requires a paid App Suite subscription (for more information, contact Sales).

How to enable for users

By default, AI integration is disabled for all users. To enable the feature, do the following:

  1. Add the capability ai-service, which indicates a running AI-Service in your deployment
  2. Enable the feature toggle for users via
io.ox/core//features/ai = true
  3. Set the publicly available hostname of the AI Service in the user configuration
io.ox/core//ai/hostname = https://ai-service.your.deployment.com
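Taken together, the user-facing settings from the steps above look like this in a properties-style configuration (the hostname is an example value for your deployment):

```
# the capability "ai-service" must also be assigned to the user (step 1)
io.ox/core//features/ai = true
io.ox/core//ai/hostname = https://ai-service.your.deployment.com
```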

Additional AI features

Chat

The AI chat allows a user to chat with the AI. There is a separate feature toggle for this.

io.ox/core//features/ai-chat=<bool> (default: true)

This enables or disables the chat.

You can configure the assistant globally by providing context via a system prompt. This prompt is injected into every conversation as an initial message that is invisible to the end user. Set chatAssistantContext in the AI Service config; the default is "You are a helpful AI assistant."

Example:

chatAssistantContext: "You are a helpful AI assistant. Your name is Hal 9000."

The frontend part can be configured via the following settings:

io.ox/core//ai/chat/showLauncher=<bool> (default: true)

Add the AI Chat as an app in the App Launcher

io.ox/core//ai/chat/showTopRightIcon=<bool> (default: true)

Add the AI Chat as an icon on the right side of the App Suite toolbar (next to the notification icon)

io.ox/core//ai/chat/autoOpenOnFirstRun=<bool> (default: true)

Automatically open the chat window for all users with the AI feature on first run, e.g. after updating to the most recent AI Service. This helps boost AI engagement and makes the feature visible.

The following settings are available in the helm values file:

# will be available as "config.yaml" in config map
config:
  chat:
    # This configures the time to live (TTL) for the chat history.
    history:
      # given in days (0 = keep forever)
      pruneChatsOlderThan: 0

Typeahead for mail compose

The typeahead feature provides automatic text predictions while composing emails. Using AI, it generates text that either completes a sentence or aligns with the overall context. Predictions can be triggered manually by pressing Ctrl+Space during mail composition or will appear automatically after the cursor remains idle for a set period (e.g., 3, 5, or 10 seconds).

This feature includes a dedicated toggle, allowing it to be enabled or disabled for all users.

io.ox/core//features/ai-typeahead=<bool> (default: false)

When enabled, this feature is opt-in, requiring users to acknowledge that the AI will process their email text to generate predictions.

The AI settings panel provides additional user options for this feature, allowing control over whether predictions are shown automatically, only when using a keyboard shortcut, or not at all.

Engage Idle AI Users

To encourage users to explore AI features, it may be helpful to remind them about these features. A simple approach is to display the AI Tour again after a certain period of inactivity.

io.ox/core//ai/tour/remindAfterIdleDays=<number> (default: -1 (disabled))

If a user has already seen the tour but does not use any AI feature within the specified number of days, the tour will reappear exactly once.

Note: This feature relies on the date of the user's initial consent to determine whether any AI features have been used during the defined period.

Upsell configuration

AI integration supports the App Suite UI upsell functionality. To enable upsell for the AI features, set the following frontend configuration:

io.ox/core//upsell/activated = true
io.ox/core//upsell/enabled/ai-service = true
io.ox/core//features/ai = true

Remove the capability ai-service from all users that should see the upsell triggers. When enabled, all AI buttons and dropdowns are shown in the UI but trigger an upsell event on the ox object instead.

Models Configuration

The services and models parameters are deprecated and should no longer be used. Use the modelsConfig parameter instead.

The modelsConfig parameter defines the configuration for the used AI models. It has parameters for the default models and a list of all supported models.

  • defaultModel: The id of the default model to use
  • privateDefaultModel (optional): The id of a special model that can be used for a private mode. Usually points to a self-hosted AI model.

The models parameter defines an object of predefined AI service (openai, localai, etc.) and model (gpt-4, etc.) combinations that are supported by the deployment.

  • key: Serves as the id of the entry.
  • value: An object with the following properties:
    • displayName: The display name of the AI model.
    • service: The name of the AI service. Allowed values are openai, claude-aws, and localai.
    • model: The name of the model you want to use, e.g. gpt-4o.
    • maxTokens (optional): The maximum number of tokens that the model can handle.

Example modelsConfig:

modelsConfig:
  defaultModel: "gpt-4o"
  privateDefaultModel: "localai-private"
  models:
    gpt-4o:
      displayName: "ChatGPT 4o"
      service: "openai"
      model: "gpt-4o"
      maxTokens: 128000
    gpt-4:
      displayName: "ChatGPT 4"
      service: "openai"
      model: "gpt-4"
      maxTokens: 8192
    gpt-3.5-turbo:
      displayName: "ChatGPT 3.5-turbo"
      service: "openai"
      model: "gpt-3.5-turbo"
      maxTokens: 16385
    claude-haiku:
      displayName: "AWS Claude-Haiku"
      service: "claude-aws"
      model: "claude-haiku"
      maxTokens: 200000
    claude-sonnet:
      displayName: "AWS Claude-Sonnet"
      service: "claude-aws"
      model: "claude-sonnet"
      maxTokens: 200000
    localai-private:
      displayName: "ACME privacy"
      service: "localai"
      model: "llama2"
      maxTokens: 64000

The JSlob setting io.ox/core//ai/model works with these model ids. It also accepts the values 'default' and 'private'. You can leave it blank to fall back to the default model.

You also need to setup the consent dialog for the private and default models like this:

io.ox/core//ai/config/default =
  {
    serviceName: 'OpenAI',
    serviceLink: 'https://openai.com/about',
    dateUsagePolicy: 'https://openai.com/policies/api-data-usage-policies'
  }
io.ox/core//ai/config/private =
  {
    serviceName: 'ExampleAI',
    serviceLink: 'https://example.com/about',
    dateUsagePolicy: 'https://example.com/api-data-usage-policies'
  }

This is needed because the UI doesn't know which service is behind the abstract "default" or "private" model names. In the consent dialog this information is used to fill the placeholders in the consent text, so the user is directed to the proper resources.


LocalAI Configuration

LocalAI allows you to use a self-hosted LLM with the OX AI Service. It acts as a drop-in API replacement compatible with the OpenAI API. You can also use it to connect to an already API-compatible LLM on your side.

To use it, configure your models accordingly. See the example in the Models Configuration chapter.

Additionally, set the localaiBaseUrl to point to your LocalAI endpoint. If no localaiBaseUrl is set but an openaiBaseUrl is configured, the latter will be used as a fallback. Note: This should only be used for testing.

To use OpenAI ChatGPT and LocalAI in parallel, you can either run two separate AI Service instances with different configurations or configure a single AI Service instance to connect to both OpenAI and a LocalAI.
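For the single-instance variant, a values sketch might combine both providers like this (the endpoint and model entries are examples, reusing the entries from the Models Configuration chapter):

```yaml
# excerpt from a values file; endpoint and models are examples
localaiBaseUrl: "http://localai.internal:8080"
modelsConfig:
  defaultModel: "gpt-4o"
  privateDefaultModel: "localai-private"
  models:
    gpt-4o:
      displayName: "ChatGPT 4o"
      service: "openai"
      model: "gpt-4o"
    localai-private:
      displayName: "ACME privacy"
      service: "localai"
      model: "llama2"
```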

Advanced Configuration: Enforcing Default Model and Service

The enforceDefaultModel setting controls whether the default model (e.g., "gpt-4o", "localai-private", etc.) is used for all requests.

When this value is set to false, the client configuration specifies which model to use for each request individually.

Important: Be cautious when overriding this setting, as it may result in an incompatible model that could cause runtime errors. Only modify this value if absolutely necessary.

io.ox/core//ai/model = "gpt-4o" (default is 'default')

There are two special values for this JSlob setting: 'default' and 'private'. Use these if you simply want to use the configured default models from the AI Service.

Model selection settings

To show the model switch in the settings, the following needs to be configured:

io.ox/core//ai/model must be writable. The switch will not work if this setting is read-only.

Additionally, either io.ox/core//ai/config/default/displayName and io.ox/core//ai/config/private/displayName must be set, or io.ox/core//customStrings/ai/config/modelSelectionDefaultModel and io.ox/core//customStrings/ai/config/modelSelectionPrivateModel.

These settings label the two models in the model selection correctly. If you only provide the displayName settings, they are inserted into our fallback labels: "<io.ox/core//ai/config/default/displayName> (Up to date knowledge, fast, hosted in US)" for the default model and "<io.ox/core//ai/config/private/displayName> (Best privacy and data protection, hosted in EU)" for the private model.

If you provide custom strings for the models those will be used instead.

If you want to further customize the model selection settings you can use io.ox/core//customStrings/ai/config/modelSelectionExplanation to set your own explanation text.
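The labeling rules above can be sketched in a few lines. This is an illustration only, not the actual UI code; the function name and suffix constants are made up for the example:

```python
# Sketch of how the model-selection labels are derived (illustrative only).
DEFAULT_SUFFIX = "Up to date knowledge, fast, hosted in US"
PRIVATE_SUFFIX = "Best privacy and data protection, hosted in EU"

def model_label(custom_string, display_name, fallback_suffix):
    """A configured custom string wins; otherwise the displayName is
    wrapped in the fixed fallback label."""
    if custom_string:
        return custom_string
    return f"{display_name} ({fallback_suffix})"

# no custom string configured, only a displayName
print(model_label(None, "ChatGPT 4o", DEFAULT_SUFFIX))
# a custom string overrides the fallback label entirely
print(model_label("Fast cloud model", "ChatGPT 4o", DEFAULT_SUFFIX))
```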

Accounting Feature

As AI usage relies on paid APIs, providers may want to charge their users for it. The service supports a flexible way to apply different commercial models by counting API requests per user. When enabled, the Accounting feature ensures that all requests made by a user are counted in a database. This number is checked against a so-called "plan" assigned to the user. If the AI quota is reached, no additional requests can be made for the current time range.

For instance, a "free" plan might allow 10 requests per day or month, while a paid plan allows 1000 requests per month. The user and brand limits can be configured via the config map of the Helm chart. A plan is an object whose key is the id of the plan, with the following properties:

  • name: The name of the plan
  • plan: The type of the plan (free, paid)
  • strategy: The strategy for the plan (duration, monthly)
  • duration: The duration if strategy is "duration" (format: 30d for 30 days)
  • limit: The limit of requests per user and brand
  • enabled: Whether the plan is enabled

Additional Strategies Explanation

The monthly strategy resets usage at the start of each new month, providing a monthly quota of requests.

The duration strategy uses a fixed number of days (e.g., 14 days) and a certain request limit. Once the trial period ends or the quota is reached, the service returns 402 Payment Required, preventing further use. This prevents abuse of trial accounts and differentiates them from longer-term or recurring monthly plans.
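The two reset strategies can be sketched as follows. This is an illustration of the described behavior, not the service's actual implementation; function names are made up:

```python
from datetime import date, timedelta

def monthly_window_start(today: date) -> date:
    """'monthly' strategy: usage resets at the start of each calendar month."""
    return today.replace(day=1)

def duration_expiry(start: date, duration: str) -> date:
    """'duration' strategy: a fixed window like '30d' from first use; once it
    ends (or the quota is reached) the service answers 402 Payment Required."""
    days = int(duration.rstrip("d"))
    return start + timedelta(days=days)

print(monthly_window_start(date(2024, 5, 17)))   # 2024-05-01
print(duration_expiry(date(2024, 5, 1), "30d"))  # 2024-05-31
```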

Activation

To activate the Accounting feature in a Kubernetes environment, modify the configuration file by setting the accounting.enabled value to true in your values file.

When enabled, it monitors user activity and enforces quotas according to the user's capabilities and associated plan.

Functionality

Once the Accounting feature is activated, the service will inspect each user's JSON Web Token (JWT) for specific "capabilities" that match a configured "plan."

How It Works

  • JWT Inspection: When a user makes a request, the service inspects the user's JWT for any "capabilities" that correspond to a configured plan.

  • Plan Matching: If a capability matches a predefined plan, the service begins tracking the user's usage according to the rules of that plan.

  • Usage Tracking: The service monitors the number of requests made by the user as specified by the plan's quota. For example, a plan may allow 100 requests per month.

  • Quota Enforcement: If the user reaches the quota limit (e.g., 100 requests per month), the service responds with a status code 402 Payment Required to indicate that the quota has been exhausted.

  • Default Plan: If no capability is found that matches a configured plan, the user is automatically assigned a default plan. The default plan allows up to 10,000 requests per month.

Response Codes

  • 200 OK: The request was successful and usage is within quota limits. The response body contains the property remainingRequests with the number of remaining requests.

  • 402 Payment Required: The user has reached the quota limit for their plan.

User Routes

The service provides routes for users to check their usage and remaining quota:

  • GET /api/quotas: Get all quotas for the current user. The response body contains the array remainingRequests of objects with the service name as key and the number of remaining requests as value.
  • GET /api/quotas/:model: Get the quota for the current user for the given :model. The response body contains the property remainingRequests with the number of remaining requests.
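Based on the route descriptions above, the response bodies have roughly this shape (service names and counts are invented examples):

```
GET /api/quotas
{ "remainingRequests": [ { "openai": 87 }, { "claude-aws": 100 } ] }

GET /api/quotas/gpt-4o
{ "remainingRequests": 87 }
```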

Admin routes

There are several routes available for administrators to monitor usage and reset quotas:

  • GET /api/admin/brands/:brand/usage/:year/:month: Get usage statistics for a specific brand in a given month. :brand is the brand ID, :year is the year, and :month is the month. Example /api/admin/brands/brand1/usage/2024/1
  • GET /api/admin/quotas/:service/:cid: Get the quota for a specific user and service. :service is the name of the service (e.g., openai), and :cid is the composite user ID which is a combination of the context ID and the numeric user ID. For example, :cid could be 1-123.
  • DELETE /api/admin/quotas/:service/:cid/:strategy: Reset the quota for a specific user and service. :service is the name of the service (e.g., openai), :cid is the composite user ID, and :strategy is the strategy for the plan (e.g., monthly or duration).

To secure these routes, there are 3 options available for authentication:

  1. Basic Auth: Use basic authentication with the basicAuth.user and basicAuth.password values in the configuration file. If you want to use the chart's secret file, set basicAuth.enabled to true. Otherwise, you can provide your own secret with basicAuth.enabled set to false and overrides.basicAuth set to the secret name. The provided secret should contain the keys basic_auth_user and basic_auth_password; in this case you don't need to set basicAuth.user and basicAuth.password. The request must contain the Authorization header with the value Basic base64(username:password).

  2. Bearer Auth: Use bearer authentication with the bearerAuth.keys value in the configuration file. The bearerAuth.keys value should be a comma-separated list of keys that are allowed to access the admin routes. If you want to use the chart's secret file, set bearerAuth.enabled to true. Otherwise, you can provide your own secret with bearerAuth.enabled set to false and overrides.bearerAuth set to the secret name. The provided secret should contain the key bearer_auth_keys; in this case you don't need to set bearerAuth.keys. The request must contain the Authorization header with the value Bearer <key>.

  3. OAuth2: If you have an OAuth2 server running that supports OpenID Connect and JWKS, you can use the oauth2.domain configuration to secure the admin routes. Turn this on by setting oauth2.enabled to true. The service will then discover the JWKS from the given domain and validate JWTs with it. The request must contain the Authorization header with the value Bearer <JWT>.

Note that you can configure multiple authentication methods at the same time. The service tries to authenticate the request with the first method and, if that fails, tries the next one. If no auth method is enabled (the default), the admin routes are not exposed at all.
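For example, the Authorization header values mentioned above can be built like this (credentials are placeholders):

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    """Header value for option 1: Basic base64(username:password)."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

def bearer_auth_header(key: str) -> str:
    """Header value for options 2 and 3: Bearer <key or JWT>."""
    return f"Bearer {key}"

print(basic_auth_header("admin", "secret"))  # Basic YWRtaW46c2VjcmV0
print(bearer_auth_header("my-api-key"))      # Bearer my-api-key
```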

Example plan configuration:

plans:
  - premium:
      name: "AI Premium"
      plan: "paid"
      strategy: "monthly"
      limit: 100
      enabled: true
  - trial:
      name: "AI Trial"
      plan: "free"
      strategy: "duration"
      duration: "30d"
      limit: 25
      enabled: true
  - specialTrial:
      name: "AI Sneak Peek Week"
      plan: "free"
      strategy: "duration"
      duration: "7d"
      limit: 15
      enabled: true

How capabilities are matched to plans

The service will look for the capabilities property in the JWT. If capabilities are found, the service will look for a capability that matches the ID of a plan (premium, trial, or specialTrial, as in the example above). If a plan is found, the service will use the plan to track the user's usage. If no plan is found, the service will use the default plan (10,000 requests per month). If the user has more than one capability that matches a plan, the service will use the first one found and log a warning.
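The matching logic described above can be sketched like this. It is an illustration only, not the service's actual code; names are made up:

```python
# Sketch of capability-to-plan matching (illustrative only).
DEFAULT_PLAN = {"name": "default", "strategy": "monthly", "limit": 10000}

def match_plan(jwt_capabilities, plans):
    """Return the plan for the first capability that matches a configured
    plan id, falling back to the default plan (10,000 requests/month)."""
    matches = [cap for cap in jwt_capabilities if cap in plans]
    if len(matches) > 1:
        print(f"warning: multiple matching plans {matches}, using {matches[0]}")
    return plans[matches[0]] if matches else DEFAULT_PLAN

plans = {"premium": {"limit": 100}, "trial": {"limit": 25}}
print(match_plan(["webmail", "premium"], plans))  # the premium plan
print(match_plan(["webmail"], plans))             # the default plan
```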

How do I set the capabilities in the JWT?

The capabilities are set in the JWT by the Switchboard. The Switchboard will check for all App Suite capabilities of a user, then apply a filter so that only relevant capabilities are added to the JWT. This means you need to set the desired capability, for example, specialTrial, on the App Suite side for your user. This will most likely be handled by your upsell code, your administrators via the SOAP API, or the config cascade.

Once the user has the correct capability, the Switchboard needs to be configured to add the capability to the JWT. This is done by adding your capability to the Switchboard’s jwt.serviceCapabilities value, which by default is set to "switchboard,openai,zoom,jitsi" (be sure to add your capability rather than overwriting the defaults, e.g., "switchboard,openai,zoom,jitsi,specialTrial"). See the Switchboard documentation. This list acts like an allowlist for the user's capabilities. If the user has the capability, it will be added to the JWT.
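In the Switchboard chart's values, that could look like this (specialTrial is the example capability from above):

```yaml
jwt:
  # keep the defaults and append your plan capability
  serviceCapabilities: "switchboard,openai,zoom,jitsi,specialTrial"
```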

Steps to Configure the AI Service

  1. Fill out Database Configuration

    Add the following configuration to enable the database for the AI service:

     database:
       host: <your mysql host>
       name: <your db name>
       user: <your mysql user; omit if you provide your own secret>
       password: <password of your mysql user; omit if you provide your own secret>
    
  2. Alternative: Skip Database Usage

    If you don't want to add a database, it's currently still possible to run the service without a database. However, in this case some features like AI chat, usage tracking, or rate limiting are not available. Since the feature toggle for AI chat is "enabled" by default, you would need to disable the AI chat as noted above (io.ox/core//features/ai-chat=false).

     database:
       enabled: false
    

    Finally, also set mysqlSecret.enabled to false.

  3. Optional: Custom Database Secret

    If you want to use your own secret for the database, set mysqlSecret.enabled to false and provide the secret name in overrides.mysqlSecret.

  4. Prepare the Database

    The service expects an existing database to connect to and run initial migrations. Follow the steps below to set up the database correctly.

Database Schema Setup

To ensure the AI service operates correctly, you need to create the necessary tables and procedures in your database. Although the actual migration script will be executed automatically, it is essential to ensure that the database user has the appropriate permissions.

Minimal Required Grants

The minimal grants needed for the database user are:

GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, TRIGGER
ON `your_database_name`.*
TO 'your_db_user'@'%';

Replace your_database_name with the name of your database and your_db_user with the name of your database user.

Example Configuration

Here's an example of how to grant the necessary permissions:

CREATE USER 'ai_service_user'@'%' IDENTIFIED BY 'secure_password';

GRANT SELECT, INSERT, UPDATE, DELETE, CREATE, DROP, REFERENCES, INDEX, ALTER, CREATE TEMPORARY TABLES, LOCK TABLES, EXECUTE, SHOW VIEW, CREATE ROUTINE, ALTER ROUTINE, TRIGGER
ON `ai_service_db`.*
TO 'ai_service_user'@'%';

FLUSH PRIVILEGES;

In this example:

  • ai_service_user is the database user for the AI service.
  • secure_password is the password for the database user.
  • ai_service_db is the database name for the AI service.

Language support and i18n

Supported languages in the AI frontend

The AI frontend is translated into the following languages:

Language          Locale
English (US)      en_US
English (GB)      en_GB
German            de_DE
French            fr_FR
Spanish (ES)      es_ES
Spanish (MX)      es_MX
Dutch             nl_NL
Polish            pl_PL
Japanese          ja_JP
Italian           it_IT
Portuguese (BR)   pt_BR
Swedish           sv_SE
Danish            da_DK
Finnish           fi_FI
Turkish           tr_TR

The fallback language is always English (en_US).

LLM language support

Not every LLM that can be connected offers the same level of language support. This is important, as users interact directly with the LLM through AI Chat and other user-generated input (e.g., when composing emails).

For ChatGPT 4o and newer, the following table categorizes languages based on their level of support:

Top Tier (Best Support): English (en), Spanish (es), German (de), French (fr), Portuguese (pt), Italian (it), Dutch (nl), Japanese (ja), Chinese (zh)

Strong (Good Support): Swedish (sv), Danish (da), Norwegian (no), Finnish (fi), Polish (pl), Russian (ru), Turkish (tr), Romanian (ro)

Functional (Basic Support): Czech (cs), Hungarian (hu), Greek (el), Korean (ko), Hebrew (he), Thai (th), Arabic (ar)

For reference:

  • The LLM supports many languages, categorized by proficiency levels (e.g., Top Tier, Strong, Functional).
  • The UI is translated into a subset of these languages, covering those with strong user demand and reliable localization, matching the most important languages in App Suite.

This means that while users can interact with the LLM in many languages, the application interface itself is only available in the languages listed above. As a fallback, users can always interact with the chat in English, for example.

Translation feature

The AI Integration includes a dedicated translation feature for emails. By clicking on the AI dropdown in the email toolbar, users can choose to translate emails into their preferred language.

The first option always translates the email into the user's currently set locale. For example, if the user's UI language is set to German, we assume this is their preferred language for translations. The second option is always English, as it is the most commonly used language in our context and serves as a reliable intermediary for translating foreign languages using the LLM.

The following languages are currently supported for translation. This list may change in the future. For UX reasons, we do not include every possible language, as an excessively long dropdown would negatively impact usability.

Locale    Language
en        English
es        Spanish
de        German
fr        French
it        Italian
nl        Dutch
pt        Portuguese
ja        Japanese
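The ordering described above (user's locale first, English second) can be sketched as follows. This is an illustration, not the actual UI code; the function name is made up:

```python
# Translation targets supported by the dropdown, per the table above.
SUPPORTED = ["en", "es", "de", "fr", "it", "nl", "pt", "ja"]

def translation_targets(user_locale: str):
    """User's own language first, English second, then the rest."""
    lang = user_locale.split("_")[0]
    head = [lang] if lang in SUPPORTED else []
    if "en" not in head:
        head.append("en")
    return head + [l for l in SUPPORTED if l not in head]

print(translation_targets("de_DE"))  # ['de', 'en', 'es', 'fr', 'it', 'nl', 'pt', 'ja']
```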

Complete configuration reference

The following parameters are available in the Helm values file (defaults in parentheses):

  • image.repository: The image to be used for the deployment (default: registry.open-xchange.com/core/ai-service)
  • image.pullPolicy: The imagePullPolicy for the deployment (default: IfNotPresent)
  • image.tag: The image tag; defaults to the app version (default: "")
  • hostname: Hostname for the ai-service deployment (default: "")
  • origins: Allowed origins for CORS (default: *)
  • logLevel: Log level for the service (default: "info")
  • logJson: Logs in JSON format (default: false)
  • exposeApiDocs: Expose API documentation via Swagger UI at /api-docs (default: false)
  • ingress.enabled: Generate an ingress resource (default: false)
  • ingress.annotations: Map of key-value pairs added as annotations to the ingress resource (default: {})
  • overrides.name: Name of the chart (default: "ai-service")
  • overrides.fullname: Full name of the chart installation (default: "RELEASE-NAME-ai-service")
  • jwtSecret.enabled: Enable the secret for JWT (default: true)
  • jwt.sharedSecret: Shared secret for JWT verification; must match the secret configured for Switchboard (default: "")
  • jwks.domain: Domain of the JWKS issuer, like example.com; leave empty if you want to use sharedSecret (default: "")
  • openaiSecret.enabled: Enable the secret for OpenAI (default: true)
  • openaiAPIKey: OpenAI API key (default: "")
  • localaiSecret.enabled: Enable the secret for LocalAI; may be optional, depending on your used service and model (default: false)
  • localaiAPIKey: LocalAI API key; may be optional, depending on your used service and model (default: "")
  • azureSecret.enabled: Enable the secret for Azure (default: false)
  • azureAPIUrl: OpenAI Azure API URL (internal use only) (default: "")
  • azureAPIKey: OpenAI Azure API key (internal use only) (default: "")
  • mysqlSecret.enabled: Create the Kubernetes secret for MySQL; enable if you want to use the DB or provide your own secret (default: true)
  • overrides.mysqlSecret: If you provide your own secret for MySQL, put the secret name here (default: "")
  • database.enabled: Use the database. The database is mandatory for AI chat, usage tracking, and rate limiting; disabling it is therefore deprecated. With App Suite 8.34, the default changed to true. (default: true)
  • database.host: SQL server hostname (default: RELEASE-NAME-ai-service-sql)
  • database.name: Database name (default: RELEASE-NAME-ai-service)
  • database.connections: Number of concurrent connections to the DB server (default: "10")
  • database.user: DB user with access rights to the database (default: "")
  • database.password: Password of the DB user (default: "")
  • database.rootPassword: Database root password to perform admin tasks (default: "")
  • database.rollback: WARNING: rolls back the migrations this version has rolled out (default: false)
  • cron.cleanupDb: Database cleanup interval, in cron notation (default: 0 0 * * * *)
  • azureAPIVersion: OpenAI Azure API version (internal use only) (default: "")
  • openaiBaseUrl: URL of the OpenAI service (internal or LocalAI use only) (default: "")
  • localaiBaseUrl: URL of the LocalAI service; falls back to openaiBaseUrl if configured (default: "")
  • localaiModerationDisabled: Disable moderation for LocalAI independently of the global setting (moderation is usually not supported there) (default: true)
  • accounting.enabled: Enable the accounting feature (default: false)
  • chatAssistantContext: First (system) message sent to the AI chat to set up the mood and general context (default: "You are a helpful AI assistant.")
  • plans: List of plans with limits for users and brands (default: see example above)
  • models: List of models that the service supports (default: see example above)
  • enforceDefaultModel: Always use the configured default model regardless of the client's request (default: false)
  • disableAPI: Disable all API endpoints; only UI source files will be delivered (default: false)
  • moderationDisabled: Disable moderation for testing purposes (internal or LocalAI use only) (default: false)
  • awsSecret.enabled: Enable the secret for AWS (default: false)
  • awsRegion: Region of the AWS user with Bedrock & Claude enabled (default: "")
  • awsAccessKey: AWS IAM access key with Bedrock & Claude enabled (default: "")
  • awsSecretKey: AWS IAM secret key with Bedrock & Claude enabled (default: "")
  • awsBaseUrl: URL of the AWS service (internal use only) (default: "")
  • basicAuth.enabled: Enable the secret for basic authentication (admin routes) (default: false)
  • basicAuth.user: Username for basic authentication (admin routes) (default: "")
  • basicAuth.password: Password for basic authentication (admin routes) (default: "")
  • overrides.basicAuthSecret: If you provide your own secret for basic auth, put the secret name here (default: "")
  • bearerAuth.enabled: Enable the secret for bearer authentication (admin routes) (default: false)
  • bearerAuth.keys: Comma-separated keys to use with bearer auth (admin routes) (default: "")
  • overrides.bearerAuthSecret: If you provide your own secret for bearer auth, put the secret name here (default: "")
  • oauth2.enabled: Enable OAuth2 for admin routes (default: false)
  • oauth2.domain: Domain of the OpenID-Connect-capable OAuth2 server (default: "")
  • config.chat.history.pruneChatsOlderThan: Given in days; removes old chat entries via a nightly job (see cron.cleanupDb). The job is disabled by 0 or any falsy value. (default: 0)
  • config.features.autoTranslate: Feature toggle. Allow automatic email translations. (default: true)
  • config.features.chat: Feature toggle for "AI chat" (default: true)
  • config.features.suggestions: Feature toggle. Prompt suggestions are shown in the AI chat. (default: true)
  • config.features.smartTasks: Feature toggle. SmartTasks allow interactions with the UI; requires support for "tools" on the AI level, currently only available for OpenAI. (default: true)
  • config.features.typeahead: Feature toggle. Typeahead suggests text fragments during mail compose. (default: false)

AI-Service migration guide

This part should help clarify what is needed when upgrading from one version of the AI Service to another by providing a very basic example setup. It might NOT be complete or up to date; it is meant to help with what we feel are the most common obstacles you may encounter.

Migration from 2.6.x (requires App Suite 8.33) to 3.0.x (requires App Suite 8.34)

Helmchart

Model configuration

You need to add your models to the modelsConfig value in your helm chart, like so:

modelsConfig:
  defaultModel: "gpt-4o"
  models:
    gpt-4o:
      displayName: "ChatGPT 4o"
      service: "openai"
      model: "gpt-4o"
      maxTokens: 128000

This is a deliberately simple configuration with only one model. For further information, check the Models Configuration chapter in the documentation.

You may delete your previous models and/or services configuration. It is no longer needed.

Database

A database connection is now mandatory and needs to be configured.

JSlob

Consent dialog

For a very simple configuration you need to add something like the following config to your JSlob:

io.ox/core//ai/config/default/serviceName = 'OpenAI'
io.ox/core//ai/config/default/serviceLink = 'https://openai.com/about'
io.ox/core//ai/config/default/dateUsagePolicy = 'https://openai.com/policies/api-data-usage-policies'

Those links will be used in the consent dialog and should direct the user to the proper documents.

For more complex setups and configurations check the Models Configuration Chapter and the Model selection settings Chapter in the documentation.

Model setting

io.ox/core//ai/model = 'default' or empty

If your io.ox/core//ai/model setting is empty, you are already done. Otherwise, set it to 'default' or simply unset it. See Model selection settings for more information.

Last Updated: 4/8/25, 12:40 PM