Models Configuration
Starting with AI Service 3.x, the service uses a single configuration for defining models (e.g. GPT-4o, GPT-3.5) and services (e.g. OpenAI, LocalAI). The services and models parameters from 2.6.x are deprecated and should no longer be used.
The modelsConfig parameter defines the configuration for the AI models in use. It contains the default models and a list of all supported models the AI service can use. Clients send the desired model with each request to the service, and the service maps the requested model to the configured models. If a client requests a model that is not configured, the default model is used.
Info
This setting can be used to provide different pricing tiers to end users. A basic plan could be based on a cheaper AI model like GPT-4o mini, while a premium plan is based on the larger GPT-4o.
- defaultModel: The id of the default model that should be used.
- privateDefaultModel (optional): The id of a model that is offered as a possible alternative to OpenAI. This could be a LocalAI model hosted in your own data center, which offers more data privacy.
The models parameter defines an object of predefined AI service (openai, localai, etc.) and model (gpt-4, etc.) combinations that are supported by the deployment.
key: The key serves as an id.
value: Object with the following properties:
- displayName: The display name of the AI model.
- service: The name of the AI service. Allowed values are: openai, claude-aws and localai.
- model: The name of the model you want to use, e.g. gpt-4o.
- maxTokens (optional): The maximum number of tokens that the model can handle.
Minimal example for a default modelsConfig:
modelsConfig:
defaultModel: "gpt-4o"
models:
gpt-4o:
displayName: "ChatGPT 4o"
service: "openai"
model: "gpt-4o"
maxTokens: 128000
Full-featured modelsConfig for advanced usage:
modelsConfig:
defaultModel: "gpt-4o"
privateDefaultModel: "llama2"
models:
gpt-4o:
displayName: "ChatGPT 4o"
service: "openai"
model: "gpt-4o"
maxTokens: 128000
gpt-4:
displayName: "ChatGPT 4"
service: "openai"
model: "gpt-4"
maxTokens: 8192
gpt-3.5-turbo:
displayName: "ChatGPT 3.5-turbo"
service: "openai"
model: "gpt-3.5-turbo"
maxTokens: 16385
llama2:
displayName: "LLaMA 2"
service: "localai"
model: "Llama-2-7b"
maxTokens: 4000
Using LocalAI with a self-hosted LLM
The AI Service supports LocalAI, which allows you to use a self-hosted LLM. LocalAI acts as a drop-in replacement compatible with the OpenAI API.
Set service to localai in your models config:
modelsConfig:
defaultModel: "llama2"
models:
llama2:
displayName: "LLaMA 2"
service: "localai"
model: "Llama-2-7b"
maxTokens: 4000
Additionally, set the localaiBaseUrl to point to your LocalAI endpoint. If no localaiBaseUrl is set but an openaiBaseUrl is configured, the latter will be used as a fallback.
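A minimal sketch of how this could look, assuming localaiBaseUrl is set as a top-level property in the same configuration file as modelsConfig (the endpoint URL is a placeholder for your own deployment):

localaiBaseUrl: "http://localai.internal.example.com:8080" # assumption: replace with your LocalAI host
modelsConfig:
  defaultModel: "llama2"
  models:
    llama2:
      displayName: "LLaMA 2"
      service: "localai"
      model: "Llama-2-7b"
      maxTokens: 4000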
Configuring a local AI and a hosted AI in parallel
To use OpenAI ChatGPT and LocalAI in parallel, you can either run two separate AI Service instances with different configurations or configure a single AI Service instance to connect to both OpenAI and a LocalAI.
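For the single-instance approach, the combined configuration could look like the following sketch. The base URL placement and endpoint are assumptions; the model entries are taken from the full-featured example above:

localaiBaseUrl: "http://localai.internal.example.com:8080" # assumption: your LocalAI endpoint
modelsConfig:
  defaultModel: "gpt-4o"
  privateDefaultModel: "llama2"
  models:
    gpt-4o:
      displayName: "ChatGPT 4o"
      service: "openai"
      model: "gpt-4o"
      maxTokens: 128000
    llama2:
      displayName: "LLaMA 2"
      service: "localai"
      model: "Llama-2-7b"
      maxTokens: 4000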
io.ox/core//ai/model = "gpt-4o" (default is 'default')
The setting io.ox/core//ai/model must map to the defined model ids. It also accepts the values default and private. Leaving this unset uses the default model.
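For example, with the configuration above, each of the following would be a valid value (default and private select the configured defaultModel and privateDefaultModel, respectively):

io.ox/core//ai/model = "default"
io.ox/core//ai/model = "private"
io.ox/core//ai/model = "llama2"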
Info
When configuring the dual use of different models via privateDefaultModel, ensure that the correct consent texts are provided for both models. In most cases, legal requirements mandate obtaining separate consent for each model. See Model Selection Settings.
Example configuration for consent links
io.ox/core//ai/config/default =
{
serviceName: 'OpenAI',
serviceLink: 'https://openai.com/about',
dataUsagePolicy: 'https://openai.com/policies/api-data-usage-policies'
}
io.ox/core//ai/config/private =
{
serviceName: 'ExampleAI',
serviceLink: 'https://example.com/about',
dataUsagePolicy: 'https://example.com/api-data-usage-policies'
}
The consent popup will use the correct links from the config when a user switches between models.
There are two special values for this jslob setting: 'default' and 'private'. Use these if you simply want to use the configured default models from the AI Service.
Configuring Model Selection in App Suite
You can allow users to choose between a private and a default AI model. This enables, for example, offering both a self-hosted model and ChatGPT in parallel, letting users select their preferred option.
Configuring the Model Switch in App Suite User Settings
To display the model selection option in the App Suite settings, the following configurations are required:
1. Ensure Write Access
- The setting io.ox/core//ai/model must be writable.
- If this setting is protected, the model switch will not work.
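A sketch of how the setting can be made writable, assuming a standard App Suite middleware deployment where settings metadata is maintained in YAML files under /opt/open-xchange/etc/meta (the file name is hypothetical):

# /opt/open-xchange/etc/meta/ai.yml (hypothetical file name)
io.ox/core//ai/model:
    protected: false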
2. Define Model Labels
To label the models correctly in the selection menu, you need to configure their display names.
Use these settings for basic labels:
- io.ox/core//ai/config/default/displayName (for the default model)
- io.ox/core//ai/config/private/displayName (for the private model)
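For instance (the label values are illustrative):

io.ox/core//ai/config/default/displayName = "ChatGPT"
io.ox/core//ai/config/private/displayName = "LLaMA 2"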
For translated custom model names, use:
- io.ox/core//customStrings/ai/config/modelSelectionDefaultModel/en
- io.ox/core//customStrings/ai/config/modelSelectionPrivateModel/en
Important
For each locale that needs to be customized, you must place a key in the custom strings config, e.g. "en_US" or "de_DE".
It is also possible to set this per language, e.g. "en" or "de".
This applies to all settings under io.ox/core//customStrings.
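A brief sketch with illustrative label texts, using language keys for English and German:

io.ox/core//customStrings/ai/config/modelSelectionDefaultModel/en = "ChatGPT (hosted in the US)"
io.ox/core//customStrings/ai/config/modelSelectionPrivateModel/en = "LLaMA 2 (hosted in our own data center)"
io.ox/core//customStrings/ai/config/modelSelectionDefaultModel/de = "ChatGPT (gehostet in den USA)"
io.ox/core//customStrings/ai/config/modelSelectionPrivateModel/de = "LLaMA 2 (gehostet im eigenen Rechenzentrum)"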
3. Default Labeling Behavior
If only the displayName settings are provided, fallback labels will be used:
Default model:
- displayName: (Up to date knowledge, fast, hosted in US)
Private model:
- displayName: (Best privacy and data protection, hosted in EU)
If custom strings are defined, they will be used instead of these fallback labels.
4. Customize the Model Selection Explanation
To provide a custom explanation for model selection, set:
io.ox/core//customStrings/ai/config/modelSelectionExplanation
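For example (the explanation text is illustrative; per the note above, a locale or language key such as /en can be appended):

io.ox/core//customStrings/ai/config/modelSelectionExplanation/en = "Choose which AI model handles your requests. The private model runs in our own data center."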