Advanced topics
Discussion of the rendering mechanism
We render files from a `templates` directory into a target directory (by default `rendered/lab`) using the `render.py` script, which is based on the Python `jinja2` framework and applies defaults from the `lab.default.yml` configuration file.
In addition, the script automatically reads files matching the glob pattern `*.lab.default.yml`, allowing you to store defaults shared by all your labs there.
Additional (per-lab) configuration files can be provided with the `-f`, `--values` argument of the `render.py` script. Multiple files can be provided by using the switch multiple times; the given files are read in order.
The (default) output directory is `rendered/<basename-of-the-lab-yml-file>`. It can be overridden with the `-r`, `--rendered` argument.
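For example, a hypothetical invocation combining a shared example file, a per-lab file, and a custom output directory could look like this (the per-lab file and output directory names are placeholders):
% v/bin/python render.py -f examples/lab.keycloak.yml -f lab.mine.yml -r rendered/my-custom-lab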
In the `templates` directory, any `_startup.*.j2` files are "executed" in order (technically: prepended to each individual template's contents), which allows for further conditional variable initialization or any other `jinja2` script code which doesn't fit into the constraints of a (pure YAML) `lab.yml` config file.
In this fashion, every template file ending in `.j2` is rendered into a target file in the output directory with the same filename, minus the `.j2` ending.
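For intuition, a hypothetical startup file (not one shipped in this repo) could derive a default from another setting using plain jinja2 statements:
{# templates/_startup.99-example.j2 -- hypothetical illustration #}
{% set keycloak_hostname = keycloak_hostname | default("keycloak." ~ as_hostname.split(".", 1)[1]) %}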
Files ending in `.generic_script.j2` are a special case. These are cross-platform source files rendered to platform-specific scripts (Bash, Powershell). By default we render Bash; on Windows platforms, we render Powershell. This can be controlled via the `render_{sh|ps1}` configuration variables (see below).
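For intuition only (a hypothetical sketch, not necessarily the repo's actual convention), such a source file could branch on the target flavor inside the template:
{# example.generic_script.j2 -- hypothetical illustration #}
{% if render_sh %}
echo "Installing into namespace {{ namespace }}"
{% endif %}
{% if render_ps1 %}
Write-Host "Installing into namespace {{ namespace }}"
{% endif %}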
To consistently render randomly generated passwords into the target files, we have added a mechanism which creates and updates a local password storage on the fly. If a password (identified by a label) has not been generated yet, it is generated randomly; on subsequent occurrences of the same label, the previously generated password is reused. For more details, see the `render.py` script source.
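A minimal sketch of the idea (not the actual `render.py` implementation; the file name and function name are assumptions):
import json
import secrets
from pathlib import Path

STORE = Path("passwords.json")  # assumed location of the local password storage

def password_for(label: str, length: int = 24) -> str:
    """Return the password stored under `label`, generating and persisting it on first use."""
    store = json.loads(STORE.read_text()) if STORE.exists() else {}
    if label not in store:
        # first occurrence of this label: generate and persist a random password
        store[label] = secrets.token_urlsafe(length)
        STORE.write_text(json.dumps(store, indent=2, sort_keys=True))
    return store[label]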
Concluding remark: the `render.py` script and the surrounding construction do not claim to showcase best-practice configuration management for k8s. It is meant as a low-effort vehicle with no value by itself (and shall not be a subject of study itself); rather, its generated output files are meant as an educational vehicle to demonstrate how to set up labs based on our company's official deliveries (core product helm charts and container images).
Discussion of the rendered files
- `install.{sh,ps1}` aims to be a human-readable document which shows how the individual components (batteries and App Suite itself) are to be installed.
- The main installation of App Suite itself is one `helm install` invocation on the so-called "Stack Chart" (the default vanilla chart currently lives at `oci://registry.open-xchange.com/appsuite/charts/appsuite`).
- The stack chart installation is invoked with two values files: `values.yaml`, carrying the entire App Suite configuration including sizing, replication factors, feature set, etc., and `values.secret.yaml`, carrying all secrets. Technically this separation does not matter for the `helm install` behavior, but it allows for different treatment of the files (unencrypted git commit vs. special treatment of sensitive data).
- There is also an uninstall script `uninstall.{sh,ps1}` to allow for quick and easy redeployment cycles.
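The core invocation in `install.{sh,ps1}` therefore has roughly the following shape (a sketch; the release name, namespace, and chart version are the example values used elsewhere in this document):
% helm install as8 oci://registry.open-xchange.com/appsuite/charts/appsuite \
    --version 8.20.405 --namespace as8 \
    -f values.yaml -f values.secret.yaml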
About the install / uninstall scripts
The scripts feature so-called "light" and "medium" run modes.
"Light" mode is intended to only (un)install App Suite while keeping the batteries and their data intact.
"Medium" mode is intended to (un)install App Suite and the batteries, while still keeping their data (PersistentVolumes) intact.
By default, the Persistent Volumes as well as the application namespace and its secrets (pull secrets, TLS secrets) are preserved. If you want to clean these up as well, the uninstall script has command line options for that.
Customizing the installation
Start by using one or more of the provided example config files in the `examples/` subdirectory.
For instance, you can create a lab which uses keycloak (and ldap) as follows:
% v/bin/python render.py -f examples/lab.keycloak.yml
You can create your own configurations and "plug and play" fragments from the different example files into your own `lab.whatever.yml` file.
If there are configuration settings which you always use in your labs, consider putting them in a file like `my.lab.default.yml` in the repo root directory. It will automatically be read on every rendering invocation. Good candidates to put there are settings related to your (custom) domain name, TLS certs, and other platform / k8s related settings.
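For example, such a file might contain (the values are placeholders; the setting names are explained further below):
# my.lab.default.yml
as_hostname: as8.lab.test
as_hostname_dc: dc=as8,dc=lab,dc=test
istio_service_type: LoadBalancer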
Render the files. The `render.py` script will by default render into a dedicated output directory based on the basename of your config file, e.g. `lab.mine.yml` -> `rendered/lab.mine`.
A `-V` switch is available to give more diagnostic output (in particular, it shows the changes to the rendered files) and is generally recommended.
% vim lab.mine.yml
% v/bin/python render.py -f lab.mine.yml
% cd rendered/lab.mine
% ./install.sh # or install.ps1
Discussing typical customization use cases
The following is a collection of typical use cases with the respective settings in the `lab.yml` file.
Configure which chart is installed
Non-customized charts
By default, we use the `appsuite/charts/appsuite` stack chart to create a lab based on the vanilla (non-customized) Open Source release of App Suite 8. This version is released publicly, i.e. without requiring pull credentials. If unsure, use this one.
There is also the `appsuite-pro/charts/appsuite-pro` stack chart which contains proprietary components. Pulling this chart (and the images referenced therein) requires authentication (see below).
To use the open source stack chart, you don't need to configure anything; the default config as per `lab.default.yml` applies:
as_chart: oci://registry.open-xchange.com/appsuite/charts/appsuite
To use the `appsuite-pro` stack chart, you need to configure the location and add a pull secret:
as_chart: oci://registry.open-xchange.com/appsuite-pro/charts/appsuite-pro
as_pullsecret: <your-pullsecret>
The pull secret needs to be created by you before running the installation script for example with:
kubectl create secret docker-registry -n as8 as8-pullsecret --docker-server=registry.open-xchange.com --docker-username=... --docker-password=...
The corresponding credentials are communicated to you via your OX sales or services representative.
Customized charts
There are also customized versions of the charts, either released publicly, like `appsuite-public-sector/charts/appsuite-public-sector`, or private charts like `appsuite-<customername>/charts/appsuite-<customername>`, the latter again requiring pull credentials. Refer to the release notes of your custom release for the exact chart name.
The customized charts are technically implemented as meta charts, containing one of the non-customized stack charts as a sub chart, next to sub charts implementing the customizations.
This has the effect that the rendered `values.yaml` file needs to be structured differently (with the settings of the stack chart nested under a top-level entry like `appsuite` or `appsuite-pro`). This is configured via the rendering mechanism as follows:
as_chart: oci://registry.open-xchange.com/appsuite-public-sector/charts/appsuite-public-sector
use_nested_chart: true
By default, the name of the nested stack (sub) chart is `appsuite`, which is the correct value e.g. for the public sector release. Most custom releases, however, are based on the `appsuite-pro` chart, which can be configured as follows:
as_chart: oci://registry.open-xchange.com/appsuite-<customer-name>/charts/appsuite-<customer-name>
use_nested_chart: true
nested_chart_anchor: appsuite-pro
Again, for accessing custom releases, a pull secret is required; see above.
App Suite Version
Or, technically, the version of the chart to install.
This, again, is configured by default in `lab.default.yml`:
as_chart_version: 8.20.405
As usual, you can override this in your `lab.yml` file.
The key question is then how to identify available versions. For technical reasons we can't provide WebUI access to our registry (which is based on Harbor), but its APIs are available, and you can use the following tools to conveniently access them.
Skopeo
"`skopeo` is a command line utility that performs various operations on container images and image repositories." (https://github.com/containers/skopeo)
Although not mentioned as a primary use case, you can browse chart versions with `skopeo` as well.
% skopeo list-tags docker://registry.open-xchange.com/appsuite/charts/appsuite
{
"Repository": "registry.open-xchange.com/appsuite/charts/appsuite",
"Tags": [
"8.19.369",
"8.19.372",
"8.19.373",
"8.19.374",
"8.19.375",
"8.19.376",
"8.19.377",
"8.19.378",
"8.19.379",
"8.19.380",
"8.20.398",
"8.20.403",
"8.20.404",
"8.20.405"
]
}
harbortour
We created a small Golang tool which uses the Harbor API to list available repositories and artifacts. We call it `harbortour` and for now it lives in this repo as a "battery". Thus, you find the source in `batteries/harbortour/image/harbortour`.
It's Golang, so you can install Go on your dev machine, build `harbortour` from source, and run it:
% cd batteries/harbortour/image/harbortour
% go build
% ./harbortour
appsuite/appsuite-toolkit
appsuite/cacheservice
[...]
We also build and release it as a container image. Thus, you can also invoke it directly as a container with `podman run` or `kubectl run` or similar. (In testing, we experienced that `kubectl run` truncates the output, so running via `podman` or as a locally built binary is recommended. Feedback is appreciated on this one.)
% podman run --rm registry.open-xchange.com/appsuite-operation-guides/harbortour:latest
appsuite/appsuite-toolkit
appsuite/cacheservice
[...]
The tool supports a few command line args:
% ./harbortour -h
Usage of ./harbortour:
-authorization string
Authorization String (base64(username:password))
-endpoint string
API Endpoint (default "https://registry.open-xchange.com")
-repository string
List artifacts for repository
So, to list private repositories, you can supply credentials via the `-authorization` parameter in the usual format of the base64-encoded `<username>:<password>` string.
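For example (the credentials are placeholders):
% ./harbortour -authorization "$(printf '%s' 'username:password' | base64)" -repository appsuite-pro/charts/appsuite-pro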
To list the versions of a repository, use the `-repository` parameter:
% ./harbortour -repository appsuite/charts/appsuite
appsuite/charts/appsuite sha256:39178e167c41069ce5764eb4348a0e2ac19df3621979f0811563aa071c0f3ece 8.20.405 2024-01-19T09:12:42.939Z
appsuite/charts/appsuite sha256:821f92849214f2d066f20fe5111a11948920f28c8f5b0a10ecc43db594607438 8.19.380 2024-01-19T09:12:32.097Z
[...]
The output format is (currently) one artifact per line with some metadata (including sha256 hash, tag, and push timestamp). The tag is the "version" you're looking for.
In this mode, the tool outputs the same information as `skopeo` (shown above), less performantly (because the Harbor API requires us to look it up in a two-step process with one call per artifact, thus requiring many roundtrips), but it works via the Harbor API, keeping the tool a "single API client" one.
Further use cases for querying the Harbor API via `harbortour` might be added later.
Batteries usage
You can choose to use our included batteries (for databases, storage, etc.), or to connect App Suite to existing services for the respective purpose.
The recommended strategy is to deploy the lab first with full "batteries included" and subsequently replace these batteries with their real counterpart services, if desired.
Note that running without a functional IMAP endpoint was possible in earlier versions of App Suite, but as of recently, the UI will not load without a successful IMAP connection. This means disabling the Dovecot CE battery (without replacing it with a proper external service) will render your lab unusable.
As of version 8.20, Redis is mandatory for some as8 components. It is recommended to use highly available Redis `sentinel` clusters. This can be configured with the configuration variables with `redis` in their name; see `lab.yml` and `lab.default.yml` for documentation comments.
Furthermore, for consistent user configuration across App Suite and Dovecot, we recommend using LDAP as well. For the most minimal labs, we also have a mode to run without LDAP, where App Suite and Dovecot are configured with consistent, but autonomous user databases (in App Suite's DB and in Dovecot files). However, this is really an anti-pattern for any "prod like" installation and some features are not available in this setup, so you're encouraged to configure your lab with `install_slapd: true` and `use_ldap: true`.
If you want to enable Keycloak on top, you can enable it with the settings `install_keycloak` and `use_oidc`.
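Put together, a typical "batteries included" lab fragment using LDAP and Keycloak might look like this (a sketch using the setting names documented below):
install_slapd: true
use_ldap: true
install_keycloak: true
use_oidc: true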
HTTPS routing, TLS termination, certificates
In contrast to earlier (version 7) lab setups of App Suite, it is no longer possible to run without TLS. TLS is required for various reasons (HTTP2, brotli) and, as a consequence, (valid) certificates and DNS names are required.
This lab automation supports two different flavors of managing incoming traffic:
- If your k8s supports `LoadBalancer` service types, we can use these.
- Otherwise, we can use the `NodePort` service type with static `nodePort` values.
The corresponding `lab.yml` setting is `istio_service_type: LoadBalancer|NodePort`. This setting will seed defaults for the Keycloak service type as well.
The "static node ports" solution is often used in conjunction with an external load balancer (e.g. HAproxy) which exposes the endpoints on public IPs (usually k8s nodes have only internal network access) and well-known port numbers. Configuring such a HAproxy instance is out scope of this repo.
For TLS termination, there are two options:
- We can configure TLS termination on application level (App Suite and, if applicable, Keycloak).
- We can configure the apps to run on plain HTTP and expect TLS termination to happen on an external load balancer.
The corresponding `lab.yml` setting is `tls_termination_in_apps: true|false`.
For certificates, there are multiple options.
- We can use certificates which have been created externally. These are to be provided to our automation via one or more k8s `secret`(s).
- We can use an existing `cert-manager` installation on the provided k8s and use it to create certificates.
- We can install `cert-manager` on the provided k8s, bootstrap a self-signed CA, and use it to create certificates.
The corresponding `lab.yml` settings are `certmanager_create_certificates`, `certmanager_use_embedded_issuer`, `certmanager_external_issuer_name`, and `certmanager_external_issuer_kind`.
Another degree of freedom is to create / use:
- one dedicated cert per application, or
- a multi-SAN (or wildcard) cert for all applications (or on the load balancer).
The `cert-manager` based variants will by default create per-application certs if TLS termination is supposed to happen on application level; if the external load balancer is to be used, a multi-SAN certificate is created for it instead. This can be overridden with the `lab.yml` variables `certmanager_create_application_certs` and `certmanager_create_multisan_cert`.
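As an illustration, a lab with static node ports, TLS termination in the apps, and an embedded self-signed cert-manager issuer could combine the settings like this (a sketch using the setting names above; consult lab.default.yml for the authoritative defaults):
istio_service_type: NodePort
tls_termination_in_apps: true
install_certmanager: true
install_certmanager_selfsigned_ca: true
certmanager_create_certificates: true
certmanager_use_embedded_issuer: true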
Script render targets
By default, we render Powershell scripts on Windows platforms, and Bash everywhere else.
To adjust that (or to override autodetection failures), you can define these variables in your `lab.yml`:
#render_sh: true
#render_ps1: false
Low-level discussion of the configuration values
Some settings have been discussed above. We don't repeat them here.
We discuss some important settings in the following. More optional overrides or default settings can be found in the `lab.default.yml` file and the `templates/_startup.*.j2` files. These files are documented and expected to be self-explanatory.
Basic settings
- `as_hostname`: DNS name to access the App Suite 8 WebUI after installation. Will also propagate (if applicable) into the Keycloak hostname by using the same "domain" with a `keycloak` hostname (unless `keycloak_hostname` is configured explicitly). Example value: `as8.lab.test`
- `as_hostname_dc`: LDAP `dc=` style writing of the previous setting. Example value: `dc=as8,dc=lab,dc=test`
- `release`: Release name (as in `helm install <release-name> <chart>`) for the App Suite Stack Chart. Example value: `as8`
- `namespace`: Namespace for the App Suite 8 release and the batteries. (Exceptions: some batteries also install things into their own namespaces, system namespaces, or as un-namespaced resources.)
k8s feature related
- `istio_service_type`: This corresponds to the decision about "HTTPS routing, TLS termination, certificates". This value will also propagate to Keycloak and other services if not provided explicitly for those. Example values: `NodePort` or `LoadBalancer`
- `batteries_disable_storage`: You need to configure this if your k8s does not offer storage volumes via a `StorageClass`. When configured, we simply do not mount volumes (which we usually do where it makes sense), which has several bad implications, in particular data loss on pod restarts and bad performance. However, if your k8s does not offer storage, but you want to do first experiments with as8, this might be your only choice. Just be sure to understand that this will never be a reasonable choice for anything but the most basic trivial first labs. Example value: `true` (defaults to `false`)
k8s extensions: Istio, cert-manager
- `install_istio`: Well, we need Istio. But maybe your k8s already has it installed, in which case we should not try to re-install it. Example value: `true`
For the cert-manager settings, see the decision above about "HTTPS routing, TLS termination, certificates".
- `install_certmanager`: Whether we should install cert-manager. You don't need to install it if you want to use an existing one, or if you don't want to use any cert-manager at all (because you want to provide externally created certs). Example value: `true`
- `install_certmanager_selfsigned_ca`: Whether we should install a self-signed CA (certificate authority). You don't need this if you want to use an existing cert-manager issuer (there are settings to configure that, see `lab.default.yml`; references: `certmanager_use_embedded_issuer`, `certmanager_external_issuer_name`, `certmanager_external_issuer_kind`). Example value: `true`
- `certmanager_create_certificates`: If you want to use externally created certs, you can set this to `false` and provide the certs via `tls` secrets. Secret names are then to be provided via the `<label>-_certificate_name` settings, see `lab.default.yml`. Example value: `true`
Batteries related
- `mysql_host`: Hostname of the database as used by the middleware pods to access it. When not using our battery, this is the place to put the hostname of the external DB service. Note: this automation currently does not support multiple DB instances and/or ConfigDB/UserDB separation; we put it all on one single DB service for simplicity. Example value: `mariadb`
- `install_mariadb`: If you don't want to install the `mariadb` battery because you want to use an external database, this is the place to disable it. Example value: `true`
- `use_redis`, `use_internal_redis`, `install_spotahome_redis_operator`, `install_spotahome_redis`, `install_bitnami_redis`: It was only valid to run without Redis in very early released versions of as8 (or even earlier unreleased versions). As of 8.20, `use_redis` needs to be set to `true`. The recommended way to run Redis is a `sentinel` cluster, which offers high availability. (See the example fragment after this list.)
  - For labs / the simplest setup possible, you can however use a standalone Redis instance which is deployed by the `core-mw` chart. You can enable this via the `lab.yml` setting `use_internal_redis`. In that case, make sure the other `install_redis_*` options are set to `false`, even if the defaults are `true`.
  - As of the time of writing, the officially supported / recommended method of installing Redis is the Spotahome Redis operator (https://documentation.open-xchange.com/8.22/middleware/login_and_sessions/redis_session_storage/01_operator_guide.html). Note that this is subject to change, as the Spotahome Redis Operator project seems to be stale. We expect to change the recommendation to the Bitnami Redis chart, see the next bullet points. For the time being, you should use the Spotahome Redis operator, however. Set the `install_spotahome_redis_operator` and `install_spotahome_redis` settings to `true` to have `as8-deployment` install the operator and the Redis cluster itself.
  - To use the Bitnami Redis chart, use the setting `install_bitnami_redis: true`.
  - Finally, if you want to connect to some external Redis, use the `redis_host` setting to override the hostname of your Redis instance (cluster). Set the various `install_redis_*` settings to `false` in that case.
- `install_minio`: Like `install_mariadb`, but for `minio`. Example value: `true`
- `install_slapd`: Like `install_mariadb`, but for `slapd` (the LDAP server). Example value: `true`
- `use_ldap`: If you want to use LDAP. Independent from the `install_slapd` setting because you might want to use an external LDAP service. Example value: `true`
- `install_keycloak`: Like `install_mariadb`, but for Keycloak. Example value: `true`
- `use_oidc`: Like `use_ldap`, but for OIDC. Example value: `true`
- `install_dovecot`: Like `install_mariadb`, but for Dovecot CE. Example value: `true`
- `install_postfix`: Like `install_mariadb`, but for Postfix. Example value: `true`
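As an example, the Spotahome-operator-based Redis setup referenced in the list above corresponds to a lab.yml fragment roughly like this (a sketch using the documented setting names; check lab.yml / lab.default.yml for the authoritative defaults):
use_redis: true
use_internal_redis: false
install_spotahome_redis_operator: true
install_spotahome_redis: true
install_bitnami_redis: false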
Accessing the Web Application
Some basic discussion which was omitted in the basic documentation is appropriate here.
We require the entire web application to run on host names (rather than just IPs) for the following reasons:
- HTTP routing happens based on the `Host:` headers sent by the clients. In load balancers, but also in our Istio setup, HTTP requests must match by the `Host:` header and by the `path` to be routed to the correct next hop.
- TLS is required by now (since as8); in contrast to earlier versions, it is technologically no longer possible to run plain HTTP without TLS. This is because certain technologies we employ (`http2`, `brotli`, potentially others) are only defined to work on top of TLS connections, and, for example, browsers will refuse to accept brotli-encoded data if transported over plain (non-TLS) HTTP. So even if you assess that for security / privacy reasons your internal lab doesn't need TLS and would be fine running over non-encrypted communication, it technically doesn't work.
- Canonically, you use TLS with hostnames.
`NodePort` without external load balancer
Scenario
For the case without Keycloak, this is discussed in the main `README.md` file.
The extension to Keycloak is straightforward. Keycloak will also listen on a node port. The port number is configured statically via `lab.default.yml` and can be read from the `kubectl get service -n as8` output:
kubectl get service -n as8
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
[...]
istio-ingressgateway NodePort 10.233.32.71 <none> 15021:30021/TCP,80:30080/TCP,443:30443/TCP 99s
keycloak NodePort 10.233.11.99 <none> 80:31537/TCP,443:30614/TCP 3m53s
[...]
You'll need an additional hostname for the k8s node IP address in your `hosts` file:
10.50.2.89 as8.lab.test keycloak.lab.test
On accessing the `https://as8.lab.test:30443/appsuite/` URL, App Suite will initiate the OIDC flow to the proper Keycloak URL `https://keycloak.lab.test:30614/appsuite/`. The redirect back is currently broken (fixes welcome): it redirects to `https://as8.lab.test/appsuite/` (the port gets lost). Login can still succeed by adding the `:30443` port to the URL and using the browser's reload button.
Note that the configuration of this setup is based on the default value `tls_termination_in_apps: true`, which has the effect that the "apps" (App Suite, Keycloak) are provided with TLS certs and configured to listen on HTTPS.
`NodePort` with external load balancer
Scenario
The principles about DNS (or `/etc/hosts`) names (required for TLS cert verification and proper HTTP routing) apply as before. This time, however, the names need to point to the load balancer. The load balancer then uses IPs (or locally resolvable host names) to connect to the k8s nodes' node ports.
This setup is typically used with TLS termination on the load balancer, and the "apps" (App Suite, Keycloak) are configured to expose plain HTTP. The load balancer forwards / reverse-proxies the traffic to the node ports of the applications on the k8s nodes. The apps are still configured to expect TLS to be in place from the point of view of the clients (browser, etc.), but TLS termination happens "somewhere" in front of the apps (here, on our load balancer).
We offer a sample configuration and very basic sample installation automation for creating such a load balancer based on the Envoy Service Proxy (https://www.envoyproxy.io/), a CNCF graduated project created for exactly such use cases. Please note that this sample installation automation is very basic and does not meet best practices in terms of "declarative idempotent configuration automation", because this is, also from the tooling point of view, out of scope of this repo. It is only intended to create a working installation under certain "most commonly used" circumstances, and to show how it works in principle, even if you need to create an adjusted configuration based on our examples.
The envoy configuration is parametrized as follows (a consolidated example fragment follows the list):
- `install_envoy: true` toggles the corresponding sections in the `install.{sh,ps1}` script and the rendering of the `envoy.yaml` configuration file.
- `tls_termination_in_apps: false` will be set by default automatically if you pick `install_envoy: true`. It configures App Suite and Keycloak to offer plain-HTTP endpoints while being configured to expect and rely on external TLS termination.
- `envoy_ssh_target: "debian@172.24.2.2"` configures the SSH endpoint to install `envoy` on. In realistic setups this will be a dedicated machine accessible from "the internet" and connecting to "inner networks", while in simple minimal 1-machine labs, you can even install `envoy` on your (single) `kubelet` node (there are usually no port conflicts).
- `envoy_kubelet_addresses: ["172.24.2.2"]` configures your kubelet addresses. In the (default) setup where as8 and Keycloak run on k8s and are exposed via NodePorts, the `envoy_as8_addresses` and `envoy_keycloak_addresses` are (by default) derived from the `envoy_kubelet_addresses` setting to avoid configuration redundancy. The endpoints for App Suite and Keycloak can also be given separately, e.g. to point to an external Keycloak. By default we assume that App Suite and Keycloak run in our k8s, installed by our `render.py` based tooling, and are to be connected to via the IP address(es) of the kubelet node(s). The kubelet address can be configured here; there is also a way to derive it from the k8s API (`kubectl get node`), see the next section.
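Putting it together, a minimal lab.yml fragment for this scenario might look like this (the host and addresses are the placeholder example values from the list above):
install_envoy: true
tls_termination_in_apps: false
envoy_ssh_target: "debian@172.24.2.2"
envoy_kubelet_addresses: ["172.24.2.2"]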
From the end user (test user) point of view, login then goes to the `https://as8.lab.test/appsuite/` address (no custom port specification necessary, assuming your load balancer runs on the standard port 443) and, if Keycloak is configured, the redirects go to `https://keycloak.lab.test/appsuite/` and back to `https://as8.lab.test/appsuite/`.
Envoy autoconfiguration
To facilitate testing automation, we added the feature that you can set the value `%%auto%%` for the keys `envoy_ssh_target` and `envoy_kubelet_addresses`.
For the `envoy_ssh_target` setting, this has the effect that the install script queries a kubernetes `ConfigMap` to obtain these values. From that point of view, it is not really "automatic configuration", but rather "indirect configuration" via the k8s ConfigMap. The main purpose is that you can store your configuration in your (lab) k8s and can easily switch between different k8s clusters without adjusting the local as8-deployment configuration.
At the moment, the ConfigMap is expected to look like
apiVersion: v1
kind: ConfigMap
metadata:
  name: k8s-setup
  namespace: default
data:
  envoy_access_ip: 172.24.2.2
  envoy_user: debian
If you want to use the "envoy autoconfiguration mode", you need to prepare this ConfigMap beforehand. (Hint: automate it in your lab k8s deployment automation.)
The install script will use something like ...
kubectl get configmap k8s-setup -o jsonpath='{.data.envoy_user}@{.data.envoy_access_ip}'
... to generate the `envoy_ssh_target` (of the form `<user>@<host>`) for the ssh login to the envoy host for installing envoy.
For `envoy_kubelet_addresses`, the behavior is slightly different. Here, we use "true automatic self-configuration" by querying the k8s API to obtain the kubelet IPs:
kubectl get node -o yaml | yq '(.items[].status.addresses.[] | select (.type=="InternalIP") | .address) as $item ireduce({"kubelets": []}; .kubelets += $item)'
Technically, this list of kubelet IPs is again injected into the envoy configuration by jinja2, but not at rendering time; rather, at installation time.
(More explicitly: at rendering time, a "template template" `envoy.yaml.j2.j2` is rendered to a template `envoy.yaml.j2` as part of the regular `render.py` rendering mechanism. The install script later calls a standalone CLI version of jinja2 to render the `envoy.yaml.j2` file to a temporary `envoy.yaml` file (in a temp location with a temp filename), which is then copied to the envoy host and installed as `envoy.yaml` there. This "late rendering" is required so that applying the configuration to a different k8s does not require re-rendering the entire configuration.)
`LoadBalancer`
Scenario
Let's look at a sample `kubectl get service -n as8` output:
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
[...]
istio-ingressgateway LoadBalancer 10.233.11.136 172.17.6.203 15021:31983/TCP,80:31211/TCP,443:30076/TCP 38m
keycloak LoadBalancer 10.233.40.156 172.17.6.202 80:31647/TCP,443:30556/TCP 40m
[...]
Again, like before, you need working DNS (or `/etc/hosts`) names for the IPs (listed in the `EXTERNAL-IP` column).
Like in the `NodePort` with external load balancer scenario, login then goes to the `https://as8.lab.test/appsuite/` address and, if Keycloak is configured, the redirects go to `https://keycloak.lab.test/appsuite/` and back to `https://as8.lab.test/appsuite/`.
Comparison with the legacy App Suite 7 installation procedure
Cf https://oxpedia.org/wiki/index.php?title=AppSuite:Open-Xchange_Installation_Guide_for_Debian_11.0
- Database installation: We do this while installing the "batteries" in the `install.{sh|ps1}` script. We have more such steps for minio, LDAP, and Keycloak though, as the scope of our lab is extended compared to the single-node box scope of the App Suite 7 quickinstall guide.
- JRE installation: Does not need attention here. We ship images with the correct JDKs.
- Add Open-Xchange Repository: Could be compared to configuring access (credentials) to our helm and image registry https://registry.open-xchange.com/ .
- Updating repositories and installing packages:
  - In the k8s world, installation and configuration of the software happen in the combined `helm install` step. The legacy guide uses an `oxinstaller` tool, which was a shell script to render some settings into some `/opt/open-xchange/etc` config files. While App Suite 8 (as of the time of writing) still uses such configuration files (a mechanism to replace this is currently under development), we no longer use this tool explicitly, but replaced it with a sophisticated mechanism to render helm `values.yaml` settings into configuration files.
  - Furthermore, this section also contains (without an explicit caption) initialization steps (`initconfigdb`, `registerserver`, `registerfilestore`, `registerdatabase`) which, in the k8s world, happen before (`initconfigdb`) or after (`register*`) the helm chart installation, with slightly different tooling (`initconfigdb` is (currently) invoked by a makeshift `kubectl run` construct, while the `register*` calls actually happen via SOAP from a Python/Zeep script).
- Configure services: The legacy guide refers to Apache configuration. In the k8s world, we replace Apache as the HTTP routing tool with Istio. Istio configuration happens during the `helm install` step.
- Creating contexts and users: No longer considered part of the installation itself, but rather something supposed to happen after installation. However, we still ship some provisioning tooling for this step as part of this repo to make it easy and convenient to get some test users (and contexts) out of the automation. This happens naturally at the very end of the procedure, in our case also via SOAP using a Python/Zeep script.
- Log files and issue tracking: Not exactly an installation step, but worth mentioning that logging primarily happens in the usual k8s ways (i.e. structured logging to stdout) and can be accessed via the usual tooling (`kubectl logs`). Further integration into modern log aggregation and analysis frameworks is currently in the domain of the on-premises customer.