Deployment Guide
Scope
The tooling and documentation provided in this repo assist in setting up labs of App Suite 8 on Kubernetes.
App Suite 8 is an application which cannot run standalone. Rather, it requires a number of prerequisites provided by the infrastructure for database, storage, networking, and potentially other services. These services need to be prepared in a suitable way, configured according to our application's requirements, and provisioned with credentials for the application to access them. We want to provide automation to create such self-contained, self-consistent labs.
Disclaimer
The contents of this repo are not part of the official software product deliverables of Open-Xchange. They come without any support or warranty.
It is contributed documentation in the same sense as e.g. the Quickinstall Guides of the App Suite 7 world.
Its goal is to provide information on how to create educational labs for further learning and as a basis for further work towards more production-ready configurations and systems, under the customer's own responsibility.
Introduction
As expected from recent k8s applications, App Suite 8 is to be installed in the usual `helm install` way. The most minimal installation command (under the assumption that all prerequisites are satisfied, see below) looks like:
helm install as8 oci://registry.open-xchange.com/appsuite/charts/appsuite --version 8.x.y --values values.yaml
However, in order to actually obtain a working installation, the configuration, which is entirely carried by `values.yaml` in this example, needs to be carefully prepared to match the environment, including access to the infrastructure (databases, file storage, mail service), proper configuration for incoming traffic, and so on. This implies that these components must be prepared beforehand.
This repository contains automation to cover the required efforts of preparing the infrastructure components and consistently configuring the App Suite application with the corresponding endpoints and credentials.
In detail, this consists of
- a script, `render.py`, which renders
- templates from the `templates/` subdirectory
- into an output directory (by default `rendered/lab`),
- using a default configuration `lab.default.yml`, applying
- further optional user configuration.
Rendered files include
- an installation script `install.sh` (or `install.ps1`) containing in particular the required `helm install` calls for App Suite 8 and the other components
- Helm chart configuration files (aka `values.yaml`) for App Suite 8 and our "batteries".
Covered components include
- Istio
- cert-manager
- MariaDB
- Redis
- Minio (for object storage)
- LDAP
- Keycloak
- Dovecot CE
- Postfix
Notes:
- The configuration of the components aims to be minimal, light-weight, easy to set up, and for lab use only. There is no implicit or explicit support of any kind for these. We try to make obvious design choices in a reasonable way, but favor simplicity over anything else, including production-readiness.
- We use Dovecot CE here as simple minimal IMAP service. For any production use, consider running Dovecot Pro in a best-practices setup.
- Minio is used here for its simplicity of deployment on lab scale. This does not imply any statement about support status for production use. Please refer to the official documentation.
Kubernetes Requirements and Nomenclature
App Suite 8 is a k8s application; thus, a k8s service is a prerequisite.
Currently we don't define special requirements on the k8s service. The baseline assumption is that if you use a CNCF-certified Kubernetes (see https://www.cncf.io/certification/software-conformance/), you should be fine.
See the K8S Infrastructure remarks for slightly more information on this topic.
Sizing-wise, it should be sufficient to have some 8 GB of memory in your lab k8s for minimal small deployments. If you deploy more features or do scale-out experiments, this can easily increase to 16 GB and beyond.
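To gauge what your cluster can accommodate, here is a quick hedged check of per-node memory (a minimal sketch using the `jq` prerequisite described below):

```sh
# List allocatable memory per node
kubectl get nodes -o json \
  | jq -r '.items[] | "\(.metadata.name): \(.status.allocatable.memory)"'
```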
Kubernetes extensions
Istio plays a special role in the list of "covered components" above in the sense that it is not a k8s "application", but rather a "system-level k8s extension".
This manifests itself in:
- It installs global entities (like CRDs and non-namespaced resources)
- You can only have one Istio per k8s
- Many applications potentially share one global Istio installation
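As an illustration, the cluster-wide footprint of such an extension is visible in its CRDs; a hedged example (exact output depends on the installed Istio version):

```sh
# Cluster-scoped resources installed by Istio, e.g. its CRDs
kubectl get crd | grep istio.io
```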
Similar traits apply to cert-manager, though to a "less invasive" extent.
"Batteries"
In the sense of "batteries included", we like to call software components which are deployed and managed by this automation alongside App Suite, but which are not App Suite itself, "batteries". This applies to MariaDB, Redis, Minio, LDAP, Keycloak, Postfix, and Dovecot CE.
Where we found reasonable (authoritative upstream, lightweight, minimal, easily manageable) container images for the batteries, we use them. This applies e.g. to MariaDB, Redis, Minio, Keycloak, and Dovecot CE. Where we were not able to find reasonable container images, we cook our own; this currently applies to LDAP and Postfix. On top, we cook our own "Python Pod" `pypod` as a multi-protocol Python-based provisioning client.
For some batteries, we create a Helm chart which exposes the required configurables to allow for homogeneous inter-configuration of the batteries and App Suite itself. Other batteries are to be installed via upstream Helm charts or operators (this currently applies to Redis).
Preparations on the k8s client machine
Software prerequisites
Standard k8s clients:
- `kubectl`, including a working configuration (`~/.kube/config`)
- `helm`
- the `helm-diff` plugin, https://github.com/databus23/helm-diff
- `jq` and `yq` as programmatic JSON and YAML editors. Debian hints:

sudo apt install jq
sudo wget https://github.com/mikefarah/yq/releases/latest/download/yq_linux_amd64 -O /usr/bin/yq && sudo chmod +x /usr/bin/yq
- a recent Python 3 and the `venv` module to create virtual environments. Debian hint:

sudo apt install python3-venv
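A quick sanity check that the client tooling is in place (using the tools' own version commands):

```sh
kubectl version --client
helm version
helm plugin list    # should include the "diff" plugin
jq --version
yq --version
python3 --version
```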
The repo itself
The repo itself is available for cloning via our GitLab:
git clone https://gitlab.open-xchange.com/appsuite/operation-guides.git
The virtual Python environment
After cloning the repo, `cd` into it and create a Python virtual environment in the `v/` subdirectory:
cd operation-guides
python3 -mvenv v
v/bin/pip install --upgrade pip wheel
A few Python modules are required to be installed into it:
v/bin/pip install -r requirements.txt
k8s prerequisites
Image pull secrets should not be required as long as you use publicly released helm charts and images. In case you need access to private assets, the corresponding secrets need to be applied upfront to the k8s cluster (in the designated application namespace).
Currently two different secrets can be provided, one for accessing our core software deliverables, and one for the "batteries" container images.
In any case, the designated application namespace needs to exist:

kubectl create namespace as8
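If you do need access to private assets, creating such a pull secret could look like the following sketch (secret name, registry host, and credentials are placeholders; use the names expected by your configuration):

```sh
# Hypothetical example: image pull secret in the application namespace
kubectl create secret docker-registry core-pull-secret \
  --namespace as8 \
  --docker-server=registry.example.com \
  --docker-username="$REGISTRY_USER" \
  --docker-password="$REGISTRY_PASSWORD"
```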
The default lab configuration will create TLS certificates using a local self-signed CA. Thus, no certificates or secrets with regard to TLS need to be imported. Using externally created certificates is out of scope of this document and is described in the advanced documentation.
Rendering the templates
We will run the automated deployment based on files which are "rendered" from templates, in a configurable fashion. For simplicity, however, we postpone a detailed description of the rendering process and its customizability (including changing the chart and its version) to the advanced documentation, and focus on deploying a default lab here.
Still, we need to "render" the files, if only because we decided not to ship any default passwords. During the rendering process, random passwords will be generated.
v/bin/python render.py
This will read a config file `lab.yml` (which we ship with defaults in the repo), work on the templates in the `templates` subdirectory, and create rendered files in the `rendered/lab` subdirectory.
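If you want to deviate from the defaults, adjust `lab.yml` before rendering; a minimal hedged sketch using `yq` (the `as_hostname` key is described below in "Accessing the web application"; the value is a placeholder):

```sh
# Override the App Suite hostname, then re-render
yq -i '.as_hostname = "as8.mylab.example"' lab.yml
v/bin/python render.py
```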
Lab installation
After rendering, we can change into the output directory.
cd rendered/lab
The rendered files include an installation script, by default a Bash `.sh` script, or a PowerShell `.ps1` script if you are on a Windows platform.
./install.sh
This usually takes a few minutes to install all the batteries and App Suite itself.
Note: to change the chart or its version, see the corresponding section in the advanced documentation.
Verifying the installation
kubectl get all -n as8
Verify that the `Pods`, `Deployments`, and `StatefulSets` become ready.
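To wait for readiness non-interactively, something like the following can help (a hedged sketch; the timeout is arbitrary):

```sh
# Block until all deployments in the namespace report availability
kubectl wait --for=condition=Available deployment --all -n as8 --timeout=600s
# StatefulSets have no such condition; check their READY column instead
kubectl get statefulsets -n as8
```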
Accessing the web application
The default configuration of the lab produces a service for web UI access of type `NodePort`. This was chosen for its universal availability. For other service type options, see the advanced documentation, section "Accessing the Web Application".
% kubectl get service -n as8 istio-ingressgateway
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
istio-ingressgateway NodePort 10.233.13.12 <none> 15021:30021/TCP,80:30080/TCP,443:30443/TCP 18m
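The relevant HTTPS node port and your node IPs can also be read programmatically; a hedged sketch using `kubectl` jsonpath (service name and namespace as in the default lab):

```sh
# The nodePort mapped to the gateway's HTTPS port (443)
kubectl get service -n as8 istio-ingressgateway \
  -o jsonpath='{.spec.ports[?(@.port==443)].nodePort}'
# Node IPs, for the /etc/hosts entry described below
kubectl get nodes -o wide
```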
To connect to App Suite via this service, two more things need to be done:
1. The default lab configuration configures a hostname of `as8.lab.test` (via the `as_hostname` key in `lab.yml`). This is not a valid publicly resolvable DNS name, so you need to add a local `/etc/hosts` entry for that purpose. (Windows: the `hosts` file location is something like `C:\Windows\system32\drivers\etc\hosts`.) The example assumes a `10.50.2.89` IP for your k8s node (or one of your k8s nodes).

   10.50.2.89 as8.lab.test
2. The default lab configuration uses a self-signed CA to create a certificate for your App Suite. You need to add this certificate to the trust store of your computer. It usually does not work to skip this and use "exception rules" in the browser.

   In the default configuration (i.e. if you did not decide to bring your own certs), the CA certificate should be extracted from the k8s secret into your working directory. If that fails, grep for `cacert.pem` in the install script for how to do it manually. The resulting `cacert.pem` file needs to be imported into the trust store of your computer. (Note for Debian/Ubuntu: `update-ca-certificates` only picks up files with a `.crt` extension, hence the rename in the copy below.)

   | System | Command |
   | ------ | ------- |
   | Debian, Ubuntu (`update-ca-certificates`) | `sudo cp cacert.pem /usr/local/share/ca-certificates/cacert.crt && sudo update-ca-certificates` |
   | Fedora et al (`p11-kit`) | `sudo trust anchor cacert.pem` |
   | macOS | `sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain cacert.pem` |
   | Windows | `Import-Certificate -FilePath cacert.pem -CertStoreLocation Cert:\LocalMachine\Root` in an elevated PowerShell |
Then you can connect to your service via `curl` or the web browser of your choice, pointing to a URL like
https://as8.lab.test:30443/appsuite/
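For example, with `curl` and the extracted CA certificate (hostname and port as per the default lab configuration):

```sh
# Verify TLS and reachability without touching the system trust store
curl --cacert cacert.pem https://as8.lab.test:30443/appsuite/
```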
Logging in
TL;DR:
- Username: `testuser@10`
- Password: see the `secrets.provision.contexts.10.testuser.plain` entry in the `values.pypod.secret.yaml` file in your `rendered/lab` directory
Longer version:
The automation provisions contexts and users. Have a look at the rendered `values.pypod.secret.yaml` file. It contains a list of users and their (randomly generated) passwords.
The default lab is set up without LDAP and Keycloak, using the App Suite DB for authentication and consistent `passwd` file based authentication in Dovecot.
On accessing the URL mentioned above, the web browser should display the App Suite built-in login page. Choose a login name of the form `<username>@<numerical-context-id>`, like `testuser@10`, and the password listed in the `secrets.provision.contexts.10.testuser.plain` key of the `values.pypod.secret.yaml` file.
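A hedged one-liner to read that password with `yq` (key path as described above; the numeric context key may need quoting):

```sh
# Print the generated password for testuser in context 10
yq '.secrets.provision.contexts."10".testuser.plain' rendered/lab/values.pypod.secret.yaml
```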
You should be able to log in and do some explorative testing (write a self-mail, create calendar or address book entries, upload a file, etc.).
Notes on Updating
There could be backward-incompatible changes at any time.
We try to explain breaking changes in the `UPDATING.md` file.
In general, the updating procedure of `operation-guides.git` looks like:
- In your git clone, pull for updates: `git pull`
- Read `UPDATING.md` to check for breaking changes
- Adjust your configuration if needed
- Re-render your lab
- Redeploy (or update) your lab (see the sketch below)
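A minimal sketch of that cycle, assuming a default lab and that re-running the rendered install script performs the redeploy:

```sh
# From the root of your operation-guides clone
git pull
less UPDATING.md            # check for breaking changes first
v/bin/python render.py      # re-render with your (possibly adjusted) lab.yml
cd rendered/lab
./install.sh                # redeploy / update the lab
```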
If something breaks, please go back to testing a default lab (no custom `lab.yml` configuration). The default lab should always work. If the default lab is broken, contact us. If the default lab works, add your custom configuration back step by step, verifying each step along the way.
Note that new versions of App Suite itself sometimes require updates to `operation-guides.git`; therefore, please use new versions of App Suite only after they have become the default version (`as_chart_version` in `lab.default.yml`). We always update the default version to the latest tested and verified App Suite version.