Advanced Config Reference

The operator can be configured in two ways:

  1. Per-namespace backups. Optimal for shared clusters (see the sketch after this list)

  2. Global settings with namespaced schedules. Optimal for private clusters
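
For the first mode, each namespace carries its own Schedule with its own backend. Below is a minimal sketch of such a namespaced Schedule; the endpoint, bucket and secret names are placeholders, and the exact field names should be checked against the examples under manifest/examples/ for your operator version:

apiVersion: backup.appuio.ch/v1alpha1
kind: Schedule
metadata:
  name: schedule-test
spec:
  backend:
    s3:
      endpoint: https://s3.example.com   # placeholder endpoint
      bucket: my-namespace-backups       # placeholder bucket
      accessKeyIDSecretRef:
        name: backup-credentials         # placeholder secret
        key: username
      secretAccessKeySecretRef:
        name: backup-credentials
        key: password
    repoPasswordSecretRef:
      name: backup-repo                  # placeholder secret holding the restic repository password
      key: password
  backup:
    schedule: '0 3 * * *'
    keepJobs: 4
  check:
    schedule: '0 1 * * 0'
  prune:
    schedule: '0 4 * * 0'
    retention:
      keepLast: 5
      keepDaily: 14

The second mode is covered by the global settings described below: configure the backend once via the BACKUP_GLOBAL* variables and keep only the schedules in the namespaces.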

Environment variables

  • BACKUP_ANNOTATION the annotation to be used for filtering, default: k8up.syn.tools/backup

  • BACKUP_BACKUPCOMMANDANNOTATION set the annotation name where the backup commands are stored, default: k8up.syn.tools/backupcommand

  • BACKUP_CHECKSCHEDULE the default check schedule, default: 0 0 * * 0

  • BACKUP_DATAPATH where the PVCs should get mounted in the container, default: /data

  • BACKUP_FILEEXTENSIONANNOTATION set the annotation name where the file extension is stored for backup commands, default: k8up.syn.tools/file-extension

  • BACKUP_GLOBALACCESSKEYID set the S3 access key id to be used globally

  • BACKUP_GLOBALKEEPJOBS set the count of jobs to keep globally

  • BACKUP_GLOBALREPOPASSWORD set the restic repository password to be used globally

  • BACKUP_GLOBALRESTORES3ACCESKEYID set the global restore S3 accessKeyID for restores

  • BACKUP_GLOBALRESTORES3BUCKET set the global restore S3 bucket for restores

  • BACKUP_GLOBALRESTORES3ENDPOINT set the global restore S3 endpoint for restores (needs the scheme [http/https])

  • BACKUP_GLOBALRESTORES3SECRETACCESSKEY set the global restore S3 SecretAccessKey for restores

  • BACKUP_GLOBALS3BUCKET set the S3 bucket to be used globally

  • BACKUP_GLOBALS3ENDPOINT set the S3 endpoint to be used globally

  • BACKUP_GLOBALSECRETACCESSKEY set the S3 secret access key to be used globally

  • BACKUP_GLOBALSTATSURL set the URL where wrestic should post additional metrics globally, default: ""

  • BACKUP_IMAGE URL of the restic image, default: 172.30.1.1:5000/myproject/restic

  • BACKUP_JOBNAME names for the backup job objects in OpenShift, default: backupjob

  • BACKUP_METRICBIND set the bind address for the prometheus endpoint, default: :8080

  • BACKUP_PODEXECACCOUNTNAME set the service account name that should be used for the pod command execution, default: pod-executor

  • BACKUP_PODEXECROLENAME set the role name that should be used for pod command execution, default: pod-executor

  • BACKUP_PODFILTER the filter used to find the backup pods, default: backupPod=true

  • BACKUP_PODNAME names for the backup pod objects in OpenShift, default: backupjob-pod

  • BACKUP_PROMURL set the operator-wide default prometheus push gateway, default: 127.0.0.1/

  • BACKUP_RESTARTPOLICY set the RestartPolicy for the backup jobs. According to the Kubernetes docs this should be OnFailure for jobs that terminate, default: OnFailure

You only need to adjust BACKUP_IMAGE; everything else can be left at its default.
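
These variables are set in the environment of the operator's own Deployment. A sketch of what that could look like; the Deployment, service account and image names are placeholders, so take the actual values from your installation (manifest/install/ or the Helm chart):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8up-operator                    # placeholder name
spec:
  replicas: 1
  selector:
    matchLabels:
      name: k8up-operator
  template:
    metadata:
      labels:
        name: k8up-operator
    spec:
      serviceAccountName: k8up-operator  # placeholder service account
      containers:
        - name: operator
          image: k8up-operator:latest    # placeholder operator image
          env:
            - name: BACKUP_IMAGE
              value: 172.30.1.1:5000/myproject/restic   # the restic image the backup jobs run
            - name: BACKUP_GLOBALS3ENDPOINT
              value: https://s3.example.com             # example global S3 endpoint
            - name: BACKUP_GLOBALS3BUCKET
              value: global-backups                     # example global S3 bucket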

Global settings

Each variable starting with BACKUP_GLOBAL* configures a global default for all namespaces. For example, if you configure the S3 bucket and credentials globally, you won’t have to specify them in the Schedule or Backup CRDs.

It’s possible to override the global settings: simply set the specific configuration in the CRD and the operator will use that instead.
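
For example, with globally configured credentials and endpoint, a Schedule only needs to declare what it wants to change. A sketch that keeps the global S3 settings but writes into its own bucket; the bucket and secret names are placeholders, and the exact fields should be checked against manifest/examples/:

apiVersion: backup.appuio.ch/v1alpha1
kind: Schedule
metadata:
  name: schedule-own-bucket
spec:
  backend:
    s3:
      bucket: my-team-backups      # overrides BACKUP_GLOBALS3BUCKET for this namespace only
    repoPasswordSecretRef:
      name: backup-repo            # per-namespace restic repository password
      key: password
  backup:
    schedule: '0 2 * * *'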

PreBackup pods

Although K8up supports executing backup commands in already running pods, there might be a need to start a specific pod just for the backup. What if the backup command you’d like to run doesn’t belong to a running pod? Or you want to back up something that lives on an RWO PVC, or even outside the Kubernetes cluster? PreBackup pods cover exactly these cases, for example:

apiVersion: backup.appuio.ch/v1alpha1
kind: PreBackupPod
metadata:
  name: mysqldump
spec:
  backupCommand: mysqldump -u$USER -p$PW -h $DB_HOST --all-databases
  pod:
    spec:
      containers:
        - env:
            - name: USER
              value: dumper
            - name: PW
              value: topsecret
            - name: DB_HOST
              value: mariadb.example.com
          image: mariadb
          command:
            - 'sleep'
            - 'infinity'
          imagePullPolicy: Always
          name: mysqldump
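
The credentials in this example are inlined to keep it short. Because spec.pod is a regular pod template, they can come from a Secret instead. A sketch of the containers section, assuming a hypothetical Secret named mariadb-credentials with the keys user and password:

      containers:
        - name: mysqldump
          image: mariadb
          command: ['sleep', 'infinity']
          env:
            - name: USER
              valueFrom:
                secretKeyRef:
                  name: mariadb-credentials   # hypothetical Secret
                  key: user
            - name: PW
              valueFrom:
                secretKeyRef:
                  name: mariadb-credentials
                  key: password
            - name: DB_HOST
              value: mariadb.example.com
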
Passing environment variables

If you want to pass environment variables to the backupCommand, you have to wrap the command in a shell. For the PreBackupPod example above, that looks like this:

spec:
  backupCommand: /bin/bash -c 'mysqldump -u"${USER}" -p"${PW}" -h "${DB_HOST}" --all-databases'

The backup command annotation can also be added to any pod that’s already running in the namespace:

via kubectl:
kubectl -n ${YOUR_NAMESPACE} annotate pods ${YOUR_POD_NAME} "k8up.syn.tools/backupcommand=/bin/bash -c 'mysqldump -uroot -p\"\${MARIADB_ROOT_PASSWORD}\" --all-databases'" --overwrite

in the manifest:
spec:
  serviceName: "mariadb"
  replicas: 1
  template:
    metadata:
      labels:
        app: mariadb
      annotations:
        k8up.syn.tools/backupcommand: /bin/bash -c 'mysqldump -uroot -p"${MARIADB_ROOT_PASSWORD}" --all-databases'

PreBackup pods are pod definitions that live in your namespaces. Once the operator triggers a backup in a namespace, it loops through all these pod definitions, runs them and cleans them up again after the backup has finished. This allows much more flexibility, as they support everything a normal pod template does. For instance, you can set pod affinity so that the PreBackup pods get started on a specific node, which gives you access to RWO data that you can then back up via a backup command, as sketched below. We will enhance this feature in the future so that it also supports file system backups.
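
A sketch of such a PreBackupPod; the node name, PVC name and backup command are hypothetical, and node affinity is used here to pin the pod to a specific node (pod affinity against the workload’s labels works the same way):

apiVersion: backup.appuio.ch/v1alpha1
kind: PreBackupPod
metadata:
  name: rwo-data-dump
spec:
  backupCommand: tar -cf - /data           # hypothetical command streaming the mounted RWO data to stdout
  pod:
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: kubernetes.io/hostname
                    operator: In
                    values:
                      - worker-1           # placeholder node that holds the RWO volume
      containers:
        - name: rwo-data-dump
          image: busybox
          command:
            - 'sleep'
            - 'infinity'
          volumeMounts:
            - name: data
              mountPath: /data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: my-rwo-pvc          # placeholder RWO PVC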

See PreBackup pods for detailed object specifications.

Manual Installation

All required definitions for the installation are located at manifest/install/:

kubectl apply -f manifest/install/

Please be aware that these manifests are intended for development and as examples; they’re not the official way to install the operator in production. For production we provide a Helm chart at github.com/appuio/charts. You may need to adjust the namespaces in the manifests. There are various other examples under manifest/examples/.