Object Specifications Reference
The K8up operator includes several Custom Resource Definitions (CRDs), which are added to the cluster by the Helm chart or have to be added manually. They are explained here in more detail.
Generated API documentation is available in the API reference.
Schedule
With the schedule CRD it’s possible to put all other CRDs on a schedule.
apiVersion: k8up.io/v1
kind: Schedule
metadata:
  name: schedule-test
spec:
  backend:
    repoPasswordSecretRef:
      name: backup-repo
      key: password
    s3:
      endpoint: http://10.144.1.224:9000
      bucket: k8up
      accessKeyIDSecretRef:
        name: backup-credentials
        key: username
      secretAccessKeySecretRef:
        name: backup-credentials
        key: password
  archive:
    schedule: '0 * * * *'
    restoreMethod:
      s3:
        endpoint: http://10.144.1.224:9000
        bucket: restoremini
        accessKeyIDSecretRef:
          name: backup-credentials
          key: username
        secretAccessKeySecretRef:
          name: backup-credentials
          key: password
  backup:
    schedule: '* * * * *'
    failedJobsHistoryLimit: 4
    successfulJobsHistoryLimit: 0
    promURL: http://10.144.1.224:9000
  check:
    schedule: '*/5 * * * *'
    promURL: http://10.144.1.224:9000
  prune:
    schedule: '*/2 * * * *'
    retention:
      keepLast: 5
      keepDaily: 14
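Besides standard cron syntax, K8up also understands its own schedule specifiers such as @daily-random (see the EffectiveSchedule section below for how the generated schedules are persisted). A minimal sketch under that assumption, defining only a backup job:
apiVersion: k8up.io/v1
kind: Schedule
metadata:
  name: schedule-random
spec:
  backend:
    repoPasswordSecretRef:
      name: backup-repo
      key: password
    s3:
      endpoint: http://10.144.1.224:9000
      bucket: k8up
      accessKeyIDSecretRef:
        name: backup-credentials
        key: username
      secretAccessKeySecretRef:
        name: backup-credentials
        key: password
  backup:
    # K8up-specific schedule; quoted because YAML plain scalars cannot start with '@'
    schedule: '@daily-random'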
Restore
It’s possible to define different kinds of restore jobs. Currently these kinds of restores are supported:
- To a PVC
- To S3 as tar.gz
Example for a restore to a PVC:
apiVersion: k8up.io/v1
kind: Restore
metadata:
  name: restore-test
spec:
  tags:
    - prod
    - archive
  restoreMethod:
    folder:
      claimName: restore
  backend:
    repoPasswordSecretRef:
      name: backup-repo
      key: password
    s3:
      endpoint: http://10.144.1.224:9000
      bucket: k8up
      accessKeyIDSecretRef:
        name: backup-credentials
        key: username
      secretAccessKeySecretRef:
        name: backup-credentials
        key: password
This will restore the latest snapshot from http://10.144.1.224:9000 to the PVC with the name restore.
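A restore to S3 as tar.gz uses the s3 restore method instead of folder. The following is only a sketch, reusing the restoreMethod s3 structure shown in the Schedule and Archive examples; the restore bucket name is a placeholder:
apiVersion: k8up.io/v1
kind: Restore
metadata:
  name: restore-to-s3-test
spec:
  restoreMethod:
    s3:
      endpoint: http://10.144.1.224:9000
      bucket: restore
      accessKeyIDSecretRef:
        name: backup-credentials
        key: username
      secretAccessKeySecretRef:
        name: backup-credentials
        key: password
  backend:
    repoPasswordSecretRef:
      name: backup-repo
      key: password
    s3:
      endpoint: http://10.144.1.224:9000
      bucket: k8up
      accessKeyIDSecretRef:
        name: backup-credentials
        key: username
      secretAccessKeySecretRef:
        name: backup-credentials
        key: password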
Settings
- backend: see backend for further explanation
- restoreMethod: is either s3 or folder. For s3, please see backend; for folder you just need to provide a valid claim name as shown in the example above
- restoreFilter: a filter passed to the underlying Restic. Please consult the Restic docs for valid path filters. See the sketch after this list.
- snapshot: valid snapshot ID that should get restored. If not provided, the most recent one will be restored.
- keepJobs: amount of jobs that should be left after cleanup, for example how many job/pod objects should be left after they finished. Deprecated, use failedJobsHistoryLimit and successfulJobsHistoryLimit instead. Only applicable when used within a schedule.
- failedJobsHistoryLimit: amount of failed jobs that should be left after cleanup, for example how many job/pod objects should be left after they finished. Defaults to 3. Only applicable when used within a schedule.
- successfulJobsHistoryLimit: amount of successful jobs that should be left after cleanup, for example how many job/pod objects should be left after they finished. Defaults to 3. Only applicable when used within a schedule.
- tags: list of tags to be considered for the restore. They’re ignored if a snapshot ID is provided.
- activeDeadlineSeconds: specifies the duration in seconds relative to the startTime that the job may be continuously active before the system tries to terminate it.
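As referenced in the list above, here is a partial Restore spec sketch that restores one specific snapshot and only a subset of its paths; the snapshot ID and filter path are placeholder values:
spec:
  snapshot: 5e5929b5        # hypothetical snapshot ID
  restoreFilter: /data      # hypothetical Restic path filter
  restoreMethod:
    folder:
      claimName: restore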
Archive
The Archive CRD will take the latest snapshots from each namespace/project in the repository. You should therefore only run one archive schedule per repository, as there’s otherwise a chance that you’ll archive snapshots more than once.
apiVersion: k8up.io/v1
kind: Archive
metadata:
  name: archive-test
spec:
  activeDeadlineSeconds: 600
  repoPasswordSecretRef:
    name: backup-repo
    key: password
  restoreMethod:
    s3:
      endpoint: http://10.144.1.224:9000
      bucket: restoremini
      accessKeyIDSecretRef:
        name: backup-credentials
        key: username
      secretAccessKeySecretRef:
        name: backup-credentials
        key: password
  backend:
    s3:
      endpoint: http://10.144.1.224:9000
      bucket: k8up
      accessKeyIDSecretRef:
        name: backup-credentials
        key: username
      secretAccessKeySecretRef:
        name: backup-credentials
        key: password
Backup
This will trigger a single backup.
apiVersion: k8up.io/v1
kind: Backup
metadata:
  name: k8up-test
spec:
  activeDeadlineSeconds: 600
  tags:
    - prod
    - archive
    - important
  failedJobsHistoryLimit: 4
  successfulJobsHistoryLimit: 0
  backend:
    repoPasswordSecretRef:
      name: backup-repo
      key: password
    s3:
      endpoint: http://10.144.1.224:9000
      bucket: k8up
      accessKeyIDSecretRef:
        name: backup-credentials
        key: username
      secretAccessKeySecretRef:
        name: backup-credentials
        key: password
  promURL: http://10.144.1.224:9000
Settings
- backend: see backend
- keepJobs: amount of jobs that should be left after cleanup, for example how many job/pod objects should be left after they finished. Deprecated, use failedJobsHistoryLimit and successfulJobsHistoryLimit instead. Only applicable when used within a schedule.
- failedJobsHistoryLimit: amount of failed jobs that should be left after cleanup, for example how many job/pod objects should be left after they finished. Defaults to 3. Only applicable when used within a schedule.
- successfulJobsHistoryLimit: amount of successful jobs that should be left after cleanup, for example how many job/pod objects should be left after they finished. Defaults to 3. Only applicable when used within a schedule.
- promURL: sends backup statistics to this Prometheus pushgateway while the backups are running.
- statsURL: will send a JSON webhook containing backup information to this endpoint. Can be used to gather a list of available backups. See the sketch after this list.
- tags: list of tags to be added to the backup. Can be used in restores and archives again.
- activeDeadlineSeconds: specifies the duration in seconds relative to the startTime that the job may be continuously active before the system tries to terminate it.
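statsURL is not shown in the example above. A partial spec sketch, assuming the field sits at the spec level next to promURL and using a hypothetical webhook endpoint:
spec:
  promURL: http://10.144.1.224:9000
  statsURL: http://stats-collector.example.com/k8up   # hypothetical webhook endpoint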
Check
This will trigger a single check run on the repository.
apiVersion: k8up.io/v1
kind: Check
metadata:
  name: check-test
spec:
  activeDeadlineSeconds: 600
  backend:
    repoPasswordSecretRef:
      name: backup-repo
      key: password
    s3:
      endpoint: http://10.144.1.224:9000
      bucket: k8up
      accessKeyIDSecretRef:
        name: backup-credentials
        key: username
      secretAccessKeySecretRef:
        name: backup-credentials
        key: password
  promURL: http://10.144.1.224:9000
Settings
- statsURL: will send a JSON webhook containing check information to this endpoint.
- backend: see backend
- keepJobs: amount of jobs that should be left after cleanup, for example how many job/pod objects should be left after they finished. Deprecated, use failedJobsHistoryLimit and successfulJobsHistoryLimit instead. Only applicable when used within a schedule.
- failedJobsHistoryLimit: amount of failed jobs that should be left after cleanup, for example how many job/pod objects should be left after they finished. Defaults to 3. Only applicable when used within a schedule.
- successfulJobsHistoryLimit: amount of successful jobs that should be left after cleanup, for example how many job/pod objects should be left after they finished. Defaults to 3. Only applicable when used within a schedule.
- activeDeadlineSeconds: specifies the duration in seconds relative to the startTime that the job may be continuously active before the system tries to terminate it.
Prune
This will trigger a single prune run and delete snapshots according to the defined retention rules. Prune needs exclusive access to the repository: no other jobs must run on the same repository while it is still running. The operator ensures this exclusivity when the prune is run on a schedule. If a prune is triggered manually, the Restic locking will kick in and prevent it from damaging the repository; the whole Pod will fail in that case.
apiVersion: k8up.io/v1
kind: Prune
metadata:
  name: prune-test
spec:
  activeDeadlineSeconds: 600
  retention:
    keepLast: 5
    keepDaily: 14
  backend:
    repoPasswordSecretRef:
      name: backup-repo
      key: password
    s3:
      endpoint: http://10.144.1.224:9000
      bucket: k8up
      accessKeyIDSecretRef:
        name: backup-credentials
        key: username
      secretAccessKeySecretRef:
        name: backup-credentials
        key: password
Settings
- retention: see retention
- backend: see backend
- keepJobs: amount of jobs that should be left after cleanup, for example how many job/pod objects should be left after they finished. Deprecated, use failedJobsHistoryLimit and successfulJobsHistoryLimit instead. Only applicable when used within a schedule.
- failedJobsHistoryLimit: amount of failed jobs that should be left after cleanup, for example how many job/pod objects should be left after they finished. Defaults to 3. Only applicable when used within a schedule.
- successfulJobsHistoryLimit: amount of successful jobs that should be left after cleanup, for example how many job/pod objects should be left after they finished. Defaults to 3. Only applicable when used within a schedule.
- activeDeadlineSeconds: specifies the duration in seconds relative to the startTime that the job may be continuously active before the system tries to terminate it.
Retention
Retention is part of the Prune object. It defines what the retention for a given backend should look like. Most upstream Restic rules are supported, except for the ones working with labels. Please see the upstream Restic docs for more information.
retention:
  keepLast: 5
  keepDaily: 14
List of available settings:
- keepLast
- keepHourly
- keepDaily
- keepWeekly
- keepMonthly
- keepYearly
- keepTags
Please don’t confuse tags and keepTags here. If you specify keepTags, all snapshots that don’t have that tag will be removed! If you use the tags array, the retention is applied only to snapshots with that specific tag. That way there can be multiple backup sets in a repository, for example prod and dev.
The retention is applied per namespace.
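As a sketch of that difference, the following retention keeps the last five snapshots and removes every snapshot that is not tagged prod, assuming keepTags takes a list of tag strings:
retention:
  keepLast: 5
  keepTags:
    - prod   # assumed list form; snapshots without this tag are removed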
Backend
backend:
  repoPasswordSecretRef:
    name: backup-repo
    key: password
  s3:
    endpoint: http://10.144.1.224:9000
    bucket: k8up
    accessKeyIDSecretRef:
      name: backup-credentials
      key: username
    secretAccessKeySecretRef:
      name: backup-credentials
      key: password
Settings
Make sure to configure only one storage type!
Don’t lose the encryption key or you won’t be able to access your backup data again! Keep a copy of it somewhere outside of the actual cluster.
S3
Settings:
- endpoint: http(s) endpoint of the S3 instance
- bucket: name of the bucket that should be used
- accessKeyIDSecretRef: Kubernetes secret reference containing the Access Key ID
- secretAccessKeySecretRef: Kubernetes secret reference containing the Secret Access Key
Azure
Settings:
- container: name of the container that should be used
- path: path inside the container, will default to "/" if omitted
- accountNameSecretRef: Kubernetes secret reference containing the account name
- accountKeySecretRef: Kubernetes secret reference containing the account key
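A sketch of an Azure backend, assuming the backend key is named azure; the secret name and key names are placeholders:
backend:
  repoPasswordSecretRef:
    name: backup-repo
    key: password
  azure:
    container: k8up
    path: /
    accountNameSecretRef:
      name: azure-credentials   # hypothetical secret
      key: accountName
    accountKeySecretRef:
      name: azure-credentials
      key: accountKey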
GCS
Settings:
- projectIDSecretRef: Kubernetes secret reference containing the Google project ID
- accessTokenSecretRef: Kubernetes secret reference containing the Google access token
- bucket: name of the bucket that should be used
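A sketch of a GCS backend, assuming the backend key is named gcs; the secret name and key names are placeholders:
backend:
  repoPasswordSecretRef:
    name: backup-repo
    key: password
  gcs:
    bucket: k8up
    projectIDSecretRef:
      name: gcs-credentials   # hypothetical secret
      key: projectId
    accessTokenSecretRef:
      name: gcs-credentials
      key: accessToken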
B2
Settings:
- path: path of the B2 instance
- bucket: name of the bucket that should be used
- accountIDSecretRef: Kubernetes secret reference containing the account ID
- accountKeySecretRef: Kubernetes secret reference containing the account key
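A sketch of a B2 backend, assuming the backend key is named b2; the secret name and key names are placeholders:
backend:
  repoPasswordSecretRef:
    name: backup-repo
    key: password
  b2:
    bucket: k8up
    path: /k8up
    accountIDSecretRef:
      name: b2-credentials   # hypothetical secret
      key: accountId
    accountKeySecretRef:
      name: b2-credentials
      key: accountKey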
PreBackupPod
PreBackupPods are objects that live in the namespace that should be backed up. They’re completely optional. Their main goal is to provide a way to run pre-backup scripts, but they can be used for various other use cases as well, see PreBackup pods.
apiVersion: k8up.io/v1
kind: PreBackupPod
metadata:
  name: mysqldump
spec:
  backupCommand: mysqldump -u$USER -p$PW -h $DB_HOST --all-databases
  pod:
    spec:
      containers:
        - env:
            - name: USER
              value: dumper
            - name: PW
              value: topsecret
            - name: DB_HOST
              value: mariadb.example.com
          image: mariadb
          command:
            - sleep
            - infinity
          imagePullPolicy: Always
          name: mysqldump
Settings
- backupCommand: command that should get executed within the pod. Attention: the command should write its data to stdout so that k8up restic can pick it up correctly.
- fileExtension: as this leverages the stdin backup capabilities of Restic, it generates a virtual file. By default, that file is named after the PreBackupPod. To make restores easier you can define a file extension that gets appended to the file name, for example ".sql" for a MySQL dump. See the sketch after this list.
- pod: a default Kubernetes podTemplateSpec.
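As referenced above, adding a fileExtension to the mysqldump example makes the virtual file easier to recognize in restores. This sketch only repeats the relevant part of the spec:
spec:
  backupCommand: mysqldump -u$USER -p$PW -h $DB_HOST --all-databases
  fileExtension: .sql   # appended to the virtual file name generated for the stdin backup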
EffectiveSchedule
An EffectiveSchedule is a status object that persists generated schedule definitions when K8up-specific schedules like @daily-random are used.
They are completely controller-managed in the controller namespace; users should not need to interact with them.
There is an EffectiveSchedule for each job type a Schedule object can define.
apiVersion: v1
kind: List
items:
  - apiVersion: k8up.io/v1
    kind: EffectiveSchedule
    metadata:
      name: backup-dll789k5qql624wx
      namespace: k8up-system
    spec:
      effectiveSchedules:
        - name: schedule-test
          namespace: default
          generatedSchedule: 4 * * * *
          jobType: backup
  - apiVersion: k8up.io/v1
    kind: EffectiveSchedule
    metadata:
      name: check-qhmm4xmgsrwmj4dh
      namespace: k8up-system
    spec:
      effectiveSchedules:
        - name: schedule-test
          namespace: default
          generatedSchedule: 44 * * * *
          jobType: check
EffectiveSchedules are deleted when there are no more references to Schedules.