How to Restore a Backup
You can instruct the Operator to perform restores either to a PVC or to an S3 bucket.
Get a List of Available Snapshots
To get a list of available snapshots per namespace you can use:
kubectl get snapshots
NAME       AGE
162e7a85   31m
a1dc5eff   31m
To determine which PVC a snapshot belongs to, check its paths field:
kubectl get snapshots 162e7a85 -oyaml

apiVersion: k8up.io/v1
kind: Snapshot
metadata:
  name: 162e7a85
  namespace: default
spec:
  date: "2023-03-03T07:34:42Z"
  id: 162e7a85acbc14de93dad31a3699331cb32187ff0d7bd2227b7c4362a1d13a42
  paths:
    - /data/subject-pvc
  repository: s3:http://minio.minio.svc.cluster.local:9000/backup
The paths are in the format /data/$PVCNAME. You can use the snapshot's id to reference a specific snapshot in a restore job:
apiVersion: k8up.io/v1
kind: Restore
metadata:
  name: restore-test
spec:
  snapshot: 162e7a85acbc14de93dad31a3699331cb32187ff0d7bd2227b7c4362a1d13a42
  backend:
    ...
See below for complete examples.
Restore from S3 to S3 bucket
For this you can create a restore object:
apiVersion: k8up.io/v1
kind: Restore
metadata:
  name: restore-test
spec:
  restoreMethod:
    s3:
      endpoint: http://localhost:9000
      bucket: restore
      accessKeyIDSecretRef:
        name: backup-credentials
        key: username
      secretAccessKeySecretRef:
        name: backup-credentials
        key: password
  backend:
    repoPasswordSecretRef:
      name: backup-repo
      key: password
    s3:
      endpoint: http://localhost:9000
      bucket: k8up
      accessKeyIDSecretRef:
        name: backup-credentials
        key: username
      secretAccessKeySecretRef:
        name: backup-credentials
        key: password
This will trigger a one-time job that restores the latest snapshot to the S3 bucket.
Restore from S3 to PVC
Note: You can't restore from backups that were done from […]
First, create a new PVC to extract the data to:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: restore-test-mfw
  # Optional:
  #namespace: snapshot-test
  annotations:
    # set to "true" to include in future backups
    k8up.io/backup: "false"
  # Optional:
  #labels:
  #  app: multi-file-writer
spec:
  # Optional:
  # storageClassName: <YOUR_STORAGE_CLASS_NAME>
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      # Must be sufficient to hold your data
      storage: 250Mi
Then create a restore job that targets the PVC created above:

apiVersion: k8up.io/v1
kind: Restore
metadata:
  name: restore-test-mfw
  namespace: snapshot-test
spec:
  restoreMethod:
    folder:
      claimName: restore-test-mfw
  backend:
    repoPasswordSecretRef:
      name: backup-repo
      key: password
    s3:
      endpoint: http://minio-service:9000
      bucket: k8up
      accessKeyIDSecretRef:
        name: backup-credentials
        key: username
      secretAccessKeySecretRef:
        name: backup-credentials
        key: password
By default, there is no check that the selected snapshot actually contains data from the specified PVC. Because a separate snapshot is created for each PVC and PreBackupPod, you can set spec.paths to a list of paths that must be present in the selected snapshot. For PVC snapshots the path always has the structure /data/$claimName, where $claimName is the name of the PVC that was backed up; for the snapshot shown earlier this would be paths: ["/data/subject-pvc"].
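For example, to let the restore proceed only if the selected snapshot contains the data of the subject-pvc claim shown earlier, spec.paths can be added to the Restore (a sketch; only the relevant fields are shown, the backend configuration is omitted):

```yaml
apiVersion: k8up.io/v1
kind: Restore
metadata:
  name: restore-test-mfw
  namespace: snapshot-test
spec:
  # Fail early unless the snapshot contains this path
  paths:
    - /data/subject-pvc
  restoreMethod:
    folder:
      claimName: restore-test-mfw
  snapshot: 162e7a85acbc14de93dad31a3699331cb32187ff0d7bd2227b7c4362a1d13a42
```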
To restore from a snapshot other than the latest, specify spec.restoreTimeFilter, which takes a date string of the form YYYY-MM-DD hh:mm:ss or any prefix thereof. The latest snapshot whose timestamp matches this pattern is used; for example, 2026-03-18 selects the latest snapshot from that day. If no matching snapshot exists, the system falls back to the default behaviour of using the latest snapshot overall.
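The filter is a plain string field; a minimal sketch (only the relevant fields are shown):

```yaml
apiVersion: k8up.io/v1
kind: Restore
metadata:
  name: restore-test-mfw
  namespace: snapshot-test
spec:
  # Use the newest snapshot taken on 2026-03-18 instead of the overall latest
  restoreTimeFilter: "2026-03-18"
  restoreMethod:
    folder:
      claimName: restore-test-mfw
```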
Restore to PVC as non-root user
For some storage volumes it may be necessary to adjust file permissions as a non-root user; otherwise the restore can fail with "permission denied" errors. If you encounter this, try setting the following security context in the Restore spec:
apiVersion: k8up.io/v1
kind: Restore
...
spec:
  podSecurityContext:
    fsGroup: 65532
    fsGroupChangePolicy: OnRootMismatch
Manual restore via Restic
To manually restore you'll need:

- a Linux machine with restic installed
- FUSE (optional, for mounting)
Let’s take this backend example from a schedule:
backend:
  s3:
    endpoint: http://localhost:9000
    bucket: k8up
    accessKeyIDSecretRef:
      name: backup-credentials
      key: username
    secretAccessKeySecretRef:
      name: backup-credentials
      key: password
You’ll need the credentials from the secrets and the encryption key. With that information you can configure restic:
export RESTIC_REPOSITORY=s3:http://localhost:9000/k8up
export RESTIC_PASSWORD=p@assword
export AWS_ACCESS_KEY_ID=8U0UDNYPNUDTUS1LIAF3
export AWS_SECRET_ACCESS_KEY=ip3cdrkXcHmH4S7if7erKPNoxDn27V0vrg6CHHem
Now you can use Restic to browse and restore snapshots:
# List snapshots
restic snapshots

repository dec6d66c opened successfully, password is correct
ID        Date                 Host                Tags  Directory
----------------------------------------------------------------------
5ed64a2d  2018-06-08 09:18:34  macbook-vshn.local        /data
----------------------------------------------------------------------
1 snapshots
restic restore 5ed64a2d --target /restore
# Or mount the repository for convenient restores
restic mount ~/Desktop/mount
repository dec6d66c opened successfully, password is correct
Now serving the repository at /Users/simonbeck/Desktop/mount/
Don't forget to umount after quitting!
ll ~/Desktop/mount
total 0
dr-xr-xr-x 1 simonbeck staff 0 Jun 8 09:21 .
drwx------+ 6 simonbeck staff 192 Jun 8 09:15 ..
dr-xr-xr-x 1 simonbeck staff 0 Jun 8 09:21 hosts
dr-xr-xr-x 1 simonbeck staff 0 Jun 8 09:21 ids
dr-xr-xr-x 1 simonbeck staff 0 Jun 8 09:21 snapshots
dr-xr-xr-x 1 simonbeck staff 0 Jun 8 09:21 tags
Here you can browse all backups by host, ID, snapshot, or tag.
Restore via CLI
K8up supports restoring via CLI. It's a fast and simple way to create restore.k8up.io objects.
k8up restore --help
NAME:
k8up restore
USAGE:
k8up cli restore [command options] [arguments...]
CATEGORY:
cli
DESCRIPTION:
CLI commands that can be executed everywhere
OPTIONS:
--snapshot value Required ; ID of the snapshot (see kubectl get snapshots), set via cli or via env: [$SNAPSHOT]
--secretRef value Required ; Set secret name from which You want to take S3 credentials, via cli or via env: [$SECRET_REF]
--s3endpoint value Required ; Set s3endpoint from which backup will be taken, via cli or via env: [$S3ENDPOINT]
--s3bucket value Required ; Set s3bucket from which backup will be taken, via cli or via env: [$S3BUCKET]
--s3secretRef value Required ; Set secret name, where S3 username & password are stored from which backup will be taken, via cli or via env: [$S3SECRETREF]
--restoreMethod value Required ; Set restore method [ pvc|s3 ], via cli or via env: (default: pvc) [$RESTOREMETHOD]
--claimName value Required ; Set claimName field, via cli or via env: [$CLAIMNAME]
--S3SecretRefUsernameKey value Optional ; Set S3SecretRefUsernameKey, key inside secret, under which S3 username is stored, via cli or via env: (default: username) [$S3SECRETREFUSERNAMEKEY]
--S3SecretRefPasswordKey value Optional ; Set S3SecretRefPasswordKey, key inside secret, under which Restic repo password is stored, via cli or via env: (default: password) [$S3SECRETREFPASSWORDKEY]
--restoreName value Optional ; Set restoreName - metadata.Name field, if empty, k8up will generate name, via cli or via env: [$RESTORENAME]
--runAsUser value Optional ; Set user UID, via cli or via env: (default: 0) [$RUNASUSER]
--restoreToS3Endpoint value Optional ; Set restore endpoint, only when using s3 restore method, via cli or via env: [$RESTORETOS3ENDPOINT]
--restoreToS3Bucket value Optional ; Set restore bucket, only when using s3 restore method, via cli or via env: [$RESTORETOS3BUCKET]
--restoreToS3Secret value Optional ; Set restore Secret, only when using s3 restore method, expecting secret name containing key value pair with 'username' and 'password' keys, via cli or via env: [$RESTORETOS3SECRET]
--RestoreToS3SecretUsernameKey value Optional ; Set RestoreToS3SecretUsernameKey, key inside secret, under which S3 username is stored, via cli or via env: (default: username) [$RESTORETOS3SECRETUSERNAMEKEY]
--RestoreToS3SecretPasswordKey value Optional ; Set RestoreToS3SecretPasswordKey, key inside secret, under which Restic repo password is stored, via cli or via env: (default: password) [$RESTORETOS3SECRETPASSWORDKEY]
--namespace value Optional ; Set namespace in which You want to execute restore, via cli or via env: (default: default) [$NAMESPACE]
--kubeconfig value Optional ; Set kubeconfig to connect to cluster, via cli or via env: (default: ~/.kube/config) [$KUBECONFIG]
--help, -h show help (default: false)
Example usages:
# restore using PVC
k8up cli restore \
--restoreMethod pvc \
--kubeconfig .e2e-test/kind-kubeconfig-v1.24.4 \
--secretRef backup-repo \
--namespace default \
--s3endpoint http://minio:9000 \
--s3bucket backups \
--s3secretRef minio-credentials \
--snapshot 5c3fc641 \
--claimName wordpress-pvc \
--runAsUser 0
# restore using S3 => S3
k8up cli restore \
--restoreMethod s3 \
--kubeconfig .e2e-test/kind-kubeconfig-v1.24.4 \
--secretRef backup-repo \
--namespace default \
--s3endpoint http://minio:9000 \
--s3bucket backups \
--s3secretRef minio-credentials \
--snapshot 5c3fc641 \
--claimName wordpress-pvc \
--runAsUser 0 \
--restoreToS3Bucket backup2 \
--restoreToS3Secret minio-credentials \
--restoreToS3Endpoint http://minio:9000
# PVC restore
(where the backup is)
S3 storage ------------> k8up --------------> PVC restore
--s3endpoint             --runAsUser          --claimName
--s3bucket               --kubeconfig
--s3secretRef            --snapshot
                         --namespace
                         --restoreMethod
                         --secretRef

# s3 to s3 restore
(where the backup is)                         (different s3 where we want to copy our backup)
S3 storage ------------> k8up --------------> s3 storage
--s3endpoint             --runAsUser          --restoreToS3Bucket
--s3bucket               --kubeconfig         --restoreToS3Secret
--s3secretRef            --snapshot           --restoreToS3Endpoint
                         --namespace
                         --restoreMethod
                         --secretRef
As a result, the CLI creates a restore.k8up.io object, which creates a job; the job in turn creates a pod that performs the actual restore. If you don't specify a namespace, all of these objects are created in the default namespace. You can inspect them with, for example:
kubectl get restores.k8up.io
kubectl get jobs
kubectl logs -f jobs/restore-job-123
Self-signed issuer and Mutual TLS
If you are using a self-signed issuer or mutual TLS for client authentication, you can mount the certificate files into the restore object via volumes.
Self-signed issuer
- Using the options feature in the backend:

apiVersion: k8up.io/v1
kind: Restore
metadata:
  name: restore-test
spec:
  failedJobsHistoryLimit: 2
  successfulJobsHistoryLimit: 2
  snapshot: 162e7a85acbc14de93dad31a3699331cb32187ff0d7bd2227b7c4362a1d13a42
  backend:
    s3: {}
    tlsOptions:
      caCert: /mnt/ca/ca.crt
    volumeMounts:
      - name: ca-tls
        mountPath: /mnt/ca/
  restoreMethod:
    s3: {}
  podSecurityContext:
    fsGroup: 1000
    runAsUser: 1000
  volumes:
    - name: ca-tls
      secret:
        secretName: ca-tls
        defaultMode: 420
- Using the env feature in the backend:

apiVersion: v1
kind: ConfigMap
metadata:
  name: restore-cert
data:
  CA_CERT_FILE: /mnt/ca/ca.crt
---
apiVersion: k8up.io/v1
kind: Restore
metadata:
  name: restore-test
spec:
  failedJobsHistoryLimit: 2
  successfulJobsHistoryLimit: 2
  snapshot: 162e7a85acbc14de93dad31a3699331cb32187ff0d7bd2227b7c4362a1d13a42
  backend:
    s3: {}
    envFrom:
      - configMapRef:
          name: restore-cert
    volumeMounts:
      - name: ca-tls
        mountPath: /mnt/ca/
  restoreMethod:
    s3: {}
  podSecurityContext:
    fsGroup: 1000
    runAsUser: 1000
  volumes:
    - name: ca-tls
      secret:
        secretName: ca-tls
        defaultMode: 420
- Using the options feature in the restore:

apiVersion: k8up.io/v1
kind: Restore
metadata:
  name: restore-test
spec:
  failedJobsHistoryLimit: 2
  successfulJobsHistoryLimit: 2
  snapshot: 162e7a85acbc14de93dad31a3699331cb32187ff0d7bd2227b7c4362a1d13a42
  backend:
    s3: {}
  restoreMethod:
    s3: {}
    tlsOptions:
      caCert: /mnt/ca/ca.crt
    volumeMounts:
      - name: ca-tls
        mountPath: /mnt/ca/
  podSecurityContext:
    fsGroup: 1000
    runAsUser: 1000
  volumes:
    - name: ca-tls
      secret:
        secretName: ca-tls
        defaultMode: 420
- Using the env feature in the restore:

apiVersion: v1
kind: ConfigMap
metadata:
  name: restore-cert
data:
  RESTORE_CA_CERT_FILE: /mnt/ca/ca.crt
---
apiVersion: k8up.io/v1
kind: Restore
metadata:
  name: restore-test
spec:
  failedJobsHistoryLimit: 2
  successfulJobsHistoryLimit: 2
  snapshot: 162e7a85acbc14de93dad31a3699331cb32187ff0d7bd2227b7c4362a1d13a42
  backend:
    s3: {}
  restoreMethod:
    s3: {}
    envFrom:
      - configMapRef:
          name: restore-cert
    volumeMounts:
      - name: ca-tls
        mountPath: /mnt/ca/
  podSecurityContext:
    fsGroup: 1000
    runAsUser: 1000
  volumes:
    - name: ca-tls
      secret:
        secretName: ca-tls
        defaultMode: 420
- Using the same cert with the options feature in backend and restore:

apiVersion: k8up.io/v1
kind: Restore
metadata:
  name: restore-test
spec:
  failedJobsHistoryLimit: 2
  successfulJobsHistoryLimit: 2
  snapshot: 162e7a85acbc14de93dad31a3699331cb32187ff0d7bd2227b7c4362a1d13a42
  backend:
    s3: {}
    tlsOptions:
      caCert: /mnt/ca/ca.crt
    volumeMounts:
      - name: ca-tls
        mountPath: /mnt/ca/
  restoreMethod:
    s3: {}
    tlsOptions:
      caCert: /mnt/ca/ca.crt
  podSecurityContext:
    fsGroup: 1000
    runAsUser: 1000
  volumes:
    - name: ca-tls
      secret:
        secretName: ca-tls
        defaultMode: 420
- Using the same cert with the env feature in backend and restore:

apiVersion: v1
kind: ConfigMap
metadata:
  name: restore-cert
data:
  CA_CERT_FILE: /mnt/ca/ca.crt
  RESTORE_CA_CERT_FILE: /mnt/ca/ca.crt
---
apiVersion: k8up.io/v1
kind: Restore
metadata:
  name: restore-test
spec:
  failedJobsHistoryLimit: 2
  successfulJobsHistoryLimit: 2
  snapshot: 162e7a85acbc14de93dad31a3699331cb32187ff0d7bd2227b7c4362a1d13a42
  backend:
    s3: {}
    envFrom:
      - configMapRef:
          name: restore-cert
    volumeMounts:
      - name: ca-tls
        mountPath: /mnt/ca/
  restoreMethod:
    s3: {}
  podSecurityContext:
    fsGroup: 1000
    runAsUser: 1000
  volumes:
    - name: ca-tls
      secret:
        secretName: ca-tls
        defaultMode: 420
- Using a different cert with the options feature in backend and restore:

apiVersion: k8up.io/v1
kind: Restore
metadata:
  name: restore-test
spec:
  failedJobsHistoryLimit: 2
  successfulJobsHistoryLimit: 2
  snapshot: 162e7a85acbc14de93dad31a3699331cb32187ff0d7bd2227b7c4362a1d13a42
  backend:
    s3: {}
    tlsOptions:
      caCert: /mnt/ca/ca.crt
    volumeMounts:
      - name: ca-tls
        mountPath: /mnt/ca/
  restoreMethod:
    s3: {}
    tlsOptions:
      caCert: /mnt/custom-ca/ca.crt
    volumeMounts:
      - name: custom-ca-tls
        mountPath: /mnt/custom-ca/
  podSecurityContext:
    fsGroup: 1000
    runAsUser: 1000
  volumes:
    - name: ca-tls
      secret:
        secretName: ca-tls
        defaultMode: 420
    - name: custom-ca-tls
      secret:
        secretName: custom-ca-tls
        defaultMode: 420
- Using a different cert with the env feature in backend and restore:

apiVersion: v1
kind: ConfigMap
metadata:
  name: restore-cert
data:
  CA_CERT_FILE: /mnt/ca/ca.crt
  RESTORE_CA_CERT_FILE: /mnt/custom-ca/ca.crt
---
apiVersion: k8up.io/v1
kind: Restore
metadata:
  name: restore-test
spec:
  failedJobsHistoryLimit: 2
  successfulJobsHistoryLimit: 2
  snapshot: 162e7a85acbc14de93dad31a3699331cb32187ff0d7bd2227b7c4362a1d13a42
  backend:
    s3: {}
    envFrom:
      - configMapRef:
          name: restore-cert
    volumeMounts:
      - name: ca-tls
        mountPath: /mnt/ca/
  restoreMethod:
    s3: {}
    volumeMounts:
      - name: custom-ca-tls
        mountPath: /mnt/custom-ca/
  podSecurityContext:
    fsGroup: 1000
    runAsUser: 1000
  volumes:
    - name: ca-tls
      secret:
        secretName: ca-tls
        defaultMode: 420
    - name: custom-ca-tls
      secret:
        secretName: custom-ca-tls
        defaultMode: 420
Self-signed issuer with mTLS
- Using the options feature in the backend:

apiVersion: k8up.io/v1
kind: Restore
metadata:
  name: restore-test
spec:
  failedJobsHistoryLimit: 2
  successfulJobsHistoryLimit: 2
  snapshot: 162e7a85acbc14de93dad31a3699331cb32187ff0d7bd2227b7c4362a1d13a42
  backend:
    s3: {}
    tlsOptions:
      caCert: /mnt/tls/ca.crt
      clientCert: /mnt/tls/tls.crt
      clientKey: /mnt/tls/tls.key
    volumeMounts:
      - name: client-tls
        mountPath: /mnt/tls/
  restoreMethod:
    s3: {}
  podSecurityContext:
    fsGroup: 1000
    runAsUser: 1000
  volumes:
    - name: client-tls
      secret:
        secretName: client-tls
        defaultMode: 420
- Using the env feature in the backend:

apiVersion: v1
kind: ConfigMap
metadata:
  name: restore-cert
data:
  CA_CERT_FILE: /mnt/tls/ca.crt
  CLIENT_CERT_FILE: /mnt/tls/tls.crt
  CLIENT_KEY_FILE: /mnt/tls/tls.key
---
apiVersion: k8up.io/v1
kind: Restore
metadata:
  name: restore-test
spec:
  failedJobsHistoryLimit: 2
  successfulJobsHistoryLimit: 2
  snapshot: 162e7a85acbc14de93dad31a3699331cb32187ff0d7bd2227b7c4362a1d13a42
  backend:
    s3: {}
    envFrom:
      - configMapRef:
          name: restore-cert
    volumeMounts:
      - name: client-tls
        mountPath: /mnt/tls/
  restoreMethod:
    s3: {}
  podSecurityContext:
    fsGroup: 1000
    runAsUser: 1000
  volumes:
    - name: client-tls
      secret:
        secretName: client-tls
        defaultMode: 420
- Using the options feature in the restore:

apiVersion: k8up.io/v1
kind: Restore
metadata:
  name: restore-test
spec:
  failedJobsHistoryLimit: 2
  successfulJobsHistoryLimit: 2
  snapshot: 162e7a85acbc14de93dad31a3699331cb32187ff0d7bd2227b7c4362a1d13a42
  backend:
    s3: {}
  restoreMethod:
    s3: {}
    tlsOptions:
      caCert: /mnt/tls/ca.crt
      clientCert: /mnt/tls/tls.crt
      clientKey: /mnt/tls/tls.key
    volumeMounts:
      - name: client-tls
        mountPath: /mnt/tls/
  podSecurityContext:
    fsGroup: 1000
    runAsUser: 1000
  volumes:
    - name: client-tls
      secret:
        secretName: client-tls
        defaultMode: 420
- Using the env feature in the restore:

apiVersion: v1
kind: ConfigMap
metadata:
  name: restore-cert
data:
  RESTORE_CA_CERT_FILE: /mnt/tls/ca.crt
  RESTORE_CLIENT_CERT_FILE: /mnt/tls/tls.crt
  RESTORE_CLIENT_KEY_FILE: /mnt/tls/tls.key
---
apiVersion: k8up.io/v1
kind: Restore
metadata:
  name: restore-test
spec:
  failedJobsHistoryLimit: 2
  successfulJobsHistoryLimit: 2
  snapshot: 162e7a85acbc14de93dad31a3699331cb32187ff0d7bd2227b7c4362a1d13a42
  backend:
    s3: {}
  restoreMethod:
    s3: {}
    envFrom:
      - configMapRef:
          name: restore-cert
    volumeMounts:
      - name: client-tls
        mountPath: /mnt/tls/
  podSecurityContext:
    fsGroup: 1000
    runAsUser: 1000
  volumes:
    - name: client-tls
      secret:
        secretName: client-tls
        defaultMode: 420
- Using the same cert with the options feature in backend and restore:

apiVersion: k8up.io/v1
kind: Restore
metadata:
  name: restore-test
spec:
  failedJobsHistoryLimit: 2
  successfulJobsHistoryLimit: 2
  snapshot: 162e7a85acbc14de93dad31a3699331cb32187ff0d7bd2227b7c4362a1d13a42
  backend:
    s3: {}
    tlsOptions:
      caCert: /mnt/tls/ca.crt
      clientCert: /mnt/tls/tls.crt
      clientKey: /mnt/tls/tls.key
    volumeMounts:
      - name: client-tls
        mountPath: /mnt/tls/
  restoreMethod:
    s3: {}
    tlsOptions:
      caCert: /mnt/tls/ca.crt
      clientCert: /mnt/tls/tls.crt
      clientKey: /mnt/tls/tls.key
  podSecurityContext:
    fsGroup: 1000
    runAsUser: 1000
  volumes:
    - name: client-tls
      secret:
        secretName: client-tls
        defaultMode: 420
- Using the same cert with the env feature in backend and restore:

apiVersion: v1
kind: ConfigMap
metadata:
  name: restore-cert
data:
  CA_CERT_FILE: /mnt/tls/ca.crt
  CLIENT_CERT_FILE: /mnt/tls/tls.crt
  CLIENT_KEY_FILE: /mnt/tls/tls.key
  RESTORE_CA_CERT_FILE: /mnt/tls/ca.crt
  RESTORE_CLIENT_CERT_FILE: /mnt/tls/tls.crt
  RESTORE_CLIENT_KEY_FILE: /mnt/tls/tls.key
---
apiVersion: k8up.io/v1
kind: Restore
metadata:
  name: restore-test
spec:
  failedJobsHistoryLimit: 2
  successfulJobsHistoryLimit: 2
  snapshot: 162e7a85acbc14de93dad31a3699331cb32187ff0d7bd2227b7c4362a1d13a42
  backend:
    s3: {}
    envFrom:
      - configMapRef:
          name: restore-cert
    volumeMounts:
      - name: client-tls
        mountPath: /mnt/tls/
  restoreMethod:
    s3: {}
  podSecurityContext:
    fsGroup: 1000
    runAsUser: 1000
  volumes:
    - name: client-tls
      secret:
        secretName: client-tls
        defaultMode: 420
- Using a different cert with the options feature in backend and restore:

apiVersion: k8up.io/v1
kind: Restore
metadata:
  name: restore-test
spec:
  failedJobsHistoryLimit: 2
  successfulJobsHistoryLimit: 2
  snapshot: 162e7a85acbc14de93dad31a3699331cb32187ff0d7bd2227b7c4362a1d13a42
  backend:
    s3: {}
    tlsOptions:
      caCert: /mnt/tls/ca.crt
      clientCert: /mnt/tls/tls.crt
      clientKey: /mnt/tls/tls.key
    volumeMounts:
      - name: client-tls
        mountPath: /mnt/tls/
  restoreMethod:
    s3: {}
    tlsOptions:
      caCert: /mnt/custom-tls/ca.crt
      clientCert: /mnt/custom-tls/tls.crt
      clientKey: /mnt/custom-tls/tls.key
    volumeMounts:
      - name: custom-client-tls
        mountPath: /mnt/custom-tls/
  podSecurityContext:
    fsGroup: 1000
    runAsUser: 1000
  volumes:
    - name: client-tls
      secret:
        secretName: client-tls
        defaultMode: 420
    - name: custom-client-tls
      secret:
        secretName: custom-client-tls
        defaultMode: 420
- Using a different cert with the env feature in backend and restore:

apiVersion: v1
kind: ConfigMap
metadata:
  name: restore-cert
data:
  CA_CERT_FILE: /mnt/tls/ca.crt
  CLIENT_CERT_FILE: /mnt/tls/tls.crt
  CLIENT_KEY_FILE: /mnt/tls/tls.key
  RESTORE_CA_CERT_FILE: /mnt/custom-tls/ca.crt
  RESTORE_CLIENT_CERT_FILE: /mnt/custom-tls/tls.crt
  RESTORE_CLIENT_KEY_FILE: /mnt/custom-tls/tls.key
---
apiVersion: k8up.io/v1
kind: Restore
metadata:
  name: restore-test
spec:
  failedJobsHistoryLimit: 2
  successfulJobsHistoryLimit: 2
  snapshot: 162e7a85acbc14de93dad31a3699331cb32187ff0d7bd2227b7c4362a1d13a42
  backend:
    s3: {}
    envFrom:
      - configMapRef:
          name: restore-cert
    volumeMounts:
      - name: client-tls
        mountPath: /mnt/tls/
  restoreMethod:
    s3: {}
    volumeMounts:
      - name: client-custom-tls
        mountPath: /mnt/custom-tls/
  podSecurityContext:
    fsGroup: 1000
    runAsUser: 1000
  volumes:
    - name: client-tls
      secret:
        secretName: client-tls
        defaultMode: 420
    - name: client-custom-tls
      secret:
        secretName: client-custom-tls
        defaultMode: 420