How to Manage Pod Resources
Pod resources can be managed at three different levels. Each level takes precedence over the previous one:

1. Global defaults
2. The `resourceRequirementsTemplate` field of a `Schedule` object
3. The `resources` fields in a `Backup`, `Prune`, `Check`, `Archive` or `Restore` spec within a `Schedule`
Note: Global defaults are only applied through `Schedule` objects.

Note: The computed resources resulting from the merge described above are not persisted in the `Schedule` object.
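The three levels amount to a field-wise merge in which later levels win. A minimal Python sketch of that precedence semantics (an illustration only, not K8up's actual implementation):

```python
# Sketch of the three-level precedence merge: each later level
# overrides earlier ones field by field (per resource name, per
# "requests"/"limits" section). Assumed semantics, not K8up code.
def merge_resources(global_defaults, template, spec):
    """Merge resource requirements; spec wins over template wins over globals."""
    merged = {}
    for level in (global_defaults, template, spec):
        for section in ("requests", "limits"):
            for resource, quantity in level.get(section, {}).items():
                merged.setdefault(section, {})[resource] = quantity
    return merged

# Global memory limit + template CPU request + no job-level spec:
print(merge_resources(
    {"limits": {"memory": "200M"}},
    {"requests": {"cpu": "50m"}},
    {},
))
# {'limits': {'memory': '200M'}, 'requests': {'cpu': '50m'}}
```

With an empty job-level spec, the result is simply the union of the global default and the template, as in the examples below.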
Global defaults
Let’s assume the cluster administrator or a resource quota requires a memory limit to be set on all pods spawned by K8up. In that case, K8up needs to be started with the following environment variable:
BACKUP_GLOBAL_MEMORY_LIMIT=200M
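When the operator runs in-cluster, this variable is set on the operator's Deployment. A sketch of the relevant excerpt (the deployment layout and the container name `k8up` are illustrative, not prescribed):

```yaml
# Illustrative excerpt of a K8up operator Deployment manifest;
# only the env entry is the relevant part.
spec:
  template:
    spec:
      containers:
        - name: k8up   # container name is an assumption
          env:
            - name: BACKUP_GLOBAL_MEMORY_LIMIT
              value: "200M"
```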
See the Operator Config reference for the supported environment variables.
The following is a simple `Schedule` example without further resource specs:
apiVersion: k8up.io/v1
kind: Schedule
metadata:
  name: schedule-test
spec:
  ...
  prune:
    schedule: '*/5 * * * *'
This will create a Prune object with the resource default applied:
apiVersion: k8up.io/v1
kind: Prune
metadata:
  name: schedule-test-prune-qbq2k
spec:
  ...
  resources:
    limits:
      memory: 200M
This object in turn spawns a Job with the given resource spec.
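Schematically, the spawned Job's pod template carries the merged resources; a sketch of the relevant excerpt (everything except the resources block is illustrative):

```yaml
# Illustrative excerpt of the Job spawned from the Prune object;
# the resources block is taken from the example above.
spec:
  template:
    spec:
      containers:
        - name: prune   # container name is an assumption
          resources:
            limits:
              memory: 200M
```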
Resources from a template
The following `Schedule` example also sets a CPU request for all jobs:
apiVersion: k8up.io/v1
kind: Schedule
metadata:
  name: schedule-test
spec:
  resourceRequirementsTemplate:
    requests:
      cpu: "50m"
  ...
  check:
    resources:
      requests:
        memory: "64Mi"
        cpu: "250m"
    schedule: '@hourly-random'
  prune:
    schedule: '*/2 * * * *'
Combined with the global memory limit, the `Prune` object will have the following resources:
apiVersion: k8up.io/v1
kind: Prune
metadata:
  name: schedule-test-prune-qbq2k
spec:
  ...
  resources:
    requests:
      cpu: "50m"
    limits:
      memory: 200M
The `Check` object, on the other hand, overrides the CPU request from the template with its own specification:
apiVersion: k8up.io/v1
kind: Check
metadata:
  name: schedule-test-check-q5kae
spec:
  ...
  resources:
    requests:
      memory: "64Mi"
      cpu: "250m"
    limits:
      memory: 200M