Deleting a KOTS snapshot in an embedded kURL cluster does not reclaim space on the disk

After I delete a KOTS snapshot in an embedded kURL cluster, the disk space does not seem to be reclaimed. How can I reclaim the space?

After a snapshot is deleted, the underlying Velero Restic data remains on disk until the maintenance cron prunes it, which happens once every 7 days. Pruning is scheduled this infrequently because it takes an exclusive lock on the repository, making snapshots unavailable while it runs.
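
To see when maintenance last ran for each repository, you can inspect the lastMaintenanceTime field on the ResticRepository resources (the same field the workaround below patches):

kubectl -n velero get resticrepositories \
  -o custom-columns='NAME:.metadata.name,LAST_MAINTENANCE:.status.lastMaintenanceTime'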

It is possible to force Restic to prune the data with the following hack, which backdates each repository's last maintenance timestamp so that Restic prunes the repos the next time the maintenance cron runs, within about 5 minutes.

kubectl -n velero get resticrepositories -oname \
  | xargs -I{} kubectl -n velero patch {} \
  --type='json' \
  -p '[{"op":"replace", "path":"/status/lastMaintenanceTime", "value":"2020-01-01T00:00:00Z"}]'

It is also possible to run the restic prune command manually.

Run this command if Ceph is backing Restic:

kubectl -n velero exec -it deploy/velero -- \
  restic -r s3:http://rook-ceph-rgw-rook-ceph-store.rook-ceph/velero/restic/default prune

Or this command if MinIO is backing Restic:

kubectl -n velero exec -it deploy/velero -- \
  restic -r s3:http://minio.minio/velero/restic/default prune

You will be prompted for a password. You can find the password by running the following command:

kubectl -n velero get secret velero-restic-credentials \
  -ojsonpath='{ .data.repository-password }' | base64 --decode
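
restic can also read the repository password from the RESTIC_PASSWORD environment variable instead of prompting. A minimal sketch that avoids the prompt, here using the MinIO repository URL from above, assuming the velero image includes a shell (newer distroless images may not) and the password contains no shell-special characters:

# Assumes the velero image ships a shell; newer distroless images may not.
RESTIC_PASSWORD=$(kubectl -n velero get secret velero-restic-credentials \
  -ojsonpath='{ .data.repository-password }' | base64 --decode)

kubectl -n velero exec -it deploy/velero -- sh -c \
  "RESTIC_PASSWORD=$RESTIC_PASSWORD restic -r s3:http://minio.minio/velero/restic/default prune"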

As of kURL v2023.01.31-0, the latest version of the Velero add-on has a serverFlags field through which additional flags can be passed to the velero server command that runs in the container. The Restic prune frequency can be configured with this field:

spec:
  velero:
    version: 1.9.5
    serverFlags:
      - --default-restic-prune-frequency=48h
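
After the updated spec is applied, you can confirm the flag reached the server by checking the container arguments on the Velero deployment:

kubectl -n velero get deploy velero \
  -o jsonpath='{.spec.template.spec.containers[0].args}'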

If using a HostPath or NFS storage destination:

Use the following command if the MinIO add-on is enabled:

kubectl -n velero exec -it deploy/velero -- \
  restic -r s3:http://kotsadm-fs-minio.default:9000/velero/restic/default prune

Or use the following if the MinIO add-on is not enabled:

export RESTIC_REPO_PREFIX=$(kubectl get bsl default -n velero -o jsonpath='{.spec.config.resticRepoPrefix}')

kubectl -n velero exec -it deploy/velero -- \
  restic -r $RESTIC_REPO_PREFIX/default --cache-dir=/scratch/.cache/restic prune

Note: In Velero 1.10 and newer, where the node-agent is used, the repository password is stored in a secret named velero-repo-credentials, so the following command should be used to obtain it:

kubectl -n velero get secret velero-repo-credentials \
  -ojsonpath='{ .data.repository-password }' | base64 --decode

The same steps need to be performed for all Restic repositories. In an embedded cluster installation, for example, a kurl Restic repository will exist alongside default. To prune the kurl repository, run the same commands but replace default with kurl:

export RESTIC_REPO_PREFIX=$(kubectl get bsl default -n velero -o jsonpath='{.spec.config.resticRepoPrefix}')

kubectl -n velero exec -it deploy/velero -- \
  restic -r $RESTIC_REPO_PREFIX/kurl --cache-dir=/scratch/.cache/restic prune

The list of available Restic repositories can be retrieved by running the following command:

kubectl get resticrepositories -n velero
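
To prune every repository in one pass, the manual commands above can be combined into a loop. A sketch for the HostPath/NFS case without the MinIO add-on, assuming the repositories are named default and kurl as in the examples above:

export RESTIC_REPO_PREFIX=$(kubectl get bsl default -n velero -o jsonpath='{.spec.config.resticRepoPrefix}')

# Prune each repository in turn; adjust the names to match your cluster.
for repo in default kurl; do
  kubectl -n velero exec -it deploy/velero -- \
    restic -r "$RESTIC_REPO_PREFIX/$repo" --cache-dir=/scratch/.cache/restic prune
done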