You can use local persistent volumes for your application’s PVCs as an alternative to having them dynamically provisioned by Rook/Ceph.
Before running the Kubernetes install or join script on the node where the volume will be created, the cluster operator should create a filesystem on a block device and mount it. This example mounts the block device /dev/sdb at /mnt/disks/local-pv-1:
sudo mkfs.ext4 -m 0 -E lazy_itable_init=0,lazy_journal_init=0,discard /dev/sdb
sudo mkdir -p /mnt/disks/local-pv-1
sudo mount -o discard,defaults /dev/sdb /mnt/disks/local-pv-1
sudo chmod a+w /mnt/disks/local-pv-1/
sudo cp /etc/fstab /etc/fstab.backup
echo UUID=`sudo blkid -s UUID -o value /dev/sdb` /mnt/disks/local-pv-1 ext4 discard,defaults,nofail 0 2 | sudo tee -a /etc/fstab
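The final command appends an fstab entry so the mount survives reboots; the nofail option keeps the node bootable even if the disk is missing. As a sketch of the line being constructed (the UUID below is a hypothetical stand-in for the value `blkid -s UUID -o value /dev/sdb` would return on a real node):

```shell
# Hypothetical UUID standing in for the output of `blkid -s UUID -o value /dev/sdb`
UUID="2d4c9f8e-1b7a-4c3d-9e5f-0a1b2c3d4e5f"
# Same six fstab fields the tee command above appends:
# device, mount point, filesystem type, mount options, dump, fsck pass
entry="UUID=${UUID} /mnt/disks/local-pv-1 ext4 discard,defaults,nofail 0 2"
echo "$entry"
```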
Add config options to your release to collect the node name and mount path from the customer:
config:
- name: local-pv
  title: Local PV Configuration
  items:
  - name: local-pv-1-path
    title: Local PV 1 Path
    type: text
  - name: local-pv-1-node
    title: Local PV 1 Node
    type: text
Then define a StorageClass in your release YAML and use template functions to create a PersistentVolume, along with a PersistentVolumeClaim referencing that class:
---
# kind: scheduler-kubernetes
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-pv
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
# kind: scheduler-kubernetes
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-pv
  local:
    path: '{{repl ConfigOption "local-pv-1-path" }}'
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - '{{repl ConfigOption "local-pv-1-node" }}'
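For reference, here is how the templated fields of the PersistentVolume above render, assuming hypothetical customer-supplied values of /mnt/disks/local-pv-1 for Local PV 1 Path and node-1 for Local PV 1 Node:

```yaml
# Rendered spec fragment (values are hypothetical examples)
local:
  path: /mnt/disks/local-pv-1
nodeAffinity:
  required:
    nodeSelectorTerms:
    - matchExpressions:
      - key: kubernetes.io/hostname
        operator: In
        values:
        - node-1
```

The nodeAffinity is required for local volumes: it pins the PV to the one node that actually has the disk, and WaitForFirstConsumer delays binding until a pod using the claim is scheduled.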
---
# kind: scheduler-kubernetes
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: local-pv-1-claim
  annotations:
    replicated.com/no-rewrite-storage-class: "true"
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: local-pv
  resources:
    requests:
      storage: 100Gi
---
# kind: scheduler-kubernetes
apiVersion: apps/v1
kind: Deployment
metadata:
  name: local-pv
spec:
  selector:
    matchLabels:
      app: local-pv
  template:
    metadata:
      labels:
        app: local-pv
    spec:
      containers:
      - name: local-pv-container
        image: nginx:alpine
        ports:
        - containerPort: 80
          name: www
        volumeMounts:
        - name: local-pv-storage
          mountPath: /usr/share/nginx/html
      volumes:
      - name: local-pv-storage
        persistentVolumeClaim:
          claimName: local-pv-1-claim