I encountered a strange behaviour with my Azure Kubernetes Service (AKS) pod.
The pod mounts an Azure managed disk that holds a Postgres database.
The pod crashes only if I use one specific name for the disk (the original name).
If I create another disk from the same snapshot with a different name, the pod starts fine.
Why does the disk name matter at all? Is it cached somewhere?
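One thing I can check on the Azure side (a rough sketch, assuming the Azure CLI is logged in; substitute the resource group and disk name from the diskURI further down) is whether the disk is still reported as attached or reserved:

# Show the managed disk's attachment state ("Attached", "Reserved" or "Unattached")
# and which VM/node currently holds it, if any.
az disk show \
  --resource-group MC_westeurope \
  --name temp2 \
  --query "{state:diskState, managedBy:managedBy}" \
  --output table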
This is the relevant output of kubectl describe pod:
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  22s               default-scheduler  Successfully assigned default/appsdevjiradb to aks-mynode-38030476-1
  Normal   Pulled     5s (x3 over 22s)  kubelet            Container image "appsacr.azurecr.io/jiradb:jiradb-14.4-0607" already present on machine
  Normal   Created    5s (x3 over 22s)  kubelet            Created container appsdevjiradb
  Normal   Started    5s (x3 over 22s)  kubelet            Started container appsdevjiradb
  Warning  BackOff    3s (x2 over 20s)  kubelet            Back-off restarting failed container dbpod in pod dbpod_default(30039926-dd7a-4583-8f5b-82f5a6839149)
Do you have any idea what else I can check?
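So far the only other things I can think of are the usual pod-level checks (a sketch, using the pod name dbpod from the events above):

# Logs from the last crashed container instance
kubectl logs dbpod --previous

# Full pod description, including volume attach/mount events
kubectl describe pod dbpod

# Cluster events, e.g. FailedAttachVolume / FailedMount
kubectl get events --sort-by=.metadata.creationTimestamp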
This is my YAML file:
apiVersion: v1
kind: Pod
metadata:
  name: jiradb
  labels:
    name: jiradb
spec:
  containers:
  - name: jiradb
    image: appsacr.azurecr.io/repo:image-version-0.1
    env:
    - name: POSTGRES_PASSWORD
      value: *****
    - name: PGDATA
      value: /var/lib/postgresql/data/pgdata
    ports:
    - containerPort: 5432
    volumeMounts:
    - mountPath: /var/lib/postgresql/data
      name: dbdata
      subPath: pgdata
  volumes:
  - name: dbdata
    azureDisk:
      kind: Managed
      diskName: temp2
      diskURI: /subscriptions/<subid>/resourceGroups/MC_westeurope/providers/Microsoft.Compute/disks/temp2
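If it helps, I could also mount the disk in a throwaway pod and inspect its contents directly (a rough sketch; it assumes the crashing pod is deleted first so the disk is free to attach, and disk-inspect plus the mount path are just placeholder names):

# Throwaway pod that mounts the same managed disk so the data directory
# (pgdata, ownership, lost+found) can be inspected without starting Postgres.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: disk-inspect
spec:
  containers:
  - name: shell
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - mountPath: /mnt/dbdata
      name: dbdata
  volumes:
  - name: dbdata
    azureDisk:
      kind: Managed
      diskName: temp2
      diskURI: /subscriptions/<subid>/resourceGroups/MC_westeurope/providers/Microsoft.Compute/disks/temp2
EOF

# Then look at what is actually on the disk:
kubectl exec -it disk-inspect -- ls -la /mnt/dbdata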