Cool, thanks! It went through that and failed with the following error:
Swapping PVC prometheus-k8s-db-prometheus-k8s-0 in monitoring to the new StorageClass
Marking original PV pvc-a4fbfba9-f5e7-43b3-96b8-bfa5b350bfd0 as to-be-retained
Marking migrated-to PV pvc-0b327f5e-341c-4fbe-ada0-d35185df8bec as to-be-retained
Deleting original PVC prometheus-k8s-db-prometheus-k8s-0 in monitoring to free up the name
Deleting migrated-to PVC prometheus-k8s-db-prometheus-k8s-0 in monitoring to release the PV
Removing claimref from original PV pvc-a4fbfba9-f5e7-43b3-96b8-bfa5b350bfd0
Removing claimref from migrated-to PV pvc-0b327f5e-341c-4fbe-ada0-d35185df8bec
Creating new PVC prometheus-k8s-db-prometheus-k8s-0 with migrated-to PV pvc-0b327f5e-341c-4fbe-ada0-d35185df8bec
failed to swap PVs for PVC prometheus-k8s-db-prometheus-k8s-0 in monitoring: failed to create migrated-to PVC prometheus-k8s-db-prometheus-k8s-0 in monitoring: object is being deleted: persistentvolumeclaims "prometheus-k8s-db-prometheus-k8s-0" already exists
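The "object is being deleted … already exists" error looks like a race with Kubernetes' asynchronous PVC deletion: the old PVC lingers under a finalizer while a new object with the same name is created. A minimal sketch of a wait-until-gone helper (the helper name and the ~30-try timeout are assumptions for illustration, not pvmigrate's actual code):

```shell
# Poll until the given "get" command fails, i.e. the object is gone.
# Returns non-zero if the object is still present after ~30 tries.
wait_for_deletion() {
  tries=0
  while "$@" >/dev/null 2>&1; do
    tries=$((tries + 1))
    if [ "$tries" -ge 30 ]; then
      return 1
    fi
    sleep 1
  done
  return 0
}

# Usage against a real cluster, with the names from the log above:
# wait_for_deletion kubectl get pvc prometheus-k8s-db-prometheus-k8s-0 -n monitoring
```

Recreating the PVC only after the helper returns would avoid the name collision seen in the log.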
Hi, I saw you have a new tag here (Tags · replicatedhq/pvmigrate · GitHub). I am wondering whether that change has been reflected in the latest script? Thanks!
Thanks! I just tried it and it seems to fix the previous issue. However, it failed again:
Migrating data from default to longhorn
PV pvc-0b327f5e-341c-4fbe-ada0-d35185df8bec does not match source SC default, not migrating
PV pvc-5bb88f68-d88d-4f9f-b3da-6635237bb096 does not match source SC default, not migrating
PV pvc-7c182312-4d57-4b05-98f3-5c9794fd3690 does not match source SC default, not migrating
PV pvc-870b07c5-215c-4081-ae29-6d6332e54ba6 does not match source SC default, not migrating
Found 3 matching PVCs to migrate across 1 namespaces:
namespace: pvc: pv: size:
monitoring prometheus-k8s-db-prometheus-k8s-0 pvc-0b327f5e-341c-4fbe-ada0-d35185df8bec 0
monitoring prometheus-k8s-db-prometheus-k8s-0 pvc-0b327f5e-341c-4fbe-ada0-d35185df8bec 0
monitoring prometheus-k8s-db-prometheus-k8s-1 pvc-d54aafc3-dab6-435f-8e1b-1264a485493b 10Gi
Creating new PVCs to migrate data to using the longhorn StorageClass
failed to find existing PV pvc-0b327f5e-341c-4fbe-ada0-d35185df8bec for PVC prometheus-k8s-db-prometheus-k8s-0 in monitoring
The same PVC shows up twice; do you have any idea what the issue could be?
That PV pvc-0b327f5e-341c-4fbe-ada0-d35185df8bec is in there three times, actually: it is also listed above as "does not match source SC default, not migrating". Would you mind running kubectl get pvc -A and kubectl get pv and posting the output here?
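To spot the stale binding without pasting full output, a small helper over kubectl get pv -o json can summarize each PV's claim and StorageClass (the helper name is an assumption, not part of pvmigrate):

```shell
# Print "pv claim-namespace/claim-name storageclass" for each PV, so a
# PV bound to the wrong claim or StorageClass stands out at a glance.
summarize_pvs() {
  python3 -c '
import json, sys
for pv in json.load(sys.stdin).get("items", []):
    claim = pv["spec"].get("claimRef") or {}
    print(pv["metadata"]["name"],
          claim.get("namespace", "-") + "/" + claim.get("name", "-"),
          pv["spec"].get("storageClassName", "-"))
'
}

# Usage against a real cluster:
# kubectl get pv -o json | summarize_pvs
```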
Would it be possible for you to test again on a fresh instance? I think the unsuccessful migrations here caused an issue, and while we could fix it with surgery, that is not something I intend to automate.
failed to scale down pods: pod prometheus-k8s-0 in monitoring mounting prometheus-k8s-db-prometheus-k8s-0 was created at 2022-01-27T16:09:30Z, after scale-down started at 2022-01-27T16:08:36Z. It is likely that there is some other operator scaling this back up
I think prometheus-operator was doing that scaling up/down. Is there a way we could find out which operator is scaling this back up?
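One way to see which controller owns (and therefore rescales) the StatefulSet is to read its ownerReferences; with prometheus-operator the owner is typically the Prometheus custom resource. A hedged sketch (the helper name is an assumption):

```shell
# Print "kind/name" for each owner of a resource, given its JSON on stdin.
owner_of() {
  python3 -c '
import json, sys
for r in json.load(sys.stdin)["metadata"].get("ownerReferences", []):
    print(r["kind"] + "/" + r["name"])
'
}

# Usage against a real cluster, with the StatefulSet behind the pods above:
# kubectl get statefulset prometheus-k8s -n monitoring -o json | owner_of
```

If the output names a Prometheus resource, that confirms prometheus-operator is the controller scaling the pods back up.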
Those two pods are using close to 4000m of CPU. Is that normal? Is there a way we could make that smaller? (We could edit the pods directly, for example, but we wanted a way to do it automatically.)
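If prometheus-operator manages these pods, CPU requests usually come from spec.resources on the Prometheus custom resource, so edits made directly to the pods get reverted by the operator. A hedged sketch of patching the CR instead (the CR name "k8s" and the 500m value are assumptions; adjust to your cluster):

```shell
# Hypothetical patch body: lower the CPU request on the Prometheus CR so
# the operator re-renders the StatefulSet with smaller requests.
patch='{"spec":{"resources":{"requests":{"cpu":"500m"}}}}'

# Usage against a real cluster:
# kubectl patch prometheus k8s -n monitoring --type merge -p "$patch"
```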