kURL upgrade requires kubeconfig on worker nodes

Hi, I am trying the upgrade procedure for an embedded cluster created using kURL:

  • I create the cluster using older versions of Kubernetes and the add-ons
  • I deploy our application to the cluster
  • I generate a new YAML file with the cluster definition, specifying the latest versions of Kubernetes and the add-ons (a rough sketch of such a spec is shown after this list)
  • I run the upgrade from the master node
  • When asked, I drain the master node (the upgrade of the master node goes smoothly)
  • When asked, I drain the first worker node
  • After the node is drained, I run the command (shown on the master node) on the worker node
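
For illustration, the new cluster definition looks roughly like this; apart from the Kubernetes version, the add-on versions below are just placeholders and not necessarily the ones I actually used:

apiVersion: cluster.kurl.sh/v1beta1
kind: Installer
metadata:
  name: my-cluster
spec:
  kubernetes:
    version: 1.26.3      # matches the kubernetes-version passed to upgrade.sh
  containerd:
    version: latest      # add-on versions here are placeholders
  flannel:
    version: latest
  ekco:
    version: latest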

The command is this:

curl -fsSL https://kurl.sh/version/v2023.03.21-0/e151d1d/upgrade.sh | sudo bash -s kubernetes-version=1.26.3 primary-host=10.15.53.9 secondary-host=10.15.53.8 secondary-host=10.15.53.5 secondary-host=10.15.53.6 secondary-host=10.15.53.4 secondary-host=10.15.53.7

When I run the upgrade command on the worker node, I get the following error:

$ curl -fsSL https://kurl.sh/version/v2023.03.21-0/e151d1d/upgrade.sh | sudo bash -s kubernetes-version=1.26.3 primary-host=10.15.53.9 secondary-host=10.15.53.8 secondary-host=10.15.53.5 secondary-host=10.15.53.6 secondary-host=10.15.53.4 secondary-host=10.15.53.7
⚙  Running upgrade with the argument(s): kubernetes-version=1.26.3 primary-host=10.15.53.9 secondary-host=10.15.53.8 secondary-host=10.15.53.5 secondary-host=10.15.53.6 secondary-host=10.15.53.4 secondary-host=10.15.53.7
Package kurl-bin-utils-v2023.03.21-0.tar.gz already exists, not downloading
SELinux is not installed: no configuration will be applied
W0324 12:58:26.925752   81774 loader.go:223] Config not found: /etc/kubernetes/admin.conf
The connection to the server localhost:8080 was refused - did you specify the right host or port?
W0324 12:58:26.969550   81782 loader.go:223] Config not found: /etc/kubernetes/admin.conf
The connection to the server localhost:8080 was refused - did you specify the right host or port?

To continue, I have to copy the kubeconfig (admin.conf) from the master node to /etc/kubernetes/admin.conf on the worker node. Without it, the upgrade fails.
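
For completeness, this is roughly the workaround I use on the worker node before re-running the upgrade script (it assumes root SSH access from the worker to the primary at 10.15.53.9):

# on the worker node: copy the kubeconfig from the primary node
sudo mkdir -p /etc/kubernetes
sudo scp root@10.15.53.9:/etc/kubernetes/admin.conf /etc/kubernetes/admin.conf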

Is that intentional, or did I do something wrong? I did not find anything in the docs about this requirement to copy the admin.conf file, and I would expect the upgrade script to mention it when it prints out the command to be run on the worker node …

Thank you!