For as long as we’ve offered the current generation of Embedded Cluster, I’ve been arguing with the team about using `kotsadm` as the default namespace for deploying your application. When I install my demo application (“SlackerNews”), I run a command named `slackernews-mackerel` to deploy it. Yet it installs into a namespace called `kotsadm`, and you need some insider knowledge to understand why.
## TL;DR

Always use the `namespace` attribute in your `HelmChart` to set the namespace your application installs into.
## Do I really need to care?
One might argue that the whole point of Embedded Cluster is that your customer never has to worry about Kubernetes. In fact, I make that argument all the time. So if my customer doesn’t have to care about Kubernetes, why should I care what namespace my application is installed into?
As the developer, I do need to worry about Kubernetes. Providing a great experience with a Kubernetes-based application to customers without them thinking about Kubernetes takes a little work. That work involves iteration, and probably a lot of time running the `shell` subcommand and poking around inside the cluster. And I have to keep remembering that in Embedded Cluster, my application is installed into this strange namespace.
You might say “so what, I just have to learn that and remember it.” That isn’t so hard, I suppose. You can even carry that knowledge over when you change jobs and use Replicated to distribute a new application. Just another little morsel of knowledge that adds up to what we call experience.
But it’s not just you. What about the support team? They get a support bundle and need to know where to look when they run `sbctl`. They have to remember to look in the right namespace, which probably isn’t the namespace you’re using for your SaaS, or the one you recommend when your customers install with a Helm chart. Now it’s not just your product team; it’s a much bigger part of your organization that needs to learn this little incantation.
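To make that concrete: with a support bundle in hand, the workflow looks something like this (a sketch; the bundle path is illustrative, and check `sbctl --help` for the exact flags in your version):

```shell
# Open a shell whose kubectl is backed by the support bundle
# instead of a live cluster (the path here is illustrative).
sbctl shell -s ~/Downloads/support-bundle.tar.gz

# With the default install, the application workloads are in
# kotsadm, not in the namespace you'd naturally guess:
kubectl get pods -n kotsadm
```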
And while we want to make sure customers never, ever have to worry about Kubernetes, there’s always a case where they might end up running `kubectl` in spite of our efforts. Maybe it’s a one-off thing where they need to troubleshoot a gnarly bug the support bundle isn’t prepared for. Or they need to make a quick runtime patch of a manifest because they can’t wait for the next release. Or maybe, just maybe, they’re curious and taking a look around.
Actually, those curious folks are probably the worst. They’re most likely to wonder what that crazy `kotsadm` namespace is.
## Why is it called `kotsadm`?
If you’ve been using Replicated for more than a couple of years, you’re probably quite familiar with KOTS. If you’re newer here, you may have heard the term but you’re not as familiar with what KOTS is about. KOTS stands for “Kubernetes Off-The-Shelf,” which was the name for the first Kubernetes implementation of the Replicated Platform. KOTS provides a plugin for `kubectl` named `kots` that you use to start the install. The “adm” part refers to the Admin Console you use to manage the install.
As the platform matured and we implemented the current Embedded Cluster, we dropped the need for `kubectl` and a plugin. We created a command, named for your application, that you use to start the install. That command still needs an interactive way to complete the install and manage the application. It steals the Admin Console from KOTS to do this. Along with this theft came the default namespace KOTS uses for the console: our friend `kotsadm`. Applications also continue to install there by default. They don’t need to, but that’s where they’ve always gone.
But now, there’s no longer an obvious tie to something called KOTS. The result is this awkward situation where your customer-facing application lives in a namespace named after internal Replicated tooling. It’s like having your storefront in a building labeled “Corporate Headquarters.” Technically functional, but confusing for anyone who looks around.
## Am I stuck forever?
I have to admit, it took me way too long to realize I was hounding the Embedded Cluster team for no reason. The solution has been available longer than I’ve been working with the Replicated platform. `HelmChart` resources, even in the deprecated version, allow you to specify a `namespace` attribute. The Admin Console uses that namespace to install the chart.
Here’s what I did for SlackerNews:
```yaml
apiVersion: kots.io/v1beta2
kind: HelmChart
metadata:
  name: slackernews
spec:
  # chart identifies a matching chart from a .tgz
  chart:
    name: slackernews
    chartVersion: $VERSION
  namespace: slackernews
  # lots of values handling omitted
```
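The same attribute is available if you’re still on the deprecated `kots.io/v1beta1` version of the resource. Here’s a minimal sketch, assuming the field sits in the same place in the older schema:

```yaml
# Sketch only: the deprecated v1beta1 HelmChart, assuming the
# namespace attribute sits in the same spot as in v1beta2.
apiVersion: kots.io/v1beta1
kind: HelmChart
metadata:
  name: slackernews
spec:
  chart:
    name: slackernews
    chartVersion: $VERSION
  namespace: slackernews
```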
This was a pretty simple change, and almost everything I needed. I also needed to make sure the namespace was created and that it had the correct secrets for accessing my image through the Replicated Proxy Registry. That meant updating my `Application` resource with `additionalNamespaces` to include the namespace.
Think of `additionalNamespaces` as serving the same role for namespaces that `additionalImages` serves for container images. When you specify `additionalImages`, you’re telling the platform “these images need to be available in airgapped environments, so include them in the bundle.” Similarly, `additionalNamespaces` tells the platform “my application needs these namespaces to exist and be properly configured with registry secrets and RBAC permissions.”
```yaml
additionalNamespaces:
  - slackernews
```
## Don’t forget your status
I thought I was done after making those two changes, but I realized I’d missed something when my status informers never got to ready. Most of us follow the example YAML in the docs and write our status informers like this:
```yaml
statusInformers:
  - deployment/slackernews
  - deployment/slackernews-nginx
  # logic for postgres omitted
```
That’s not quite right for this scenario, though. There’s an implicit default namespace in that definition (the dreaded `kotsadm`). The full syntax is `[namespace]/type/name`, which I knew from installing supporting services in other namespaces, but forgot applied to this case.
A quick switch to:
```yaml
statusInformers:
  - slackernews/deployment/slackernews
  - slackernews/deployment/slackernews-nginx
  # logic for postgres omitted
```
and my status informers lit up green.
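For reference, here’s roughly how both Application-level changes sit together in one resource. This is a sketch based on my SlackerNews setup; the names, title, and informers will differ for your application:

```yaml
# Sketch: the Application resource with both changes in place.
# additionalNamespaces and statusInformers both live under spec.
apiVersion: kots.io/v1beta1
kind: Application
metadata:
  name: slackernews
spec:
  title: SlackerNews
  additionalNamespaces:
    - slackernews
  statusInformers:
    - slackernews/deployment/slackernews
    - slackernews/deployment/slackernews-nginx
```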