Wildcard DNS and Kubernetes

An important fact to note about Kubernetes' internal DNS configuration is that it provisions each pod's resolv.conf with the setting options ndots:5. This allows DNS searches to resolve across the nested subdomains that Kubernetes uses internally.

To walk through why this is important, imagine an environment in which a customer has a custom search domain for their network. For the purposes of our example we'll use: some.domain

For convenience, the customer has configured the DNS record *.some.domain to respond with the address of their load balancer.

A pod's /etc/resolv.conf will end up looking something like:

search default.svc.cluster.local svc.cluster.local cluster.local some.domain
options ndots:5

The custom search domain from the host is appended after the cluster-internal search domains.
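The interaction between ndots and the search list can be made concrete with a small sketch. The function below approximates how a glibc-style stub resolver builds its ordered list of candidate queries; it is a simplification for illustration, not the actual resolver code.

```python
def candidate_queries(name, search_domains, ndots=5):
    """Approximate the ordered list of names a glibc-style stub
    resolver will try for the given name."""
    if name.endswith("."):
        # An absolute name (trailing dot) skips search expansion.
        return [name.rstrip(".")]
    expanded = [f"{name}.{domain}" for domain in search_domains]
    if name.count(".") < ndots:
        # Fewer dots than ndots: try the search domains *before*
        # querying the name as-is.
        return expanded + [name]
    return [name] + expanded

# The search list from the pod's resolv.conf above.
search = ["default.svc.cluster.local", "svc.cluster.local",
          "cluster.local", "some.domain"]

print(candidate_queries("redis", search))
print(candidate_queries("replicated.app", search))
```

Note that with ndots:5, even a two-label name like replicated.app (one dot) is expanded through every search domain before being tried as an absolute name.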

An internal DNS request for a service name like redis traverses each search domain in order until it gets a non-NXDOMAIN response.

For example:

redis.default.svc.cluster.local <- this returns a result, so the search stops here

If none of these return a result, CoreDNS forwards the query according to the host's /etc/resolv.conf, which in turn points at the host's upstream DNS server.
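This forwarding behavior comes from the forward plugin in the cluster's CoreDNS Corefile. A typical kubeadm-provisioned Corefile looks roughly like this (trimmed to the relevant lines):

```
.:53 {
    kubernetes cluster.local in-addr.arpa ip6.arpa {
        pods insecure
        fallthrough in-addr.arpa ip6.arpa
    }
    forward . /etc/resolv.conf
    cache 30
}
```

Anything the kubernetes plugin cannot answer for cluster.local falls through to forward, i.e. to the host's configured nameservers.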

For a typical internal query, CoreDNS answers redis.default.svc.cluster.local itself, and the DNS query stops there. But for queries to external resources, like replicated.app, the query falls through the cluster's search domains.

When one of our pods tries to contact replicated.app, we can imagine its DNS queries as:

replicated.app.default.svc.cluster.local <- NXDOMAIN
replicated.app.svc.cluster.local <- NXDOMAIN
replicated.app.cluster.local <- NXDOMAIN
replicated.app.some.domain <- because of wildcard DNS, this returns a result
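The fall-through can be sketched with a toy resolver. The zone data and the 10.0.0.10 load-balancer address below are invented for the example, and the suffix match is a simplified stand-in for real DNS wildcard semantics:

```python
# The customer's (hypothetical) wildcard record.
ZONE = {"*.some.domain": "10.0.0.10"}

def lookup(fqdn):
    """Return the address for fqdn, or None to model NXDOMAIN.
    Wildcards are approximated with a suffix match."""
    if fqdn in ZONE:
        return ZONE[fqdn]
    for pattern, addr in ZONE.items():
        if pattern.startswith("*.") and fqdn.endswith("." + pattern[2:]):
            return addr
    return None

search = ["default.svc.cluster.local", "svc.cluster.local",
          "cluster.local", "some.domain"]

# Walk the search list the way the stub resolver would.
for domain in search:
    answer = lookup(f"replicated.app.{domain}")
    if answer is not None:
        print(f"replicated.app.{domain} -> {answer}")
        break
```

The three cluster-internal expansions return NXDOMAIN, but the fourth matches the wildcard, so the pod never gets to query replicated.app as an absolute name.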

In a typical environment the wildcard record *.some.domain causes no issues, but in Kubernetes the DNS query for replicated.app.some.domain reaches the upstream DNS server, which happily responds with the record for *.some.domain, causing requests to an external service to be inadvertently redirected to the customer's own domain.