Configure iptables firewall to limit ingress and egress to embedded cluster


  1. I’m trying to figure out how iptables configurations are maintained between reboots and updates. If I add commands to iptablesConfig in the installer, will they persist on node reboot?

  2. Do you have an example set of iptables commands that work with the defaults set by kube and flannel that would restrict ingress and egress to the cluster?

For example, I can connect to the etcd service from another node in my subnet. Is the expectation that I would use security groups or an equivalent in my infrastructure? If so, what do I do if I don’t have an equivalent to security groups but want to limit access to etcd to the nodes of my cluster only?

Here are the services and ports I can access from my subnet:

kube-apiserver *:6443
etcd *:2380
etcd *:2379
kubelet *:10250
kube-proxy *:10256
node_exporter *:9100

I know these are required for cluster communication. I simply want to be able to specify which nodes are part of the cluster and which ports are expected for ingress to my application. The iptables rules that get applied are pretty comprehensive, but it’s not obvious where to safely add rules.



We do not recommend changing the firewall rules in most cases; we let the Kubernetes CNI manage them. That being said, you can always apply more restrictive rules: the documentation for each add-on lists the ports that must be open for it to work properly.
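As a rough illustration of more restrictive rules, the sketch below limits the ports from the list above to a known set of cluster node IPs. The node IPs and the exact rule ordering are assumptions: the DROP rules are inserted first and the per-node ACCEPT rules are then inserted above them, so cluster traffic matches before the drop. You would need to verify this ordering against the KUBE-* chains the CNI installs on your nodes before relying on it.

```shell
#!/bin/sh
# Hedged sketch: restrict cluster ports to known node IPs.
# NODE_IPS is an assumption -- replace with the IPs of your cluster nodes.
NODE_IPS="10.0.0.10 10.0.0.11 10.0.0.12"

# Ports observed open from the subnet (apiserver, etcd, kubelet,
# kube-proxy, node_exporter).
PORTS="6443 2379 2380 10250 10256 9100"

# Insert a DROP for each port at the top of INPUT first...
for port in $PORTS; do
  iptables -I INPUT -p tcp --dport "$port" -j DROP
done

# ...then insert ACCEPTs for each cluster node, which land above the
# DROPs, so traffic from cluster peers is still allowed.
for ip in $NODE_IPS; do
  for port in $PORTS; do
    iptables -I INPUT -p tcp -s "$ip" --dport "$port" -j ACCEPT
  done
done
```

Because `-I` inserts at the head of the chain, later inserts sit above earlier ones; run the ACCEPT loop last so it wins. Test from a non-cluster host before treating this as final.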

As for the iptablesConfig section, this is not a feature we actively maintain or recommend. You can add rules there, and they are applied before the node is installed, but persistence is not handled. If you want to persist them, you need to use something like iptables-save after the install.
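One hand-rolled way to persist rules, sketched below under the assumption of a systemd-based distro and the `/etc/iptables/rules.v4` path (both assumptions; on Debian/Ubuntu the iptables-persistent package does this for you):

```shell
# Save the current rules to disk (path is an assumption; adjust per distro).
mkdir -p /etc/iptables
iptables-save > /etc/iptables/rules.v4

# Restore them at boot with a oneshot systemd unit.
cat <<'EOF' > /etc/systemd/system/iptables-restore.service
[Unit]
Description=Restore saved iptables rules
Before=network-pre.target

[Service]
Type=oneshot
ExecStart=/sbin/iptables-restore /etc/iptables/rules.v4

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable iptables-restore.service
```

Re-run the `iptables-save` step whenever you change the rules, otherwise the boot-time restore will reapply the old set.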

Let me know if you have more questions.