Create network for container communication without exposing ports

At present we are publishing all our container ports whenever we need any container-to-container communication, and then we use the host IP address, as in:

      // in app config:
      "{{repl HostPrivateIpAddress "db" "postgres" }}"
      default.port="{{repl ConfigOption "db_external_port"}}"

This means that we expose our postgres port on the host. This is a big security no-no!
I looked through the docs extensively, and even your Native Scheduler Examples use the above pattern everywhere. At no point do I see an example of using a container name or ID for communication without exposing the ports externally.
I believe the only way to avoid exposing ports is to use a non-default, user-defined network that allows inter-container communication. Then we can resolve container names and access ports without exposing them on the host interface.
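Outside of Replicated, the plain-docker version of this idea looks like the following; a sketch using the docker CLI, where the network, container, and image names (`my-net`, `db`, `app`, `my-app`) are illustrative:

```shell
# Create a user-defined bridge network. Unlike the default bridge,
# it provides automatic DNS resolution of container names.
docker network create --driver bridge my-net

# Start postgres on that network WITHOUT publishing any port.
docker run -d --name db --network my-net postgres:14

# The app container can now reach the database as db:5432;
# nothing is listening on any of the host's interfaces.
docker run -d --name app --network my-net my-app
```

The key point is that no `-p`/publish flag appears anywhere except where you deliberately want host exposure.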
Looks like you have a way to do just that:

Unfortunately, this is just for attaching a container to a specified network; it is not clear how I can create such a network.
For example, I was hoping I could do something like:

// create my own network
               driver: bridge
// inside postgres container definition with no public_port
            network: my-net   // attach container to user-defined network (this looks already possible)

@Yugabyte Inc.

FWIW, I managed to get it to work with network_mode: container:...
So now I can hide the database port.
But it would be nice to get user-defined bridge networks working.
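For reference, the underlying docker feature that `network_mode: container:...` maps to is joining another container's network namespace. A minimal CLI sketch (container and image names are placeholders):

```shell
# Run the database without publishing its port.
docker run -d --name db postgres:14

# Run the app inside db's network namespace: the two containers
# share one network stack, so the app reaches postgres at
# localhost:5432 and nothing is exposed on the host.
docker run -d --name app --network container:db my-app
```

One caveat of this pattern: the joined container's lifecycle is tied to the target container's network namespace, so restarting `db` requires restarting `app` as well.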

sb_yb, is your concern that the port is accessible from outside the host (using one of the host’s IP addresses) or that the port is accessible on the host itself (using, for example)? Also is this for multi-node or single node installs?

Single node. My concern is that the port is published and hence accessible from outside the host.
I want my app to access the db port without publishing it.
Basically want to use
containerName:port for addressing.
This does not work with the default docker bridge network, because container-name resolution (docker's embedded DNS) is only available on user-defined networks.
I can create my own network using the default bridge driver and then use network: my-net in my container block in replicated.yml.
But with this solution I will have to tell my customers to always perform a manual step before install (docker network create my-net).

Have you tried using docker0 IP address at install time? HostPrivateIpAddress template function returns the IP address specified at install time; it’s not necessarily any of the host addresses.

This configuration does require docker to be installed first so that the bridge network is created.

Basically, we want to expose our nginx port to the outside world, but not our app and postgres ports.
Yet postgres should be accessible to the app, and the app should be accessible to nginx.
Do you have pointers to more detailed documentation or examples?

FWIW, I have already gone through the Native Scheduler Examples and do not see any that are applicable.
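In plain docker terms, the topology described above (nginx public; app and postgres internal) can be sketched like this; all names, images, and ports are illustrative:

```shell
docker network create app-net

# postgres: reachable only as db:5432 from inside app-net
docker run -d --name db --network app-net postgres:14

# app: talks to db:5432; reachable only as app:8080 inside app-net
docker run -d --name app --network app-net my-app

# nginx: the only container with a published port
docker run -d --name web --network app-net -p 443:443 nginx
```

Only `web` binds a host port; `db` and `app` are invisible from outside the bridge network.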

We already use HostPrivateIpAddress in our config for inter container communication.
Do we still publish the ports on each container using



Okay I gave it a try anyway.
I added --bip

ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock --bip=

Then I restarted docker and inspected the default bridge network. Now it is using the 192.x.x.x range instead of the default 172.x.x.x range.

    "Name": "bridge",
    "Id": "dca46da3f9db393948aa5468faf8f34dc69f343b405f9bf1fc35c571f3459020",
    "Created": "2022-08-07T00:31:48.166002055Z",
    "Scope": "local",
    "Driver": "bridge",
    "EnableIPv6": false,
    "IPAM": {
        "Driver": "default",
        "Options": null,
        "Config": [
            {
                "Subnet": "",
                "Gateway": ""
            }
        ]
    },
    "Internal": false,
    "Attachable": false,
    "Ingress": false,
    "ConfigFrom": {
        "Network": ""
    },
    "ConfigOnly": false,
    "Containers": {
        "3877470cfce14e18b01dfccd5a2bcc1ab32903d813065f88ea2fe2a69b4a7da2": {
            "Name": "replicated",
            "EndpointID": "12659f46238fdcd5331fdb721ecaf3f85719f834bcb9c5bc252b3c23fcd33a50",
            "MacAddress": "02:42:c0:a8:00:04",
            "IPv4Address": "",
            "IPv6Address": ""
        }
    }

The containers also moved to the 192.x.x.x range. So far everything was as expected; tell me if you see something wrong in the above.

Then I restarted my app, and I see a log line printed in the replicated container showing that the HostPrivateIpAddress function was called.

WARN 2022-08-07T00:55:11+00:00 [replicated-native] templates_context_start.go:106 Node private ip address template function used within current component container configuration

(On a side note, I do not know why this is a warning.)

I checked the config file that got generated. It still contains a host private IP address that is not in the 192.168.x.x range but in 10.x.x.x.

- targets: ['{{repl HostPrivateIpAddress "prometheus" "prom/prometheus" }}:{{repl ConfigOption "prometheus_external_port"}}']

resolved to

- targets: [

Okay, I figured it out.

  • I can publish the port to the docker0 interface instead of the default eth0
    then I can still reach the container from other containers on docker0 gateway ip - (by default on and public_port

  • Then I also add

    disable_publish_all_ports: true

so that it does not automatically publish to a random port on the host interface.
Not sure if this latter part is needed, but I will keep it for good measure.
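The docker-level behavior the first step relies on is that a published port can be bound to a single interface: `-p` accepts the form `ip:hostPort:containerPort`. A sketch assuming the stock docker0 gateway address `172.17.0.1` (yours will differ after changing `--bip`):

```shell
# Publish postgres only on the docker0 gateway address.
# Other containers can reach 172.17.0.1:5432 over the bridge,
# but nothing is bound on the host's external interfaces.
docker run -d --name db -p 172.17.0.1:5432:5432 postgres:14
```

The equivalent idea in replicated.yaml would be to point the bind address at the bridge gateway rather than the default wildcard address.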

I pretty much relied on GitHub code search and found your test code to figure this out.
None of this behavior is well documented, though :frowning: