Rancher implements a CNI framework, which provides the ability to select different network drivers within Rancher. To leverage the CNI framework, an environment must have the Network Services infrastructure service deployed. By default, all environment templates have Network Services enabled. These services host the internal DNS server and manage routing to published ports on the host (via iptables).
Besides the Network Services infrastructure service, you select which type of networking plugin/driver you'd like your services to use. In our default environment templates, we have enabled the IPsec network driver to create a simple and secure overlay network using IPsec tunneling.
When a network driver is launched into the environment, it automatically creates a default network. Any services using the `managed` network will use this default network.
When using Rancher’s IPsec networking prior to the 1.2 release, a container in the `managed` network would be assigned both a Docker bridge IP (`172.17.0.0/16`) and a Rancher managed IP (`10.42.0.0/16`) on the default `docker0` bridge. With the adoption of the CNI framework, any container launched in the `managed` network will only have the Rancher managed IP (default subnet: `10.42.0.0/16`).
The Rancher managed IP address will not be present in Docker metadata, which means it will not appear in `docker inspect`. Certain images may not work if they require a Docker bridge IP. Any ports published on a host will not be shown in `docker ps`, as Rancher manages separate iptables rules for the networking.
By default, all containers within the same environment are reachable via the `managed` network. If you want to alter the communication between containers, you can set up a network policy. If you are facing issues with cross-host communication, please refer to our troubleshooting documentation.
Services launched in the UI can change their networking options by navigating to the Networking tab when adding a service. In the UI, all options are available for services except for `container` networking. In order to use `container` networking for a service, you can use either the Rancher CLI, Rancher Compose, or the Docker CLI to launch the container.
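As a minimal sketch of the Rancher Compose approach, a `docker-compose.yml` like the following could attach a service to another container's network. The service, image, and container names here are placeholders, not part of the official example:

```yaml
# Hypothetical sketch: "sidecar" shares the network namespace of an
# existing container named "web" (compose v1 "net" syntax).
sidecar:
  image: alpine
  command: sleep 3600
  net: "container:web"
```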
By default, containers launched in Rancher using the UI or Rancher CLI use the `managed` network, which uses Rancher’s managed overlay network. All containers in the `managed` network are able to communicate with each other regardless of which host the container was deployed on. Most of Rancher’s features, such as load balancers or the DNS service, require the service to be in the `managed` network.
Inside the container, the `ip addr` or `ifconfig` commands will show one network interface (i.e. `eth0`) along with the loopback interface (i.e. `lo`). The IP address of the network interface will be one from Rancher’s managed subnet. The default subnet is `10.42.0.0/16`, but it can be configured to your own subnet.
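As a quick check from the host, you can inspect the interface inside a running container. The container name `web1` is a placeholder, and this assumes the image includes the `ip` utility:

```bash
# eth0 should carry an address from the managed subnet
# (10.42.0.0/16 by default). "web1" is a placeholder name.
docker exec web1 ip addr show eth0
```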
Note: For any containers relying on networking launched from a network driver (i.e. `managed` or based on the name of the network driver), if the network infrastructure service (e.g. `ipsec`) is deleted, then networking will fail for those containers.
For any containers launched through the Docker CLI, an extra label, `--label io.rancher.container.network=true`, can be used to launch the container into the `managed` network. Without this label, containers launched from the Docker CLI will use the `bridge` network.
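For example, using `nginx` purely as a placeholder image:

```bash
# Attach a CLI-launched container to Rancher's managed network.
# Without the label, the container would only join Docker's bridge network.
docker run -d --label io.rancher.container.network=true nginx
```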
If you want to launch a container in only the `managed` network, you’d need to add `--net=none` and `--label io.rancher.container.network=true` for the container to be started without the `bridge` network.
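A sketch of such a command, again with a placeholder image:

```bash
# Start with no Docker networking, so Rancher attaches only the
# managed network interface.
docker run -d --net=none --label io.rancher.container.network=true nginx
```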
When a container is launched with `none` for networking, the container is launched with no networking enabled. This is equivalent to launching a container from the Docker command line with the option `--net=none`.
Inside the container, the `ip addr` or `ifconfig` commands will not show any network interfaces except for the loopback (i.e. `lo`).
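You can see this directly with plain Docker; the image choice is just an example:

```bash
# Only the loopback interface should be listed.
docker run --rm --net=none alpine ip addr
```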
When a container is launched with `host` networking, the container is launched with the same networking interfaces available to the host. This is equivalent to launching a container from the Docker command line with the option `--net=host`.
Inside the container, the `ip addr` or `ifconfig` commands will show the same networking interfaces as the host.
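For example:

```bash
# The output should match `ip addr` run on the host itself.
docker run --rm --net=host alpine ip addr
```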
When a container is launched with `bridge` networking, the container is launched on Docker’s default bridge. By default, this is `docker0`, unless the system administrator has changed it on the host. This is equivalent to launching a container from the Docker command line with the option `--net=bridge`.
Inside the container, the `ip addr` or `ifconfig` commands will show one network interface (i.e. `eth0`) along with the loopback interface (i.e. `lo`). The IP address of the network interface will be one from Docker’s subnet. The default subnet used by Docker is `172.17.0.0/16`.
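For example:

```bash
# eth0 should carry an address from Docker's subnet
# (172.17.0.0/16 by default).
docker run --rm --net=bridge alpine ip addr show eth0
```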
When a container is launched with networking from another container, the container is launched using the networking resources of the other container. This is equivalent to launching a container from the Docker command line with the option `--net=container:<CONTAINER_NAME>`.
Inside the container, the `ip addr` or `ifconfig` commands will show the same networking interfaces as the container that was selected. The actual IP address depends on the network mode of the original container. If the original container used `bridge` mode, then the IP address would be in Docker’s subnet.
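A minimal illustration with plain Docker, where the names and images are placeholders:

```bash
# Start a container, then launch a second one that shares its network
# namespace; both see the same interfaces and IP address.
docker run -d --name web nginx
docker run --rm --net=container:web alpine ip addr
```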
To leverage the CNI framework, you can enable a network infrastructure service, which is created from a network driver defined in a YAML file. In the `network_driver` key of the YAML, there are several options that can be defined.
Here is an example of Rancher’s IPsec infrastructure service. The `network_driver` is configured in the `rancher-compose.yml` file.
```yaml
ipsec:
  network_driver:
    name: Rancher IPsec
    default_network:
      name: ipsec
      host_ports: true
      subnets:
      - network_address: $SUBNET
      dns:
      - 169.254.169.250
      dns_search:
      - rancher.internal
    cni_config:
      '10-rancher.conf':
        name: rancher-cni-network
        type: rancher-bridge
        bridge: $DOCKER_BRIDGE
        bridgeSubnet: $SUBNET
        logToFile: /var/log/rancher-cni.log
        isDebugLevel: ${RANCHER_DEBUG}
        isDefaultGateway: true
        hostNat: true
        hairpinMode: true
        mtu: ${MTU}
        linkMTUOverhead: 98
        ipam:
          type: rancher-cni-ipam
          logToFile: /var/log/rancher-cni.log
          isDebugLevel: ${RANCHER_DEBUG}
          routes:
          - dst: 169.254.169.250/32
```
- `name`: The name of the network driver.
- `default_network`: Defines options for the networking in the environment.
  - `name`: The name of the default network, which can be used when selecting a network mode for a service.
  - `host_ports`: By default, ports on a host are allowed to be published, but you can create a network that does not allow publishing ports on the host.
  - `subnets`: Selects the network addresses for the subnet of the managed overlay network.
  - `dns` / `dns_search`: These values will be auto-populated in the containers.
For the network driver, you can set the CNI configuration within `cni_config`. The example above shows the CNI configuration for Rancher’s IPsec infrastructure service.
Most of the options specified in the CNI config are generic, except a few which are specific to Rancher’s CNI plugin implementation. This configuration can be customized by third-party CNI plugin providers when they are integrating with Rancher.
- `bridge`: Specifies the bridge name to be used by the CNI plugin. This is a generic CNI bridge plugin option. For the “Rancher IPsec” plugin, the default is `docker0`.
- `bridgeSubnet`: The subnet to use with this network plugin. This is a Rancher-specific option. For the “Rancher IPsec” plugin, the default is `10.42.0.0/16`. If you would like to use a different subnet, this option needs to be customized when creating a new environment template, which is later used to deploy a new environment with the customized subnet.
- `mtu`: Different cloud providers have different MTU values in their networks. This option allows you to customize the MTU to your needs, and is also Rancher-specific. Be aware that the MTU should be set on every component involved in providing networking: on the host itself, on the Docker daemon, and inside the IPsec or VXLAN infrastructure service. Also, every host in the same environment has to have the same setting; if not all hosts inside an environment are configured with the same MTU value, you will experience random network failures.
Rancher’s IPsec overlay network has an overhead of 98 bytes:

```
MTU of the container's network interface = MTU of the network - 98
```

For example, if your cloud provider’s MTU is 1200 bytes, then you would see an MTU of 1102 (= 1200 - 98) inside the container when you type `ip addr` or `ifconfig`.
Changing the MTU should be done on every component involved in providing networking: on the host itself, on the Docker daemon, and in the IPsec or VXLAN infrastructure service (by defining a new environment template). Also, every host in the same environment should be configured identically. You can follow these steps if you want to change the MTU.

Configure the MTU for the Docker daemon in `/etc/docker/daemon.json` like the example below. For more information, please refer to the Docker documentation on customizing the `docker0` bridge.

```json
{
  "mtu": 1450
}
```
MTU should only be configured in environment templates; configuring a different MTU in an existing environment is not recommended, as the change will only take effect for new containers.