Rancher supports multiple load balancer drivers. A load balancer can be used to distribute network and application traffic to individual containers by adding rules to target services. Any target service will have all of its underlying containers automatically registered as load balancer targets by Rancher. With Rancher, it’s easy to add a load balancer to your stack.
By default, Rancher provides a managed load balancer using HAProxy that can be manually scaled to multiple hosts. The examples in this document cover the different load balancer options, specifically referencing our HAProxy load balancer service. We are planning to add additional load balancer providers, and the options will be the same regardless of provider.
We use a round robin algorithm to distribute traffic to the target services. The algorithm can be customized in the custom HAProxy configuration. Alternatively, you can configure the load balancer to route traffic to target containers that are on the same host as the load balancer container. By adding a specific label to the load balancer, you can configure it to target only containers on the same host as the load balancer (i.e. io.rancher.lb_service.target=only-local) or to prioritize those containers over containers on a different host (i.e. io.rancher.lb_service.target=prefer-local).
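For example, a minimal sketch of a load balancer labeled to prefer local backends (the web service and nginx image here are placeholders):
docker-compose.yml
version: '2'
services:
  lb:
    image: rancher/lb-service-haproxy:<version>
    ports:
    - 80:80
    labels:
      # Prefer containers running on the same host as this load balancer;
      # fall back to containers on other hosts if none are local
      io.rancher.lb_service.target: prefer-local
  web:
    image: nginx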
We’ll review the options for our load balancer for the UI and Rancher Compose and show examples using the UI and Rancher Compose.
Available as of v1.6.11+
By default, if a targeted service of a load balancer is stopped when a request is made to the load balancer, the existing connections to the service will be immediately terminated. Users may get errors like HTTP Bad Gateway (502) when trying to access the load balancer, as the connection to the target service has been dropped. Dropped connections are typically seen when the target service is being upgraded.
To avoid these dropped connections, services can be programmed with a drain timeout so that when load balancers target services, these connections will be drained completely before being terminated.
The drain timeout is set on a service with the drain_timeout_ms option. Connections are drained while a container of the service is in the stopping state and the load balancer has removed the container from its list of backends. A container enters the stopping state during a service upgrade, a service reconcile or a direct container stop.
NOTE: By default, the drain timeout is 0 for a service and connection draining will not happen.
Connection draining has some limitations: it is not supported for containers using the host network or for standalone containers, it requires load balancer image rancher/lb-service-haproxy:v0.7.15 or later, and it requires an environment agent (i.e. the io.rancher.container.agent.role: environmentAdmin label) on the load balancer.
docker-compose.yml
version: '2'
services:
  web:
    image: nginx
    stdin_open: true
    tty: true
  lb:
    image: rancher/lb-service-haproxy:v0.7.15
    ports:
    - 9797:9797/tcp
    labels:
      io.rancher.container.agent.role: environmentAdmin,agent
      io.rancher.container.agent_service.drain_provider: 'true'
      io.rancher.container.create_agent: 'true'
rancher-compose.yml
version: '2'
services:
  web:
    scale: 1
    start_on_create: true
    drain_timeout_ms: 10000
  lb:
    scale: 1
    lb_config:
      port_rules:
      - priority: 1
        protocol: https
        source_port: 9797
        target_port: 80
        service: web
    health_check:
      port: 42
      interval: 2000
      unhealthy_threshold: 3
      healthy_threshold: 2
      response_timeout: 2000
We’ll walk through how to set up a load balancer for our “letschat” application created earlier in the adding services section.
First, create a load balancer by clicking on the dropdown icon next to “Add Service” and selecting Add Load Balancer. By default, the scale will be one container. Provide a name like “LetsChatLB”.
For the port rules, use the default Public access, the default http protocol, a source port of 80, select the “letschat” service, and use a target port of 8080. Click on Create.
Now, let’s see the load balancer in action. In the stack view, there is a link to port 80, which you used as the source port for your load balancer. If you click on it, it will automatically bring up a new tab in your browser and point to one of the hosts that has the load balancer launched. The request is redirected to one of the “LetsChat” containers. If you refresh, the load balancer will redirect the new request to the other container in the “letschat” service.
Rancher provides a load balancer running HAProxy software inside the container to direct traffic to the target services.
Note: Load balancers will only work for services that are using the managed network. If you select any other network choice for your target services, it will not work with the load balancer.
You add a load balancer by clicking the dropdown icon next to the Add Service button and selecting Add Load Balancer.
You can use the slider to select the scale, i.e. how many containers of the load balancer to run. Alternatively, you can select Always run one instance of this container on every host. With this option, your load balancer will scale with any additional hosts that are added to your environment. If you have scheduling rules in the Scheduling section, Rancher will only start containers on the hosts that meet the scheduling rules. If you add a host to your environment that does not meet the scheduling rules, a container will not be started on that host.
Note: The scale of the load balancer cannot exceed the number of hosts in the environment, otherwise there will be a port conflict and the load balancer service will be stuck in an activating state. It will continue to try and find an available host and open port until you edit the scale of this load balancer or add additional hosts.
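In Rancher Compose, the equivalent of “Always run one instance of this container on every host” is the global scheduling label; a minimal sketch (the image tag and published port are placeholders):
docker-compose.yml
version: '2'
services:
  lb:
    image: rancher/lb-service-haproxy:<version>
    ports:
    - 80:80
    labels:
      # Run one load balancer container on every host in the environment
      io.rancher.scheduler.global: 'true'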
You will need to provide a Name and, if desired, a Description of the load balancer.
Next, you’ll define the port rules for the load balancer. There are two types of port rules that can be created: service rules that target existing services, and selector rules that target services matching the selector criteria.
When creating service and selector rules, the hostname and path rules are matched top-to-bottom in the order shown in the UI.
Service rules are port rules to target existing services in Rancher.
In the Access section, you will decide if this load balancer port will be accessible publicly (i.e. accessible outside of the host) or only internally in the environment. By default, Rancher assumes you want the port to be public, but you can select Internal if you want the port to only be accessed by services within the same environment.
Select the Protocol. Read more about our protocol options. If you choose a protocol that requires SSL termination (i.e. https or tls), you will add your certificates in the SSL Termination tab.
Next, you’ll provide the request host, source port and path for where the traffic will be coming from.
Note: Port 42 cannot be used as a source port for load balancers because Rancher uses this port for health checks.
The request host can be a specific HTTP host header for each service. The request path can be a specific path. The request host and request path can be used independently or in conjunction to create a specific request.
domain1.com -> Service1
domain2.com -> Service2
domain3.com -> Service1
domain3.com/admin -> Service2
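Expressed as Rancher Compose port rules, this routing might look like the following sketch (service1 and service2 are placeholders for your service names; the more specific domain3.com/admin rule is listed before the domain3.com rule):
rancher-compose.yml
version: '2'
services:
  lb:
    scale: 1
    lb_config:
      port_rules:
      - source_port: 80
        hostname: domain1.com
        service: service1
        target_port: 80
      - source_port: 80
        hostname: domain2.com
        service: service2
        target_port: 80
      # Hostname plus path is matched before hostname alone
      - source_port: 80
        hostname: domain3.com
        path: /admin
        service: service2
        target_port: 80
      - source_port: 80
        hostname: domain3.com
        service: service1
        target_port: 80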
Rancher supports wildcards when adding host based routing. The following wildcard syntax is supported.
*.domain.com -> hdr_end(host) -i .domain.com
domain.com.* -> hdr_beg(host) -i domain.com.
For each service rule, you select the specific target service to direct traffic to. The list of services is based on all the services within that environment. Along with the service, you select which port to direct the traffic to on the service. This private port on the service is typically the exposed port on the image.
For a selector rule, instead of targeting a specific service, you would provide a selector value. The selector is used to pick up target services based on the labels of a service. When the load balancer is created, the selector rules will be evaluated against any existing services in the environment to see if there are any existing target services. Any additional services or changes to labels on a service would be compared against the selector values to see if the service should be a target service.
For each source port, you can add in request host and/or path. The selector value is provided under target and you can provide a specific port to direct the traffic to on the service. This private port on the service is typically the exposed port on the image.
For example, consider a load balancer with two selector rules:
Source Port: 100; Selector: foo=bar; Port: 80
Source Port: 200; Selector: foo1=bar1; Port: 80
Service A has a foo=bar label and would match the first selector rule. Any traffic to source port 100 would be directed to Service A. Service B has a foo1=bar1 label and would match the second selector rule. Any traffic to source port 200 would be directed to Service B. Service C has both foo=bar and foo1=bar1 labels and matches both selector rules. Traffic from either source port would be directed to Service C.
Note: Currently, if you want to use one selector source port rule for multiple hostnames/paths, you would need to use Rancher Compose to set the hostname/path values on the target services.
The SSL Termination tab provides the ability to add certificates to use for the https and tls protocols. In the Certificate dropdown, you can select the main certificate for the load balancer.
To add a certificate to Rancher, please read about how to add certificates in the Infrastructure tab.
It is possible to provide multiple certificates for the load balancer so that the appropriate certificate is presented to the client based on the requested hostname (see Server Name Indication). This may not work with older clients that don’t support SNI; those will get the main certificate. Modern clients will be offered the certificate from the list that matches the hostname, or the main certificate if there is no match.
You can select the Stickiness of the load balancer. Stickiness is the cookie policy to use when the website uses cookies, so that requests from the same client can be routed to the same backend. The two options supported in Rancher are None and Create new cookie.
Since Rancher uses an HAProxy-based load balancer, you can customize the HAProxy configuration of the load balancer. Whatever you define in this section will be appended to the configuration generated by Rancher.
global
    maxconn 4096
    maxpipes 1024
defaults
    log global
    mode tcp
    option tcplog
frontend 80
    balance leastconn
frontend 90
    balance roundrobin
backend mystack_foo
    cookie my_cookie insert indirect nocache postonly
    server $IP <server parameters>
backend customUUID
    server $IP <server parameters>
We provide the ability to add labels to load balancers and schedule where the load balancer will be launched. Read more details about labels and scheduling here.
We’ll walk through how to set up a load balancer for our “letschat” application created earlier in the adding services section.
Read more about how to set up Rancher Compose.
Note: In our examples, we will use <version> as the image tag for our load balancers. Each version of Rancher will have a specific version of lb-service-haproxy that is supported for load balancers.
We’ll set up the same example that we used above in the UI example. To get started, you will need to create a docker-compose.yml file and a rancher-compose.yml file. With Rancher Compose, we can launch the load balancer.
docker-compose.yml
version: '2'
services:
  letschatlb:
    ports:
    - 80
    image: rancher/lb-service-haproxy:<version>
rancher-compose.yml
version: '2'
services:
  letschatlb:
    scale: 1
    lb_config:
      port_rules:
      - source_port: 80
        target_port: 8080
        service: letschat
    health_check:
      port: 42
      interval: 2000
      unhealthy_threshold: 3
      healthy_threshold: 2
      response_timeout: 2000
Rancher provides a load balancer running HAProxy software inside the container to direct traffic to the target services.
Note: Load balancers will only work for services that are using the managed network. If you select any other network choice for your target services, it will not work with the load balancer.
A load balancer can be scheduled like any other service. Read more about scheduling load balancers using Rancher Compose.
Load balancing is configured with a combination of ports exposed on a host and a load balancer configuration, which can include specific port rules for each target service, custom configuration and stickiness policies.
When working with services that contain sidekicks, you need to use the primary service as the target service, which is the service that contains the sidekick label.
When creating a load balancer, you can add any ports you want exposed on the host. Any of these ports can be used as source ports in the port rules of a load balancer. If you want an internal load balancer, you would not expose any ports on the load balancer, and only add in port rules in the load balancer configuration.
Note: Port 42 cannot be used as a port for load balancers because it’s internally used for health checks.
docker-compose.yml
version: '2'
services:
  lb1:
    image: rancher/lb-service-haproxy:<version>
    # Any ports listed will be exposed on the host that is running the load balancer.
    # To direct traffic to a specific service, a port rule will need to be added.
    ports:
    - 80
    - 81
    - 90
All load balancer configuration options are defined in the rancher-compose.yml under the lb_config key.
version: '2'
services:
  lb1:
    scale: 1
    # All load balancer options are configured in this key
    lb_config:
      port_rules:
      - source_port: 80
        target_port: 80
        service: web1
    health_check:
      port: 42
      interval: 2000
      unhealthy_threshold: 3
      healthy_threshold: 2
      response_timeout: 2000
  web1:
    scale: 2
Port rules are defined in the rancher-compose.yml. Since port rules are defined individually, there may be multiple port rules defined for the same service. By default, Rancher will prioritize these port rules based on a specific priority ordering. If you would like to change the ordering, you can set a specific priority for each rule.
The source port is one of the ports exposed on the host (i.e. a port that is in the docker-compose.yml). If you want to create an internal load balancer, then the source port does not need to match any of the ports in the docker-compose.yml file.
The target port is the private port on the service. This port correlates to the port exposed on the image used to start your service.
There are multiple protocol types that are supported in the Rancher load balancer drivers.
http - By default, if no protocol is set, the load balancer uses http. HAProxy doesn’t decrypt the traffic and passes it directly through.
tcp - HAProxy doesn’t decrypt the traffic and passes it directly through.
https - SSL termination is required. Traffic is decrypted by HAProxy using the provided certificates, which must be added into Rancher before being used in a load balancer. Traffic from the load balancer to the target service is unencrypted.
tls - SSL termination is required. Traffic is decrypted by HAProxy using the provided certificates, which must be added into Rancher before being used in a load balancer. Traffic from the load balancer to the target service is unencrypted.
sni - Traffic is encrypted to the load balancer and to the services. Multiple certificates are provided for the load balancer such that the appropriate certificate is presented to the client based on the hostname requested (see Server Name Indication for more details).
udp - This is not supported for Rancher’s HAProxy provider.
Any additional load balancer providers might support only a subset of the protocols.
Hostname routing is only supported for http, https and sni. Only http and https also support path based routing.
The service name that you want the load balancer to direct traffic to. If the service is in the same stack, you use the service name. If the service is in a different stack, you would use <stack_name>/<service_name>.
rancher-compose.yml
version: '2'
services:
  lb1:
    scale: 1
    lb_config:
      port_rules:
      - source_port: 81
        target_port: 2368
        # Service in the same stack
        service: ghost
      - source_port: 80
        target_port: 80
        # Target a service in a different stack
        service: differentstack/web1
    health_check:
      port: 42
      interval: 2000
      unhealthy_threshold: 3
      healthy_threshold: 2
      response_timeout: 2000
  ghost:
    scale: 2
Rancher’s HAProxy load balancer supports L7 load balancing by allowing you to specify a host header and path in the port rules.
rancher-compose.yml
version: '2'
services:
  lb1:
    scale: 1
    lb_config:
      port_rules:
      - source_port: 81
        target_port: 2368
        service: ghost
        protocol: http
        hostname: example.com
        path: /path/a
    health_check:
      port: 42
      interval: 2000
      unhealthy_threshold: 3
      healthy_threshold: 2
      response_timeout: 2000
  ghost:
    scale: 2
Rancher supports wildcards when adding host based routing. The following wildcard syntax is supported.
*.domain.com -> hdr_end(host) -i .domain.com
domain.com.* -> hdr_beg(host) -i domain.com.
By default, Rancher prioritizes port rules targeting the same service, but if you wanted to, you could customize your own prioritization of the port rules (lower number is higher priority).
rancher-compose.yml
version: '2'
services:
  lb1:
    scale: 1
    lb_config:
      port_rules:
      - source_port: 88
        target_port: 2368
        service: web1
        protocol: http
        hostname: foo.com
        priority: 2
      - source_port: 80
        target_port: 80
        service: web2
        protocol: http
        priority: 1
    health_check:
      port: 42
      interval: 2000
      unhealthy_threshold: 3
      healthy_threshold: 2
      response_timeout: 2000
  web1:
    scale: 2
Instead of targeting a specific service, you can set up a selector. By using selectors, you can define the service links and hostname routing rules on the target service instead of on the load balancer. Services with labels matching the selector become a target in the load balancer.
When using a selector in a load balancer, the lb_config can be set on the load balancer and on any target services that match the selector.
In the load balancer, the selector value is set in the lb_config under selector. The port rule in the lb_config of the load balancer cannot have a service and would typically not have a target port. Instead, the target port is set in port rules on the target service. If you choose to use hostname routing, the hostname and path would also be set on the target service.
Note: Any load balancer using the v1 load balancer YAML fields with selector labels will not be converted to a v2 load balancer, as the port rules on the service would not be updated.
docker-compose.yml
version: '2'
services:
  lb1:
    image: rancher/lb-service-haproxy:<version>
    ports:
    - 81
  # These services (web1 and web2) will be picked up by the load balancer as targets
  web1:
    image: nginx
    labels:
      foo: bar
  web2:
    image: nginx
    labels:
      foo: bar
rancher-compose.yml
version: '2'
services:
  lb1:
    scale: 1
    lb_config:
      port_rules:
      - source_port: 81
        # Target any service that has foo=bar as a label
        selector: foo=bar
        protocol: http
    health_check:
      port: 42
      interval: 2000
      unhealthy_threshold: 3
      healthy_threshold: 2
      response_timeout: 2000
  # web1 and web2 are targeted with the same source port but with different hostname and path rules
  web1:
    scale: 1
    lb_config:
      port_rules:
      - target_port: 80
        hostname: test.com
  web2:
    scale: 1
    lb_config:
      port_rules:
      - target_port: 80
        hostname: example.com/test
If you want to explicitly label the backend in your load balancer configuration, you would use backend_name. This option can be useful if you want to configure custom config parameters for a particular backend.
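For instance, a port rule could assign a backend name that a custom configuration then references; a minimal sketch (the web1 service and mybackend name are placeholders):
rancher-compose.yml
version: '2'
services:
  lb:
    scale: 1
    lb_config:
      port_rules:
      - source_port: 80
        target_port: 80
        service: web1
        # Name the generated HAProxy backend so custom config can reference it
        backend_name: mybackend
      config: |-
        backend mybackend
            <custom backend parameters>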
If you are using the https or tls protocol, you can use certificates that are either added directly into Rancher or from a directory mounted into the load balancer container. The certificates are referenced in the lb_config section of the load balancer.
version: '2'
services:
  lb:
    scale: 1
    lb_config:
      certs:
      - <certName>
      default_cert: <defaultCertName>
Only supported in Compose Files
Certificates can be mounted directly into a load balancer container as a volume. The load balancer container expects the certificates to be in a specific directory structure. If you are using a LetsEncrypt client to generate your certificates, then your directory structure is already in the format that Rancher expects. If you are not using LetsEncrypt, then the directory and the names of the certificate files will need to be structured in a specific way.
Rancher’s load balancer will poll the certificate directories for updates. Any addition/removal of the certificates will be synced via polling every 30 seconds.
All certificates will be located under a single base certificate directory. This directory name will be used in a label on the load balancer service to inform the load balancer where the certificates are.
In this base directory, each certificate that is generated for a specific domain is required to be placed in a sub-directory. The folder name should be the domain name for the certificate, and each folder should contain the private key (i.e. privkey.pem) and certificate chain (i.e. fullchain.pem). The default certificate can be placed in a subdirectory of any name, but the files in that folder must follow the same naming conventions (i.e. privkey.pem and fullchain.pem).
-- certs
|-- foo.com
| |-- privkey.pem
| |-- fullchain.pem
|-- bar.com
| |-- privkey.pem
| |-- fullchain.pem
|-- default_cert_dir_optional
| |-- privkey.pem
| |-- fullchain.pem
...
When launching a load balancer, you must specify the location of the certificates and the location of the default certificate by using labels. If these labels are on the load balancer, the load balancer will ignore any certificates that are in the lb_config key of the load balancer.
Note: You cannot use the certificates added into Rancher in conjunction with mounting certificates into the container through a volume.
labels:
  io.rancher.lb_service.cert_dir: <CERTIFICATE_LOCATION>
  io.rancher.lb_service.default_cert_dir: <DEFAULT_CERTIFICATE_LOCATION>
Certificates can be mounted into the load balancer container by using host bind mounts or by using a named volume with one of our storage drivers as the volume driver.
docker-compose.yml
version: '2'
services:
  lb:
    image: rancher/lb-service-haproxy:<TAG_BASED_ON_RELEASE>
    volumes:
    - /location/on/hosts:/certs
    ports:
    - 8087:8087/tcp
    labels:
      io.rancher.container.agent.role: environmentAdmin
      io.rancher.container.create_agent: 'true'
      io.rancher.lb_service.cert_dir: /certs
      io.rancher.lb_service.default_cert_dir: /certs/default.com
  myapp:
    image: nginx:latest
    stdin_open: true
    tty: true
rancher-compose.yml
version: '2'
services:
  lb:
    scale: 1
    start_on_create: true
    lb_config:
      certs: []
      port_rules:
      - priority: 1
        protocol: https
        service: myapp
        source_port: 8087
        target_port: 80
    health_check:
      healthy_threshold: 2
      response_timeout: 2000
      port: 42
      unhealthy_threshold: 3
      interval: 2000
      strategy: recreate
  myapp:
    scale: 1
    start_on_create: true
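Alternatively, the certificate directory can come from a named volume backed by a storage driver instead of a host bind mount. A sketch, assuming a volume named lb-certs and the rancher-nfs driver (use whichever storage driver is available in your environment):
docker-compose.yml
version: '2'
volumes:
  lb-certs:
    driver: rancher-nfs
services:
  lb:
    image: rancher/lb-service-haproxy:<TAG_BASED_ON_RELEASE>
    volumes:
    # Certificates live in the named volume instead of a host directory
    - lb-certs:/certs
    ports:
    - 8087:8087/tcp
    labels:
      io.rancher.container.agent.role: environmentAdmin
      io.rancher.container.create_agent: 'true'
      io.rancher.lb_service.cert_dir: /certs
      io.rancher.lb_service.default_cert_dir: /certs/default.com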
For advanced users, you can specify custom configuration for the load balancer in the rancher-compose.yml. Please refer to the HAProxy documentation for details on the options you can add for Rancher’s HAProxy load balancer.
rancher-compose.yml
version: '2'
services:
  lb:
    scale: 1
    lb_config:
      config: |-
        global
            maxconn 4096
            maxpipes 1024
        defaults
            log global
            mode tcp
            option tcplog
        frontend 80
            balance leastconn
        frontend 90
            balance roundrobin
        backend mystack_foo
            cookie my_cookie insert indirect nocache postonly
            server $$IP <server parameters>
        backend customUUID
    health_check:
      port: 42
      interval: 2000
      unhealthy_threshold: 3
      healthy_threshold: 2
      response_timeout: 2000
If you want to specify a stickiness policy, you can update the policies in the rancher-compose.yml.
rancher-compose.yml
version: '2'
services:
  lb:
    scale: 1
    lb_config:
      stickiness_policy:
        name: <policyName>
        cookie: <cookieInfo>
        domain: <domainName>
        indirect: false
        nocache: false
        postonly: false
        mode: <mode>
    health_check:
      port: 42
      interval: 2000
      unhealthy_threshold: 3
      healthy_threshold: 2
      response_timeout: 2000
The following example routes traffic from two source ports to different services based on hostname and path.
docker-compose.yml
version: '2'
services:
  web1:
    image: nginx
  web2:
    image: nginx
  lb:
    image: rancher/lb-service-haproxy
    ports:
    - 80
    - 82
rancher-compose.yml
version: '2'
services:
  lb:
    scale: 1
    lb_config:
      port_rules:
      - source_port: 80
        target_port: 8080
        service: web1
        hostname: app.example.com
        path: /foo
      - source_port: 82
        target_port: 8081
        service: web2
        hostname: app.example.com
        path: /foo/bar
    health_check:
      port: 42
      interval: 2000
      unhealthy_threshold: 3
      healthy_threshold: 2
      response_timeout: 2000
To set up an internal load balancer, no ports are listed, but you can still set up port rules to direct traffic to the service.
docker-compose.yml
version: '2'
services:
  lb:
    image: rancher/lb-service-haproxy
  web:
    image: nginx
rancher-compose.yml
version: '2'
services:
  lb:
    scale: 1
    lb_config:
      port_rules:
      - source_port: 80
        target_port: 80
        service: web
    health_check:
      port: 42
      interval: 2000
      unhealthy_threshold: 3
      healthy_threshold: 2
      response_timeout: 2000
  web:
    scale: 1
The certificates must be added into Rancher and are defined in the rancher-compose.yml.
docker-compose.yml
version: '2'
services:
  lb:
    image: rancher/lb-service-haproxy
    ports:
    - 443
  web:
    image: nginx
rancher-compose.yml
version: '2'
services:
  lb:
    scale: 1
    lb_config:
      certs:
      - <certName>
      default_cert: <defaultCertName>
      port_rules:
      - source_port: 443
        target_port: 443
        service: web
        protocol: https
  web:
    scale: 1