Load Balancer

A load balancer distributes incoming traffic between existing virtual machines within the same region.

  • Load balancer — distributes incoming streams between servers according to the parameters configured in the rule;
  • Server — an existing virtual machine added to process the traffic flow;
  • Rule — the configuration for forwarding traffic from the load balancer to a group of servers. In the Control panel interface, a rule combines forwarding settings, availability checks, the balancing algorithm, connection settings, and other settings related to this traffic stream;
  • Availability check — parameters that help the load balancer determine whether a server can handle traffic. Only healthy servers receive traffic;
  • Balancing algorithm — a method for distributing requests between multiple servers to optimize resource use and reduce response time;
  • Connection configurations — settings for all connections passing through the balancer: the number of requests, connection timeouts, and response waiting time.

Creating a Load Balancer

Follow these steps to create a load balancer in an existing project in the Control panel:

  1. Go to the Load balancers section in the project.
  2. Click Create load balancer.
  3. Enter the balancer name. You can leave the automatically generated name.
  4. Select the required region.
  5. Specify the balancer address by selecting a subnet. The Load balancer IP field will be filled with one of the available subnet addresses. Connect a floating IP if necessary.
  6. Specify the basic rule configurations for the balancer.
  7. In the Configuration section of the Servers tab, select the servers to send traffic to for this rule.
  8. Click Create load balancer.

To create a balancer, you only need to select the protocols and ports for the balancer and the servers, and add at least one server. All other rule configurations are filled in by default; you can change them when creating the balancer or afterwards.

When the load balancer configuration is changed, a short-term connection failure may occur. It is usually unnoticeable, but in some configurations or under heavy load it can last several seconds.

Basic Rule Configurations

Protocol and Port

You can change the entered protocol and port for both the balancer and the server in each rule:

  • HTTP, HTTPS, UDP, TCP protocols for the balancer;
  • HTTP, HTTPS, UDP, TCP, PROXY protocols for servers.

Not all protocols are compatible with each other; the interface takes this into account, so incompatible protocols cannot be selected.

Standard ports are automatically assigned for the selected protocol.

Note: you can select a different port for the server when configuring it.

The set port value is shared by all servers.

Note: you cannot change the protocols and ports of a rule in a working balancer; you can only recreate a rule with the necessary configurations.

SSL Certificate

To balance HTTPS traffic, it is recommended to add an SSL certificate to the rule so that the traffic is correctly recognized and distributed to the servers.

Adding an SSL certificate is required to configure the HTTPS → HTTP stream.

If HTTPS → HTTPS stream is selected and the SSL certificate is not uploaded, the data will be transmitted using PROXY protocol without decryption.

If the balancer is deleted, the SSL certificate will be deleted together with it.


Servers

When creating a balancer, select the servers that you want to add to the rule. Specify the required IP if the server has several ports, and set the server weight.

After creating a balancer, you can:

  • assign the server a backup role — the server will not receive connections until all other rule servers fail;
  • suspend — the server will not receive connections until it is returned to work manually;
  • delete from the rule — the server will no longer participate in balancing, but will continue to work.

You can add new servers to the rule of an already working balancer by clicking the Add server button.

Balancing Algorithm

You can choose one of the following balancing algorithms:

  • Round Robin — a cyclical scheduling function: the first request is sent to one server, the second to another, and so on. Once the last server has been sent a request, the cycle starts anew;
  • Least connections — this algorithm takes into account the number of active connections: each subsequent request is sent to the server with the fewest active connections;
  • Sticky Sessions — an algorithm for distributing incoming requests whereby connections are forwarded to a specific server in a group. Using this method, requests are distributed to servers using:

    • APP-cookie — an already existing cookie that is set in the application code;
    • HTTP-cookie — a cookie that the balancer creates and attaches to the session;
    • Source IP — the client’s IP address is hashed and, taking into account the weight of each server in the rule, mapped to the server that will process its requests.

    Requests from the same client will always be passed to the same server. If the designated server is unavailable, the request will be forwarded to another one.
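The selection strategies above can be sketched as follows. This is an illustrative model only, not the balancer's actual implementation: the server names and connection counts are hypothetical, and the Source IP sketch uses a plain hash without the per-server weights the balancer applies.

```python
from itertools import cycle
import hashlib

# Hypothetical server group for a single rule.
servers = ["srv-a", "srv-b", "srv-c"]

# Round Robin: cycle through the servers in order; after the last
# server, the cycle starts anew.
rr = cycle(servers)
def round_robin():
    return next(rr)

# Least connections: pick the server with the fewest active connections.
active = {"srv-a": 4, "srv-b": 1, "srv-c": 2}
def least_connections():
    return min(active, key=active.get)

# Sticky sessions by Source IP: hash the client address so the same
# client always lands on the same server (while it stays healthy).
def source_ip(client_ip):
    digest = int(hashlib.md5(client_ip.encode()).hexdigest(), 16)
    return servers[digest % len(servers)]
```

Note that `source_ip` is deterministic: repeated calls with the same client address return the same server, which is what makes the sessions "sticky".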

Availability Check

You can set the configurations to check the connection to balanced servers.

Availability checks help the balancer to understand if one of the servers is out of order and cannot be used. If one server is unavailable, connections will be redirected to a working server.

Availability checks are enabled by default, but you can disable them on the appropriate tab. Note: if checks are disabled, the server status will be NO_MONITOR.

You can configure the following check parameters:

  • check type: HTTP, PING, TCP;
  • the interval with which the balancer sends checking requests to the servers;
  • connection timeout — how long the balancer waits for a response from the destination; if there is no answer within this time, the connection is considered failed;
  • URL and expected response codes (configurable for the HTTP and HTTPS protocols);
  • healthy threshold — the number of successful checks in a row after which the server is put into operation;
  • unhealthy threshold — the number of failed checks in a row after which the server operation is suspended.
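The threshold behavior can be illustrated with a small state machine. This is a hypothetical sketch of the logic, not the balancer's code: a server changes state only after several successive checks agree, so a single lost probe does not take a healthy server out of rotation.

```python
# Illustrative health-check state machine (assumed logic, for explanation only).
class HealthMonitor:
    def __init__(self, healthy_threshold=3, unhealthy_threshold=3):
        self.healthy_threshold = healthy_threshold
        self.unhealthy_threshold = unhealthy_threshold
        self.state = "ONLINE"
        self.streak = 0  # consecutive checks contradicting the current state

    def record(self, check_ok):
        # A result matching the current state resets the opposing streak.
        expected_ok = self.state == "ONLINE"
        if check_ok == expected_ok:
            self.streak = 0
            return self.state
        self.streak += 1
        threshold = self.healthy_threshold if check_ok else self.unhealthy_threshold
        if self.streak >= threshold:
            self.state = "ONLINE" if check_ok else "ERROR"
            self.streak = 0
        return self.state

m = HealthMonitor()
m.record(False); m.record(False)   # two failures: still ONLINE
assert m.record(False) == "ERROR"  # the third failure crosses the threshold
```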

Connection Configurations

You can set connection time configurations between:

  • incoming requests and the balancer — specify the connection timeout, for which you can set a limit value;

Note: if a limit value is set, you must also specify the maximum number of requests.

  • the balancer and servers — specify the connection timeout, inactivity timeout, and TCP packet wait timeout.
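The difference between a connection timeout and an inactivity timeout can be demonstrated with plain sockets. This sketch is illustrative only and is unrelated to the balancer's implementation.

```python
import socket

# Hypothetical demonstration: a connection timeout bounds the TCP
# handshake, an inactivity timeout bounds the wait for data on an
# established connection.

srv = socket.socket()
srv.bind(("127.0.0.1", 0))  # ephemeral port on localhost
srv.listen(1)
port = srv.getsockname()[1]

# Connection timeout: how long to wait for the connection to be established.
client = socket.create_connection(("127.0.0.1", port), timeout=0.5)
server_side, _ = srv.accept()

# Inactivity timeout: the peer accepts but never sends, so recv() gives up.
client.settimeout(0.2)
try:
    client.recv(1)
    timed_out = False
except socket.timeout:
    timed_out = True

client.close(); server_side.close(); srv.close()
```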

HTTP Headers

In regular mode, the balancer transfers only the original HTTP request to the server, replacing the client’s IP address with its own. Enable the additional header types you need so that the servers receive this information for correct operation or analysis:

  • X-Forwarded-For;
  • X-Forwarded-Port;
  • X-Forwarded-Proto.
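A backend behind the balancer can recover the original client address from these headers. The helper below is a hypothetical sketch (the header values and addresses are made up): the leftmost X-Forwarded-For entry is the client, and any later entries are intermediate proxies.

```python
# Illustrative server-side helper, not part of the balancer itself.
def client_ip(headers, peer_addr):
    xff = headers.get("X-Forwarded-For")
    if not xff:
        # No header: the peer address (the balancer itself) is all we know.
        return peer_addr
    # The leftmost entry is the original client.
    return xff.split(",")[0].strip()

# Example request headers as forwarded by the balancer (values are made up).
headers = {
    "X-Forwarded-For": "198.51.100.20, 10.0.0.5",
    "X-Forwarded-Proto": "https",
    "X-Forwarded-Port": "443",
}
assert client_ip(headers, "10.0.0.5") == "198.51.100.20"
```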

Server Statuses

  • ONLINE — the service is functioning; availability checks are successful;
  • CREATING — the server is being added to the rule;
  • UPDATING — the server settings are being updated;
  • OFFLINE — the server does not accept requests, although the server itself may be active;
  • BACKUP — the server has been moved to the backup role and will be enabled if the other servers in the rule are not working. To return the server to normal operation mode, select the Return to work server menu option;
  • NO_MONITOR — availability checks are disabled, so the service health is not checked. If all the load balancer’s services have this status, the load balancer status will be ONLINE;
  • DRAINING — the server does not accept new connections but continues to process the current ones;
  • ERROR — the service on the specified port either does not respond or does not pass the response type check;
  • DELETING — the server is being removed from the load balancer; the server itself is not affected.

Load Balancer Statuses

The load balancer status depends on the statuses of the servers to which it distributes requests, as well as on its own state.

  • ONLINE — all availability checks are successful;
  • CREATING — the load balancer is being created;
  • UPDATING — the balancer configuration is being updated;
  • OFFLINE — the balancer is disabled and does not process requests;
  • DEGRADED — one of the balancer’s components has the ERROR status;
  • ERROR — either all servers have the ERROR status, or an error has occurred in the balancer itself. If all servers are working correctly and have the ONLINE status while the balancer shows ERROR, please contact technical support;
  • DELETING — the balancer is being deleted.


Statistics

In the balancer card, you can view statistics and find out:

  • how many active connections there currently are;
  • how many connections there are in total;
  • how many connections were not processed;
  • how many bytes were received and how many bytes were sent.

Redirecting HTTP Requests to HTTPS

You can create a balancer using the my.selectel.ru Control panel or the OpenStack Octavia API.

The rule consists of:

  • Listener — a handler that receives the traffic stream coming to the balancer and responds on the configured port and protocol. It redirects traffic to the required server group;
  • Pool — a group of servers bound to one Listener;
  • Servers (Members) — the specific servers within one Pool.

The rule in the Control panel is a simplified model of the OpenStack Octavia terminology. You can use the API to solve more complex tasks.

One such task is redirecting regular HTTP requests (port 80) to HTTPS (port 443). Follow these steps to do this:

  1. Create http_listener on port 80:

    openstack loadbalancer listener create --name http_listener --protocol HTTP --protocol-port 80 lb1
  2. Create the L7 policy policy1 on http_listener with the REDIRECT_TO_URL action pointing to the target URL:

    openstack loadbalancer l7policy create --action REDIRECT_TO_URL --redirect-url https://example.com/ --name policy1 http_listener
  3. Add an L7 rule to policy1 that matches all requests:

    openstack loadbalancer l7rule create --compare-type STARTS_WITH --type PATH --value / policy1


Balancing with the PROXY Protocol

The TCP → PROXY rule can be used when the servers behind the balancer need to know the real IP addresses of the users who access them.

When balancing and terminating HTTP(S) traffic, this task is solved by adding the X-Forwarded-For header and processing it, for example, on nginx servers behind the balancer using the ngx_http_realip_module module.

When balancing TCP traffic from HTTPS clients and transmitting it directly to the web servers (without modification or termination), this header cannot be added. Therefore, you can use the solutions provided by the Proxy Protocol to transfer the header.
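For reference, with version 1 of the PROXY protocol the balancer prepends a single text line to each connection carrying the real client and destination addresses, and the backend reads this line before the application data. Below is a minimal illustrative parser for that header line; the addresses in the example are made up.

```python
# Sketch of a PROXY protocol v1 header parser (illustrative only).
def parse_proxy_v1(line):
    # Format: "PROXY TCP4|TCP6 <src> <dst> <src_port> <dst_port>\r\n"
    parts = line.rstrip("\r\n").split(" ")
    if parts[0] != "PROXY" or len(parts) != 6:
        raise ValueError("not a PROXY protocol v1 header")
    return {
        "family": parts[1],
        "client_ip": parts[2],
        "server_ip": parts[3],
        "client_port": int(parts[4]),
        "server_port": int(parts[5]),
    }

hdr = parse_proxy_v1("PROXY TCP4 198.51.100.20 203.0.113.10 51000 443\r\n")
assert hdr["client_ip"] == "198.51.100.20"
```

In practice the backend does not parse this line by hand: servers such as nginx or HAProxy handle it natively when PROXY protocol support is enabled, as shown in the configuration below.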

Since the Cloud platform Load balancer is based on HAProxy, the solutions below are also relevant for setups based on an HAProxy and nginx bundle.

Follow these steps to configure all the necessary rules on the Load balancer:

  1. Create a rule for balancing TCP traffic on port 80 (HTTP). Specify TCP as the Load balancer protocol and PROXY as the server protocol.
  2. Create a rule for balancing TCP traffic on port 443 (HTTPS). Specify TCP as the Load balancer protocol and PROXY as the server protocol.
  3. Configure the servers that traffic will be distributed to. Specify the balancing algorithm and the necessary settings for availability checks and connection timeouts according to the configuration instructions.

Depending on your tasks, if you do not need HTTP or HTTPS balancing, you can skip the corresponding rule.

To configure nginx on the servers themselves (to work with the PROXY protocol), enable support for the nginx proxy_protocol directive. Learn more in the official nginx documentation.

An example of nginx.conf configured to accept real IP addresses from the Cloud platform Load balancer:

server {
    server_name localhost;

    # ATTENTION! The proxy_protocol parameter only works behind a
    # PROXY-protocol source such as HAProxy.
    # For direct access, this parameter must be removed.
    listen 443 ssl default_server proxy_protocol;

    ssl_certificate      /etc/nginx/ssl/public.example.com.pem;
    ssl_certificate_key  /etc/nginx/ssl/public.example.com.key;

    # HAProxy (balancer) address.
    # We recommend specifying a CIDR, because the balancer IP
    # can change in some cases.
    set_real_ip_from 10.0.0.0/24;  # example CIDR, replace with your subnet
    real_ip_header proxy_protocol;

    root /usr/share/nginx/html;
    index index.html index.htm;

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    location ~ /\.ht {
        deny all;
    }
}
Please note that using LVS and the Direct Server Return method is not supported according to the Cloud platform security policy.