Managing a Load Balancer

Server Statuses

Status Comment
ONLINE: The service is functioning; availability checks are successful.
CREATING: The server is being added to the rule.
UPDATING: The server settings are being updated.
OFFLINE: The server does not accept requests, although the server itself may be active.
BACKUP: The server has been moved to backup status and will be enabled if the other servers of the rule stop working. To return the server to normal operation mode, select the Return to work option in the server menu.
NO_MONITOR: The service health is not checked (no availability checks are performed). If all the load balancer services have this status, the load balancer status will be ONLINE.
DRAINING: The server does not accept new connections but continues to process the current ones.
ERROR: The service on the specified port either does not respond or does not pass the response type check.
DELETING: The server is being removed from the load balancer; the server itself is not affected.

Load Balancer Statuses

The load balancer status depends on the statuses of the servers to which it distributes requests, as well as on its own state.

Status Comment
ONLINE: All availability checks are successful.
CREATING: The load balancer is being created.
UPDATING: The load balancer configuration is being updated.
OFFLINE: The load balancer is disabled and does not process requests.
DEGRADED: One of the load balancer's components has the ERROR status.
ERROR: Either all servers have the ERROR status, or an error has occurred in the operation of the load balancer itself. If all servers are working correctly and have the ONLINE status while the balancer is in this state, please contact technical support.
DELETING: The load balancer is being deleted.
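The same statuses can be checked from the command line; a sketch, assuming the OpenStack client is installed and configured (the balancer name is a placeholder):

```shell
# Show the balancer's provisioning and operating statuses
openstack loadbalancer show <loadbalancer name> \
    -c provisioning_status -c operating_status
```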

Statistics

From the balancer card, you can view statistics and find out:

  • how many active connections there currently are;
  • how many connections there are in total;
  • how many connections were not processed;
  • how many bytes were received and how many bytes were sent.
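The same counters can be retrieved from the CLI; a sketch, assuming the OpenStack client is configured (the balancer name is a placeholder):

```shell
# Show connection and traffic statistics for a balancer.
# The output fields correspond to the list above:
# active_connections, total_connections, request_errors,
# bytes_in, bytes_out
openstack loadbalancer stats show <loadbalancer name>
```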

Redirecting HTTP Requests to HTTPS

Follow these steps to redirect any regular HTTP requests (port 80) to HTTPS (port 443):

  1. Create a load balancer.

  2. Create a listener on port 80:

    openstack loadbalancer listener create --name <listener name> --protocol HTTP --protocol-port 80 <loadbalancer name>
  3. Create an L7 policy on the listener with the REDIRECT_TO_URL action pointing to the HTTPS URL:

    openstack loadbalancer l7policy create --action REDIRECT_TO_URL --redirect-url https://example.com/ --name <policy name> <listener name>
  4. Add an L7 rule to the policy that matches all requests:

    openstack loadbalancer l7rule create --compare-type STARTS_WITH --type PATH --value / <policy name>
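Once the policy and rule are in place, you can check the redirect with curl (the address below is a placeholder for your balancer's public IP):

```shell
# A plain HTTP request to port 80 should now be answered
# with a redirect (Location header) to the HTTPS URL
# configured in the L7 policy
curl -I http://<load balancer IP>/
```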

TCP → PROXY

The TCP → PROXY rule can be used when the servers behind the balancer need to know the real IP addresses of the users who access it.

When balancing and terminating HTTP(S) traffic, this task is solved by adding the X-Forwarded-For header and processing it, for example, on nginx servers behind the balancer using ngx_http_realip_module.
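For that HTTP(S)-termination case, a minimal ngx_http_realip_module fragment might look like this (the balancer subnet here is an assumed example):

```nginx
# Trust the X-Forwarded-For header only when the request
# comes from the balancer's subnet
set_real_ip_from 192.168.1.0/24;
real_ip_header X-Forwarded-For;
```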

When balancing TCP traffic from HTTPS clients and passing it to the web servers directly (without modification or termination), this header cannot be added. In this case, you can use the PROXY protocol to pass the client address.
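With the PROXY protocol, the balancer prepends a short plain-text header to each TCP connection before the client's own bytes; a sketch of a version 1 header (the addresses and ports are made-up examples):

```shell
# Format: PROXY <proto> <client IP> <server IP> <client port> <server port>\r\n
printf 'PROXY TCP4 203.0.113.5 192.168.1.10 56324 443\r\n'
```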

Since the Cloud platform load balancer is based on HAProxy, the solutions below are also relevant for setups based on the HAProxy-nginx bundle.

Follow these steps to configure all the necessary rules on the load balancer:

  1. Create a rule for balancing TCP traffic on port 80 (HTTP). Specify TCP as the load balancer protocol and PROXY as the server protocol.
  2. Create a rule for balancing TCP traffic on port 443 (HTTPS). Specify TCP as the load balancer protocol and PROXY as the server protocol.
  3. Configure the servers that traffic will be distributed to. Specify the balancing algorithm and the necessary settings for availability checks and connection timeouts according to the basic rule configurations.
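In OpenStack CLI terms, each of the rules above corresponds to a TCP listener plus a pool that uses the PROXY protocol; a sketch for port 443 (all names are placeholders):

```shell
# The listener accepts raw TCP on 443; no TLS termination on the balancer
openstack loadbalancer listener create --name <listener name> \
    --protocol TCP --protocol-port 443 <loadbalancer name>

# The pool talks to the backends using the PROXY protocol,
# so nginx can learn the real client address
openstack loadbalancer pool create --name <pool name> \
    --listener <listener name> --protocol PROXY \
    --lb-algorithm ROUND_ROBIN
```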

If you don’t need HTTP or HTTPS balancing, omit the corresponding rule and configure only the ones your tasks require.

To configure nginx on the servers themselves (to work with the PROXY protocol), enable support for the nginx proxy_protocol directive. Learn more in the official nginx documentation.

An example of nginx.conf configured to accept real IP addresses from the Cloud platform load balancer:

server {
    server_name localhost;

    # ATTENTION! Working with the proxy_protocol directive is only
    # possible if bundled with HAProxy.
    # For direct access, this directive must be disabled.
    listen 443 ssl default_server proxy_protocol;

    ssl_certificate      /etc/nginx/ssl/public.example.com.pem;
    ssl_certificate_key  /etc/nginx/ssl/public.example.com.key;

    # HAProxy address
    set_real_ip_from 192.168.1.0/24;
    # We recommend specifying a CIDR, because the balancer IP can change
    # in some cases
    real_ip_header proxy_protocol;

    root /usr/share/nginx/html;
    index index.html index.htm;

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }

    location ~ /\.ht {
        deny all;
    }
}
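To confirm that real client addresses arrive, you can log the address extracted from the PROXY protocol header; a hypothetical fragment for the http context of nginx.conf:

```nginx
# $proxy_protocol_addr holds the client IP taken from the PROXY header;
# $remote_addr would still show the balancer's address
log_format proxied '$proxy_protocol_addr - $remote_addr [$time_local] "$request"';
access_log /var/log/nginx/access.log proxied;
```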

Please note that the LVS and Direct Server Return methods are not supported according to the Cloud platform security policy.