A load balancer distributes incoming traffic across existing virtual machines within the same region.
| Term | Description |
| --- | --- |
| Load balancer | Distributes incoming streams between servers according to the parameters configured in the rule |
| Server | An existing virtual machine added to process the traffic flow |
| Rule | The configuration for forwarding traffic from the load balancer to the server group. In the Control panel interface, a rule combines forwarding settings as well as availability checks, the balancing algorithm, connection settings, and other settings related to this traffic stream |
| Availability check | Parameters that help the load balancer determine whether a server can handle traffic. Only healthy servers receive traffic |
| Balancing algorithm | A method of distributing requests between multiple servers to optimize resource use and reduce request serving time |
| Connection configurations | Settings for all connections passing through the balancer: the number of requests, connection timeouts, and response waiting time |
Creating and Configuring a Balancer
Creating a Balancer
Follow these steps to create a load balancer in an existing project in the Control panel:
- Go to the Load balancers subsection in the project’s card.
- Click Create load balancer.
- Enter the balancer name. You can leave the automatically generated name.
- Select the required region.
- Specify the balancer address by selecting a subnet. The Load balancer IP field will be filled with one of the available subnet addresses. Connect a floating IP if necessary.
- Specify the basic rule configurations for the balancer.
- In the Configuration section of the Servers tab, select the servers to send traffic to for this rule.
- Click Create load balancer.
To create a balancer, it is enough to select the protocols and ports of the balancer and servers, and at least one server. All other rule configurations are filled in by default; you can change them while creating the balancer or afterwards.
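The same creation step can also be performed through the OpenStack Octavia v2 API. The sketch below builds the JSON body for a create-load-balancer request; the name and subnet ID are placeholders, and this is an illustration of the request shape rather than a complete API client.

```python
# Sketch: build the request body the OpenStack Octavia v2 API expects
# when creating a load balancer. "lb1" and the subnet ID are
# hypothetical placeholders.
def build_lb_request(name, vip_subnet_id):
    return {
        "loadbalancer": {
            "name": name,
            # The subnet supplies one of its available addresses
            # as the load balancer IP, as in the Control panel flow.
            "vip_subnet_id": vip_subnet_id,
        }
    }

body = build_lb_request("lb1", "subnet-uuid")
```

The body would then be POSTed to the Octavia load balancers endpoint with an authenticated client.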
Basic Rule Configurations
Protocol and Port
You can change the entered protocol and port for both the balancer and the server in each rule:
- HTTP, HTTPS, UDP, TCP protocols for the balancer;
- HTTP, HTTPS, UDP, TCP, PROXY protocols for servers.
Not all protocols are compatible with each other; the interface takes this into account, so incompatible protocols cannot be selected.
Standard ports are automatically assigned for the selected protocol.
Note: you can select a different port for the server when configuring it.
The set port value is shared by all servers.
Note: you cannot change the protocols and ports of a rule in a working balancer; you can only recreate a rule with the necessary configurations.
To balance HTTPS traffic, it is recommended to add an SSL certificate to the rule so that the traffic is correctly recognized and distributed to the servers.
Adding an SSL certificate is required to configure the HTTPS → HTTP stream.
If HTTPS → HTTPS stream is selected and the SSL certificate is not uploaded, the data will be transmitted using PROXY protocol without decryption.
If the balancer is deleted, the SSL certificate will be deleted together with it.
When creating a balancer, select the servers you want to add to the rule. If a server has several network ports, specify the required IP address, and set the server weight.
After creating a balancer, you can:
- assign the server a backup role — the server will not receive connections until all other servers in the rule fail;
- suspend the server — it will not receive connections until it is manually returned to service;
- delete the server from the rule — it will no longer participate in balancing but will keep running.
You can add new servers to the rule of an already working balancer by clicking the Add server button.
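The backup-role behaviour described above can be sketched as follows; the server names and the tuple layout are hypothetical, chosen only to illustrate that backups receive connections solely when every primary server has failed.

```python
# Hypothetical rule members: (name, role, healthy).
members = [
    ("web-1", "primary", False),
    ("web-2", "primary", False),
    ("web-3", "backup", True),
]

def eligible(members):
    """Return the servers that may receive connections: backup
    servers are used only when all primary servers are down
    (a sketch of the backup role, not the balancer's exact logic)."""
    primaries = [n for n, role, ok in members if role == "primary" and ok]
    if primaries:
        return primaries
    return [n for n, role, ok in members if role == "backup" and ok]

print(eligible(members))  # ['web-3'] — all primaries are down
```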
You can choose one of two balancing algorithms:
- Round Robin — a cyclical scheduling function: the first request is sent to one server, the second to another, and so on. Once the last server has been sent a request, the cycle starts anew;
- Least connections — this algorithm takes into consideration the number of active connections. Each subsequent request is sent to the server with the least number of active connections.
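The two algorithms can be illustrated with a short sketch; the server names and connection counts are invented, and real balancers track connections dynamically rather than from a static table.

```python
from itertools import cycle

# Hypothetical backend servers.
servers = ["web-1", "web-2", "web-3"]

# Round Robin: requests are handed out cyclically.
rr = cycle(servers)
order = [next(rr) for _ in range(5)]
print(order)  # ['web-1', 'web-2', 'web-3', 'web-1', 'web-2']

# Least connections: pick the server with the fewest active
# connections (counts here are illustrative).
active = {"web-1": 4, "web-2": 1, "web-3": 7}

def least_connections(conns):
    return min(conns, key=conns.get)

target = least_connections(active)
print(target)  # web-2
```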
Sticky Sessions — an algorithm for distributing incoming requests whereby connections are forwarded to a specific server in a group. Using this method, requests are distributed to servers using:
- APP-cookie — an already existing cookie that is set in the application code;
- HTTP-cookie — a cookie that the balancer creates and attaches to the session;
- Source IP — the client’s IP address is hashed and, taking the weight of each server in the rule into account, mapped to the server that will process the requests.
Requests from the same client will always be passed to the same server. If the designated server is unavailable, the request will be forwarded to another one.
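A minimal sketch of the Source IP idea, assuming a simple weight-expanded pool and a SHA-256 hash; the real balancer’s hashing scheme may differ, and the server names and weights are placeholders.

```python
import hashlib

# Hypothetical weighted servers: name -> weight.
servers = {"web-1": 1, "web-2": 3}

def pick_server(client_ip, servers):
    """Map a client IP to a server, honouring weights, so the same
    client always lands on the same server (a sketch of the
    Source IP method, not the balancer's exact algorithm)."""
    # Expand the pool according to weight, then hash the IP into it.
    pool = [name for name, w in sorted(servers.items()) for _ in range(w)]
    digest = int(hashlib.sha256(client_ip.encode()).hexdigest(), 16)
    return pool[digest % len(pool)]

# The same client IP always maps to the same server.
server = pick_server("203.0.113.7", servers)
```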
You can configure checks of the connection to the balanced servers.
Availability checks help the balancer determine whether one of the servers is out of order and cannot be used. If a server is unavailable, connections are redirected to a working server.
Availability checks are enabled by default, but you can disable them on the appropriate tab. Note: if checks are disabled, the server status will be NO MONITOR.
You can configure the following check parameters:
- check type: HTTP, PING, TCP;
- the interval at which the balancer sends check requests to the servers;
- connection timeout — how long to wait for a response from the destination. If there is no response within this time, the connection is considered lost;
- URL and expected response codes (configurable for HTTP and HTTPS checks);
- healthy threshold — the number of successful requests in a row after which the server is put back into operation;
- unhealthy threshold — the number of unsuccessful requests in a row after which the server operation is suspended.
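The threshold logic above can be sketched as a small state machine: a server changes state only after the configured number of consecutive successes or failures. The threshold values and status names below mirror the document’s statuses, but this is an illustration, not the balancer’s implementation.

```python
class HealthMonitor:
    """Sketch of healthy/unhealthy threshold behaviour: the server
    flips state only after N consecutive results of one kind.
    The default thresholds here are illustrative."""

    def __init__(self, healthy_threshold=3, unhealthy_threshold=2):
        self.healthy_threshold = healthy_threshold
        self.unhealthy_threshold = unhealthy_threshold
        self.state = "ONLINE"
        self._streak = 0  # >0: consecutive successes, <0: consecutive failures

    def record(self, success):
        if success:
            self._streak = self._streak + 1 if self._streak >= 0 else 1
            if self.state != "ONLINE" and self._streak >= self.healthy_threshold:
                self.state = "ONLINE"
        else:
            self._streak = self._streak - 1 if self._streak <= 0 else -1
            if self.state == "ONLINE" and -self._streak >= self.unhealthy_threshold:
                self.state = "ERROR"
        return self.state
```

For example, with `unhealthy_threshold=2` a single failed check leaves the server ONLINE; only the second consecutive failure moves it to ERROR, and three consecutive successful checks bring it back.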
You can set connection time configurations between:
- incoming requests and the balancer — specify the connection timeout, for which you can set a limit value. Note: if a limit value is set, you must also specify the maximum number of requests;
- the balancer and servers — specify the connection timeout, inactivity timeout, and TCP packet wait timeout.
In regular mode, the balancer transfers the original HTTP request to the server, replacing the client’s IP address with its own. Include the necessary types of additional headers in the request so that the servers receive the client information for correct operation or analysis.
Server Statuses
| Status | Description |
| --- | --- |
| ONLINE | The service is functioning; availability checks are successful |
| CREATING | The server is being added to the rule |
| UPDATING | Server settings are being updated |
| OFFLINE | The server does not accept requests, although the server itself may be active |
| BACKUP | The server has been moved to backup status and will be enabled if the other servers in the rule stop working. To return the server to normal operation, select the Return server to work menu option |
| NO_MONITOR | Availability checks are disabled, so service health is not monitored. If all the load balancer’s services have this status, the load balancer status will be ONLINE |
| DRAINING | The server does not accept new connections but continues to process current ones |
| ERROR | The service on the specified port either does not respond or does not pass the response type check |
| DELETING | The server is being removed from the load balancer; the server itself is not affected |
Load Balancer Statuses
The load balancer status depends on the statuses of the servers to which it distributes requests, as well as on its own state.
| Status | Description |
| --- | --- |
| ONLINE | All availability checks are successful |
| CREATING | The load balancer is being created |
| UPDATING | Balancer configurations are being updated |
| OFFLINE | The balancer is disabled and not processing requests |
| DEGRADED | One of the balancer’s components has the ERROR status |
| ERROR | All servers have the ERROR status, or an error has occurred in the operation of the balancer. If all servers are working correctly and have the ONLINE status while the balancer has this status, contact technical support |
| DELETING | The balancer is being deleted |
In the balancer card, you can view statistics and find out:
- how many active connections there currently are;
- how many connections there are in total;
- how many connections were not processed;
- how many bytes were received and how many bytes were sent.
Redirecting HTTP Requests to HTTPS
You can create a balancer using the my.selectel.ru Control panel or the OpenStack Octavia API.
The rule consists of:
- Listener — a handler that accepts the traffic stream coming to the balancer on the configured port and protocol and redirects it to the required server group;
- Pool — a group of servers bound to one Listener;
- Members — the individual servers within one Pool.
Our rule is a simplified model of the OpenStack Octavia terminology. You can use the API to solve more complex tasks.
One such task is redirecting regular HTTP requests (port 80) to HTTPS (port 443). Follow these steps to do this:
Create http_listener on port 80:

```shell
openstack loadbalancer listener create --name http_listener --protocol HTTP --protocol-port 80 lb1
```
Configure the L7 policy policy1 on http_listener with the REDIRECT_TO_URL action pointing to the target URL:

```shell
openstack loadbalancer l7policy create --action REDIRECT_TO_URL --redirect-url https://example.com/ --name policy1 http_listener
```
Add an L7 rule to policy1 that matches all requests:

```shell
openstack loadbalancer l7rule create --compare-type STARTS_WITH --type PATH --value / policy1
```