Managed Kubernetes

Description

Kubernetes is a container orchestration system. The Selectel Kubernetes service simplifies the deployment, scaling, and maintenance of your container infrastructure. Selectel is responsible for updating versions and for the security and availability of the Kubernetes Control Plane.

The service is built on the Selectel Cloud platform and uses its resources for cluster worker nodes: servers, load balancers, networks, and volumes.

The Control Plane consists of 3 master nodes that run in different availability zones of a single region.

containerd is used as the container runtime (CRI).

Glossary

Managed Kubernetes cluster: an object consisting of several master nodes and one or more node groups
Master nodes: the components that manage the cluster
Node group: cluster nodes of the same configuration grouped together and located in the same zone; a structural cluster unit in which you can change the number of nodes
Node: a virtual machine on which the user deploys containers and hosts the services of their cluster
Boot volume: a volume for the node operating system and temporary container data

Payment

Virtual machines used as cluster nodes, as well as volumes, load balancers, and network resources, are charged at the standard Cloud platform prices.

When the beta testing period is over, Kubernetes clusters will be charged like other cloud resources, at the prices listed on our website.

Network

When you create a cluster in the Control panel, a private network is created automatically, and all cluster nodes are added to it.

If you want to create a cluster in an already existing private network, please create a ticket.

Calico is used as CNI in the Managed Kubernetes clusters.

To provide a public address for applications running in a cluster, use a Service of the LoadBalancer type. After you create such a Service, the system creates and attaches a load balancer with a public IP address.

An example of a service description with the LoadBalancer type:

# nginx-loadbalancer.yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-loadbalancer
  labels:
    app: webservice
spec:
  type: LoadBalancer
  ports:
  - port: 80
    protocol: TCP
  selector:
    app: webservice
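
To apply the manifest, run kubectl apply -f nginx-loadbalancer.yaml. Once the load balancer is provisioned, its public IP address appears in the EXTERNAL-IP column of the kubectl get service nginx-loadbalancer output.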

The load balancer will appear in the Load balancers section of your project in the Control panel.

To avoid errors, we recommend performing all actions with load balancers using kubectl.

Persistent Volumes

You can create a pod with a persistent volume that remains after the pod is deleted. An example of a specification for creating a pod with a persistent volume:

# nginx-with-volume.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: webservice
spec:
  containers:
  - name: nginx
    image: library/nginx:1.17-alpine
    ports:
    - containerPort: 80
    volumeMounts:
      - mountPath: "/var/www/html"
        name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-pv-claim

More information about pod specifications can be found in the official Kubernetes documentation.

Persistent volumes are available only within a single zone.

A StorageClass object is used when creating a persistent volume.

StorageClass allows you to describe in advance the configurations of persistent volumes that you will need in the cluster. You must specify the volume type and the availability zone in the StorageClass object.

For example, to create a fast volume in the ru-1a zone, specify the fast.ru-1a type and the ru-1a availability zone in the parameters field of the StorageClass description.

Examples of existing StorageClass manifests.

Fast, basic, and universal volumes are the network volumes of the Selectel Cloud platform.
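
As a reference, a StorageClass for fast volumes in the ru-1a zone might look like the following sketch. It assumes the cluster uses the OpenStack Cinder CSI provisioner (cinder.csi.openstack.org) and the type and availability parameter keys; check the StorageClass objects that already exist in your cluster with kubectl get storageclass for the exact values.

# fast-ru-1a.yaml (sketch; the provisioner and parameter keys are assumptions)
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast.ru-1a
provisioner: cinder.csi.openstack.org  # assumed OpenStack Cinder CSI driver
parameters:
  type: fast.ru-1a      # volume type of the Cloud platform
  availability: ru-1a   # availability zone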

An example of the PersistentVolumeClaim using this StorageClass looks as follows:

# my-pv-claim.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pv-claim
spec:
  storageClassName: fast.ru-1a
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
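
After applying this manifest, you can check the status of the claim with kubectl get pvc my-pv-claim. Depending on the volumeBindingMode of the StorageClass, the volume is provisioned either immediately or only when the first pod that uses the claim is scheduled.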

Please note that the zone of the StorageClass specified in the PVC must match the zone of the node to which the volume is attached. If the cluster nodes and PVCs use multiple zones, bind pods to the appropriate zone in the Pod specification, as shown in the sketch below.
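
For example, to schedule a pod that mounts a volume from the ru-1a zone onto a node in the same zone, you can add a nodeSelector with the zone label to the Pod specification. This is a sketch: the exact label key depends on the Kubernetes version (topology.kubernetes.io/zone in recent versions, failure-domain.beta.kubernetes.io/zone in older ones), so check your node labels with kubectl get nodes --show-labels.

# nginx-in-ru-1a.yaml (sketch; the zone label key depends on the Kubernetes version)
apiVersion: v1
kind: Pod
metadata:
  name: nginx
  labels:
    app: webservice
spec:
  nodeSelector:
    topology.kubernetes.io/zone: ru-1a  # schedule only onto nodes in the ru-1a zone
  containers:
  - name: nginx
    image: library/nginx:1.17-alpine
    volumeMounts:
      - mountPath: "/var/www/html"
        name: data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: my-pv-claim  # PVC created with the fast.ru-1a StorageClass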

Volumes can only be used in the ReadWriteOnce access mode.

All volumes are displayed in the Volumes section of the Control panel.

To avoid errors, we recommend performing all actions with volumes using kubectl.

Maintenance Window

Automatic actions may be performed to maintain your clusters and auto-update system certificates during the maintenance window.

Every day at the specified hour, the cluster switches to MAINTENANCE mode. Cluster scaling is unavailable during the maintenance window, which may last up to two hours. You can set the start time in the Control panel; when you create a cluster, the default start time is 4 a.m. in your time zone.

While it has the MAINTENANCE status, the cluster operates as usual and remains accessible over the network. Applications running in the cluster continue to work without changes, interruptions, or delays.

Cluster Statuses

  • ACTIVE — the cluster is available
  • PENDING_CREATE — the cluster is being created
  • PENDING_ROTATE_CERTS — certificates and keys for Kubernetes Control Plane are being updated
  • PENDING_DELETE — the cluster is being deleted
  • PENDING_RESIZE — the number of nodes or node groups is being changed
  • PENDING_NODE_REINSTALL — one of the nodes is being reinstalled
  • ERROR — the cluster is not running, please create a ticket
  • MAINTENANCE — the cluster is in the maintenance window

Please note that for all statuses except ACTIVE, some cluster actions may be blocked at the service API level.

Project Quotas

The Cloud platform resources used in the cluster are limited by quotas in the region. Resource quotas can be configured on the Projects page in the card of the desired project.

Service Limits

Maximum number of Kubernetes clusters in a single project of one region: 2
Maximum number of node groups in one Kubernetes cluster: 4
Maximum number of nodes in one node group: 15
Maximum number of vCPUs per node: 8
Maximum amount of RAM per node: 64 GB
Maximum size of a node boot volume: 512 GB