Kubernetes for VPC

Kubernetes for VPC helps you manage Virtual Private Cloud server clusters. By using Kubernetes for VPC in projects where multiple applications run as containers, you simplify the deployment, scaling, and maintenance of your container infrastructure.

The Virtual Private Cloud is built on OpenStack. Private Kubernetes clusters are created with OpenStack Magnum.

Virtual machines serve as cluster nodes. Clusters can be configured from the control panel or via the API.

Glossary

ETCD volume: a volume containing all cluster data
Docker volume: a volume for temporarily storing Docker container data
Kubernetes network driver: the mechanism that provides cluster pods with network connectivity
Docker volume driver: the data storage mechanism for Docker containers
Ingress controller: provides Ingress, a mechanism for routing traffic at the application level (L7); Selectel uses Traefik
Kubectl: the Kubernetes console client

Creating Kubernetes Clusters

Volume types are selected when a cluster is first created. Detailed information about the types of volumes available can be found under Resources.

Two independent Kubernetes clusters can be created in existing VPC projects.

Please note that node configurations cannot be changed after a cluster has been created; only the number of nodes can be changed.

To create a Kubernetes cluster in an existing VPC project:

  1. In the control panel, open your existing VPC project and click the Kubernetes tab.
  2. Click Create cluster.
  3. Choose a location by selecting a Region and Zone.
  4. If necessary, change the automatically generated cluster Name.
  5. Assign resources to the Masters and Nodes.
  6. Enter your SSH key.
  7. You can add extra volumes and enter additional settings by expanding the Volumes and other settings block:
    1. To add another volume, select a volume type and size.
    2. You can also choose to create a floating IP address for each node.
  8. Click Create.

The cluster is ready once it displays the CREATE_COMPLETE status.
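
Since private clusters are created with OpenStack Magnum, cluster status can also be checked from the command line. Below is a minimal sketch, assuming the OpenStack CLI with the Magnum plugin (python-magnumclient) is installed and your project credentials are exported; the cluster name is an example:

# List all clusters in the project along with their statuses
openstack coe cluster list

# Show the details of a single cluster, including its status and node count
# (the cluster name is an example)
openstack coe cluster show k8s-cluster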

In the cluster list under the Kubernetes tab, you can:

  • Delete entire clusters
  • Scale clusters by increasing or decreasing the number of nodes (see the sketch below)
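
Scaling can also be performed from the command line through the Magnum API. A sketch, assuming the same OpenStack CLI setup as above; the cluster name and node count are examples:

# Change the number of worker nodes in an existing cluster to 3
openstack coe cluster update k8s-cluster replace node_count=3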

Load Balancer

If a cluster is created with two or more master nodes, the Kubernetes API is accessed through an external load balancer.

You can also create a Service with the Type LoadBalancer inside a Kubernetes cluster. In this case, a load balancer will be created inside the project and automatically connect to the cluster.
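
For example, a minimal Service manifest of type LoadBalancer; the service name, label selector, and port are placeholders for illustration:

# nginx-lb.yaml (example name)
apiVersion: v1
kind: Service
metadata:
  name: nginx-lb
spec:
  type: LoadBalancer        # a load balancer is created in the project and attached to the cluster
  selector:
    app: nginx              # assumes pods labeled app=nginx exist in the cluster
  ports:
  - port: 80
    targetPort: 80

After applying the manifest with kubectl create -f nginx-lb.yaml, the external address appears in the EXTERNAL-IP column of kubectl get service nginx-lb once the load balancer has been provisioned.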

Volumes and Other Settings

To add extra volumes to a cluster, select the volume type and size. More detailed information on Docker volumes can be found in the official documentation.

If necessary, you can create floating IP addresses for each node while creating the cluster. Once a cluster has been created, you will not be able to remove a floating IP.

Getting Started with Kubernetes CLI

To access a Kubernetes cluster, download the config.yaml configuration file by clicking Download config.yaml under Access on the cluster page.

Export the environment variable KUBECONFIG containing the path to this file:

export KUBECONFIG=~/config.yaml

Download and install the kubectl console client. Installation instructions can be found in the official documentation.

Once kubectl has been launched, you can view the status of cluster objects by entering:

kubectl get nodes
NAME                                STATUS   ROLES    AGE   VERSION
k8s-cluster-y7x4r3ga2u3b-minion-0   Ready    <none>   24d   v1.11.1
k8s-cluster-y7x4r3ga2u3b-minion-1   Ready    <none>   24d   v1.11.1

You can use all of the commands available in kubectl to manage your clusters. For a detailed description of these commands, please see the official documentation.
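
For example, a few frequently used commands (object names are placeholders):

# List pods in all namespaces
kubectl get pods --all-namespaces

# Show detailed information about a node
kubectl describe node <node-name>

# View the logs of a pod's container
kubectl logs <pod-name>

# Create or update objects from a manifest file
kubectl apply -f manifest.yaml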

Kubernetes Dashboard and Grafana

To access the Kubernetes Dashboard, you will need to know the token value for the admin account that was automatically created in the cluster.

To find this "secret" object, enter the command:

kubectl get secret --namespace=kube-system | grep "admin-user-token"
admin-user-token-*****   kubernetes.io/service-account-token   3   8s

To view the token value of the "secret" object named admin-user-token-*****, enter:

kubectl describe secret admin-user-token-***** --namespace=kube-system | grep "token:"
token:      XXXXXX...

The token value will be given in the response.
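
The two commands above can also be combined into a single one-liner; a sketch assuming a Bash-compatible shell and exactly one matching secret:

kubectl describe secret $(kubectl get secret --namespace=kube-system | awk '/admin-user-token/ {print $1}') --namespace=kube-system | grep "token:"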

To open the Kubernetes Dashboard, launch a local proxy with the kubectl console client:

kubectl proxy

To access the dashboard, open the following link in your browser:

http://localhost:8001/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/

The dashboard login screen will open.

Enter the token you retrieved earlier. After logging in, you will be granted access to the Kubernetes Dashboard.

You can access Grafana by following the same steps, but opening the following link in your browser:

http://localhost:8001/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy/

After logging in, the Grafana dashboard will open.

Launching Applications in Kubernetes Clusters

To launch simple applications, create a description in YAML format:

# nginx-basic.yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: library/nginx:1.14-alpine
    ports:
    - containerPort: 80

Create the pod with the kubectl console client:

kubectl create \
 -f https://raw.githubusercontent.com/selectel/kubernetes-examples/master/pods/nginx-basic.yaml

To check the status of the nginx pod, run:

kubectl get pod nginx
NAME    READY   STATUS    RESTARTS   AGE
nginx   1/1     Running   0          14s
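
A standalone pod like this is not recreated if it is deleted or its node fails. For real workloads, the same container is usually described as a Deployment, which maintains the requested number of replicas; a sketch using the same image (the names and replica count are examples):

# nginx-deployment.yaml (example name)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: library/nginx:1.14-alpine
        ports:
        - containerPort: 80

The deployment is created with kubectl create -f nginx-deployment.yaml and can later be scaled with kubectl scale deployment nginx-deployment --replicas=3.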

For more information on managing objects, please read this article in our blog.

FAQ

What versions of Kubernetes are available?

We currently offer the latest releases of versions 1.10, 1.11, and 1.12. The latest release of version 1.13 will soon be available.

A list of available versions can be retrieved via the API. Replace REGION with the region where you plan to create your cluster (ru-1, ru-2, or ru-3):

curl -XGET https://api.REGION.selvpc.ru/container-infra-extra/v1/capabilities

Are configurations available with multiple master nodes?

Yes, you can enter the number of master nodes you want when creating a cluster. With this configuration, a load balancer will be automatically created. ETCD databases will be deployed on each master node.

Can I change the number of nodes in a cluster after it has been created?

You can increase and decrease the number of nodes from the control panel and via the API.

The number of master nodes can only be set when creating a cluster and cannot be changed afterwards.

What StorageClass descriptions can I use?

You can use StorageClass for OpenStack Cinder (documentation).

When a cluster is created, a StorageClass is automatically created to match the cluster's zone and volume type.
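
For example, a PersistentVolumeClaim that requests a volume through such a StorageClass; the claim name and size are examples, and the StorageClass name is a placeholder that should be taken from the output of kubectl get storageclass:

# pvc-example.yaml (example name)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-volume
spec:
  storageClassName: <storage-class-name>   # placeholder: use a name from "kubectl get storageclass"
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi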

Is Service with type=LoadBalancer supported?

Yes, you can set Service to the LoadBalancer type.

We provide OpenStack Octavia as the load balancer service.

Is Service with Kind=Ingress supported?

Yes, you can use any Ingress controller to create Ingress services.

When creating clusters from the control panel, you can choose to automatically create an Ingress controller, in which case Traefik will be set up in the cluster.
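
For example, a minimal Ingress manifest for the Kubernetes versions listed above (newer versions use the networking.k8s.io/v1 API); the hostname and backend service are placeholders:

# nginx-ingress.yaml (example name)
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nginx-ingress
spec:
  rules:
  - host: example.com              # placeholder hostname
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx       # placeholder: an existing Service in the cluster
          servicePort: 80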

What network model do clusters use?

We currently provide clusters with Flannel. Flannel is configured with the host-gw backend.

In the future, we plan on adding Calico and making it possible to choose between Calico and Flannel when creating clusters.
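
With the host-gw backend currently used, pod traffic is routed directly between node addresses based on each node's pod CIDR. The CIDR assigned to each node can be checked with a standard kubectl query:

kubectl get nodes -o custom-columns=NAME:.metadata.name,POD_CIDR:.spec.podCIDR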

Do you update the Kubernetes versions available and how often?

We try to offer the latest stable releases of Kubernetes.

There is currently no exact release date schedule.

Are Metrics Server and HorizontalPodAutoscalers supported?

Yes; however, we currently do not offer automatic installation of Metrics Server.

Please consult the official documentation to install either of these.
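
For example, once Metrics Server is installed, an autoscaler can be created for a Deployment with kubectl; a sketch assuming a Deployment named nginx-deployment exists (as in the example above):

# Keep average CPU utilization around 50%, scaling between 1 and 5 replicas
kubectl autoscale deployment nginx-deployment --cpu-percent=50 --min=1 --max=5

# Check the current state of the autoscaler
kubectl get hpa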

Can I use Istio service mesh?

Yes, but we currently do not offer automatic installation of Istio.

Please consult the official documentation to install Istio.

When setting up Istio, keep in mind that the load balancer service we provide is OpenStack Octavia (Octavia user documentation).