Getting Started with Terraform

You can deploy and manage cloud infrastructure at Selectel using the Terraform utility by HashiCorp.

Using Terraform, you can work with Selectel cloud platform services.

The infrastructure and its components are described in HCL (HashiCorp Configuration Language) in configuration files with the .tf extension (manifests).

Two Terraform providers are used to work with Selectel services:

  • OpenStack provider for managing OpenStack resources, such as virtual machines (cloud servers), volumes, and networks;
  • Selectel provider that interfaces with the Selectel API to manage projects and quotas, users and their roles, tokens, DNS, Managed Kubernetes clusters and node groups, and cloud databases.

To deploy your infrastructure through Terraform:

  1. Install Terraform.
  2. Create a manifest, initialize the Terraform OpenStack and Selectel providers in it, and describe the infrastructure plan.
  3. Check the configuration and deploy the infrastructure.

Installing Terraform

Before getting started, install Terraform on a cloud server or your local computer.

Follow the guide for your operating system on the official Terraform website.

Creating a Manifest

The infrastructure plan is described in manifests, which are files with a .tf extension.

When running the terraform apply command that creates the infrastructure (more details below), Terraform loads all the manifests that are in the same directory, and all the described resources are created. Therefore, the files describing one infrastructure should be in a separate directory.

Create a directory and a file in it, such as main.tf. The plan description files may have any name.
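For example (the directory name selectel-infra used here is an arbitrary choice, not required by Terraform):

```shell
# Create a separate directory for this infrastructure's manifests
mkdir -p selectel-infra
cd selectel-infra

# Create an empty manifest; the name main.tf is a common convention
touch main.tf
```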

Provider Configuration

In the manifest, you need to list the Terraform providers needed to create the infrastructure. Usually two providers (Selectel and OpenStack) are used. In some cases, only OpenStack is required, for example, if the Cloud platform project has already been created.

Add the following block to the file and list the providers in it:

terraform {
  required_version = ">= 0.14.0"
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "~> 1.43.0"
    }
    selectel = {
      source  = "selectel/selectel"
      version = "~> 3.6.2"
    }
  }
}

Current versions of the providers can be found in the official documentation (Selectel and OpenStack) by clicking USE PROVIDER.

To authorize an OpenStack provider, add the following to the manifest:

provider "openstack" {
  auth_url    = "https://api.selvpc.ru/identity/v3"
  domain_name = "selectel_account"
  tenant_id   = "project_id"
  user_name   = "user_name"
  password    = "user_password"
  region      = "region"
}

Specify:

  • auth_url — URL for authentication in the Selectel API;
  • domain_name — Selectel account ID (contract number), which can be found in the Control panel;
  • tenant_id — ID of the Cloud platform project;
  • user_name — name of an OpenStack user associated with the Cloud platform project;
  • password — the OpenStack user's password;
  • region — the pool in which the infrastructure will be deployed.

To authorize the Selectel provider:

provider "selectel" {
  token = "selectel_token"
}

Specify:

  • token — Selectel account token (Selectel API key), which you can obtain by following the API Keys instructions.
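To avoid committing the token to version control, you can pass it in through a Terraform variable instead of hardcoding it. This is standard Terraform practice, not specific to the Selectel provider; the variable name sel_token below is an arbitrary choice:

```hcl
# Declare a sensitive variable for the token (e.g. in vars.tf)
variable "sel_token" {
  type      = string
  sensitive = true
}

# Reference it in the provider block instead of a literal value
provider "selectel" {
  token = var.sel_token
}
```

The value can then be supplied at run time, for example with terraform apply -var="sel_token=...", or through the TF_VAR_sel_token environment variable.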

Infrastructure Plan

Describe the infrastructure plan in a file with the .tf extension.

An example manifest for creating an infrastructure is provided below.

Creating the Infrastructure

Run the following commands in the directory in which the created manifests are located:

  1. Initialize the Terraform environment:

    terraform init
  2. Check that the plan has been created without errors:

    terraform plan

    If there are no errors in the description, a list of resources ready to be created will be displayed. If there are errors, they need to be fixed.

  3. Deploy the infrastructure and create resources:

    terraform apply
  4. Confirm the creation by typing yes and pressing Enter.

The created resources will automatically appear in the Control panel.

Editing and Deleting Resources

To edit an already created infrastructure or its components, simply edit the manifest: Terraform will determine which resources need to be created, changed, or deleted.

Please note that if you edit the infrastructure through the Control panel, the changes will not be reflected in the manifests.

To make changes to the infrastructure, edit the manifest and then apply the changes:

terraform apply

To remove resources, run the following in the manifest directory:

terraform destroy

A list of resources to be deleted will be displayed. Confirm the deletion by typing yes and pressing Enter.

An Example of the Infrastructure Plan

Applying this plan will create an infrastructure in the ru-3 pool, which will contain:

  • a cloud server with a boot network volume created from an Ubuntu 20.04 LTS 64-bit image, with a custom configuration with 1 vCPU and 1 GB RAM;
  • a private network with a subnet;
  • a virtual router connected to the external-network;
  • a floating IP address associated with the cloud server.

The example uses the My First Project in the Cloud platform, which is created automatically when registering an account. You can also create a project through Terraform.

The plan is described in two files — main.tf and vars.tf. The first one stores the description of the resources to be created, the second one stores the declared variables that can be reused in main.tf.

vars.tf file

# Cloud Platform Region
variable "region" {
  default = "ru-3"
}
# SSH key to access the cloud server
variable "public_key" {
  default = "key_value"
}
# Availability Zone
variable "az_zone" {
  default = "ru-3b"
}
# Type of the network volume that the server is created from
variable "volume_type" {
  default = "fast.ru-3b"
}
# Subnet CIDR
variable "subnet_cidr" {
  default = "10.10.0.0/24"
}

Where:

  • ru-3 is the pool where the infrastructure will be deployed;
  • key_value is the SSH public key value. You can create a key in the Control panel, or generate one with ssh-keygen -t rsa and upload it with the OpenStack CLI command openstack keypair create --public-key ~/.ssh/id_rsa.pub <ssh_name>;
  • ru-3b is the availability zone;
  • fast.ru-3b is the type of the network volume that the server is created from. You can list the available types with the OpenStack CLI command openstack volume type list;
  • 10.10.0.0/24 is the subnet CIDR.
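The defaults above can also be overridden without editing vars.tf by placing values in a terraform.tfvars file, which Terraform loads automatically from the manifest directory (the values below are illustrative placeholders):

```hcl
# terraform.tfvars — overrides the defaults declared in vars.tf
region      = "ru-3"
az_zone     = "ru-3b"
volume_type = "fast.ru-3b"
subnet_cidr = "10.10.0.0/24"
public_key  = "ssh-rsa AAAA... user@host"
```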

main.tf file

# Terraform initialization and provider configuration
# You can find a description of all the parameters above under Provider Configuration
terraform {
  required_version = ">= 0.14.0"
  required_providers {
    openstack = {
      source  = "terraform-provider-openstack/openstack"
      version = "~> 1.43.0"
    }
    selectel = {
      source  = "selectel/selectel"
      version = "~> 3.6.2"
    }
  }
}
provider "openstack" {
  auth_url    = "https://api.selvpc.ru/identity/v3"
  domain_name = "selectel_account"
  tenant_id   = "project_id"
  user_name   = "user_name"
  password    = "user_password"
  region      = var.region
}
provider "selectel" {
  token = "sel_token"
}

# Creating the SSH key
resource "openstack_compute_keypair_v2" "key_tf" {
  name       = "key_tf"
  region     = var.region
  public_key = var.public_key
}

# Request external-network ID by name
data "openstack_networking_network_v2" "external_net" {
  name = "external-network"
}

# Creating a router
resource "openstack_networking_router_v2" "router_tf" {
  name                = "router_tf"
  external_network_id = data.openstack_networking_network_v2.external_net.id
}

# Creating a network
resource "openstack_networking_network_v2" "network_tf" {
  name = "network_tf"
}

# Creating a subnet
resource "openstack_networking_subnet_v2" "subnet_tf" {
  network_id = openstack_networking_network_v2.network_tf.id
  name       = "subnet_tf"
  cidr       = var.subnet_cidr
}

# Connecting the router to the subnet
resource "openstack_networking_router_interface_v2" "router_interface_tf" {
  router_id = openstack_networking_router_v2.router_tf.id
  subnet_id = openstack_networking_subnet_v2.subnet_tf.id
}

# Searching for the image ID (that the server will be created from) by its name
data "openstack_images_image_v2" "ubuntu_image" {
  most_recent = true
  visibility  = "public"
  name        = "Ubuntu 20.04 LTS 64-bit"
}

# Generating a random suffix for a unique flavor name
# (uses the hashicorp/random provider, which terraform init installs automatically)
resource "random_string" "random_name_server" {
  length  = 16
  special = false
}

# Creating a server configuration (flavor) with 1 vCPU and 1 GB RAM
# The disk = 0 parameter makes the server boot from a network volume
resource "openstack_compute_flavor_v2" "flavor_server" {
  name      = "server-${random_string.random_name_server.result}"
  ram       = "1024"
  vcpus     = "1"
  disk      = "0"
  is_public = "false"
}

# Creating a 5 GB network boot volume from the image
resource "openstack_blockstorage_volume_v3" "volume_server" {
  name                 = "volume-for-server1"
  size                 = "5"
  image_id             = data.openstack_images_image_v2.ubuntu_image.id
  volume_type          = var.volume_type
  availability_zone    = var.az_zone
  enable_online_resize = true
  lifecycle {
    ignore_changes = [image_id]
  }
}

# Creating a server
resource "openstack_compute_instance_v2" "server_tf" {
  name              = "server_tf"
  flavor_id         = openstack_compute_flavor_v2.flavor_server.id
  key_pair          = openstack_compute_keypair_v2.key_tf.id
  availability_zone = var.az_zone
  network {
    uuid = openstack_networking_network_v2.network_tf.id
  }
  block_device {
    uuid             = openstack_blockstorage_volume_v3.volume_server.id
    source_type      = "volume"
    destination_type = "volume"
    boot_index       = 0
  }
  vendor_options {
    ignore_resize_confirmation = true
  }
  lifecycle {
    ignore_changes = [image_id]
  }
}

# Creating a floating IP
resource "openstack_networking_floatingip_v2" "fip_tf" {
  pool = "external-network"
}

# Associating the floating IP with the server
resource "openstack_compute_floatingip_associate_v2" "fip_tf" {
  floating_ip = openstack_networking_floatingip_v2.fip_tf.address
  instance_id = openstack_compute_instance_v2.server_tf.id
}
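To print the server's public address once terraform apply finishes, you can add an output block to main.tf. Outputs are a standard Terraform feature; the output name below is an arbitrary choice:

```hcl
# Prints the floating IP after apply; also available later via `terraform output`
output "server_public_ip" {
  value = openstack_networking_floatingip_v2.fip_tf.address
}
```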