Deploy Node Pool

Application Scenario

Cloud Container Engine (CCE) is a highly reliable, high-performance, enterprise-grade container management service that supports Kubernetes-native applications and tools. A node pool is a collection of nodes with the same configuration in a CCE cluster. Node pools let you centrally manage node specifications, images, storage, and other settings, and they support auto scaling. By creating a node pool, you can quickly add compute nodes to a CCE cluster and elastically scale container workloads. This best practice describes how to use Terraform to automatically deploy a CCE node pool, including querying availability zones and instance flavors, as well as creating the VPC, subnet, Elastic IP, CCE cluster, key pair, and node pool.

This best practice involves the following main resources and data sources:

Data Sources

  • huaweicloud_availability_zones

  • huaweicloud_compute_flavors

Resources

  • huaweicloud_vpc

  • huaweicloud_vpc_subnet

  • huaweicloud_vpc_eip

  • huaweicloud_cce_cluster

  • huaweicloud_kps_keypair

  • huaweicloud_cce_node_pool

Resource/Data Source Dependencies

Implementation Steps

1. Script Preparation

In the target workspace, prepare a TF file (such as main.tf) for the script in this best practice, and make sure that it (or another TF file in the same directory) contains the provider version declaration and Huawei Cloud authentication information required to deploy the resources. For configuration details, refer to the introduction in Preparation Before Deploying Huawei Cloud Resources.

2. Query Availability Zones Required for Node Pool Resource Creation Through Data Source

Add the following script to the TF file (such as main.tf) to instruct Terraform to query the data source; the query results are used to create the node pool and related resources:
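A minimal sketch of this query, using the provider's availability zones data source (the label `test` matches the references used later in this practice):

```hcl
# Query all availability zones in the current region; the results are
# referenced when creating the subnet, the flavor query, and the node pool.
data "huaweicloud_availability_zones" "test" {}
```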

Parameter Description: This data source does not require additional parameters and queries all available availability zone information in the current region by default.

3. Create VPC Resource (Optional)

Add the following script to the TF file to inform Terraform to create VPC resources (if VPC ID is not specified):
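A sketch of the VPC resource matching the conditional-creation logic in the parameter description; the variable declarations (vpc_id, subnet_id, vpc_name, vpc_cidr) are assumed to exist elsewhere in the configuration:

```hcl
# Create the VPC only when neither a VPC ID nor a subnet ID is specified
resource "huaweicloud_vpc" "test" {
  count = var.vpc_id == "" && var.subnet_id == "" ? 1 : 0

  name = var.vpc_name
  cidr = var.vpc_cidr
}
```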

Parameter Description:

  • count: The number of resource creations, used to control whether to create VPC resource, only when both var.vpc_id and var.subnet_id are empty, the VPC resource is created

  • name: The name of the VPC, assigned by referencing input variable vpc_name

  • cidr: The CIDR block of the VPC, assigned by referencing input variable vpc_cidr, default is "192.168.0.0/16"

4. Create VPC Subnet Resource (Optional)

Add the following script to the TF file to inform Terraform to create the VPC subnet resource (if no subnet ID is specified):
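A sketch of the subnet resource; the variable names subnet_cidr, subnet_gateway_ip, and availability_zone are illustrative assumptions for the "if specified, use that value" inputs described in the parameter description:

```hcl
# Create the subnet only when no subnet ID is specified
resource "huaweicloud_vpc_subnet" "test" {
  count = var.subnet_id == "" ? 1 : 0

  vpc_id = var.vpc_id != "" ? var.vpc_id : huaweicloud_vpc.test[0].id
  name   = var.subnet_name
  # Use the specified CIDR, or derive one from the VPC CIDR block
  cidr = var.subnet_cidr != "" ? var.subnet_cidr : cidrsubnet(huaweicloud_vpc.test[0].cidr, 8, 0)
  # Use the specified gateway IP, or take the first host of the subnet CIDR
  gateway_ip = var.subnet_gateway_ip != "" ? var.subnet_gateway_ip : cidrhost(
    var.subnet_cidr != "" ? var.subnet_cidr : cidrsubnet(huaweicloud_vpc.test[0].cidr, 8, 0), 1)
  # Use the specified AZ, or the first AZ returned by the data source
  availability_zone = var.availability_zone != "" ? var.availability_zone : try(
    data.huaweicloud_availability_zones.test.names[0], null)
}
```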

Parameter Description:

  • count: The number of resources to create, used to control whether the VPC subnet resource is created; the subnet is created only when var.subnet_id is empty

  • vpc_id: The ID of the VPC to which the subnet belongs; if a VPC ID is specified, that value is used, otherwise the ID of the VPC resource (huaweicloud_vpc.test[0]) is referenced

  • name: The name of the subnet, assigned by referencing the input variable subnet_name

  • cidr: The CIDR block of the subnet; if a subnet CIDR is specified, that value is used, otherwise one is calculated from the VPC's CIDR block using the cidrsubnet function

  • gateway_ip: The gateway IP of the subnet; if a gateway IP is specified, that value is used, otherwise it is calculated from the (specified or derived) subnet CIDR using the cidrhost function

  • availability_zone: The availability zone where the subnet resides; if an availability zone is specified, that value is used, otherwise the first availability zone returned by the availability zones data source is used

5. Create Elastic IP Resource (Optional)

Add the following script to the TF file to inform Terraform to create Elastic IP resources (if EIP address is not specified):
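A sketch of the EIP resource, following the publicip and bandwidth blocks described in the parameter description (the variable declarations are assumed to exist elsewhere):

```hcl
# Create the EIP only when no EIP address is specified
resource "huaweicloud_vpc_eip" "test" {
  count = var.eip_address == "" ? 1 : 0

  publicip {
    type = var.eip_type
  }

  bandwidth {
    name        = var.bandwidth_name
    size        = var.bandwidth_size
    share_type  = var.bandwidth_share_type
    charge_mode = var.bandwidth_charge_mode
  }
}
```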

Parameter Description:

  • count: The number of resources to create, used to control whether the Elastic IP resource is created; the EIP is created only when var.eip_address is empty

  • publicip: Public IP configuration block

    • type: Public IP type, assigned by referencing input variable eip_type, default is "5_bgp" for full dynamic BGP

  • bandwidth: Bandwidth configuration block

    • name: The name of the bandwidth, assigned by referencing input variable bandwidth_name

    • size: Bandwidth size (Mbps), assigned by referencing input variable bandwidth_size, default is 5

    • share_type: Bandwidth share type, assigned by referencing input variable bandwidth_share_type, default is "PER" for dedicated

    • charge_mode: Bandwidth charge mode, assigned by referencing input variable bandwidth_charge_mode, default is "traffic" for pay-per-traffic

6. Create CCE Cluster Resource

Add the following script to the TF file to inform Terraform to create CCE cluster resources:
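A sketch of the cluster resource, wiring in the optional VPC, subnet, and EIP resources created above:

```hcl
resource "huaweicloud_cce_cluster" "test" {
  name                   = var.cluster_name
  flavor_id              = var.cluster_flavor_id
  cluster_version        = var.cluster_version
  cluster_type           = var.cluster_type
  container_network_type = var.container_network_type
  # Prefer the user-specified IDs/address; fall back to the resources above
  vpc_id    = var.vpc_id != "" ? var.vpc_id : huaweicloud_vpc.test[0].id
  subnet_id = var.subnet_id != "" ? var.subnet_id : huaweicloud_vpc_subnet.test[0].id
  eip       = var.eip_address != "" ? var.eip_address : huaweicloud_vpc_eip.test[0].address
}
```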

Parameter Description:

  • name: The name of the CCE cluster, assigned by referencing input variable cluster_name

  • flavor_id: The flavor ID of the CCE cluster, assigned by referencing input variable cluster_flavor_id, default is "cce.s1.small" for small cluster

  • cluster_version: The version of the CCE cluster, assigned by referencing input variable cluster_version, if null, the latest version will be used

  • cluster_type: The type of the CCE cluster, assigned by referencing input variable cluster_type, default is "VirtualMachine" for virtual machine type

  • container_network_type: Container network type, assigned by referencing input variable container_network_type, default is "overlay_l2" for L2 network

  • vpc_id: The VPC ID; if a VPC ID is specified, that value is used, otherwise the ID of the VPC resource (huaweicloud_vpc.test[0]) is referenced

  • subnet_id: The subnet ID; if a subnet ID is specified, that value is used, otherwise the ID of the VPC subnet resource (huaweicloud_vpc_subnet.test[0]) is referenced

  • eip: The Elastic IP address; if an EIP address is specified, that value is used, otherwise the address of the Elastic IP resource (huaweicloud_vpc_eip.test[0]) is referenced

7. Query Instance Flavors Required for Node Pool Resource Creation Through Data Source

Add the following script to the TF file to inform Terraform to query instance flavors that meet the conditions:
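A sketch of this query, assuming the provider's compute flavors data source:

```hcl
# Query instance flavors matching the requested performance type,
# CPU count, and memory size in the first availability zone
data "huaweicloud_compute_flavors" "test" {
  performance_type  = var.node_performance_type
  cpu_core_count    = var.node_cpu_core_count
  memory_size       = var.node_memory_size
  availability_zone = try(data.huaweicloud_availability_zones.test.names[0], null)
}
```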

Parameter Description:

  • performance_type: Performance type, assigned through input variable node_performance_type, default is "general" for general purpose

  • cpu_core_count: CPU core count, assigned through input variable node_cpu_core_count, default is 4 cores

  • memory_size: Memory size (GB), assigned through input variable node_memory_size, default is 8GB

  • availability_zone: The availability zone where the instance flavor is located, using the first availability zone from the availability zone list query data source

8. Create Key Pair Resource

Add the following script to the TF file to inform Terraform to create key pair resources:
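A minimal sketch of the key pair resource:

```hcl
# Create a key pair for logging in to the node pool's nodes
resource "huaweicloud_kps_keypair" "test" {
  name = var.keypair_name
}
```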

Parameter Description:

  • name: The name of the key pair, assigned by referencing input variable keypair_name

9. Create CCE Node Pool Resource

Add the following script to the TF file to inform Terraform to create CCE node pool resources:
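A condensed sketch of the node pool resource. The local variable flattened_data_volumes and the full storage block (selectors, groups, virtual spaces) are abbreviated here, and the fields listed under ignore_changes are illustrative:

```hcl
resource "huaweicloud_cce_node_pool" "test" {
  cluster_id               = huaweicloud_cce_cluster.test.id
  type                     = var.node_pool_type
  name                     = var.node_pool_name
  flavor_id                = try(data.huaweicloud_compute_flavors.test.flavors[0].id, null)
  availability_zone        = try(data.huaweicloud_availability_zones.test.names[0], null)
  os                       = var.node_pool_os_type
  initial_node_count       = var.node_pool_initial_node_count
  min_node_count           = var.node_pool_min_node_count
  max_node_count           = var.node_pool_max_node_count
  scale_down_cooldown_time = var.node_pool_scale_down_cooldown_time
  priority                 = var.node_pool_priority
  key_pair                 = huaweicloud_kps_keypair.test.name
  tags                     = var.node_pool_tags

  root_volume {
    volumetype = var.root_volume_type
    size       = var.root_volume_size
  }

  # One data volume per entry in the flattened local variable
  dynamic "data_volumes" {
    for_each = local.flattened_data_volumes

    content {
      volumetype    = data_volumes.value.volumetype
      size          = data_volumes.value.size
      kms_key_id    = data_volumes.value.kms_key_id
      extend_params = data_volumes.value.extend_params
    }
  }

  # The storage block (selectors, groups, virtual_spaces) is only needed when
  # the data volume configuration defines virtual spaces; see the parameter
  # description above for its full structure.

  lifecycle {
    # Illustrative: ignore fields whose drift would otherwise force recreation
    ignore_changes = [
      flavor_id,
      availability_zone,
    ]
  }

  # Ensure the key pair exists before the node pool is created
  depends_on = [huaweicloud_kps_keypair.test]
}
```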

Parameter Description:

  • cluster_id: CCE cluster ID, assigned by referencing the ID of the CCE cluster resource (huaweicloud_cce_cluster.test)

  • type: The type of the node pool, assigned by referencing input variable node_pool_type, default is "vm" for virtual machine type

  • name: The name of the node pool, assigned by referencing input variable node_pool_name

  • flavor_id: Node flavor ID, assigned by using the first flavor ID from the instance flavor list query data source

  • availability_zone: The availability zone where the nodes are located, assigned by using the first availability zone from the availability zone list query data source

  • os: The operating system type of the nodes, assigned by referencing input variable node_pool_os_type, default is "EulerOS 2.9"

  • initial_node_count: Initial node count, assigned by referencing input variable node_pool_initial_node_count, default is 2

  • min_node_count: Minimum node count, assigned by referencing input variable node_pool_min_node_count, default is 2

  • max_node_count: Maximum node count, assigned by referencing input variable node_pool_max_node_count, default is 10

  • scale_down_cooldown_time: Scale down cooldown time (minutes), assigned by referencing input variable node_pool_scale_down_cooldown_time, default is 10 minutes

  • priority: The priority of the node pool, assigned by referencing input variable node_pool_priority, default is 1, higher values indicate higher priority

  • key_pair: Key pair name, assigned by referencing the name of the key pair resource (huaweicloud_kps_keypair.test)

  • tags: Node pool tags, assigned by referencing input variable node_pool_tags, used for resource classification and management

  • root_volume: Root volume configuration block

    • volumetype: Root volume type, assigned by referencing input variable root_volume_type, default is "SATA"

    • size: Root volume size (GB), assigned by referencing input variable root_volume_size, default is 40GB

  • data_volumes: Data volume configuration block; multiple data volume configurations are created through a dynamic block based on the local variable flattened_data_volumes

    • volumetype: Data volume type, assigned through volumetype in local variable flattened_data_volumes

    • size: Data volume size (GB), assigned through size in local variable flattened_data_volumes

    • kms_key_id: KMS key ID, assigned through kms_key_id in local variable flattened_data_volumes, used for data volume encryption

    • extend_params: Extended parameters, assigned through extend_params in local variable flattened_data_volumes

  • storage: Storage configuration block, used to configure selectors, groups, and virtual spaces for data volumes, only created when data volume configuration includes virtual spaces

    • selectors: Selector configuration block, used to select data volumes that meet conditions

      • name: Selector name, "cceUse" for CCE-used data volumes, "user1", "user2", etc. for user data volumes

      • type: Selector type, set to "evs" for cloud volumes

      • match_label_volume_type: Match label - volume type, assigned through volumetype in data volume configuration

      • match_label_size: Match label - size, assigned through size in data volume configuration

      • match_label_count: Match label - count, assigned through count in data volume configuration

      • match_label_metadata_encrypted: Match label - encryption identifier, "1" if KMS key is configured, otherwise "0"

      • match_label_metadata_cmkid: Match label - KMS key ID, use this value if KMS key is configured, otherwise null

    • groups: Group configuration block, used to group data volumes and configure virtual spaces

      • name: Group name, "vgpaas" for CCE-used data volumes, "vguser1", "vguser2", etc. for user data volumes

      • cce_managed: Whether managed by CCE, true for CCE-used data volumes, false for user data volumes

      • selector_names: Selector name list, references corresponding selector names

      • virtual_spaces: Virtual space configuration block, creates virtual spaces through dynamic block based on virtual_spaces in data volume configuration

        • name: Virtual space name, such as "kubernetes", "runtime", "user", etc.

        • size: Virtual space size, can be a percentage (e.g., "10%") or fixed size

        • lvm_lv_type: LVM logical volume type, e.g., "linear" for linear volume

        • lvm_path: LVM path, used to specify mount path, e.g., "/workspace"

        • runtime_lv_type: Runtime logical volume type

Note: The lifecycle block with ignore_changes is used to ignore changes to certain fields, avoiding unnecessary resource recreation after node pool creation due to changes in these fields. depends_on is used to ensure the key pair resource is created before the node pool.

10. Preset Input Parameters Required for Resource Deployment

In this practice, some resources and data sources take their configuration from input variables, which would otherwise have to be entered manually during each deployment. Terraform lets you preset these values in a tfvars file to avoid re-entering them on every run.

Create a terraform.tfvars file in the working directory, example content is as follows:
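An illustrative example; all values below are placeholders to adjust to your environment:

```hcl
vpc_name       = "tf_test_vpc"
subnet_name    = "tf_test_subnet"
bandwidth_name = "tf_test_bandwidth"
cluster_name   = "tf-test-cluster"
keypair_name   = "tf-test-keypair"
node_pool_name = "tf-test-node-pool"
node_pool_tags = {
  owner = "terraform"
}
```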

Usage:

  1. Save the above content as a terraform.tfvars file in the working directory (Terraform loads this filename automatically when executing commands; other names must end with .auto.tfvars, such as variables.auto.tfvars)

  2. Modify the parameter values according to actual needs

  3. When executing terraform plan or terraform apply, Terraform will automatically read the variable values in this file

In addition to using the terraform.tfvars file, you can also set variable values through the following methods:

  1. Command line parameters: terraform apply -var="vpc_name=my-vpc" -var="subnet_name=my-subnet"

  2. Environment variables: export TF_VAR_vpc_name=my-vpc

  3. Custom named variable file: terraform apply -var-file="custom.tfvars"

Note: If the same variable is set through multiple methods, Terraform will use variable values according to the following priority: command line parameters > variable file > environment variables > default values.

11. Initialize and Apply Terraform Configuration

After completing the above script configuration, execute the following steps to create resources:

  1. Run terraform init to initialize the environment

  2. Run terraform plan to view the resource creation plan

  3. After confirming that the resource plan is correct, run terraform apply to start creating the node pool

  4. Run terraform show to view the created node pool

Reference Information
