
Provisioning an EKS Cluster with Terraform: A Step-by-Step Guide

10/02/2024

Amazon Elastic Kubernetes Service (EKS) is a fully managed Kubernetes service provided by AWS, making it easier to run Kubernetes without needing to manage the control plane yourself. Terraform, an Infrastructure as Code (IaC) tool, enables the automation of cloud infrastructure provisioning, including EKS clusters. By using Terraform to provision an EKS cluster, you can create, manage, and scale a Kubernetes environment in a consistent and repeatable way.

In this guide, we’ll walk through the steps to provision an EKS cluster using Terraform, from setting up your environment to deploying the cluster itself.

Prerequisites

Before we begin, make sure you have the following in place:

  1. AWS Account – Ensure you have an active AWS account with appropriate IAM permissions to provision EKS resources.
  2. Terraform Installed – If you don’t already have Terraform installed, you can download it from the official Terraform website.
  3. AWS CLI Configured – Set up the AWS CLI with credentials by running aws configure.
  4. kubectl Installed – kubectl is required for interacting with your Kubernetes cluster. You can install it by following the official Kubernetes documentation.
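
With the tooling in place, you can quickly confirm everything is on your PATH before moving on (the exact version output will vary):

aws --version
terraform version
kubectl version --client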

Step 1: Setup the Directory and Files

Create a directory for your Terraform files:

mkdir eks-cluster
cd eks-cluster

Inside the directory, create the following files:

  • main.tf: For defining the core EKS resources.
  • outputs.tf: For outputting necessary information after the cluster is created.
  • variables.tf: To store variables for reusability.

Step 2: Define AWS Provider and VPC

Provider

In the main.tf file, specify the AWS provider. This tells Terraform to use AWS as the cloud platform.

provider "aws" {
  region = var.aws_region
}
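
It is also good practice to pin the Terraform and provider versions so the configuration resolves the same dependencies on every machine. A minimal sketch; the version constraints below are illustrative, so adjust them to your environment:

terraform {
  # Pin Terraform itself and the AWS provider (illustrative constraints)
  required_version = ">= 1.3"

  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~> 4.0"
    }
  }
}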

VPC and Subnets

EKS requires a VPC with subnets for worker nodes to run. You can either create a new VPC or use an existing one. Here’s how to provision a new VPC:

module "vpc" {
  source  = "terraform-aws-modules/vpc/aws"
  version = "3.14.0"

  name = "eks-vpc"
  cidr = "10.0.0.0/16"

  azs             = ["us-west-2a", "us-west-2b", "us-west-2c"]
  public_subnets  = ["10.0.1.0/24", "10.0.2.0/24", "10.0.3.0/24"]
  private_subnets = ["10.0.4.0/24", "10.0.5.0/24", "10.0.6.0/24"]

  enable_nat_gateway = true
}

The above module provisions a VPC with both public and private subnets, as well as a NAT gateway, which gives worker nodes in the private subnets outbound internet access (to pull container images, for example). Note that the availability zones are hardcoded for us-west-2; adjust them if you change var.aws_region.
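
If you plan to expose services through AWS load balancers, Kubernetes discovers subnets using well-known role tags. One way to apply them is to add the following arguments inside the VPC module block above:

  # Lets Kubernetes place internet-facing load balancers in the public subnets
  public_subnet_tags = {
    "kubernetes.io/role/elb" = 1
  }

  # Lets Kubernetes place internal load balancers in the private subnets
  private_subnet_tags = {
    "kubernetes.io/role/internal-elb" = 1
  }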

Step 3: Provision the EKS Cluster

Now, we’ll define the EKS cluster. Still in main.tf, add the following:

module "eks" {
  source          = "terraform-aws-modules/eks/aws"
  version         = "19.9.0"
  
  cluster_name    = var.cluster_name
  cluster_version = "1.27"
  subnets         = module.vpc.private_subnets
  vpc_id          = module.vpc.vpc_id

  node_groups = {
    eks_nodes = {
      desired_capacity = 2
      max_capacity     = 3
      min_capacity     = 1

      instance_type = "t3.medium"
      key_name      = var.key_name
    }
  }
  
  manage_aws_auth = true
}

This block provisions the EKS cluster along with a managed node group. The eks_managed_node_groups attribute defines a worker node group with a minimum of 1 node, a desired size of 2, and a maximum of 3 nodes. The t3.medium instance type is used for the worker nodes, but you can adjust it based on your requirements.
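
Setting manage_aws_auth_configmap to true has the module manage the aws-auth ConfigMap inside the cluster, so Terraform also needs a configured Kubernetes provider. A minimal sketch, wired to the EKS module’s outputs and authenticating with a short-lived token from the AWS CLI:

provider "kubernetes" {
  host                   = module.eks.cluster_endpoint
  cluster_ca_certificate = base64decode(module.eks.cluster_certificate_authority_data)

  exec {
    # Fetch a short-lived authentication token via the AWS CLI
    api_version = "client.authentication.k8s.io/v1beta1"
    command     = "aws"
    args        = ["eks", "get-token", "--cluster-name", module.eks.cluster_name]
  }
}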

Variables

In the variables.tf file, define the variables used in main.tf:

variable "aws_region" {
  default = "us-west-2"
}

variable "cluster_name" {
  default = "my-eks-cluster"
}

variable "key_name" {
  description = "SSH key pair name"
}
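
Because key_name has no default, Terraform will prompt for it interactively unless you supply a value. One option is a terraform.tfvars file; the key pair name below is a placeholder and must refer to a key pair that already exists in your account:

# terraform.tfvars -- example values only
aws_region   = "us-west-2"
cluster_name = "my-eks-cluster"
key_name     = "my-ec2-keypair"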

Outputs

In the outputs.tf file, specify the outputs that will be useful after the cluster is created:

output "cluster_endpoint" {
  description = "EKS cluster endpoint"
  value       = module.eks.cluster_endpoint
}

output "cluster_security_group_id" {
  description = "EKS cluster security group ID"
  value       = module.eks.cluster_security_group_id
}

These outputs will give you the cluster’s API endpoint and security group ID, which can be useful for future integrations and configurations.
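
You could also emit the exact kubeconfig command as an output so it is printed after every apply; a small convenience sketch:

output "configure_kubectl" {
  description = "Command to update kubeconfig for this cluster"
  value       = "aws eks --region ${var.aws_region} update-kubeconfig --name ${module.eks.cluster_name}"
}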

Step 4: Initialize and Apply Terraform

Once you have defined all your configurations, run the following commands:

  1. Initialize the project:

    terraform init
    

    This will download the necessary provider plugins and modules.

  2. Validate the configuration:

    terraform validate
    

    This step ensures your configuration files are valid.

  3. Apply the configuration:

    terraform apply
    

    Terraform will prompt you to confirm the changes it plans to make. Type yes to proceed. The process may take several minutes as Terraform provisions the VPC, subnets, EKS cluster, and worker nodes.
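
terraform apply shows the execution plan before prompting, but if you prefer to review and approve exactly what will change (for example, in CI), you can save a plan file and then apply that saved plan:

terraform plan -out=tfplan
terraform apply tfplan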

Step 5: Configure kubectl to Access the Cluster

Once the EKS cluster is provisioned, you need to configure kubectl to interact with the cluster. You can do this by running the following command:

aws eks --region us-west-2 update-kubeconfig --name my-eks-cluster

Replace us-west-2 and my-eks-cluster with your AWS region and cluster name, respectively. This command sets up your kubeconfig to access the EKS cluster.

Now, test the setup by running:

kubectl get svc

You should see the default kubernetes service listed, which confirms that kubectl can reach your EKS cluster’s API server.
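
To confirm that the worker nodes from the managed node group have registered with the cluster, you can also list them; after a successful apply they should report a Ready status:

kubectl get nodes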

Conclusion

In this post, we walked through how to provision an Amazon EKS cluster using Terraform. By leveraging Terraform, we not only automated the process of creating the EKS cluster, but we also made the configuration reusable and consistent for future deployments. With your EKS cluster up and running, you can start deploying your containerized applications and take full advantage of the Kubernetes ecosystem on AWS.

Happy provisioning!
