Using Terraform to create a Kubernetes cluster and an NGINX ingress

Renan Vaz
4 min read · Jan 1, 2021


In this tutorial, we are going to build a Kubernetes cluster using Terraform and EKS. We will also set up an NGINX ingress that will handle all incoming requests and proxy them to pods inside the cluster.

When you finish this tutorial, you will have these resources created in your AWS account:

  • EKS cluster
  • VPC, subnets, NAT gateway, etc.
  • IAM policy
  • 2 Kubernetes pods for testing
  • 1 AWS load balancer
  • 1 ingress-nginx controller to receive requests and proxy them to the pods

All the files needed to follow this tutorial are in my GitHub repo: https://github.com/renandeandradevaz/terraform-kubernetes

First, we need to clone the repo and init terraform:

git clone git@github.com:renandeandradevaz/terraform-kubernetes
cd terraform-kubernetes
terraform init

To keep it simple, I decided to keep the AWS region inside the file main.tf. In a production environment it would probably be organized into a separate file. If you want to use another AWS region, you must change main.tf.
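If you want to preview everything Terraform is about to create before touching your account, you can run a plan first:

# Show the execution plan without creating anything
terraform plan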

Then, apply the resources to the cloud:

terraform apply

This step may take a few minutes.

While your cluster is being created, let’s take a look at the Terraform file main.tf:

provider "aws" {
region = "us-west-1"
}

data "aws_eks_cluster" "cluster" {
name = module.eks.cluster_id
}

data "aws_eks_cluster_auth" "cluster" {
name = module.eks.cluster_id
}

provider "kubernetes" {
host = data.aws_eks_cluster.cluster.endpoint
cluster_ca_certificate = base64decode(data.aws_eks_cluster.cluster.certificate_authority.0.data)
token = data.aws_eks_cluster_auth.cluster.token
load_config_file = false
version = "~> 1.11"
}

data "aws_availability_zones" "available" {
}

locals {
cluster_name = "my-cluster"
}

module "vpc" {
source = "terraform-aws-modules/vpc/aws"
version = "2.47.0"

name = "k8s-vpc"
cidr = "172.16.0.0/16"
azs = data.aws_availability_zones.available.names
private_subnets = ["172.16.1.0/24", "172.16.2.0/24", "172.16.3.0/24"]
public_subnets = ["172.16.4.0/24", "172.16.5.0/24", "172.16.6.0/24"]
enable_nat_gateway = true
single_nat_gateway = true
enable_dns_hostnames = true

public_subnet_tags = {
"kubernetes.io/cluster/${local.cluster_name}" = "shared"
"kubernetes.io/role/elb" = "1"
}

private_subnet_tags = {
"kubernetes.io/cluster/${local.cluster_name}" = "shared"
"kubernetes.io/role/internal-elb" = "1"
}
}

module "eks" {
source = "terraform-aws-modules/eks/aws"
version = "12.2.0"

cluster_name = "${local.cluster_name}"
cluster_version = "1.17"
subnets = module.vpc.private_subnets

vpc_id = module.vpc.vpc_id

node_groups = {
first = {
desired_capacity = 2
max_capacity = 3
min_capacity = 1

instance_type = "m5.large"
}
}

write_kubeconfig = true
config_output_path = "./"

workers_additional_policies = [aws_iam_policy.worker_policy.arn]
}

resource "aws_iam_policy" "worker_policy" {
name = "iam-worker-policy"
description = "Worker policy"

policy = file("iam-policy.json")
}

In the file above, we create one VPC, some subnets, a NAT gateway, an IAM policy, and also the EKS cluster itself. The cluster runs on EC2 machines of instance type m5.large, in a node group that scales between a minimum of 1 and a maximum of 3 nodes, with a desired capacity of 2.

After the cluster is created, a file called kubeconfig_my-cluster is generated as well. This file contains the connection information for our EKS cluster, and we use it to connect to the cluster.

Now, let’s point the KUBECONFIG environment variable at it:

export KUBECONFIG=${PWD}/kubeconfig_my-cluster
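To confirm that kubectl can reach the new cluster, you can list the worker nodes (the node names will differ in your account):

# The nodes from the EKS node group should show up as Ready
kubectl get nodes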

Now we are going to use kubectl to deploy some resources to our cluster.

Create the NGINX ingress controller inside the cluster:

kubectl apply -f ingress-nginx.yaml
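You can check that the ingress controller came up correctly. Assuming the manifest installs it into the usual ingress-nginx namespace (check ingress-nginx.yaml if yours differs):

# The controller pod(s) should reach the Running state
kubectl get pods -n ingress-nginx

# The controller's Service of type LoadBalancer is what provisions the AWS load balancer
kubectl get svc -n ingress-nginx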

And then, let’s create two pods to test our configuration. If you want, you can use another image/container.

kubectl apply -f banana.yaml
kubectl apply -f apple.yaml
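Before moving on, a quick sanity check that both test pods started (the apple pod name comes from the manifest below; the banana pod is assumed to follow the same naming pattern):

# Both apple-app and banana-app should reach the Running state
kubectl get pods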

The pod’s yaml file is very simple:

kind: Pod
apiVersion: v1
metadata:
  name: apple-app
  labels:
    app: apple
spec:
  containers:
    - name: apple-app
      image: hashicorp/http-echo
      args:
        - "-text=apple"

---

kind: Service
apiVersion: v1
metadata:
  name: apple-service
spec:
  selector:
    app: apple
  ports:
    - port: 5678

In the file above, we define a pod and a service, specifying the image and the port. The other file is pretty similar; only the args differ (banana instead of apple), just to differentiate one container from the other.
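To confirm both services were created with the expected port, you can list them (banana-service is assumed to mirror apple-service):

# apple-service and banana-service should both expose port 5678
kubectl get services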

Now, we must deploy the routes:

kubectl apply -f routes.yaml

The routes file defines a Kubernetes Ingress resource whose rules map request paths to the services we just created. The relevant rules section looks like this (you can inspect the resulting Ingress as shown after the snippet):

- http:
    paths:
      - path: /apple
        backend:
          serviceName: apple-service
          servicePort: 5678
      - path: /banana
        backend:
          serviceName: banana-service
          servicePort: 5678
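Once the routes are applied, you can inspect the Ingress object; after a minute or two its ADDRESS column should show the DNS name of the AWS load balancer:

# Lists the ingress rules and the load balancer address they are bound to
kubectl get ingress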

(Optional) To finish, if you want, you can enable the cluster autoscaler on your EKS cluster:

kubectl apply -f auto-scaler.yaml
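To verify that it started, look for the autoscaler pod. The kube-system namespace is the usual default for cluster autoscaler manifests; adjust it if auto-scaler.yaml deploys it somewhere else:

# The cluster autoscaler typically runs as a Deployment in kube-system
kubectl get pods -n kube-system | grep -i autoscaler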

PS: The cluster name (“my-cluster”) is hard-coded inside the file above. If you chose a different name for your cluster, you must change it in that file as well.

With all resources created, it’s time to test. Go to https://console.aws.amazon.com/ec2/v2/home#LoadBalancers and copy the DNS name of the load balancer that was just created.

Then test whether the application returns something:

curl {AWS_LOAD_BALANCER_URL}/banana

and

curl {AWS_LOAD_BALANCER_URL}/apple
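Because the pods run hashicorp/http-echo with -text=apple and -text=banana, each path should simply echo its word back:

curl {AWS_LOAD_BALANCER_URL}/apple
# expected response: apple

curl {AWS_LOAD_BALANCER_URL}/banana
# expected response: banana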

Finally, you can destroy all created resources to save money:

terraform destroy
