Monday, November 24, 2025

AWS and HTTPS


This proved surprisingly hard but the take-away points are:
  • A Kubernetes service account must be given permission to access AWS infrastructure
  • The Kubernetes cluster needs AWS-specific pods to configure the K8s ingress so that it receives traffic from outside the cloud
  • The ingress is where the SSL encryption and decryption (TLS termination) is performed.
  • Creating the certificate is easy via the AWS web console, where you just associate it with the domain name.
The recipe

The following steps assume you have an ingress and a service already up and running. I did the mapping between the two in Terraform. What follows below is how to allow these K8s primitives to use AWS so they can be contacted by the outside world.
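
For reference, a minimal mapping of an ingress to a service (mine lives in Terraform; the names here are hypothetical) looks roughly like this when applied directly:

kubectl apply -f - <<'EOF'
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-app                      # hypothetical ingress name
  namespace: default
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip
spec:
  ingressClassName: alb             # handled by the AWS Load Balancer Controller installed below
  rules:
    - http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: my-app        # hypothetical service, assumed to already exist
                port:
                  number: 80
EOF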

You need to associate an OIDC (OpenID Connect) provider with the cluster and create a Kubernetes service account that has permission to use the AWS load balancer. Note that lines that are predominantly Kubernetes are blue and AWS lines are red.
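
Throughout, I assume a few shell variables are set up front (my own convention; the cluster and ingress names are placeholders, so substitute your own):

export CLUSTERNAME=my-cluster                  # hypothetical cluster name
export INGRESS_NAME=my-app                     # hypothetical ingress name
export REGION=eu-west-1
export FQDN=polarishttps.emryspolaris.click
export AWS_ACCOUNT_ID=$(aws sts get-caller-identity --query Account --output text)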

eksctl utils associate-iam-oidc-provider --cluster $CLUSTERNAME --approve

curl -o iam_policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-load-balancer-controller/main/docs/install/iam_policy.json

aws iam create-policy --policy-name AWSLoadBalancerControllerIAMPolicy     --policy-document file://iam_policy.json

eksctl create iamserviceaccount --cluster $CLUSTERNAME --namespace kube-system   --name aws-load-balancer-controller --attach-policy-arn arn:aws:iam::$AWS_ACCOUNT_ID:policy/AWSLoadBalancerControllerIAMPolicy --approve

kubectl describe sa aws-load-balancer-controller -n kube-system # check it's there

Then you need to configure Kubernetes to use the AWS load balancer by installing the AWS Load Balancer Controller:

helm install aws-load-balancer-controller eks/aws-load-balancer-controller -n kube-system --set clusterName=$CLUSTERNAME --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller

However, I could see my ReplicaSets were failing when I ran:

kubectl get rs -A

and describing the failing ReplicaSet showed an event like:

  Type     Reason        Age                   From                   Message
  ----     ------        ----                  ----                   -------
  Warning  FailedCreate  67s (x15 over 2m29s)  replicaset-controller  Error creating: pods "aws-load-balancer-controller-68f465f899-" is forbidden: error looking up service account kube-system/aws-load-balancer-controller: serviceaccount "aws-load-balancer-controller" not found

and there are no load balancer pods.

So, it seemed I needed to run:

kubectl apply -f aws-lbc-serviceaccount.yaml

where aws-lbc-serviceaccount.yaml is (substitute your own account ID for AWS_ACCOUNT_ID):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: aws-load-balancer-controller
  namespace: kube-system
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::AWS_ACCOUNT_ID:role/AmazonEKSLoadBalancerControllerRole
  labels:
    app.kubernetes.io/component: controller
    app.kubernetes.io/name: aws-load-balancer-controller
    app.kubernetes.io/instance: aws-load-balancer-controller

The pods were now starting but quickly failing with errors like:

{"level":"error","ts":"2025-11-21T17:31:22Z","logger":"setup","msg":"unable to initialize AWS cloud","error":"failed to get VPC ID: failed to fetch VPC ID from instance metadata: error in fetching vpc id through ec2 metadata: get mac metadata: operation error ec2imds: GetMetadata, canceled, context deadline exceeded"}

We can then pass the VPC ID to the controller explicitly with:

helm upgrade aws-load-balancer-controller eks/aws-load-balancer-controller --namespace kube-system --set clusterName=$CLUSTERNAME --set vpcId=$VPC_ID --set serviceAccount.create=false --set serviceAccount.name=aws-load-balancer-controller

and now the pods are running.

However, my domain name was still not resolving. So, run this to get the OIDC (OpenID Connect) issuer:

aws eks describe-cluster --name $CLUSTERNAME --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///"

Note that this value changes every time the cluster is created.

Then create an IAM role that the service account can assume (I called it AmazonEKSLoadBalancerControllerRole, matching the annotation in the service account above):

aws iam create-role \
    --role-name AmazonEKSLoadBalancerControllerRole \
    --assume-role-policy-document file://lbc-trust-policy.json

where lbc-trust-policy.json is:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::AWS_ACCOUNT_ID:oidc-provider/OIDC_ISSUER"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "OIDC_ISSUER:aud": "sts.amazonaws.com",
          "OIDC_ISSUER:sub": "system:serviceaccount:kube-system:aws-load-balancer-controller"
        }
      }
    }
  ]
}
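
The AWS_ACCOUNT_ID and OIDC_ISSUER placeholders need filling in before the create-role command above will work. A sketch of doing that with GNU sed (my own addition; macOS sed needs a slightly different -i form):

OIDC_ISSUER=$(aws eks describe-cluster --name $CLUSTERNAME --query "cluster.identity.oidc.issuer" --output text | sed -e "s/^https:\/\///")
sed -i -e "s|AWS_ACCOUNT_ID|$AWS_ACCOUNT_ID|g" -e "s|OIDC_ISSUER|$OIDC_ISSUER|g" lbc-trust-policy.json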

Get the ARN of the AWSLoadBalancerControllerIAMPolicy we created earlier:

POLICY_ARN=$(aws iam list-policies --scope Local --query "Policies[?PolicyName=='AWSLoadBalancerControllerIAMPolicy'].Arn" --output text)

and attach that policy to the role:

aws iam attach-role-policy --role-name AmazonEKSLoadBalancerControllerRole --policy-arn ${POLICY_ARN}

Then point the service account at the role by annotating it:

kubectl annotate serviceaccount aws-load-balancer-controller -n kube-system eks.amazonaws.com/role-arn="arn:aws:iam::${AWS_ACCOUNT_ID}:role/AmazonEKSLoadBalancerControllerRole" --overwrite

If you're following the logs of the aws-load-balancer-controller-XXX pod, you'll see it register this change when you restart the controller deployment with:

kubectl rollout restart deployment aws-load-balancer-controller -n kube-system

Then check the status of your ingress with:

kubectl describe ingress $INGRESS_NAME

Note the ADDRESS. It will be of the form k8s-XXX.REGION.elb.amazonaws.com. Let's define it as:

INGRESS_HOSTNAME=$(kubectl get ingress $INGRESS_NAME  -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')

Find the HostedZoneId of your load balancer with:

aws elbv2 describe-load-balancers --query "LoadBalancers[?DNSName=='$INGRESS_HOSTNAME'].CanonicalHostedZoneId" --output text --region $REGION

Registering the domain in Route 53

Create the A-type alias DNS record with:

HOSTED_ZONE_ID=$(aws route53 list-hosted-zones-by-name \
    --dns-name $FQDN \
    --query "HostedZones[0].Id" --output text | awk -F'/' '{print $3}')

aws route53 change-resource-record-sets --hosted-zone-id "$HOSTED_ZONE_ID"  --change-batch file://route53_change.json

where route53_change.json is (note that YOUR_HOST_ZONE_ID is the load balancer's CanonicalHostedZoneId found above, not your Route 53 hosted zone ID):

{
  "Comment": "ALIAS record for EKS ALB Ingress",
  "Changes": [
    {
      "Action": "UPSERT",
      "ResourceRecordSet": {
        "Name": "FQDN",
        "Type": "A",
        "AliasTarget": {
          "HostedZoneId": "YOUR_HOST_ZONE_ID",
          "DNSName": "INGRESS_HOSTNAME",
          "EvaluateTargetHealth": false
        }
      }
    }
  ]
}
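
These placeholders need filling in before running the change-resource-record-sets command above; a sketch (my own; ALB_ZONE_ID is just the elbv2 lookup from earlier captured into a variable):

ALB_ZONE_ID=$(aws elbv2 describe-load-balancers --query "LoadBalancers[?DNSName=='$INGRESS_HOSTNAME'].CanonicalHostedZoneId" --output text --region $REGION)
sed -i -e "s|FQDN|$FQDN|g" -e "s|YOUR_HOST_ZONE_ID|$ALB_ZONE_ID|g" -e "s|INGRESS_HOSTNAME|$INGRESS_HOSTNAME|g" route53_change.json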

After a few minutes, you'll see that the IP addresses that the domain name and INGRESS_HOSTNAME resolve to are the same.

You can create your own hosted zone with:

aws route53 create-hosted-zone --name "polarishttps.emryspolaris.click"     --caller-reference "$(date +%Y-%m-%d-%H-%M-%S)"

but this can lead to complications. 
"Public-hosted zones have a route to internet-facing resources and resolve from the internet using global routing policies. Meanwhile, private hosted zones have a route to VPC resources and resolve from inside the VPC." - AWS for Solution Architects, O'Reilly

Certificate

We've now linked the domain name to an endpoint; next we need a certificate. I did this through the AWS web console and, after just a few clicks, it gave me the ARN.
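
If you'd rather stay on the command line, requesting the certificate looks something like this (I used the console, so treat this as an untested sketch); with DNS validation you then need to create the CNAME record that ACM gives you, which the console can do for you in one click when the zone is in Route 53:

CERT_ARN=$(aws acm request-certificate --domain-name $FQDN --validation-method DNS --region $REGION --query CertificateArn --output text)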

You might need to wait a few minutes for it to become live but you can see the status of a certificate with:

aws acm describe-certificate --certificate-arn "$CERT_ARN" --region eu-west-1 --query "Certificate.Status"  --output text
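
Once the certificate is issued, the ALB needs to serve it. With the AWS Load Balancer Controller, that is typically done by attaching the certificate ARN to the ingress via annotations; a sketch of doing it with kubectl (assuming $CERT_ARN and $INGRESS_NAME are set) is:

kubectl annotate ingress $INGRESS_NAME \
  alb.ingress.kubernetes.io/certificate-arn="$CERT_ARN" \
  alb.ingress.kubernetes.io/listen-ports='[{"HTTP":80},{"HTTPS":443}]' \
  --overwrite

Once the controller has reconciled the change, curl -v https://$FQDN should show the TLS handshake completing with the new certificate.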
