Eks Ultimate Guide
Amazon EKS User Guide
Amazon's trademarks and trade dress may not be used in connection with any product or service that is not
Amazon's, in any manner that is likely to cause confusion among customers, or in any manner that disparages or
discredits Amazon. All other trademarks not owned by Amazon are the property of their respective owners, who may
or may not be affiliated with, connected to, or sponsored by Amazon.
Table of Contents
What Is Amazon EKS? ......................................................................................................................... 1
Amazon EKS Control Plane Architecture ........................................................................................ 1
How Does Amazon EKS Work? ..................................................................................................... 2
Getting Started with Amazon EKS ........................................................................................................ 3
Getting Started with eksctl ...................................................................................................... 3
Prerequisites ...................................................................................................................... 3
Create Your Amazon EKS Cluster and Worker Nodes ............................................................... 6
Next Steps ....................................................................................................................... 10
Getting Started with the Console ............................................................................................... 10
Amazon EKS Prerequisites ................................................................................................. 10
Step 1: Create Your Amazon EKS Cluster ............................................................................. 15
Step 2: Create a kubeconfig File ...................................................................................... 16
Step 3: Launch a Managed Node Group .............................................................................. 17
Next Steps ....................................................................................................................... 20
Clusters ........................................................................................................................................... 21
Creating a Cluster .................................................................................................................... 21
Updating Kubernetes Version ..................................................................................................... 29
Cluster Endpoint Access ............................................................................................................ 35
Modifying Cluster Endpoint Access ..................................................................................... 35
Accessing a Private Only API Server .................................................................................... 39
Control Plane Logging .............................................................................................................. 40
Enabling and Disabling Control Plane Logs .......................................................................... 40
Viewing Cluster Control Plane Logs .................................................................................... 42
Deleting a Cluster .................................................................................................................... 42
Kubernetes Versions ................................................................................................................. 45
Available Amazon EKS Kubernetes Versions ......................................................................... 45
Kubernetes 1.15 ............................................................................................................... 45
Kubernetes 1.14 ............................................................................................................... 46
Kubernetes 1.13 ............................................................................................................... 47
Amazon EKS Version Deprecation ....................................................................................... 47
Platform Versions ..................................................................................................................... 48
Kubernetes version 1.15 .................................................................................................... 48
Kubernetes version 1.14 .................................................................................................... 49
Kubernetes version 1.13 .................................................................................................... 51
Kubernetes version 1.12 .................................................................................................... 53
Windows Support ..................................................................................................................... 54
Considerations ................................................................................................................. 54
Enabling Windows Support ................................................................................................ 55
Deploy a Windows Sample Application ............................................................................... 59
Arm Support ............................................................................................................................ 60
Considerations ................................................................................................................. 60
Prerequisites .................................................................................................................... 60
Create a cluster ................................................................................................................ 61
Enable Arm Support ......................................................................................................... 61
Launch Worker Nodes ....................................................................................................... 62
Join Worker Nodes to a Cluster .......................................................................................... 63
(Optional) Deploy an Application ........................................................................................ 64
Viewing API Server Flags ........................................................................................................... 64
Worker Nodes .................................................................................................................................. 66
Amazon EKS-Optimized Linux AMI ............................................................................................. 67
Amazon EKS-Optimized AMI Build Scripts ........................................................................... 71
Amazon EKS-Optimized AMI with GPU Support ................................................................... 72
Amazon EKS-Optimized Linux AMI Versions ......................................................................... 76
Retrieving Amazon EKS-Optimized AMI IDs .......................................................................... 78
Amazon EKS Control Plane Architecture
Amazon EKS runs Kubernetes control plane instances across multiple Availability Zones to ensure high
availability. Amazon EKS automatically detects and replaces unhealthy control plane instances, and it
provides automated version upgrades and patching for them.
Amazon EKS is also integrated with many AWS services to provide scalability and security for your
applications, including Amazon ECR for container images, Elastic Load Balancing for load distribution,
IAM for authentication, and Amazon VPC for isolation.
Amazon EKS runs up-to-date versions of the open-source Kubernetes software, so you can use all the
existing plugins and tooling from the Kubernetes community. Applications running on Amazon EKS are
fully compatible with applications running on any standard Kubernetes environment, whether running
in on-premises data centers or public clouds. This means that you can easily migrate any standard
Kubernetes application to Amazon EKS without any code modification required.
The Amazon EKS control plane consists of at least two API server nodes and three etcd nodes that run across three
Availability Zones within a Region. Amazon EKS automatically detects and replaces unhealthy control
plane instances, restarting them across the Availability Zones within the Region as needed. Amazon EKS
uses the architecture of AWS Regions to maintain high availability, which is why Amazon
EKS can offer an SLA for API server endpoint availability.
Amazon EKS uses Amazon VPC network policies to restrict traffic between control plane components to
within a single cluster. Control plane components for a cluster cannot view or receive communication
from other clusters or other AWS accounts, except as authorized with Kubernetes RBAC policies.
This secure and highly available configuration makes Amazon EKS reliable and recommended for
production workloads.
How Does Amazon EKS Work?
1. First, create an Amazon EKS cluster in the AWS Management Console or with the AWS CLI or one of
the AWS SDKs.
2. Then, launch worker nodes that register with the Amazon EKS cluster. We provide you with an AWS
CloudFormation template that automatically configures your nodes.
3. When your cluster is ready, you can configure your favorite Kubernetes tools (such as kubectl) to
communicate with your cluster.
4. Deploy and manage applications on your Amazon EKS cluster the same way that you would with any
other Kubernetes environment.
For more information about creating your required resources and your first Amazon EKS cluster, see
Getting Started with Amazon EKS (p. 3).
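As a rough sketch, this workflow maps onto a few commands. The cluster name, role ARN, and VPC IDs
below are placeholders for illustration only; the getting started guides that follow walk through the real values.
# 1. Create the cluster control plane (role ARN and VPC resources are placeholders)
aws eks create-cluster --name my-cluster \
  --role-arn arn:aws:iam::111122223333:role/eksServiceRole \
  --resources-vpc-config subnetIds=subnet-aaaa,subnet-bbbb,securityGroupIds=sg-cccc
# 2. Worker nodes are typically launched from the provided AWS CloudFormation template (not shown)
# 3. Point kubectl at the new cluster
aws eks update-kubeconfig --name my-cluster
# 4. Deploy and manage applications as on any Kubernetes cluster
kubectl apply -f my-app.yaml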
Getting Started with Amazon EKS
• Getting Started with eksctl (p. 3): This getting started guide helps you to install all of the
required resources to get started with Amazon EKS using eksctl, a simple command line utility for
creating and managing Kubernetes clusters on Amazon EKS. At the end of this tutorial, you will have
a running Amazon EKS cluster with worker nodes, and the kubectl command line utility will be
configured to use your new cluster. This is the fastest and simplest way to get started with Amazon
EKS.
• Getting Started with the AWS Management Console (p. 10): This getting started guide helps you to
create all of the required resources to get started with Amazon EKS in the AWS Management Console.
In this guide, you manually create each resource in the Amazon EKS or AWS CloudFormation consoles,
and the workflow described here gives you complete visibility into how each resource is created and
how they interact with each other.
Prerequisites
This section helps you to install and configure the binaries you need to create and manage an Amazon
EKS cluster.
If you already have pip and a supported version of Python, you can install or upgrade the AWS CLI with
the following command:
Note
Your system's Python version must be 2.7.9 or later. Otherwise, you receive hostname
doesn't match errors with AWS CLI calls to Amazon EKS. For more information, see What are
"hostname doesn't match" errors? in the Python Requests FAQ.
For more information about other methods of installing or upgrading the AWS CLI for your platform, see
Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide.
If you are unable to install version 1.18.17 or later of the AWS CLI on your system, you must ensure
that the AWS IAM Authenticator for Kubernetes is installed on your system. For more information, see
Installing aws-iam-authenticator (p. 179).
$ aws configure
AWS Access Key ID [None]: AKIAIOSFODNN7EXAMPLE
AWS Secret Access Key [None]: wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
Default region name [None]: region-code
Default output format [None]: json
When you type this command, the AWS CLI prompts you for four pieces of information: access key,
secret access key, AWS Region, and output format. This information is stored in a profile (a collection of
settings) named default. This profile is used unless you specify another one.
For more information, see Configuring the AWS CLI in the AWS Command Line Interface User Guide.
Install eksctl
This section helps you to install the eksctl command line utility. For more information, see
https://eksctl.io/.
Choose the tab below that best represents your client setup.
macOS
Important
The current release is a release candidate. To install the release candidate, you must
download an archive file for your operating system from https://github.com/weaveworks/
eksctl/releases/tag/0.15.0-rc.2, extract eksctl, and then execute it, rather than using the
numbered steps below.
The easiest way to get started with Amazon EKS and macOS is by installing eksctl with Homebrew.
The eksctl Homebrew recipe installs eksctl and any other dependencies that are required for
Amazon EKS, such as kubectl and the aws-iam-authenticator.
1. If you do not already have Homebrew installed on macOS, install it with the following
command.
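A sketch of the Homebrew-based installation follows; the bootstrap URL and the Weaveworks tap name
should be verified against https://brew.sh and the eksctl documentation.
# Install Homebrew if it is not already present (see https://brew.sh for the current command)
/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
# Tap the Weaveworks formulae and install eksctl (pulls in kubectl and aws-iam-authenticator)
brew tap weaveworks/tap
brew install weaveworks/tap/eksctl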
4. Test that your installation was successful with the following command.
eksctl version
Note
The GitTag version should be at least 0.15.0-rc.2. If not, check your terminal
output for any installation or upgrade errors, or manually download an archive of
the release for your operating system from https://github.com/weaveworks/eksctl/
releases, extract eksctl, and then execute it.
Linux
Important
The current release is a release candidate. To install the release candidate, you must
download an archive file for your operating system from https://github.com/weaveworks/
eksctl/releases/tag/0.15.0-rc.2, extract eksctl, and then execute it, rather than using the
numbered steps below.
1. Download and extract the latest release of eksctl with the following command.
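A sketch of the download-and-extract sequence, assuming a 64-bit Linux host; verify the exact archive
URL against the eksctl releases page.
curl --silent --location "https://github.com/weaveworks/eksctl/releases/download/latest_release/eksctl_$(uname -s)_amd64.tar.gz" | tar xz -C /tmp
sudo mv /tmp/eksctl /usr/local/bin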
3. Test that your installation was successful with the following command.
eksctl version
Note
The GitTag version should be at least 0.15.0-rc.2. If not, check your terminal
output for any installation or upgrade errors, or manually download an archive of
the release for your operating system from https://github.com/weaveworks/eksctl/
releases, extract eksctl, and then execute it.
Windows
Important
The current release is a release candidate. To install the release candidate, you must
download an archive file for your operating system from https://github.com/weaveworks/
eksctl/releases/tag/0.15.0-rc.2, extract eksctl, and then execute it, rather than using the
numbered steps below.
1. If you do not already have Chocolatey installed on your Windows system, see Installing
Chocolatey.
2. Install or upgrade eksctl and the aws-iam-authenticator.
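Assuming both packages are available from your Chocolatey source, a single command installs or
upgrades them (a sketch):
choco install -y eksctl aws-iam-authenticator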
3. Test that your installation was successful with the following command.
eksctl version
Note
The GitTag version should be at least 0.15.0-rc.2. If not, check your terminal
output for any installation or upgrade errors, or manually download an archive of
the release for your operating system from https://github.com/weaveworks/eksctl/
releases, extract eksctl, and then execute it.
• You have multiple options to download and install kubectl for your operating system.
• The kubectl binary is available in many operating system package managers, and this option is
often much easier than a manual download and install process. You can follow the instructions
for your specific operating system or package manager in the Kubernetes documentation to
install.
• Amazon EKS also vends kubectl binaries that you can use that are identical to the upstream
kubectl binaries with the same version. To install the Amazon EKS-vended binary for your
operating system, see Installing kubectl (p. 174).
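Whichever installation method you choose, you can confirm the installed client afterward:
kubectl version --client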
Create Your Amazon EKS Cluster and Worker Nodes
1. Choose a tab below that matches your workload requirements. If you want to create a cluster that
only runs pods on AWS Fargate, choose AWS Fargate-only cluster. If you only intend to run Linux
workloads on your cluster, choose Cluster with Linux-only workloads. If you want to run Linux and
Windows workloads on your cluster, choose Cluster with Linux and Windows workloads.
This procedure assumes that you have installed eksctl, and that your eksctl version is at
least 0.15.0-rc.2. You can check your version with the following command:
eksctl version
Create your Amazon EKS cluster with Fargate support with the following command. Replace the
example values with your own values. For --region, specify a supported region (p. 105).
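A minimal sketch of the command, using a placeholder cluster name; the --fargate flag tells eksctl to
create the Fargate profile and pod execution role described below:
eksctl create cluster \
--name my-fargate-cluster \
--version 1.15 \
--region region-code \
--fargate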
Your new Amazon EKS cluster is created without a worker node group. However, eksctl creates
a pod execution role, a Fargate profile for the default and kube-system namespaces, and it
patches the coredns deployment so that it can run on Fargate. For more information, see AWS
Fargate (p. 105).
Cluster with Linux-only workloads
This procedure assumes that you have installed eksctl, and that your eksctl version is at
least 0.15.0-rc.2. You can check your version with the following command:
eksctl version
Create your Amazon EKS cluster and Linux worker nodes with the following command. Replace
the example values with your own values.
Important
Kubernetes version 1.12 is now deprecated on Amazon EKS. On May 11th, 2020,
Kubernetes version 1.12 will no longer be supported on Amazon EKS. On this date,
you will no longer be able to create new 1.12 clusters, and all existing Amazon EKS
clusters running Kubernetes version 1.12 will eventually be automatically updated to
version 1.13. We recommend that you update any 1.12 clusters to version 1.13 or later
in order to avoid service interruption. For more information, see Amazon EKS Version
Deprecation (p. 47).
Kubernetes API versions available through Amazon EKS are officially supported by
AWS, until we remove the ability to create clusters using that version. This is true
even if upstream Kubernetes is no longer supporting a version available on Amazon
EKS. We backport security fixes that are applicable to the Kubernetes versions
supported on Amazon EKS. Existing clusters are always supported, and Amazon EKS
will automatically update your cluster to a supported version if you have not done so
manually by the version end of life date.
eksctl create cluster \
--name prod \
--version 1.15 \
--region region-code \
--nodegroup-name standard-workers \
--node-type t3.medium \
--nodes 3 \
--nodes-min 1 \
--nodes-max 4 \
--ssh-access \
--ssh-public-key my-public-key.pub \
--managed
Note
• The --managed option for Amazon EKS Managed Node Groups (p. 82) is currently
only supported on Kubernetes 1.14 and later clusters. We recommend that you use
the latest version of Kubernetes that is available in Amazon EKS to take advantage
of the latest features. If you choose to use an earlier Kubernetes version, you must
remove the --managed option.
For more information on the available options for eksctl create cluster, see the
project README on GitHub or view the help page with the eksctl create cluster --help command.
You'll see several lines of output as the cluster and worker nodes are created. The last line of
output reports that the cluster is ready.
This procedure assumes that you have installed eksctl, and that your eksctl version is at
least 0.15.0-rc.2. You can check your version with the following command:
eksctl version
Familiarize yourself with the Windows support considerations (p. 54), which include
supported values for instanceType in the example text below. Replace the example values
with your own values. Save the text below to a file named cluster-spec.yaml. The
configuration file is used to create a cluster and both Linux and Windows worker node groups.
Even if you only want to run Windows workloads in your cluster, all Amazon EKS clusters must
contain at least one Linux worker node. We recommend that you create at least two worker
nodes in each node group for availability purposes. The minimum required Kubernetes version
for Windows workloads is 1.14.
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: windows-prod
  region: region-code
managedNodeGroups:
  - name: linux-ng
    instanceType: t2.large
    minSize: 2
nodeGroups:
  - name: windows-ng
    instanceType: m5.large
    minSize: 2
    volumeSize: 100
    amiFamily: WindowsServer2019FullContainer
Create your Amazon EKS cluster and Windows and Linux worker nodes with the following
command.
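A sketch of the command, assuming the configuration above was saved as cluster-spec.yaml:
eksctl create cluster -f cluster-spec.yaml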
Note
The managedNodeGroups option for Amazon EKS Managed Node Groups (p. 82)
is currently only supported on Kubernetes 1.14 and later clusters. We recommend
that you use the latest version of Kubernetes that is available in Amazon EKS to take
advantage of the latest features. If you choose to use an earlier Kubernetes version, you
must remove the managedNodeGroups option.
For more information on the available options for eksctl create cluster, see the project
README on GitHub or view the help page with the eksctl create cluster --help command.
You'll see several lines of output as the cluster and worker nodes are created. The last line of
output reports that the cluster is ready.
2. Cluster provisioning usually takes between 10 and 15 minutes. When your cluster is ready, test that
your kubectl configuration is correct.
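A common test is to list the services in the default namespace; a new cluster returns only the built-in
kubernetes service:
kubectl get svc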
Note
If you receive the error "aws-iam-authenticator": executable file not found
in $PATH, your kubectl isn't configured for Amazon EKS. For more information, see
Installing aws-iam-authenticator (p. 179).
If you receive any other authorization or resource type errors, see Unauthorized or Access
Denied (kubectl) (p. 275) in the troubleshooting section.
The output lists the built-in kubernetes ClusterIP service in the default namespace.
3. (Linux GPU workers only) If you chose a GPU instance type and the Amazon EKS-optimized AMI with
GPU support, you must apply the NVIDIA device plugin for Kubernetes as a DaemonSet on your
cluster with the following command.
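A sketch of the command; the manifest URL and version are illustrative, so check the NVIDIA
k8s-device-plugin releases for the version that matches your cluster:
kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.12/nvidia-device-plugin.yml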
Next Steps
Now that you have a working Amazon EKS cluster with worker nodes, you are ready to start installing
Kubernetes add-ons and deploying applications to your cluster. The following documentation topics help
you to extend the functionality of your cluster.
• Cluster Autoscaler (p. 130) — Configure the Kubernetes Cluster Autoscaler to automatically adjust
the number of nodes in your node groups.
• Launch a Guest Book Application (p. 192) — Create a sample guest book application to test your
cluster and Linux worker nodes.
• Deploy a Windows Sample Application (p. 59) — Deploy a sample application to test your cluster
and Windows worker nodes.
• Tutorial: Deploy the Kubernetes Web UI (Dashboard) (p. 202) — This tutorial guides you through
deploying the Kubernetes dashboard to your cluster.
• Using Helm with Amazon EKS (p. 201) — The helm package manager for Kubernetes helps you
install and manage applications on your cluster.
• Installing the Kubernetes Metrics Server (p. 195) — The Kubernetes metrics server is an aggregator
of resource usage data in your cluster.
• Control Plane Metrics with Prometheus (p. 197) — This topic helps you deploy Prometheus into your
cluster with helm.
Getting Started with the AWS Management Console
You can also choose to use the eksctl CLI to create your cluster and worker nodes. For more
information, see Getting Started with eksctl (p. 3).
You must also create a VPC and a security group for your cluster to use. Although the VPC and security
groups can be used for multiple EKS clusters, we recommend that you use a separate VPC for each EKS
cluster to provide better network isolation.
This section also helps you to install the kubectl binary and configure it to work with Amazon EKS.
Amazon EKS Prerequisites
AWS CloudFormation
1. Save the following AWS CloudFormation template to a text file on your local system.
---
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Amazon EKS Service Role'
Resources:
  eksServiceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - eks.amazonaws.com
            Action:
              - sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonEKSServicePolicy
        - arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
Outputs:
  RoleArn:
    Description: The role that Amazon EKS will use to create AWS resources for Kubernetes clusters
    Value: !GetAtt eksServiceRole.Arn
    Export:
      Name: !Sub "${AWS::StackName}-RoleArn"
8. On the Review page, review your information, acknowledge that the stack might create IAM
resources, and then choose Create stack.
Choose the tab below that represents your desired VPC configuration.
https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-11-15/amazon-eks-vpc-private-subnets.yaml
6. On the Specify Details page, fill out the parameters accordingly, and then choose Next.
• Stack name: Choose a stack name for your AWS CloudFormation stack. For example, you can
call it eks-vpc.
• VpcBlock: Choose a CIDR range for your VPC. You can keep the default value.
• PublicSubnet01Block: Specify a CIDR range for public subnet 1. We recommend that you
keep the default value so that you have plenty of IP addresses for pods to use.
• PublicSubnet02Block: Specify a CIDR range for public subnet 2. We recommend that you
keep the default value so that you have plenty of IP addresses for pods to use.
• PrivateSubnet01Block: Specify a CIDR range for private subnet 1. We recommend that you
keep the default value so that you have plenty of IP addresses for pods to use.
• PrivateSubnet02Block: Specify a CIDR range for private subnet 2. We recommend that you
keep the default value so that you have plenty of IP addresses for pods to use.
7. (Optional) On the Options page, tag your stack resources. Choose Next.
8. On the Review page, choose Create.
9. When your stack is created, select it in the console and choose Outputs.
10. Record the SecurityGroups value for the security group that was created. You need this
when you create your EKS cluster; this security group is applied to the cross-account elastic
network interfaces that are created in your subnets to allow the Amazon EKS control plane to
communicate with your worker nodes.
11. Record the VpcId for the VPC that was created. You need this when you launch your worker
node group template.
12. Record the SubnetIds for the subnets that were created. You need this when you create your
EKS cluster; these are the subnets that your worker nodes are launched into.
13. Tag your private subnets so that Kubernetes knows that it can use them for internal load
balancers.
Key: kubernetes.io/role/internal-elb, Value: 1
14. Tag your public subnets so that Kubernetes knows that it can use them for external load
balancers.
Key: kubernetes.io/role/elb, Value: 1
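You can apply these tags in the console or with the AWS CLI; a sketch with placeholder subnet IDs:
# Private subnets: internal load balancers
aws ec2 create-tags --resources subnet-0abc111 subnet-0abc222 \
  --tags Key=kubernetes.io/role/internal-elb,Value=1
# Public subnets: external load balancers
aws ec2 create-tags --resources subnet-0def111 subnet-0def222 \
  --tags Key=kubernetes.io/role/elb,Value=1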
https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-11-15/amazon-eks-vpc-sample.yaml
6. On the Specify Details page, fill out the parameters accordingly, and then choose Next.
• Stack name: Choose a stack name for your AWS CloudFormation stack. For example, you can
call it eks-vpc.
• VpcBlock: Choose a CIDR range for your VPC. You can keep the default value.
• Subnet01Block: Specify a CIDR range for subnet 1. We recommend that you keep the default
value so that you have plenty of IP addresses for pods to use.
• Subnet02Block: Specify a CIDR range for subnet 2. We recommend that you keep the default
value so that you have plenty of IP addresses for pods to use.
• Subnet03Block: Specify a CIDR range for subnet 3. We recommend that you keep the default
value so that you have plenty of IP addresses for pods to use.
7. (Optional) On the Options page, tag your stack resources. Choose Next.
8. On the Review page, choose Create.
9. When your stack is created, select it in the console and choose Outputs.
10. Record the SecurityGroups value for the security group that was created. You need this
when you create your EKS cluster; this security group is applied to the cross-account elastic
network interfaces that are created in your subnets to allow the Amazon EKS control plane to
communicate with your worker nodes.
11. Record the VpcId for the VPC that was created. You need this when you launch your worker
node group template.
12. Record the SubnetIds for the subnets that were created. You need this when you create your
EKS cluster; these are the subnets that your worker nodes are launched into.
13. Tag your public subnets so that Kubernetes knows that it can use them for external load
balancers.
Key: kubernetes.io/role/elb, Value: 1
e. Repeat these substeps for each public subnet in your VPC.
• You have multiple options to download and install kubectl for your operating system.
• The kubectl binary is available in many operating system package managers, and this option is
often much easier than a manual download and install process. You can follow the instructions
for your specific operating system or package manager in the Kubernetes documentation to
install.
• Amazon EKS also vends kubectl binaries that you can use that are identical to the upstream
kubectl binaries with the same version. To install the Amazon EKS-vended binary for your
operating system, see Installing kubectl (p. 174).
You can check your AWS CLI version with the following command:
aws --version
Note
Your system's Python version must be 2.7.9 or later. Otherwise, you receive hostname
doesn't match errors with AWS CLI calls to Amazon EKS. For more information, see What are
"hostname doesn't match" errors? in the Python Requests FAQ.
If you are unable to install version 1.18.17 or later of the AWS CLI on your system, you must ensure
that the AWS IAM Authenticator for Kubernetes is installed on your system. For more information, see
Installing aws-iam-authenticator (p. 179).
Step 1: Create Your Amazon EKS Cluster
Important
The worker node AWS CloudFormation template modifies the security group that you
specify here, so Amazon EKS strongly recommends that you use a dedicated security
group for each cluster control plane (one per cluster). If this security group is shared
with other resources, you might block or disrupt connections to those resources.
• Endpoint private access: Choose whether to enable or disable private access for your cluster's
Kubernetes API server endpoint. If you enable private access, Kubernetes API requests that
originate from within your cluster's VPC will use the private VPC endpoint. For more information,
see Amazon EKS Cluster Endpoint Access Control (p. 35).
• Endpoint public access: Choose whether to enable or disable public access for your cluster's
Kubernetes API server endpoint. If you disable public access, your cluster's Kubernetes API server
can only receive requests from within the cluster VPC. For more information, see Amazon EKS
Cluster Endpoint Access Control (p. 35).
• Logging – For each individual log type, choose whether the log type should be Enabled or
Disabled. By default, each log type is Disabled. For more information, see Amazon EKS Control
Plane Logging (p. 40).
• Tags – (Optional) Add any tags to your cluster. For more information, see Tagging Your Amazon
EKS Resources (p. 263).
Note
You might receive an error that one of the Availability Zones in your request doesn't have
sufficient capacity to create an Amazon EKS cluster. If this happens, the error output
contains the Availability Zones that can support a new cluster. Retry creating your cluster
with at least two subnets that are located in the supported Availability Zones for your
account. For more information, see Insufficient Capacity (p. 275).
4. On the Clusters page, choose the name of your newly created cluster to view the cluster
information.
5. The Status field shows CREATING until the cluster provisioning process completes. Cluster
provisioning usually takes between 10 and 15 minutes.
Step 2: Create a kubeconfig File
1. Ensure that you have version 1.18.17 or later of the AWS CLI installed. To install or upgrade the AWS
CLI, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide.
Note
Your system's Python version must be 2.7.9 or later. Otherwise, you receive hostname
doesn't match errors with AWS CLI calls to Amazon EKS.
You can check your AWS CLI version with the following command:
aws --version
Important
Package managers such as yum, apt-get, or Homebrew for macOS are often behind several
versions of the AWS CLI. To ensure that you have the latest version, see Installing the AWS
Command Line Interface in the AWS Command Line Interface User Guide.
2. Use the AWS CLI update-kubeconfig command to create or update your kubeconfig for your cluster.
• By default, the resulting configuration file is created at the default kubeconfig path (.kube/
config) in your home directory or merged with an existing kubeconfig at that location. You can
specify another path with the --kubeconfig option.
• You can specify an IAM role ARN with the --role-arn option to use for authentication when you
issue kubectl commands. Otherwise, the IAM entity in your default AWS CLI or SDK credential
chain is used. You can view your default AWS CLI or SDK identity by running the aws sts get-
caller-identity command.
• For more information, see the help page with the aws eks update-kubeconfig help command or
see update-kubeconfig in the AWS CLI Command Reference.
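A sketch of the command, using a placeholder cluster name and Region:
aws eks update-kubeconfig --name my-cluster --region region-code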
Note
To run the following command, your account must be assigned the
eks:DescribeCluster IAM permission for the cluster name that you specify.
Note
If you receive the error "aws-iam-authenticator": executable file not found
in $PATH, your kubectl isn't configured for Amazon EKS. For more information, see
Installing aws-iam-authenticator (p. 179).
If you receive any other authorization or resource type errors, see Unauthorized or Access
Denied (kubectl) (p. 275) in the troubleshooting section.
When your configuration is correct, a kubectl command such as kubectl get svc returns the cluster's
built-in kubernetes service.
Step 3: Launch a Managed Node Group
The Amazon EKS worker node kubelet daemon makes calls to AWS APIs on your behalf. Worker nodes
receive permissions for these API calls through an IAM instance profile and associated policies. Before
you can launch worker nodes and register them into a cluster, you must create an IAM role for those
worker nodes to use when they are launched. For more information, see Amazon EKS Worker Node IAM
Role (p. 239). You can create the role using the AWS Management Console or AWS CloudFormation.
Select the tab with the name of the tool that you'd like to use to create the role.
Note
We recommend that you create a new worker node IAM role for each cluster. Otherwise, a node
from one cluster could authenticate with another cluster that it does not belong to.
To create your Amazon EKS worker node role in the IAM console
AWS CloudFormation
To create your Amazon EKS worker node role using AWS CloudFormation
https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-11-15/amazon-eks-nodegroup-role.yaml
5. On the Specify stack details page, for Stack name enter a name such as eks-node-group-
instance-role and choose Next.
6. (Optional) On the Configure stack options page, you can choose to tag your stack resources.
Choose Next.
7. On the Review page, check the box in the Capabilities section and choose Create stack.
8. When your stack is created, select it in the console and choose Outputs.
9. Record the NodeInstanceRole value for the IAM role that was created. You need this when you
create your node group.
1. Wait for your cluster status to show as ACTIVE. You cannot create a managed node group for a
cluster that is not yet ACTIVE.
2. Open the Amazon EKS console at https://console.aws.amazon.com/eks/home#/clusters.
3. Choose the name of the cluster that you want to create your managed node group in.
4. On the cluster page, choose Add node group.
5. On the Configure node group page, fill out the parameters accordingly, and then choose Next.
• AMI type — Choose Amazon Linux 2 (AL2_x86_64) for non-GPU instances, or Amazon Linux 2
GPU Enabled (AL2_x86_64_GPU) for GPU instances.
• Instance type — Choose the instance type to use in your managed node group. Larger instance
types can accommodate more pods.
• Disk size — Enter the disk size (in GiB) to use for your worker node root volume.
7. On the Setup scaling policies page, fill out the parameters accordingly, and then choose Next.
Note
Amazon EKS does not automatically scale your node group in or out. However, you can
configure the Kubernetes Cluster Autoscaler (p. 130) to do this for you.
• Minimum size — Specify the minimum number of worker nodes that the managed node group
can scale in to.
• Maximum size — Specify the maximum number of worker nodes that the managed node group
can scale out to.
• Desired size — Specify the current number of worker nodes that the managed node group should
maintain at launch.
8. On the Review and create page, review your managed node group configuration and choose Create.
9. Watch the status of your nodes and wait for them to reach the Ready status.
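You can also watch the nodes from the command line; each node should eventually report a STATUS of
Ready:
kubectl get nodes --watch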
10. (GPU workers only) If you chose a GPU instance type and the Amazon EKS-optimized AMI with GPU
support, you must apply the NVIDIA device plugin for Kubernetes as a DaemonSet on your cluster
with the following command.
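As in the eksctl getting started guide, the manifest URL and version below are illustrative; check the
NVIDIA k8s-device-plugin releases for a version that matches your cluster:
kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.12/nvidia-device-plugin.yml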
Add Windows support to your cluster and launch Windows worker nodes. For more information, see
Windows Support (p. 54). All Amazon EKS clusters must contain at least one Linux worker node, even
if you only want to run Windows workloads in your cluster.
Next Steps
Now that you have a working Amazon EKS cluster with worker nodes, you are ready to start installing
Kubernetes add-ons and deploying applications to your cluster. The following documentation topics help
you to extend the functionality of your cluster.
• Cluster Autoscaler (p. 130) — Configure the Kubernetes Cluster Autoscaler to automatically adjust
the number of nodes in your node groups.
• Launch a Guest Book Application (p. 192) — Create a sample guest book application to test your
cluster and Linux worker nodes.
• Deploy a Windows Sample Application (p. 59) — Deploy a sample application to test your cluster
and Windows worker nodes.
• Tutorial: Deploy the Kubernetes Web UI (Dashboard) (p. 202) — This tutorial guides you through
deploying the Kubernetes dashboard to your cluster.
• Using Helm with Amazon EKS (p. 201) — The helm package manager for Kubernetes helps you
install and manage applications on your cluster.
• Installing the Kubernetes Metrics Server (p. 195) — The Kubernetes metrics server is an aggregator
of resource usage data in your cluster.
• Control Plane Metrics with Prometheus (p. 197) — This topic helps you deploy Prometheus into your
cluster with helm.
Creating a Cluster
The Amazon EKS control plane consists of control plane nodes that run the Kubernetes software, such
as etcd and the Kubernetes API server. The control plane runs in an account managed by AWS, and the
Kubernetes API is exposed via the Amazon EKS endpoint associated with your cluster. Each Amazon EKS
cluster control plane is single-tenant and unique, and runs on its own set of Amazon EC2 instances.
All of the data stored by the etcd nodes and associated Amazon EBS volumes is encrypted. Amazon EKS
uses master encryption keys that generate volume encryption keys which are managed by the Amazon
EKS service.
The cluster control plane is provisioned across multiple Availability Zones and fronted by an Elastic Load
Balancing Network Load Balancer. Amazon EKS also provisions elastic network interfaces in your VPC
subnets to provide connectivity from the control plane instances to the worker nodes (for example, to
support kubectl exec, logs, and proxy data flows).
Amazon EKS worker nodes run in your AWS account and connect to your cluster's control plane via the
API server endpoint and a certificate file that is created for your cluster.
If this is your first time creating an Amazon EKS cluster, we recommend that you follow one of our
Getting Started with Amazon EKS (p. 3) guides instead. They provide complete end-to-end walkthroughs
for creating an Amazon EKS cluster with worker nodes.
Important
When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is
added to the Kubernetes RBAC authorization table as the administrator (with system:masters
permissions). Initially, only that IAM user can make calls to the Kubernetes API server using
kubectl. For more information, see Managing Users or IAM Roles for your Cluster (p. 185). If
you use the console to create the cluster, you must ensure that the same IAM user credentials
are in the AWS SDK credential chain when you are running kubectl commands on your cluster.
If you install and configure the AWS CLI, you can configure the IAM credentials for your user. If
the AWS CLI is configured properly for your user, then eksctl and the AWS IAM Authenticator
for Kubernetes can find those credentials as well. For more information, see Configuring the
AWS CLI in the AWS Command Line Interface User Guide.
Choose the tab below that corresponds to your desired cluster creation method:
eksctl
1. Choose a tab below that matches your workload requirements. If you want to create a cluster
that only runs pods on AWS Fargate, choose AWS Fargate-only cluster. If you only intend to
run Linux workloads on your cluster, choose Cluster with Linux-only workloads. If you want
to run Linux and Windows workloads on your cluster, choose Cluster with Linux and Windows
workloads.
This procedure assumes that you have installed eksctl, and that your eksctl version is at
least 0.15.0-rc.2. You can check your version with the following command:
eksctl version
Create your Amazon EKS cluster with Fargate support with the following command.
Replace the example values with your own values. For --region, specify a supported
region (p. 105).
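A minimal sketch of the command, using a placeholder cluster name:
eksctl create cluster \
--name my-fargate-cluster \
--region region-code \
--fargate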
Your new Amazon EKS cluster is created without a worker node group. However, eksctl
creates a pod execution role, a Fargate profile for the default and kube-system
namespaces, and it patches the coredns deployment so that it can run on Fargate. For
more information, see AWS Fargate (p. 105).
Cluster with Linux-only workloads
This procedure assumes that you have installed eksctl, and that your eksctl version is at
least 0.15.0-rc.2. You can check your version with the following command:
eksctl version
Create your Amazon EKS cluster and Linux worker nodes with the following command.
Replace the example values with your own values.
Important
Kubernetes version 1.12 is now deprecated on Amazon EKS. On May 11th, 2020,
Kubernetes version 1.12 will no longer be supported on Amazon EKS. On this date,
you will no longer be able to create new 1.12 clusters, and all existing Amazon EKS
clusters running Kubernetes version 1.12 will eventually be automatically updated
to version 1.13. We recommend that you update any 1.12 clusters to version 1.13
or later in order to avoid service interruption. For more information, see Amazon
EKS Version Deprecation (p. 47).
Kubernetes API versions available through Amazon EKS are officially supported
by AWS, until we remove the ability to create clusters using that version. This is
true even if upstream Kubernetes is no longer supporting a version available on
Amazon EKS. We backport security fixes that are applicable to the Kubernetes
versions supported on Amazon EKS. Existing clusters are always supported, and
Amazon EKS will automatically update your cluster to a supported version if you
have not done so manually by the version end of life date.
eksctl create cluster \
--name prod \
--version 1.15 \
--region region-code \
--nodegroup-name standard-workers \
--node-type t3.medium \
--nodes 3 \
--nodes-min 1 \
--nodes-max 4 \
--ssh-access \
--ssh-public-key my-public-key.pub \
--managed
Note
• The --managed option for Amazon EKS Managed Node Groups (p. 82) is
currently only supported on Kubernetes 1.14 and later clusters. We recommend
that you use the latest version of Kubernetes that is available in Amazon EKS to
take advantage of the latest features. If you choose to use an earlier Kubernetes
version, you must remove the --managed option.
For more information on the available options for eksctl create cluster, see the
project README on GitHub or view the help page with the eksctl create cluster --help command.
You'll see several lines of output as the cluster and worker nodes are created. The last line
of output reports that the cluster is ready.
This procedure assumes that you have installed eksctl, and that your eksctl version is at
least 0.15.0-rc.2. You can check your version with the following command:
eksctl version
Familiarize yourself with the Windows support considerations (p. 54), which include
supported values for instanceType in the example text below. Replace the example
values with your own values. Save the text below to a file named cluster-spec.yaml.
The configuration file is used to create a cluster and both Linux and Windows worker node
groups. Even if you only want to run Windows workloads in your cluster, all Amazon EKS
clusters must contain at least one Linux worker node. We recommend that you create at
least two worker nodes in each node group for availability purposes. The minimum required
Kubernetes version for Windows workloads is 1.14.
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: windows-prod
  region: region-code
managedNodeGroups:
  - name: linux-ng
    instanceType: t2.large
    minSize: 2
nodeGroups:
  - name: windows-ng
    instanceType: m5.large
    minSize: 2
    volumeSize: 100
    amiFamily: WindowsServer2019FullContainer
Create your Amazon EKS cluster and Windows and Linux worker nodes with the following
command.
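A sketch of the command, assuming the configuration above was saved as cluster-spec.yaml:
eksctl create cluster -f cluster-spec.yaml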
Note
The managedNodeGroups option for Amazon EKS Managed Node
Groups (p. 82) is currently only supported on Kubernetes 1.14 and later clusters.
We recommend that you use the latest version of Kubernetes that is available
in Amazon EKS to take advantage of the latest features. If you choose to use an
earlier Kubernetes version, you must remove the managedNodeGroups option.
For more information on the available options for eksctl create cluster, see the
project README on GitHub or view the help page with the eksctl create cluster --help command.
You'll see several lines of output as the cluster and worker nodes are created. The last line
of output reports that the cluster is ready.
2. Cluster provisioning usually takes between 10 and 15 minutes. When your cluster is ready, test
that your kubectl configuration is correct.
Note
If you receive the error "aws-iam-authenticator": executable file
not found in $PATH, your kubectl isn't configured for Amazon EKS. For more
information, see Installing aws-iam-authenticator (p. 179).
If you receive any other authorization or resource type errors, see Unauthorized or
Access Denied (kubectl) (p. 275) in the troubleshooting section.
A kubectl command such as kubectl get svc should return the built-in kubernetes service when the
configuration is correct.
3. (Linux GPU workers only) If you chose a GPU instance type and the Amazon EKS-optimized AMI
with GPU support, you must apply the NVIDIA device plugin for Kubernetes as a DaemonSet on
your cluster with the following command.
• You have created a VPC and a dedicated security group that meet the requirements for an
Amazon EKS cluster. For more information, see Cluster VPC Considerations (p. 152) and Amazon
EKS Security Group Considerations (p. 154). The Getting Started with the AWS Management
Console (p. 10) guide creates a VPC that meets the requirements, or you can also follow Creating a
VPC for Your Amazon EKS Cluster (p. 150) to create one.
• You have created an Amazon EKS service role to apply to your cluster. The Getting Started with
Amazon EKS (p. 3) guide creates a service role for you, or you can also follow Amazon EKS IAM
Roles (p. 232) to create one manually.
• Subnets – The subnets within the preceding VPC to use for your cluster. By default, the
available subnets in the VPC are preselected. Specify all subnets that will host resources for
your cluster (such as private subnets for worker nodes and public subnets for load balancers).
Your subnets must meet the requirements for an Amazon EKS cluster. For more information,
see Cluster VPC Considerations (p. 152).
• Security Groups: The SecurityGroups value from the AWS CloudFormation output that
you generated with Create your Amazon EKS Cluster VPC (p. 12). This security group has
ControlPlaneSecurityGroup in the drop-down name.
Important
The worker node AWS CloudFormation template modifies the security group that
you specify here, so Amazon EKS strongly recommends that you use a dedicated
security group for each cluster control plane (one per cluster). If this security
group is shared with other resources, you might block or disrupt connections to those
resources.
• Endpoint private access – Choose whether to enable or disable private access for your
cluster's Kubernetes API server endpoint. If you enable private access, Kubernetes API
requests that originate from within your cluster's VPC use the private VPC endpoint. For more
information, see Amazon EKS Cluster Endpoint Access Control (p. 35).
• Endpoint public access – Choose whether to enable or disable public access for your cluster's
Kubernetes API server endpoint. If you disable public access, your cluster's Kubernetes API
server can receive only requests from within the cluster VPC. For more information, see
Amazon EKS Cluster Endpoint Access Control (p. 35).
• Secrets encryption – Choose whether to enable or disable envelope encryption of Kubernetes
secrets using AWS Key Management Service (AWS KMS). If you enable envelope encryption,
the Kubernetes secrets are encrypted using the customer master key (CMK) that you select.
The CMK must be symmetric, created in the same region as the cluster, and if the CMK was
created in a different account, the user must have access to the CMK. For more information,
see Allowing Users in Other Accounts to Use a CMK in the AWS Key Management Service
Developer Guide. Kubernetes secrets encryption with an AWS KMS CMK requires Kubernetes
version 1.13 or later.
• Logging – For each individual log type, choose whether the log type should be Enabled
or Disabled. By default, each log type is Disabled. For more information, see Amazon EKS
Control Plane Logging (p. 40).
• Tags – (Optional) Add any tags to your cluster. For more information, see Tagging Your
Amazon EKS Resources (p. 263).
Note
You might receive an error that one of the Availability Zones in your request doesn't
have sufficient capacity to create an Amazon EKS cluster. If this happens, the error
output contains the Availability Zones that can support a new cluster. Retry creating
your cluster with at least two subnets that are located in the supported Availability
Zones for your account. For more information, see Insufficient Capacity (p. 275).
4. On the Clusters page, choose the name of your new cluster to view the cluster information.
5. The Status field shows CREATING until the cluster provisioning process completes. When your
cluster provisioning is complete (usually between 10 and 15 minutes), note the API server
endpoint and Certificate authority values. These are used in your kubectl configuration.
6. Now that you have created your cluster, follow the procedures in Installing aws-iam-
authenticator (p. 179) and Create a kubeconfig for Amazon EKS (p. 182) to enable
communication with your new cluster.
7. (Optional) If you want to run pods on AWS Fargate in your cluster, see Getting Started with AWS
Fargate on Amazon EKS (p. 106).
8. After you enable communication, follow the procedures in Launching Amazon EKS Linux Worker
Nodes (p. 88) to add Linux worker nodes to your cluster to support your workloads.
9. (Optional) After you add Linux worker nodes to your cluster, follow the procedures in Windows
Support (p. 54) to add Windows support to your cluster and to add Windows worker nodes.
All Amazon EKS clusters must contain at least one Linux worker node, even if you only want to
run Windows workloads in your cluster.
AWS CLI
• You have created a VPC and a dedicated security group that meets the requirements for an
Amazon EKS cluster. For more information, see Cluster VPC Considerations (p. 152) and Amazon
EKS Security Group Considerations (p. 154). The Getting Started with the AWS Management
Console (p. 10) guide creates a VPC that meets the requirements, or you can also follow Creating a
VPC for Your Amazon EKS Cluster (p. 150) to create one.
• You have created an Amazon EKS service role to apply to your cluster. The Getting Started with
Amazon EKS (p. 3) guide creates a service role for you, or you can also follow Amazon EKS IAM
Roles (p. 232) to create one manually.
1. Create your cluster with the following command. Substitute your cluster name, the Amazon
Resource Name (ARN) of your Amazon EKS service role that you created in Create your Amazon
EKS Service Role (p. 10), and the subnet and security group IDs for the VPC that you created in
Create your Amazon EKS Cluster VPC (p. 12).
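A sketch of the command using the example values that appear in the output below; substitute your own
role ARN, subnet IDs, and security group ID:
aws eks create-cluster --name devel \
  --kubernetes-version 1.15 \
  --role-arn arn:aws:iam::111122223333:role/eks-service-role-AWSServiceRoleForAmazonEKS-AFNL4H8HB71F \
  --resources-vpc-config subnetIds=subnet-a9189fe2,subnet-50432629,securityGroupIds=sg-f5c54184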
Important
Kubernetes version 1.12 is now deprecated on Amazon EKS. On May 11th, 2020,
Kubernetes version 1.12 will no longer be supported on Amazon EKS. On this date,
you will no longer be able to create new 1.12 clusters, and all existing Amazon EKS
clusters running Kubernetes version 1.12 will eventually be automatically updated to
version 1.13. We recommend that you update any 1.12 clusters to version 1.13 or later
in order to avoid service interruption. For more information, see Amazon EKS Version
Deprecation (p. 47).
Kubernetes API versions available through Amazon EKS are officially supported by
AWS, until we remove the ability to create clusters using that version. This is true
even if upstream Kubernetes is no longer supporting a version available on Amazon
EKS. We backport security fixes that are applicable to the Kubernetes versions
supported on Amazon EKS. Existing clusters are always supported, and Amazon EKS
will automatically update your cluster to a supported version if you have not done so
manually by the version end of life date.
Important
If you receive a syntax error similar to the following, you might be using a preview
version of the AWS CLI for Amazon EKS. The syntax for many Amazon EKS commands
has changed since the public service launch. Update your AWS CLI version to the latest
available and delete the custom service model directory at ~/.aws/models/eks.
Note
If your IAM user doesn't have administrative privileges, you must explicitly add
permissions for that user to call the Amazon EKS API operations. For more information,
see Amazon EKS Identity-Based Policy Examples (p. 233).
Output:
{
    "cluster": {
        "name": "devel",
        "arn": "arn:aws:eks:region-code:111122223333:cluster/devel",
        "createdAt": 1527785885.159,
        "version": "1.15",
        "roleArn": "arn:aws:iam::111122223333:role/eks-service-role-AWSServiceRoleForAmazonEKS-AFNL4H8HB71F",
        "resourcesVpcConfig": {
            "subnetIds": [
                "subnet-a9189fe2",
                "subnet-50432629"
            ],
            "securityGroupIds": [
                "sg-f5c54184"
            ],
            "vpcId": "vpc-a54041dc",
            "endpointPublicAccess": true,
            "endpointPrivateAccess": false
        },
        "status": "CREATING",
        "certificateAuthority": {}
    }
}
Note
You might receive an error that one of the Availability Zones in your request doesn't
have sufficient capacity to create an Amazon EKS cluster. If this happens, the error
output contains the Availability Zones that can support a new cluster. Retry creating
your cluster with at least two subnets that are located in the supported Availability
Zones for your account. For more information, see Insufficient Capacity (p. 275).
To encrypt the Kubernetes secrets with a customer master key (CMK) from AWS Key
Management Service (AWS KMS), first create a CMK using the create-key operation.
--encryption-config '[{"resources":["secrets"],"provider":{"keyArn":"$MY_KEY_ARN"}}]'
The keyArn member can contain either the alias or ARN of your CMK. The CMK must be
symmetric, created in the same region as the cluster, and if the CMK was created in a different
account, the user must have access to the CMK. For more information, see Allowing Users in
Other Accounts to Use a CMK in the AWS Key Management Service Developer Guide. Kubernetes
secrets encryption with an AWS KMS CMK requires Kubernetes version 1.13 or later.
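For example, you can create a key and capture its ARN in a shell variable; the variable name matches the
$MY_KEY_ARN placeholder shown above:
MY_KEY_ARN=$(aws kms create-key --query KeyMetadata.Arn --output text)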
2. Cluster provisioning usually takes between 10 and 15 minutes. You can query the status of your
cluster with the following command. When your cluster status is ACTIVE, you can proceed.
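For example, replacing devel with your cluster name:
aws eks describe-cluster --name devel --query "cluster.status"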
4. Now that you have created your cluster, follow the procedures in Installing aws-iam-
authenticator (p. 179) and Create a kubeconfig for Amazon EKS (p. 182) to enable
communication with your new cluster.
5. (Optional) If you want to run pods on AWS Fargate in your cluster, see Getting Started with AWS
Fargate on Amazon EKS (p. 106).
6. After you enable communication, follow the procedures in Launching Amazon EKS Linux Worker
Nodes (p. 88) to add worker nodes to your cluster to support your workloads.
7. (Optional) After you add Linux worker nodes to your cluster, follow the procedures in Windows
Support (p. 54) to add Windows support to your cluster and to add Windows worker nodes.
All Amazon EKS clusters must contain at least one Linux worker node, even if you only want to
run Windows workloads in your cluster.
Updating Kubernetes Version
The update process consists of Amazon EKS launching new API server nodes with the updated
Kubernetes version to replace the existing ones. Amazon EKS performs standard infrastructure and
readiness health checks for network traffic on these new nodes to verify that they are working as
expected. If any of these checks fail, Amazon EKS reverts the infrastructure deployment, and your cluster
remains on the prior Kubernetes version. Running applications are not affected, and your cluster is never
left in a non-deterministic or unrecoverable state. Amazon EKS regularly backs up all managed clusters,
and mechanisms exist to recover clusters if necessary. We are constantly evaluating and improving our
Kubernetes infrastructure management processes.
In order to upgrade the cluster, Amazon EKS requires 2-3 free IP addresses from the subnets which were
provided when you created the cluster. If these subnets do not have available IP addresses, then the
upgrade can fail. Additionally, if any of the subnets or security groups that were provided during cluster
creation have been deleted, the cluster upgrade process can fail.
Note
Although Amazon EKS runs a highly available control plane, you might experience minor service
interruptions during an update. For example, if you attempt to connect to an API server just
before or just after it's terminated and replaced by a new API server running the new version
of Kubernetes, you might experience API call errors or connectivity issues. If this happens, retry
your API operations until they succeed.
Amazon EKS does not modify any of your Kubernetes add-ons when you update a cluster. After updating
your cluster, we recommend that you update your add-ons to the versions listed in the following table
for the new Kubernetes version that you're updating to. Steps to accomplish this are included in the
update procedures.
Important
Kubernetes version 1.12 is now deprecated on Amazon EKS. On May 11th, 2020, Kubernetes
version 1.12 will no longer be supported on Amazon EKS. On this date, you will no longer be
able to create new 1.12 clusters, and all existing Amazon EKS clusters running Kubernetes
version 1.12 will eventually be automatically updated to version 1.13. We recommend that you
update any 1.12 clusters to version 1.13 or later in order to avoid service interruption. For more
information, see Amazon EKS Version Deprecation (p. 47).
Kubernetes API versions available through Amazon EKS are officially supported by AWS, until we
remove the ability to create clusters using that version. This is true even if upstream Kubernetes
is no longer supporting a version available on Amazon EKS. We backport security fixes that are
applicable to the Kubernetes versions supported on Amazon EKS. Existing clusters are always
supported, and Amazon EKS will automatically update your cluster to a supported version if you
have not done so manually by the version end of life date.
If you're using additional add-ons for your cluster that aren't listed in the previous table, update them to
the latest compatible versions after updating your cluster.
1. Compare the Kubernetes version of your cluster control plane to the Kubernetes version of your
worker nodes.
• Get the Kubernetes version of your cluster control plane with the following command.
• Get the Kubernetes version of your worker nodes with the following command.
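For example, the control plane version appears in the Server Version line of kubectl version, and the worker node versions appear in the VERSION column of kubectl get nodes:
kubectl version --short
kubectl get nodes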
If your worker nodes are more than one Kubernetes minor version older than your control plane,
then you must upgrade your worker nodes to a newer Kubernetes minor version before you update
your cluster's Kubernetes version. For more information, see Kubernetes version and version skew
support policy in the Kubernetes documentation.
We recommend that you update your worker nodes to your cluster's current pre-update Kubernetes
minor version prior to your cluster update. Your worker nodes must not run a newer Kubernetes
version than your control plane. For example, if your control plane is running version 1.14 and your
workers are running version 1.12, update your worker nodes to version 1.13 or 1.14 (recommended)
before you update your cluster’s Kubernetes version to 1.15. For more information, see Worker Node
Updates (p. 97).
2. The pod security policy admission controller is enabled on Amazon EKS clusters running Kubernetes
version 1.13 or later. If you are upgrading your cluster to Kubernetes version 1.13 or later, ensure
that the proper pod security policies are in place before you update to avoid any issues. You can
check for the default policy with the following command:
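The default policy is named eks.privileged (see the Kubernetes 1.13 notes later in this guide), so the check can look like this:
kubectl get psp eks.privileged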
If you receive the following error, see To install or restore the default pod security policy (p. 260)
before proceeding.
3. Update your cluster. For instructions, select the tab with the name of the tool that you want to use
to update your cluster.
eksctl
This procedure assumes that you have installed eksctl, and that your eksctl version is at
least 0.15.0-rc.2. You can check your version with the following command:
eksctl version
Update your Amazon EKS cluster Kubernetes version with the following command, replacing
dev with your cluster name:
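A sketch of the command, assuming the update subcommand name used by eksctl releases of this vintage (confirm the exact subcommand against your installed eksctl help output):
eksctl update cluster --name dev --approve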
Important
Because Amazon EKS runs a highly available control plane, you must update only
one minor version at a time. See Kubernetes Version and Version Skew Support
Policy for the rationale behind this requirement. Therefore, if your current version is
1.13 and you want to upgrade to 1.15, you must first upgrade your cluster to 1.14
and then upgrade it from 1.14 to 1.15. If you try to update directly from 1.13 to
1.15, the update version command throws an error.
4. For Cluster name, type the name of your cluster and choose Confirm.
Note
The cluster update should finish in a few minutes.
AWS CLI
1. Update your cluster with the following AWS CLI command. Substitute your cluster name and
desired Kubernetes minor version.
Important
Kubernetes version 1.12 is now deprecated on Amazon EKS. On May 11th, 2020,
Kubernetes version 1.12 will no longer be supported on Amazon EKS. On this date,
you will no longer be able to create new 1.12 clusters, and all existing Amazon EKS
clusters running Kubernetes version 1.12 will eventually be automatically updated
to version 1.13. We recommend that you update any 1.12 clusters to version 1.13 or
later in order to avoid service interruption. For more information, see Amazon EKS
Version Deprecation (p. 47).
Kubernetes API versions available through Amazon EKS are officially supported by
AWS, until we remove the ability to create clusters using that version. This is true
even if upstream Kubernetes is no longer supporting a version available on Amazon
EKS. We backport security fixes that are applicable to the Kubernetes versions
supported on Amazon EKS. Existing clusters are always supported, and Amazon EKS
will automatically update your cluster to a supported version if you have not done so
manually by the version end of life date.
Important
Because Amazon EKS runs a highly available control plane, you must update only
one minor version at a time. See Kubernetes Version and Version Skew Support
Policy for the rationale behind this requirement. Therefore, if your current version is
1.13 and you want to upgrade to 1.15, you must first upgrade your cluster to 1.14
and then upgrade it from 1.14 to 1.15. If you try to update directly from 1.13 to
1.15, the update version command throws an error.
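For example, to move the cluster named dev to Kubernetes version 1.15:
aws eks update-cluster-version --name dev --kubernetes-version 1.15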
Output:
{
"update": {
"id": "b5f0ba18-9a87-4450-b5a0-825e6e84496f",
"status": "InProgress",
"type": "VersionUpdate",
"params": [
{
"type": "Version",
"value": "1.15"
},
{
"type": "PlatformVersion",
"value": "eks.1"
}
],
"createdAt": 1577485455.5,
"errors": []
}
}
2. Monitor the status of your cluster update with the following command, using the cluster
name and update ID that the previous command returned. Your update is complete when the
status appears as Successful.
Note
The cluster update should finish in a few minutes.
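For example, using the update ID from the previous output:
aws eks describe-update --name dev --update-id b5f0ba18-9a87-4450-b5a0-825e6e84496f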
Output:
{
"update": {
"id": "b5f0ba18-9a87-4450-b5a0-825e6e84496f",
"status": "Successful",
"type": "VersionUpdate",
"params": [
{
"type": "Version",
"value": "1.15"
},
{
"type": "PlatformVersion",
"value": "eks.1"
}
],
"createdAt": 1577485455.5,
"errors": []
}
}
4. Patch the kube-proxy daemonset to use the image that corresponds to your cluster's Region and
current Kubernetes version (in this example, 1.15.10).
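A sketch of the patch, assuming the regional Amazon EKS add-on image repository (the 602401143452 registry account shown here applies to most, but not all, Regions); replace region-code and the image tag with values for your cluster:
kubectl set image daemonset.apps/kube-proxy \
    -n kube-system \
    kube-proxy=602401143452.dkr.ecr.region-code.amazonaws.com/eks/kube-proxy:v1.15.10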
5. Check your cluster's DNS provider. Clusters that were created with Kubernetes version 1.10 shipped
with kube-dns as the default DNS and service discovery provider. If you have updated a 1.10 cluster
to a newer version and you want to use CoreDNS for DNS and service discovery, then you must
install CoreDNS and remove kube-dns.
To check if your cluster is already running CoreDNS, use the following command.
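Assuming the CoreDNS pods carry the conventional k8s-app=kube-dns label, the check can be:
kubectl get pods -n kube-system -l k8s-app=kube-dns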
If the output shows coredns in the pod names, you're already running CoreDNS in your cluster. If
not, see Installing or Upgrading CoreDNS (p. 167) to install CoreDNS on your cluster, update it to
the recommended version, return here, and skip steps 6-8.
6. Check the current version of your cluster's coredns deployment.
kubectl describe deployment coredns --namespace kube-system | grep Image | cut -d "/" -f 3
Output:
coredns:v1.1.3
The recommended coredns versions for the corresponding Kubernetes versions are as follows:
7. If your current coredns version is 1.5.0 or later, but earlier than the recommended version, then
skip this step. If your current version is earlier than 1.5.0, then you need to modify the config map
for coredns to use the forward plug-in, rather than the proxy plug-in.
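a. Open the coredns config map for editing. A typical way to do this, assuming the config map lives in the kube-system namespace, is:
kubectl edit configmap coredns -n kube-system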
b. Replace proxy in the following line with forward. Save the file and exit the editor.
proxy . /etc/resolv.conf
8. Update coredns to the recommended version, replacing region-code with your Region and
1.6.6 with your cluster's recommended coredns version:
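A sketch of the update, making the same registry-account assumption as the kube-proxy patch above:
kubectl set image deployment.apps/coredns \
    -n kube-system \
    coredns=602401143452.dkr.ecr.region-code.amazonaws.com/eks/coredns:v1.6.6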
9. Check the version of your cluster's Amazon VPC CNI Plugin for Kubernetes. Use the following
command to print your cluster's CNI version.
kubectl describe daemonset aws-node --namespace kube-system | grep Image | cut -d "/" -f 2
Output:
amazon-k8s-cni:1.5.3
If your CNI version is earlier than 1.5.5, then use the following command to update your CNI version
to the latest recommended version:
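A sketch of the update; the manifest URL below is a placeholder, so use the aws-k8s-cni.yaml manifest for the recommended version from the aws/amazon-vpc-cni-k8s GitHub repository:
kubectl apply -f <URL of the recommended aws-k8s-cni.yaml manifest>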
10. (Clusters with GPU workers only) If your cluster has worker node groups with GPU support (for
example, p3.2xlarge), you must update the NVIDIA device plugin for Kubernetes DaemonSet on
your cluster with the following command.
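A sketch of the command; the device plugin manifest ships in the NVIDIA/k8s-device-plugin GitHub repository, and the release tag below is a placeholder for the version appropriate to your cluster:
kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/<release tag>/nvidia-device-plugin.yml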
11. After your cluster update is complete, update your worker nodes to the same Kubernetes version
of your updated cluster. For more information, see Worker Node Updates (p. 97). Any new pods
launched on Fargate will have a kubelet version that matches your cluster version. Existing Fargate
pods will not be changed.
Cluster Endpoint Access
When you create a new cluster, Amazon EKS creates an endpoint for the managed Kubernetes API server
that you use to communicate with your cluster (using Kubernetes management tools such as kubectl).
By default, this API server endpoint is public to the internet, and access to the API server is secured using
a combination of AWS Identity and Access Management (IAM) and native Kubernetes Role Based Access
Control (RBAC).
You can enable private access to the Kubernetes API server so that all communication between your
worker nodes and the API server stays within your VPC. You can limit the IP addresses that can access
your API server from the internet, or completely disable internet access to the API server.
Note
Because this endpoint is for the Kubernetes API server and not a traditional AWS PrivateLink
endpoint for communicating with an AWS API, it doesn't appear as an endpoint in the Amazon
VPC console.
When you enable endpoint private access for your cluster, Amazon EKS creates a Route 53 private hosted
zone on your behalf and associates it with your cluster's VPC. This private hosted zone is managed
by Amazon EKS, and it doesn't appear in your account's Route 53 resources. In order for the private
hosted zone to properly route traffic to your API server, your VPC must have enableDnsHostnames
and enableDnsSupport set to true, and the DHCP options set for your VPC must include
AmazonProvidedDNS in its domain name servers list. For more information, see Updating DNS Support
for Your VPC in the Amazon VPC User Guide.
Note
In addition to standard Amazon EKS permissions, your IAM user or role must have
route53:AssociateVPCWithHostedZone permissions to enable the cluster's endpoint
private access.
You can define your API server endpoint access requirements when you create a new cluster, and you can
update the API server endpoint access for a cluster at any time.
Modifying Cluster Endpoint Access
You can modify your cluster API server endpoint access using the AWS Management Console or AWS CLI.
For instructions, select the tab for the tool that you want to use.
AWS CLI
Complete the following steps using the AWS CLI version 1.18.17 or later. You can check your current
version with aws --version. To install or upgrade the AWS CLI, see Installing the AWS CLI.
1. Update your cluster API server endpoint access with the following AWS CLI command.
Substitute your cluster name and desired endpoint access values. If you set
endpointPublicAccess=true, then you can (optionally) enter a single CIDR block, or a
comma-separated list of CIDR blocks for publicAccessCidrs. The blocks cannot include
reserved addresses. If you specify CIDR blocks, then the public API server endpoint will only
receive requests from the listed blocks. There is a maximum number of CIDR blocks that you can
specify. For more information, see Amazon EKS Service Quotas (p. 282). If you restrict access
to your public endpoint using CIDR blocks, it is recommended that you also enable private
endpoint access so that worker nodes and Fargate pods (if you use them) can communicate with
the cluster. Without the private endpoint enabled, your public access endpoint CIDR sources
must include the egress sources from your VPC. For example, if you have a worker node in a
private subnet that communicates to the internet through a NAT Gateway, you will need to add
the outbound IP address of the NAT gateway as part of a whitelisted CIDR block on your public
endpoint. If you specify no CIDR blocks, then the public API server endpoint receives requests
from all (0.0.0.0/0) IP addresses.
Note
The following command enables private access and public access from a single IP
address for the API server endpoint. Replace 203.0.113.5/32 with a single CIDR
block, or a comma-separated list of CIDR blocks that you want to restrict network
access to.
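A command of the following form enables both access types; the cluster name dev is an example:
aws eks update-cluster-config \
    --name dev \
    --resources-vpc-config endpointPublicAccess=true,publicAccessCidrs="203.0.113.5/32",endpointPrivateAccess=true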
Output:
{
"update": {
"id": "e6f0905f-a5d4-4a2a-8c49-EXAMPLE00000",
"status": "InProgress",
"type": "EndpointAccessUpdate",
"params": [
{
"type": "EndpointPublicAccess",
"value": "true"
},
{
"type": "EndpointPrivateAccess",
"value": "true"
},
{
"type": "publicAccessCidrs",
"value": "[\203.0.113.5/32\"]"
}
],
"createdAt": 1576874258.137,
"errors": []
}
}
2. Monitor the status of your endpoint access update with the following command, using the
cluster name and update ID that was returned by the previous command. Your update is
complete when the status is shown as Successful.
aws eks describe-update \
    --name dev \
    --update-id e6f0905f-a5d4-4a2a-8c49-EXAMPLE00000
Output:
{
"update": {
"id": "e6f0905f-a5d4-4a2a-8c49-EXAMPLE00000",
"status": "Successful",
"type": "EndpointAccessUpdate",
"params": [
{
"type": "EndpointPublicAccess",
"value": "true"
},
{
"type": "EndpointPrivateAccess",
"value": "true"
},
{
"type": "publicAccessCidrs",
"value": "[\203.0.113.5/32\"]"
}
],
"createdAt": 1576874258.137,
"errors": []
}
}
Accessing a Private Only API Server
• Connected network – Connect your network to the VPC with an AWS Transit Gateway or other
connectivity option and then use a computer in the connected network. You must ensure that your
Amazon EKS control plane security group contains rules to allow ingress traffic on port 443 from your
connected network.
• Amazon EC2 bastion host – You can launch an Amazon EC2 instance into a public subnet in your
cluster's VPC and then connect to that instance using SSH to run kubectl commands. For more
information, see Linux Bastion Hosts on AWS. You must ensure that your Amazon EKS control plane
security group contains rules to allow ingress traffic on port 443 from your bastion host. For more
information, see Amazon EKS Security Group Considerations (p. 154).
When you configure kubectl for your bastion host, be sure to use AWS credentials that are already
mapped to your cluster's RBAC configuration, or add the IAM user or role that your bastion will
use to the RBAC configuration before you remove endpoint public access. For more information,
see Managing Users or IAM Roles for your Cluster (p. 185) and Unauthorized or Access Denied
(kubectl) (p. 275).
• AWS Cloud9 IDE – AWS Cloud9 is a cloud-based integrated development environment (IDE) that lets
you write, run, and debug your code with just a browser. You can create an AWS Cloud9 IDE in your
cluster's VPC and use the IDE to communicate with your cluster. For more information, see Creating
an Environment in AWS Cloud9. You must ensure that your Amazon EKS control plane security group
contains rules to allow ingress traffic on port 443 from your IDE security group. For more information,
see Amazon EKS Security Group Considerations (p. 154).
When you configure kubectl for your AWS Cloud9 IDE, be sure to use AWS credentials that are
already mapped to your cluster's RBAC configuration, or add the IAM user or role that your IDE will
use to the RBAC configuration before you remove endpoint public access. For more information,
see Managing Users or IAM Roles for your Cluster (p. 185) and Unauthorized or Access Denied
(kubectl) (p. 275).
Control Plane Logging
You can start using Amazon EKS control plane logging by choosing which log types you want to enable
for each new or existing Amazon EKS cluster. You can enable or disable each log type on a per-cluster
basis using the AWS Management Console, AWS CLI (version 1.16.139 or higher), or through the Amazon
EKS API. When enabled, logs are automatically sent from the Amazon EKS cluster to CloudWatch Logs in
the same account.
When you use Amazon EKS control plane logging, you're charged standard Amazon EKS pricing for each
cluster that you run. You are charged the standard CloudWatch Logs data ingestion and storage costs for
any logs sent to CloudWatch Logs from your clusters. You are also charged for any AWS resources, such
as Amazon EC2 instances or Amazon EBS volumes, that you provision as part of your cluster.
The following cluster control plane log types are available. Each log type corresponds to a component
of the Kubernetes control plane. To learn more about these components, see Kubernetes Components in
the Kubernetes documentation.
• Kubernetes API server component logs (api) – Your cluster's API server is the control plane
component that exposes the Kubernetes API. For more information, see kube-apiserver in the
Kubernetes documentation.
• Audit (audit) – Kubernetes audit logs provide a record of the individual users, administrators,
or system components that have affected your cluster. For more information, see Auditing in the
Kubernetes documentation.
• Authenticator (authenticator) – Authenticator logs are unique to Amazon EKS. These logs
represent the control plane component that Amazon EKS uses for Kubernetes Role Based Access
Control (RBAC) authentication using IAM credentials. For more information, see Managing Cluster
Authentication (p. 174).
• Controller manager (controllerManager) – The controller manager manages the core control
loops that are shipped with Kubernetes. For more information, see kube-controller-manager in the
Kubernetes documentation.
• Scheduler (scheduler) – The scheduler component manages when and where to run pods in your
cluster. For more information, see kube-scheduler in the Kubernetes documentation.
When you enable a log type, the logs are sent with a log verbosity level of 2.
Enabling and Disabling Control Plane Logs
1. Check your AWS CLI version with the following command.
aws --version
If your AWS CLI version is below 1.16.139, you must first update to the latest version. To install or
upgrade the AWS CLI, see Installing the AWS Command Line Interface in the AWS Command Line
Interface User Guide.
2. Update your cluster's control plane log export configuration with the following AWS CLI command.
Substitute your cluster name and the log types that you want to enable or disable.
Note
The following command sends all available log types to CloudWatch Logs.
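A command of the following form does this (the JSON matches the ClusterLogging value shown in the sample output below); dev is an example cluster name:
aws eks update-cluster-config --name dev \
    --logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}'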
Output:
{
"update": {
"id": "883405c8-65c6-4758-8cee-2a7c1340a6d9",
"status": "InProgress",
"type": "LoggingUpdate",
"params": [
{
"type": "ClusterLogging",
"value": "{\"clusterLogging\":[{\"types\":[\"api\",\"audit\",
\"authenticator\",\"controllerManager\",\"scheduler\"],\"enabled\":true}]}"
}
],
"createdAt": 1553271814.684,
"errors": []
}
}
3. Monitor the status of your log configuration update with the following command, using the cluster
name and the update ID that were returned by the previous command. Your update is complete
when the status appears as Successful.
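For example, using the update ID from the previous output:
aws eks describe-update --name dev --update-id 883405c8-65c6-4758-8cee-2a7c1340a6d9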
Output:
{
"update": {
"id": "883405c8-65c6-4758-8cee-2a7c1340a6d9",
"status": "Successful",
"type": "LoggingUpdate",
"params": [
{
"type": "ClusterLogging",
"value": "{\"clusterLogging\":[{\"types\":[\"api\",\"audit\",
\"authenticator\",\"controllerManager\",\"scheduler\"],\"enabled\":true}]}"
}
],
"createdAt": 1553271814.684,
"errors": []
}
}
To learn more about viewing, analyzing, and managing logs in CloudWatch, see the Amazon CloudWatch
Logs User Guide.
Deleting a Cluster
When you're done using an Amazon EKS cluster, you should delete the resources associated with it so
that you don't incur any unnecessary costs.
Important
If you have active services in your cluster that are associated with a load balancer, you must
delete those services before deleting the cluster so that the load balancers are deleted properly.
Otherwise, you can have orphaned resources in your VPC that prevent you from being able to
delete the VPC.
Choose the tab below that corresponds to your preferred cluster deletion method.
eksctl
This procedure assumes that you have installed eksctl, and that your eksctl version is at least
0.15.0-rc.2. You can check your version with the following command:
eksctl version
2. Delete any services that have an associated EXTERNAL-IP value. These services are fronted by
an Elastic Load Balancing load balancer, and you must delete them in Kubernetes to allow the
load balancer and associated resources to be properly released.
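For example, list the services and then delete each one that shows an EXTERNAL-IP value (service-name is a placeholder):
kubectl get svc --all-namespaces
kubectl delete svc service-name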
3. Delete the cluster and its associated worker nodes with the following command, replacing prod
with your cluster name.
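For example:
eksctl delete cluster --name prod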
Output:
2. Delete any services that have an associated EXTERNAL-IP value. These services are fronted by
an Elastic Load Balancing load balancer, and you must delete them in Kubernetes to allow the
load balancer and associated resources to be properly released.
a. Select the VPC stack to delete and choose Actions and then Delete Stack.
b. On the Delete Stack confirmation screen, choose Yes, Delete.
AWS CLI
2. Delete any services that have an associated EXTERNAL-IP value. These services are fronted by
an Elastic Load Balancing load balancer, and you must delete them in Kubernetes to allow the
load balancer and associated resources to be properly released.
a. List your available AWS CloudFormation stacks with the following command. Find the
worker node template name in the resulting output.
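One way to list the stack names is:
aws cloudformation list-stacks --query "StackSummaries[].StackName"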
b. Delete the worker node stack with the following command, replacing worker-node-stack with your worker node stack name.
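For example:
aws cloudformation delete-stack --stack-name worker-node-stack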
4. Delete the cluster with the following command, replacing my-cluster with your cluster name.
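For example:
aws eks delete-cluster --name my-cluster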
a. List your available AWS CloudFormation stacks with the following command. Find the VPC
template name in the resulting output.
b. Delete the VPC stack with the following command, replacing my-vpc-stack with your VPC
stack name.
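For example:
aws cloudformation delete-stack --stack-name my-vpc-stack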
Kubernetes Versions
The following Kubernetes versions are currently available for new clusters on Amazon EKS:
• 1.15.10
• 1.14.9
• 1.13.12
• 1.12.10
Important
Kubernetes version 1.12 is now deprecated on Amazon EKS. On May 11th, 2020, Kubernetes
version 1.12 will no longer be supported on Amazon EKS. On this date, you will no longer be
able to create new 1.12 clusters, and all existing Amazon EKS clusters running Kubernetes
version 1.12 will eventually be automatically updated to version 1.13. We recommend that you
update any 1.12 clusters to version 1.13 or later in order to avoid service interruption. For more
information, see Amazon EKS Version Deprecation (p. 47).
Kubernetes API versions available through Amazon EKS are officially supported by AWS, until we
remove the ability to create clusters using that version. This is true even if upstream Kubernetes
is no longer supporting a version available on Amazon EKS. We backport security fixes that are
applicable to the Kubernetes versions supported on Amazon EKS. Existing clusters are always
supported, and Amazon EKS will automatically update your cluster to a supported version if you
have not done so manually by the version end of life date.
Unless your application requires a specific version of Kubernetes, we recommend that you choose the
latest available Kubernetes version supported by Amazon EKS for your clusters. As new Kubernetes
versions become available in Amazon EKS, we recommend that you proactively update your clusters to
use the latest available version. For more information, see Updating an Amazon EKS Cluster Kubernetes
Version (p. 29).
Kubernetes 1.15
Kubernetes 1.15 is now available in Amazon EKS. For more information about Kubernetes 1.15, see the
official release announcement.
Important
Starting with 1.15, Amazon EKS no longer tags the VPC containing your cluster.
• For more information about VPC tagging, see ??? (p. 153).
Important
Amazon EKS has set the re-invocation policy for the Pod Identity Webhook to IfNeeded.
This allows the webhook to be re-invoked if objects are changed by other mutating admission
webhooks like the App Mesh sidecar injector. For more information about the App Mesh sidecar
injector, see Install the Sidecar Injector.
The following features are now supported in Kubernetes 1.15 Amazon EKS clusters:
• EKS now supports configuring transport layer security (TLS) termination, access logs, and source
ranges for network load balancers. For more information, see Network Load Balancer Support on AWS
on GitHub.
• Improved flexibility of Custom Resource Definitions (CRDs), including the ability to convert
between versions on the fly. For more information, see Extend the Kubernetes API with
CustomResourceDefinitions on GitHub.
• NodeLocal DNSCache is in beta for Kubernetes version 1.15 clusters. This feature can help improve
cluster DNS performance by running a DNS caching agent on cluster nodes as a DaemonSet. For more
information, see Using NodeLocal DNSCache in Kubernetes clusters on GitHub and Amazon EKS DNS
at scale and spikeiness.
Kubernetes 1.14
Kubernetes 1.14 is now available in Amazon EKS. For more information about Kubernetes 1.14, see the
official release announcement.
Important
The --allow-privileged flag has been removed from kubelet on Amazon EKS 1.14 worker
nodes. If you have modified or restricted the Amazon EKS Default Pod Security Policy (p. 258)
on your cluster, you should verify that your applications have the permissions they need on 1.14
worker nodes.
The following features are now supported in Kubernetes 1.14 Amazon EKS clusters:
• Container Storage Interface Topology is in beta for Kubernetes version 1.14 clusters. For more
information, see CSI Topology Feature in the Kubernetes CSI Developer Documentation. The following
CSI drivers provide a CSI interface for container orchestrators like Kubernetes to manage the lifecycle
of Amazon EBS volumes, Amazon EFS file systems, and Amazon FSx for Lustre file systems:
• Amazon Elastic Block Store (EBS) CSI driver
• Amazon EFS CSI Driver
• Amazon FSx for Lustre CSI Driver
• Process ID (PID) limiting is in beta for Kubernetes version 1.14 clusters. This feature allows you to set
quotas for how many processes a pod can create, which can prevent resource starvation for other
applications on a cluster. For more information, see Process ID Limiting for Stability Improvements in
Kubernetes 1.14.
• Persistent Local Volumes are now GA and make locally attached storage available as a persistent
volume source. For more information, see Kubernetes 1.14: Local Persistent Volumes GA.
• Pod Priority and Preemption is now GA and allows pods to be assigned a scheduling priority level. For
more information, see Pod Priority and Preemption in the Kubernetes documentation.
• Windows worker node support is GA with Kubernetes 1.14.
Kubernetes 1.13
The following features are now supported in Kubernetes 1.13 Amazon EKS clusters:
• The PodSecurityPolicy admission controller is now enabled. This admission controller allows
fine-grained control over pod creation and updates. For more information, see Pod Security
Policy (p. 258). If you do not have any pod security policies defined in your cluster when you upgrade
to 1.13, then Amazon EKS creates a default policy for you.
Important
If you have any pod security policies defined in your cluster, the default policy is not created
when you upgrade to Kubernetes 1.13. If your cluster does not have the default Amazon EKS
pod security policy, your pods may not be able to launch if your existing pod security policies
are too restrictive. You can check for any existing pod security policies with the following
command:
If your cluster has any pod security policies defined, you should also make sure that you have
the default Amazon EKS pod security policy (eks.privileged) defined. If not, you can apply
it by following the steps in To install or restore the default pod security policy (p. 260).
• Amazon ECR interface VPC endpoints (AWS PrivateLink) are supported. When you enable these
endpoints in your VPC, all network traffic between your VPC and Amazon ECR is restricted to the
Amazon network. For more information, see Amazon ECR Interface VPC Endpoints (AWS PrivateLink)
in the Amazon Elastic Container Registry User Guide.
• The DryRun feature is in beta in Kubernetes 1.13 and is enabled by default for Amazon EKS clusters.
For more information, see Dry run in the Kubernetes documentation.
• The TaintBasedEvictions feature is in beta in Kubernetes 1.13 and is enabled by default
for Amazon EKS clusters. For more information, see Taint based Evictions in the Kubernetes
documentation.
• Raw block volume support is in beta in Kubernetes 1.13 and is enabled by default for Amazon EKS
clusters. This is accessible via the volumeDevices container field in pod specs, and the volumeMode
field in persistent volume and persistent volume claim definitions. For more information, see Raw
Block Volume Support in the Kubernetes documentation.
• Node lease renewal is treated as the heartbeat signal from the node, in addition to its NodeStatus
update. This reduces load on the control plane for large clusters. For more information, see https://github.com/kubernetes/kubernetes/pull/69241.
Amazon EKS Version Deprecation
We will announce the deprecation of a given Kubernetes minor version at least 60 days before the end of
support date. Because of the Amazon EKS qualification and release process for new Kubernetes versions,
the deprecation of a Kubernetes version on Amazon EKS will be on or after the date the Kubernetes
project stops supporting the version upstream.
On the end of support date, Amazon EKS clusters running the deprecated version will begin to be
automatically updated to the next Amazon EKS-supported version of Kubernetes. This means that if the
deprecated version is 1.12, clusters will eventually be automatically updated to version 1.13. If a cluster
is automatically updated by Amazon EKS, you must update the version of your worker nodes after the
update is complete. For more information, see Worker Node Updates (p. 97).
Kubernetes supports compatibility between masters and workers for at least two minor versions, so 1.12
workers will continue to operate when orchestrated by a 1.13 control plane. For more information, see
Kubernetes Version and Version Skew Support Policy in the Kubernetes documentation.
Platform Versions
Amazon EKS platform versions represent the capabilities of the cluster control plane, such as which
Kubernetes API server flags are enabled, as well as the current Kubernetes patch version. Each
Kubernetes minor version has one or more associated Amazon EKS platform versions. The platform
versions for different Kubernetes minor versions are independent.
When a new Kubernetes minor version is available in Amazon EKS, such as 1.15, the initial Amazon EKS
platform version for that Kubernetes minor version starts at eks.1. However, Amazon EKS releases new
platform versions periodically to enable new Kubernetes control plane settings and to provide security
fixes.
When new Amazon EKS platform versions become available for a minor version:
New Amazon EKS platform versions don't introduce breaking changes or cause service interruptions.
Note
Automatic upgrades of existing Amazon EKS platform versions are rolled out incrementally.
The roll-out process might take some time. If you need the latest Amazon EKS platform version
features immediately, you should create a new Amazon EKS cluster.
Clusters are always created with the latest available Amazon EKS platform version (eks.n) for the
specified Kubernetes version. If you update your cluster to a new Kubernetes minor version, your cluster
receives the current Amazon EKS platform version for the Kubernetes minor version that you updated to.
The current and recent Amazon EKS platform versions are described in the following tables.
The tables cover Kubernetes versions 1.14, 1.13, and 1.12.
Windows Support
This topic describes how to add Windows support to Amazon EKS clusters.
Considerations
Before deploying Windows worker nodes, be aware of the following considerations.
• Windows workloads are supported with Amazon EKS clusters running Kubernetes version 1.14 or later.
• Amazon EC2 C3, C4, D2, I2, M4 (excluding m4.16xlarge), and R3 instance types are not
supported for Windows workloads.
• Host networking mode is not supported for Windows workloads.
• Amazon EKS clusters must contain one or more Linux worker nodes to run core system pods that only
run on Linux, such as coredns and the VPC resource controller.
• The kubelet and kube-proxy event logs are redirected to the EKS Windows Event Log and are set to
a 200 MB limit.
• Windows worker nodes support one elastic network interface per node. The number of pods that you
can run per Windows worker node is equal to the number of IP addresses available per elastic network
interface for the node's instance type, minus one. For more information, see IP Addresses Per Network
Interface Per Instance Type in the Amazon EC2 User Guide for Linux Instances.
• Calico network policy enforcement has not been tested with Amazon EKS Windows nodes.
• Group Managed Service Accounts (GMSA) for Windows pods and containers is a Kubernetes 1.14
alpha feature that is not supported by Amazon EKS. You can follow the instructions in the Kubernetes
documentation to enable and test this alpha feature on your clusters.
Enabling Windows Support
eksctl
This procedure only works for clusters that were created with eksctl and assumes that your
eksctl version is 0.15.0-rc.2 or later. You can check your version with the following command.
eksctl version
For more information about installing or upgrading eksctl, see Installing or Upgrading
eksctl (p. 189).
1. Enable Windows support for your Amazon EKS cluster with the following eksctl command.
This command deploys the VPC resource controller and VPC admission controller webhook that
are required on Amazon EKS clusters to run Windows workloads.
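A sketch of the command, assuming the utils subcommand name used by eksctl releases of this era (replace windows-cluster with your cluster name and confirm the subcommand against eksctl --help):
eksctl utils install-vpc-controllers --name windows-cluster --approve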
2. After you have enabled Windows support, you can launch a Windows node group into your
cluster. For more information, see Launching Amazon EKS Windows Worker Nodes (p. 93).
After you add Windows support to your cluster, you must specify node selectors on your applications
so that the pods land on a node with the appropriate operating system. For Linux pods, use the
following node selector text in your manifests.
nodeSelector:
  beta.kubernetes.io/os: linux
  beta.kubernetes.io/arch: amd64
For Windows pods, use the following node selector text in your manifests.
nodeSelector:
  beta.kubernetes.io/os: windows
  beta.kubernetes.io/arch: amd64
Windows
In the following steps, replace the region-code with the region that your cluster resides in.
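One way to check for the role binding, using the binding name shown in the example output that follows:
kubectl get clusterrolebinding eks:kube-proxy-windows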
If output similar to the following example output is returned, then the cluster has the necessary
role binding.
NAME AGE
eks:kube-proxy-windows 10d
If the output includes Error from server (NotFound), then the cluster does not have the
necessary cluster role binding. Add the binding by creating a file named eks-kube-proxy-windows-crb.yaml with the following content.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: eks:kube-proxy-windows
  labels:
    k8s-app: kube-proxy
    eks.amazonaws.com/component: kube-proxy
subjects:
  - kind: Group
    name: "eks:kube-proxy-windows"
roleRef:
  kind: ClusterRole
  name: system:node-proxier
  apiGroup: rbac.authorization.k8s.io
4. After you have enabled Windows support, you can launch a Windows node group into your
cluster. For more information, see Launching Amazon EKS Windows Worker Nodes (p. 93).
After you add Windows support to your cluster, you must specify node selectors on your applications
so that the pods land on a node with the appropriate operating system. For Linux pods, use the
following node selector text in your manifests.
nodeSelector:
  beta.kubernetes.io/os: linux
  beta.kubernetes.io/arch: amd64
For Windows pods, use the following node selector text in your manifests.
nodeSelector:
  beta.kubernetes.io/os: windows
  beta.kubernetes.io/arch: amd64
To enable Windows support for your cluster with a macOS or Linux client
This procedure requires that the openssl library and jq JSON processor are installed on your client
system.
In the following steps, replace region-code with the region that your cluster resides in.
2. Create the VPC admission controller webhook manifest for your cluster.
./webhook-create-signed-cert.sh
If output similar to the following example output is returned, then the cluster has the necessary
role binding.
NAME AGE
eks:kube-proxy-windows 10d
If the output includes Error from server (NotFound), then the cluster does not have the
necessary cluster role binding. Add the binding by creating a file named eks-kube-proxy-windows-crb.yaml with the following content.
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: eks:kube-proxy-windows
  labels:
    k8s-app: kube-proxy
    eks.amazonaws.com/component: kube-proxy
subjects:
  - kind: Group
    name: "eks:kube-proxy-windows"
roleRef:
  kind: ClusterRole
  name: system:node-proxier
  apiGroup: rbac.authorization.k8s.io
5. After you have enabled Windows support, you can launch a Windows node group into your
cluster. For more information, see Launching Amazon EKS Windows Worker Nodes (p. 93).
After you add Windows support to your cluster, you must specify node selectors on your applications
so that the pods land on a node with the appropriate operating system. For Linux pods, use the
following node selector text in your manifests.
nodeSelector:
  beta.kubernetes.io/os: linux
  beta.kubernetes.io/arch: amd64
For Windows pods, use the following node selector text in your manifests.
nodeSelector:
  beta.kubernetes.io/os: windows
  beta.kubernetes.io/arch: amd64
Deploy a Windows Sample Application
apiVersion: apps/v1
kind: Deployment
metadata:
  name: windows-server-iis
spec:
  selector:
    matchLabels:
      app: windows-server-iis
      tier: backend
      track: stable
  replicas: 1
  template:
    metadata:
      labels:
        app: windows-server-iis
        tier: backend
        track: stable
    spec:
      containers:
      - name: windows-server-iis
        image: mcr.microsoft.com/windows/servercore:1809
        ports:
        - name: http
          containerPort: 80
        imagePullPolicy: IfNotPresent
        command:
        - powershell.exe
        - -command
        - "Add-WindowsFeature Web-Server; Invoke-WebRequest -UseBasicParsing -Uri 'https://dotnetbinaries.blob.core.windows.net/servicemonitor/2.0.1.6/ServiceMonitor.exe' -OutFile 'C:\\ServiceMonitor.exe'; echo '<html><body><br/><br/><marquee><H1>Hello EKS!!!<H1><marquee></body><html>' > C:\\inetpub\\wwwroot\\default.html; C:\\ServiceMonitor.exe 'w3svc'; "
      nodeSelector:
        beta.kubernetes.io/os: windows
---
apiVersion: v1
kind: Service
metadata:
  name: windows-server-iis-service
  namespace: default
spec:
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
  selector:
    app: windows-server-iis
    tier: backend
    track: stable
  sessionAffinity: None
  type: LoadBalancer
5. After your external IP address is available, point a web browser to that address to view the IIS home
page.
Note
It might take several minutes for DNS to propagate and for your sample application to load
in your web browser.
Arm Support
This topic describes how to create an Amazon EKS cluster and add worker nodes running on Amazon EC2
A1 instances to Amazon EKS clusters. Amazon EC2 A1 instances deliver significant cost savings for scale-
out and Arm-based applications such as web servers, containerized microservices, caching fleets, and
distributed data stores.
Note
These instructions and the assets that they reference are offered as a beta feature that is
administered by AWS. Use of these instructions and assets is governed as a beta under the AWS
Service Terms. While in beta, Amazon EKS does not support using Amazon EC2 A1 instances for
production Kubernetes workloads. Submit comments or questions in a GitHub issue.
Considerations
• Worker nodes can be any A1 instance type, but all worker nodes must be an A1 instance type.
• Worker nodes must be deployed with Kubernetes version 1.13 or 1.14.
Note
Kubernetes version 1.15 is not supported.
• To use A1 instance worker nodes, you must set up a new Amazon EKS cluster. You cannot add worker
nodes to a cluster that has existing worker nodes.
Prerequisites
• Have eksctl installed on your computer. If you don't have it installed, see Install eksctl (p. 4) for
installation instructions.
• Have kubectl and the AWS IAM authenticator installed on your computer. If you don't have them
installed, see Installing kubectl (p. 174) for installation instructions.
Create a cluster
1. Run the following command to create an Amazon EKS cluster with no worker nodes. If you want to
create a cluster running Kubernetes version 1.13, then replace 1.14 with 1.13 in your command.
You can replace region-code with any Region that Amazon EKS is available in.
Launching an Amazon EKS cluster using eksctl creates an AWS CloudFormation stack. The launch
process for this stack typically takes 10 to 15 minutes. You can monitor the progress in the Amazon
EKS console.
2. When the cluster creation completes, open the AWS CloudFormation console. You will see a stack
named eksctl-a1-preview-cluster. Select this stack. Select the Resources tab. Record the
values of the IDs for the ControlPlaneSecurityGroup and VPC resources.
3. Confirm that the cluster is running with the kubectl get svc command. The command returns
output similar to the following example output.
1. Update the CoreDNS image ID using the command that corresponds to the version of the cluster
that you installed in a previous step.
Kubernetes 1.14
Kubernetes 1.13
2. Update the kube-proxy image ID using the command that corresponds to the version of the cluster
that you installed in a previous step.
Kubernetes 1.14
Kubernetes 1.13
Launch Worker Nodes
https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-11-15/amazon-eks-arm-nodegroup.yaml
4. On the Specify stack details page, fill out the following parameters accordingly:
• Stack name – Choose a stack name for your AWS CloudFormation stack. For example, you can
name it a1-preview-worker-nodes.
• KubernetesVersion – Select the version of Kubernetes that you chose when launching your
Amazon EKS cluster.
• ClusterName – Enter the name that you used when you created your Amazon EKS cluster.
Important
This name must exactly match the name you used in Step 1: Create Your Amazon EKS
Cluster (p. 15); otherwise, your worker nodes cannot join the cluster.
• ClusterControlPlaneSecurityGroup – Choose the ControlPlaneSecurityGroup ID value
from the AWS CloudFormation output that you generated with the section called “Create a
cluster” (p. 61).
• NodeGroupName – Enter a name for your node group. This name can be used later to identify the
Auto Scaling node group that is created for your worker nodes.
• NodeAutoScalingGroupMinSize – Enter the minimum number of nodes that your worker node
Auto Scaling group can scale in to.
• NodeAutoScalingGroupDesiredCapacity – Enter the desired number of nodes to scale to when
your stack is created.
• NodeAutoScalingGroupMaxSize – Enter the maximum number of nodes that your worker node
Auto Scaling group can scale out to.
• NodeInstanceType – Choose one of the A1 instance types for your worker nodes, such as
a1.large.
• NodeVolumeSize – Specify a root volume size for your worker nodes, in GiB.
• KeyName – Enter the name of an Amazon EC2 SSH key pair that you can use to connect to your
worker nodes using SSH after they launch. If you don't already have an Amazon EC2 key pair,
you can create one in the AWS Management Console. For more information, see Amazon EC2 Key
Pairs in the Amazon EC2 User Guide for Linux Instances.
Note
If you do not provide a key pair here, the AWS CloudFormation stack creation fails.
• BootstrapArguments – Arguments to pass to the bootstrap script. For details, see https://github.com/awslabs/amazon-eks-ami/blob/master/files/bootstrap.sh.
• VpcId – Enter the ID for the VPC that you created in the section called “Create a cluster” (p. 61).
• Subnets – Choose the subnets that you created in the section called “Create a cluster” (p. 61).
• NodeImageAMI113 – The Amazon EC2 Systems Manager parameter for the 1.13 AMI image ID.
This value is ignored if you selected 1.14 for KubernetesVersion.
• NodeImageAMI114 – The Amazon EC2 Systems Manager parameter for the 1.14 AMI image ID.
This value is ignored if you selected 1.13 for KubernetesVersion.
5. Choose Next and then choose Next again.
6. Acknowledge that the stack might create IAM resources, and then choose Create stack.
7. When your stack has finished creating, select it in the console and choose Outputs.
8. Record the NodeInstanceRole for the node group that was created. You need this when you
configure your Amazon EKS worker nodes.
Join Worker Nodes to a Cluster
a. Download the AWS IAM Authenticator configuration map with the following command.
wget https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-11-15/aws-auth-cm.yaml
b. Open the file with your favorite text editor. Replace the <ARN of instance role (not
instance profile)> snippet with the NodeInstanceRole value that you recorded in the
previous procedure, and save the file.
Important
Do not modify any other lines in this file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
c. Apply the configuration. This command may take a few minutes to finish.
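For example:
kubectl apply -f aws-auth-cm.yaml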
Note
If you receive the error "aws-iam-authenticator": executable file
not found in $PATH, your kubectl isn't configured for Amazon EKS. For more
information, see Installing aws-iam-authenticator (p. 179).
If you receive any other authorization or resource type errors, see Unauthorized or
Access Denied (kubectl) (p. 275) in the troubleshooting section.
2. Watch the status of your nodes and wait for them to reach the Ready status.
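For example:
kubectl get nodes --watch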
clusterrole.rbac.authorization.k8s.io/cni-metrics-helper created
serviceaccount/cni-metrics-helper created
clusterrolebinding.rbac.authorization.k8s.io/cni-metrics-helper created
deployment.extensions/cni-metrics-helper created
2. Confirm that the CNI metrics helper is running with the following command.
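One way to check is to list the kube-system pods and filter for the helper's name:
kubectl get pods -n kube-system | grep cni-metrics-helper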
The pod is running if you see the cni-metrics-helper pod returned in the output.
Viewing API Server Flags
When a cluster is first created, the initial API server logs include the flags that were used to start the API
server. If you enable API server logs when you launch the cluster, or shortly thereafter, these logs are
sent to CloudWatch Logs and you can view them there.
1. If you have not already done so, enable API server logs for your Amazon EKS cluster.
4. Scroll up to the earliest events (the beginning of the log stream). You should see the initial API
server flags for the cluster.
Note
If you don't see the API server logs at the beginning of the log stream, then it is likely that
the API server log file was rotated on the server before you enabled API server logging on
the server. Any log files that are rotated before API server logging is enabled cannot be
exported to CloudWatch.
However, you can create a new cluster with the same Kubernetes version and enable the API
server logging when you create the cluster. Clusters with the same platform version have
the same flags enabled, so your flags should match the new cluster's flags. When you finish
viewing the flags for the new cluster in CloudWatch, you can delete the new cluster.
Worker Nodes
Worker machines in Kubernetes are called nodes. Amazon EKS worker nodes run in your AWS account
and connect to your cluster's control plane via the cluster API server endpoint. You deploy one or more
worker nodes into a node group. A node group is one or more Amazon EC2 instances that are deployed
in an Amazon EC2 Auto Scaling group. All instances in a node group must:
A cluster can contain several node groups, and each node group can contain several worker nodes. If
you deploy managed node groups (p. 82), then there is a maximum number of nodes that can be in
a node group and a maximum number of node groups that you can have within a cluster. See service
quotas (p. 282) for details. With this information, you can determine how many node groups you may
need in a cluster to meet your requirements.
Amazon EKS worker nodes are standard Amazon EC2 instances, and you are billed for them based on
normal EC2 prices. For more information, see Amazon EC2 Pricing.
Amazon EKS provides a specialized Amazon Machine Image (AMI) called the Amazon EKS-optimized AMI.
This AMI is built on top of Amazon Linux 2, and is configured to serve as the base image for Amazon EKS
worker nodes. The AMI is configured to work with Amazon EKS out of the box, and it includes Docker,
kubelet, and the AWS IAM Authenticator. The AMI also contains a specialized bootstrap script that allows
it to discover and connect to your cluster's control plane automatically.
Note
You can track security or privacy events for Amazon Linux 2 at the Amazon Linux Security Center
or subscribe to the associated RSS feed. Security and privacy events include an overview of the
issue, what packages are affected, and how to update your instances to correct the issue.
Beginning with Kubernetes version 1.14 and platform version (p. 48) eks.3, Amazon EKS clusters
support Managed Node Groups (p. 82), which automate the provisioning and lifecycle management of
nodes. Earlier versions of Amazon EKS clusters can launch worker nodes with an Amazon EKS-provided
AWS CloudFormation template.
If you restrict access to your cluster's public endpoint using CIDR blocks, it is recommended that you
also enable private endpoint access so that worker nodes can communicate with the cluster. Without
the private endpoint enabled, the CIDR blocks that you specify for public access must include the egress
sources from your VPC. For more information, see Amazon EKS Cluster Endpoint Access Control (p. 35).
To add worker nodes to your Amazon EKS cluster, see Launching Amazon EKS Linux Worker
Nodes (p. 88). If you follow the steps in the guide, the required tag is added to the worker node for
you. If you launch workers manually, you must add the following tag to each worker node. For more
information, see Adding and Deleting Tags on an Individual Resource.
Key: kubernetes.io/cluster/<cluster-name>
Value: owned
For more information about worker nodes from a general Kubernetes perspective, see Nodes in the
Kubernetes documentation.
Amazon EKS-Optimized Linux AMI
The AMI IDs for the latest Amazon EKS-optimized AMI (with and without GPU support (p. 72)) are
shown in the following table. You can also retrieve the IDs with an AWS Systems Manager parameter
using different tools. For more information, see Retrieving Amazon EKS-Optimized AMI IDs (p. 78).
Note
The Amazon EKS-optimized AMI with GPU support only supports GPU instance types. Be sure
to specify these instance types in your worker node AWS CloudFormation template. By using
the Amazon EKS-optimized AMI with GPU support, you agree to NVIDIA's end user license
agreement (EULA).
Important
These AMIs require the latest AWS CloudFormation worker node template. You can't use these
AMIs with a previous version of the worker node template; they will fail to join your cluster. Be
sure to upgrade any existing AWS CloudFormation worker stacks with the latest template (URL
shown below) before you attempt to use these AMIs.
https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-11-15/amazon-eks-nodegroup.yaml
The AWS CloudFormation worker node template launches your worker nodes with Amazon EC2 user data
that triggers a specialized bootstrap script. This script allows your worker nodes to discover and connect
to your cluster's control plane automatically. For more information, see Launching Amazon EKS Linux
Worker Nodes (p. 88).
Amazon EKS-Optimized AMI Build Scripts
The Amazon EKS-optimized AMI is built on top of Amazon Linux 2, specifically for use as a worker node
in Amazon EKS clusters. You can use this repository to view the specifics of how the Amazon EKS team
configures kubelet, Docker, the AWS IAM Authenticator for Kubernetes, and more.
The build scripts repository includes a HashiCorp Packer template and build scripts to generate an AMI.
These scripts are the source of truth for Amazon EKS-optimized AMI builds, so you can follow the GitHub
repository to monitor changes to our AMIs. For example, perhaps you want your own AMI to use the
same version of Docker that the EKS team uses for the official AMI.
The GitHub repository also contains the specialized bootstrap script that runs at boot time to configure
your instance's certificate data, control plane endpoint, cluster name, and more.
Additionally, the GitHub repository contains our Amazon EKS worker node AWS CloudFormation
templates. These templates make it easier to spin up an instance running the Amazon EKS-optimized
AMI and register it with a cluster.
Amazon EKS-Optimized AMI with GPU Support
In addition to the standard Amazon EKS-optimized AMI configuration, the GPU AMI includes the
following:
• NVIDIA drivers
• The nvidia-docker2 package
• The nvidia-container-runtime (as the default runtime)
The AMI IDs for the latest Amazon EKS-optimized AMI with GPU support vary by Region and Kubernetes
version. You can retrieve the IDs with an AWS Systems Manager parameter using different tools. For
more information, see Retrieving Amazon EKS-Optimized AMI IDs (p. 78).
Note
The Amazon EKS-optimized AMI with GPU support only supports GPU instance types. Be sure
to specify these instance types in your worker node AWS CloudFormation template. By using
the Amazon EKS-optimized AMI with GPU support, you agree to NVIDIA's end user license
agreement (EULA).
Important
These AMIs require the latest AWS CloudFormation worker node template. You can't use these
AMIs with a previous version of the worker node template; they will fail to join your cluster. Be
sure to upgrade any existing AWS CloudFormation worker stacks with the latest template (URL
shown below) before you attempt to use these AMIs.
https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-11-15/amazon-eks-nodegroup.yaml
The AWS CloudFormation worker node template launches your worker nodes with Amazon EC2 user data
that triggers a specialized bootstrap script. This script allows your worker nodes to discover and connect
to your cluster's control plane automatically. For more information, see Launching Amazon EKS Linux
Worker Nodes (p. 88).
After your GPU worker nodes join your cluster, you must apply the NVIDIA device plugin for Kubernetes
as a DaemonSet on your cluster with the following command.
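A typical invocation looks like the following; the manifest URL and release tag are illustrative, so check the NVIDIA/k8s-device-plugin project for the release that matches your Kubernetes version.
kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v1.12/nvidia-device-plugin.yml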
You can verify that your nodes have allocatable GPUs with the following command:
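One way to check is to list the allocatable nvidia.com/gpu resource on each node; the custom-columns expression below is a sketch.
kubectl get nodes "-o=custom-columns=NAME:.metadata.name,GPU:.status.allocatable.nvidia\.com/gpu"
You can then deploy a test pod, such as the following nvidia-smi manifest (save it to a file and apply it with kubectl apply -f).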
apiVersion: v1
kind: Pod
metadata:
  name: nvidia-smi
spec:
  restartPolicy: OnFailure
  containers:
  - name: nvidia-smi
    image: nvidia/cuda:9.2-devel
    args:
    - "nvidia-smi"
    resources:
      limits:
        nvidia.com/gpu: 1
After the pod has finished running, view its logs with the following command:
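Assuming the pod name from the manifest above:
kubectl logs nvidia-smi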
Output:
+-----------------------------------------------------------------------------+
| Processes: GPU Memory |
| GPU PID Type Process name Usage |
|=============================================================================|
| No running processes found |
+-----------------------------------------------------------------------------+
Amazon EKS-Optimized Linux AMI Versions
The Amazon EKS-optimized AMI metadata, including the AMI ID, for each variant can be retrieved
programmatically. For more information, see Retrieving Amazon EKS-Optimized AMI IDs (p. 78).
AMIs are versioned by Kubernetes version and the release date of the AMI in the following format:
k8s_major_version.k8s_minor_version.k8s_patch_version-release_date
AMI version          kubelet version   Docker version   Kernel version   Packer version   NVIDIA driver version (GPU AMI)
1.15.10-20200228     1.15.10           18.09.9-ce       4.14.165         v20200228        418.87.00
1.13.12-20191213     1.13.12           18.09.9-ce       4.14.154         v20191213        418.87.00
1.13.11-20191119     1.13.11           18.09.9-ce       4.14.152         v20191119        418.87.00
1.13.11-20190927     1.13.11           18.06.1-ce       4.14.146         v20190927        418.87.00
1.12.10-20191213     1.12.10           18.09.9-ce       4.14.154         v20191213        418.87.00
1.12.10-20191119     1.12.10           18.09.9-ce       4.14.152         v20191119        418.87.00
1.12.10-20190927     1.12.10           18.06.1-ce       4.14.146         v20190927        418.87.00
Retrieving Amazon EKS-Optimized AMI IDs
Select the name of the tool that you want to retrieve the AMI ID with.
AWS CLI
You can retrieve the image ID of the latest recommended Amazon EKS-optimized Amazon Linux AMI
with the following command by using the sub-parameter image_id. Replace 1.15 with a supported
version (p. 48) and region-code with an Amazon EKS-supported Region for which you want the
AMI ID. Replace amazon-linux-2 with amazon-linux-2-gpu to see the ID for the AMI with GPU support.
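A sketch of the command, assuming the same SSM parameter path that appears in the console URL below:
aws ssm get-parameter --name /aws/service/eks/optimized-ami/1.15/amazon-linux-2/recommended/image_id --region region-code --query "Parameter.Value" --output text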
Example output:
ami-abcd1234efgh5678i
You can query for the recommended Amazon EKS-optimized AMI ID using a URL. The URL opens
the Amazon EC2 Systems Manager console with the value of the ID for the parameter. In the
following URL, replace 1.15 with a supported version (p. 48) and region-code with an Amazon
EKS-supported Region for which you want the AMI ID. Replace amazon-linux-2 with amazon-linux-2-gpu
to see the ID for the AMI with GPU support.
https://console.aws.amazon.com/systems-manager/parameters/%252Faws%252Fservice%252Feks%252Foptimized-ami%252F1.15%252Famazon-linux-2%252Frecommended%252Fimage_id/description?region=region-code
Amazon EKS-Optimized Windows AMI
The AMI IDs for the latest Amazon EKS-optimized Windows AMI vary by Region and Windows Server
variant. You can retrieve the IDs with an AWS Systems Manager parameter using different tools. For more
information, see Retrieving Amazon EKS-Optimized Windows AMI IDs (p. 81).
Retrieving Amazon EKS-Optimized Windows AMI IDs
Select the name of the tool that you want to retrieve the AMI ID with.
AWS CLI
You can retrieve the image ID of the latest recommended Amazon EKS-optimized Windows AMI with
the following command by using the sub-parameter image_id. You can replace 1.15 with 1.14
and region-code with an Amazon EKS-supported Region for which you want the AMI ID. Replace
Core with Full to see the Windows Server full AMI ID.
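A sketch of the command, assuming the parameter path encoded in the console URL shown below:
aws ssm get-parameter --name /aws/service/ami-windows-latest/Windows_Server-2019-English-Core-EKS_Optimized-1.15/image_id --region region-code --query "Parameter.Value" --output text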
Example output:
ami-00a053f1635fffea0
You can query for the recommended Amazon EKS-optimized AMI ID using a URL. The URL opens the
Amazon EC2 Systems Manager console with the value of the ID for the parameter. In the following
URL, you can replace 1.15 with 1.14 and region-code with an Amazon EKS-supported Region for
which you want the AMI ID. Replace Core with Full to see the Windows Server full AMI ID.
https://console.aws.amazon.com/systems-manager/parameters/%252Faws%252Fservice%252Fami-windows-latest%252FWindows_Server-2019-English-Core-EKS_Optimized-1.15%252Fimage_id/description?region=region-code
Managed Node Groups
With Amazon EKS managed node groups, you don’t need to separately provision or register the Amazon
EC2 instances that provide compute capacity to run your Kubernetes applications. You can create,
update, or terminate nodes for your cluster with a single operation. Nodes run using the latest Amazon
EKS-optimized AMIs in your AWS account while node updates and terminations gracefully drain nodes to
ensure that your applications stay available.
All managed nodes are provisioned as part of an Amazon EC2 Auto Scaling group that is managed for
you by Amazon EKS. All resources including the instances and Auto Scaling groups run within your AWS
account. Each node group uses the Amazon EKS-optimized Amazon Linux 2 AMI and can run across
multiple Availability Zones that you define.
You can add a managed node group to new or existing clusters using the Amazon EKS console, eksctl,
AWS CLI, AWS API, or infrastructure as code tools including AWS CloudFormation. Nodes launched as
part of a managed node group are automatically tagged for auto-discovery by the Kubernetes cluster
autoscaler and you can use the node group to apply Kubernetes labels to nodes and update them at any
time.
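For example, a sketch of adding a managed node group to an existing cluster with eksctl; the cluster name, node group name, instance type, and sizes are placeholders, and flag availability depends on your eksctl version.
eksctl create nodegroup --cluster my-cluster --managed --name my-managed-nodes --node-type t3.medium --nodes 3 --nodes-min 1 --nodes-max 4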
There are no additional costs to use Amazon EKS managed node groups; you pay only for the AWS
resources that you provision. These include Amazon EC2 instances, Amazon EBS volumes, Amazon EKS
cluster hours, and any other AWS infrastructure. There are no minimum fees and no upfront commitments.
To get started with a new Amazon EKS cluster and managed node group, see Getting Started with the
AWS Management Console (p. 10).
To add a managed node group to an existing cluster, see Creating a Managed Node Group (p. 83).
• All managed nodes are provisioned as part of an Amazon EC2 Auto Scaling group that is managed for
you by Amazon EKS and all resources including Amazon EC2 instances and Auto Scaling groups run
within your AWS account.
• A managed node group's Auto Scaling group spans all of the subnets that you specify when you create
the group.
• Amazon EKS tags managed node group resources so that they are configured to use the Kubernetes
Cluster Autoscaler (p. 130).
Important
If you are running a stateful application across multiple Availability Zones that is backed by
Amazon EBS volumes and using the Kubernetes Cluster Autoscaler (p. 130), you should
configure multiple node groups, each scoped to a single Availability Zone. In addition, you
should enable the --balance-similar-node-groups feature.
• Instances in a managed node group use the latest version of the Amazon EKS-optimized Amazon Linux
2 AMI for its cluster's Kubernetes version. You can choose between standard and GPU variants of the
Amazon EKS-optimized Amazon Linux 2 AMI.
• Amazon EKS follows the shared responsibility model for CVEs and security patches on managed node
groups. Because managed nodes run the Amazon EKS-optimized AMIs, Amazon EKS is responsible for
building patched versions of these AMIs when bugs or issues are reported and we are able to publish
a fix. However, you are responsible for deploying these patched AMI versions to your managed node
groups. When updates become available, see Updating a Managed Node Group (p. 85).
• Amazon EKS managed node groups can be launched in both public and private subnets. The only
requirement is for the subnets to have outbound internet access. Amazon EKS automatically associates
a public IP to the instances started as part of a managed node group to ensure that these instances
can successfully join a cluster.
This ensures compatibility with existing VPCs created using eksctl or the Amazon EKS-vended
AWS CloudFormation templates (p. 150). The public subnets in these VPCs do not have
MapPublicIpOnLaunch set to true, so by default instances launched into these subnets are not
assigned a public IP address. Creating a public IP address on these instances launched by managed
node groups in public subnets ensures that they have outbound internet access and are able to join the
cluster.
• You can create multiple managed node groups within a single cluster. For example, you could create
one node group with the standard Amazon EKS-optimized Amazon Linux 2 AMI for some workloads
and another with the GPU variant for workloads that require GPU support.
• If your managed node group encounters a health issue, Amazon EKS returns an error message to help
you to diagnose the issue. For more information, see Managed Node Group Errors (p. 276).
• Amazon EKS adds Kubernetes labels to managed node group instances. These Amazon EKS-provided
labels are prefixed with eks.amazonaws.com.
• Amazon EKS automatically drains nodes using the Kubernetes API during terminations or updates.
Updates respect the pod disruption budgets that you set for your pods.
• There are no additional costs to use Amazon EKS managed node groups; you pay only for the AWS
resources that you provision.
Managed Node Groups (p. 82) are supported on Amazon EKS clusters beginning with Kubernetes
version 1.14 and platform version (p. 48) eks.3. Existing clusters can update to version 1.14 or later to
take advantage of this feature. For more information, see Updating an Amazon EKS Cluster Kubernetes
Version (p. 29).
Creating a Managed Node Group
If this is your first time launching an Amazon EKS managed node group, we recommend that you follow
one of our Getting Started with Amazon EKS (p. 3) guides instead. The guides provide complete end-to-
end walkthroughs for creating an Amazon EKS cluster with worker nodes.
Important
Amazon EKS worker nodes are standard Amazon EC2 instances, and you are billed for them
based on normal Amazon EC2 prices. For more information, see Amazon EC2 Pricing.
1. Wait for your cluster status to show as ACTIVE. You cannot create a managed node group for a
cluster that is not yet ACTIVE.
2. Open the Amazon EKS console at https://console.aws.amazon.com/eks/home#/clusters.
3. Choose the name of the cluster that you want to create your managed node group in.
4. On the cluster page, choose Add node group.
5. On the Configure node group page, fill out the parameters accordingly, and then choose Next.
6. On the Set compute configuration page, fill out the parameters accordingly, and then choose Next.
• AMI type — Choose Amazon Linux 2 (AL2_x86_64) for non-GPU instances, or Amazon Linux 2
GPU Enabled (AL2_x86_64_GPU) for GPU instances.
• Instance type — Choose the instance type to use in your managed node group. Larger instance
types can accommodate more pods.
• Disk size — Enter the disk size (in GiB) to use for your worker node root volume.
7. On the Setup scaling policies page, fill out the parameters accordingly, and then choose Next.
Note
Amazon EKS does not automatically scale your node group in or out. However, you can
configure the Kubernetes Cluster Autoscaler (p. 130) to do this for you.
• Minimum size — Specify the minimum number of worker nodes that the managed node group
can scale in to.
• Maximum size — Specify the maximum number of worker nodes that the managed node group
can scale out to.
• Desired size — Specify the current number of worker nodes that the managed node group should
maintain at launch.
8. On the Review and create page, review your managed node group configuration and choose Create.
9. Watch the status of your nodes and wait for them to reach the Ready status.
10. (GPU workers only) If you chose a GPU instance type and the Amazon EKS-optimized AMI with GPU
support, you must apply the NVIDIA device plugin for Kubernetes as a DaemonSet on your cluster
with the following command.
Now that you have a working Amazon EKS cluster with worker nodes, you are ready to start installing
Kubernetes add-ons and deploying applications to your cluster. The following documentation topics help
you to extend the functionality of your cluster.
• Cluster Autoscaler (p. 130) — Configure the Kubernetes Cluster Autoscaler to automatically adjust
the number of nodes in your node groups.
• Launch a Guest Book Application (p. 192) — Create a sample guest book application to test your
cluster and Linux worker nodes.
• Deploy a Windows Sample Application (p. 59) — Deploy a sample application to test your cluster and
Windows worker nodes.
• Tutorial: Deploy the Kubernetes Web UI (Dashboard) (p. 202) — This tutorial guides you through
deploying the Kubernetes dashboard to your cluster.
• Using Helm with Amazon EKS (p. 201) — The helm package manager for Kubernetes helps you
install and manage applications on your cluster.
• Installing the Kubernetes Metrics Server (p. 195) — The Kubernetes metrics server is an aggregator
of resource usage data in your cluster.
• Control Plane Metrics with Prometheus (p. 197) — This topic helps you deploy Prometheus into your
cluster with helm.
Updating a Managed Node Group
You may want to update a managed node group in any of the following scenarios:
• You have updated the Kubernetes version for your Amazon EKS cluster, and you want to update your
worker nodes to use the same Kubernetes version.
• A new AMI release version is available for your managed node group. For more information, see
Amazon EKS-Optimized Linux AMI Versions (p. 76).
• You want to adjust the minimum, maximum, or desired count of the instances in your managed node
group.
• You want to add or remove Kubernetes labels from the instances in your managed node group.
• You want to add or remove AWS tags from your managed node group.
If there is a newer AMI release version for your managed node group's Kubernetes version than the one
your node group is running, you can update it to use that new AMI version. If your cluster is running a
newer Kubernetes version than your node group, you can update the node group to use the latest AMI
release version that matches your cluster's Kubernetes version.
Note
You cannot roll back a node group to an earlier Kubernetes version or AMI version.
When a node in a managed node group is terminated due to a scaling action or update, the pods in that
node are drained first. For more information, see Managed Node Update Behavior (p. 87).
If you select a node group from the table and an update is available for it, you'll receive a
notification on the Node Group configuration page. If so, you can select the Update now button on
the Node Group configuration page.
Note
Update now only appears if there is an update available. If you do not see this text, then
your node group is running the latest available version.
4. On the Update AMI release version page, select the Available AMI release version that you want to
update to, select one of the following options for Update strategy, and choose Update.
• Rolling update — This option respects pod disruption budgets for your cluster and the update
fails if Amazon EKS is unable to gracefully drain the pods that are running on this node group due
to a pod disruption budget issue.
• Force update — This option does not respect pod disruption budgets and it forces node restarts.
• Tags — Add tags to or remove tags from your node group resource. These tags are only applied
to the Amazon EKS node group, and they do not propagate to other resources, such as subnets or
Amazon EC2 instances in the node group.
• Kubernetes labels — Add or remove Kubernetes labels to the nodes in your node group. The
labels shown here are only the labels that you have applied with Amazon EKS. Other labels may
exist on your nodes that are not shown here.
5. On the Edit node group page, edit the Group size if necessary.
• Minimum size — Specify the minimum number of worker nodes that the managed node group can
scale in to.
• Maximum size — Specify the maximum number of worker nodes that the managed node group
can scale out to. Managed node groups can support up to 100 nodes by default.
• Desired size — Specify the current number of worker nodes that the managed node group should
maintain.
6. When you are finished editing, choose Save changes.
Managed Node Update Behavior
1. Amazon EKS creates a new Amazon EC2 launch template version for the Auto Scaling group
associated with your node group. The new template uses the target AMI for the update.
2. The Auto Scaling group is updated to use the latest launch template with the new AMI.
3. The Auto Scaling group maximum size and desired size are incremented by 1 to support the new
instances that will be launched into your node group.
4. The Auto Scaling group launches a new instance with the new AMI to satisfy the increased desired size
of the node group.
5. Amazon EKS checks the nodes in the node group for the eks.amazonaws.com/nodegroup-image
label, and it cordons all of the nodes in the node group that are not labeled with the latest AMI
ID. This prevents nodes that have already been updated from a previous failed update from being
cordoned.
6. Amazon EKS randomly selects a node in your node group and sends a termination signal to the Auto
Scaling group. Then Amazon EKS sends a signal to drain the pods from the node.* After the node is
drained, it is terminated. This step is repeated until all of the nodes are using the new AMI version.
7. The Auto Scaling group maximum size and desired size are decremented by 1 to return to your pre-
update values.
* If pods do not drain from a node (for example, if a pod disruption budget is too restrictive) for 15
minutes, then one of two things happens:
• If the update is not forced, then the update fails and reports an error.
• If the update is forced, then the pods that could not be drained are deleted.
Deleting a Managed Node Group
When you delete a managed node group, Amazon EKS randomly selects a node in your node group
and sends a termination signal to the Auto Scaling group. Then Amazon EKS sends a signal to drain
the pods from the node. If pods do not drain from a node (for example, if a pod disruption budget is
too restrictive) for 15 minutes, then the pods are deleted. After the node is drained, it is terminated.
This step is repeated until all of the nodes in the Auto Scaling group are terminated, and then the Auto
Scaling group is deleted.
Important
If you delete a managed node group that uses a worker node IAM role that is not used by
any other managed node group in the cluster, the role is removed from the aws-auth
ConfigMap (p. 185). If any self-managed node groups in the cluster are using the same worker
node IAM role, the self-managed nodes will move to the NotReady status and cluster operation
will be disrupted. You can add the mapping back to the ConfigMap to minimize disruption.
Launching Amazon EKS Linux Worker Nodes
If this is your first time launching Amazon EKS Linux worker nodes, we recommend that you follow one
of our Getting Started with Amazon EKS (p. 3) guides instead. The guides provide complete end-to-end
walkthroughs for creating an Amazon EKS cluster with worker nodes.
Important
Amazon EKS worker nodes are standard Amazon EC2 instances, and you are billed for them
based on normal Amazon EC2 prices. For more information, see Amazon EC2 Pricing.
Choose the tab below that corresponds to your desired worker node creation method:
Managed Node Groups (p. 82) are supported on Amazon EKS clusters beginning with Kubernetes
version 1.14 and platform version (p. 48) eks.3. Existing clusters can update to version 1.14 or
later to take advantage of this feature. For more information, see Updating an Amazon EKS Cluster
Kubernetes Version (p. 29).
1. Wait for your cluster status to show as ACTIVE. You cannot create a managed node group for a
cluster that is not yet ACTIVE.
2. Open the Amazon EKS console at https://console.aws.amazon.com/eks/home#/clusters.
3. Choose the name of the cluster that you want to create your managed node group in.
4. On the cluster page, choose Add node group.
5. On the Configure node group page, fill out the parameters accordingly, and then choose Next.
Important
If you are running a stateful application across multiple Availability Zones
that is backed by Amazon EBS volumes and using the Kubernetes Cluster
Autoscaler (p. 130), you should configure multiple node groups, each scoped to a
single Availability Zone. In addition, you should enable the --balance-similar-
node-groups feature.
• Remote Access — (Optional) You can enable SSH access to the nodes in your managed
node group. Enabling SSH allows you to connect to your instances and gather diagnostic
information if there are issues. Complete the following steps to enable remote access.
Note
We highly recommend enabling remote access when you create your node group. You
cannot enable remote access after the node group is created.
1. Select the check box to Allow remote access to nodes.
2. For SSH key pair, choose an Amazon EC2 SSH key to use. For more information, see
Amazon EC2 Key Pairs in the Amazon EC2 User Guide for Linux Instances.
3. For Allow remote access from, choose All to allow SSH access from anywhere on the
Internet (0.0.0.0/0), or select a security group to allow SSH access from instances that
belong to that security group.
• Tags — (Optional) You can choose to tag your Amazon EKS managed node group. These
tags do not propagate to other resources in the node group, such as Auto Scaling groups or
instances. For more information, see Tagging Your Amazon EKS Resources (p. 263).
• Kubernetes labels — (Optional) You can choose to apply Kubernetes labels to the nodes in
your managed node group.
6. On the Set compute configuration page, fill out the parameters accordingly, and then choose
Next.
• AMI type — Choose Amazon Linux 2 (AL2_x86_64) for non-GPU instances, or Amazon Linux
2 GPU Enabled (AL2_x86_64_GPU) for GPU instances.
• Instance type — Choose the instance type to use in your managed node group. Larger
instance types can accommodate more pods.
• Disk size — Enter the disk size (in GiB) to use for your worker node root volume.
7. On the Setup scaling policies page, fill out the parameters accordingly, and then choose Next.
Note
Amazon EKS does not automatically scale your node group in or out. However, you can
configure the Kubernetes Cluster Autoscaler (p. 130) to do this for you.
• Minimum size — Specify the minimum number of worker nodes that the managed node
group can scale in to.
• Maximum size — Specify the maximum number of worker nodes that the managed node
group can scale out to.
• Desired size — Specify the current number of worker nodes that the managed node group
should maintain at launch.
8. On the Review and create page, review your managed node group configuration and choose
Create.
9. Watch the status of your nodes and wait for them to reach the Ready status.
10. (GPU workers only) If you chose a GPU instance type and the Amazon EKS-optimized AMI with
GPU support, you must apply the NVIDIA device plugin for Kubernetes as a DaemonSet on your
cluster with the following command.
eksctl
This procedure assumes that you have installed eksctl, and that your eksctl version is at least
0.15.0-rc.2. You can check your version with the following command:
eksctl version
1. Create your worker node group with the following command. Replace the example values
with your own values.
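A sketch of the command; the cluster name, node group name, instance type, and sizes are placeholders, and flags may vary by eksctl version.
eksctl create nodegroup \
--cluster default \
--name standard-workers \
--node-type t3.medium \
--node-ami auto \
--nodes 3 \
--nodes-min 1 \
--nodes-max 4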
Note
For more information on the available options for eksctl create nodegroup, see the
project README on GitHub or view the help page with the following command.
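For example:
eksctl create nodegroup --help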
Output:
You'll see several lines of output as the worker nodes are created. The last line of output is
similar to the following example line.
2. (Optional) Launch a Guest Book Application (p. 192) — Deploy a sample application to test
your cluster and Linux worker nodes.
Self-managed nodes
• You have created a VPC and security group that meet the requirements for an Amazon EKS
cluster. For more information, see Cluster VPC Considerations (p. 152) and Amazon EKS Security
Group Considerations (p. 154). The Getting Started with Amazon EKS (p. 3) guide creates a
VPC that meets the requirements, or you can also follow Creating a VPC for Your Amazon EKS
Cluster (p. 150) to create one manually.
• You have created an Amazon EKS cluster and specified that it use the VPC and security group that
meet the requirements of an Amazon EKS cluster. For more information, see Creating an Amazon
EKS Cluster (p. 21).
To launch your self-managed worker nodes with the AWS Management Console
1. Wait for your cluster status to show as ACTIVE. If you launch your worker nodes before the
cluster is active, the worker nodes will fail to register with the cluster and you will have to
relaunch them.
2. Open the AWS CloudFormation console at https://console.aws.amazon.com/cloudformation
3. Choose Create stack.
4. For Specify template, select Amazon S3 URL, then copy the following URL, paste it into
Amazon S3 URL, and select Next twice.
https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-11-15/amazon-eks-nodegroup.yaml
Note
If you intend to only deploy worker nodes to private subnets, you should
edit this template in the AWS CloudFormation designer and modify the
AssociatePublicIpAddress parameter in the NodeLaunchConfig to be false.
AssociatePublicIpAddress: 'false'
5. On the Quick create stack page, fill out the following parameters accordingly:
• Stack name: Choose a stack name for your AWS CloudFormation stack. For example, you can
call it <cluster-name>-worker-nodes.
• ClusterName: Enter the name that you used when you created your Amazon EKS cluster.
Important
This name must exactly match the name you used in Step 1: Create Your Amazon EKS
Cluster (p. 15); otherwise, your worker nodes cannot join the cluster.
• ClusterControlPlaneSecurityGroup: Choose the SecurityGroups value from the AWS
CloudFormation output that you generated with Create your Amazon EKS Cluster VPC (p. 12).
• NodeGroupName: Enter a name for your node group. This name can be used later to identify
the Auto Scaling node group that is created for your worker nodes.
• NodeAutoScalingGroupMinSize: Enter the minimum number of nodes that your worker node
Auto Scaling group can scale in to.
• NodeAutoScalingGroupDesiredCapacity: Enter the desired number of nodes to scale to when
your stack is created.
• NodeAutoScalingGroupMaxSize: Enter the maximum number of nodes that your worker
node Auto Scaling group can scale out to.
• NodeInstanceType: Choose an instance type for your worker nodes.
Note
The supported instance types for the latest version of the Amazon VPC CNI plugin
for Kubernetes are shown here. You may need to update your CNI version to take
advantage of the latest supported instance types. For more information, see Amazon
VPC CNI Plugin for Kubernetes Upgrades (p. 166).
Important
Some instance types might not be available in all Regions.
1. Download, edit, and apply the AWS IAM Authenticator configuration map.
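a. Download the configuration map. The template-date portion of the URL below mirrors the worker node template URL used earlier in this topic and is an assumption; adjust it to match your setup.
curl -o aws-auth-cm.yaml https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-11-15/aws-auth-cm.yaml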
b. Open the file with your favorite text editor. Replace the <ARN of instance role (not
instance profile)> snippet with the NodeInstanceRole value that you recorded in the
previous procedure, and save the file.
Important
Do not modify any other lines in this file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
c. Apply the configuration. This command may take a few minutes to finish.
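For example, assuming the aws-auth-cm.yaml file name used above:
kubectl apply -f aws-auth-cm.yaml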
Note
If you receive the error "aws-iam-authenticator": executable file
not found in $PATH, your kubectl isn't configured for Amazon EKS. For more
information, see Installing aws-iam-authenticator (p. 179).
If you receive any other authorization or resource type errors, see Unauthorized or
Access Denied (kubectl) (p. 275) in the troubleshooting section.
2. Watch the status of your nodes and wait for them to reach the Ready status.
3. (GPU workers only) If you chose a GPU instance type and the Amazon EKS-optimized AMI with
GPU support, you must apply the NVIDIA device plugin for Kubernetes as a DaemonSet on your
cluster with the following command.
4. (Optional) Launch a Guest Book Application (p. 192) — Deploy a sample application to test
your cluster and Linux worker nodes.
Launching Amazon EKS Windows Worker Nodes
You must enable Windows support for your cluster and we recommend that you review important
considerations before you launch a Windows worker node group. For more information, see Enabling
Windows Support (p. 55).
Choose the tab below that corresponds to your desired worker node creation method:
eksctl
If you don't already have an Amazon EKS cluster and a Linux worker node group to add a Windows
worker node group to, then we recommend that you follow the Getting Started with eksctl (p. 3)
guide instead. The guide provides a complete end-to-end walkthrough for creating an Amazon EKS
cluster with Linux and Windows worker nodes. If you have an existing Amazon EKS cluster and a
Linux worker node group to add a Windows worker node group to, then complete the following
steps to add the Windows worker node group.
This procedure assumes that you have installed eksctl, and that your eksctl version is at least
0.15.0-rc.2. You can check your version with the following command:
eksctl version
1. Create your worker node group with the following command. Replace the example values
with your own values.
Note
For more information on the available options for eksctl create nodegroup, see the
project README on GitHub or view the help page with the following command.
Output:
You'll see several lines of output as the worker nodes are created. The last line of output is
similar to the following example line.
2. (Optional) Deploy a Windows Sample Application (p. 59) — Deploy a sample application to test
your cluster and Windows worker nodes.
• You have an existing Amazon EKS cluster and a Linux worker node group. If you don't have these
resources, we recommend that you follow one of our Getting Started with Amazon EKS (p. 3)
guides to create them. The guides provide a complete end-to-end walkthrough for creating an
Amazon EKS cluster with Linux worker nodes.
• You have created a VPC and security group that meet the requirements for an Amazon EKS
cluster. For more information, see Cluster VPC Considerations (p. 152) and Amazon EKS Security
Group Considerations (p. 154). The Getting Started with Amazon EKS (p. 3) guide creates a
VPC that meets the requirements, or you can also follow Creating a VPC for Your Amazon EKS
Cluster (p. 150) to create one manually.
1. Wait for your cluster status to show as ACTIVE. If you launch your worker nodes before the
cluster is active, the worker nodes will fail to register with the cluster and you will have to
relaunch them.
2. Open the AWS CloudFormation console at https://console.aws.amazon.com/cloudformation
3. Choose Create stack.
4. For Specify template, select Amazon S3 URL, then copy the following URL, paste it into
Amazon S3 URL, and select Next twice.
https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-11-15/amazon-eks-windows-nodegroup.yaml
5. On the Quick create stack page, fill out the following parameters accordingly:
• Stack name: Choose a stack name for your AWS CloudFormation stack. For example, you can
call it cluster-name-worker-nodes.
• ClusterName: Enter the name that you used when you created your Amazon EKS cluster.
Important
This name must exactly match the name you used in Step 1: Create Your Amazon EKS
Cluster (p. 15); otherwise, your worker nodes cannot join the cluster.
• ClusterControlPlaneSecurityGroup: Choose the SecurityGroups value from the AWS
CloudFormation output that you generated with Create your Amazon EKS Cluster VPC (p. 12).
• NodeGroupName: Enter a name for your node group. This name can be used later to identify
the Auto Scaling node group that is created for your worker nodes.
• NodeAutoScalingGroupMinSize: Enter the minimum number of nodes that your worker node
Auto Scaling group can scale in to.
• NodeAutoScalingGroupDesiredCapacity: Enter the desired number of nodes to scale to when
your stack is created.
• NodeAutoScalingGroupMaxSize: Enter the maximum number of nodes that your worker
node Auto Scaling group can scale out to.
• NodeInstanceType: Choose an instance type for your worker nodes.
Note
The supported instance types for the latest version of the Amazon VPC CNI plugin
for Kubernetes are shown here. You may need to update your CNI version to take
advantage of the latest supported instance types. For more information, see Amazon
VPC CNI Plugin for Kubernetes Upgrades (p. 166).
Important
Some instance types might not be available in all Regions.
• NodeImageIdSSMParam: Pre-populated with the Amazon EC2 Systems Manager parameter
of the current recommended Amazon EKS-Optimized Windows Core AMI ID. If you want to
use the full version of Windows, then replace Core with Full.
• NodeImageId: (Optional) If you are using your own custom AMI (instead of the Amazon EKS-
optimized AMI), enter a worker node AMI ID for your Region. If you specify a value here, it
overrides any values in the NodeImageIdSSMParam field.
• NodeVolumeSize: Specify a root volume size for your worker nodes, in GiB.
• KeyName: Enter the name of an Amazon EC2 SSH key pair that you can use to connect to your
worker nodes with SSH after they launch. If you don't already have an Amazon EC2 key pair, you
can create one in the AWS Management Console. For more information, see
Amazon EC2 Key Pairs in the Amazon EC2 User Guide for Windows Instances.
Note
If you do not provide a keypair here, the AWS CloudFormation stack creation fails.
• BootstrapArguments: Specify any optional arguments to pass to the worker node bootstrap
script, such as extra kubelet arguments using -KubeletExtraArgs.
• VpcId: Select the ID for the VPC that you created in Create your Amazon EKS Cluster
VPC (p. 12).
• NodeSecurityGroups: Select the security group that was created for your Linux worker node
group in Create your Amazon EKS Cluster VPC (p. 12). If your Linux worker nodes have more
than one security group attached to them (for example, if the Linux worker node group was
created with eksctl), specify all of them here.
• Subnets: Choose the subnets that you created in Create your Amazon EKS Cluster VPC (p. 12).
If you created your VPC using the steps described at Creating a VPC for Your Amazon EKS
Cluster (p. 150), then specify only the private subnets within the VPC for your worker nodes
to launch into.
6. Acknowledge that the stack might create IAM resources, and then choose Create stack.
7. When your stack has finished creating, select it in the console and choose Outputs.
8. Record the NodeInstanceRole for the node group that was created. You need this when you
configure your Amazon EKS Windows worker nodes.
1. Download, edit, and apply the AWS IAM Authenticator configuration map.
b. Open the file with your favorite text editor. Replace the <ARN of instance role (not
instance profile) of **Linux** worker node> and <ARN of instance role
(not instance profile) of **Windows** worker node> snippets with the
NodeInstanceRole values that you recorded for your Linux and Windows worker nodes, and
save the file.
Important
Do not modify any other lines in this file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile) of **Linux** worker node>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    - rolearn: <ARN of instance role (not instance profile) of **Windows** worker node>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
        - eks:kube-proxy-windows
c. Apply the configuration. This command may take a few minutes to finish.
Note
If you receive the error "aws-iam-authenticator": executable file
not found in $PATH, your kubectl isn't configured for Amazon EKS. For more
information, see Installing aws-iam-authenticator (p. 179).
If you receive any other authorization or resource type errors, see Unauthorized or
Access Denied (kubectl) (p. 275) in the troubleshooting section.
2. Watch the status of your nodes and wait for them to reach the Ready status.
3. (Optional) Deploy a Windows Sample Application (p. 59) — Deploy a sample application to test
your cluster and Windows worker nodes.
Worker Node Updates
There are two basic ways to update self-managed node groups in your clusters to use a new AMI: create a
new worker node group and migrate your pods to that group, or update the AWS CloudFormation stack
for an existing worker node group to use the new AMI. This latter method is not supported for worker
node groups that were created with eksctl.
Migrating to a new worker node group is more graceful than simply updating the AMI ID in an existing
AWS CloudFormation stack, because the migration process taints the old node group as NoSchedule
and drains the nodes after a new stack is ready to accept the existing pod workload.
Topics
• Migrating to a New Worker Node Group (p. 97)
• Updating an Existing Worker Node Group (p. 102)
Migrating to a New Worker Node Group
eksctl
This procedure assumes that you have installed eksctl, and that your eksctl version is at least
0.15.0-rc.2. You can check your version with the following command:
eksctl version
1. Retrieve the name of your existing worker node groups, substituting default with your cluster
name.
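A sketch of the command:
eksctl get nodegroups --cluster default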
Output:
2. Launch a new worker node group with eksctl with the following command, substituting the
example values with your own values.
Note
For more available flags and their descriptions, see https://eksctl.io/.
3. When the previous command completes, verify that all of your worker nodes have reached the
Ready state with the following command:
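For example:
kubectl get nodes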
4. Delete the original node group with the following command, substituting the example values
with your cluster and nodegroup names:
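A sketch, assuming the cluster and node group names from the earlier steps:
eksctl delete nodegroup --cluster default --name standard-workers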
To migrate your applications to a new worker node group with the AWS Management
Console
1. Launch a new worker node group by following the steps outlined in Launching Amazon EKS
Linux Worker Nodes (p. 88).
2. When your stack has finished creating, select it in the console and choose Outputs.
3. Record the NodeInstanceRole for the node group that was created. You need this to add the
new Amazon EKS worker nodes to your cluster.
Note
If you have attached any additional IAM policies to your old node group IAM role, such
as adding permissions for the Kubernetes Cluster Autoscaler, you should attach those
same policies to your new node group IAM role to maintain that functionality on the
new group.
4. Update the security groups for both worker node groups so that they can communicate with
each other. For more information, see Amazon EKS Security Group Considerations (p. 154).
a. Record the security group IDs for both worker node groups. This is shown as the
NodeSecurityGroup value in the AWS CloudFormation stack outputs.
You can use the following AWS CLI commands to get the security group IDs from the stack
names. In these commands, oldNodes is the AWS CloudFormation stack name for your
older worker node stack, and newNodes is the name of the stack that you are migrating to.
oldNodes="<old_node_CFN_stack_name>"
newNodes="<new_node_CFN_stack_name>"
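One possible sketch that reads the NodeSecurityGroup output from each stack; the query syntax and output key are assumptions based on the stack outputs described above.
oldSecGroup=$(aws cloudformation describe-stacks --stack-name $oldNodes \
--query 'Stacks[0].Outputs[?OutputKey==`NodeSecurityGroup`].OutputValue' --output text)
newSecGroup=$(aws cloudformation describe-stacks --stack-name $newNodes \
--query 'Stacks[0].Outputs[?OutputKey==`NodeSecurityGroup`].OutputValue' --output text)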
b. Add ingress rules to each worker node security group so that they accept traffic from each
other.
The following AWS CLI commands add ingress rules to each security group that allow all
traffic on all protocols from the other security group. This configuration allows pods in each
worker node group to communicate with each other while you are migrating your workload
to the new group.
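A sketch using the hypothetical oldSecGroup and newSecGroup variables from the previous step, allowing all traffic in both directions between the two groups:
aws ec2 authorize-security-group-ingress --group-id $oldSecGroup --source-group $newSecGroup --protocol '-1'
aws ec2 authorize-security-group-ingress --group-id $newSecGroup --source-group $oldSecGroup --protocol '-1'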
5. Edit the aws-auth configmap to map the new worker node instance role in RBAC.
Add a new mapRoles entry for the new worker node group.
apiVersion: v1
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    - rolearn: arn:aws:iam::111122223333:role/workers-1-10-NodeInstanceRole-U11V27W93CX5
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
Replace the <ARN of instance role (not instance profile)> snippet with the
NodeInstanceRole value that you recorded in Step 3 (p. 98), then save and close the file to
apply the updated configmap.
6. Watch the status of your nodes and wait for your new worker nodes to join your cluster and
reach the Ready status.
7. (Optional) If you are using the Kubernetes Cluster Autoscaler, scale the deployment down to 0
replicas to avoid conflicting scaling actions.
8. Use the following command to taint each of the nodes that you want to remove with
NoSchedule so that new pods are not scheduled or rescheduled on the nodes you are
replacing:
If you are upgrading your worker nodes to a new Kubernetes version, you can identify and taint
all of the nodes of a particular Kubernetes version (in this case, 1.13.12) with the following code
snippet.
K8S_VERSION=1.13.12
nodes=$(kubectl get nodes -o jsonpath="{.items[?(@.status.nodeInfo.kubeletVersion==
\"v$K8S_VERSION\")].metadata.name}")
for node in ${nodes[@]}
do
echo "Tainting $node"
kubectl taint nodes $node key=value:NoSchedule
done
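9. Determine whether your cluster uses kube-dns or coredns for DNS resolution. One way to check is to
list the deployments in the kube-system namespace:
kubectl get deployments --namespace kube-system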
Output (this cluster is using kube-dns for DNS resolution, but your cluster may return coredns
instead):
10. If your current deployment is running fewer than two replicas, scale out the deployment to two
replicas. Substitute coredns for kube-dns if your previous command output returned that
instead.
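For example, assuming a kube-dns deployment:
kubectl scale deployments/kube-dns --replicas=2 --namespace kube-system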
11. Drain each of the nodes that you want to remove from your cluster with the following
command:
If you are upgrading your worker nodes to a new Kubernetes version, you can identify and drain
all of the nodes of a particular Kubernetes version (in this case, 1.13.12) with the following code
snippet.
K8S_VERSION=1.13.12
nodes=$(kubectl get nodes -o jsonpath="{.items[?(@.status.nodeInfo.kubeletVersion==
\"v$K8S_VERSION\")].metadata.name}")
for node in ${nodes[@]}
do
echo "Draining $node"
kubectl drain $node --ignore-daemonsets --delete-local-data
done
12. After your old worker nodes have finished draining, revoke the security group ingress rules you
authorized earlier, and then delete the AWS CloudFormation stack to terminate the instances.
Note
If you have attached any additional IAM policies to your old node group IAM role, such
as adding permissions for the Kubernetes Cluster Autoscaler, you must detach those
additional policies from the role before you can delete your AWS CloudFormation stack.
a. Revoke the ingress rules that you created for your worker node security groups earlier. In
these commands, oldNodes is the AWS CloudFormation stack name for your older worker
node stack, and newNodes is the name of the stack that you are migrating to.
oldNodes="<old_node_CFN_stack_name>"
newNodes="<new_node_CFN_stack_name>"
Delete the mapRoles entry for the old worker node group.
apiVersion: v1
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/workers-1-11-NodeInstanceRole-W70725MZQFF8
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
    - rolearn: arn:aws:iam::111122223333:role/workers-1-10-NodeInstanceRole-U11V27W93CX5
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
15. (Optional) Verify that you are using the latest version of the Amazon VPC CNI plugin for
Kubernetes. You may need to update your CNI version to take advantage of the latest
supported instance types. For more information, see Amazon VPC CNI Plugin for Kubernetes
Upgrades (p. 166).
16. If your cluster is using kube-dns for DNS resolution (see step Step 9 (p. 100)), scale in the
kube-dns deployment to one replica.
Updating an Existing Worker Node Group
The latest default Amazon EKS worker node AWS CloudFormation template is configured to launch
an instance with the new AMI into your cluster before removing an old one, one at a time. This
configuration ensures that you always have your Auto Scaling group's desired count of active instances in
your cluster during the rolling update.
Note
This method is not supported for worker node groups that were created with eksctl. If you
created your cluster or worker node group with eksctl, see Migrating to a New Worker Node
Group (p. 97).
Output (this cluster is using kube-dns for DNS resolution, but your cluster may return coredns
instead):
2. If your current deployment is running fewer than two replicas, scale out the deployment to two
replicas. Substitute coredns for kube-dns if your previous command output returned that instead.
3. (Optional) If you are using the Kubernetes Cluster Autoscaler, scale the deployment down to zero
replicas to avoid conflicting scaling actions.
4. Determine the instance type and desired instance count of your current worker node group. You will
enter these values later when you update the AWS CloudFormation template for the group.
https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-11-15/amazon-eks-nodegroup.yaml
9. On the Specify stack details page, fill out the following parameters, and choose Next:
• NodeImageIdSSMParam – The Amazon EC2 Systems Manager parameter of the AMI ID that you
want to update to. The following value uses the latest Amazon EKS-optimized AMI for Kubernetes
version 1.15.
/aws/service/eks/optimized-ami/1.15/amazon-linux-2/recommended/image_id
You can change the 1.15 value to any supported Kubernetes version (p. 48). If you want to use
the Amazon EKS-optimized AMI with GPU support, then change amazon-linux-2 to amazon-
linux-2-gpu.
Note
Using the Amazon EC2 Systems Manager parameter enables you to update your worker
nodes in the future without having to look up and specify an AMI ID. If your AWS
CloudFormation stack is using this value, any stack update will always launch the latest
recommended Amazon EKS-optimized AMI for your specified Kubernetes version, even if
you don't change any values in the template.
• NodeImageId – To use your own custom AMI, enter the ID for the AMI to use.
Important
This value overrides any value specified for NodeImageIdSSMParam. If you want to use
the NodeImageIdSSMParam value, ensure that the value for NodeImageId is blank.
10. (Optional) On the Options page, tag your stack resources. Choose Next.
11. On the Review page, review your information, acknowledge that the stack might create IAM
resources, and then choose Update stack.
Note
The update of each node in the cluster takes several minutes. Wait for the update of all
nodes to complete before performing the next steps.
12. If your cluster's DNS provider is kube-dns, scale in the kube-dns deployment to one replica.
13. (Optional) If you are using the Kubernetes Cluster Autoscaler, scale the deployment back to one
replica.
14. (Optional) Verify that you are using the latest version of the Amazon VPC CNI plugin for Kubernetes.
You may need to update your CNI version to take advantage of the latest supported instance types.
For more information, see Amazon VPC CNI Plugin for Kubernetes Upgrades (p. 166).
Ubuntu AMIs
Canonical delivers a built-for-purpose Kubernetes Node OS image. This minimized Ubuntu image is
optimized for Amazon EKS and includes the custom AWS kernel that is jointly developed with AWS.
For more information, see Ubuntu and Amazon Elastic Kubernetes Service and Optimized Support for
Amazon EKS on Ubuntu 18.04.
AWS Fargate
This chapter discusses using Amazon EKS to run Kubernetes pods on AWS Fargate.
AWS Fargate is a technology that provides on-demand, right-sized compute capacity for containers.
With AWS Fargate, you no longer have to provision, configure, or scale groups of virtual machines to run
containers. This removes the need to choose server types, decide when to scale your node groups, or
optimize cluster packing.
You can control which pods start on Fargate and how they run with Fargate profiles (p. 110), which are
defined as part of your Amazon EKS cluster.
Amazon EKS integrates Kubernetes with AWS Fargate by using controllers that are built by AWS using
the upstream, extensible model provided by Kubernetes. These controllers run as part of the Amazon
EKS managed Kubernetes control plane and are responsible for scheduling native Kubernetes pods onto
Fargate. The Fargate controllers include a new scheduler that runs alongside the default Kubernetes
scheduler in addition to several mutating and validating admission controllers. When you start a pod
that meets the criteria for running on Fargate, the Fargate controllers running in the cluster recognize,
update, and schedule the pod onto Fargate.
Each pod running on Fargate has its own isolation boundary and does not share the underlying kernel,
CPU resources, memory resources, or elastic network interface with another pod.
This chapter describes the different components of pods running on Fargate, and calls out special
considerations for using Fargate with Amazon EKS.
AWS Fargate with Amazon EKS is currently only available in the following Regions:
EU (Ireland) eu-west-1
Topics
• AWS Fargate Considerations (p. 105)
• Getting Started with AWS Fargate on Amazon EKS (p. 106)
• AWS Fargate Profile (p. 110)
• Fargate Pod Configuration (p. 113)
AWS Fargate Considerations
• Classic Load Balancers and Network Load Balancers are not supported on pods running on Fargate. For
ingress, we recommend that you use the ALB Ingress Controller on Amazon EKS (p. 145) (minimum
version v1.1.4).
• Pods must match a Fargate profile at the time that they are scheduled in order to run on Fargate. Pods
which do not match a Fargate profile may be stuck as Pending. If a matching Fargate profile exists,
you can delete pending pods that you have created to reschedule them onto Fargate.
• Daemonsets are not supported on Fargate. If your application requires a daemon, you should
reconfigure that daemon to run as a sidecar container in your pods.
• Privileged containers are not supported on Fargate.
• Pods running on Fargate cannot specify HostPort or HostNetwork in the pod manifest.
• GPUs are currently not available on Fargate.
• Pods running on Fargate are only supported on private subnets (with NAT gateway access to AWS
services, but not a direct route to an Internet Gateway), so your cluster's VPC must have private
subnets available.
• We recommend using the Vertical Pod Autoscaler (p. 137) with pods running on Fargate to optimize
the CPU and memory used for your applications. However, because changing the resource allocation
for a pod requires the pod to be restarted, you must set the pod update policy to either Auto or
Recreate to ensure correct functionality.
• Stateful applications are not recommended for pods running on Fargate. Instead, we recommend that
you use AWS solutions such as Amazon S3 or DynamoDB for pod data storage.
• Fargate runs each pod in a VM-isolated environment without sharing resources with other pods.
However, because Kubernetes is a single-tenant orchestrator, Fargate cannot guarantee pod-level
security isolation. You should run sensitive workloads or untrusted workloads that need complete
security isolation using separate Amazon EKS clusters.
Getting Started with AWS Fargate on Amazon EKS
AWS Fargate with Amazon EKS is currently only available in the following Regions:
EU (Ireland) eu-west-1
If you restrict access to your cluster's public endpoint using CIDR blocks, it is recommended that you
also enable private endpoint access so that Fargate pods can communicate with the cluster. Without
the private endpoint enabled, the CIDR blocks that you specify for public access must include the egress
sources from your VPC. For more information, see Amazon EKS Cluster Endpoint Access Control (p. 35).
If you do not already have an Amazon EKS cluster that supports Fargate, you can create one with the
following eksctl command.
Note
This procedure assumes that you have installed eksctl, and that your eksctl version is at
least 0.15.0-rc.2. You can check your version with the following command:
eksctl version
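A sketch of the cluster creation command; the cluster name and Region are placeholders.
eksctl create cluster --name my-fargate-cluster --region eu-west-1 --fargate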
Adding the --fargate option in the command above creates a cluster without a node group. However,
eksctl creates a pod execution role, a Fargate profile for the default and kube-system namespaces,
and it patches the coredns deployment so that it can run on Fargate.
Ensure that Existing Nodes can Communicate with Fargate Pods
If you are working with an existing cluster that already has worker nodes associated with it, you need
to make sure that pods on these nodes can communicate freely with pods running on Fargate. Pods
running on Fargate are automatically configured to use the cluster security group for the cluster that
they are associated with. You must ensure that any existing worker nodes in your cluster can send and
receive traffic to and from the cluster security group. Managed Node Groups (p. 82) are automatically
configured to use the cluster security group as well, so you do not need to modify or check them for this
compatibility.
For existing node groups that were created with eksctl or the Amazon EKS-managed AWS
CloudFormation templates, you can add the cluster security group to the nodes manually, or you can
modify the node group's Auto Scaling group launch template to attach the cluster security group to the
instances. For more information, see Changing an Instance's Security Groups in the Amazon VPC User
Guide.
You can check for a cluster security group for your cluster in the AWS Management Console under the
cluster's Networking section, or with the following AWS CLI command:
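For example, a command similar to the following returns the cluster security group ID. Replace my-cluster with your own cluster name.
aws eks describe-cluster --name my-cluster --query cluster.resourcesVpcConfig.clusterSecurityGroupId --output text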
When you create a Fargate profile, you must specify a pod execution role to use with your pods. This role
is added to the cluster's Kubernetes Role Based Access Control (RBAC) for authorization. This allows the
kubelet that is running on the Fargate infrastructure to register with your Amazon EKS cluster so that it
can appear in your cluster as a node. For more information, see Pod Execution Role (p. 241).
Choose the tab below that corresponds to your preferred Fargate profile creation method.
eksctl
This procedure assumes that you have installed eksctl, and that your eksctl version is at least
0.15.0-rc.2. You can check your version with the following command:
eksctl version
• Create your Fargate profile with the following eksctl command, replacing the variable
text with your own values. You must specify a namespace, but the labels option is not required.
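For example, a command similar to the following creates a profile. The cluster name, profile name, namespace, and label shown are placeholder values.
eksctl create fargateprofile --cluster my-cluster --name my-fargate-profile --namespace my-namespace --labels key=value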
To create a Fargate profile for a cluster with the AWS Management Console
a. For Namespace, enter a namespace to match for pods, such as kube-system or default.
b. (Optional) Add Kubernetes labels to the selector that pods in the specified namespace
must have to match the selector. For example, you could add the label infrastructure:
fargate to the selector so that only pods in the specified namespace that also have the
infrastructure: fargate Kubernetes label match the selector.
6. On the Review and create page, review the information for your Fargate profile and choose
Create.
{
"fargateProfileName": "coredns",
"clusterName": "dev",
"podExecutionRoleArn": "arn:aws:iam::111122223333:role/
AmazonEKSFargatePodExecutionRole",
"subnets": [
"subnet-0b64dd020cdff3864",
"subnet-00b03756df55e2b87",
"subnet-0418fcb68ed294abf"
],
"selectors": [
{
"namespace": "kube-system",
"labels": {
"k8s-app": "kube-dns"
}
}
]
}
You could apply this Fargate profile to your cluster with the following AWS CLI command. First, create
a file called coredns.json and paste the JSON file from the previous step into it, replacing the
variable text with your own cluster values.
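Then create the profile from that file. For example:
aws eks create-fargate-profile --cli-input-json file://coredns.json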
Next Steps
• You can start migrating your existing applications to run on Fargate with the following workflow.
1. Create a Fargate profile (p. 112) that matches your application's Kubernetes namespace and
Kubernetes labels.
2. Delete and re-create any existing pods so that they are scheduled on Fargate. For example, the
following command triggers a rollout of the coredns Deployment. You can modify the namespace
and deployment type to update your specific pods.
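For example, a command similar to the following restarts the coredns Deployment in the kube-system namespace so that its pods are rescheduled onto Fargate.
kubectl rollout restart -n kube-system deployment coredns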
• Deploy the ALB Ingress Controller on Amazon EKS (p. 145) (version v1.1.4 or later) to allow Ingress
objects for your pods running on Fargate.
• Deploy the Vertical Pod Autoscaler (p. 137) for your pods running on Fargate to optimize the
CPU and memory used for your applications. Be sure to set the pod update policy to either Auto or
Recreate to ensure correct functionality.
The Fargate profile allows an administrator to declare which pods run on Fargate. This declaration is
done through the profile’s selectors. Each profile can have up to five selectors that contain a namespace
and optional labels. You must define a namespace for every selector. The label field consists of multiple
optional key-value pairs. Pods that match a selector (by matching a namespace for the selector and all of
the labels specified in the selector) are scheduled on Fargate. If a namespace selector is defined without
any labels, Amazon EKS will attempt to schedule all pods that run in that namespace onto Fargate using
the profile. If a to-be-scheduled pod matches any of the selectors in the Fargate profile, then that pod is
scheduled on Fargate.
If a pod matches multiple Fargate profiles, Amazon EKS picks one of the matches at random. In this
case, you can specify which profile a pod should use by adding the following Kubernetes label to the pod
specification: eks.amazonaws.com/fargate-profile: profile_name. However, the pod must still
match a selector in that profile in order to be scheduled onto Fargate.
When you create a Fargate profile, you must specify a pod execution role for the pods that run on
Fargate using the profile. This role is added to the cluster's Kubernetes Role Based Access Control (RBAC)
for authorization so that the kubelet that is running on the Fargate infrastructure can register with
your Amazon EKS cluster and appear in your cluster as a node. The pod execution role also provides IAM
permissions to the Fargate infrastructure to allow read access to Amazon ECR image repositories. For
more information, see Pod Execution Role (p. 241).
Fargate profiles are immutable. However, you can create a new updated profile to replace an existing
profile and then delete the original after the updated profile has finished creating.
Note
Any pods that are running using a Fargate profile will be stopped and put into pending when
the profile is deleted.
If any Fargate profiles in a cluster are in the DELETING status, you must wait for that Fargate profile to
finish deleting before you can create any other profiles in that cluster.
{
"fargateProfileName": "",
"clusterName": "",
"podExecutionRoleArn": "",
"subnets": [
""
],
"selectors": [
{
"namespace": "",
"labels": {
"KeyName": ""
}
}
],
"clientRequestToken": "",
"tags": {
"KeyName": ""
}
}
When your cluster creates pods on AWS Fargate, the pod needs to make calls to AWS APIs on your
behalf, for example, to pull container images from Amazon ECR. The Amazon EKS pod execution role
provides the IAM permissions to do this.
When you create a Fargate profile, you must specify a pod execution role to use with your pods. This
role is added to the cluster's Kubernetes Role Based Access Control (RBAC) for authorization, so that
the kubelet that is running on the Fargate infrastructure can register with your Amazon EKS cluster
and appear in your cluster as a node. For more information, see Pod Execution Role (p. 241).
Subnets
The IDs of subnets to launch pods into that use this profile. At this time, pods running on Fargate
are not assigned public IP addresses, so only private subnets (with no direct route to an Internet
Gateway) are accepted for this parameter.
Selectors
The selectors to match for pods to use this Fargate profile. Each selector must have an associated
namespace. Optionally, you can also specify labels for a namespace. You may specify up to five
selectors in a Fargate profile. A pod only needs to match one selector to run using the Fargate
profile.
Namespace
You must specify a namespace for a selector. The selector only matches pods that are created in
this namespace, but you can create multiple selectors to target multiple namespaces.
Labels
You can optionally specify Kubernetes labels to match for the selector. The selector only
matches pods that have all of the labels that are specified in the selector.
eksctl
This procedure assumes that you have installed eksctl, and that your eksctl version is at least 0.15.0-rc.2. You can check your version with the following command:
eksctl version
• Create your Fargate profile with the following eksctl command, replacing the variable
text with your own values. You must specify a namespace, but the labels option is not required.
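For example, a command similar to the following creates a profile. The cluster name, profile name, namespace, and label shown are placeholder values.
eksctl create fargateprofile --cluster my-cluster --name my-fargate-profile --namespace my-namespace --labels key=value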
To create a Fargate profile for a cluster with the AWS Management Console
a. For Namespace, enter a namespace to match for pods, such as kube-system or default.
b. (Optional) Add Kubernetes labels to the selector that pods in the specified namespace
must have to match the selector. For example, you could add the label infrastructure:
fargate to the selector so that only pods in the specified namespace that also have the
infrastructure: fargate Kubernetes label match the selector.
6. On the Review and create page, review the information for your Fargate profile and choose
Create.
When you delete a Fargate profile, any pods that were scheduled onto Fargate with the profile are
deleted. If those pods match another Fargate profile, then they are scheduled on Fargate with that
profile. If they no longer match any Fargate profiles, then they are not scheduled onto Fargate and may
remain as pending.
Only one Fargate profile in a cluster can be in the DELETING status at a time. You must wait for a
Fargate profile to finish deleting before you can delete any other profiles in that cluster.
When pods are scheduled on Fargate, the vCPU and memory reservations within the pod specification
determine how much CPU and memory to provision for the pod.
• The maximum request out of any Init containers is used to determine the Init request vCPU and
memory requirements.
• Requests for all long-running containers are added up to determine the long-running request vCPU
and memory requirements.
• The larger of the above two values is chosen for the vCPU and memory request to use for your pod.
• Fargate adds 256 MB to each pod's memory reservation for the required Kubernetes components
(kubelet, kube-proxy, and containerd).
Fargate rounds up to the compute configuration shown below that most closely matches the sum of
vCPU and memory requests in order to ensure pods always have the resources that they need to run.
If you do not specify a vCPU and memory combination, then the smallest available combination is used
(.25 vCPU and 0.5 GB memory).
The table below shows the vCPU and memory combinations that are available for pods running on
Fargate.
For pricing information on these compute configurations, see AWS Fargate Pricing.
Fargate Storage
When provisioned, each pod running on Fargate receives 10 GB of container image layer storage. Pod
storage is ephemeral. After a pod stops, the storage is deleted.
Storage
This chapter covers storage options for Amazon EKS clusters.
The Storage Classes (p. 115) topic uses the in-tree Amazon EBS storage provisioner. The Amazon EBS
CSI Driver (p. 116) is available for managing storage in Kubernetes 1.14 and later clusters.
Note
The existing in-tree Amazon EBS plugin is still supported, but by using a CSI driver, you
benefit from the decoupling of Kubernetes upstream release cycle and CSI driver release cycle.
Eventually, the in-tree plugin will be deprecated in favor of the CSI driver.
Topics
• Storage Classes (p. 115)
• Amazon EBS CSI Driver (p. 116)
• Amazon EFS CSI Driver (p. 120)
• Amazon FSx for Lustre CSI Driver (p. 124)
Storage Classes
Amazon EKS clusters that were created prior to Kubernetes version 1.11 were not created with any
storage classes. You must define storage classes for your cluster to use and you should define a default
storage class for your persistent volume claims. For more information, see Storage Classes in the
Kubernetes documentation.
Note
This topic uses the in-tree Amazon EBS storage provisioner. For Kubernetes 1.14 and later
clusters, the Amazon EBS CSI Driver (p. 116) is available for managing storage. The existing
in-tree Amazon EBS plugin is still supported, but by using a CSI driver, you benefit from the
decoupling of Kubernetes upstream release cycle and CSI driver release cycle. Eventually, the in-
tree plugin will be deprecated in favor of the CSI driver.
1. Create an AWS storage class manifest file for your storage class. The gp2-storage-class.yaml
example below defines a storage class called gp2 that uses the Amazon EBS gp2 volume type.
For more information about the options available for AWS storage classes, see AWS EBS in the
Kubernetes documentation.
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
name: gp2
annotations:
storageclass.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/aws-ebs
parameters:
type: gp2
fsType: ext4
2. Use kubectl to create the storage class from the manifest file.
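For example, assuming that you saved the manifest as gp2-storage-class.yaml:
kubectl create -f gp2-storage-class.yaml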
Output:
1. List the existing storage classes for your cluster. A storage class must be defined before you can set it
as a default.
Output:
2. Choose a storage class and set it as your default by setting the storageclass.kubernetes.io/
is-default-class=true annotation.
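For example, the following command marks the gp2 storage class as the default:
kubectl patch storageclass gp2 -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'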
Output:
Output:
This topic shows you how to deploy the Amazon EBS CSI Driver to your Amazon EKS cluster and verify
that it works. We recommend using version v0.4.0 of the driver.
Note
This driver is only supported on Kubernetes version 1.14 and later Amazon EKS clusters and
worker nodes. Alpha features of the Amazon EBS CSI Driver are not supported on Amazon
EKS clusters. The driver is in Beta release. It is well tested and supported by Amazon EKS for
production use. Support for the driver will not be dropped, though details may change. If the
schema or schematics of the driver changes, instructions for migrating to the next version will
be provided.
For detailed descriptions of the available parameters and complete examples that demonstrate the
driver's features, see the Amazon EBS Container Storage Interface (CSI) Driver project on GitHub.
1. Create an IAM policy called Amazon_EBS_CSI_Driver for your worker node instance profile that
allows the Amazon EBS CSI Driver to make calls to AWS APIs on your behalf. Use the following AWS
CLI commands to create the IAM policy in your AWS account. You can view the policy document on
GitHub.
curl -O https://raw.githubusercontent.com/kubernetes-sigs/aws-ebs-csi-driver/v0.4.0/docs/example-iam-policy.json
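Then create the policy from the downloaded document. For example:
aws iam create-policy --policy-name Amazon_EBS_CSI_Driver --policy-document file://example-iam-policy.json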
2. Identify the IAM role that is attached to your worker nodes, for example by describing the aws-auth configmap in the kube-system namespace. The output is similar to the following.
Name: aws-auth
Namespace: kube-system
Labels: <none>
Annotations: <none>
Data
====
mapRoles:
----
- groups:
- system:bootstrappers
- system:nodes
rolearn: arn:aws:iam::111122223333:role/eksctl-alb-nodegroup-ng-b1f603c5-
NodeInstanceRole-GKNS581EASPU
username: system:node:{{EC2PrivateDNSName}}
Events: <none>
Record the role name for any rolearn values that have the system:nodes group assigned to
them. In the previous example output, the role name is eksctl-alb-nodegroup-ng-b1f603c5-
NodeInstanceRole-GKNS581EASPU. You should have one value for each node group in your
cluster.
3. Attach the new Amazon_EBS_CSI_Driver IAM policy to each of the worker node IAM roles you
identified earlier with the following command, substituting the example text with your own AWS account
number and worker node IAM role name.
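For example, using the example values from the previous output:
aws iam attach-role-policy --policy-arn arn:aws:iam::111122223333:policy/Amazon_EBS_CSI_Driver --role-name eksctl-alb-nodegroup-ng-b1f603c5-NodeInstanceRole-GKNS581EASPU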
4. Deploy the Amazon EBS CSI Driver with the following command.
Note
This command requires version 1.14 or later of kubectl. You can see your kubectl version
with the following command. To install or upgrade your kubectl version, see Installing
kubectl (p. 174).
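A command similar to the following applies the driver manifests from the project repository. The manifest path may differ for other driver versions.
kubectl apply -k "github.com/kubernetes-sigs/aws-ebs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master"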
To deploy a sample application and verify that the CSI driver is working
This procedure uses the Dynamic Volume Provisioning example from the Amazon EBS Container Storage
Interface (CSI) Driver GitHub repository to consume a dynamically-provisioned Amazon EBS volume.
1. Clone the Amazon EBS Container Storage Interface (CSI) Driver GitHub repository to your local
system.
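For example:
git clone https://github.com/kubernetes-sigs/aws-ebs-csi-driver.git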
cd aws-ebs-csi-driver/examples/kubernetes/dynamic-provisioning/
3. Deploy the ebs-sc storage class, ebs-claim persistent volume claim, and app sample application
from the specs directory.
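For example:
kubectl apply -f specs/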
Output:
Name: ebs-sc
IsDefaultClass: No
Annotations: kubectl.kubernetes.io/last-applied-
configuration={"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":
{"annotations":{},"name":"ebs-
sc"},"provisioner":"ebs.csi.aws.com","volumeBindingMode":"WaitForFirstConsumer"}
Provisioner: ebs.csi.aws.com
Parameters: <none>
AllowVolumeExpansion: <unset>
MountOptions: <none>
ReclaimPolicy: Delete
VolumeBindingMode: WaitForFirstConsumer
Events: <none>
Note that the storage class uses the WaitForFirstConsumer volume binding mode. This means
that volumes are not dynamically provisioned until a pod makes a persistent volume claim. For more
information, see Volume Binding Mode in the Kubernetes documentation.
5. Watch the pods in the default namespace and wait for the app pod to become ready.
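For example:
kubectl get pods --watch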
6. List the persistent volumes in the default namespace. Look for a persistent volume with the
default/ebs-claim claim.
kubectl get pv
Output:
Output:
Name: pvc-37717cd6-d0dc-11e9-b17f-06fad4858a5a
Labels: <none>
Annotations: pv.kubernetes.io/provisioned-by: ebs.csi.aws.com
Finalizers: [kubernetes.io/pv-protection external-attacher/ebs-csi-aws-com]
StorageClass: ebs-sc
Status: Bound
Claim: default/ebs-claim
Reclaim Policy: Delete
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 4Gi
Node Affinity:
Required Terms:
Term 0: topology.ebs.csi.aws.com/zone in [regiona]
Message:
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: ebs.csi.aws.com
VolumeHandle: vol-0d651e157c6d93445
ReadOnly: false
VolumeAttributes: storage.kubernetes.io/
csiProvisionerIdentity=1567792483192-8081-ebs.csi.aws.com
Events: <none>
Output:
9. When you finish experimenting, delete the resources for this sample application to clean up.
This topic shows you how to deploy the Amazon EFS CSI Driver to your Amazon EKS cluster and verify
that it works.
Note
This driver is supported on Kubernetes version 1.14 and later Amazon EKS clusters and worker
nodes. Alpha features of the Amazon EFS CSI Driver are not supported on Amazon EKS clusters.
The driver is in Beta release. It is well tested and supported by Amazon EKS for production
use. Support for the driver will not be dropped, though details may change. If the schema or
schematics of the driver changes, instructions for migrating to the next version will be provided.
For detailed descriptions of the available parameters and complete examples that demonstrate the
driver's features, see the Amazon EFS Container Storage Interface (CSI) Driver project on GitHub.
• Deploy the Amazon EFS CSI Driver with the following command.
Note
This command requires kubectl version 1.14 or later. You can see your kubectl version
with the following command. To install or upgrade your kubectl version, see Installing
kubectl (p. 174).
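A command similar to the following applies the driver manifests from the project repository. The manifest path may differ for other driver versions.
kubectl apply -k "github.com/kubernetes-sigs/aws-efs-csi-driver/deploy/kubernetes/overlays/stable/?ref=master"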
To create an Amazon EFS file system for your Amazon EKS cluster
1. Locate the VPC ID for your Amazon EKS cluster. You can find this ID in the Amazon EKS console, or
you can use the following AWS CLI command.
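For example, replacing my-cluster with your own cluster name:
aws eks describe-cluster --name my-cluster --query "cluster.resourcesVpcConfig.vpcId" --output text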
Output:
vpc-exampledb76d3e813
2. Locate the CIDR range for your cluster's VPC. You can find this in the Amazon VPC console, or you
can use the following AWS CLI command.
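For example, using the VPC ID from the previous step:
aws ec2 describe-vpcs --vpc-ids vpc-exampledb76d3e813 --query "Vpcs[].CidrBlock" --output text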
Output:
192.168.0.0/16
3. Create a security group that allows inbound NFS traffic for your Amazon EFS mount points.
a. Choose the security group that you created in the previous step.
b. Choose the Inbound Rules tab and then choose Edit rules.
c. Choose Add Rule, fill out the following fields, and then choose Save rules.
• Type: NFS
• Source: Custom. Paste the VPC CIDR range.
• Description: Add a description, such as "Allows inbound NFS traffic from within the VPC."
5. Create the Amazon EFS file system for your Amazon EKS cluster.
To deploy a sample application and verify that the CSI driver is working
This procedure uses the Multiple Pods Read Write Many example from the Amazon EFS Container
Storage Interface (CSI) Driver GitHub repository to consume a statically provisioned Amazon EFS
persistent volume and access it from multiple pods with the ReadWriteMany access mode.
1. Clone the Amazon EFS Container Storage Interface (CSI) Driver GitHub repository to your local
system.
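For example:
git clone https://github.com/kubernetes-sigs/aws-efs-csi-driver.git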
cd aws-efs-csi-driver/examples/kubernetes/multiple_pods/
3. Retrieve your Amazon EFS file system ID. You can find this in the Amazon EFS console, or use the
following AWS CLI command.
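For example:
aws efs describe-file-systems --query "FileSystems[*].FileSystemId" --output text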
Output:
fs-582a03f3
4. Edit the specs/pv.yaml file and replace the volumeHandle value with your Amazon EFS file
system ID.
apiVersion: v1
kind: PersistentVolume
metadata:
name: efs-pv
spec:
capacity:
storage: 5Gi
volumeMode: Filesystem
accessModes:
- ReadWriteMany
persistentVolumeReclaimPolicy: Retain
storageClassName: efs-sc
csi:
driver: efs.csi.aws.com
volumeHandle: fs-582a03f3
Note
Because Amazon EFS is an elastic file system, it does not enforce any file system capacity
limits. The actual storage capacity value in persistent volumes and persistent volume claims
is not used when creating the file system. However, since storage capacity is a required field
in Kubernetes, you must specify a valid value, such as, 5Gi in this example. This value does
not limit the size of your Amazon EFS file system.
5. Deploy the efs-sc storage class, efs-claim persistent volume claim, efs-pv persistent volume,
and app1 and app2 sample applications from the specs directory.
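For example:
kubectl apply -f specs/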
6. Watch the pods in the default namespace and wait for the app1 and app2 pods to become ready.
7. List the persistent volumes in the default namespace. Look for a persistent volume with the
default/efs-claim claim.
kubectl get pv
Output:
Output:
Name: efs-pv
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolume","metadata":
{"annotations":{},"name":"efs-pv"},"spec":{"accessModes":["ReadWriteMany"],"capaci...
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: efs-sc
Status: Bound
Claim: default/efs-claim
Reclaim Policy: Retain
Access Modes: RWX
VolumeMode: Filesystem
Capacity: 5Gi
Node Affinity: <none>
Message:
Source:
Type: CSI (a Container Storage Interface (CSI) volume source)
Driver: efs.csi.aws.com
VolumeHandle: fs-582a03f3
ReadOnly: false
VolumeAttributes: <none>
Events: <none>
Output:
10. Verify that the app2 pod shows the same data in the volume.
Output:
11. When you finish experimenting, delete the resources for this sample application to clean up.
This topic shows you how to deploy the Amazon FSx for Lustre CSI Driver to your Amazon EKS cluster
and verify that it works. We recommend using version 0.3.0 of the driver.
Note
This driver is supported on Kubernetes version 1.15 and later Amazon EKS clusters and worker
nodes. Alpha features of the Amazon FSx for Lustre CSI Driver are not supported on Amazon
EKS clusters. The driver is in Beta release. It is well tested and supported by Amazon EKS for
production use. Support for the driver will not be dropped, though details may change. If the
schema or schematics of the driver changes, instructions for migrating to the next version will
be provided.
For detailed descriptions of the available parameters and complete examples that demonstrate the
driver's features, see the Amazon FSx for Lustre Container Storage Interface (CSI) Driver project on
GitHub.
Prerequisites
• Version 1.18.17 or later of the AWS CLI installed. You can check your currently-installed version with
the aws --version command. To install or upgrade the AWS CLI, see Installing the AWS CLI.
• An existing Amazon EKS cluster. If you don't currently have a cluster, see ??? (p. 3) to create one.
• Version 0.15.0-rc.2 or later of eksctl installed. You can check your currently-installed version
with the eksctl version command. To install or upgrade eksctl, see Installing or Upgrading
eksctl (p. 189).
• The latest version of kubectl installed that aligns to your cluster version. You can check your
currently-installed version with the kubectl version --short --client command. For more
information, see Installing kubectl (p. 174).
To deploy the Amazon FSx for Lustre CSI Driver to an Amazon EKS cluster
1. Create an AWS Identity and Access Management OIDC provider and associate it with your cluster.
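For example, replacing my-cluster with your own cluster name:
eksctl utils associate-iam-oidc-provider --cluster my-cluster --approve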
2. Create an IAM policy and service account that allows the driver to make calls to AWS APIs on your
behalf.
{
"Version":"2012-10-17",
"Statement":[
{
"Effect":"Allow",
"Action":[
"iam:CreateServiceLinkedRole",
"iam:AttachRolePolicy",
"iam:PutRolePolicy"
],
"Resource":"arn:aws:iam::*:role/aws-service-role/s3.data-
source.lustre.fsx.amazonaws.com/*"
},
{
"Action":"iam:CreateServiceLinkedRole",
"Effect":"Allow",
"Resource":"*",
"Condition":{
"StringLike":{
"iam:AWSServiceName":[
"fsx.amazonaws.com"
]
}
}
},
{
"Effect":"Allow",
"Action":[
"s3:ListBucket",
"fsx:CreateFileSystem",
"fsx:DeleteFileSystem",
"fsx:DescribeFileSystems"
],
"Resource":[
"*"
]
}
]
}
Take note of the policy Amazon Resource Name (ARN) that is returned.
3. Create a Kubernetes service account for the driver and attach the policy to the service account, replacing the ARN of the policy with the ARN returned in the previous step.
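A command similar to the following creates the service account. The cluster name prod and the policy ARN shown are placeholder values; use your own cluster name and the ARN from the previous step.
eksctl create iamserviceaccount --name fsx-csi-controller-sa --namespace kube-system --cluster prod --attach-policy-arn arn:aws:iam::111122223333:policy/Amazon_FSx_Lustre_CSI_Driver --approve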
Output:
You'll see several lines of output as the service account is created. The last line of output is similar to
the following example line.
Note the name of the AWS CloudFormation stack that was deployed. In the example output above, the stack is named eksctl-prod-addon-iamserviceaccount-kube-system-fsx-csi-controller-sa.
4. Note the Role ARN for the role that was created.
Output
Warning: kubectl apply should be used on resource created by either kubectl create --
save-config or kubectl apply
serviceaccount/fsx-csi-controller-sa configured
clusterrole.rbac.authorization.k8s.io/fsx-csi-external-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/fsx-csi-external-provisioner-binding
created
deployment.apps/fsx-csi-controller created
daemonset.apps/fsx-csi-node created
csidriver.storage.k8s.io/fsx.csi.aws.com created
6. Patch the driver deployment to add the service account that you created in step 3, replacing the ARN
with the ARN that you noted in step 4.
To deploy a Kubernetes storage class, persistent volume claim, and sample application to
verify that the CSI driver is working
This procedure uses the Dynamic Volume Provisioning for Amazon S3 from the Amazon FSx for Lustre
Container Storage Interface (CSI) Driver GitHub repository to consume a dynamically-provisioned
Amazon FSx for Lustre volume.
1. Create an Amazon S3 bucket and a folder within it named export by creating and copying a file to
the bucket.
aws s3 mb s3://fsx-csi
3. Edit the storageclass.yaml file and replace the example values with your own.
parameters:
subnetId: subnet-056da83524edbe641
securityGroupIds: sg-086f61ea73388fb6b
s3ImportPath: s3://ml-training-data-000
s3ExportPath: s3://ml-training-data-000/export
deploymentType: SCRATCH_2
• subnetId – The subnet ID that the Amazon FSx for Lustre file system should be created in. Amazon
FSx for Lustre is not supported in all availability zones. Open the Amazon FSx for Lustre console
at https://console.aws.amazon.com/fsx/ to confirm that the subnet that you want to use is in
a supported availability zone. The subnet can include your worker nodes, or can be a different
subnet or VPC. If the subnet that you specify is not the same subnet that you have worker nodes
in, then your VPCs must be connected, and you must ensure that you have the necessary ports
open in your security groups.
• securityGroupIds – The security group ID for your worker nodes.
• s3ImportPath – The Amazon Simple Storage Service data repository that you want to copy data
from to the persistent volume. Specify the fsx-csi bucket that you created in step 1.
• s3ExportPath – The Amazon S3 data repository that you want to export new or modified files to.
Specify the fsx-csi/export folder that you created in step 1.
• deploymentType – The file system deployment type. Valid values are SCRATCH_1, SCRATCH_2,
and PERSISTENT_1. For more information about deployment types, see Create Your Amazon FSx
for Lustre File System.
Note
The Amazon S3 bucket for s3ImportPath and s3ExportPath must be the same,
otherwise the driver cannot create the Amazon FSx for Lustre file system. The
s3ImportPath can stand alone. A random path will be created automatically like s3://ml-training-data-000/FSxLustre20190308T012310Z. The s3ExportPath cannot be used without specifying a value for s3ImportPath.
4. Create the storageclass.
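For example, assuming the file is named storageclass.yaml:
kubectl apply -f storageclass.yaml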
6. (Optional) Edit the claim.yaml file. Change the following value to one of the increment values
listed below, based on your storage requirements and the deploymentType that you selected in a
previous step.
storage: 1200Gi
• SCRATCH_2 and PERSISTENT_1 – 1.2 TiB, 2.4 TiB, or increments of 2.4 TiB over 2.4 TiB.
• SCRATCH_1 – 1.2 TiB, 2.4 TiB, 3.6 TiB, or increments of 3.6 TiB over 3.6 TiB.
7. Create the persistent volume claim.
Output.
Note
The STATUS may show as Pending for 5-10 minutes, before changing to Bound. Don't
continue with the next step until the STATUS is Bound.
9. Deploy the sample application.
Output
Access Amazon S3 files from the Amazon FSx for Lustre file system
If you only want to import data and read it without any modification and creation, then you don't need a
value for s3ExportPath in your storageclass.yaml file. Verify that data was written to the Amazon
FSx for Lustre file system by the sample app.
Output.
export out.txt
The sample app wrote the out.txt file to the file system.
For new files and modified files, you can use the Lustre user space tool to archive the data back to
Amazon S3 using the value that you specified for s3ExportPath.
Note
• New files aren't synced back to Amazon S3 automatically. In order to sync files to
the s3ExportPath, you need to install the Lustre client in your container image and
manually run the lfs hsm_archive command. The container should run in privileged
mode with the CAP_SYS_ADMIN capability.
• This example uses a lifecycle hook to install the Lustre client for demonstration purposes. The typical approach is to build a container image that includes the Lustre client.
2. Confirm that the out.txt file was written to the s3ExportPath folder in Amazon S3.
aws s3 ls fsx-csi/export/
Output
Autoscaling
This chapter covers various autoscaling configurations for your Amazon EKS cluster. There are several
types of Kubernetes autoscaling supported in Amazon EKS:
• Cluster Autoscaler (p. 130) — The Kubernetes Cluster Autoscaler automatically adjusts the number of
nodes in your cluster when pods fail to launch due to lack of resources or when nodes in the cluster are
underutilized and their pods can be rescheduled on to other nodes in the cluster.
• Horizontal Pod Autoscaler (p. 134) — The Kubernetes Horizontal Pod Autoscaler automatically scales
the number of pods in a deployment, replication controller, or replica set based on that resource's CPU
utilization.
• Vertical Pod Autoscaler (p. 137) — The Kubernetes Vertical Pod Autoscaler automatically adjusts the
CPU and memory reservations for your pods to help "right size" your applications. This can help you to
better use your cluster resources and free up CPU and memory for other pods.
Cluster Autoscaler
The Kubernetes Cluster Autoscaler automatically adjusts the number of nodes in your cluster when pods
fail to launch due to lack of resources or when nodes in the cluster are underutilized and their pods can
be rescheduled onto other nodes in the cluster.
This topic shows you how to deploy the Cluster Autoscaler to your Amazon EKS cluster and how to
configure it to modify your Amazon EC2 Auto Scaling groups. The Cluster Autoscaler modifies your
worker node groups so that they scale out when you need more resources and scale in when you have
underutilized resources.
If you are running a stateful application across multiple Availability Zones that is backed by Amazon
EBS volumes and using the Kubernetes Cluster Autoscaler (p. 130), you should configure multiple
node groups, each scoped to a single Availability Zone. In addition, you should enable the --balance-
similar-node-groups feature. Otherwise, you can create a single node group that spans multiple
Availability Zones.
Choose one of the cluster creation procedures below that meets your requirements.
To create a cluster with a single managed group that spans multiple Availability Zones
• Create an Amazon EKS cluster with a single managed node group with the following eksctl
command. For more information, see Creating an Amazon EKS Cluster (p. 21). Substitute the
variable text with your own values.
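A representative command looks like the following. The cluster name, version, and Region are placeholder values; the --managed option creates a managed node group and --asg-access adds the IAM permissions that the Cluster Autoscaler needs.
eksctl create cluster --name my-cluster --version 1.15 --region region-code --managed --asg-access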
Output:
To create a cluster with a dedicated managed node group for each Availability Zone
1. Create an Amazon EKS cluster with no node groups with the following eksctl command. For more
information, see Creating an Amazon EKS Cluster (p. 21). Note the Availability Zones that the cluster
is created in. You will use these Availability Zones when you create your node groups. Substitute the
variable text with your own values.
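A representative command looks like the following. The cluster name, version, and Region are placeholder values.
eksctl create cluster --name my-cluster --version 1.15 --region region-code --without-nodegroup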
Output:
This cluster was created in the following Availability Zones: region-codea region-codec
region-codeb.
2. For each Availability Zone in your cluster, use the following eksctl command to create a node
group. Substitute the variable text with your own values. This command creates an Auto Scaling
group with a minimum count of one and a maximum count of ten.
If you used the previous eksctl commands to create your node groups, these permissions are
automatically provided and attached to your worker node IAM roles. If you did not use eksctl, you must
create an IAM policy with the following document and attach it to your worker node IAM roles. For more
information, see Modifying a Role in the IAM User Guide.
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"autoscaling:DescribeAutoScalingGroups",
"autoscaling:DescribeAutoScalingInstances",
"autoscaling:DescribeLaunchConfigurations",
"autoscaling:DescribeTags",
"autoscaling:SetDesiredCapacity",
"autoscaling:TerminateInstanceInAutoScalingGroup",
"ec2:DescribeLaunchTemplateVersions"
],
"Resource": "*",
"Effect": "Allow"
}
]
}
If you used the previous eksctl commands to create your node groups, these tags are automatically
applied. If not, you must manually tag your Auto Scaling groups with the following tags. For more
information, see Tagging Your Amazon EC2 Resources in the Amazon EC2 User Guide for Linux Instances.
Key                                          Value
k8s.io/cluster-autoscaler/<cluster-name>     owned
k8s.io/cluster-autoscaler/enabled            true
1. Deploy the Cluster Autoscaler to your cluster with the following command.
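A command similar to the following applies the autodiscovery example manifest from the Cluster Autoscaler project:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/autoscaler/master/cluster-autoscaler/cloudprovider/aws/examples/cluster-autoscaler-autodiscover.yaml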
Edit the cluster-autoscaler container command to replace <YOUR CLUSTER NAME> with your
cluster's name, and add the following options.
• --balance-similar-node-groups
• --skip-nodes-with-system-pods=false
spec:
containers:
- command:
- ./cluster-autoscaler
- --v=4
- --stderrthreshold=info
- --cloud-provider=aws
- --skip-nodes-with-local-storage=false
- --expander=least-waste
- --node-group-auto-discovery=asg:tag=k8s.io/cluster-autoscaler/enabled,k8s.io/cluster-autoscaler/<YOUR CLUSTER NAME>
- --balance-similar-node-groups
- --skip-nodes-with-system-pods=false
Horizontal Pod Autoscaler
The Horizontal Pod Autoscaler is a standard API resource in Kubernetes that simply requires that a
metrics source (such as the Kubernetes metrics server) is installed on your Amazon EKS cluster to work.
You do not need to deploy or install the Horizontal Pod Autoscaler on your cluster to begin scaling your
applications. For more information, see Horizontal Pod Autoscaler in the Kubernetes documentation.
Use this topic to prepare the Horizontal Pod Autoscaler for your Amazon EKS cluster and to verify that it
is working with a sample application.
Note
This topic is based on the Horizontal Pod Autoscaler Walkthrough in the Kubernetes
documentation.
If you have already deployed the metrics server to your cluster, you can move on to the next section. You
can check for the metrics server with the following command.
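For example:
kubectl -n kube-system get deployment/metrics-server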
If this command returns a NotFound error, then you must deploy the metrics server to your Amazon EKS
cluster. Choose the tab below that corresponds to your preferred installation method.
curl and jq
To install metrics-server from GitHub on an Amazon EKS cluster using curl and jq
If you have a macOS or Linux system with curl, tar, gzip, and the jq JSON parser installed, you
can download, extract, and install the latest release with the following commands. Otherwise, use
the next procedure to download the latest version using a web browser.
1. Open a terminal window and navigate to a directory where you would like to download the
latest metrics-server release.
2. Copy and paste the commands below into your terminal window and type Enter to execute
them. These commands download the latest release, extract it, and apply the version 1.8+
manifests to your cluster.
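A sequence similar to the following does this. It assumes that curl, tar, gzip, and jq are installed, and that the release archive still contains the deploy/1.8+ manifest directory, which may differ in newer releases.
DOWNLOAD_URL=$(curl -Ls "https://api.github.com/repos/kubernetes-sigs/metrics-server/releases/latest" | jq -r .tarball_url)
DOWNLOAD_VERSION=$(grep -o '[^/v]*$' <<< $DOWNLOAD_URL)
curl -Ls $DOWNLOAD_URL -o metrics-server-$DOWNLOAD_VERSION.tar.gz
mkdir metrics-server-$DOWNLOAD_VERSION
tar -xzf metrics-server-$DOWNLOAD_VERSION.tar.gz --directory metrics-server-$DOWNLOAD_VERSION --strip-components 1
kubectl apply -f metrics-server-$DOWNLOAD_VERSION/deploy/1.8+/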
3. Verify that the metrics-server deployment is running the desired number of pods with the
following command.
Output:
Web browser
To install metrics-server from GitHub on an Amazon EKS cluster using a web browser
1. Download and extract the latest version of the metrics server code from GitHub.
a. Navigate to the latest release page of the metrics-server project on GitHub (https://
github.com/kubernetes-sigs/metrics-server/releases/latest), then choose a source code
archive for the latest release to download it.
Note
If you are downloading to a remote server, you can use the following wget
command, substituting the example text with the latest version
number.
b. Navigate to your downloads location and extract the source code archive. For example, if
you downloaded the .tar.gz archive, use the following command to extract (substituting
your release version).
3. Verify that the metrics-server deployment is running the desired number of pods with the
following command.
Output:
1. Create a simple Apache web server application with the following command.
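A command similar to the following creates the deployment and service. Note that the --requests, --limits, and --expose options for kubectl run were removed in later kubectl versions, where you would create a Deployment manifest instead.
kubectl run httpd --image=httpd --requests=cpu=100m --limits=cpu=200m --expose --port=80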
This Apache web server pod is given a 200 millicpu CPU limit and it is serving on port 80.
2. Create a Horizontal Pod Autoscaler resource for the httpd deployment.
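For example:
kubectl autoscale deployment httpd --cpu-percent=50 --min=1 --max=10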
This command creates an autoscaler that targets 50 percent CPU utilization for the deployment,
with a minimum of one pod and a maximum of ten pods. When the average CPU load is below 50
percent, the autoscaler tries to reduce the number of pods in the deployment, to a minimum of one.
When the load is greater than 50 percent, the autoscaler tries to increase the number of pods in
the deployment, up to a maximum of ten. For more information, see How does the Horizontal Pod
Autoscaler work? in the Kubernetes documentation.
3. Describe the autoscaler with the following command to view its details.
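For example:
kubectl describe hpa httpd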
Output:
Name: httpd
Namespace: default
Labels: <none>
Annotations: <none>
CreationTimestamp: Fri, 27 Sep 2019 13:32:15 -0700
Reference: Deployment/httpd
Metrics: ( current / target )
resource cpu on pods (as a percentage of request): 1% (1m) / 50%
Min replicas: 1
Max replicas: 10
Deployment pods: 1 current / 1 desired
Conditions:
Type Status Reason Message
---- ------ ------ -------
AbleToScale True ReadyForNewScale recommended size matches current size
ScalingActive True ValidMetricFound the HPA was able to successfully
calculate a replica count from cpu resource utilization (percentage of request)
ScalingLimited False DesiredWithinRange the desired count is within the
acceptable range
Events: <none>
As you can see, the current CPU load is only one percent, but the pod count is already at its lowest
boundary (one), so it cannot scale in.
4. Create a load for the web server. The following command uses the Apache Bench program to send
hundreds of thousands of requests to the httpd server. This should significantly increase the load
and cause the autoscaler to scale out the deployment.
5. Watch the httpd deployment scale out while the load is generated. To watch the deployment and
the autoscaler, periodically run the following command.
Output:
When the load finishes, the deployment should scale back down to 1.
6. When you are done experimenting with your sample application, delete the httpd resources.
Vertical Pod Autoscaler
If you have already deployed the metrics server to your cluster, you can move on to the next section. You
can check for the metrics server with the following command.
If this command returns a NotFound error, then you must deploy the metrics server to your Amazon EKS
cluster. Choose the tab below that corresponds to your preferred installation method.
curl and jq
To install metrics-server from GitHub on an Amazon EKS cluster using curl and jq
If you have a macOS or Linux system with curl, tar, gzip, and the jq JSON parser installed, you
can download, extract, and install the latest release with the following commands. Otherwise, use
the next procedure to download the latest version using a web browser.
1. Open a terminal window and navigate to a directory where you would like to download the
latest metrics-server release.
2. Copy and paste the commands below into your terminal window and type Enter to execute
them. These commands download the latest release, extract it, and apply the version 1.8+
manifests to your cluster.
3. Verify that the metrics-server deployment is running the desired number of pods with the
following command.
Output:
Web browser
To install metrics-server from GitHub on an Amazon EKS cluster using a web browser
1. Download and extract the latest version of the metrics server code from GitHub.
a. Navigate to the latest release page of the metrics-server project on GitHub (https://
github.com/kubernetes-sigs/metrics-server/releases/latest), then choose a source code
archive for the latest release to download it.
Note
If you are downloading to a remote server, you can use the following wget
command, substituting the example text with the latest version
number.
b. Navigate to your downloads location and extract the source code archive. For example, if
you downloaded the .tar.gz archive, use the following command to extract (substituting
your release version).
3. Verify that the metrics-server deployment is running the desired number of pods with the
following command.
Output:
1. Open a terminal window and navigate to a directory where you would like to download the Vertical
Pod Autoscaler source code.
2. Clone the kubernetes/autoscaler GitHub repository.
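For example:
git clone https://github.com/kubernetes/autoscaler.git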
cd autoscaler/vertical-pod-autoscaler/
4. (Optional) If you have already deployed another version of the Vertical Pod Autoscaler, remove it
with the following command.
./hack/vpa-down.sh
5. Deploy the Vertical Pod Autoscaler to your cluster with the following command.
./hack/vpa-up.sh
6. Verify that the Vertical Pod Autoscaler pods have been created successfully.
Output:
1. Deploy the hamster.yaml Vertical Pod Autoscaler example with the following command.
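For example, from the vertical-pod-autoscaler directory:
kubectl apply -f examples/hamster.yaml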
Output:
3. Describe one of the pods to view its CPU and memory reservation.
Output:
Name: hamster-c7d89d6db-rglf5
Namespace: default
Priority: 0
Node: ip-192-168-9-44.region-code.compute.internal/192.168.9.44
Start Time: Fri, 27 Sep 2019 10:35:15 -0700
Labels: app=hamster
pod-template-hash=c7d89d6db
Annotations: kubernetes.io/psp: eks.privileged
vpaUpdates: Pod resources updated by hamster-vpa: container 0:
Status: Running
IP: 192.168.23.42
IPs: <none>
Controlled By: ReplicaSet/hamster-c7d89d6db
Containers:
hamster:
Container ID: docker://
e76c2413fc720ac395c33b64588c82094fc8e5d590e373d5f818f3978f577e24
Image: k8s.gcr.io/ubuntu-slim:0.1
Image ID: docker-pullable://k8s.gcr.io/ubuntu-
slim@sha256:b6f8c3885f5880a4f1a7cf717c07242eb4858fdd5a84b5ffe35b1cf680ea17b1
Port: <none>
Host Port: <none>
Command:
/bin/sh
Args:
-c
while true; do timeout 0.5s yes >/dev/null; sleep 0.5s; done
State: Running
Started: Fri, 27 Sep 2019 10:35:16 -0700
Ready: True
Restart Count: 0
Requests:
cpu: 100m
memory: 50Mi
...
You can see that the original pod reserves 100 millicpu of CPU and 50 Mebibytes of memory. For
this example application, 100 millicpu is less than the pod needs to run, so it is CPU-constrained.
It also reserves much less memory than it needs. The Vertical Pod Autoscaler vpa-recommender
deployment analyzes the hamster pods to see if the CPU and memory requirements are
appropriate. If adjustments are needed, the vpa-updater relaunches the pods with updated values.
4. Wait for the vpa-updater to launch a new hamster pod. This should take a minute or two. You can
monitor the pods with the following command.
Note
If you are not sure that a new pod has launched, compare the pod names with your previous
list. When the new pod launches, you will see a new pod name.
5. When a new hamster pod is started, describe it and view the updated CPU and memory
reservations.
Output:
Name: hamster-c7d89d6db-jxgfv
Namespace: default
Priority: 0
Node: ip-192-168-9-44.region-code.compute.internal/192.168.9.44
Start Time: Fri, 27 Sep 2019 10:37:08 -0700
Labels: app=hamster
pod-template-hash=c7d89d6db
Annotations: kubernetes.io/psp: eks.privileged
vpaUpdates: Pod resources updated by hamster-vpa: container 0: cpu
request, memory request
Status: Running
IP: 192.168.3.140
IPs: <none>
Here you can see that the CPU reservation has increased to 587 millicpu, which is over five times
the original value. The memory has increased to 262,144 Kilobytes, which is around 250 Mebibytes,
or five times the original value. This pod was under-resourced, and the Vertical Pod Autoscaler
corrected our estimate with a much more appropriate value.
6. Describe the hamster-vpa resource to view the new recommendation.
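For example:
kubectl describe vpa/hamster-vpa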
Output:
Name: hamster-vpa
Namespace: default
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"autoscaling.k8s.io/
v1beta2","kind":"VerticalPodAutoscaler","metadata":{"annotations":{},"name":"hamster-
vpa","namespace":"d...
API Version: autoscaling.k8s.io/v1beta2
Kind: VerticalPodAutoscaler
Metadata:
Creation Timestamp: 2019-09-27T18:22:51Z
Generation: 23
Resource Version: 14411
Self Link: /apis/autoscaling.k8s.io/v1beta2/namespaces/default/
verticalpodautoscalers/hamster-vpa
UID: d0d85fb9-e153-11e9-ae53-0205785d75b0
Spec:
Target Ref:
API Version: apps/v1
Kind: Deployment
Name: hamster
Status:
Conditions:
Last Transition Time: 2019-09-27T18:23:28Z
Status: True
Type: RecommendationProvided
Recommendation:
Container Recommendations:
7. When you finish experimenting with the example application, you can delete it with the following
command.
Load Balancing
Amazon EKS supports the Network Load Balancer and the Classic Load Balancer for pods running on
Amazon EC2 instance worker nodes through the Kubernetes service of type LoadBalancer. Classic Load
Balancers and Network Load Balancers are not supported for pods running on AWS Fargate (Fargate).
For Fargate ingress, we recommend that you use the ALB Ingress Controller (p. 145) on Amazon EKS
(minimum version v1.1.4).
The configuration of your load balancer is controlled by annotations that are added to the manifest for
your service. By default, Classic Load Balancers are used for LoadBalancer type services. To use the
Network Load Balancer instead, apply the following annotation to your service:
service.beta.kubernetes.io/aws-load-balancer-type: nlb
For an example service manifest that specifies a load balancer, see Type LoadBalancer in the Kubernetes
documentation. For more information about using Network Load Balancer with Kubernetes, see Network
Load Balancer support on AWS in the Kubernetes documentation.
By default, services of type LoadBalancer create public-facing load balancers. To use an internal load
balancer, apply the following annotation to your service:
service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
For internal load balancers, your Amazon EKS cluster must be configured to use at least one private
subnet in your VPC. Kubernetes examines the route table for your subnets to identify whether they are
public or private. Public subnets have a route directly to the internet using an internet gateway, but
private subnets do not.
Public subnets in your VPC should be tagged accordingly so that Kubernetes knows to use only those subnets for external load balancers:
Key                                Value
kubernetes.io/role/elb             1
Private subnets in your VPC should be tagged accordingly so that Kubernetes knows that it can use them for internal load balancers:
Key                                Value
kubernetes.io/role/internal-elb    1
ALB Ingress Controller on Amazon EKS
To ensure that your Ingress objects use the ALB Ingress Controller, add the following annotation to your
Ingress specification. For more information, see Ingress specification in the documentation.
annotations:
kubernetes.io/ingress.class: alb
• Instance – Registers nodes within your cluster as targets for the ALB. Traffic reaching the ALB is routed
to NodePort for your service and then proxied to your pods. This is the default traffic mode. You
can also explicitly specify it with the alb.ingress.kubernetes.io/target-type: instance
annotation.
Note
Your Kubernetes service must specify the NodePort type to use this traffic mode.
• IP – Registers pods as targets for the ALB. Traffic reaching the ALB is directly routed to pods for your
service. You must specify the alb.ingress.kubernetes.io/target-type: ip annotation to use
this traffic mode.
For other available annotations supported by the ALB Ingress Controller, see Ingress annotations.
This topic shows you how to configure the ALB Ingress Controller to work with your Amazon EKS cluster.
1. Tag the subnets in your VPC that you want to use for your load balancers so that the ALB
Ingress Controller knows that it can use them. For more information, see Subnet Tagging
Requirement (p. 154). If you deployed your cluster with eksctl, then the tags are already applied.
• All subnets in your VPC should be tagged accordingly so that Kubernetes can discover them.
Key                                       Value
kubernetes.io/cluster/<cluster-name>      shared
• Public subnets in your VPC should be tagged accordingly so that Kubernetes knows to use only
those subnets for external load balancers.
Key Value
kubernetes.io/role/elb 1
• Private subnets in your VPC should be tagged accordingly so that Kubernetes knows that it can
use them for internal load balancers:
Key Value
kubernetes.io/role/internal-elb 1
2. Create an IAM OIDC provider and associate it with your cluster. If you don't have eksctl version
0.15.0-rc.2 or later installed, complete the instructions in Installing or Upgrading eksctl (p. 189)
to install or upgrade it. You can check your installed version with eksctl version.
3. Create an IAM policy called ALBIngressControllerIAMPolicy for the ALB Ingress Controller
pod that allows it to make calls to AWS APIs on your behalf. Use the following AWS CLI command to
create the IAM policy in your AWS account. You can view the policy document on GitHub.
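A sequence similar to the following downloads the policy document for version v1.1.4 of the controller and creates the policy from it:
curl -o alb-iam-policy.json https://raw.githubusercontent.com/kubernetes-sigs/aws-alb-ingress-controller/v1.1.4/docs/examples/iam-policy.json
aws iam create-policy --policy-name ALBIngressControllerIAMPolicy --policy-document file://alb-iam-policy.json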
5. Create an IAM role for the ALB ingress controller and attach the role to the service account created
in the previous step. If you didn't create your cluster with eksctl, then use the instructions on the
AWS Management Console or AWS CLI tabs.
eksctl
The command that follows only works for clusters that were created with eksctl.
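A representative command looks like the following. The cluster name prod and the policy ARN are placeholder values; use your own cluster name and the ARN of the policy that you created.
eksctl create iamserviceaccount --cluster prod --namespace kube-system --name alb-ingress-controller --attach-policy-arn arn:aws:iam::111122223333:policy/ALBIngressControllerIAMPolicy --override-existing-serviceaccounts --approve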
1. Using the instructions on the AWS Management Console tab in Create an IAM
Role (p. 249), create an IAM role named eks-alb-ingress-controller and attach the
ALBIngressControllerIAMPolicy IAM policy that you created in a previous step to it.
Note the Amazon Resource Name (ARN) of the role, once you've created it.
2. Annotate the Kubernetes service account with the ARN of the role that you created with the
following command.
AWS CLI
1. Using the instructions on the AWS CLI tab in Create an IAM Role (p. 249),
create an IAM role named eks-alb-ingress-controller and attach the
ALBIngressControllerIAMPolicy IAM policy that you created in a previous step to it.
Note the Amazon Resource Name (ARN) of the role, once you've created it.
2. Annotate the Kubernetes service account with the ARN of the role that you created with the
following command.
7. Open the ALB Ingress Controller deployment manifest for editing with the following command.
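For example, assuming the controller was deployed to the kube-system namespace with the default name:
kubectl edit deployment.apps/alb-ingress-controller -n kube-system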
8. Add a line for the cluster name after the --ingress-class=alb line. If you're running the ALB
ingress controller on Fargate, then you must also add the lines for the VPC ID, and AWS Region name
of your cluster. Once you've added the appropriate lines, save and close the file.
spec:
containers:
- args:
- --ingress-class=alb
- --cluster-name=prod
- --aws-vpc-id=vpc-03468a8157edca5bd
- --aws-region=region-code
9. Confirm that the ALB Ingress Controller is running with the following command.
Expected output:
1. Deploy the game 2048 as a sample application to verify that the ALB Ingress Controller creates an
Application Load Balancer as a result of the Ingress object. You can run the sample application on
a cluster that has Amazon EC2 worker nodes only, one or more Fargate pods, or a combination of
the two. If your cluster has Amazon EC2 worker nodes and no Fargate pods, then select the Amazon
EC2 worker nodes only tab. If your cluster has any existing Fargate pods, or you want to deploy the
application to new Fargate pods, then select the Fargate tab. For more information about Fargate
pods, see Getting Started with AWS Fargate on Amazon EKS (p. 106) .
Fargate
Ensure that the cluster that you want to use Fargate in is in the list of supported
Regions (p. 105).
a. Create a Fargate profile that includes the sample application's namespace with the following
command. Replace the example-values with your own values.
Note
The command that follows only works for clusters that were created with eksctl.
If you didn't create your cluster with eksctl, then you can create the profile with
the AWS Management Console (p. 112), using the same values for name and
namespace that are in the command below.
b. Download and apply the manifest files to create the Kubernetes namespace, deployment, and
service with the following commands.
2. After a few minutes, verify that the Ingress resource was created with the following command.
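For example, assuming the sample application uses the 2048-game namespace and an Ingress named 2048-ingress:
kubectl get ingress/2048-ingress -n 2048-game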
Output:
Note
If your Ingress has not been created after several minutes, run the following command to
view the Ingress controller logs. These logs may contain error messages that can help you
diagnose any issues with your deployment.
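For example:
kubectl logs -n kube-system deployment.apps/alb-ingress-controller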
3. Open a browser and navigate to the ADDRESS URL from the previous command output to see the
sample application.
4. When you finish experimenting with your sample application, delete it with the following
commands.
Topics
• Creating a VPC for Your Amazon EKS Cluster (p. 150)
• Cluster VPC Considerations (p. 152)
• Amazon EKS Security Group Considerations (p. 154)
• Pod Networking (CNI) (p. 157)
• Installing or Upgrading CoreDNS (p. 167)
• Installing Calico on Amazon EKS (p. 169)
Creating a VPC for Your Amazon EKS Cluster
This topic guides you through creating a VPC for your cluster with either three public subnets, or two public
subnets and two private subnets, which are provided with internet access through a NAT gateway. You
can use this VPC for your Amazon EKS cluster. We recommend a network architecture that uses private
subnets for your worker nodes, and public subnets for Kubernetes to create public load balancers within.
Choose the tab below that represents your desired VPC configuration.
https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-11-15/amazon-eks-vpc-sample.yaml
6. On the Specify Details page, fill out the parameters accordingly, and then choose Next.
• Stack name: Choose a stack name for your AWS CloudFormation stack. For example, you can
call it eks-vpc.
• VpcBlock: Choose a CIDR range for your VPC. You can keep the default value.
• Subnet01Block: Specify a CIDR range for subnet 1. We recommend that you keep the default
value so that you have plenty of IP addresses for pods to use.
• Subnet02Block: Specify a CIDR range for subnet 2. We recommend that you keep the default
value so that you have plenty of IP addresses for pods to use.
• Subnet03Block: Specify a CIDR range for subnet 3. We recommend that you keep the default
value so that you have plenty of IP addresses for pods to use.
7. (Optional) On the Options page, tag your stack resources. Choose Next.
8. On the Review page, choose Create.
9. When your stack is created, select it in the console and choose Outputs.
10. Record the SecurityGroups value for the security group that was created. You need this
when you create your EKS cluster; this security group is applied to the cross-account elastic
network interfaces that are created in your subnets that allow the Amazon EKS control plane to
communicate with your worker nodes.
11. Record the VpcId for the VPC that was created. You need this when you launch your worker
node group template.
12. Record the SubnetIds for the subnets that were created. You need this when you create your
EKS cluster; these are the subnets that your worker nodes are launched into.
13. Tag your public subnets so that Kubernetes knows that it can use them for external load
balancers.
Key: kubernetes.io/role/elb
Value: 1
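For example, you could apply this tag with the AWS CLI. The subnet IDs below are placeholders, not values from your stack outputs:

aws ec2 create-tags --resources subnet-EXAMPLE1 subnet-EXAMPLE2 --tags Key=kubernetes.io/role/elb,Value=1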
https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-11-15/amazon-eks-vpc-private-subnets.yaml
6. On the Specify Details page, fill out the parameters accordingly, and then choose Next.
• Stack name: Choose a stack name for your AWS CloudFormation stack. For example, you can
call it eks-vpc.
• VpcBlock: Choose a CIDR range for your VPC. You can keep the default value.
• PublicSubnet01Block: Specify a CIDR range for public subnet 1. We recommend that you
keep the default value so that you have plenty of IP addresses for pods to use.
• PublicSubnet02Block: Specify a CIDR range for public subnet 2. We recommend that you
keep the default value so that you have plenty of IP addresses for pods to use.
• PrivateSubnet01Block: Specify a CIDR range for private subnet 1. We recommend that you
keep the default value so that you have plenty of IP addresses for pods to use.
• PrivateSubnet02Block: Specify a CIDR range for private subnet 2. We recommend that you
keep the default value so that you have plenty of IP addresses for pods to use.
7. (Optional) On the Options page, tag your stack resources. Choose Next.
8. On the Review page, choose Create.
9. When your stack is created, select it in the console and choose Outputs.
10. Record the SecurityGroups value for the security group that was created. You need this
when you create your EKS cluster; this security group is applied to the cross-account elastic
network interfaces that are created in your subnets that allow the Amazon EKS control plane to
communicate with your worker nodes.
11. Record the VpcId for the VPC that was created. You need this when you launch your worker
node group template.
12. Record the SubnetIds for the subnets that were created. You need this when you create your
EKS cluster; these are the subnets that your worker nodes are launched into.
13. Tag your private subnets so that Kubernetes knows that it can use them for internal load
balancers.
Key: kubernetes.io/role/internal-elb
Value: 1
14. Tag your public subnets so that Kubernetes knows that it can use them for external load
balancers.
Key: kubernetes.io/role/elb
Value: 1
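As a sketch, the private-subnet tag could be applied with the AWS CLI (the subnet IDs are placeholders); tag the public subnets the same way with the kubernetes.io/role/elb key:

aws ec2 create-tags --resources subnet-EXAMPLEPRIV1 subnet-EXAMPLEPRIV2 --tags Key=kubernetes.io/role/internal-elb,Value=1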
Next Steps
After you have created your VPC, you can try the Getting Started with Amazon EKS (p. 3) walkthrough,
but you can skip the Create your Amazon EKS Cluster VPC (p. 12) section and use these subnets and
security groups for your cluster.
Cluster VPC Considerations
We recommend a network architecture that uses private subnets for your worker nodes and public subnets for Kubernetes to create internet-facing load balancers within.
When you create your cluster, specify all of the subnets that will host resources for your cluster (such as
worker nodes and load balancers).
Note
Internet-facing load balancers require a public subnet in your cluster. Worker nodes also
require outbound internet access to the Amazon EKS APIs for cluster introspection and node
registration at launch time. To pull container images, they require access to the Amazon S3 and
Amazon ECR APIs (and any other container registries, such as DockerHub). For more information,
see Amazon EKS Security Group Considerations (p. 154) and AWS IP Address Ranges in the
AWS General Reference.
The subnets that you pass when you create the cluster influence where Amazon EKS places elastic
network interfaces that are used for the control plane to worker node communication.
It is possible to specify only public or private subnets when you create your cluster, but there are some
limitations associated with these configurations:
• Private-only: Everything runs in a private subnet and Kubernetes cannot create internet-facing load
balancers for your pods.
• Public-only: Everything runs in a public subnet, including your worker nodes.
Amazon EKS creates an elastic network interface in your private subnets to facilitate communication
to your worker nodes. This communication channel supports Kubernetes functionality such as kubectl
exec and kubectl logs. The security group that you specify when you create your cluster is applied to the
elastic network interfaces that are created for your cluster control plane.
Your VPC must have DNS hostname and DNS resolution support. Otherwise, your worker nodes cannot
register with your cluster. For more information, see Using DNS with Your VPC in the Amazon VPC User
Guide.
VPC IP Addressing
You can define both private (RFC 1918) and public (non-RFC 1918) CIDR ranges within the VPC used for
your Amazon EKS cluster. For more information, see VPCs and Subnets and IP Addressing in Your VPC in
the Amazon VPC User Guide.
The Amazon EKS control plane creates up to 4 cross-account elastic network interfaces in your VPC
for each cluster. Be sure that the subnets you specify have enough available IP addresses for the cross-
account elastic network interfaces and your pods.
Important
Docker runs in the 172.17.0.0/16 CIDR range in Amazon EKS clusters. We recommend that
your cluster's VPC subnets do not overlap this range. Otherwise, you will receive the following
error:
Subnet Tagging Requirement
Key: kubernetes.io/cluster/<cluster-name>
Value: shared
• Key: The <cluster-name> value matches your Amazon EKS cluster's name.
• Value: The shared value allows more than one cluster to use this VPC.
This tag is not required or created by Amazon EKS for 1.15 clusters. If you deploy a 1.15 cluster to a VPC
that already has this tag, the tag is not removed.
Key: kubernetes.io/cluster/<cluster-name>
Value: shared

Key: kubernetes.io/role/internal-elb
Value: 1

Key: kubernetes.io/role/elb
Value: 1
Cluster Security Group (available starting with Amazon EKS clusters running Kubernetes 1.14 and eks.3 platform version)
You can check for a cluster security group for your cluster in the AWS Management Console under the
cluster's Networking section, or with the following AWS CLI command:
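The command isn't reproduced here; a query along these lines (the cluster name is a placeholder) returns the cluster security group ID, assuming it is exposed in the cluster's resourcesVpcConfig:

aws eks describe-cluster --name cluster_name --query cluster.resourcesVpcConfig.clusterSecurityGroupId --output text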
If your cluster is running Kubernetes version 1.14 and platform version (p. 48) eks.3 or later, we
recommend that you add the cluster security group to all existing and future worker node groups.
For more information, see Security Groups for Your VPC in the Amazon VPC User Guide. Amazon EKS
managed node groups (p. 82) are automatically configured to use the cluster security group.
You can check the control plane security group for your cluster in the AWS Management Console under
the cluster's Networking section (listed as Additional security groups), or with the following AWS CLI
command:
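A sketch of such a query (the cluster name is a placeholder), assuming the additional security groups are listed under resourcesVpcConfig.securityGroupIds:

aws eks describe-cluster --name cluster_name --query cluster.resourcesVpcConfig.securityGroupIds --output text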
If you launch worker nodes with the AWS CloudFormation template in the Getting Started with Amazon
EKS (p. 3) walkthrough, AWS CloudFormation modifies the control plane security group to allow
communication with the worker nodes. Amazon EKS strongly recommends that you use a dedicated
security group for each control plane (one per cluster). If you share a control plane security group with
other Amazon EKS clusters or resources, you may block or disrupt connections to those resources.
The security group for the worker nodes and the security group for the control plane communication to
the worker nodes have been set up to prevent communication to privileged ports in the worker nodes.
If your applications require added inbound or outbound access from the control plane or worker nodes,
you must add these rules to the security groups associated with your cluster. For more information, see
Security Groups for Your VPC in the Amazon VPC User Guide.
Note
To allow proxy functionality on privileged ports or to run the CNCF conformance tests yourself,
you must edit the security groups for your control plane and the worker nodes. The security
group on the worker nodes' side needs to allow inbound access for ports 0-65535 from the
control plane, and the control plane side needs to allow outbound access to the worker nodes
on ports 0-65535.
When cluster endpoint private access (p. 35) is enabled: any security groups that generate API server client traffic (such as kubectl commands on a bastion host within your cluster's VPC).
Minimum inbound traffic (from other worker nodes): any protocol and any ports that you expect your worker nodes to use for inter-worker communication, from all worker node security groups.
* Worker nodes also require outbound internet access to the Amazon EKS APIs for cluster introspection
and node registration at launch time. To pull container images, they require access to the Amazon S3
and Amazon ECR APIs (and any other container registries, such as DockerHub). For more information, see
AWS IP Address Ranges in the AWS General Reference.
If you have more than one security group associated to your worker nodes, then one of the security
groups must have the following tag applied to it. If you have only one security group associated to your
worker nodes, then the tag is optional. For more information about tagging, see Working with Tags
Using the Console (p. 264).
Key Value
kubernetes.io/cluster/<cluster-name> owned
Pod Networking (CNI)
The CNI plugin is responsible for allocating VPC IP addresses to Kubernetes nodes and configuring the
necessary networking for pods on each node. The plugin consists of two primary components:
• The L-IPAM daemon is responsible for attaching elastic network interfaces to instances, assigning
secondary IP addresses to elastic network interfaces, and maintaining a "warm pool" of IP addresses on
each node for assignment to Kubernetes pods when they are scheduled.
• The CNI plugin itself is responsible for wiring the host network (for example, configuring the interfaces
and virtual Ethernet pairs) and adding the correct interface to the pod namespace.
For more information about the design and networking configuration, see CNI plugin for Kubernetes
networking over AWS VPC.
Elastic network interface and secondary IP address limitations by Amazon EC2 instance types are
applicable. In general, larger instances can support more IP addresses. For more information, see IP
Addresses Per Network Interface Per Instance Type in the Amazon EC2 User Guide for Linux Instances.
Topics
• CNI Configuration Variables (p. 158)
• External Source Network Address Translation (SNAT) (p. 160)
• CNI Custom Networking (p. 161)
• CNI Metrics Helper (p. 164)
• Amazon VPC CNI Plugin for Kubernetes Upgrades (p. 166)
AWS_VPC_CNI_NODE_PORT_SUPPORT
Type: Boolean
Default: true
Specifies whether NodePort services are enabled on a worker node's primary network interface.
This requires additional iptables rules and that the kernel's reverse path filter on the primary
interface is set to loose.
AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG
Type: Boolean
Default: false
Specifies that your pods may use subnets and security groups (within the same VPC as your
control plane resources) that are independent of your cluster's resourcesVpcConfig. By default,
pods share the same subnet and security groups as the worker node's primary interface. Setting
this variable to true causes ipamD to use the security groups and subnets in a worker node's
ENIConfig for elastic network interface allocation. You must create an ENIConfig custom
resource definition for each subnet that your pods will reside in, and then annotate each worker
node to use a specific ENIConfig (multiple worker nodes can be annotated with the same
ENIConfig). Worker nodes can only be annotated with a single ENIConfig at a time, and the
subnet in the ENIConfig must belong to the same Availability Zone that the worker node resides in.
For more information, see CNI Custom Networking (p. 161).
AWS_VPC_K8S_CNI_EXTERNALSNAT
Type: Boolean
Default: false
Specifies whether an external NAT gateway should be used to provide SNAT of secondary ENI IP
addresses. If set to true, the SNAT iptables rule and off-VPC IP rule are not applied, and these
rules are removed if they have already been applied.
Disable SNAT if you need to allow inbound communication to your pods from external VPNs, direct
connections, and external VPCs, and your pods do not need to access the Internet directly via an
Internet Gateway. However, your nodes must be running in a private subnet and connected to the
internet through an AWS NAT Gateway or another external NAT device.
For more information, see External Source Network Address Translation (SNAT) (p. 160).
WARM_ENI_TARGET
Type: Integer
Default: 1
Specifies the number of free elastic network interfaces (and all of their available IP addresses) that
the ipamD daemon should attempt to keep available for pod assignment on the node. By default,
ipamD attempts to keep 1 elastic network interface and all of its IP addresses available for pod
assignment.
Note
The number of IP addresses per network interface varies by instance type. For more
information, see IP Addresses Per Network Interface Per Instance Type in the Amazon EC2
User Guide for Linux Instances.
For example, an m4.4xlarge launches with 1 network interface and 30 IP addresses. If 5 pods are
placed on the node and 5 free IP addresses are removed from the IP address warm pool, then ipamD
attempts to allocate more interfaces until WARM_ENI_TARGET free interfaces are available on the
node.
Note
If WARM_IP_TARGET is set, then this environment variable is ignored and the
WARM_IP_TARGET behavior is used instead.
WARM_IP_TARGET
Type: Integer
Default: None
Specifies the number of free IP addresses that the ipamD daemon should attempt to keep available
for pod assignment on the node. For example, if WARM_IP_TARGET is set to 10, then ipamD
attempts to keep 10 free IP addresses available at all times. If the elastic network interfaces on the
node are unable to provide these free addresses, ipamD attempts to allocate more interfaces until
WARM_IP_TARGET free IP addresses are available.
Note
This environment variable overrides WARM_ENI_TARGET behavior.
• Enables pods to communicate bi-directionally with the internet. The worker node must be in a public
subnet and have a public or elastic IP address assigned to the primary private IP address of its primary
network interface. The traffic is translated to and from the public or elastic IP address and routed to
and from the internet by an internet gateway, as shown in the following picture.
SNAT is necessary because the internet gateway only knows how to translate between the primary
private and public or elastic IP address assigned to the primary elastic network interface of the
Amazon EC2 instance worker node that pods are running on.
• Prevents a device in other private IP address spaces (for example, VPC peering, Transit VPC, or Direct
Connect) from communicating directly to a pod that is not assigned the primary private IP address of
the primary elastic network interface of the Amazon EC2 instance worker node.
If the internet or devices in other private IP address spaces need to communicate with a pod that isn't
assigned the primary private IP address assigned to the primary elastic network interface of the Amazon
EC2 instance worker node that the pod is running on, then:
• The worker node must be deployed in a private subnet that has a route to a NAT device in a public
subnet.
• You need to enable external SNAT in the CNI plugin aws-node DaemonSet with the following
command:
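The command isn't included in this extract; setting the variable on the aws-node DaemonSet with kubectl set env is one way to do it:

kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_EXTERNALSNAT=true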
Once external SNAT is enabled, the CNI plugin does not translate a pod's private IP address to the
primary private IP address assigned to the primary elastic network interface of the Amazon EC2 instance
worker node that the pod is running on when traffic is destined for an address outside of the VPC.
Traffic from the pod to the internet is externally translated to and from the public IP address of the NAT
device and routed to and from the internet by an internet gateway, as shown in the following picture.
• There are a limited number of IP addresses available in a subnet. This limits the number of pods that
can be created in the cluster. Using different subnets for pod groups allows you to increase the number
of available IP addresses.
• For security reasons, your pods must use different security groups or subnets than the node's primary
network interface.
• The worker nodes are configured in public subnets and you want the pods to be placed in private
subnets using a NAT Gateway. For more information, see External Source Network Address Translation
(SNAT) (p. 160).
Note
You can configure custom networking for self-managed node groups, but not for managed node
groups. The use cases discussed in this topic require the Amazon VPC CNI plugin for Kubernetes
version 1.4.0 or later. To check your CNI version, and upgrade if necessary, see Amazon VPC CNI
Plugin for Kubernetes Upgrades (p. 166).
Enabling a custom network effectively removes an available elastic network interface (and all of its
available IP addresses for pods) from each worker node that uses it. The primary network interface for
the worker node is not used for pod placement when a custom network is enabled.
1. Associate a secondary CIDR block to your cluster's VPC. For more information, see Associating a
Secondary IPv4 CIDR Block with Your VPC in the Amazon VPC User Guide.
2. Create a subnet in your VPC for each Availability Zone, using your secondary CIDR block. Your
custom subnets must be from a different VPC CIDR block than the subnet that your worker nodes
were launched into. For more information, see Creating a Subnet in Your VPC in the Amazon VPC
User Guide.
3. Set the AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG environment variable to true in the
aws-node DaemonSet:
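The command isn't shown here; as with the other CNI configuration variables, kubectl set env can apply it:

kubectl set env daemonset aws-node -n kube-system AWS_VPC_K8S_CNI_CUSTOM_NETWORK_CFG=true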
a. Create a file called ENIConfig.yaml and paste the following content into it:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: eniconfigs.crd.k8s.amazonaws.com
spec:
  scope: Cluster
  group: crd.k8s.amazonaws.com
  version: v1alpha1
  names:
    plural: eniconfigs
    singular: eniconfig
    kind: ENIConfig
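The step that applies this manifest isn't included in this extract; once the file is saved, the custom resource definition would typically be applied with:

kubectl apply -f ENIConfig.yaml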
5. Create an ENIConfig custom resource for each subnet that you want to schedule pods in.
a. Create a unique file for each elastic network interface configuration. Each file must
include the contents below with a unique value for name. In this example, a file named
region-codea.yaml is created. Replace the example values for name, subnet, and
securityGroups with your own values. In this example, the value for name is the same as
the Availability Zone that the subnet is in. If you don't have a specific security group that you
want to attach for your pods, you can leave that value empty for now. Later, you will specify the
worker node security group in the ENIConfig.
Note
Each subnet and security group combination requires its own custom resource.
apiVersion: crd.k8s.amazonaws.com/v1alpha1
kind: ENIConfig
metadata:
  name: region-codea
spec:
  securityGroups:
  - sg-0dff111a1d11c1c11
  subnet: subnet-011b111c1f11fdf11
b. Apply each custom resource file that you created to your cluster with the following command:
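The command itself is missing from this extract; for the example file created above it would be:

kubectl apply -f region-codea.yaml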
Note
Ensure that an annotation with the key k8s.amazonaws.com/eniConfig
for the ENI_CONFIG_ANNOTATION_DEF environment variable doesn't exist in
the container spec for the aws-node daemonset. If it exists, it overrides the
ENI_CONFIG_LABEL_DEF value, and should be removed. You can check to see if the
variable is set with the kubectl describe daemonset aws-node -n kube-system | grep ENI_CONFIG_ANNOTATION_DEF command. If no output is returned,
then the variable is not set.
6. Create a new self-managed worker node group for each ENIConfig that you configured.
a. Determine the maximum number of pods that can be scheduled on each worker node using the
following formula. Because custom networking removes the primary network interface from pod
placement, the formula is:

maxPods = (number of network interfaces - 1) * (IPv4 addresses per interface - 1) + 2

For example, the m5.large instance type supports three network interfaces and ten IPv4
addresses per interface. Inserting the values into the formula, the instance can support a
maximum of 20 pods, as shown in the following calculation.

maxPods = (3 - 1) * (10 - 1) + 2 = 20

For more information about the maximum number of network interfaces per instance type,
see Elastic Network Interfaces in the Amazon EC2 User Guide for Linux Instances.
b. Follow the steps in the Self-managed nodes tab of Launching Amazon EKS Linux Worker
Nodes (p. 88) to create each new self-managed worker node group. After you've opened the
AWS CloudFormation template, enter values as described in the instructions. For the following
fields however, ensure that you enter or select the listed values.
spec:
  securityGroups:
  - sg-0dff222a2d22c2c22
  subnet: subnet-022b222c2f22fdf22
8. If you have any worker nodes in your cluster that had pods placed on them before you completed
this procedure, you should terminate them. Only new nodes that are registered with the
k8s.amazonaws.com/eniConfig label will use the new custom networking feature.
CNI Metrics Helper
When managing an Amazon EKS cluster, you may want to know how many IP addresses have been
assigned and how many are available. The CNI metrics helper helps you to:
When a worker node is provisioned, the CNI plugin automatically allocates a pool of secondary IP
addresses from the node's subnet to the primary elastic network interface (eth0). This pool of IP
addresses is known as the warm pool, and its size is determined by the worker node's instance type. For
example, a c4.large instance can support three elastic network interfaces and ten IPv4 addresses per
interface; nine of those addresses on each interface are available for pods, because one address is
reserved for the elastic network interface itself. For more information, see IP Addresses Per Network
Interface Per Instance Type in the Amazon EC2 User Guide for Linux Instances.
As the pool of IP addresses is depleted, the plugin automatically attaches another elastic network
interface to the instance and allocates another set of secondary IP addresses to that interface. This
process continues until the node can no longer support additional elastic network interfaces.
The following metrics are collected for your cluster and exported to CloudWatch:
• The maximum number of elastic network interfaces that the cluster can support
• The number of elastic network interfaces that have been allocated to pods
• The number of IP addresses currently assigned to pods
• The total and maximum numbers of IP addresses available
• The number of ipamD errors
1. Create a file called allow_put_metrics_data.json and populate it with the following policy
document.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "cloudwatch:PutMetricData",
      "Resource": "*"
    }
  ]
}
2. Create an IAM policy called CNIMetricsHelperPolicy for your worker node instance profile that
allows the CNI metrics helper to make calls to AWS APIs on your behalf. Use the following AWS CLI
command to create the IAM policy in your AWS account.
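The command isn't reproduced here; a sketch, using the policy document created in the previous step, is:

aws iam create-policy --policy-name CNIMetricsHelperPolicy --policy-document file://allow_put_metrics_data.json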
Output:
Name:         aws-auth
Namespace:    kube-system
Labels:       <none>
Annotations:  <none>

Data
====
mapRoles:
----
- groups:
  - system:bootstrappers
  - system:nodes
  rolearn: arn:aws:iam::111122223333:role/eksctl-prod-nodegroup-standard-wo-NodeInstanceRole-GKNS581EASPU
  username: system:node:{{EC2PrivateDNSName}}

Events:  <none>

Record the role name for any rolearn values that have the system:nodes group assigned to
them. In the above example output, the role name is eksctl-prod-nodegroup-standard-wo-NodeInstanceRole-GKNS581EASPU.
You should have one value for each node group in your cluster.
4. Attach the new CNIMetricsHelperPolicy IAM policy to each of the worker node IAM roles you
identified earlier with the following command, substituting your own AWS account
number and worker node IAM role name.
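A sketch of that command, reusing the example account ID and role name shown in the output above:

aws iam attach-role-policy --role-name eksctl-prod-nodegroup-standard-wo-NodeInstanceRole-GKNS581EASPU --policy-arn arn:aws:iam::111122223333:policy/CNIMetricsHelperPolicy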
• Apply the CNI metrics helper manifest with the following command.
Amazon VPC CNI Plugin for Kubernetes Upgrades
Amazon EKS does not automatically upgrade the CNI plugin on your cluster when new versions are released. To get a newer version of the
CNI plugin on existing clusters, you must manually upgrade the plugin.
The latest version that we recommend and install with new clusters is version 1.5.5. You can view the
different releases available for the plugin, and read the release notes for each version on GitHub.
Use the following procedures to check your CNI plugin version and upgrade to the latest recommended
version.
kubectl describe daemonset aws-node --namespace kube-system | grep Image | cut -d "/" -f 2
Output:
amazon-k8s-cni:1.5.3
In this example output, the CNI version is 1.5.3, which is earlier than the current recommended
version, 1.5.5. Use the following procedure to upgrade the CNI.
• Use the following command to upgrade your CNI version to the latest recommended version:
Installing or Upgrading CoreDNS

To check if your cluster is already running CoreDNS, use the following command.
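The command isn't shown in this extract; listing the DNS pods by their k8s-app=kube-dns label (which CoreDNS pods also carry) is a reasonable check:

kubectl get pods -n kube-system -l k8s-app=kube-dns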
If the output shows coredns in the pod names, then you're already running CoreDNS in your cluster. If
not, use the following procedure to update your DNS and service discovery provider to CoreDNS.
Note
The service for CoreDNS is still called kube-dns for backward compatibility.
export REGION="region-code"
c. Download the CoreDNS manifest from the Amazon EKS resource bucket.
d. Replace the variable placeholders in the dns.yaml file with your environment variable values
and apply the updated manifest to your cluster. The following command completes this in one
step.
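The one-step command isn't included here. A minimal sketch, assuming the manifest uses a REGION placeholder that matches the environment variable exported earlier (an assumption), is:

cat dns.yaml | sed -e "s/REGION/$REGION/g" | kubectl apply -f -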
Note
It might take several minutes for the expected output to return properly, depending on
the rate of DNS requests in your cluster.
In the following expected output, the number 23 is the DNS request count total.
3. Upgrade CoreDNS to the recommended version for your cluster by completing the steps in the
section called “Upgrading CoreDNS” (p. 169).
4. Scale down the kube-dns deployment to zero replicas.
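The scaling command isn't reproduced here; with kubectl it would look like:

kubectl scale deployment/kube-dns --replicas=0 -n kube-system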
Upgrading CoreDNS
1. Check the current version of your cluster's coredns deployment.
kubectl describe deployment coredns --namespace kube-system | grep Image | cut -d "/" -f 3
Output:
coredns:v1.1.3
The recommended coredns versions for the corresponding Kubernetes versions are as follows:
b. Replace proxy in the following line with forward. Save the file and exit the editor.
proxy . /etc/resolv.conf
3. Update coredns to the recommended version, replacing region-code with your Region and
1.6.6 with your cluster's recommended coredns version:
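The update command isn't shown in this extract. A sketch using kubectl set image follows; the image repository is region-specific, so replace the placeholder with the Amazon EKS CoreDNS image URI for your Region:

kubectl set image --namespace kube-system deployment.apps/coredns coredns=<eks-coredns-image-repository>:v1.6.6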
Installing Calico on Amazon EKS
1. Apply the Calico manifest from the aws/amazon-vpc-cni-k8s GitHub project. This manifest
creates DaemonSets in the kube-system namespace.
2. Watch the kube-system DaemonSets and wait for the calico-node DaemonSet to have the
DESIRED number of pods in the READY state. When this happens, Calico is working.
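The watch command isn't included here; one way to watch the DaemonSet roll out is:

kubectl get daemonset calico-node -n kube-system --watch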
Output:
• If you are done using Calico in your Amazon EKS cluster, you can delete the DaemonSet with the
following command:
Stars Policy Demo
Before you create any network policies, all services can communicate bidirectionally. After you apply the
network policies, you can see that the client can only communicate with the frontend service, and the
backend can only communicate with the frontend.
3. To connect to the management UI, forward your local port 9001 to the management-ui service
running on your cluster:
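The port-forward command isn't reproduced in this extract. Assuming the demo deploys the management-ui service into a management-ui namespace (an assumption), it would look like:

kubectl port-forward service/management-ui -n management-ui 9001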
4. Open a browser on your local system and point it to http://localhost:9001/. You should see the
management UI. The C node is the client service, the F node is the frontend service, and the B node
is the backend service. Each node has full communication access to all other nodes (as indicated by
the bold, colored lines).
5. Apply the following network policies to isolate the services from each other:
6. Refresh your browser. You see that the management UI can no longer reach any of the nodes, so
they don't show up in the UI.
7. Apply the following network policies to allow the management UI to access the services:
8. Refresh your browser. You see that the management UI can reach the nodes again, but the nodes
cannot communicate with each other.
9. Apply the following network policy to allow traffic from the frontend service to the backend service:
10. Apply the following network policy to allow traffic from the client namespace to the frontend
service:
11. (Optional) When you are done with the demo, you can delete its resources with the following
commands:
Topics
• Installing kubectl (p. 174)
• Installing aws-iam-authenticator (p. 179)
• Create a kubeconfig for Amazon EKS (p. 182)
• Managing Users or IAM Roles for your Cluster (p. 185)
Installing kubectl
Kubernetes uses a command line utility called kubectl for communicating with the cluster API server.
The kubectl binary is available in many operating system package managers, and this option is often
much easier than a manual download and install process. You can follow the instructions for your specific
operating system or package manager in the Kubernetes documentation to install.
This topic helps you to download and install the Amazon EKS-vended kubectl binaries for macOS, Linux,
and Windows operating systems. These binaries are identical to the upstream community versions, and
are not unique to Amazon EKS or AWS.
Note
You must use a kubectl version that is within one minor version difference of your Amazon EKS
cluster control plane. For example, a 1.12 kubectl client should work with Kubernetes 1.11,
1.12, and 1.13 clusters.
macOS
1. Download the Amazon EKS-vended kubectl binary for your cluster's Kubernetes version from
Amazon S3:
• Kubernetes 1.15:
• Kubernetes 1.14:
• Kubernetes 1.13:
• Kubernetes 1.12:
2. (Optional) Verify the downloaded binary with the SHA-256 sum for your binary.
a. Download the SHA-256 sum for your cluster's Kubernetes version for macOS:
• Kubernetes 1.15:
• Kubernetes 1.14:
• Kubernetes 1.13:
• Kubernetes 1.12:
c. Compare the generated SHA-256 sum in the command output against your downloaded
SHA-256 file. The two should match.
3. Apply execute permissions to the binary.
chmod +x ./kubectl
4. Copy the binary to a folder in your PATH. If you have already installed a version of kubectl, then
we recommend creating a $HOME/bin/kubectl and ensuring that $HOME/bin comes first in
your $PATH.
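As a sketch, the copy and PATH setup for a Bash shell could look like this:

mkdir -p $HOME/bin && cp ./kubectl $HOME/bin/kubectl && export PATH=$HOME/bin:$PATH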
5. (Optional) Add the $HOME/bin path to your shell initialization file so that it is configured when
you open a shell.
6. After you install kubectl, you can verify its version with the following command:
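The version command isn't shown in this extract; a typical check is:

kubectl version --short --client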
Linux
1. Download the Amazon EKS-vended kubectl binary for your cluster's Kubernetes version from
Amazon S3:
• Kubernetes 1.15:
• Kubernetes 1.14:
• Kubernetes 1.13:
• Kubernetes 1.12:
2. (Optional) Verify the downloaded binary with the SHA-256 sum for your binary.
a. Download the SHA-256 sum for your cluster's Kubernetes version for Linux:
• Kubernetes 1.15:
• Kubernetes 1.14:
• Kubernetes 1.13:
• Kubernetes 1.12:
c. Compare the generated SHA-256 sum in the command output against your downloaded
SHA-256 file. The two should match.
3. Apply execute permissions to the binary.
chmod +x ./kubectl
4. Copy the binary to a folder in your PATH. If you have already installed a version of kubectl, then
we recommend creating a $HOME/bin/kubectl and ensuring that $HOME/bin comes first in
your $PATH.
5. (Optional) Add the $HOME/bin path to your shell initialization file so that it is configured when
you open a shell.
Note
This step assumes you are using the Bash shell; if you are using another shell, change
the command to use your specific shell initialization file.
6. After you install kubectl, you can verify its version with the following command:
Windows
• Kubernetes 1.15:
• Kubernetes 1.14:
• Kubernetes 1.13:
• Kubernetes 1.12:
3. (Optional) Verify the downloaded binary with the SHA-256 sum for your binary.
a. Download the SHA-256 sum for your cluster's Kubernetes version for Windows:
• Kubernetes 1.15:
• Kubernetes 1.14:
• Kubernetes 1.13:
• Kubernetes 1.12:
Get-FileHash kubectl.exe
c. Compare the generated SHA-256 sum in the command output against your downloaded
SHA-256 file. The two should match, although the PowerShell output will be uppercase.
4. Copy the binary to a folder in your PATH. If you have an existing directory in your PATH that
you use for command line utilities, copy the binary to that directory. Otherwise, complete the
following steps.
a. Create a new directory for your command line binaries, such as C:\bin.
b. Copy the kubectl.exe binary to your new directory.
c. Edit your user or system PATH environment variable to add the new directory to your PATH.
d. Close your PowerShell terminal and open a new one to pick up the new PATH variable.
5. After you install kubectl, you can verify its version with the following command:
Installing aws-iam-authenticator
Amazon EKS uses IAM to provide authentication to your Kubernetes cluster through the AWS IAM
Authenticator for Kubernetes. You can configure the stock kubectl client to work with Amazon EKS by
installing the AWS IAM Authenticator for Kubernetes and modifying your kubectl configuration file to
use it for authentication.
macOS
1. If you do not already have Homebrew installed on your Mac, install it with the following
command.
aws-iam-authenticator help
You can also install the AWS-vended version of the aws-iam-authenticator by following these
steps.
2. (Optional) Verify the downloaded binary with the SHA-256 sum provided in the same bucket
prefix.
c. Compare the generated SHA-256 sum in the command output against your downloaded
aws-iam-authenticator.sha256 file. The two should match.
3. Apply execute permissions to the binary.
chmod +x ./aws-iam-authenticator
aws-iam-authenticator help
Linux
2. (Optional) Verify the downloaded binary with the SHA-256 sum provided in the same bucket
prefix.
c. Compare the generated SHA-256 sum in the command output against your downloaded
aws-iam-authenticator.sha256 file. The two should match.
3. Apply execute permissions to the binary.
chmod +x ./aws-iam-authenticator
aws-iam-authenticator help
Windows
1. If you do not already have Chocolatey installed on your Windows system, see Installing
Chocolatey.
2. Open a PowerShell terminal window and install the aws-iam-authenticator package with
the following command:
aws-iam-authenticator help
1. Open a PowerShell terminal window and download the Amazon EKS-vended aws-iam-
authenticator binary from Amazon S3:
2. (Optional) Verify the downloaded binary with the SHA-256 sum provided in the same bucket
prefix.
Get-FileHash aws-iam-authenticator.exe
c. Compare the generated SHA-256 sum in the command output against your downloaded
SHA-256 file. The two should match, although the PowerShell output will be uppercase.
3. Copy the binary to a folder in your PATH. If you have an existing directory in your PATH that
you use for command line utilities, copy the binary to that directory. Otherwise, complete the
following steps.
a. Create a new directory for your command line binaries, such as C:\bin.
b. Copy the aws-iam-authenticator.exe binary to your new directory.
c. Edit your user or system PATH environment variable to add the new directory to your PATH.
d. Close your PowerShell terminal and open a new one to pick up the new PATH variable.
4. Test that the aws-iam-authenticator binary works.
aws-iam-authenticator help
If you have an existing Amazon EKS cluster, create a kubeconfig file for that cluster. For more
information, see Create a kubeconfig for Amazon EKS (p. 182). Otherwise, see Creating an Amazon
EKS Cluster (p. 21) to create a new Amazon EKS cluster.
Create a kubeconfig for Amazon EKS
This section offers two procedures to create or update your kubeconfig. You can quickly create or update
a kubeconfig with the AWS CLI update-kubeconfig command by using the first procedure, or you can
create a kubeconfig manually with the second procedure.
Amazon EKS uses the aws eks get-token command (available in version 1.18.17 or later of the AWS
CLI) or the AWS IAM Authenticator for Kubernetes with kubectl for cluster authentication. If you have
installed the AWS CLI on your system, then by default the AWS IAM Authenticator for Kubernetes will
use the same credentials that are returned with the following command:
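The command isn't reproduced at this point in the extract; based on the later reference to it in this section, it is:

aws sts get-caller-identity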
For more information, see Configuring the AWS CLI in the AWS Command Line Interface User Guide.
1. Ensure that you have version 1.18.17 or later of the AWS CLI installed. To install or upgrade the AWS
CLI, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide.
Note
Your system's Python version must be 2.7.9 or later. Otherwise, you receive hostname
doesn't match errors with AWS CLI calls to Amazon EKS.
You can check your AWS CLI version with the following command:
aws --version
Important
Package managers such as yum, apt-get, or Homebrew for macOS are often several versions
behind the latest AWS CLI. To ensure that you have the latest version, see Installing the AWS
Command Line Interface in the AWS Command Line Interface User Guide.
2. Use the AWS CLI update-kubeconfig command to create or update your kubeconfig for your cluster, as shown in the example after the following notes.
• By default, the resulting configuration file is created at the default kubeconfig path (.kube/
config) in your home directory or merged with an existing kubeconfig at that location. You can
specify another path with the --kubeconfig option.
• You can specify an IAM role ARN with the --role-arn option to use for authentication when you
issue kubectl commands. Otherwise, the IAM entity in your default AWS CLI or SDK credential
chain is used. You can view your default AWS CLI or SDK identity by running the aws sts get-
caller-identity command.
• For more information, see the help page with the aws eks update-kubeconfig help command or
see update-kubeconfig in the AWS CLI Command Reference.
Note
To run the following command, your account must be assigned the
eks:DescribeCluster IAM permission for the cluster name that you specify.
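A typical invocation (the cluster name and Region are placeholders) looks like this:

aws eks --region region-code update-kubeconfig --name cluster_name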
Note
If you receive the error "aws-iam-authenticator": executable file not found
in $PATH, your kubectl isn't configured for Amazon EKS. For more information, see
Installing aws-iam-authenticator (p. 179).
If you receive any other authorization or resource type errors, see Unauthorized or Access
Denied (kubectl) (p. 275) in the troubleshooting section.
Output:
mkdir -p ~/.kube
2. Open your favorite text editor and copy one of the kubeconfig code blocks below into it,
depending on your preferred client token method.
• To use the AWS CLI aws eks get-token command (requires version 1.18.17 or later of the AWS
CLI):
apiVersion: v1
clusters:
- cluster:
    server: <endpoint-url>
    certificate-authority-data: <base64-encoded-ca-cert>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws
      args:
        - "eks"
        - "get-token"
        - "--cluster-name"
        - "<cluster-name>"
        # - "--role"
        # - "<role-arn>"
      # env:
        # - name: AWS_PROFILE
        #   value: "<aws-profile>"
• To use the AWS IAM Authenticator for Kubernetes:

apiVersion: v1
clusters:
- cluster:
    server: <endpoint-url>
    certificate-authority-data: <base64-encoded-ca-cert>
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: aws
  name: aws
current-context: aws
kind: Config
preferences: {}
users:
- name: aws
  user:
    exec:
      apiVersion: client.authentication.k8s.io/v1alpha1
      command: aws-iam-authenticator
      args:
        - "token"
        - "-i"
        - "<cluster-name>"
        # - "-r"
        # - "<role-arn>"
      # env:
        # - name: AWS_PROFILE
        #   value: "<aws-profile>"
3. Replace the <endpoint-url> with the endpoint URL that was created for your cluster.
4. Replace the <base64-encoded-ca-cert> with the certificateAuthority.data that was
created for your cluster.
5. Replace the <cluster-name> with your cluster name.
6. (Optional) To assume an IAM role to perform cluster operations instead of the default AWS
credential provider chain, uncomment the -r or --role and <role-arn> lines and substitute an
IAM role ARN to use with your user.
7. (Optional) To always use a specific named AWS credential profile (instead of the default AWS
credential provider chain), uncomment the env lines and substitute <aws-profile> with the
profile name to use.
8. Save the file to the default kubectl folder, with your cluster name in the file name. For example, if
your cluster name is devel, save the file to ~/.kube/config-devel.
9. Add that file path to your KUBECONFIG environment variable so that kubectl knows where to look
for your cluster configuration.
export KUBECONFIG=$KUBECONFIG:~/.kube/config-devel
10. (Optional) Add the configuration to your shell initialization file so that it is configured when you
open a shell.
[System.Environment]::SetEnvironmentVariable('KUBECONFIG', $ENV:KUBECONFIG,
'Machine')
Note
If you receive the error "aws-iam-authenticator": executable file not found
in $PATH, your kubectl isn't configured for Amazon EKS. For more information, see
Installing aws-iam-authenticator (p. 179).
If you receive any other authorization or resource type errors, see Unauthorized or Access
Denied (kubectl) (p. 275) in the troubleshooting section.
Output:
Managing Users or IAM Roles for your Cluster
The aws-auth ConfigMap is applied as part of the Getting Started with Amazon EKS (p. 3) guide, which
provides a complete end-to-end walkthrough from creating an Amazon EKS cluster to deploying a
sample Kubernetes application. It is initially created to allow your worker nodes to join your cluster, but
you also use this ConfigMap to add RBAC access to IAM users and roles. If you have not launched worker
nodes and applied the aws-auth ConfigMap, you can do so with the following procedure.
If you receive an error stating "Error from server (NotFound): configmaps "aws-auth"
not found", then proceed with the following steps to apply the stock ConfigMap.
2. Download, edit, and apply the AWS authenticator configuration map.
185
Amazon EKS User Guide
Managing Users or IAM Roles for your Cluster
b. Open the file with your favorite text editor. Replace the <ARN of instance role (not
instance profile)> snippet with the Amazon Resource Name (ARN) of the IAM role that is
associated with your worker nodes, and save the file. You can inspect the AWS CloudFormation
stack outputs for your worker node groups and look for the following values:
• InstanceRoleARN (for worker node groups that were created with eksctl)
• NodeInstanceRole (for worker node groups that were created with Amazon EKS-vended AWS
CloudFormation templates in the AWS Management Console)
Important
Do not modify any other lines in this file.
apiVersion: v1
kind: ConfigMap
metadata:
  name: aws-auth
  namespace: kube-system
data:
  mapRoles: |
    - rolearn: <ARN of instance role (not instance profile)>
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
c. Apply the configuration. This command may take a few minutes to finish.
Note
If you receive the error "aws-iam-authenticator": executable file
not found in $PATH, your kubectl isn't configured for Amazon EKS. For more
information, see Installing aws-iam-authenticator (p. 179).
If you receive any other authorization or resource type errors, see Unauthorized or
Access Denied (kubectl) (p. 275) in the troubleshooting section.
3. Watch the status of your nodes and wait for them to reach the Ready status.
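The watch command isn't included in this extract; for example:

kubectl get nodes --watch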
1. Ensure that the AWS credentials that kubectl is using are already authorized for your cluster. The
IAM user that created the cluster has these permissions by default.
2. Open the aws-auth ConfigMap.
Note
If you receive an error stating "Error from server (NotFound): configmaps "aws-
auth" not found", then use the previous procedure to apply the stock ConfigMap.
Example ConfigMap:
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  mapRoles: |
    - rolearn: arn:aws:iam::111122223333:role/doc-test-worker-nodes-NodeInstanceRole-WDO5P42N3ETB
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
kind: ConfigMap
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"v1","data":{"mapRoles":"- rolearn: arn:aws:iam::111122223333:role/doc-test-worker-nodes-NodeInstanceRole-WDO5P42N3ETB\n  username: system:node:{{EC2PrivateDNSName}}\n  groups:\n    - system:bootstrappers\n    - system:nodes\n"},"kind":"ConfigMap","metadata":{"annotations":{},"name":"aws-auth","namespace":"kube-system"}}
  creationTimestamp: 2018-04-04T18:49:10Z
  name: aws-auth
  namespace: kube-system
  resourceVersion: "780"
  selfLink: /api/v1/namespaces/kube-system/configmaps/aws-auth
  uid: dcc31de5-3838-11e8-af26-02e00430057c
• To add an IAM user: add the user details to the mapUsers section of the ConfigMap, under
data. Add this section if it does not already exist in the file. Each entry supports the following
parameters:
• userarn: The ARN of the IAM user to add.
• username: The user name within Kubernetes to map to the IAM user. By default, the user name
is the ARN of the IAM user.
• groups: A list of groups within Kubernetes to which the user is mapped. For more
information, see Default Roles and Role Bindings in the Kubernetes documentation.
• To add an IAM role (for example, for federated users): add the role details to the mapRoles
section of the ConfigMap, under data. Add this section if it does not already exist in the file. Each
entry supports the following parameters:
• rolearn: The ARN of the IAM role to add.
• username: The user name within Kubernetes to map to the IAM role. By default, the user name
is the ARN of the IAM role.
• groups: A list of groups within Kubernetes to which the role is mapped. For more information,
see Default Roles and Role Bindings in the Kubernetes documentation.
• A mapRoles section that adds the worker node instance role so that worker nodes can register
themselves with the cluster.
• A mapUsers section with the AWS users admin from the default AWS account, and ops-user
from another AWS account. Both users are added to the system:masters group.
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
data:
  mapRoles: |
    - rolearn: arn:aws:iam::555555555555:role/devel-worker-nodes-NodeInstanceRole-74RF4UBDUKL6
      username: system:node:{{EC2PrivateDNSName}}
      groups:
        - system:bootstrappers
        - system:nodes
  mapUsers: |
    - userarn: arn:aws:iam::555555555555:user/admin
      username: admin
      groups:
        - system:masters
    - userarn: arn:aws:iam::111122223333:user/ops-user
      username: ops-user
      groups:
        - system:masters
Installing or Upgrading eksctl
For more information and to see the official documentation, visit https://eksctl.io/.
Choose the tab below that best represents your client setup.
macOS
Important
The current release is a release candidate. To install the release candidate, you must
download an archive file for your operating system from https://github.com/weaveworks/
eksctl/releases/tag/0.15.0-rc.2, extract eksctl, and then execute it, rather than using the
numbered steps below.
The easiest way to get started with Amazon EKS and macOS is by installing eksctl with Homebrew.
The eksctl Homebrew recipe installs eksctl and any other dependencies that are required for
Amazon EKS, such as kubectl and the aws-iam-authenticator.
1. If you do not already have Homebrew installed on macOS, install it with the following
command.
4. Test that your installation was successful with the following command.
eksctl version
Note
The GitTag version should be at least 0.15.0-rc.2. If not, check your terminal
output for any installation or upgrade errors, or manually download an archive of
the release for your operating system from https://github.com/weaveworks/eksctl/
releases, extract eksctl, and then execute it.
Linux
Important
The current release is a release candidate. To install the release candidate, you must
download an archive file for your operating system from https://github.com/weaveworks/
eksctl/releases/tag/0.15.0-rc.2, extract eksctl, and then execute it, rather than using the
numbered steps below.
1. Download and extract the latest release of eksctl with the following command.
3. Test that your installation was successful with the following command.
eksctl version
Note
The GitTag version should be at least 0.15.0-rc.2. If not, check your terminal
output for any installation or upgrade errors, or manually download an archive of
the release for your operating system from https://github.com/weaveworks/eksctl/
releases, extract eksctl, and then execute it.
Windows
Important
The current release is a release candidate. To install the release candidate, you must
download an archive file for your operating system from https://github.com/weaveworks/
eksctl/releases/tag/0.15.0-rc.2, extract eksctl, and then execute it, rather than using the
numbered steps below.
1. If you do not already have Chocolatey installed on your Windows system, see Installing
Chocolatey.
2. Install or upgrade eksctl and the aws-iam-authenticator.
3. Test that your installation was successful with the following command.
eksctl version
Note
The GitTag version should be at least 0.15.0-rc.2. If not, check your terminal
output for any installation or upgrade errors, or manually download an archive of
the release for your operating system from https://github.com/weaveworks/eksctl/
releases, extract eksctl, and then execute it.
Note
If you receive the error "aws-iam-authenticator": executable file not found
in $PATH, your kubectl isn't configured for Amazon EKS. For more information, see
Installing aws-iam-authenticator (p. 179).
If you receive any other authorization or resource type errors, see Unauthorized or Access
Denied (kubectl) (p. 275) in the troubleshooting section.
Output:
Output:
Output:
Output:
Output:
Output:
7. Query the services in your cluster and wait until the External IP column for the guestbook service
is populated.
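The query isn't reproduced in this extract; one way to watch for the external IP, assuming the service is named guestbook, is:

kubectl get service guestbook -o wide --watch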
Note
It might take several minutes before the IP address is available.
Important
If you are unable to connect to the external IP address with your browser, be sure that your
corporate firewall is not blocking non-standard ports, such as 3000. You can try switching to a
guest network to verify.
When you are finished experimenting with your guest book application, you should clean up the
resources that you created for it.
• The following command deletes all of the services and replication controllers for the guest book
application:
Note
If you receive the error "aws-iam-authenticator": executable file not found
in $PATH, your kubectl isn't configured for Amazon EKS. For more information, see
Installing aws-iam-authenticator (p. 179).
If you receive any other authorization or resource type errors, see Unauthorized or Access
Denied (kubectl) (p. 275) in the troubleshooting section.
If you are done with your Amazon EKS cluster, you should delete it and its resources so that you do
not incur additional charges. For more information, see Deleting a Cluster (p. 42).
Choose the tab below that corresponds to your desired installation method:
curl and jq
To install metrics-server from GitHub on an Amazon EKS cluster using curl and jq
If you have a macOS or Linux system with curl, tar, gzip, and the jq JSON parser installed, you
can download, extract, and install the latest release with the following commands. Otherwise, use
the next procedure to download the latest version using a web browser.
1. Open a terminal window and navigate to a directory where you would like to download the
latest metrics-server release.
2. Copy and paste the commands below into your terminal window and press Enter to execute
them. These commands download the latest release, extract it, and apply the version 1.8+
manifests to your cluster.
3. Verify that the metrics-server deployment is running the desired number of pods with the
following command.
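The verification command isn't included here; a typical check is:

kubectl get deployment metrics-server -n kube-system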
Output:
Web browser
To install metrics-server from GitHub on an Amazon EKS cluster using a web browser
1. Download and extract the latest version of the metrics server code from GitHub.
a. Navigate to the latest release page of the metrics-server project on GitHub (https://
github.com/kubernetes-sigs/metrics-server/releases/latest), then choose a source code
archive for the latest release to download it.
Note
If you are downloading to a remote server, you can use the following wget
command, substituting the alternate-colored text with the latest version
number.
b. Navigate to your downloads location and extract the source code archive. For example, if
you downloaded the .tar.gz archive, use the following command to extract (substituting
your release version).
3. Verify that the metrics-server deployment is running the desired number of pods with the
following command.
Output:
Viewing the Raw Metrics
Example output:
...
# HELP rest_client_requests_total Number of HTTP requests, partitioned by status code, method, and host.
# TYPE rest_client_requests_total counter
rest_client_requests_total{code="200",host="127.0.0.1:21362",method="POST"} 4994
rest_client_requests_total{code="200",host="127.0.0.1:443",method="DELETE"} 1
rest_client_requests_total{code="200",host="127.0.0.1:443",method="GET"} 1.326086e+06
rest_client_requests_total{code="200",host="127.0.0.1:443",method="PUT"} 862173
rest_client_requests_total{code="404",host="127.0.0.1:443",method="GET"} 2
rest_client_requests_total{code="409",host="127.0.0.1:443",method="POST"} 3
rest_client_requests_total{code="409",host="127.0.0.1:443",method="PUT"} 8
# HELP ssh_tunnel_open_count Counter of ssh tunnel total open attempts
# TYPE ssh_tunnel_open_count counter
ssh_tunnel_open_count 0
# HELP ssh_tunnel_open_fail_count Counter of ssh tunnel failed open attempts
# TYPE ssh_tunnel_open_fail_count counter
ssh_tunnel_open_fail_count 0
This raw output returns verbatim what the API server exposes. These metrics are represented in a
Prometheus format. This format allows the API server to expose different metrics broken down by line.
Each line includes a metric name, tags, and a value.
metric_name{tag="value"[,...]} value
While this endpoint is useful if you are looking for a specific metric, you typically want to analyze these
metrics over time. To do this, you can deploy Prometheus into your cluster. Prometheus is a monitoring
and time series database that scrapes exposed endpoints and aggregates data, allowing you to filter,
graph, and query the results.
Deploying Prometheus
This topic helps you deploy Prometheus into your cluster with Helm V3. If you already have Helm
installed, you can check your version with the helm version command. Helm is a package manager
for Kubernetes clusters. For more information about Helm and how to install it, see Using Helm with
Amazon EKS (p. 201).
After you configure Helm for your Amazon EKS cluster, you can use it to deploy Prometheus with the
following steps.
2. Deploy Prometheus.
3. Verify that all of the pods in the prometheus namespace are in the READY state.
Output:
4. Use kubectl to port forward the Prometheus console to your local machine.
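A sketch of a typical Helm v3 installation that matches these steps; the prometheus namespace, the stable/prometheus chart, and the gp2 storage class are assumptions:
# Create a namespace for the Prometheus resources
kubectl create namespace prometheus
# Step 2: Deploy Prometheus with Helm
helm install prometheus stable/prometheus \
    --namespace prometheus \
    --set alertmanager.persistentVolume.storageClass="gp2",server.persistentVolume.storageClass="gp2"
# Step 3: Verify that all of the pods in the prometheus namespace are READY
kubectl get pods -n prometheus
# Step 4: Port forward the Prometheus console to your local machine
kubectl --namespace=prometheus port-forward deploy/prometheus-server 9090
With the port forward in place, you can point a web browser at http://localhost:9090 to reach the Prometheus console.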
All of the Kubernetes endpoints that are connected to Prometheus using service discovery are
displayed.
Using Helm with Amazon EKS
• If you're using macOS with Homebrew, install the binaries with the following command.
• If you're using Windows with Chocolatey, install the binaries with the following command.
• If you're using Linux, install the binaries with the following commands.
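Hedged examples of the installation commands for each package manager in the list above; verify them against the Helm documentation for your platform:
# macOS with Homebrew
brew install helm
# Windows with Chocolatey
choco install kubernetes-helm
# Linux, using the Helm project's installer script
curl https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3 | bash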
2. To pick up the new binary in your PATH, close your current terminal window and open a new one.
3. Confirm that Helm is running with the following command.
helm help
4. At this point, you can run any Helm commands (such as helm install chart_name) to install,
modify, delete, or query Helm charts in your cluster. If you're new to Helm and don't have a specific
chart to install, you can:
• Experiment by installing an example chart. See Install an Example Chart in the Helm Quickstart
Guide.
• Install an Amazon EKS chart from the eks-charts GitHub repo or from Helm Hub.
Tutorial: Deploy the Kubernetes Dashboard (Web UI)
Prerequisites
This tutorial assumes the following:
• You have created an Amazon EKS cluster by following the steps in Getting Started with Amazon
EKS (p. 3).
• The security groups for your control plane elastic network interfaces and worker nodes follow the
recommended settings in Amazon EKS Security Group Considerations (p. 154).
• You are using a kubectl client that is configured to communicate with your Amazon EKS cluster (p. 16).
Step 1: Deploy the Kubernetes Metrics Server
curl and jq
To install metrics-server from GitHub on an Amazon EKS cluster using curl and jq
If you have a macOS or Linux system with curl, tar, gzip, and the jq JSON parser installed, you
can download, extract, and install the latest release with the following commands. Otherwise, use
the next procedure to download the latest version using a web browser.
1. Open a terminal window and navigate to a directory where you would like to download the
latest metrics-server release.
2. Copy and paste the commands below into your terminal window and press Enter to execute them. These commands download the latest release, extract it, and apply the version 1.8+ manifests to your cluster.
3. Verify that the metrics-server deployment is running the desired number of pods with the
following command.
Output:
Web browser
To install metrics-server from GitHub on an Amazon EKS cluster using a web browser
1. Download and extract the latest version of the metrics server code from GitHub.
a. Navigate to the latest release page of the metrics-server project on GitHub (https://github.com/kubernetes-sigs/metrics-server/releases/latest), then choose a source code archive for the latest release to download it.
Note
If you are downloading to a remote server, you can use the following wget
command, substituting the alternate-colored text with the latest version
number.
b. Navigate to your downloads location and extract the source code archive. For example, if
you downloaded the .tar.gz archive, use the following command to extract (substituting
your release version).
3. Verify that the metrics-server deployment is running the desired number of pods with the following command.
Output:
Step 2: Deploy the Dashboard
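A sketch of the deployment command, assuming the v2.0.0-beta8 recommended manifest that matches the output below; check the Kubernetes Dashboard project for the current manifest URL:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.0.0-beta8/aio/deploy/recommended.yaml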
Output:
namespace/kubernetes-dashboard created
serviceaccount/kubernetes-dashboard created
service/kubernetes-dashboard created
secret/kubernetes-dashboard-certs created
secret/kubernetes-dashboard-csrf created
secret/kubernetes-dashboard-key-holder created
configmap/kubernetes-dashboard-settings created
role.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrole.rbac.authorization.k8s.io/kubernetes-dashboard created
rolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
clusterrolebinding.rbac.authorization.k8s.io/kubernetes-dashboard created
deployment.apps/kubernetes-dashboard created
service/dashboard-metrics-scraper created
deployment.apps/dashboard-metrics-scraper created
Step 3: Create an eks-admin Service Account and Cluster Role Binding
1. Create a file called eks-admin-service-account.yaml with the text below. This manifest
defines a service account and cluster role binding called eks-admin.
apiVersion: v1
kind: ServiceAccount
metadata:
  name: eks-admin
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: eks-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: eks-admin
  namespace: kube-system
2. Apply the service account and cluster role binding to your cluster.
Output:
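A sketch of the apply command, assuming the file name from step 1, followed by the expected output:
kubectl apply -f eks-admin-service-account.yaml
serviceaccount/eks-admin created
clusterrolebinding.rbac.authorization.k8s.io/eks-admin created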
Step 4: Connect to the Dashboard
1. Retrieve an authentication token for the eks-admin service account. Copy the <authentication_token> value from the output. You use this token to connect to the dashboard.
kubectl -n kube-system describe secret $(kubectl -n kube-system get secret | grep eks-admin | awk '{print $1}')
Output:
Name: eks-admin-token-b5zv4
Namespace: kube-system
Labels: <none>
Annotations: kubernetes.io/service-account.name=eks-admin
kubernetes.io/service-account.uid=bcfe66ac-39be-11e8-97e8-026dce96b6e8
Type: kubernetes.io/service-account-token
Data
====
2. Start the kubectl proxy.
kubectl proxy
3. To access the dashboard endpoint, open the following link with a web browser: http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/#!/login.
4. Choose Token, paste the <authentication_token> output from the previous command into the
Token field, and choose SIGN IN.
Note
It may take a few minutes before CPU and memory metrics appear in the dashboard.
Getting Started with AWS App Mesh and Kubernetes
This topic helps you use AWS App Mesh with an actual service that is running on Kubernetes. You
can either integrate Kubernetes with App Mesh resources by completing the steps in this topic or by
installing the App Mesh Kubernetes integration components. The integration components automatically
complete the tasks in this topic for you, enabling you to integrate with App Mesh directly from
Kubernetes. For more information, see Configure App Mesh Integration with Kubernetes.
Scenario
To illustrate how to use App Mesh with Kubernetes, assume that you have an application with the
following characteristics:
• You want to send 75 percent of the traffic from serviceA to serviceB and 25 percent of the traffic
to serviceBv2 to ensure that serviceBv2 is bug free before you send 100 percent of the traffic
from serviceA to it.
• You want to be able to easily adjust the traffic weighting so that 100 percent of the traffic goes to
serviceBv2 once it's proven to be reliable. Once all traffic is being sent to serviceBv2, you want to
deprecate serviceB.
• You don't want to have to change any existing application code or service discovery registration for
your actual services to meet the previous requirements.
To meet your requirements, you've decided to create an App Mesh service mesh with virtual services,
virtual nodes, a virtual router, and a route. After implementing your mesh, you update the pod specs for
your services to use the Envoy proxy. Once updated, your services communicate with each other through
the Envoy proxy rather than directly with each other.
Prerequisites
App Mesh supports Linux services that are registered with DNS, AWS Cloud Map, or both. To use this
getting started guide, we recommend that you have three existing services that are registered with DNS.
You can create a service mesh and its resources even if the services don't exist, but you can't use the
mesh until you have deployed actual services.
Step 1: Create a Mesh and Virtual Service
If you don't already have Kubernetes running, then you can create an Amazon EKS cluster. For more
information, see Getting Started with Amazon EKS using eksctl. If you don't already have some
services running on Kubernetes, you can deploy a test application. For more information, see Launch a
Guest Book Application.
The remaining steps assume that the actual services are named serviceA, serviceB, and serviceBv2
and that all services are discoverable through a namespace named apps.local.
• A mesh named apps, since all of the services in the scenario are registered to the apps.local
namespace.
• A virtual service named serviceb.apps.local, since the virtual service represents a service that is
discoverable with that name, and you don't want to change your code to reference another name. A
virtual service named servicea.apps.local is added in a later step.
You can use the AWS Management Console or the AWS CLI version 1.18.16 or higher to complete the
following steps. If using the AWS CLI, use the aws --version command to check your installed AWS
CLI version. If you don't have version 1.18.16 or higher installed, you must install or update the AWS CLI.
Select the tab for the tool that you want to use.
AWS CLI
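A sketch of the CLI calls for this step. The input file name is hypothetical; its contents are the mesh name, the virtual service name serviceb.apps.local, and an empty spec (the provider is added in a later step):
aws appmesh create-mesh --mesh-name apps
aws appmesh create-virtual-service --cli-input-json file://create-virtual-service-serviceb.json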
Step 2: Create a Virtual Node
Create a virtual node named serviceB, since one of the virtual nodes represents the actual service
named serviceB. The actual service that the virtual node represents is discoverable through DNS with
a hostname of serviceb.apps.local. Alternately, you can discover actual services using AWS Cloud
Map. The virtual node will listen for traffic using the HTTP/2 protocol on port 80. Other protocols are
also supported, as are health checks. You will create virtual nodes for serviceA and serviceBv2 in a
later step.
AWS CLI
{
"meshName": "apps",
"spec": {
"listeners": [
{
"portMapping": {
"port": 80,
"protocol": "http2"
}
}
],
"serviceDiscovery": {
"dns": {
"hostname": "serviceB.apps.local"
}
}
},
"virtualNodeName": "serviceB"
}
2. Create the virtual node with the create-virtual-node command using the JSON file as input.
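A sketch of the command, using a hypothetical file name for the JSON shown above:
aws appmesh create-virtual-node --cli-input-json file://create-virtual-node-serviceb.json
The create-virtual-router and create-route steps that follow use the same pattern with their own JSON input files.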
Step 3: Create a Virtual Router and Route
• A virtual router named serviceB, since the serviceB.apps.local virtual service doesn't initiate
outbound communication with any other service. Remember that the virtual service that you created
previously is an abstraction of your actual serviceb.apps.local service. The virtual service sends
traffic to the virtual router. The virtual router will listen for traffic using the HTTP/2 protocol on port
80. Other protocols are also supported.
• A route named serviceB. It will route 100 percent of its traffic to the serviceB virtual node. You'll
change the weight in a later step once you've added the serviceBv2 virtual node. Though not
covered in this guide, you can add additional filter criteria for the route and add a retry policy to cause
the Envoy proxy to make multiple attempts to send traffic to a virtual node when it experiences a
communication problem.
AWS CLI
{
"meshName": "apps",
"spec": {
"listeners": [
{
"portMapping": {
"port": 80,
"protocol": "http2"
}
}
]
},
"virtualRouterName": "serviceB"
}
b. Create the virtual router with the create-virtual-router command using the JSON file as
input.
2. Create a route.
{
"meshName" : "apps",
"routeName" : "serviceB",
"spec" : {
"httpRoute" : {
"action" : {
"weightedTargets" : [
{
"virtualNode" : "serviceB",
"weight" : 100
}
]
},
"match" : {
"prefix" : "/"
}
}
},
"virtualRouterName" : "serviceB"
}
b. Create the route with the create-route command using the JSON file as input.
Step 4: Review and Create
Choose Edit if you need to make any changes in any section. Once you're satisfied with the settings, choose Create mesh service.
AWS CLI
Review the settings of the mesh you created with the describe-mesh command.
Review the settings of the virtual service that you created with the describe-virtual-service
command.
Review the settings of the virtual node that you created with the describe-virtual-node command.
Review the settings of the virtual router that you created with the describe-virtual-router command.
Review the settings of the route that you created with the describe-route command.
Step 5: Create Additional Resources
• Create one virtual node named serviceBv2 and another named serviceA. Both virtual nodes listen for requests over HTTP/2 port 80. For the serviceA virtual node, configure a backend of
serviceb.apps.local, since all outbound traffic from the serviceA virtual node is sent to the
virtual service named serviceb.apps.local. Though not covered in this guide, you can also specify
a file path to write access logs to for a virtual node.
• Create one additional virtual service named servicea.apps.local, which will send all traffic
directly to the serviceA virtual node.
• Update the serviceB route that you created in a previous step to send 75 percent of its traffic to the
serviceB virtual node and 25 percent of its traffic to the serviceBv2 virtual node. Over time, you
can continue to modify the weights until serviceBv2 receives 100 percent of the traffic. Once all
traffic is sent to serviceBv2, you can deprecate the serviceB virtual node and actual service. As
you change weights, your code doesn't require any modification, because the serviceb.apps.local
virtual and actual service names don't change. Recall that the serviceb.apps.local virtual service
sends traffic to the virtual router, which routes the traffic to the virtual nodes. The service discovery
names for the virtual nodes can be changed at any time.
AWS CLI
{
"meshName": "apps",
"spec": {
"listeners": [
{
"portMapping": {
"port": 80,
"protocol": "http2"
}
}
],
"serviceDiscovery": {
"dns": {
"hostname": "serviceBv2.apps.local"
}
}
},
"virtualNodeName": "serviceBv2"
}
{
"meshName" : "apps",
"spec" : {
"backends" : [
{
"virtualService" : {
"virtualServiceName" : "serviceb.apps.local"
}
}
],
"listeners" : [
{
"portMapping" : {
"port" : 80,
"protocol" : "http2"
}
}
],
"serviceDiscovery" : {
"dns" : {
"hostname" : "servicea.apps.local"
}
}
},
"virtualNodeName" : "serviceA"
}
3. Update the serviceb.apps.local virtual service that you created in a previous step to send
its traffic to the serviceB virtual router. When the virtual service was originally created, it
didn't send traffic anywhere, since the serviceB virtual router hadn't been created yet.
{
"meshName" : "apps",
"spec" : {
"provider" : {
"virtualRouter" : {
"virtualRouterName" : "serviceB"
}
}
},
"virtualServiceName" : "serviceb.apps.local"
}
{
"meshName" : "apps",
"routeName" : "serviceB",
"spec" : {
"http2Route" : {
"action" : {
"weightedTargets" : [
{
"virtualNode" : "serviceB",
"weight" : 75
},
{
"virtualNode" : "serviceBv2",
"weight" : 25
}
]
},
"match" : {
"prefix" : "/"
}
}
},
"virtualRouterName" : "serviceB"
}
{
"meshName" : "apps",
"spec" : {
"provider" : {
"virtualNode" : {
"virtualNodeName" : "serviceA"
}
}
},
"virtualServiceName" : "servicea.apps.local"
}
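A sketch of the CLI calls that consume the preceding JSON documents, in the order in which the JSON appears; the file names are hypothetical:
aws appmesh create-virtual-node --cli-input-json file://create-virtual-node-servicebv2.json
aws appmesh create-virtual-node --cli-input-json file://create-virtual-node-servicea.json
aws appmesh update-virtual-service --cli-input-json file://update-virtual-service-serviceb.json
aws appmesh update-route --cli-input-json file://update-route-serviceb.json
aws appmesh create-virtual-service --cli-input-json file://create-virtual-service-servicea.json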
Mesh summary
Before you created the service mesh, you had three actual services named servicea.apps.local,
serviceb.apps.local, and servicebv2.apps.local. In addition to the actual services, you now
have a service mesh that contains the following resources that represent the actual services:
• Two virtual services. The proxy sends all traffic from the servicea.apps.local virtual service to the
serviceb.apps.local virtual service through a virtual router.
• Three virtual nodes named serviceA, serviceB, and serviceBv2. The Envoy proxy uses the service
discovery information configured for the virtual nodes to look up the IP addresses of the actual
services.
• One virtual router with one route that instructs the Envoy proxy to route 75 percent of inbound traffic
to the serviceB virtual node and 25 percent of the traffic to the serviceBv2 virtual node.
Step 6: Update Services
• Authorize the Envoy proxy that you deploy with each Kubernetes pod to read the configuration of one or more virtual nodes. For more information about how to authorize the proxy, see Proxy authorization.
• Update each of your existing Kubernetes pod specs to use the Envoy proxy.
App Mesh vends the following custom container images that you must add to your Kubernetes pod
specifications:
• Specify one of the following App Mesh Envoy container images, depending on which region you want
to pull the image from.
• All supported Regions other than me-south-1. You can replace us-west-2 with any region other
than me-south-1.
840364872350.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-envoy:v1.12.2.1-prod
• me-south-1 Region:
772975370895.dkr.ecr.me-south-1.amazonaws.com/aws-appmesh-envoy:v1.12.2.1-prod
Envoy uses the configuration defined in the App Mesh control plane to determine where to send your
application traffic.
You must use the App Mesh Envoy container image until the Envoy project team merges changes that
support App Mesh. For additional details, see the GitHub roadmap issue.
Update each pod specification in your application to include these containers, as shown in the following
example. Once updated, deploy the new specifications to update your services and start using App
Mesh with your Kubernetes application. The following example shows updating the serviceB pod specification to align with the scenario. To complete the scenario, you also need to update the
serviceBv2 and serviceA pod specifications by changing the values appropriately. For your own
applications, substitute your mesh name and virtual node name for the APPMESH_VIRTUAL_NODE_NAME
value, and add a list of ports that your application listens on for the APPMESH_APP_PORTS value.
Substitute the Amazon EC2 instance AWS Region for the AWS_REGION value.
spec:
  containers:
    - name: envoy
      image: 840364872350.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-envoy:v1.12.2.1-prod
      securityContext:
        runAsUser: 1337
      env:
        - name: "APPMESH_VIRTUAL_NODE_NAME"
          value: "mesh/apps/virtualNode/serviceB"
        - name: "ENVOY_LOG_LEVEL"
          value: "info"
        - name: "AWS_REGION"
          value: "aws_region_name"
  initContainers:
    - name: proxyinit
      image: 111345817488.dkr.ecr.us-west-2.amazonaws.com/aws-appmesh-proxy-route-manager:v2
      securityContext:
        capabilities:
          add:
            - NET_ADMIN
      env:
        - name: "APPMESH_START_ENABLED"
          value: "1"
        - name: "APPMESH_IGNORE_UID"
          value: "1337"
        - name: "APPMESH_ENVOY_INGRESS_PORT"
          value: "15000"
        - name: "APPMESH_ENVOY_EGRESS_PORT"
          value: "15001"
        - name: "APPMESH_APP_PORTS"
          value: "application_port_list"
        - name: "APPMESH_EGRESS_IGNORED_IP"
          value: "169.254.169.254"
        - name: "APPMESH_EGRESS_IGNORED_PORTS"
          value: "22"
Tutorial: Configure App Mesh Integration with Kubernetes
• App Mesh controller for Kubernetes – The controller is accompanied by the deployment of three
Kubernetes custom resource definitions: mesh, virtual service, and virtual node. The
controller watches for creation, modification, and deletion of the custom resources and makes
changes to the corresponding App Mesh mesh, virtual service (including virtual router and
route), and virtual node resources through the App Mesh API. To learn more or contribute to the
controller, see the GitHub project.
• App Mesh sidecar injector for Kubernetes – The injector installs as a webhook and injects the App
Mesh sidecar container images into Kubernetes pods running in specific, labeled namespaces. To learn
more or contribute, see the GitHub project.
The features discussed in this topic are available as an open-source beta. This means that these
features are well tested. Support for the features will not be dropped, though details may change. If
the schema or semantics of a feature changes, instructions for migrating to the next version will be
provided. This migration may require deleting, editing, and re-creating Kubernetes API objects.
Prerequisites
To use the controller and sidecar injector, you must have the following resources:
• An existing Kubernetes cluster running version 1.12 or later. If you don't have an existing cluster, you
can deploy one using the Getting Started with Amazon EKS guide.
• A kubectl client that is configured to communicate with your Kubernetes cluster. If you're using
Amazon Elastic Kubernetes Service, you can use the instructions for installing kubectl and
configuring a kubeconfig file.
• jq and OpenSSL installed.
Step 1: Install the Controller and Custom Resources
1. The controller requires that your account and your Kubernetes worker nodes are able to work with
App Mesh resources. Attach the AWSAppMeshFullAccess policy to the role that is attached to your
Kubernetes worker nodes. If you are using a pod identity solution, make sure that the controller pod
is bound to the policy.
2. To create the Kubernetes custom resources and launch the controller, download the following yaml
file and apply it to your cluster with the following command.
curl https://raw.githubusercontent.com/aws/aws-app-mesh-controller-for-k8s/master/deploy/all.yaml | kubectl apply -f -
A Kubernetes namespace named appmesh-system is created and a container running the controller
is deployed into the namespace.
3. Confirm that the controller is running with the following command.
4. Confirm that the Kubernetes custom resources for App Mesh were created with the following
command.
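A sketch of typical checks for steps 3 and 4; the appmesh-controller deployment name is an assumption:
# Step 3: confirm that the controller deployment finished rolling out
kubectl rollout status deployment appmesh-controller -n appmesh-system
# Step 4: list the custom resource definitions that the controller created
kubectl get crd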
If the custom resources were created, output similar to the following is returned.
NAME CREATED AT
meshes.appmesh.k8s.aws 2019-05-08T14:17:26Z
virtualnodes.appmesh.k8s.aws 2019-05-08T14:17:26Z
virtualservices.appmesh.k8s.aws 2019-05-08T14:17:26Z
Step 2: Install the Sidecar Injector
1. Export the name of the mesh you want to create with the following command.
export MESH_NAME=my-mesh
2. Export the region of the mesh that you want to create with the following command. Replace
region with the Region that your Kubernetes cluster is deployed in.
export MESH_REGION=region
3. Download and execute the sidecar injector installation script with the following command.
curl https://raw.githubusercontent.com/aws/aws-app-mesh-inject/master/scripts/install.sh | bash
A container with the sidecar injector is deployed into the Kubernetes namespace named appmesh-system. If the injector successfully installed, the last several lines of the output returned are similar to the following text.
deployment.apps/appmesh-inject created
mutatingwebhookconfiguration.admissionregistration.k8s.io/appmesh-inject created
waiting for aws-app-mesh-inject to start
Step 3: Configure App Mesh
Create a Mesh
When you create a mesh custom resource, you trigger the creation of an App Mesh mesh. The mesh
name that you specify must be the same as the mesh name you exported when you installed the sidecar
injector (p. 218). If the mesh name that you specify already exists, a new mesh is not created.
apiVersion: appmesh.k8s.aws/v1beta1
kind: Mesh
metadata:
  name: my-mesh
Create a Virtual Service
apiVersion: appmesh.k8s.aws/v1beta1
kind: VirtualService
metadata:
  name: my-svc-a
  namespace: my-namespace
spec:
  meshName: my-mesh
  routes:
    - name: route-to-svc-a
      http:
        match:
          prefix: /
        action:
          weightedTargets:
            - virtualNodeName: my-app-a
              weight: 1
Create a Virtual Node
apiVersion: appmesh.k8s.aws/v1beta1
kind: VirtualNode
metadata:
  name: my-app-a
  namespace: my-namespace
spec:
  meshName: my-mesh
  listeners:
    - portMapping:
        port: 9000
        protocol: http
  serviceDiscovery:
    dns:
      hostName: my-app-a.my-namespace.svc.cluster.local
  backends:
    - virtualService:
        virtualServiceName: my-svc-a
Sidecar Injection
You enable sidecar injection for a Kubernetes namespace. When necessary, you can override the injector's
default behavior for each pod you deploy in a Kubernetes namespace that you've enabled the injector
for.
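Injection is switched on with a namespace label; a sketch, using a hypothetical namespace name:
kubectl label namespace my-namespace appmesh.k8s.aws/sidecarInjectorWebhook=enabled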
The App Mesh sidecar container images will be automatically injected into each pod that you deploy into
the namespace.
• appmesh.k8s.aws/mesh: mesh-name – Add when you want to use a different mesh name than the one
that you specified when you installed the injector.
• appmesh.k8s.aws/ports: "ports" – Specify particular ports when you don't want all of the container
ports defined in a pod spec passed to the sidecars as application ports.
• appmesh.k8s.aws/egressIgnoredPorts: ports – Specify a comma separated list of port numbers for
outbound traffic that you want ignored. By default all outbound traffic ports will be routed, except
port 22 (SSH).
• appmesh.k8s.aws/virtualNode: virtual-node-name – Specify your own name if you don't want the
virtual node name passed to the sidecars to be <deployment name>--<namespace>.
• appmesh.k8s.aws/sidecarInjectorWebhook: disabled – Add when you don't want the injector enabled for
a pod.
apiVersion: appmesh.k8s.aws/v1beta1
kind: Deployment
spec:
  metadata:
    annotations:
      appmesh.k8s.aws/mesh: my-mesh2
      appmesh.k8s.aws/ports: "8079,8080"
      appmesh.k8s.aws/egressIgnoredPorts: "3306"
      appmesh.k8s.aws/virtualNode: my-app
      appmesh.k8s.aws/sidecarInjectorWebhook: disabled
Prerequisites
Before you deploy the sample application, you must meet the following prerequisites:
• Meet all of the prerequisites in Tutorial: Configure App Mesh Integration with Kubernetes (p. 217).
• Have the App Mesh controller for Kubernetes and the App Mesh sidecar injector for Kubernetes
installed and configured. When you install the sidecar injector, specify color-mesh as the name of your
mesh. To learn more about the controller and sidecar injector and how to install and configure them,
see Tutorial: Configure App Mesh Integration with Kubernetes (p. 217).
• ColorGateway – A simple http service written in Go that is exposed to external clients and that
responds to http://service-name:port/color. The gateway responds with a color retrieved from color-
teller and a histogram of colors observed at the server that responded up to the point when you made
the request.
• ColorTeller – A simple http service written in Go that is configured to return a color. Multiple variants
of the service are deployed. Each service is configured to return a specific color.
1. To deploy the color mesh sample application, download the following file and apply it to your
Kubernetes cluster with the following command.
curl https://raw.githubusercontent.com/aws/aws-app-mesh-controller-for-k8s/master/examples/color.yaml | kubectl apply -f -
2. View the resources deployed by the sample application with the following command.
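A sketch of a command that lists both the App Mesh custom resources and the native resources; the exact resource type list is an assumption:
kubectl get virtualservices,virtualnodes,meshes,deployments,pods,services -n appmesh-demo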
In the output, you see a collection of virtual services, virtual nodes, and mesh custom resources
along with native Kubernetes deployments, pods, and services. Your output will be similar to the
following output.
NAME AGE
virtualservice.appmesh.k8s.aws/colorgateway.appmesh-demo 37s
virtualservice.appmesh.k8s.aws/colorteller.appmesh-demo 37s
NAME AGE
mesh.appmesh.k8s.aws/color-mesh 38s
NAME AGE
virtualnode.appmesh.k8s.aws/colorgateway 39s
virtualnode.appmesh.k8s.aws/colorteller 39s
virtualnode.appmesh.k8s.aws/colorteller-black 39s
virtualnode.appmesh.k8s.aws/colorteller-blue 39s
virtualnode.appmesh.k8s.aws/colorteller-red 38s
You can use the AWS Management Console or AWS CLI to see the App Mesh mesh, virtual
service, virtual router, route, and virtual node resources that were automatically created
by the controller. All of the resources were deployed to the appmesh-demo namespace, which was
labelled with appmesh.k8s.aws/sidecarInjectorWebhook: enabled. Since the injector saw
this label for the namespace, it injected the App Mesh sidecar container images into each of the
pods. Using kubectl describe pod <pod-name> -n appmesh-demo, you can see that the App
Mesh sidecar container images are included in each of the pods that were deployed.
Run Application
Complete the following steps to run the application.
1. In a terminal, use the following command to create a container in the appmesh-demo namespace
that has curl installed and open a shell to it. In later steps, this terminal is referred to as Terminal A.
2. From Terminal A, run the following command to curl the color gateway in the color mesh application
100 times. The gateway routes traffic to separate virtual nodes that return either white, black, or
blue as a response.
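A sketch of the commands for these two steps; the curler image and the colorgateway address and port are assumptions:
# Terminal A: start a pod that has curl and open a shell in it
kubectl run -it curler --image=tutum/curl -n appmesh-demo -- /bin/bash
# From the shell in Terminal A: call the color gateway 100 times
for i in {1..100}; do curl colorgateway:9080/color; echo; done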
100 responses are returned. Each response looks similar to the following text:
In this line of output, the colorgateway routed the request to the blue virtual node. The numbers for
each color denote the percentage of responses from each virtual node. The number for each color in
each response is cumulative over time. The percentage is similar for each color because, by default,
the weighting defined for each virtual node is the same in the color.yaml file you used to install the
sample application.
Change Configuration
Change the configuration and run the application again to see the effect of the changes.
1. In a separate terminal from Terminal A, edit the colorteller.appmesh-demo virtual service with the
following command.
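A sketch of the edit command, assuming the resource lives in the appmesh-demo namespace:
kubectl edit virtualservice colorteller.appmesh-demo -n appmesh-demo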
In the editor, you can see that the weight value of each virtualNodeName is 1. Because the weight
of each virtual node is the same, traffic routed to each virtual node is approximately even. To route
all traffic to the black node only, change the values for colorteller.appmesh-demo and colorteller-
blue to 0, as shown in the following text. Save the configuration and exit the editor.
spec:
  meshName: color-mesh
  routes:
    - http:
        action:
          weightedTargets:
            - virtualNodeName: colorteller.appmesh-demo
              weight: 0
            - virtualNodeName: colorteller-blue
              weight: 0
            - virtualNodeName: colorteller-black.appmesh-demo
              weight: 1
This time, all lines of output look similar to the following text.
Black is the response every time because the gateway is now routing all traffic to the black virtual
node. Even though all traffic is now going to black, the white and blue virtual nodes still have
response percentages, because the numbers are based on relative percentages over time. When you
executed the requests in a previous step, white and blue responded, which is why they still have
response percentages. You can see that the relative percentages decrease for white and blue with
each response, while the percentage for black increases.
Remove Application
When you've finished with the sample application, you can remove it by completing the following steps.
1. Use the following commands to remove the sample application and the App Mesh resources that
were created.
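A sketch of the cleanup commands, assuming the namespace and mesh names used earlier in this walkthrough:
kubectl delete namespace appmesh-demo
kubectl delete mesh color-mesh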
2. Optional: If you want to remove the controller and sidecar injector, see Remove integration
components (p. ).
Deep Learning Containers
To get started using AWS Deep Learning Containers on Amazon EKS, see AWS Deep Learning Containers
on Amazon EKS in the AWS Deep Learning AMI Developer Guide.
Security in Amazon EKS
Security is a shared responsibility between AWS and you. The shared responsibility model describes this
as security of the cloud and security in the cloud:
• Security of the cloud – AWS is responsible for protecting the infrastructure that runs AWS services
in the AWS Cloud. For Amazon EKS, AWS is responsible for the Kubernetes control plane, which
includes the control plane nodes and etcd database. Third-party auditors regularly test and verify the
effectiveness of our security as part of the AWS compliance programs. To learn about the compliance
programs that apply to Amazon EKS, see AWS Services in Scope by Compliance Program.
• Security in the cloud – Your responsibility includes the following areas.
• The security configuration of the data plane, including the configuration of the security groups that
allow traffic to pass from the Amazon EKS control plane into the customer VPC
• The configuration of the worker nodes and the containers themselves
• The worker node guest operating system (including updates and security patches)
• Other associated application software:
• Setting up and managing network controls, such as firewall rules
• Managing platform-level identity and access management, either with or in addition to IAM
• The sensitivity of your data, your company’s requirements, and applicable laws and regulations
This documentation helps you understand how to apply the shared responsibility model when using
Amazon EKS. The following topics show you how to configure Amazon EKS to meet your security and
compliance objectives. You also learn how to use other AWS services that help you to monitor and secure
your Amazon EKS resources.
Topics
• Identity and Access Management for Amazon EKS (p. 226)
• Logging and Monitoring in Amazon EKS (p. 255)
• Compliance Validation for Amazon EKS (p. 256)
• Resilience in Amazon EKS (p. 256)
• Infrastructure Security in Amazon EKS (p. 257)
• Configuration and Vulnerability Analysis in Amazon EKS (p. 257)
• Pod Security Policy (p. 258)
Identity and Access Management for Amazon EKS
Audience
How you use AWS Identity and Access Management (IAM) differs, depending on the work you do in
Amazon EKS.
Service user – If you use the Amazon EKS service to do your job, then your administrator provides you
with the credentials and permissions that you need. As you use more Amazon EKS features to do your
work, you might need additional permissions. Understanding how access is managed can help you
request the right permissions from your administrator. If you cannot access a feature in Amazon EKS, see
Troubleshooting Amazon EKS Identity and Access (p. 255).
Service administrator – If you're in charge of Amazon EKS resources at your company, you probably
have full access to Amazon EKS. It's your job to determine which Amazon EKS features and resources
your employees should access. You must then submit requests to your IAM administrator to change the
permissions of your service users. Review the information on this page to understand the basic concepts
of IAM. To learn more about how your company can use IAM with Amazon EKS, see How Amazon EKS
Works with IAM (p. 230).
IAM administrator – If you're an IAM administrator, you might want to learn details about how you can
write policies to manage access to Amazon EKS. To view example Amazon EKS identity-based policies
that you can use in IAM, see Amazon EKS Identity-Based Policy Examples (p. 233).
Authenticating With Identities
You must be authenticated (signed in to AWS) as the AWS account root user, an IAM user, or by assuming
an IAM role. You can also use your company's single sign-on authentication, or even sign in using Google
or Facebook. In these cases, your administrator previously set up identity federation using IAM roles.
When you access AWS using credentials from another company, you are assuming a role indirectly.
To sign in directly to the AWS Management Console, use your password with your root user email or your
IAM user name. You can access AWS programmatically using your root user or IAM user access keys. AWS
provides SDK and command line tools to cryptographically sign your request using your credentials. If
you don’t use AWS tools, you must sign the request yourself. Do this using Signature Version 4, a protocol
for authenticating inbound API requests. For more information about authenticating requests, see
Signature Version 4 Signing Process in the AWS General Reference.
Regardless of the authentication method that you use, you might also be required to provide additional
security information. For example, AWS recommends that you use multi-factor authentication (MFA) to
increase the security of your account. To learn more, see Using Multi-Factor Authentication (MFA) in AWS
in the IAM User Guide.
When you generate access keys for an IAM user, make sure that you view and securely save the key pair. You cannot recover the secret access key in the future. Instead, you must generate a new access key pair.
An IAM group is an identity that specifies a collection of IAM users. You can't sign in as a group. You
can use groups to specify permissions for multiple users at a time. Groups make permissions easier to
manage for large sets of users. For example, you could have a group named IAMAdmins and give that
group permissions to administer IAM resources.
Users are different from roles. A user is uniquely associated with one person or application, but a role
is intended to be assumable by anyone who needs it. Users have permanent long-term credentials, but
roles provide temporary credentials. To learn more, see When to Create an IAM User (Instead of a Role) in
the IAM User Guide.
IAM Roles
An IAM role is an identity within your AWS account that has specific permissions. It is similar to an IAM
user, but is not associated with a specific person. You can temporarily assume an IAM role in the AWS
Management Console by switching roles. You can assume a role by calling an AWS CLI or AWS API
operation or by using a custom URL. For more information about methods for using roles, see Using IAM
Roles in the IAM User Guide.
IAM roles with temporary credentials are useful in the following situations:
• Temporary IAM user permissions – An IAM user can assume an IAM role to temporarily take on
different permissions for a specific task.
• Federated user access – Instead of creating an IAM user, you can use existing identities from AWS
Directory Service, your enterprise user directory, or a web identity provider. These are known as
federated users. AWS assigns a role to a federated user when access is requested through an identity
provider. For more information about federated users, see Federated Users and Roles in the IAM User
Guide.
• Cross-account access – You can use an IAM role to allow someone (a trusted principal) in a different
account to access resources in your account. Roles are the primary way to grant cross-account access.
However, with some AWS services, you can attach a policy directly to a resource (instead of using a role
as a proxy). To learn the difference between roles and resource-based policies for cross-account access,
see How IAM Roles Differ from Resource-based Policies in the IAM User Guide.
• AWS service access – A service role is an IAM role that a service assumes to perform actions in your
account on your behalf. When you set up some AWS service environments, you must define a role
for the service to assume. This service role must include all the permissions that are required for the
service to access the AWS resources that it needs. Service roles vary from service to service, but many
allow you to choose your permissions as long as you meet the documented requirements for that
service. Service roles provide access only within your account and cannot be used to grant access
to services in other accounts. You can create, modify, and delete a service role from within IAM. For
example, you can create a role that allows Amazon Redshift to access an Amazon S3 bucket on your
behalf and then load data from that bucket into an Amazon Redshift cluster. For more information, see
Creating a Role to Delegate Permissions to an AWS Service in the IAM User Guide.
• Applications running on Amazon EC2 – You can use an IAM role to manage temporary credentials
for applications that are running on an EC2 instance and making AWS CLI or AWS API requests.
This is preferable to storing access keys within the EC2 instance. To assign an AWS role to an EC2
instance and make it available to all of its applications, you create an instance profile that is attached
to the instance. An instance profile contains the role and enables programs that are running on the
EC2 instance to get temporary credentials. For more information, see Using an IAM Role to Grant
Permissions to Applications Running on Amazon EC2 Instances in the IAM User Guide.
To learn whether to use IAM roles, see When to Create an IAM Role (Instead of a User) in the IAM User
Guide.
Managing Access Using Policies
An IAM administrator can use policies to specify who has access to AWS resources, and what actions
they can perform on those resources. Every IAM entity (user or role) starts with no permissions. In other
words, by default, users can do nothing, not even change their own password. To give a user permission
to do something, an administrator must attach a permissions policy to a user. Or the administrator can
add the user to a group that has the intended permissions. When an administrator gives permissions to a
group, all users in that group are granted those permissions.
IAM policies define permissions for an action regardless of the method that you use to perform the
operation. For example, suppose that you have a policy that allows the iam:GetRole action. A user with
that policy can get role information from the AWS Management Console, the AWS CLI, or the AWS API.
Identity-Based Policies
Identity-based policies are JSON permissions policy documents that you can attach to an identity, such
as an IAM user, role, or group. These policies control what actions that identity can perform, on which
resources, and under what conditions. To learn how to create an identity-based policy, see Creating IAM
Policies in the IAM User Guide.
Identity-based policies can be further categorized as inline policies or managed policies. Inline policies
are embedded directly into a single user, group, or role. Managed policies are standalone policies that
you can attach to multiple users, groups, and roles in your AWS account. Managed policies include AWS
managed policies and customer managed policies. To learn how to choose between a managed policy or
an inline policy, see Choosing Between Managed Policies and Inline Policies in the IAM User Guide.
Resource-Based Policies
Resource-based policies are JSON policy documents that you attach to a resource such as an Amazon S3
bucket. Service administrators can use these policies to define what actions a specified principal (account
member, user, or role) can perform on that resource and under what conditions. Resource-based policies
are inline policies. There are no managed resource-based policies.
• Permissions boundaries – A permissions boundary is an advanced feature in which you set the
maximum permissions that an identity-based policy can grant to an IAM entity (IAM user or role).
You can set a permissions boundary for an entity. The resulting permissions are the intersection of
entity's identity-based policies and its permissions boundaries. Resource-based policies that specify
the user or role in the Principal field are not limited by the permissions boundary. An explicit deny
in any of these policies overrides the allow. For more information about permissions boundaries, see
Permissions Boundaries for IAM Entities in the IAM User Guide.
• Service control policies (SCPs) – SCPs are JSON policies that specify the maximum permissions for
an organization or organizational unit (OU) in AWS Organizations. AWS Organizations is a service for
grouping and centrally managing multiple AWS accounts that your business owns. If you enable all
features in an organization, then you can apply service control policies (SCPs) to any or all of your
accounts. The SCP limits permissions for entities in member accounts, including each AWS account
root user. For more information about Organizations and SCPs, see How SCPs Work in the AWS
Organizations User Guide.
• Session policies – Session policies are advanced policies that you pass as a parameter when you
programmatically create a temporary session for a role or federated user. The resulting session's
permissions are the intersection of the user or role's identity-based policies and the session policies.
Permissions can also come from a resource-based policy. An explicit deny in any of these policies
overrides the allow. For more information, see Session Policies in the IAM User Guide.
How Amazon EKS Works with IAM
Topics
• Amazon EKS Identity-Based Policies (p. 230)
• Amazon EKS Resource-Based Policies (p. 232)
• Authorization Based on Amazon EKS Tags (p. 232)
• Amazon EKS IAM Roles (p. 232)
Actions
The Action element of an IAM identity-based policy describes the specific action or actions that will be
allowed or denied by the policy. Policy actions usually have the same name as the associated AWS API
operation. The action is used in a policy to grant permissions to perform the associated operation.
Policy actions in Amazon EKS use the following prefix before the action: eks:. For example, to
grant someone permission to get descriptive information about an Amazon EKS cluster, you include
the DescribeCluster action in their policy. Policy statements must include either an Action or
NotAction element.
To specify multiple actions in a single statement, separate them with commas as follows:
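A sketch of the usual form, with placeholder action names:
"Action": [
    "eks:action1",
    "eks:action2"
]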
You can specify multiple actions using wildcards (*). For example, to specify all actions that begin with
the word Describe, include the following action:
"Action": "eks:Describe*"
To see a list of Amazon EKS actions, see Actions Defined by Amazon Elastic Kubernetes Service in the
IAM User Guide.
Resources
The Resource element specifies the object or objects to which the action applies. Statements must
include either a Resource or a NotResource element. You specify a resource using an ARN or using the
wildcard (*) to indicate that the statement applies to all resources.
arn:${Partition}:eks:${Region}:${Account}:cluster/${ClusterName}
For more information about the format of ARNs, see Amazon Resource Names (ARNs) and AWS Service
Namespaces.
For example, to specify the dev cluster in your statement, use the following ARN:
"Resource": "arn:aws:eks:region-code:123456789012:cluster/dev"
To specify all clusters that belong to a specific account and Region, use the wildcard (*):
"Resource": "arn:aws:eks:region-code:123456789012:cluster/*"
Some Amazon EKS actions, such as those for creating resources, cannot be performed on a specific
resource. In those cases, you must use the wildcard (*).
"Resource": "*"
To see a list of Amazon EKS resource types and their ARNs, see Resources Defined by Amazon Elastic
Kubernetes Service in the IAM User Guide. To learn with which actions you can specify the ARN of each
resource, see Actions Defined by Amazon Elastic Kubernetes Service.
Condition Keys
Amazon EKS does not provide any service-specific condition keys, but it does support using some global
condition keys. To see all AWS global condition keys, see AWS Global Condition Context Keys in the IAM
User Guide.
Examples
To view examples of Amazon EKS identity-based policies, see Amazon EKS Identity-Based Policy
Examples (p. 233).
When you create an Amazon EKS cluster, the IAM entity user or role, such as a federated user that
creates the cluster, is automatically granted system:masters permissions in the cluster's RBAC
configuration. To grant additional AWS users or roles the ability to interact with your cluster, you must
edit the aws-auth ConfigMap within Kubernetes.
For additional information about working with the ConfigMap, see Managing Users or IAM Roles for your
Cluster (p. 185).
Service-Linked Roles
Service-linked roles allow AWS services to access resources in other services to complete an action on
your behalf. Service-linked roles appear in your IAM account and are owned by the service. An IAM
administrator can view but not edit the permissions for service-linked roles.
Amazon EKS supports service-linked roles. For details about creating or managing Amazon EKS service-
linked roles, see Using Service-Linked Roles for Amazon EKS (p. 235).
Service Roles
This feature allows a service to assume a service role on your behalf. This role allows the service to
access resources in other services to complete an action on your behalf. Service roles appear in your
IAM account and are owned by the account. This means that an IAM administrator can change the
permissions for this role. However, doing so might break the functionality of the service.
Amazon EKS supports service roles. For more information, see the section called “Service IAM
Role” (p. 237) and the section called “Worker Node IAM Role” (p. 239).
Amazon EKS Identity-Based Policy Examples
To learn how to create an IAM identity-based policy using these example JSON policy documents, see
Creating Policies on the JSON Tab in the IAM User Guide.
When you create an Amazon EKS cluster, the IAM entity user or role, such as a federated user that
creates the cluster, is automatically granted system:masters permissions in the cluster's RBAC
configuration. To grant additional AWS users or roles the ability to interact with your cluster, you must
edit the aws-auth ConfigMap within Kubernetes.
For additional information about working with the ConfigMap, see Managing Users or IAM Roles for your
Cluster (p. 185).
Topics
• Policy Best Practices (p. 233)
• Using the Amazon EKS Console (p. 233)
• Allow Users to View Their Own Permissions (p. 234)
• Update a Kubernetes cluster (p. 235)
• List or describe all clusters (p. 235)
Policy Best Practices
• Get Started Using AWS Managed Policies – To start using Amazon EKS quickly, use AWS managed
policies to give your employees the permissions they need. These policies are already available in
your account and are maintained and updated by AWS. For more information, see Get Started Using
Permissions With AWS Managed Policies in the IAM User Guide.
• Grant Least Privilege – When you create custom policies, grant only the permissions required
to perform a task. Start with a minimum set of permissions and grant additional permissions as
necessary. Doing so is more secure than starting with permissions that are too lenient and then trying
to tighten them later. For more information, see Grant Least Privilege in the IAM User Guide.
• Enable MFA for Sensitive Operations – For extra security, require IAM users to use multi-factor
authentication (MFA) to access sensitive resources or API operations. For more information, see Using
Multi-Factor Authentication (MFA) in AWS in the IAM User Guide.
• Use Policy Conditions for Extra Security – To the extent that it's practical, define the conditions under
which your identity-based policies allow access to a resource. For example, you can write conditions to
specify a range of allowable IP addresses that a request must come from. You can also write conditions
to allow requests only within a specified date or time range, or to require the use of SSL or MFA. For
more information, see IAM JSON Policy Elements: Condition in the IAM User Guide.
Using the Amazon EKS Console
To access the Amazon EKS console, you must have a minimum set of permissions. These permissions must allow you to list and view details about the Amazon EKS resources in your AWS account. If you
create an identity-based policy that is more restrictive than the minimum required permissions, the
console won't function as intended for entities (IAM users or roles) with that policy.
To ensure that those entities can still use the Amazon EKS console, create a policy with your own unique
name, such as AmazonEKSAdminPolicy. Attach the policy to the entities. For more information, see
Adding Permissions to a User in the IAM User Guide:
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"eks:*"
],
"Resource": "*"
}
]
}
You don't need to allow minimum console permissions for users that are making calls only to the AWS
CLI or the AWS API. Instead, allow access to only the actions that match the API operation that you're
trying to perform.
Allow Users to View Their Own Permissions
This example shows how you might create a policy that allows IAM users to view the inline and managed policies that are attached to their user identity.
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "ViewOwnUserInfo",
"Effect": "Allow",
"Action": [
"iam:GetUserPolicy",
"iam:ListGroupsForUser",
"iam:ListAttachedUserPolicies",
"iam:ListUserPolicies",
"iam:GetUser"
],
"Resource": [
"arn:aws:iam::*:user/${aws:username}"
]
},
{
"Sid": "NavigateInConsole",
"Effect": "Allow",
"Action": [
"iam:GetGroupPolicy",
"iam:GetPolicyVersion",
"iam:GetPolicy",
"iam:ListAttachedGroupPolicies",
"iam:ListGroupPolicies",
"iam:ListPolicyVersions",
"iam:ListPolicies",
"iam:ListUsers"
],
"Resource": "*"
}
]
}
Update a Kubernetes cluster
This example shows how you might create a policy that allows a user to update the Kubernetes version of the dev cluster.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "eks:UpdateClusterVersion",
"Resource": "arn:aws:eks:*:111122223333:cluster/dev"
}
]
}
List or describe all clusters
This example shows how you might create a policy that allows a user read-only access to list and describe all clusters.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"eks:DescribeCluster",
"eks:ListClusters"
],
"Resource": "*"
}
]
}
Using Service-Linked Roles for Amazon EKS
A service-linked role makes setting up Amazon EKS easier because you don’t have to manually add
the necessary permissions. Amazon EKS defines the permissions of its service-linked roles, and unless
defined otherwise, only Amazon EKS can assume its roles. The defined permissions include the trust
policy and the permissions policy, and that permissions policy cannot be attached to any other IAM
entity.
You can delete a service-linked role only after first deleting its related resources. This protects your
Amazon EKS resources because you can't inadvertently remove permission to access the resources.
For information about other services that support service-linked roles, see AWS Services That Work with
IAM and look for the services that have Yes in the Service-Linked Role column. Choose a Yes with a link
to view the service-linked role documentation for that service.
• eks-nodegroup.amazonaws.com
The following role permissions policy allows Amazon EKS to complete AWS API actions on the specified
resources:
• AWSServiceRoleForAmazonEKSNodegroup
You must configure permissions to allow an IAM entity such as a user, group, or role to create, edit, or
delete a service-linked role. For more information, see Service-Linked Role Permissions in the IAM User
Guide.
If you delete this service-linked role, and then need to create it again, you can use the same process to
recreate the role in your account. When you create another managed node group, Amazon EKS creates
the service-linked role for you again.
5. Repeat this procedure for any other node groups in the cluster and for any other clusters in your
account.
Use the IAM console, the AWS CLI, or the AWS API to delete the
AWSServiceRoleForAmazonEKSNodegroup service-linked role. For more information, see Deleting a
Service-Linked Role in the IAM User Guide.
Amazon EKS Service IAM Role
• AmazonEKSServicePolicy
• AmazonEKSClusterPolicy
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "eks.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
AWS CloudFormation
1. Save the following AWS CloudFormation template to a text file on your local system.
---
AWSTemplateFormatVersion: '2010-09-09'
Description: 'Amazon EKS Service Role'
Resources:
  eksServiceRole:
    Type: AWS::IAM::Role
    Properties:
      AssumeRolePolicyDocument:
        Version: '2012-10-17'
        Statement:
          - Effect: Allow
            Principal:
              Service:
                - eks.amazonaws.com
            Action:
              - sts:AssumeRole
      ManagedPolicyArns:
        - arn:aws:iam::aws:policy/AmazonEKSServicePolicy
        - arn:aws:iam::aws:policy/AmazonEKSClusterPolicy
Outputs:
  RoleArn:
    Description: The role that Amazon EKS will use to create AWS resources for Kubernetes clusters
    Value: !GetAtt eksServiceRole.Arn
Amazon EKS Worker Node IAM Role
• AmazonEKSWorkerNodePolicy
• AmazonEKS_CNI_Policy
• AmazonEC2ContainerRegistryReadOnly
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "ec2.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
To create your Amazon EKS worker node role in the IAM console
AWS CloudFormation
To create your Amazon EKS worker node role using AWS CloudFormation
https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2019-11-15/amazon-eks-nodegroup-role.yaml
5. On the Specify stack details page, for Stack name enter a name such as eks-node-group-
instance-role and choose Next.
6. (Optional) On the Configure stack options page, you can choose to tag your stack resources.
Choose Next.
7. On the Review page, check the box in the Capabilities section and choose Create stack.
8. When your stack is created, select it in the console and choose Outputs.
9. Record the NodeInstanceRole value for the IAM role that was created. You need this when you
create your node group.
When your cluster creates pods on AWS Fargate infrastructure, the pod needs to make calls to AWS APIs
on your behalf, for example, to pull container images from Amazon ECR. The Amazon EKS pod execution
role provides the IAM permissions to do this.
When you create a Fargate profile, you must specify a pod execution role to use with your pods. This
role is added to the cluster's Kubernetes Role Based Access Control (RBAC) for authorization, so that the
kubelet that is running on the Fargate infrastructure can register with your Amazon EKS cluster. This is
what allows Fargate infrastructure to appear in your cluster as nodes.
Before you create a Fargate profile, you must create an IAM role with the following IAM policy:
• AmazonEKSFargatePodExecutionRolePolicy
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Service": "eks-fargate-pods.amazonaws.com"
},
"Action": "sts:AssumeRole"
}
]
}
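For example, you can create the role and attach the managed policy with the AWS CLI; the role name and the trust policy file name below are example values, and the trust policy file contains the document shown above.
aws iam create-role \
  --role-name AmazonEKSFargatePodExecutionRole \
  --assume-role-policy-document file://pod-execution-trust-policy.json
aws iam attach-role-policy \
  --role-name AmazonEKSFargatePodExecutionRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKSFargatePodExecutionRolePolicy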
IAM Roles for Service Accounts
Applications must sign their AWS API requests with AWS credentials. This feature provides a strategy for
managing credentials for your applications, similar to the way that Amazon EC2 instance profiles provide
credentials to Amazon EC2 instances. Instead of creating and distributing your AWS credentials to the
containers or using the Amazon EC2 instance’s role, you can associate an IAM role with a Kubernetes
service account. The applications in the pod’s containers can then use an AWS SDK or the AWS CLI to
make API requests to authorized AWS services.
The IAM roles for service accounts feature provides the following benefits:
• Least privilege — By using the IAM roles for service accounts feature, you no longer need to provide
extended permissions to the worker node IAM role so that pods on that node can call AWS APIs. You
can scope IAM permissions to a service account, and only pods that use that service account have
access to those permissions. This feature also eliminates the need for third-party solutions such as
kiam or kube2iam.
• Credential isolation — A container can only retrieve credentials for the IAM role that is associated
with the service account to which it belongs. A container never has access to credentials that are
intended for another container that belongs to another pod.
• Auditability — Access and event logging is available through CloudTrail to help ensure retrospective
auditing.
To get started, see Enabling IAM Roles for Service Accounts on your Cluster (p. 247).
For an end-to-end walkthrough using eksctl, see Walkthrough: Updating a DaemonSet to Use IAM for
Service Accounts (p. 253).
Topics
• IAM Roles for Service Accounts Technical Overview (p. 243)
Kubernetes has long used service accounts as its own internal identity system. Pods can authenticate
with the Kubernetes API server using an auto-mounted token (which was a non-OIDC JWT) that only
the Kubernetes API server could validate. These legacy service account tokens do not expire, and
rotating the signing key is a difficult process. In Kubernetes version 1.12, support was added for a new
ProjectedServiceAccountToken feature, which is an OIDC JSON web token that also contains the
service account identity, and supports a configurable audience.
Amazon EKS now hosts a public OIDC discovery endpoint per cluster containing the signing keys for the ProjectedServiceAccountToken JSON web tokens so external systems, like IAM, can validate and accept the OIDC tokens issued by Kubernetes. The IAM role that you associate with a service account uses a trust policy like the following, which restricts sts:AssumeRoleWithWebIdentity to tokens issued for a specific service account.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::AWS_ACCOUNT_ID:oidc-provider/OIDC_PROVIDER"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringEquals": {
"OIDC_PROVIDER:sub":
"system:serviceaccount:SERVICE_ACCOUNT_NAMESPACE:SERVICE_ACCOUNT_NAME"
}
}
}
]
}
To allow the role to be assumed by any service account in a namespace, the condition can instead use StringLike with a wildcard, as in the following example.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::AWS_ACCOUNT_ID:oidc-provider/OIDC_PROVIDER"
},
"Action": "sts:AssumeRoleWithWebIdentity",
"Condition": {
"StringLike": {
"OIDC_PROVIDER:sub": "system:serviceaccount:SERVICE_ACCOUNT_NAMESPACE:*"
}
}
}
]
}
After you create the IAM role, annotate your Kubernetes service account with the role's Amazon Resource Name (ARN), as in the following example.
apiVersion: v1
kind: ServiceAccount
metadata:
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::AWS_ACCOUNT_ID:role/IAM_ROLE_NAME
Pod Configuration
The Amazon EKS Pod Identity Webhook on the cluster watches for pods that are associated with service
accounts with this annotation and applies the following environment variables to them.
AWS_ROLE_ARN=arn:aws:iam::AWS_ACCOUNT_ID:role/IAM_ROLE_NAME
AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token
Note
Your cluster does not need to use the mutating web hook to configure the environment
variables and token file mounts; you can choose to configure pods to add these environment
variables manually.
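A minimal sketch of that manual configuration follows; the volume name aws-iam-token, the 86400-second token expiration, and the container details are example values.
apiVersion: v1
kind: Pod
metadata:
  name: my-app
spec:
  serviceAccountName: my-app
  containers:
  - name: my-app
    image: my-app:latest
    env:
    # The same variables that the webhook would otherwise inject.
    - name: AWS_ROLE_ARN
      value: arn:aws:iam::AWS_ACCOUNT_ID:role/IAM_ROLE_NAME
    - name: AWS_WEB_IDENTITY_TOKEN_FILE
      value: /var/run/secrets/eks.amazonaws.com/serviceaccount/token
    volumeMounts:
    - name: aws-iam-token
      mountPath: /var/run/secrets/eks.amazonaws.com/serviceaccount
      readOnly: true
  volumes:
  # Projected service account token with an audience that IAM accepts.
  - name: aws-iam-token
    projected:
      sources:
      - serviceAccountToken:
          audience: sts.amazonaws.com
          expirationSeconds: 86400
          path: token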
Supported versions of the AWS SDK (p. 246) look for these environment variables first in the credential provider chain. The role credentials are used for pods that meet these criteria.
Note
When a pod uses AWS credentials from an IAM role associated with a service account, the
AWS CLI or other SDKs in the containers for that pod use the credentials provided by that role
exclusively. They no longer inherit any IAM permissions from the worker node IAM role.
By default, only containers that run as root have the proper file system permissions to read the web
identity token file. You can provide these permissions by having your containers run as root, or by
providing the following security context for the containers in your manifest. The fsGroup ID is arbitrary,
and you can choose any valid group ID. For more information about the implications of setting a security
context for your pods, see Configure a Security Context for a Pod or Container in the Kubernetes
documentation.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
name: my-app
spec:
template:
metadata:
labels:
app: my-app
spec:
serviceAccountName: my-app
containers:
- name: my-app
image: my-app:latest
securityContext:
fsGroup: 1337
...
Example
In this example, Account A would provide Account B with the OIDC issuer URL from their cluster. Account
B follows the instructions in Enabling IAM Roles for Service Accounts on your Cluster (p. 247) and
Creating an IAM Role and Policy for your Service Account (p. 248) using the OIDC issuer URL from
Account A's cluster. Then a cluster administrator annotates the service account in Account A's cluster to
use the role from Account B.
apiVersion: v1
kind: ServiceAccount
metadata:
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT_B_AWS_ACCOUNT_ID:role/IAM_ROLE_NAME
Example
In this example, Account B creates an IAM policy with the permissions to give to pods in Account A's
cluster. Account B attaches that policy to an IAM role with a trust relationship that allows AssumeRole
permissions to Account A (111111111111), as shown below.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"AWS": "arn:aws:iam::111111111111:root"
},
"Action": "sts:AssumeRole",
"Condition": {}
}
]
}
Account A creates a role with a trust policy that gets credentials from the identity provider created with
the cluster's OIDC issuer URL, as shown below.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Principal": {
"Federated": "arn:aws:iam::111111111111:oidc-provider/oidc.eks.region-
code.amazonaws.com/id/EXAMPLEC061A78C479E31025A21AC4CDE191335D05820BE5CE"
},
"Action": "sts:AssumeRoleWithWebIdentity"
}
]
}
Account A attaches a policy to that role with the following permissions to assume the role that Account B
created.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": "sts:AssumeRole",
"Resource": "arn:aws:iam::222222222222:role/account-b-role"
}
]
}
The application code for pods to assume Account B's role uses two profiles: account_b_role and
account_a_role. The account_b_role profile uses the account_a_role profile as its source. For
the AWS CLI, the ~/.aws/config file would look like the following example.
[profile account_b_role]
source_profile = account_a_role
role_arn=arn:aws:iam::222222222222:role/account-b-role
[profile account_a_role]
web_identity_token_file = /var/run/secrets/eks.amazonaws.com/serviceaccount/token
role_arn=arn:aws:iam::111111111111:role/account-a-role
To specify chained profiles for other AWS SDKs, consult their documentation.
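For example, the following AWS CLI call assumes Account B's role through the chained profiles.
aws sts get-caller-identity --profile account_b_role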
Many popular Kubernetes add-ons, such as the Cluster Autoscaler and the ALB Ingress Controller support
IAM roles for service accounts. The Amazon VPC CNI plugin for Kubernetes has been updated with a
supported version of the AWS SDK for Go, and you can use the IAM roles for service accounts feature to
provide the required permissions for the CNI to work.
To ensure that you are using a supported SDK, follow the installation instructions for your preferred SDK
at Tools for Amazon Web Services when you build your containers.
If your cluster supports IAM roles for service accounts, it will have an OpenID Connect issuer URL
associated with it. You can view this URL in the Amazon EKS console, or you can use the following AWS
CLI command to retrieve it.
Important
You must use at least version 1.18.17 of the AWS CLI to receive the proper output from this
command. For more information, see Installing the AWS CLI in the AWS Command Line Interface
User Guide.
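A query like the following returns the issuer URL; substitute cluster_name with your own value.
aws eks describe-cluster --name cluster_name --query "cluster.identity.oidc.issuer" --output text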
Output:
https://oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E
To use IAM roles for service accounts in your cluster, you must create an OIDC identity provider in the
IAM console.
eksctl
To create an IAM OIDC identity provider for your cluster with eksctl
1. Check your eksctl version with the following command. This procedure assumes that you have
installed eksctl and that your eksctl version is at least 0.15.0-rc.2.
eksctl version
For more information about installing or upgrading eksctl, see Installing or Upgrading
eksctl (p. 189).
2. Create your OIDC identity provider for your cluster with the following command. Substitute
cluster_name with your own value.
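A typical invocation looks like the following.
eksctl utils associate-iam-oidc-provider --cluster cluster_name --approve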
To create an IAM OIDC identity provider for your cluster with the AWS Management
Console
1. Retrieve the OIDC issuer URL from the Amazon EKS console description of your cluster or use
the following AWS CLI command.
Important
You must use at least version 1.18.17 of the AWS CLI to receive the proper output from
this command. For more information, see Installing the AWS CLI in the AWS Command
Line Interface User Guide.
After you have enabled the IAM OIDC identity provider for your cluster, you can create IAM roles to
associate with a service account in your cluster. For more information, see Creating an IAM Role and
Policy for your Service Account (p. 248).
You must also create an IAM role for your Kubernetes service accounts to use before you associate it with
a service account. The trust relationship is scoped to your cluster and service account so that each cluster
and service account combination requires its own role. You can then attach a specific IAM policy to the
role that gives the containers in your pods the permissions you desire. The following procedures describe
how to do this.
• A policy to allow read-only access to an Amazon S3 bucket. You could store configuration information
or a bootstrap script in this bucket, and the containers in your pod can read the file from the bucket
and load it into your application.
• A policy to allow paid container images from AWS Marketplace.
The example below allows permission to the my-pod-secrets-bucket Amazon S3 bucket. You
can modify the policy document to suit your specific needs.
{
"Version": "2012-10-17",
"Statement": [
{
"Effect": "Allow",
"Action": [
"s3:GetObject"
],
"Resource": [
"arn:aws:s3:::my-pod-secrets-bucket/*"
]
}
]
}
The example below gives the required permissions to use a paid container image from AWS
Marketplace.
{
"Version": "2012-10-17",
"Statement": [
{
"Action": [
"aws-marketplace:RegisterUsage"
],
"Effect": "Allow",
"Resource": "*"
}
]
}
eksctl
Create the service account and IAM role with the following command. Substitute the example
values with your own values.
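A typical invocation looks like the following; the service account name, namespace, cluster name, and policy ARN are placeholders.
eksctl create iamserviceaccount \
  --name service_account_name \
  --namespace service_account_namespace \
  --cluster cluster_name \
  --attach-policy-arn IAM_policy_ARN \
  --approve --override-existing-serviceaccounts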
Note
This command only works for clusters that were created with eksctl. If you didn't create
your cluster with eksctl, then use the instructions on the AWS Management Console or
AWS CLI tabs.
An AWS CloudFormation template was deployed that created an IAM role and attached the IAM
policy to it. The role was associated with a Kubernetes service account.
AWS Management Console
1. Retrieve the OIDC issuer URL from the Amazon EKS console description of your cluster, or use
the following AWS CLI command.
Important
You must use at least version 1.18.17 of the AWS CLI to receive the proper output from
this command. For more information, see Installing the AWS CLI in the AWS Command
Line Interface User Guide.
1. Edit the OIDC provider suffix and change it from :aud to :sub.
2. Replace sts.amazonaws.com with your service account ID.
"oidc.eks.region-code.amazonaws.com/id/EXAMPLED539D4633E53DE1B716D3041E:sub":
"system:serviceaccount:SERVICE_ACCOUNT_NAMESPACE:SERVICE_ACCOUNT_NAME"
AWS CLI
1. Set your AWS account ID to an environment variable with the following command.
2. Set your OIDC identity provider to an environment variable with the following command,
replacing your cluster name.
Important
You must use at least version 1.18.17 of the AWS CLI to receive the proper output from
this command. For more information, see Installing the AWS CLI in the AWS Command
Line Interface User Guide.
3. Set the service account namespace to an environment variable with the following command,
replacing your namespace name.
SERVICE_ACCOUNT_NAMESPACE=kube-system
4. Set the service account name to an environment variable with the following command,
replacing service-account-name with your service account name.
SERVICE_ACCOUNT_NAME=service-account-name
6. Run the following AWS CLI command to create the role, replacing your IAM role name and
description.
7. Run the following command to attach your IAM policy to your role, replacing your IAM role
name and policy ARN.
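A sketch of those commands follows. It assumes that the account ID and OIDC provider were stored in environment variables named AWS_ACCOUNT_ID and OIDC_PROVIDER in the earlier steps, and it uses example values for the role name, trust policy file name, and policy ARN.
# Build the trust policy from the environment variables set previously.
cat > trust-policy.json <<EOF
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::${AWS_ACCOUNT_ID}:oidc-provider/${OIDC_PROVIDER}"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "${OIDC_PROVIDER}:sub": "system:serviceaccount:${SERVICE_ACCOUNT_NAMESPACE}:${SERVICE_ACCOUNT_NAME}"
        }
      }
    }
  ]
}
EOF

# Create the role (step 6) and attach the permissions policy (step 7).
aws iam create-role \
  --role-name IAM_ROLE_NAME \
  --assume-role-policy-document file://trust-policy.json \
  --description "IAM role for the ${SERVICE_ACCOUNT_NAME} service account"
aws iam attach-role-policy \
  --role-name IAM_ROLE_NAME \
  --policy-arn IAM_POLICY_ARN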
8. Associate the IAM role with a Kubernetes service account. For more information, see Specifying
an IAM Role for your Service Account (p. 252).
apiVersion: v1
kind: ServiceAccount
metadata:
annotations:
eks.amazonaws.com/role-arn: arn:aws:iam::AWS_ACCOUNT_ID:role/IAM_ROLE_NAME
1. Use the following command to annotate your service account with the ARN of the IAM role that you
want to use with your service account. Be sure to substitute your own values for the example values to use with your pods.
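For example:
kubectl annotate serviceaccount -n SERVICE_ACCOUNT_NAMESPACE SERVICE_ACCOUNT_NAME \
  eks.amazonaws.com/role-arn=arn:aws:iam::AWS_ACCOUNT_ID:role/IAM_ROLE_NAME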
2. Delete and re-create any existing pods that are associated with the service account to apply the
credential environment variables. The mutating web hook does not apply them to pods that are
already running. The following command deletes the existing aws-node DaemonSet pods and
deploys them with the service account annotation. You can modify the namespace, deployment
type, and label to update your specific pods.
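For example, assuming that the aws-node pods carry the usual k8s-app=aws-node label:
kubectl delete pods -n kube-system -l k8s-app=aws-node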
4. Describe one of the pods and verify that the AWS_WEB_IDENTITY_TOKEN_FILE and
AWS_ROLE_ARN environment variables exist.
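For example, replacing aws-node-xxxxx with the name of one of your pods:
kubectl exec -n kube-system aws-node-xxxxx -- env | grep AWS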
Output:
AWS_VPC_K8S_CNI_LOGLEVEL=DEBUG
AWS_ROLE_ARN=arn:aws:iam::AWS_ACCOUNT_ID:role/IAM_ROLE_NAME
AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token
The IAM role was created by eksctl when you created the Kubernetes service account in a previous
step.
When you implement IAM roles for service accounts for a pod, the containers in the pod have all
permissions assigned to the service account and the worker node IAM role. If you implement IAM roles
for service accounts for all pods in a cluster, you may want to prevent the containers in the pods from
using the permissions assigned to the worker node IAM role. Keep in mind however, that there may
be certain key permissions on the worker node IAM role that pods need to function. It’s important to
properly scope your service account IAM roles so that your pods have all of the necessary permissions.
For example, the worker node IAM role is assigned permissions to pull container images from Amazon
ECR. If a pod isn't assigned those permissions, then the pod can't pull container images from Amazon
ECR.
To prevent all containers in all pods on a worker node from using the permissions assigned to the
worker node IAM role (while still allowing the permissions that are assigned to the service account), run
the following iptables commands on your worker nodes (as root) or include them in your instance
bootstrap user data script.
Important
These commands completely block all containers running on a worker node from querying the
instance metadata service for any metadata, not just the credentials for the worker node IAM
role. Do not run these commands on worker nodes that run pods for which you haven't implemented IAM roles for service accounts; otherwise, none of the containers on the node will have any of the permissions assigned to the worker node IAM role.
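A sketch of such a bootstrap snippet for Amazon Linux 2 worker nodes follows; the iptables-services package and the persistence steps are assumptions specific to that distribution.
# Block pod traffic (eni+ interfaces) to the instance metadata service.
yum install -y iptables-services
iptables --insert FORWARD 1 --in-interface eni+ --destination 169.254.169.254/32 --jump DROP
# Persist the rule across reboots.
iptables-save | tee /etc/sysconfig/iptables
systemctl enable iptables && systemctl start iptables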
For ease of use, this topic uses eksctl to configure IAM roles for service accounts. However, if you would
rather use the AWS Management Console, the AWS CLI, or one of the AWS SDKs, the same basic concepts
apply, but you will have to modify the steps to use the procedures in Enabling IAM Roles for Service
Accounts on your Cluster (p. 247).
To configure the CNI plugin to use IAM roles for service accounts
1. Check your eksctl version with the following command. This procedure assumes that you have
installed eksctl and that your eksctl version is at least 0.15.0-rc.2.
eksctl version
For more information about installing or upgrading eksctl, see Installing or Upgrading
eksctl (p. 189).
2. Check the version of your cluster's Amazon VPC CNI Plugin for Kubernetes. Use the following
command to print your cluster's CNI version.
kubectl describe daemonset aws-node --namespace kube-system | grep Image | cut -d "/" -
f 2
Output:
amazon-k8s-cni:1.5.3
If your CNI version is earlier than 1.5.5, complete the following steps to create a service account and
then upgrade your CNI version to the latest version:
a. Create an OIDC identity provider for your cluster with the following command. Substitute the
cluster name with your own value.
b. Create a Kubernetes service account with the following command. Substitute cluster_name
with your own value. This command deploys an AWS CloudFormation stack that creates an IAM
role, attaches the AmazonEKS_CNI_Policy AWS managed policy to it, and binds the IAM role
to the service account.
c. Upgrade your CNI version to the latest version. The manifest specifies the aws-node service
account that you created in the previous step.
3. Watch the roll out, and wait for the DESIRED count of the deployment to match the UP-TO-DATE
count. Press Ctrl + c to exit.
Output:
5. Check the version of your cluster's Amazon VPC CNI Plugin for Kubernetes again, confirming that the
version is 1.5.5.
kubectl describe daemonset aws-node --namespace kube-system | grep Image | cut -d "/" -
f 2
Output:
amazon-k8s-cni:1.5.5
6. Describe one of the pods and verify that the AWS_WEB_IDENTITY_TOKEN_FILE and
AWS_ROLE_ARN environment variables exist.
Output:
AWS_VPC_K8S_CNI_LOGLEVEL=DEBUG
AWS_ROLE_ARN=arn:aws:iam::111122223333:role/eksctl-prod-addon-iamserviceaccount-kube-sys-Role1-V66K5I6JLDGK
AWS_WEB_IDENTITY_TOKEN_FILE=/var/run/secrets/eks.amazonaws.com/serviceaccount/token
The IAM role was created by eksctl when you created the Kubernetes service account in a previous
step.
7. Remove the AmazonEKS_CNI_Policy policy from your worker node IAM role.
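For example, replacing the role name with your worker node instance role:
aws iam detach-role-policy --role-name NodeInstanceRole \
  --policy-arn arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy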
Now your CNI plugin pods get their IAM permissions from their own role, and the instance role can no longer provide those permissions to other pods.
Amazon EKS is integrated with AWS CloudTrail, a service that provides a record of actions taken by a
user, role, or an AWS service in Amazon EKS. CloudTrail captures all API calls for Amazon EKS as events.
The calls captured include calls from the Amazon EKS console and code calls to the Amazon EKS API
operations. For more information, see Logging Amazon EKS API Calls with AWS CloudTrail (p. 267).
For a list of AWS services in scope of specific compliance programs, see AWS Services in Scope by
Compliance Program. For general information, see AWS Compliance Programs.
You can download third-party audit reports using AWS Artifact. For more information, see Downloading
Reports in AWS Artifact.
Your compliance responsibility when using Amazon EKS is determined by the sensitivity of your data,
your company's compliance objectives, and applicable laws and regulations. AWS provides the following
resources to help with compliance:
• Security and Compliance Quick Start Guides – These deployment guides discuss architectural
considerations and provide steps for deploying security- and compliance-focused baseline
environments on AWS.
• Architecting for HIPAA Security and Compliance Whitepaper – This whitepaper describes how
companies can use AWS to create HIPAA-compliant applications.
• AWS Compliance Resources – This collection of workbooks and guides might apply to your industry
and location.
• AWS Config – This AWS service assesses how well your resource configurations comply with internal
practices, industry guidelines, and regulations.
• AWS Security Hub – This AWS service provides a comprehensive view of your security state within AWS
that helps you check your compliance with security industry standards and best practices.
Amazon EKS runs Kubernetes control plane instances across multiple Availability Zones to ensure high
availability. Amazon EKS automatically detects and replaces unhealthy control plane instances, and it
provides automated version upgrades and patching for them.
This control plane consists of at least two API server nodes and three etcd nodes that run across three
Availability Zones within a Region. Amazon EKS automatically detects and replaces unhealthy control
plane instances, restarting them across the Availability Zones within the Region as needed. Amazon EKS
leverages the architecture of AWS Regions in order to maintain high availability. Because of this, Amazon
EKS is able to offer an SLA for API server endpoint availability.
For more information about AWS Regions and Availability Zones, see AWS Global Infrastructure.
You use AWS published API calls to access Amazon EKS through the network. Clients must support
Transport Layer Security (TLS) 1.0 or later. We recommend TLS 1.2 or later. Clients must also support
cipher suites with perfect forward secrecy (PFS) such as Ephemeral Diffie-Hellman (DHE) or Elliptic Curve
Ephemeral Diffie-Hellman (ECDHE). Most modern systems such as Java 7 and later support these modes.
Additionally, requests must be signed by using an access key ID and a secret access key that is associated
with an IAM principal. Or you can use the AWS Security Token Service (AWS STS) to generate temporary
security credentials to sign requests.
When you create an Amazon EKS cluster, you specify the Amazon VPC subnets for your cluster to use.
Amazon EKS requires subnets in at least two Availability Zones. We recommend a network architecture
that uses private subnets for your worker nodes and public subnets for Kubernetes to create internet-
facing load balancers within.
For more information about VPC considerations, see Cluster VPC Considerations (p. 152).
If you create your VPC and worker node groups with the AWS CloudFormation templates provided in the
Getting Started with Amazon EKS (p. 3) walkthrough, then your control plane and worker node security
groups are configured with our recommended settings.
For more information about security group considerations, see Amazon EKS Security Group
Considerations (p. 154).
When you create a new cluster, Amazon EKS creates an endpoint for the managed Kubernetes API server
that you use to communicate with your cluster (using Kubernetes management tools such as kubectl).
By default, this API server endpoint is public to the internet, and access to the API server is secured using
a combination of AWS Identity and Access Management (IAM) and native Kubernetes Role Based Access
Control (RBAC).
You can enable private access to the Kubernetes API server so that all communication between your
worker nodes and the API server stays within your VPC. You can limit the IP addresses that can access
your API server from the internet, or completely disable internet access to the API server.
For more information about modifying cluster endpoint access, see Modifying Cluster Endpoint
Access (p. 35).
You can implement network policies with tools such as Project Calico (p. 169). Project Calico is a third
party open source project. For more information, see the Project Calico documentation.
You can update an Amazon EKS cluster (p. 29) to newer Kubernetes versions. As new Kubernetes versions
become available in Amazon EKS, we recommend that you proactively update your clusters to use
the latest available version. For more information about Kubernetes versions in EKS, see Amazon EKS
Kubernetes Versions (p. 45).
Track security or privacy events for Amazon Linux 2 at the Amazon Linux Security Center or subscribe to
the associated RSS feed. Security and privacy events include an overview of the issue affected, packages,
and instructions for updating your instances to correct the issue.
You can use Amazon Inspector to check for unintended network accessibility of your worker nodes and
for vulnerabilities on those Amazon EC2 instances.
You can view the default policy with the following command.
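For example:
kubectl get psp eks.privileged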
Output:
For more details, you can describe the policy with the following command.
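For example:
kubectl describe psp eks.privileged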
Output:
Name: eks.privileged
Settings:
Allow Privileged: true
Allow Privilege Escalation: 0xc0004ce5f8
Default Add Capabilities: <none>
Required Drop Capabilities: <none>
Allowed Capabilities: *
Allowed Volume Types: *
Allow Host Network: true
Allow Host Ports: 0-65535
Allow Host PID: true
Allow Host IPC: true
Read Only Root Filesystem: false
SELinux Context Strategy: RunAsAny
User: <none>
Role: <none>
Type: <none>
Level: <none>
Run As User Strategy: RunAsAny
Ranges: <none>
FSGroup Strategy: RunAsAny
Ranges: <none>
Supplemental Groups Strategy: RunAsAny
Ranges: <none>
The following example shows the full YAML file for the eks.privileged pod security policy, its cluster
role, and cluster role binding.
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: eks.privileged
annotations:
kubernetes.io/description: 'privileged allows full unrestricted access to
pod features, as if the PodSecurityPolicy controller was not enabled.'
seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
labels:
kubernetes.io/cluster-service: "true"
eks.amazonaws.com/component: pod-security-policy
spec:
privileged: true
allowPrivilegeEscalation: true
allowedCapabilities:
- '*'
volumes:
- '*'
hostNetwork: true
hostPorts:
- min: 0
max: 65535
hostIPC: true
hostPID: true
runAsUser:
rule: 'RunAsAny'
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'RunAsAny'
fsGroup:
rule: 'RunAsAny'
readOnlyRootFilesystem: false
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: eks:podsecuritypolicy:privileged
labels:
kubernetes.io/cluster-service: "true"
eks.amazonaws.com/component: pod-security-policy
rules:
- apiGroups:
- policy
resourceNames:
- eks.privileged
resources:
- podsecuritypolicies
verbs:
- use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: eks:podsecuritypolicy:authenticated
annotations:
kubernetes.io/description: 'Allow all authenticated users to create privileged pods.'
labels:
kubernetes.io/cluster-service: "true"
eks.amazonaws.com/component: pod-security-policy
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: eks:podsecuritypolicy:privileged
subjects:
- kind: Group
apiGroup: rbac.authorization.k8s.io
name: system:authenticated
After you create custom pod security policies for your cluster, you can delete the default Amazon EKS
eks.privileged pod security policy to enable your custom policies.
If you are upgrading from an earlier version of Kubernetes, or have modified or deleted the default
Amazon EKS eks.privileged pod security policy, you can restore it with the following steps.
1. Create a file called privileged-podsecuritypolicy.yaml and paste the YAML file contents
below into it.
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
name: eks.privileged
annotations:
kubernetes.io/description: 'privileged allows full unrestricted access to
pod features, as if the PodSecurityPolicy controller was not enabled.'
seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
labels:
kubernetes.io/cluster-service: "true"
eks.amazonaws.com/component: pod-security-policy
spec:
privileged: true
allowPrivilegeEscalation: true
allowedCapabilities:
- '*'
volumes:
- '*'
hostNetwork: true
hostPorts:
- min: 0
max: 65535
hostIPC: true
hostPID: true
runAsUser:
rule: 'RunAsAny'
seLinux:
rule: 'RunAsAny'
supplementalGroups:
rule: 'RunAsAny'
fsGroup:
rule: 'RunAsAny'
readOnlyRootFilesystem: false
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
name: eks:podsecuritypolicy:privileged
labels:
kubernetes.io/cluster-service: "true"
eks.amazonaws.com/component: pod-security-policy
rules:
- apiGroups:
- policy
resourceNames:
- eks.privileged
resources:
- podsecuritypolicies
verbs:
- use
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
name: eks:podsecuritypolicy:authenticated
annotations:
kubernetes.io/description: 'Allow all authenticated users to create privileged
pods.'
labels:
kubernetes.io/cluster-service: "true"
eks.amazonaws.com/component: pod-security-policy
roleRef:
apiGroup: rbac.authorization.k8s.io
kind: ClusterRole
name: eks:podsecuritypolicy:privileged
subjects:
- kind: Group
apiGroup: rbac.authorization.k8s.io
name: system:authenticated
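2. Apply the policy to your cluster, for example:
kubectl apply -f privileged-podsecuritypolicy.yaml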
Contents
• Tag Basics (p. 263)
• Tagging Your Resources (p. 263)
• Tag Restrictions (p. 264)
• Working with Tags Using the Console (p. 264)
• Working with Tags Using the CLI or API (p. 265)
Tag Basics
A tag is a label that you assign to an AWS resource. Each tag consists of a key and an optional value, both
of which you define.
Tags enable you to categorize your AWS resources by, for example, purpose, owner, or environment.
When you have many resources of the same type, you can quickly identify a specific resource based on
the tags you've assigned to it. For example, you can define a set of tags for your Amazon EKS clusters to
help you track each cluster's owner and stack level. We recommend that you devise a consistent set of
tag keys for each resource type. You can then search and filter the resources based on the tags that you
add.
Tags are not automatically assigned to your resources. After you add a tag, you can edit tag keys and
values or remove tags from a resource at any time. If you delete a resource, any tags for the resource are
also deleted.
Tags don't have any semantic meaning to Amazon EKS and are interpreted strictly as a string of
characters. You can set the value of a tag to an empty string, but you can't set the value of a tag to null.
If you add a tag that has the same key as an existing tag on that resource, the new value overwrites the
old value.
You can work with tags using the AWS Management Console, the AWS CLI, and the Amazon EKS API.
Note
Amazon EKS tags are not currently supported by eksctl.
If you're using AWS Identity and Access Management (IAM), you can control which users in your AWS
account have permission to create, edit, or delete tags.
If you're using the Amazon EKS console, you can apply tags to new resources when they are created or to
existing resources at any time using the Tags tab on the relevant resource page.
If you're using the Amazon EKS API, the AWS CLI, or an AWS SDK, you can apply tags to new resources
using the tags parameter on the relevant API action or to existing resources using the TagResource
API action. For more information, see TagResource.
Some resource-creating actions enable you to specify tags for a resource when the resource is created.
If tags cannot be applied during resource creation, the resource creation process fails. This ensures that
resources you intended to tag on creation are either created with specified tags or not created at all. If
you tag resources at the time of creation, you don't need to run custom tagging scripts after resource
creation.
The following table describes the Amazon EKS resources that can be tagged, and the resources that can
be tagged on creation.
Tag Restrictions
The following basic restrictions apply to tags:
When you select a resource-specific page in the Amazon EKS console, it displays a list of those resources.
For example, if you select Clusters from the navigation pane, the console displays a list of Amazon
EKS clusters. When you select a resource from one of these lists (for example, a specific cluster), if the
resource supports tags, you can view and manage its tags on the Tags tab.
• To add a tag — choose Add tag and then specify the key and value for each tag.
• To delete a tag — choose Remove tag.
6. Repeat this process for each tag you want to add or delete, and then choose Update to finish.
The following examples show how to tag or untag resources using the AWS CLI.
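For example, the following commands add a tag to a cluster and then remove it; the resource ARN and the tag key and value are example values.
aws eks tag-resource \
  --resource-arn arn:aws:eks:region-code:111122223333:cluster/dev \
  --tags team=devs
aws eks untag-resource \
  --resource-arn arn:aws:eks:region-code:111122223333:cluster/dev \
  --tag-keys team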
The following command lists the tags associated with an existing resource.
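For example:
aws eks list-tags-for-resource \
  --resource-arn arn:aws:eks:region-code:111122223333:cluster/dev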
Some resource-creating actions enable you to specify tags when you create the resource. The following
actions support tagging on creation.
If you create a trail, you can enable continuous delivery of CloudTrail events to an Amazon S3 bucket,
including events for Amazon EKS. If you don't configure a trail, you can still view the most recent events
in the CloudTrail console in Event history. Using the information collected by CloudTrail, you can
determine the request that was made to Amazon EKS, the IP address from which the request was made,
who made the request, when it was made, and additional details.
To learn more about CloudTrail, see the AWS CloudTrail User Guide.
For an ongoing record of events in your AWS account, including events for Amazon EKS, create a trail.
A trail enables CloudTrail to deliver log files to an Amazon S3 bucket. By default, when you create a
trail in the console, the trail applies to all AWS Regions. The trail logs events from all Regions in the
AWS partition and delivers the log files to the Amazon S3 bucket that you specify. Additionally, you can
configure other AWS services to further analyze and act upon the event data collected in CloudTrail logs.
For more information, see the following:
All Amazon EKS actions are logged by CloudTrail and are documented in the Amazon EKS API Reference. For example, calls to the CreateCluster, ListClusters, and DeleteCluster operations generate entries in the CloudTrail log files.
Every event or log entry contains information about who generated the request. The identity
information helps you determine the following:
• Whether the request was made with root or AWS Identity and Access Management (IAM) user
credentials.
• Whether the request was made with temporary security credentials for a role or federated user.
• Whether the request was made by another AWS service.
Understanding Amazon EKS Log File Entries
The following example shows a CloudTrail log entry that demonstrates the CreateCluster action.
{
"eventVersion": "1.05",
"userIdentity": {
"type": "IAMUser",
"principalId": "AKIAIOSFODNN7EXAMPLE",
"arn": "arn:aws:iam::111122223333:user/username",
"accountId": "111122223333",
"accessKeyId": "AKIAIOSFODNN7EXAMPLE",
"userName": "username"
},
"eventTime": "2018-05-28T19:16:43Z",
"eventSource": "eks.amazonaws.com",
"eventName": "CreateCluster",
"awsRegion": "region-code",
"sourceIPAddress": "205.251.233.178",
"userAgent": "PostmanRuntime/6.4.0",
"requestParameters": {
"resourcesVpcConfig": {
"subnetIds": [
"subnet-a670c2df",
"subnet-4f8c5004"
]
},
"roleArn": "arn:aws:iam::111122223333:role/AWSServiceRoleForAmazonEKS-CAC1G1VH3ZKZ",
"clusterName": "test"
},
"responseElements": {
"cluster": {
"clusterName": "test",
"status": "CREATING",
"createdAt": 1527535003.208,
"certificateAuthority": {},
"arn": "arn:aws:eks:region-code:111122223333:cluster/test",
"roleArn": "arn:aws:iam::111122223333:role/AWSServiceRoleForAmazonEKS-CAC1G1VH3ZKZ",
"version": "1.10",
"resourcesVpcConfig": {
"securityGroupIds": [],
"vpcId": "vpc-21277358",
"subnetIds": [
"subnet-a670c2df",
"subnet-4f8c5004"
]
}
}
},
"requestID": "a7a0735d-62ab-11e8-9f79-81ce5b2b7d37",
"eventID": "eab22523-174a-499c-9dd6-91e7be3ff8e3",
"readOnly": false,
"eventType": "AwsApiCall",
"recipientAccountId": "111122223333"
}
Prerequisites
The following are the prerequisites for using Amazon EKS worker nodes on AWS Outposts:
• You must have installed and configured an Outpost in your on-premises data center.
• You must have a reliable network connection between your Outpost and its AWS Region.
• The AWS Region for the Outpost must support Amazon EKS. For a list of supported Regions, see
Amazon EKS Service Endpoints in the AWS General Reference.
Limitations
The following are the limitations of using Amazon EKS on Outposts:
• AWS Identity and Access Management, Application Load Balancer, Network Load Balancer, Classic Load
Balancer, and Amazon Route 53 run in the AWS Region, not on Outposts. This will increase latencies
between the services and the containers.
• AWS Fargate is not available on AWS Outposts.
• If network connectivity between your Outpost and its AWS Region is lost, your nodes will continue
to run. However, you cannot create new nodes or take new actions on existing deployments until
connectivity is restored. In case of instance failures, the instance will not be automatically replaced.
The Kubernetes master runs in the Region, and missing heartbeats caused by things like a loss of
connectivity to the Availability Zone could lead to failures. The failed heartbeats will lead to pods on
the Outposts being marked as unhealthy, and eventually the node status will time out and pods will be
marked for eviction. For more information, see Node Controller.
• We recommend that you provide reliable, highly available, and low-latency connectivity between your
Outpost and its AWS Region.
Creating Amazon EKS nodes on an Outpost
An Outpost is an extension of an AWS Region, and you can extend a VPC in an account to span multiple
Availability Zones and any associated Outpost locations. When you configure your Outpost, you associate
a subnet with it to extend your Regional VPC environment to your on-premises facility. Instances on an
Outpost appear as part of your Regional VPC, similar to an Availability Zone with associated subnets.
To create Amazon EKS nodes on an Outpost with the AWS CLI, specify a security group and a subnet associated with your Outpost. A sketch of the Outpost-specific commands follows the steps below.
1. Create a VPC.
2. Create Outpost subnets. The --outpost-arn parameter must be specified for the subnet to be
created for the Outpost. (This step is different for AWS Outposts.)
3. Create a cluster, specifying the subnets for the Outpost. (This step is different for AWS Outposts.)
4. Create the node group. Specify an instance type that is available on your Outpost. (This step is
different for AWS Outposts.)
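The following sketch illustrates the Outpost-specific parts of steps 2 and 4; the IDs, CIDR range, Availability Zone, instance type, and names are example values.
# Step 2: the subnet must reference the Outpost ARN.
aws ec2 create-subnet \
  --vpc-id vpc-1234567890abcdef0 \
  --cidr-block 10.0.3.0/24 \
  --availability-zone us-west-2a \
  --outpost-arn arn:aws:outposts:us-west-2:111122223333:outpost/op-0123456789abcdef0

# Step 4: the node group must use an instance type that is available on the Outpost.
aws eks create-nodegroup \
  --cluster-name outpost-cluster \
  --nodegroup-name outpost-nodes \
  --subnets subnet-0123456789abcdef0 \
  --node-role arn:aws:iam::111122223333:role/NodeInstanceRole \
  --instance-types m5.large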
Related Projects
These open source projects extend the functionality of Kubernetes clusters running on AWS, including
clusters managed by Amazon EKS.
Management Tools
Related management tools for Amazon EKS and Kubernetes clusters.
eksctl
eksctl is a simple CLI tool for creating clusters on Amazon EKS.
Networking
Related networking projects for Amazon EKS and Kubernetes clusters.
ExternalDNS
ExternalDNS synchronizes exposed Kubernetes services and ingresses with DNS providers including
Amazon Route 53 and AWS Service Discovery.
Security
Related security projects for Amazon EKS and Kubernetes clusters.
Machine Learning
Related machine learning projects for Amazon EKS and Kubernetes clusters.
Kubeflow
A machine learning toolkit for Kubernetes.
Auto Scaling
Related auto scaling projects for Amazon EKS and Kubernetes clusters.
Cluster Autoscaler
Cluster Autoscaler is a tool that automatically adjusts the size of the Kubernetes cluster based on CPU
and memory pressure.
Escalator
Escalator is a batch or job optimized horizontal autoscaler for Kubernetes.
Monitoring
Related monitoring projects for Amazon EKS and Kubernetes clusters.
Prometheus
Prometheus is an open-source systems monitoring and alerting toolkit.
Jenkins X
CI/CD solution for modern cloud applications on Amazon EKS and Kubernetes clusters.
Insufficient Capacity
If you receive the following error while attempting to create an Amazon EKS cluster, then one of the
Availability Zones you specified does not have sufficient capacity to support a cluster.
Retry creating your cluster with subnets in your cluster VPC that are hosted in the Availability Zones
returned by this error message.
• The aws-auth-cm.yaml file does not have the correct IAM role ARN for your worker nodes. Ensure
that the worker node IAM role ARN (not the instance profile ARN) is specified in your aws-auth-
cm.yaml file. For more information, see Launching Amazon EKS Linux Worker Nodes (p. 88).
• The ClusterName in your worker node AWS CloudFormation template does not exactly match the
name of the cluster you want your worker nodes to join. Passing an incorrect value to this field results
in an incorrect configuration of the worker node's /var/lib/kubelet/kubeconfig file, and the
nodes will not join the cluster.
• The worker node is not tagged as being owned by the cluster. Your worker nodes must have the following tag applied to them, where <cluster-name> is replaced with the name of your cluster.
Key: kubernetes.io/cluster/<cluster-name>
Value: owned
This could be because the cluster was created with one set of AWS credentials (from an IAM user or role),
and kubectl is using a different set of credentials.
When an Amazon EKS cluster is created, the IAM entity (user or role) that creates the cluster is added to the Kubernetes RBAC authorization table as the administrator (with system:masters permissions).
Initially, only that IAM user can make calls to the Kubernetes API server using kubectl. For more
information, see Managing Users or IAM Roles for your Cluster (p. 185). Also, the AWS IAM Authenticator
for Kubernetes uses the AWS SDK for Go to authenticate against your Amazon EKS cluster. If you use the
console to create the cluster, you must ensure that the same IAM user credentials are in the AWS SDK
credential chain when you are running kubectl commands on your cluster.
If you install and configure the AWS CLI, you can configure the IAM credentials for your user. If the
AWS CLI is configured properly for your user, then the AWS IAM Authenticator for Kubernetes can find
those credentials as well. For more information, see Configuring the AWS CLI in the AWS Command Line
Interface User Guide.
If you assumed a role to create the Amazon EKS cluster, you must ensure that kubectl is configured to
assume the same role. Use the following command to update your kubeconfig file to use an IAM role. For
more information, see Create a kubeconfig for Amazon EKS (p. 182).
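For example, replacing the cluster name and role ARN with your own values:
aws eks update-kubeconfig --name cluster_name --role-arn arn:aws:iam::111122223333:role/role_name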
To map an IAM user to a Kubernetes RBAC user, see Managing Users or IAM Roles for your
Cluster (p. 185) or watch a video about how to map a user.
Error: : error upgrading connection: error dialing backend: dial tcp 172.17.nn.nn:10250:
getsockopt: no route to host
CIDR blocks for public endpoint access. For more information, see Amazon EKS Cluster Endpoint Access
Control (p. 35).
If your managed node group encounters a health issue, Amazon EKS returns an error message to help
you to diagnose the issue. The following error messages and their associated descriptions are shown
below.
• AutoScalingGroupNotFound: We couldn't find the Auto Scaling group associated with the managed
node group. You may be able to recreate an Auto Scaling group with the same settings to recover.
• Ec2SecurityGroupNotFound: We couldn't find the cluster security group for the cluster. You must
recreate your cluster.
• Ec2SecurityGroupDeletionFailure: We could not delete the remote access security group for your
managed node group. Remove any dependencies from the security group.
• Ec2LaunchTemplateNotFound: We couldn't find the Amazon EC2 launch template for your managed
node group. You may be able to recreate a launch template with the same settings to recover.
• Ec2LaunchTemplateVersionMismatch: The Amazon EC2 launch template version for your managed
node group does not match the version that Amazon EKS created. You may be able to revert to the
version that Amazon EKS created to recover.
• IamInstanceProfileNotFound: We couldn't find the IAM instance profile for your managed node group.
You may be able to recreate an instance profile with the same settings to recover.
• IamNodeRoleNotFound: We couldn't find the IAM role for your managed node group. You may be able
to recreate an IAM role with the same settings to recover.
• AsgInstanceLaunchFailures: Your Auto Scaling group is experiencing failures while attempting to
launch instances.
• NodeCreationFailure: Your launched instances are unable to register with your Amazon EKS cluster.
Common causes of this failure are insufficient worker node IAM role (p. 239) permissions or lack of
outbound internet access for the nodes.
• InstanceLimitExceeded: Your AWS account is unable to launch any more instances of the specified
instance type. You may be able to request an Amazon EC2 instance limit increase to recover.
• InsufficientFreeAddresses: One or more of the subnets associated with your managed node group
does not have enough available IP addresses for new nodes.
• AccessDenied: Amazon EKS or one or more of your managed nodes is unable to communicate with
your cluster API server.
• InternalFailure: These errors are usually caused by an Amazon EKS server-side issue.
Use the following command to run the script on your worker node:
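Assuming the script's usual location in the Amazon EKS-optimized AMI, the invocation looks like this:
sudo bash /opt/cni/bin/aws-cni-support.sh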
Note
If the script is not present at that location, then the CNI container failed to run. You can
manually download and run the script with the following command:
curl https://raw.githubusercontent.com/aws/amazon-vpc-cni-k8s/master/scripts/aws-cni-support.sh | sudo bash
The errors are most likely related to the AWS IAM Authenticator configuration map not being applied to
the worker nodes. The configuration map provides the system:bootstrappers and system:nodes
Kubernetes RBAC permissions for worker nodes to register to the cluster. For more information, see
To enable worker nodes to join your cluster on the Self-managed nodes tab of Launching Amazon
EKS Linux Worker Nodes (p. 88). Ensure that you specify the Role ARN of the instance role in the
configuration map, not the Instance Profile ARN.
The authenticator does not recognize a Role ARN if it includes a path other than /, such as the following
example:
arn:aws:iam::111122223333:role/development/apps/prod-iam-role-NodeInstanceRole-621LVEXAMPLE
When specifying a Role ARN in the configuration map that includes a path other than /, you must drop
the path. The ARN above would be specified as the following:
arn:aws:iam::111122223333:role/prod-iam-role-NodeInstanceRole-621LVEXAMPLE
TLS handshake timeout
server.go:233] failed to run Kubelet: could not init cloud provider "aws": error finding
instance i-1111f2222f333e44c: "error listing AWS instances: \"RequestError: send request
failed\\ncaused by: Post net/http: TLS handshake timeout\""
The kubelet process will continually respawn and test the API server endpoint. The error can also occur
temporarily during any procedure that performs a rolling update of the cluster in the control plane, such
as a configuration change or version update.
To resolve the issue, check the route table and security groups to ensure that traffic from the worker
nodes can reach the public endpoint.
Error: ErrImagePull
If you have 1.12 worker nodes deployed into a China region, you may see the following text in an error
message in your kubelet logs:
To resolve the issue, ensure that you pull the image from an Amazon Elastic Container Registry
repository that is in the same region that your worker node is deployed in.
Troubleshooting IAM
This topic covers some common errors that you may see while using Amazon EKS with IAM and how to
work around them.
AccessDeniedException
If you receive an AccessDeniedException when calling an AWS API operation, then the AWS Identity
and Access Management (IAM) user or role credentials that you are using do not have the required
permissions to make that call.
In the above example message, the user does not have permissions to call the Amazon EKS
DescribeCluster API operation. To provide Amazon EKS admin permissions to a user, see Amazon EKS
Identity-Based Policy Examples (p. 233).
For more general information about IAM, see Controlling Access Using Policies in the IAM User Guide.
Contact the administrator who provided you with your user name and password, and ask that person to update your policies to allow you to pass a role to Amazon EKS.
Some AWS services allow you to pass an existing role to that service, instead of creating a new service
role or service-linked role. To do this, you must have permissions to pass the role to the service.
The following example error occurs when an IAM user named marymajor tries to use the console to
perform an action in Amazon EKS. However, the action requires the service to have permissions granted
by a service role. Mary does not have permissions to pass the role to the service.
In this case, Mary asks her administrator to update her policies to allow her to perform the
iam:PassRole action.
Access keys consist of two parts: an access key ID (for example, AKIAIOSFODNN7EXAMPLE) and a secret
access key (for example, wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY). Like a user name and
password, you must use both the access key ID and secret access key together to authenticate your
requests. Manage your access keys as securely as you do your user name and password.
Important
Do not provide your access keys to a third party, even to help find your canonical user ID. By
doing this, you might give someone permanent access to your account.
When you create an access key pair, you are prompted to save the access key ID and secret access key in
a secure location. The secret access key is available only at the time you create it. If you lose your secret
access key, you must add new access keys to your IAM user. You can have a maximum of two access keys.
If you already have two, you must delete one key pair before creating a new one. To view instructions,
see Managing Access Keys in the IAM User Guide.
To get started right away, see Creating Your First IAM Delegated User and Group in the IAM User Guide.
• To learn whether Amazon EKS supports these features, see How Amazon EKS Works with IAM (p. 230).
• To learn how to provide access to your resources across AWS accounts that you own, see Providing
Access to an IAM User in Another AWS Account That You Own in the IAM User Guide.
• To learn how to provide access to your resources to third-party AWS accounts, see Providing Access to
AWS Accounts Owned by Third Parties in the IAM User Guide.
• To learn how to provide access through identity federation, see Providing Access to Externally
Authenticated Users (Identity Federation) in the IAM User Guide.
• To learn the difference between using roles and resource-based policies for cross-account access, see
How IAM Roles Differ from Resource-based Policies in the IAM User Guide.
The following table provides quotas for Amazon EKS that cannot be changed.
Kubernetes Version 1.15 Added Kubernetes version 1.15 March 10, 2020
support for new clusters and
version upgrades.
Amazon EKS region Amazon EKS is now available February 26, 2020
expansion (p. 283) in the Beijing (cn-north-1)
and Ningxia (cn-northwest-1)
regions.
Amazon FSx for Lustre CSI Driver Added topic for installing the December 23, 2019
Amazon FSx for Lustre CSI Driver
on Kubernetes 1.14 Amazon EKS
clusters.
Restrict network access to the Amazon EKS now enables you December 20, 2019
public access endpoint of a to restrict the CIDR ranges
cluster that can communicate to the
public access endpoint of the
Kubernetes API server.
Resolve the private access Amazon EKS now enables you December 13, 2019
endpoint address for a cluster to resolve the private access
from outside of a VPC endpoint of the Kubernetes API
server from outside of a VPC.
(Beta) Amazon EC2 A1 instance Launch Amazon EC2 A1 instance December 4, 2019
worker nodes worker nodes that register with
your Amazon EKS cluster.
AWS Fargate on Amazon EKS Amazon EKS Kubernetes clusters December 3, 2019
now support running pods on
Fargate.
Amazon EKS region Amazon EKS is now available November 21, 2019
expansion (p. 283) in the Canada (Central) (ca-
central-1) region.
283
Amazon EKS User Guide
Amazon EKS region Amazon EKS is now available in October 16, 2019
expansion (p. 283) the South America (São Paulo)
(sa-east-1) region.
Kubernetes Dashboard Update Updated topic for installing September 28, 2019
the Kubernetes dashboard on
Amazon EKS clusters to use the
beta 2.0 version.
Amazon EFS CSI Driver Added topic for installing the September 19, 2019
Amazon EFS CSI Driver on
Kubernetes 1.14 Amazon EKS
clusters.
Amazon EC2 Systems Manager Added topic for retrieving September 18, 2019
parameter for Amazon EKS- the Amazon EKS-optimized
optimized AMI ID AMI ID using an Amazon EC2
Systems Manager parameter.
The parameter eliminates the
need for you to look up AMI IDs.
Amazon EKS resource tagging Manage tagging of your Amazon September 16, 2019
EKS clusters.
Amazon EBS CSI Driver Added topic for installing the September 9, 2019
Amazon EBS CSI Driver on
Kubernetes 1.14 Amazon EKS
clusters.
New Amazon EKS-optimized AMI Amazon EKS has updated the September 6, 2019
patched for CVE-2019-9512 and Amazon EKS-optimized AMI to
CVE-2019-9514 address CVE-2019-9512 and
CVE-2019-9514.
284
Amazon EKS User Guide
IAM Roles for Service Accounts (September 3, 2019)
With IAM roles for service accounts on Amazon EKS clusters, you can associate an IAM role with a Kubernetes service account. With this feature, you no longer need to provide extended permissions to the worker node IAM role so that pods on that node can call AWS APIs.
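A rough sketch with eksctl, assuming the cluster already has an IAM OIDC provider associated and using placeholder names and an example managed policy:

# Create an IAM role and map it to a Kubernetes service account in the default namespace
eksctl create iamserviceaccount \
  --cluster my-cluster \
  --namespace default \
  --name my-service-account \
  --attach-policy-arn arn:aws:iam::aws:policy/AmazonS3ReadOnlyAccess \
  --approve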
Amazon EKS region expansion (August 29, 2019)
Amazon EKS is now available in the Middle East (Bahrain) (me-south-1) region.

Amazon EKS platform version update (August 28, 2019)
New platform versions to address CVE-2019-9512 and CVE-2019-9514.

Amazon EKS region expansion (July 31, 2019)
Amazon EKS is now available in the Asia Pacific (Hong Kong) (ap-east-1) region.

Added topic on ALB Ingress Controller (July 11, 2019)
The AWS ALB Ingress Controller for Kubernetes is a controller that triggers the creation of an Application Load Balancer when Ingress resources are created.
Kubernetes Version 1.13 (June 18, 2019)
Added Kubernetes version 1.13 support for new clusters and version upgrades.

New Amazon EKS-optimized AMI patched for AWS-2019-005 (June 17, 2019)
Amazon EKS has updated the Amazon EKS-optimized AMI to address the vulnerabilities described in AWS-2019-005.

Amazon EKS platform version update (May 21, 2019)
New platform version for Kubernetes 1.11 and 1.10 clusters to support custom DNS names in the Kubelet certificate and improve etcd performance.

Getting Started with eksctl (May 10, 2019)
This getting started guide helps you to install all of the required resources to get started with Amazon EKS using eksctl, a simple command line utility for creating and managing Kubernetes clusters on Amazon EKS.
AWS CLI get-token command (May 10, 2019)
The aws eks get-token command was added to the AWS CLI so that you no longer need to install the AWS IAM Authenticator for Kubernetes to create client security tokens for cluster API server communication. Upgrade your AWS CLI installation to the latest version to take advantage of this new functionality. For more information, see Installing the AWS Command Line Interface in the AWS Command Line Interface User Guide.
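For example, with a placeholder cluster name, a client token can be generated directly from the command line; kubectl normally runs this command for you through the exec credentials entry in your kubeconfig file:

# Generate a short-lived authentication token for the cluster's Kubernetes API server
aws eks get-token --cluster-name my-cluster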
Amazon EKS platform version update (May 8, 2019)
New platform version for Kubernetes 1.12 clusters to support custom DNS names in the Kubelet certificate and improve etcd performance. This fixes a bug that caused worker node Kubelet daemons to request a new certificate every few seconds.

Amazon EKS Control Plane Logging (April 4, 2019)
Amazon EKS control plane logging makes it easy for you to secure and run your clusters by providing audit and diagnostic logs directly from the Amazon EKS control plane to CloudWatch Logs in your account.
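As an illustration (the cluster name is a placeholder), all five log types can be enabled with the AWS CLI:

# Send API, audit, authenticator, controller manager, and scheduler logs to CloudWatch Logs
aws eks update-cluster-config \
  --name my-cluster \
  --logging '{"clusterLogging":[{"types":["api","audit","authenticator","controllerManager","scheduler"],"enabled":true}]}'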
Added App Mesh Getting Started Guide (March 27, 2019)
Added documentation for getting started with App Mesh and Kubernetes.

Amazon EKS API server endpoint private access (March 19, 2019)
Added documentation for disabling public access for your Amazon EKS cluster's Kubernetes API server endpoint.

Added topic for installing the Kubernetes metrics server (March 18, 2019)
The Kubernetes metrics server is an aggregator of resource usage data in your cluster.

Added list of related open source projects (March 15, 2019)
These open source projects extend the functionality of Kubernetes clusters running on AWS, including clusters managed by Amazon EKS.

Added topic for installing Helm locally (March 11, 2019)
The helm package manager for Kubernetes helps you install and manage applications on your Kubernetes cluster. This topic helps you install and run the helm and tiller binaries locally so that you can install and manage charts using the helm CLI on your local system.
Amazon EKS platform version update (March 8, 2019)
New platform version updating Amazon EKS Kubernetes 1.11 clusters to patch level 1.11.8 to address CVE-2019-1002100.

Increased cluster limit (February 13, 2019)
Amazon EKS has increased the number of clusters that you can create in a region from 3 to 50.

Amazon EKS region expansion (February 13, 2019)
Amazon EKS is now available in the Europe (London) (eu-west-2), Europe (Paris) (eu-west-3), and Asia Pacific (Mumbai) (ap-south-1) regions.

New Amazon EKS-optimized AMI patched for ALAS-2019-1156 (February 11, 2019)
Amazon EKS has updated the Amazon EKS-optimized AMI to address the vulnerability described in ALAS-2019-1156.

New Amazon EKS-optimized AMI patched for ALAS2-2019-1141 (January 9, 2019)
Amazon EKS has updated the Amazon EKS-optimized AMI to address the CVEs referenced in ALAS2-2019-1141.

Amazon EKS region expansion (December 19, 2018)
Amazon EKS is now available in the following additional regions: Europe (Frankfurt) (eu-central-1), Asia Pacific (Tokyo) (ap-northeast-1), Asia Pacific (Singapore) (ap-southeast-1), and Asia Pacific (Sydney) (ap-southeast-2).

Amazon EKS cluster updates (December 12, 2018)
Added documentation for Amazon EKS cluster Kubernetes version updates and worker node replacement.

Amazon EKS region expansion (December 11, 2018)
Amazon EKS is now available in the Europe (Stockholm) (eu-north-1) region.

Added version 1.0.0 support for the Application Load Balancer ingress controller (November 20, 2018)
The Application Load Balancer ingress controller releases version 1.0.0 with formal support from AWS.
Added support for CNI network configuration (October 16, 2018)
The Amazon VPC CNI plugin for Kubernetes version 1.2.1 now supports custom network configuration for secondary pod network interfaces.

Added support for MutatingAdmissionWebhook and ValidatingAdmissionWebhook (October 10, 2018)
Amazon EKS platform version 1.10-eks.2 now supports MutatingAdmissionWebhook and ValidatingAdmissionWebhook admission controllers.

Added Partner AMI information (October 3, 2018)
Canonical has partnered with Amazon EKS to create worker node AMIs that you can use in your clusters.

Added instructions for AWS CLI update-kubeconfig command (September 21, 2018)
Amazon EKS has added the update-kubeconfig command to the AWS CLI to simplify the process of creating a kubeconfig file for accessing your cluster.
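For example, with a placeholder cluster name and Region, the following adds or updates a kubeconfig entry for the cluster:

# Create or update the kubeconfig context for the cluster
aws eks update-kubeconfig --name my-cluster --region us-west-2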
New Amazon EKS-optimized AMIs (September 13, 2018)
Amazon EKS has updated the Amazon EKS-optimized AMIs (with and without GPU support) to provide various security fixes and AMI optimizations.

Amazon EKS platform version update (August 31, 2018)
New platform version with support for the Kubernetes aggregation layer and the Horizontal Pod Autoscaler (HPA).

New Amazon EKS-optimized AMIs and GPU support (August 22, 2018)
Amazon EKS has updated the Amazon EKS-optimized AMI to use a new AWS CloudFormation worker node template and bootstrap script. In addition, a new Amazon EKS-optimized AMI with GPU support is available.

New Amazon EKS-optimized AMI patched for ALAS2-2018-1058 (August 14, 2018)
Amazon EKS has updated the Amazon EKS-optimized AMI to address the CVEs referenced in ALAS2-2018-1058.

Amazon EKS-optimized AMI build scripts (July 10, 2018)
Amazon EKS has open-sourced the build scripts that are used to build the Amazon EKS-optimized AMI. These build scripts are now available on GitHub.