So I was perusing the AWS updates about what is new and coming soon, and I was really excited when I read that AWS CodePipeline now has native Amazon EKS deployment support (Ref: AWS CodePipeline adds native Amazon EKS deployment support - AWS).
It has been possible for a while to deploy to an Amazon EKS cluster, but historically you needed compute infrastructure running to aid in the deployment (which costs money to run). This new native support streamlines that.
Whether your EKS cluster is public or private, you can use this mechanism. In practical terms I would expect the audience for this guide to already host EKS clusters and deploy to them, but since I currently don't in my lab environment, I will document the process end to end.
Ref: Tutorial: Deploy to Amazon EKS with CodePipeline - AWS CodePipeline
How is it done?
As the guide below will show, it is done through the use of an EKS action which is now available in CodePipeline V2 pipelines. That is the caveat: the pipelines created to deploy code need to be V2 pipelines.
For demonstration purposes I could easily use a public EKS cluster; however, I tend to favour hosting things like EKS environments privately for a more secure setup, so I will deploy a VPC (with a NAT Gateway) and then deploy a private EKS cluster within it.
Guide: Deploy a Private VPC with EKS Cluster
Creating VPC with Network connectivity
AWS provides a really easy way to stand up a VPC nowadays. In the console of your AWS account, go to VPC, click Create VPC and select the "VPC and more" option.

I kept the default config except that I changed the VPC name from "project" to "eks" and enabled a NAT Gateway in one AZ.
This VPC will serve as the location for our EKS Cluster
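If you prefer the CLI, here is a very rough sketch of the kind of calls the "VPC and more" wizard makes on your behalf (the wizard also creates route tables, routes and subnet associations that I am omitting here; all IDs, CIDRs and the AZ below are illustrative placeholders):

```bash
# Rough sketch only - the console wizard also wires up route tables and routes.
# CIDRs, the AZ and all IDs are placeholders.
aws ec2 create-vpc --cidr-block 10.0.0.0/16 \
  --tag-specifications 'ResourceType=vpc,Tags=[{Key=Name,Value=eks-vpc}]'

# One public and one private subnet in the first AZ (repeat for more AZs)
aws ec2 create-subnet --vpc-id <vpc-id> --cidr-block 10.0.0.0/20 --availability-zone eu-west-1a
aws ec2 create-subnet --vpc-id <vpc-id> --cidr-block 10.0.128.0/20 --availability-zone eu-west-1a

# NAT Gateway in one AZ, backed by an Elastic IP, placed in a public subnet
aws ec2 allocate-address --domain vpc
aws ec2 create-nat-gateway --subnet-id <public-subnet-id> --allocation-id <eipalloc-id>
```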
Creating a Private EKS Cluster
Once the VPC is ready, you can navigate to Amazon Elastic Kubernetes Service in the Management Console to create the cluster (easy as pie). I say easy as pie because the updated EKS cluster creation dialog is quite intuitive.

It detects the VPC I created and has already selected the two private subnets for the cluster, so the only work for me is to give the cluster a name, create the recommended Cluster IAM Role and Node IAM Role, and then create the cluster. Creating each role is basically Next, Next, Finish once you click the create recommended role option; then refresh the role list and the auto-created role will be there. Click Create to finish the cluster provisioning.
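For reference, the cluster-creation part of that wizard boils down to something like the following CLI call (a sketch only; it does not cover the recommended IAM roles or a node group, and the cluster name, role ARN and subnet IDs are placeholders):

```bash
# Private-only control plane endpoint in the two private subnets the wizard picked.
# Cluster name, role ARN and subnet IDs are placeholders.
aws eks create-cluster \
  --name eks-demo \
  --role-arn arn:aws:iam::111122223333:role/eksClusterRole \
  --resources-vpc-config subnetIds=<private-subnet-1>,<private-subnet-2>,endpointPublicAccess=false,endpointPrivateAccess=true

# The control plane takes a while to come up
aws eks describe-cluster --name eks-demo --query cluster.status
```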
Update the CodePipeline Service Role
Since I am demonstrating how to do this with a private cluster, there are permissions you need to grant the CodePipeline role for it to work.
The following policy document came directly from the referenced AWS tutorial, so you would add this, substituting the ARNs of your subnets within the newly created VPC (or an existing one if you already had one).
If you are creating a new pipeline (as I am doing since this is a demo) you may create a new service role for CodePipeline. If you do so, simply add an inline policy to that role after it has been created. If there is an existing CodePipeline service role that will be used, then add the policy document to it. Just remember to substitute the example values for the real ones in your environment.
What the policy is really doing is allowing CodePipeline to create network interfaces in the subnets where your EKS cluster is operating (and delete them again afterwards), which is how CodePipeline natively deploys code to the cluster.
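To give a sense of its shape, here is an illustrative sketch of that kind of inline policy; use the actual document from the AWS tutorial, and note that the account ID, region, cluster name and subnet IDs below are placeholders:

```bash
# Illustrative sketch only - copy the real policy document from the AWS tutorial.
# Account ID, region, cluster name and subnet IDs are placeholders.
cat > codepipeline-eks-vpc-policy.json <<'EOF'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "eks:DescribeCluster",
      "Resource": "arn:aws:eks:eu-west-1:111122223333:cluster/eks-demo"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeSubnets",
        "ec2:DescribeSecurityGroups",
        "ec2:DescribeNetworkInterfaces"
      ],
      "Resource": "*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "ec2:CreateNetworkInterface",
        "ec2:DeleteNetworkInterface"
      ],
      "Resource": [
        "arn:aws:ec2:eu-west-1:111122223333:subnet/subnet-aaaaaaaaaaaaaaaaa",
        "arn:aws:ec2:eu-west-1:111122223333:subnet/subnet-bbbbbbbbbbbbbbbbb",
        "arn:aws:ec2:eu-west-1:111122223333:network-interface/*"
      ]
    }
  ]
}
EOF
```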
Grant IAM Access to the Service Role
Create Pipeline
At this stage (as of Feb 2025 when I am writing this) CodePipeline doesn't have a "Deployment Category" for EKS. I would assume, since they just announced the capability, that one will come to make it easier. But even so it is easy enough to do.
From the CodePipeline GUI on the console click "Create Pipeline" and then under category select "Custom Pipeline"

Give your pipeline a name and either reference an existing service role or create a new one. I am creating a new one in my lab, using the default role name, which is generated from the standard prefix followed by the region and then the pipeline name. Since it is a new role I will need to amend it afterwards with the permissions mentioned in the policy document above.

In the Source stage we are defining the code we are using for deployment. I am going to lean on our friends at AWS again and use their sample Linux app for my source (since I don't have a handy EKS app available).
Check it out: Deploy a sample application on Linux - Amazon EKS
I have put the YAML files from this guide into a private GitHub repository and specified that as the source for my deployment.
CodePipeline supports multiple source providers (such as GitHub, GitLab and Bitbucket). CodeCommit is still available for existing users but is no longer available to new customers.
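For completeness, getting those two manifest files into a fresh private repository is just ordinary git; the repository name and remote URL here are hypothetical:

```bash
# Hypothetical repo name and remote - substitute your own private repository.
git init eks-sample-app && cd eks-sample-app
# Copy eks-sample-deployment.yaml and eks-sample-service.yaml (from the
# "Deploy a sample application on Linux" guide) into this folder, then:
git add eks-sample-deployment.yaml eks-sample-service.yaml
git commit -m "Add EKS sample app manifests"
git branch -M main
git remote add origin git@github.com:<your-user>/eks-sample-app.git
git push -u origin main
```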

Click Next once you have specified the appropriate source
There is nothing to do in the build or test stages, so skip both and proceed to deploy.
You will see below that there is an Amazon EKS option, so select that.

Specify the EKS cluster name (if you don't see one, then there is no cluster online that your AWS login has access to) and then specify the deployment details. It can be either a Helm deployment or kubectl. Since I am using the YAML files from the sample Linux app AWS provides, it will be kubectl.
For the manifest paths (since they are coming from the GitHub source and are in the root folder) I use ./eks-sample-deployment.yaml,./eks-sample-service.yaml
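For the kubectl option, this is roughly what the action ends up doing with those files against the cluster (a sketch; the cluster name and region are placeholders):

```bash
# Roughly what the kubectl-based EKS deploy action amounts to; cluster name
# and region are placeholders.
aws eks update-kubeconfig --name eks-demo --region eu-west-1
kubectl apply -f eks-sample-deployment.yaml -f eks-sample-service.yaml
```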

Add the subnets that your compute nodes are in. You can add a security group to the pipeline, which will apply to the network interfaces that get created; it is a good idea to do this, because if you don't it will rely on whatever default security group applies. Finish the deployment.
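If you do want a dedicated security group for those pipeline-created interfaces rather than relying on a default one, creating it is straightforward (the VPC ID and names below are placeholders); depending on your cluster's endpoint settings you may also need to allow this group inbound on 443 on the cluster's security group:

```bash
# Placeholder VPC ID and name. A new security group allows all outbound by
# default and no inbound, which suits ENIs that only need to reach the cluster.
aws ec2 create-security-group \
  --group-name codepipeline-eks-deploy \
  --description "ENIs created by CodePipeline for private EKS deployments" \
  --vpc-id <vpc-id>
```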
You will probably see a "Failed" result for the initial deployment (especially if it is a completely new EKS and CodePipeline setup).

Since I haven't yet added the permissions as per the guide, this is to be expected.
Edit the IAM Permissions for the Service role
In the IAM console, under Roles, find the service role created for the pipeline and, under Add permissions, select Create inline policy.
Switch to JSON and then paste the sample policy document.
Change the EKS cluster ARN and the subnet ARNs in the sample to real values, then name and save the policy.
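If you prefer the CLI over the console for this step, the same inline policy can be attached with put-role-policy (the role name below follows the default naming pattern but is a placeholder, as is the file name):

```bash
# Attach the edited policy document as an inline policy on the pipeline's
# service role. Role name and file name are placeholders.
aws iam put-role-policy \
  --role-name AWSCodePipelineServiceRole-eu-west-1-eks-demo-pipeline \
  --policy-name codepipeline-eks-vpc-access \
  --policy-document file://codepipeline-eks-vpc-policy.json
```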
Finally, you need to add the CodePipeline service role to the EKS cluster's access configuration.
This is done from the EKS console for the cluster, under the Access tab. Under Create access entry you can add the role ARN for the CodePipeline service role and click Next.

AmazonEKSClusterAdminPolicy is the one mentioned in the sample and I have used it in my demo. It's not best practice to use this managed policy for the role's access going forward; in production you should create a custom policy that has the permissions required but no more.
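For reference, the same access entry can be created from the CLI (the cluster name, account ID and role name are placeholders; in production associate something tighter than cluster admin):

```bash
# Create an access entry for the pipeline's service role (placeholder ARNs)
aws eks create-access-entry \
  --cluster-name eks-demo \
  --principal-arn arn:aws:iam::111122223333:role/AWSCodePipelineServiceRole-eu-west-1-eks-demo-pipeline

# Associate an access policy - cluster admin here for the demo only
aws eks associate-access-policy \
  --cluster-name eks-demo \
  --principal-arn arn:aws:iam::111122223333:role/AWSCodePipelineServiceRole-eu-west-1-eks-demo-pipeline \
  --policy-arn arn:aws:eks::aws:cluster-access-policy/AmazonEKSClusterAdminPolicy \
  --access-scope type=cluster
```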
After I configured the IAM access entry, I ran the pipeline again and the deployments went through successfully.

And within the EKS environment I can see the deployment is running. If I wanted to access the service I would need to be on a network allowed by the security group, but if I were, I should be able to bring the service up on its IP in a web browser and get a default page.
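A quick way to check the result from a machine that can reach the private endpoint; the namespace and resource names below are the ones the AWS sample app guide uses, so verify them against your own manifests:

```bash
# Namespace and names assume the AWS sample manifests - check your own YAML.
kubectl get deployments,pods -n eks-sample-app
kubectl get service eks-sample-linux-service -n eks-sample-app
# From an allowed network, browsing to the service's cluster IP should return
# the sample app's default page.
```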
