The progression of the software development lifecycle from scheduled releases to a continuous integration model addresses the needs of a demanding and continually evolving marketplace. In response to these changes, AWS introduced their CodePipeline offering in 2015. AWS CodePipeline allows you to automate the release process for your application or service.
AWS CodePipeline is a workflow management tool that lets users construct and manage a process through which their code is built, tested, and deployed into test and/or production environments. When a problem occurs at any step in the process, the pipeline halts, ensuring that preventable bugs and errors are not automatically deployed into your environment.
Workflows can be constructed either through the AWS command line, or through an intuitive user interface. Build steps can be configured to use either AWS offerings, third-party tools, or a combination of both.
Something important to note before you begin building a pipeline is that AWS only saves the pipeline at the end of the configuration process. Unfortunately, this means that if you get halfway through and need to leave to configure a component service, or to take a break, you may find yourself starting over from the beginning when you return.
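If you prefer the command line, the entire pipeline can instead be defined in a JSON file and created in a single call, which also sidesteps the console's save-at-the-end limitation. A minimal sketch (the pipeline name and file path are placeholders):

```shell
# Create a pipeline from a JSON definition; the stages, actions, and
# service role ARN all live in the file, so nothing is lost if you
# step away partway through writing it.
aws codepipeline create-pipeline --cli-input-json file://pipeline.json

# Retrieve an existing pipeline's definition to use as a starting template.
aws codepipeline get-pipeline --name CP_Example_Pipeline
```

The output of `get-pipeline` is valid input for `create-pipeline`, which makes it a convenient way to clone or version-control a pipeline definition.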
The Benefits of Using AWS CodePipeline
CodePipeline is a workflow management tool, which allows the user to configure a series of steps to form a pipeline. The service leverages many of the management tools already in the AWS environment, such as AWS CodeCommit and AWS CodeDeploy, but it does not limit itself to aggregating only internal services. The user can also create integrations with tools and services such as GitHub and Jenkins.
CodePipeline is highly configurable and has a short learning curve. Users who are familiar with the Amazon ecosystem will find it easy to create a pipeline for their applications and services. As with all AWS services, IAM roles must be configured to ensure that those who need access have it, and those who don't are restricted.
Setting Up Your New Pipeline
We need to take care of some basic setup before we create the pipeline. First, you’ll need access to the AWS environment. If you don’t already have access, you can sign up for a free account. Additional configuration and setup may be required during the course of this walkthrough of the CodePipeline service, but I’ll point this out along the way, and link to appropriate resources where possible.
You’ll want to navigate to the homepage for the CodePipeline utility. You can find a link to CodePipeline from the Console, or if you’re in the US West region, you can navigate with this link.
If you’re familiar with AWS, you’ll be aware that IAM roles are key to providing appropriate access to those people and services who need it. The pipeline we’ll be creating will require a lot of access to different parts of your account, and while it may seem like a good idea to define this upfront, we’re going to forego the defining of security roles until the end of the configuration process.
From the homepage for CodePipeline, click on the Get Started button. You'll be prompted to name your pipeline; for the purpose of this example, I'll call mine CP_Example_Pipeline. Choose your own name, and then click on the Next Step button.
Setting the Source for Your Pipeline
Now that our pipeline has a name, we're ready to get started. At the time of this writing, AWS offers three options to provide the source code for the pipeline:
- Amazon S3
- AWS CodeCommit
- GitHub
In order to use S3 as the source, you’ll need to provide a bucket name for an S3 bucket which has versioning enabled. Both Amazon CodeCommit and GitHub are similar, but let’s look at how to integrate them in a little more detail so you know what to expect. When you select an option, the appropriate fields appear below the selection.
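If you choose S3 and your bucket doesn't yet have versioning turned on, it can be enabled with a single AWS CLI call (the bucket name below is a placeholder):

```shell
# Versioning must be enabled on the source bucket before
# CodePipeline can use it as a source.
aws s3api put-bucket-versioning \
  --bucket my-pipeline-source-bucket \
  --versioning-configuration Status=Enabled
```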
CodeCommit and CodePipeline
In this example, I have a CodeCommit repository set up under the name Email_Validator, and I'm going to use the master branch as the source for the pipeline.
GitHub and CodePipeline
The AWS CodePipeline integration with GitHub is relatively simple as well. After selecting GitHub as the source provider, click on the “Connect to GitHub” button.
In order to connect to GitHub, you’ll need to authenticate to your GitHub account. (It is important to note that it is your GitHub credentials that should be entered here, and not your AWS credentials.)
Once you’ve authenticated to GitHub, you’ll need to review and authorize permissions which will allow AWS to have admin access to Repository webhooks and services, as well as access to all public and private repositories. Clicking on the “Authorize application” button will then direct you back to the AWS CodePipeline process, in order to complete the integration.
Finally, you’ll need to select the repository and branch you would like to use as your source. Fortunately, the integration allows AWS to auto-populate the text boxes, so you don’t have to do too much additional typing.
Now that we have a source, it’s time to work on the build process.
Defining the Build Process for Your Pipeline
The next step in the creation of the pipeline is to define the build provider. At the time of this writing, AWS offers three options for build providers:
- AWS CodeBuild
- Jenkins
- Solano CI
Not sure which to use? Check out AWS CodePipeline vs. Jenkins CI Server.
Let’s consider the configuration to add either AWS CodeBuild or Jenkins.
AWS CodeBuild Integration
For this example, I have a CodeBuild project already set up called Email_Validator. If you already have a build project configured, you can simply select AWS CodeBuild as the Build provider, click the radio button for Select an existing build project, and then choose a project name from the list which will appear in the Project name box.
You can also define a new CodeBuild project as part of this process if you would like. It requires the same inputs as setting up a CodeBuild project independently in the AWS environment, but that is beyond the scope of this article.
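Whether you reuse an existing CodeBuild project or define a new one, CodeBuild takes its instructions from a buildspec.yml file at the root of your source. A minimal sketch for a generic Node.js project (the commands and output directory are illustrative, not taken from the Email_Validator project):

```yaml
# buildspec.yml - read by CodeBuild at the start of each build
version: 0.2
phases:
  install:
    commands:
      - npm install
  build:
    commands:
      - npm test
      - npm run build
artifacts:
  # Everything under dist/ becomes the output artifact passed
  # to the next stage of the pipeline.
  files:
    - '**/*'
  base-directory: dist
```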
Jenkins Integration
The option to add Jenkins to the pipeline is a handy feature if you already have your builds configured to run on Jenkins, especially if you have a Jenkins server running on another EC2 instance.
However, if you're new to the system, note that you will not only have to set up and configure the Jenkins server and create the appropriate build jobs, but also ensure that the following prerequisites are in place so that CodePipeline can connect to and interact with Jenkins.
Prior to connecting Jenkins as the build step in the pipeline, ensure that you have the following set up and configured:
- An IAM instance role that allows communication between CodePipeline and the instance hosting Jenkins.
- The Rake and Haml Ruby gems installed on the instance hosting the Jenkins service, with environment variables set to allow Rake commands to be executed from the command line.
- The AWS CodePipeline Plugin for Jenkins is installed on the Jenkins service.
Additionally, the build job should be configured with:
- A build trigger which polls AWS CodePipeline every minute.
- A post-build action which uses the AWS CodePipeline Publisher, configured with the identical Provider name as is defined in the Build step. (See Fig. 5)
Once each of those items is set up and configured, you can include the Jenkins server as part of the pipeline. Start by selecting Add Jenkins in the Build provider selection box. Enter a Provider name, which must be identical to the Provider name included in the post-build action of the Jenkins job. (It doesn't matter which one you set first, as long as the other matches exactly.)
Finally, enter the Server URL, and the name of the project to be executed. An example of what the configuration form looks like is included below in Fig. 5.
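Behind the scenes, the Jenkins step is stored in the pipeline definition as a custom action. A hedged sketch of roughly what that JSON looks like (the artifact names and the MyJenkinsProvider provider name are placeholders; the provider value is what must match the name registered by the Jenkins job's CodePipeline Publisher):

```json
{
  "name": "JenkinsBuild",
  "actionTypeId": {
    "category": "Build",
    "owner": "Custom",
    "provider": "MyJenkinsProvider",
    "version": "1"
  },
  "configuration": {
    "ProjectName": "Email_Validator"
  },
  "inputArtifacts": [{ "name": "SourceOutput" }],
  "outputArtifacts": [{ "name": "BuildOutput" }]
}
```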
Deploying the Build with CodePipeline
With our project built, the next step to configure in the pipeline is the deployment to an environment for testing and/or production use. AWS CodePipeline offers three options to do this—or you can select No Deployment if you want CodePipeline to only build your project. Those three deployment options are:
- AWS Elastic Beanstalk
- AWS CodeDeploy
- AWS CloudFormation
We’ll look at configuring the use of AWS Elastic Beanstalk and AWS CodeDeploy to see how each is configured. If you are more familiar with and typically use AWS CloudFormation to manage your environment, Amazon provides detailed instructions here.
AWS Elastic Beanstalk Deployment
AWS Elastic Beanstalk is perhaps the easiest way to deploy and manage an application in AWS. Application code is uploaded, basic configuration provided, and AWS uses its Auto Scaling and Elastic Load Balancing to create and maintain an environment which will scale based on need.
In order to include deployment of your project using Elastic Beanstalk, you need to have an environment configured in AWS. You simply provide the application name and environment name to CodePipeline in order to integrate it.
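In the pipeline definition, the Elastic Beanstalk step boils down to those same two values. A sketch of the deploy action's JSON (the application and environment names below are assumptions based on this walkthrough's example project):

```json
{
  "name": "DeployToBeanstalk",
  "actionTypeId": {
    "category": "Deploy",
    "owner": "AWS",
    "provider": "ElasticBeanstalk",
    "version": "1"
  },
  "configuration": {
    "ApplicationName": "Email_Validator",
    "EnvironmentName": "EmailValidator-env"
  },
  "inputArtifacts": [{ "name": "BuildOutput" }]
}
```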
AWS CodeDeploy Deployment
AWS CodeDeploy is a service which enables the automated and coordinated deployment of an application into an environment. Two deployment types are available.
In-place deployment identifies all instances which host the application, and systematically takes each offline, replaces the application, and then brings each back online on the same collection of instances.
Blue-green deployment creates a new environment with new instances. The new applications are loaded onto the new instance, and after an optional wait time for verification and testing, an Elastic Load Balancer (ELB) begins rerouting traffic from the original environment to the new environment.
Additional information on AWS CodeDeploy, including how to create and configure new deployments, can be found here.
To integrate CodeDeploy into the pipeline, you'll need to create a new deployment, and then enter the Application name and the Deployment group.
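For either deployment type, CodeDeploy needs an appspec.yml file at the root of the build artifact to tell it where files go and which lifecycle scripts to run. A minimal sketch for an EC2 in-place deployment (the destination path and script names are placeholders):

```yaml
# appspec.yml - read by the CodeDeploy agent on each target instance
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/app
hooks:
  ApplicationStop:
    - location: scripts/stop_server.sh
      timeout: 60
  ApplicationStart:
    - location: scripts/start_server.sh
      timeout: 60
```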
Determining Appropriate Access for the Pipeline
Once configured, the pipeline will need access to a variety of services and tools within the AWS environment. Fortunately, the pipeline creation process makes this really easy to handle. Once you’ve configured the source, build and deployment configurations for your pipeline, you have the option to select the AWS service role under which the pipeline will be executed.
If you've previously defined other pipelines, you may already have a role defined, in which case you can enter it directly. If not, there is a Create role button, which will analyze the pipeline and determine which policies are required for its successful execution.
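Whichever route you take, the role must trust the CodePipeline service so that the service can assume it. The trust policy attached to the role looks like this (the permissions policy itself varies with the stages you configured, which is why the Create role button analyzes the pipeline first):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "Service": "codepipeline.amazonaws.com" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```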
Putting the Final Touches on Your Pipeline
Once the service role has been configured, you’ll have the opportunity to review your pipeline configuration, and create your pipeline. Below is the summary of the pipeline configured as part of this walkthrough. Your pipeline may be similar. Clicking on Create pipeline will create the pipeline, and begin its first execution.