The Operational Excellence Pillar-3

Custom Deployment Configurations
You can also create custom deployment configurations. This is useful if you want to customize how many instances CodeDeploy attempts to deploy to simultaneously. The deployment must complete successfully on these instances before CodeDeploy moves on to the remaining instances in the deployment group. Hence, the value you must specify when creating a custom deployment configuration is the minimum number of healthy instances, which can be expressed either as a percentage of all instances in the group or as an absolute count.
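To make the arithmetic concrete, the sketch below computes how many instances could be deployed to at once for a given setting. The function is illustrative only, and the rounding behavior shown is an assumption rather than documented CodeDeploy behavior:

```python
import math

def max_concurrent_deployments(total_instances, min_healthy, is_percentage):
    """Return how many instances could be taken out of service at once,
    given a minimum-healthy-instances setting. Illustrative sketch only;
    the rounding used here is an assumption."""
    if is_percentage:
        # Convert the percentage to an instance count (rounded up here).
        min_healthy_count = math.ceil(total_instances * min_healthy / 100)
    else:
        min_healthy_count = min_healthy
    return max(total_instances - min_healthy_count, 0)

# With 10 instances and a minimum of 50% healthy, up to 5 instances
# can be deployed to simultaneously.
print(max_concurrent_deployments(10, 50, True))   # 5
print(max_concurrent_deployments(10, 9, False))   # 1
```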

Lifecycle Events
An instance deployment is divided into lifecycle events, which include stopping the application (if applicable), installing prerequisites, installing the application, and validating the application. During some of these lifecycle events, you can have the agent execute a lifecycle event hook, which is a script of your choosing. The following are all the lifecycle events during which you can have the agent automatically run a script:

ApplicationStop You can use this hook to gracefully stop an application prior to an in-place deployment. You can also use it to perform cleanup tasks. This event occurs before the agent copies any application files from your repository. It doesn’t occur on original instances in a blue/green deployment, nor does it occur the first time you deploy to an instance.

BeforeInstall This event occurs after the agent copies the application files to a temporary location on the instance, but before it copies them to their final location. If your application files require some manipulation, such as decryption or the insertion of a unique identifier, this would be the hook to use.

AfterInstall Once the agent copies your application files to their final destination, you can use this hook to perform further needed tasks, such as setting file permissions.

ApplicationStart You use this hook to start your application. For example, on a Linux instance running an Apache web server, this may be as simple as running a script that executes the systemctl start httpd command.

ValidateService Here you can check that the application is working as expected. For instance, you may check that it’s generating log files or that it has established a connection to a backend database. This is the final event for in-place deployments that don’t use an elastic load balancer.

BeforeBlockTraffic With an in-place deployment using an elastic load balancer, this event occurs first, before the instance is unregistered from the load balancer.

AfterBlockTraffic This event occurs after the instance is unregistered from the elastic load balancer. You may use this hook to wait for user sessions or in-process transfers to complete.

BeforeAllowTraffic For deployments using an elastic load balancer, this event occurs after the application is installed and validated. You may use this hook to perform any tasks needed to warm up the application or otherwise prepare it to accept traffic.

AfterAllowTraffic This is the final event for deployments using an elastic load balancer.

Notice that not all of these lifecycle events can occur on all instances in a blue/green deployment. The BeforeBlockTraffic event, for example, wouldn’t occur on a replacement instance, since it makes no sense for CodeDeploy to unregister a replacement instance from a load balancer during a deployment.

Each script run during a lifecycle event must complete successfully before CodeDeploy will allow the deployment to advance to the next event. By default, the agent waits one hour for the script to complete before it considers the instance deployment failed. You can optionally set the timeout to a lower value, as shown in the following section.

The Application Specification File
The application specification file defines where the agent should copy the application files onto your instance and what scripts it should run during the deployment process. You must place the file in the root of your application repository and name it appspec.yml. It consists of the following five sections:
Version Currently the only allowed version of the AppSpec file is 0.0.

OS Because the CodeDeploy agent works only on Linux and Windows, you must specify one of these as the operating system.

Files This specifies one or more source and destination pairs identifying the files or directories to copy from your repository to the instance.

Permissions This optionally sets ownership, group membership, file permissions, and SELinux context labels for the files after they’re copied to the instance. This applies to Amazon Linux, Ubuntu, and Red Hat Enterprise Linux instances only.

Hooks This section is where you define the scripts the agent should run at each lifecycle event. You must specify the name of the lifecycle event followed by a tuple containing the following:

Location This must be a full path to an executable.

Timeout You can optionally set the time in seconds that the agent will wait for the script to execute before it considers the instance deployment failed.

Run As For Amazon Linux and Ubuntu instances, this lets you specify the user the script should run as.
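Putting these sections together, a minimal appspec.yml might look like the following sketch; the destination path, user, and script names are hypothetical:

```yaml
version: 0.0
os: linux
files:
  - source: /
    destination: /var/www/myapp   # hypothetical install location
permissions:
  - object: /var/www/myapp
    owner: apache                 # hypothetical owner and group
    group: apache
hooks:
  ApplicationStart:
    - location: scripts/start.sh  # hypothetical script in the repository
      timeout: 300
      runas: root
  ValidateService:
    - location: scripts/validate.sh
      timeout: 60
```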

Triggers and Alarms
You can optionally set up triggers to generate an SNS notification for certain deployment and instance events, such as when a deployment succeeds or fails.
You can also configure your deployment group to monitor up to 10 CloudWatch alarms. If an alarm exceeds or falls below a threshold you define, the deployment will stop.

Rollbacks
You can optionally have CodeDeploy roll back, or revert, to the last successful revision of an application if the deployment fails or if a CloudWatch alarm is triggered during deployment. Despite the name, rollbacks are actually new deployments.

CodePipeline
CodePipeline lets you automate the different stages of your software development and release process. These stages are often implemented using continuous integration (CI) and continuous delivery (CD) workflows, or pipelines. CI and CD are different but related concepts.

Continuous Integration
Continuous integration is a method whereby developers use a version control system such as Git to regularly submit or check in their changes to a common repository. This first stage of the pipeline is the source stage. Depending on the application, a build system may compile the code or build it into a binary file, such as an executable, AMI, or container image. This is called the build stage. One goal of continuous integration is to ensure that the code developers are adding to the repository works as expected and meets the requirements of the application. Thus, the build stage may also include unit tests, such as verifying that a function given a certain input returns the correct output. This way, if a change to an application causes something to break, the developer can learn of the error and fix it early. Not all applications require a build stage. For example, a web-based application using an interpreted language like PHP doesn’t need to be compiled.
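A unit test of the sort described above might look like the following; the function under test and its expected values are hypothetical:

```python
def add_tax(price, rate):
    """Hypothetical application function under test."""
    return round(price * (1 + rate), 2)

def test_add_tax():
    # The build stage fails the pipeline if this assertion fails,
    # so the developer learns of a regression early.
    assert add_tax(100.0, 0.07) == 107.0

test_add_tax()
print("unit tests passed")
```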

Compliance
Compliance insights show how the patch and association status of your instances stacks up against the rules you’ve configured. Patch compliance shows the number of instances that have the patches in their configured baseline, as well as details of the specific patches installed. Association compliance shows the number of instances that have had an association successfully executed against them.

Continuous Delivery
Continuous delivery incorporates elements of the continuous integration process but also deploys the application to production. A goal of continuous delivery is to allow frequent updates to an application while minimizing the risk of failure. To do this, continuous delivery pipelines usually include a test stage. As with the build stage, the actions performed in the test stage depend on the application. For example, testing a web application may include deploying it to a test web server and verifying that the web pages display the correct content. On the other hand, testing a Linux executable that you plan to release publicly may involve deploying it to test servers running a variety of different Linux distributions and versions.

The final stage is deployment, in which the application is deployed to production. Although continuous delivery can be fully automated without requiring human intervention, it’s common to require manual approval before releasing an application to production. You can also schedule releases to occur regularly or during opportune times such as maintenance windows.

Because continuous integration and continuous delivery pipelines overlap, you’ll often see them combined to form the term CI/CD pipeline. Keep in mind that even though a CI/CD pipeline includes every stage from source to deployment, that doesn’t mean you have to deploy to production every time you make a change. You can add logic to require a manual approval before deployment. Or you can disable transitions from one stage to the next. For instance, you may disable the transition from the test stage to the deployment stage until you’re actually ready to deploy.

Creating the Pipeline
Every CodePipeline pipeline must include at least two stages and can have up to 10. Within each stage, you must define at least one task or action to occur during the stage. An action can be one of the following types:
■ Source
■ Build
■ Test
■ Approval
■ Deploy
■ Invoke
CodePipeline integrates with other AWS and third-party providers to perform the actions. You can have up to 20 actions in the same stage, and they can run sequentially or in parallel. For example, during your testing stage you can have two separate test actions that execute concurrently. Note that different action types can occur in the same stage. For instance, you can perform build and test actions in the same stage.
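As an illustration of these limits, here is a small sketch (not an AWS API) that checks a pipeline layout against the stage and action constraints just described:

```python
def validate_pipeline(stages):
    """Check a pipeline layout against the limits described above:
    2-10 stages, each with 1-20 actions. Illustrative sketch only."""
    if not 2 <= len(stages) <= 10:
        return False
    return all(1 <= len(actions) <= 20 for actions in stages.values())

# A hypothetical two-stage pipeline: a source stage, then a stage that
# mixes build and test actions (different action types may share a stage).
pipeline = {
    "Source": ["CodeCommit-source"],
    "BuildAndTest": ["CodeBuild-build", "CodeBuild-unit-tests"],
}
print(validate_pipeline(pipeline))               # True
print(validate_pipeline({"Source": ["src"]}))    # False: only one stage
```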

Source
The source action type specifies the source of your application files. The first stage of a pipeline must include at least one source action and can’t include any other types of actions. Valid providers for the source type are CodeCommit, S3, and GitHub. If you specify CodeCommit or S3, you must also specify the ARN of a repository or bucket. AWS can use CloudWatch Events to detect when a change is made to the repository or bucket. Alternatively, you can have CodePipeline periodically poll for changes. To add a GitHub repository as a source, you’ll have to grant CodePipeline permission to access your repositories. Whenever you update the repository, GitHub uses a webhook to notify CodePipeline of the change.

Build
Not all applications require build actions. Interpreted languages such as those used in shell scripts, and declarative code such as CloudFormation templates, don’t require compiling. However, even noncompiled languages may benefit from a build stage that analyzes the code for syntax errors and style conventions. The build action type can use AWS CodeBuild as well as the third-party providers CloudBees, Jenkins, Solano CI, and TeamCity. AWS CodeBuild is a managed build service that lets you compile source code and perform unit tests. CodeBuild offers on-demand build environments for a variety of programming languages, saving you from having to create and manage your own build servers.

Test
The test action type can also use AWS CodeBuild as a provider. For testing against smartphone platforms, AWS Device Farm offers testing services for Android, iOS, and web applications. Other supported providers are BlazeMeter, Ghost Inspector, HPE StormRunner Load, Nouvola, and Runscope.

Approval
The approval action type includes only one action: manual approval. When pipeline execution reaches this action, it awaits manual approval before continuing to the next stage. If there’s no manual approval within seven days, the action is denied, and pipeline execution halts. You can optionally send an SNS notification, which includes a link to approve or deny the request and may include a URL for the approver to review.

Deploy
For deployment, CodePipeline offers integrations with CodeDeploy, CloudFormation, Elastic Container Service, Elastic Beanstalk, OpsWorks Stacks, Service Catalog, and XebiaLabs.
Recall that CodeDeploy doesn’t let you specify a CodeCommit repository as a source for your application files. But you can specify CodeCommit as the provider for the source action and CodeDeploy as the provider for the deploy action.

Invoke
If you want to run a custom Lambda function as part of your pipeline, you can invoke it using the invoke action type. For example, you can write a function to create an EBS snapshot, perform application testing, clean up unused resources, and so on.

Artifacts
When you create a pipeline, you must specify an S3 bucket to store the files used during different stages of the pipeline. CodePipeline compresses these files into a ZIP file called an artifact. Different actions in the pipeline can take an artifact as an input, generate it as an output, or both.
The first stage in your pipeline must include a source action specifying the location of your application files. When your pipeline runs, CodePipeline compresses the files to create a source artifact. Suppose the second stage of your pipeline is a build stage. CodePipeline then unzips the source artifact and passes the contents along to the build provider, which uses this as an input artifact. The build provider yields its output; let’s say it’s a binary file. CodePipeline takes that file and compresses it into another ZIP file, called an output artifact.
This process continues throughout the pipeline. When creating a pipeline, you must specify a service role for CodePipeline to assume. It uses this role to obtain permissions to the S3 bucket. The bucket must exist in the same region as the pipeline. You can use the same bucket for multiple pipelines, but each pipeline can use only one bucket for artifact storage.

AWS Systems Manager
AWS Systems Manager, formerly known as EC2 Systems Manager and Simple Systems Manager (SSM), lets you automatically or manually perform actions against your AWS resources and on-premises servers. From an operational perspective, Systems Manager can handle many of the maintenance tasks that often require manual intervention or writing scripts. For on-premises and EC2 instances, these tasks include upgrading installed packages, taking an inventory of installed software, and installing a new application. For your AWS resources, such tasks may include creating an AMI golden image from an EBS snapshot, attaching IAM instance profiles, or disabling public read access to S3 buckets. Systems Manager provides the following two capabilities:
■ Actions
■ Insights

Actions
Actions let you automatically or manually perform actions against your AWS resources, either individually or in bulk. These actions must be defined in documents, which are divided into three types:
■ Automation—actions you can run against your AWS resources
■ Command—actions you run against your Linux or Windows instances
■ Policy—documents that define what data Systems Manager should collect from your instances

Automation
Automation enables you to perform actions against your AWS resources in bulk. For example, you can restart multiple EC2 instances, update CloudFormation stacks, and patch AMIs. Automation provides granular control over how it carries out its individual actions. It can perform the entire automation task in one fell swoop, or it can perform one step at a time, enabling you to control precisely what happens and when. Automation also offers rate control, so you can specify as a number or a percentage how many resources to target at once.

Run Command
While automation enables you to automate tasks against your AWS resources, Run Command lets you execute tasks on your managed instances that would otherwise require logging in or using a third-party tool to execute a custom script. Systems Manager accomplishes this via an agent installed on your EC2 and on-premises managed instances. The Systems Manager agent comes preinstalled on all Windows Server and Amazon Linux AMIs.

By default, Systems Manager doesn’t have permissions to do anything on your instances. You first need to apply an instance profile role that contains the permissions in the AmazonEC2RoleforSSM policy. AWS offers a variety of preconfigured command documents for Linux and Windows instances; for example, the AWS-InstallApplication document installs software on Windows, and the AWS-RunShellScript document allows you to execute arbitrary shell scripts against Linux instances. Other documents include tasks such as restarting a Windows service or installing the CodeDeploy agent. You can target instances by tag or select them individually. As with automation, you optionally may use rate limiting to control how many instances you target at once.
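As a sketch of how such an invocation fits together, the following builds the parameters for the SSM SendCommand API, targeting instances by a hypothetical tag and applying rate control; actually sending the command would require boto3, credentials, and the instance profile described above:

```python
def build_run_command(document_name, commands, tag_key, tag_value,
                      max_concurrency="50%"):
    """Assemble arguments for an SSM SendCommand call that targets
    instances by tag, with rate control. Illustrative sketch only;
    the tag key/value below are hypothetical."""
    return {
        "DocumentName": document_name,
        "Targets": [{"Key": f"tag:{tag_key}", "Values": [tag_value]}],
        "Parameters": {"commands": commands},
        # Rate control: an absolute count or a percentage of targets.
        "MaxConcurrency": max_concurrency,
    }

params = build_run_command("AWS-RunShellScript", ["uptime"], "Role", "web")
# With boto3 this could then be sent as:
#   boto3.client("ssm").send_command(**params)
print(params["DocumentName"])  # AWS-RunShellScript
```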

Session Manager
Session Manager gives you interactive Bash and PowerShell access to your Linux and Windows instances, respectively, without having to open inbound ports on a security group or network ACL, or even having your instances in a public subnet. You don’t need to set up a bastion host or worry about SSH keys. All Linux versions and Windows Server 2008 R2 through 2016 are supported.

You open a session using the web console or the AWS CLI; to use the AWS CLI to start a session, you must first install the Session Manager plugin on your local machine. The Session Manager SDK has libraries for developers to create custom applications that connect to instances. This is useful if you want to integrate an existing configuration management system with your instances without opening ports in a security group or network ACL. Connections made via Session Manager are secured using TLS 1.2. Session Manager can keep a log of all logins in CloudTrail and store a record of commands run within a session in an S3 bucket.

Patch Manager
Patch Manager helps you automate the patching of your Linux and Windows instances. It doesn’t support all operating systems supported by other Systems Manager features but does support the following:
■ All recent Amazon Linux versions
■ CentOS/RHEL 6.5–7.5
■ SUSE Linux Enterprise Server 12.0–12.9
■ Ubuntu Server 14.04 LTS, 16.04 LTS, and 18.04 LTS
■ Windows Server 2003–2016
You can individually choose instances to patch, patch according to tags, or create a patch group. A patch group is a collection of instances with the tag key Patch Group. For example, if you wanted to include some instances in the Webservers patch group, you’d assign tags to each instance with the tag key of Patch Group and the tag value of Webservers. Keep in mind that the tag key is case-sensitive.

Patch Manager uses patch baselines to define which available patches to install, as well as whether the patches will be installed automatically or require approval. AWS offers default baselines that differ according to operating system but include patches that are classified as security-related, critical, important, or required. The patch baselines for all operating systems except Ubuntu automatically approve these patches after seven days. This is called an auto-approval delay.

For more control over which patches get installed, you can create your own custom baselines. Each custom baseline contains one or more approval rules that define the operating system, the classification and severity level of patches to install, and an auto-approval delay. You can also specify approved patches in a custom baseline configuration. For Windows baselines, you can specify knowledge base and security bulletin IDs. For Linux baselines, you can specify CVE IDs or full package names.

If a patch is approved, it will be installed during a maintenance window that you specify. Alternatively, you can forgo a maintenance window and patch your instances immediately. Patch Manager executes the AWS-RunPatchBaseline document to perform patching.
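The auto-approval delay is simple date arithmetic, sketched below with hypothetical dates:

```python
from datetime import date, timedelta

def auto_approval_date(release_date, delay_days=7):
    """Return the date a patch becomes approved, given the baseline's
    auto-approval delay (seven days in the default baselines discussed
    above). Illustrative sketch only."""
    return release_date + timedelta(days=delay_days)

# A patch released on June 1 is auto-approved on June 8 under a
# seven-day delay.
print(auto_approval_date(date(2023, 6, 1)))  # 2023-06-08
```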

State Manager
While Patch Manager can help ensure your instances are all at the same patch level, State Manager is a configuration management tool that ensures your instances have the software you want them to have and are configured in the way you define. More generally, State Manager can automatically run command and policy documents against your instances, either one time only or on a schedule. For example, you may want to install antivirus software on your instances and then take a software inventory.

To use State Manager, you must create an association that defines the command document to run, any parameters you want to pass to it, the target instances, and the schedule. Once you create an association, State Manager will immediately execute it against the target instances that are online. Thereafter, it will follow the schedule.

There is currently only one policy document you can use with State Manager: AWS-GatherSoftwareInventory. This document defines what specific metadata to collect from your instances. Despite the name, in addition to collecting software inventory, you can also have it collect network configurations, file information, CPU information, and, for Windows, registry values.

Insights
Insights aggregate health, compliance, and operational details about your AWS resources into a single area of AWS Systems Manager. Some insights are categorized according to AWS resource groups, which are collections of resources in an AWS region. You define a resource group based on one or more tag keys and optionally tag values. For example, you can apply the same tag key to all resources related to a particular application—EC2 instances, S3 buckets, EBS volumes, security groups, and so on. Insights are categorized as covered next.

Built-in Insights
Built-in insights are monitoring views that Systems Manager makes available to you by default. Built-in insights include the following:
AWS Config Compliance This insight shows the total number of resources in a resource group that are compliant or noncompliant with AWS Config rules, as well as compliance by resource. It also shows a brief history of configuration changes tracked by AWS Config.
CloudTrail Events This insight displays each resource in the group, the resource type, and the last event that CloudTrail recorded against the resource.
Personal Health Dashboard The Personal Health Dashboard contains alerts when AWS experiences an issue that may impact your resources. For example, some service APIs occasionally experience increased latency. It also shows you the number of events that AWS resolved within the last 24 hours.
Trusted Advisor Recommendations The AWS Trusted Advisor tool can check your AWS environment for optimizations and recommendations around cost optimization, performance, security, and fault tolerance. It will also show you when you’ve exceeded 80 percent of your limit for a service.

Business and Enterprise support customers get access to all Trusted Advisor checks. All AWS customers get the following security checks for free:
■ Public access to an S3 bucket, particularly upload and delete access
■ Security groups with unrestricted access to ports that normally should be restricted, such as TCP ports 1433 (Microsoft SQL Server) and 3389 (Remote Desktop Protocol)
■ Whether you’ve created an IAM user
■ Whether multifactor authentication is enabled for the root user
■ Public access to an EBS or RDS snapshot

Inventory Manager
The Inventory Manager collects data from your instances, including operating system and application versions. Inventory Manager can collect data for the following:
■ Operating system name and version
■ Applications and filenames, versions, and sizes
■ Network configuration including IP and MAC addresses
■ Windows updates, roles, services, and registry values
■ CPU model, cores, and speed

You choose which instances to collect data from by creating a region-wide inventory association, which executes the AWS-GatherSoftwareInventory policy document. You can choose all instances in your account or select instances manually or by tag. When you choose all instances in your account, it’s called a global inventory association, and new instances you create in the region are automatically added to it. Inventory collection occurs at least every 30 minutes.
When you configure the Systems Manager agent on an on-premises server, you specify a region for inventory purposes. To aggregate metadata for instances from different regions and accounts, you may configure Resource Data Sync in each region to store all inventory data in a single S3 bucket.
