CloudTrail, CloudWatch, and AWS Config-1

CloudTrail
An event is a record of an action that a principal performs against an AWS resource. CloudTrail logs read and write actions against AWS services in your account, giving you a detailed record that includes the action, the resource affected and its region, who performed the action, and when. CloudTrail logs both API and non-API actions. Non-API actions include logging into the management console. API actions include launching an instance, creating a bucket in S3, and creating a virtual private cloud (VPC). These are API events regardless of whether they're performed in the AWS management console, with the AWS Command Line Interface (CLI), with an AWS SDK, or by another AWS service. CloudTrail classifies events into management events and data events.


Management Events
Management events include operations that a principal executes (or attempts to execute) against an AWS resource. AWS also calls management events control plane operations. Management events are further grouped into write-only and read-only events. Write-only events include API operations that modify or might modify resources. For example, the RunInstances API operation may create a new EC2 instance, and it would be logged regardless of whether the call was successful. Write-only events also include logging into the management console as the root user or an IAM user, although CloudTrail does not log unsuccessful root logins. Read-only events include API operations that read resources but can't make changes, such as the DescribeInstances API operation, which returns a list of EC2 instances.

Data Events
Data events track two types of data plane operations that tend to be high volume: S3 object-level activity and Lambda function executions. For S3 object-level operations, CloudTrail distinguishes read-only and write-only events. GetObject is a read-only event, while DeleteObject and PutObject are write-only events.

Event History
By default, CloudTrail logs 90 days of management events and stores them in a viewable, searchable, and downloadable database called the event history. The event history does not include data events. CloudTrail creates a separate event history for each region containing only the activities that occurred in that region, but events for global services such as IAM and Route 53 are included in the event history of every region.
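As a quick sketch of searching the event history from the AWS CLI, the following command looks up recent console logins (the start time and result count are placeholder values):

    aws cloudtrail lookup-events \
        --lookup-attributes AttributeKey=EventName,AttributeValue=ConsoleLogin \
        --start-time 2018-01-01T00:00:00Z \
        --max-results 10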


Trails
If you want to store more than 90 days of event history, or if you want to customize the types of events CloudTrail logs (for example, by excluding specific services or actions, or by including data events), you can create a trail. A trail is a configuration that records specified events and delivers them as CloudTrail log files to an S3 bucket of your choice. A log file contains one or more log entries in JavaScript Object Notation (JSON) format. A log entry represents a single action against a resource and includes detailed information about the action, including but not limited to the following:
eventTime - The date and time of the action, given in coordinated universal time (UTC). Log entries in a log file are sorted by timestamp, but events with the same timestamp are not necessarily in the order in which they occurred.
userIdentity - Detailed information about the principal that initiated the request. This may include the type of principal (e.g., IAM role or user), its Amazon Resource Name (ARN), and the IAM username.
eventSource - The global endpoint of the service against which the action was taken.
eventName - The name of the API operation.
awsRegion - The region the resource is located in. For global services, this is always us-east-1.
sourceIPAddress - The IP address of the requester.
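Put together, a trimmed log entry might look like the following sketch (the account ID, username, and IP address are made-up placeholders, and real entries contain additional fields):

    {
        "eventTime": "2018-11-06T21:45:18Z",
        "userIdentity": {
            "type": "IAMUser",
            "arn": "arn:aws:iam::123456789012:user/alice",
            "userName": "alice"
        },
        "eventSource": "ec2.amazonaws.com",
        "eventName": "RunInstances",
        "awsRegion": "us-east-1",
        "sourceIPAddress": "198.51.100.7"
    }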


Creating a Trail
You can choose to log events from a single region or from all regions. If you apply the trail to all regions, then whenever AWS launches a new region, it is automatically added to your trail. You can create up to five trails in a single region. A trail that applies to all regions counts against this limit in each region. For example, if you create a trail in us-east-1 and then create another trail that applies to all regions, CloudTrail considers you to have two trails in the us-east-1 region. After you create a trail, it can take up to 15 minutes from the time CloudTrail logs an event to the time it writes a log file to the S3 bucket.
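As a minimal sketch of creating an all-region trail with the AWS CLI (the trail and bucket names are placeholders, and the bucket must already exist with a bucket policy that allows CloudTrail to write to it):

    aws cloudtrail create-trail \
        --name my-trail \
        --s3-bucket-name my-cloudtrail-bucket \
        --is-multi-region-trail

    aws cloudtrail start-logging --name my-trail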


Logging Management and Data Events
When you create a trail, you can choose whether to log management events, data events, or both. If you log management events, you must choose whether to log read-only events, write-only events, or both. This allows you to log read-only and write-only events to separate trails. Complete Exercise 7.1 to create a trail that logs write-only events in all regions.
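As an illustration of the write-only configuration described above, the following command configures an existing trail to log only write-only management events (the trail name is a placeholder):

    aws cloudtrail put-event-selectors \
        --trail-name my-trail \
        --event-selectors '[{"ReadWriteType": "WriteOnly", "IncludeManagementEvents": true}]'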


Log File Integrity Validation
CloudTrail provides a means to ensure that no log files were modified or deleted after creation. During quiet periods of no activity, it also gives you assurance that no log files were delivered, as opposed to being delivered and then maliciously deleted. This is useful in forensic investigations where someone with access to the S3 bucket may have tampered with the log files. For example, when an attacker hacks into a system, it's common for them to delete or modify log files to cover their tracks.

With log file integrity validation enabled, every time CloudTrail delivers a log file to the S3 bucket, it calculates a cryptographic hash of the file. This hash is a unique value derived from the contents of the log file itself. If even one byte of the log file changes, the entire hash changes, which makes it easy to detect when a file has been modified.

Every hour, CloudTrail creates a separate file called a digest file that contains the cryptographic hashes of all log files delivered within the last hour. CloudTrail places this file in the same bucket as the log files but in a separate folder, which allows you to set different permissions on the folder containing the digest file to protect it from deletion. CloudTrail also cryptographically signs the digest file using a private key that varies by region and places the signature in the file's S3 object metadata. Each digest file also contains a hash of the previous digest file, if one exists. If there are no events to log during an hour-long period, CloudTrail still creates a digest file, which lets you assert that no log files were delivered during that period.

You can validate the integrity of CloudTrail log and digest files by using the AWS CLI. You must specify the ARN of the trail and a start time, and the AWS CLI will validate all log files from the start time to the present. For example, to validate all log files written from January 1, 2018, to the present, you'd issue the following command:

    aws cloudtrail validate-logs --trail-arn arn:aws:cloudtrail:us-east-1:account-id:trail/benpiper-trail --start-time 2018-01-01T00:00:00Z
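Note that integrity validation must be turned on before it can do any good. As a sketch (the trail name is a placeholder), you can enable it on an existing trail with:

    aws cloudtrail update-trail --name my-trail --enable-log-file-validation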

CloudWatch
CloudWatch functions as a metric repository that lets you collect, retrieve, and graph numeric performance metrics from AWS and non-AWS resources. All AWS resources automatically send their metrics to CloudWatch. These metrics include EC2 instance CPU utilization, EBS volume read and write IOPS, S3 bucket sizes, and DynamoDB consumed read and write capacity units. Optionally, you can send custom metrics to CloudWatch from your applications and on-premises servers. CloudWatch Alarms can send you a notification or take an action based on the value of those metrics. CloudWatch Logs lets you collect, store, view, and search logs from AWS and non-AWS sources. You can also extract custom metrics from logs, such as the number of errors logged by an application or the number of bytes served by a web server.
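As a sketch of how an alarm ties to a metric, the following command creates an alarm that fires when an instance's average CPUUtilization stays above 80 percent for two consecutive five-minute periods (the instance ID and SNS topic ARN are placeholders):

    aws cloudwatch put-metric-alarm \
        --alarm-name high-cpu \
        --namespace AWS/EC2 \
        --metric-name CPUUtilization \
        --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
        --statistic Average \
        --period 300 \
        --evaluation-periods 2 \
        --threshold 80 \
        --comparison-operator GreaterThanThreshold \
        --alarm-actions arn:aws:sns:us-east-1:123456789012:my-topic
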
CloudWatch Metrics
CloudWatch organizes metrics into namespaces. Metrics from AWS services are stored in AWS namespaces and use the format AWS/service to allow for easy classification of metrics. For example, AWS/EC2 is the namespace for metrics from EC2, and AWS/S3 is the namespace for metrics from S3. You can think of a namespace as a container for metrics. Namespaces help prevent confusion of metrics with similar names. For example, CloudWatch stores the WriteOps metric from the Relational Database Service (RDS) in the AWS/RDS namespace, while the EBS metric VolumeWriteOps goes in the AWS/EBS namespace. You can create custom namespaces for custom metrics. For example, you may store metrics from an Apache web server under the custom namespace Apache. Metrics exist only in the region in which they were created.

A metric functions as a variable and contains a time-ordered set of data points. Each data point contains a timestamp, a value, and optionally a unit of measure. Each metric is uniquely defined by a namespace, a name, and optionally a dimension. A dimension is a name-value pair that distinguishes metrics with the same name and namespace from one another. For example, if you have multiple EC2 instances, CloudWatch creates a CPUUtilization metric in the AWS/EC2 namespace for each instance. To uniquely identify each metric, AWS assigns it a dimension named InstanceId with the value of the instance's resource identifier.
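To see dimensions in action, the following command lists the CPUUtilization metric for one specific instance (the instance ID is a placeholder):

    aws cloudwatch list-metrics \
        --namespace AWS/EC2 \
        --metric-name CPUUtilization \
        --dimensions Name=InstanceId,Value=i-0123456789abcdef0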

Basic and Detailed Monitoring
How frequently an AWS service sends metrics to CloudWatch depends on the monitoring type the service uses. Most services support basic monitoring, and some support basic monitoring and detailed monitoring. Basic monitoring sends metrics to CloudWatch every five minutes. EC2 provides basic monitoring by default. EBS uses basic monitoring for gp2 volumes. EC2 collects metrics every minute but sends only the five-minute average to CloudWatch. How EC2 sends data points to CloudWatch depends on the hypervisor. For instances using the Xen hypervisor, EC2 publishes metrics at the end of the five-minute interval. For example, between 13:00 and 13:05 an EC2 instance has the following CPUUtilization metric values measured in percent: 25, 50, 75, 80, and 10. The average CPUUtilization over the five-minute interval is 48. Therefore, EC2 sends the CPUUtilization metric to CloudWatch with a timestamp of 13:00 and a value of 48.
For instances using the Nitro hypervisor, EC2 sends a data point every minute during a five-minute period, but each data point is a rolling average. For example, at 13:00, EC2 records a data point for the CPUUtilization metric with a value of 25. EC2 sends this data point to CloudWatch with a timestamp of 13:00. At 13:01, EC2 records another data point with a value of 50. It averages this new data point with the previous one to get a value of 37.5. It then sends this new data point to CloudWatch, again with a timestamp of 13:00. This process continues for the rest of the five-minute interval. Services that use detailed monitoring publish metrics to CloudWatch every minute. More than 70 services support detailed monitoring, including EC2, EBS, RDS, DynamoDB, ECS, and Lambda. EBS defaults to detailed monitoring for io1 volumes.
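For EC2, detailed monitoring is opt-in per instance. As a sketch (the instance ID is a placeholder), you can enable it with the following command, and revert with aws ec2 unmonitor-instances:

    aws ec2 monitor-instances --instance-ids i-0123456789abcdef0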

Regular and High-Resolution Metrics
The metrics generated by AWS services have a timestamp resolution of no less than one minute. For example, a measurement of CPUUtilization taken at 14:00:28 would have a timestamp of 14:00. These are called regular-resolution metrics. For some AWS services, such as EBS, CloudWatch stores metrics at a five-minute resolution. For example, if EBS delivers a VolumeWriteBytes metric at 21:34, CloudWatch would record that metric with a timestamp of 21:30. CloudWatch can store custom metrics with up to one-second resolution. Metrics with a resolution of less than one minute are high-resolution metrics. You can create your own custom metrics using the PutMetricData API operation. When publishing a custom metric, you can specify a timestamp up to two weeks in the past or up to two hours into the future. If you don't specify a timestamp, CloudWatch creates one based on the time it received the metric, in UTC.
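As a sketch of publishing a high-resolution custom metric under the custom Apache namespace mentioned earlier (the metric name and value are illustrative), a storage resolution of 1 marks the metric as high resolution; omit it or pass 60 for a regular-resolution metric:

    aws cloudwatch put-metric-data \
        --namespace Apache \
        --metric-name ErrorCount \
        --value 3 \
        --storage-resolution 1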

Expiration
You can’t delete metrics in CloudWatch. Metrics expire automatically, and when a metric expires depends on its resolution. Over time, CloudWatch aggregates higher-resolution metrics into lower-resolution metrics. A high-resolution metric is stored for three hours. After this, all the data points from each minute-long period are aggregated into a single data point at one-minute resolution. The high-resolution data points simultaneously expire and are deleted. After 15 days, five data points stored at one-minute resolution are aggregated into a single data point stored at five-minute resolution. These metrics are retained for 63 days. At the end of this retention period, 12 data points from each metric are aggregated into a single 1-hour resolution metric and retained for 15 months. After this, the metrics are deleted. To understand how this works, consider a VolumeWriteBytes metric stored at five-minute resolution. CloudWatch will store the metric at this resolution for 63 days, after which it will convert the data points to one-hour resolution. After 15 months, CloudWatch will delete those data points permanently.
