Amazon Simple Storage Service and Amazon Glacier Storage-2

Accessing S3 Objects
If you didn’t think you’d ever need your data, you wouldn’t go to the trouble of saving it to S3. So, you’ll need to understand how to access your S3-hosted objects and, just as important, how to restrict access to only those requests that match your business and security needs.

Access Control
Out of the box, new S3 buckets and objects will be fully accessible to your account but to no other AWS accounts or external visitors. You can strategically open up access at the bucket and object levels using access control list (ACL) rules, finer-grained S3 bucket policies, or Identity and Access Management (IAM) policies.

There is more than a little overlap between those three approaches. In fact, ACLs are really leftovers from before AWS created IAM, and as a rule, Amazon recommends applying S3 bucket policies or IAM policies instead of ACLs. S3 bucket policies, which are formatted as JSON text and attached to your S3 bucket, make sense for cases where you want to control access to a single S3 bucket for multiple external accounts and users. IAM policies, on the other hand, exist at the account level within IAM, so they will probably make more sense when you're trying to control the way individual users and roles access multiple resources, including S3.

The following code is an example of an S3 bucket policy that allows both the root user and the user Steve from the specified AWS account to access the S3 MyBucket bucket and its contents. Both users are considered principals within this rule.
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": [
          "arn:aws:iam::xxxxxxxxxxxx:root",
          "arn:aws:iam::xxxxxxxxxxxx:user/Steve"
        ]
      },
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::MyBucket",
        "arn:aws:s3:::MyBucket/*"
      ]
    }
  ]
}

When it’s attached to an IAM entity (a user, group, or role), the following IAM policy will accomplish the same thing as the previous S3 bucket policy:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": [
        "arn:aws:s3:::MyBucket",
        "arn:aws:s3:::MyBucket/*"
      ]
    }
  ]
}

Presigned URLs
If you want to provide temporary access to an object that’s otherwise private, you can generate a presigned URL. The URL will be usable for a specified period of time, after which it will become invalid. You can build presigned URL generation into your code to provide object access programmatically.
The following AWS CLI command will return a URL that includes the required authentication string. The authentication will become invalid after 10 minutes (600 seconds); the default expiration value is 3,600 seconds (one hour).

$ aws s3 presign s3://MyBucketName/PrivateObject --expires-in 600
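
A quick usage sketch, reusing the bucket and object names from the command above: any client that holds the presigned URL can fetch the object with a plain HTTPS GET until the link expires.

$ URL=$(aws s3 presign s3://MyBucketName/PrivateObject --expires-in 600)
$ curl -s "$URL" -o PrivateObject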

Static Website Hosting
S3 buckets can be used to host the HTML files for entire static websites. A website is static when the system services used to render web pages and scripts are all client-based rather than server-based. This architecture permits simple and lean HTML code that's designed to be executed by the client browser. S3, because it's such an inexpensive yet reliable platform, is an excellent hosting environment for such sites.

When an S3 bucket is configured for static hosting, traffic directed at the bucket's URL can be automatically made to load a specified root document, usually named index.html. Users can click links within HTML pages to be sent to the target page or media resource. Error handling and redirects can also be incorporated into the configuration.

If you want requests for a DNS domain name (like mysite.com) routed to your static site, you can use Amazon Route 53 to associate your bucket's endpoint with any registered name. This will work only if your domain name is also the name of the S3 bucket. You'll learn more about domain name records in Chapter 8, "The Domain Name System (DNS) and Network Routing: Amazon Route 53 and Amazon CloudFront." You can also get a free SSL/TLS certificate to encrypt your site by requesting a certificate from AWS Certificate Manager (ACM) and associating it with a CloudFront distribution that specifies your S3 bucket as its origin.
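
As a minimal sketch of enabling static hosting, the following AWS CLI command (the bucket name is hypothetical) configures a bucket as a static website with the usual index and error documents. Note that you'd still need to allow public read access, for example through a bucket policy, before visitors can actually load the pages.

$ aws s3 website s3://my-static-site-bucket/ \
    --index-document index.html \
    --error-document error.html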

S3 and Glacier Select
AWS provides a different way to access data stored on either S3 or Glacier: Select. The feature lets you apply SQL-like queries to stored objects so that only relevant data from within objects is retrieved, permitting significantly more efficient and cost-effective operations. One possible use case would involve large CSV files containing sales and inventory data from multiple retail sites. Your company’s marketing team might need to periodically analyze only sales data and only from certain stores. Using S3 Select, they’ll be able to retrieve exactly the data they need—just a fraction of the full data set—while bypassing the bandwidth and cost overhead associated with downloading the whole thing.
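
To make that scenario concrete, here's a hedged sketch using the s3api CLI; the bucket, object key, and column names are hypothetical. The SQL expression pulls only the rows for a single store out of a CSV object and writes them to a local file.

$ aws s3api select-object-content \
    --bucket my-retail-data \
    --key reports/sales-2023.csv \
    --expression "SELECT s.store_id, s.sale_amount FROM S3Object s WHERE s.store_id = '1042'" \
    --expression-type SQL \
    --input-serialization '{"CSV": {"FileHeaderInfo": "USE"}}' \
    --output-serialization '{"CSV": {}}' \
    store-1042-sales.csv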


Amazon Glacier
At first glance, Glacier looks a bit like just another S3 storage class. After all, like most S3 classes, Glacier guarantees 99.999999999 percent durability and, as you've seen, can be incorporated into S3 lifecycle configurations. Nevertheless, there are important differences. Glacier, for example, supports archives as large as 40 TB rather than the 5 TB limit in S3. Its archives are encrypted by default, while encryption on S3 is an option you need to select; and unlike S3's "human-readable" key names, Glacier archives are given machine-generated IDs.

But the biggest difference is the time it takes to retrieve your data. Retrieving objects from an existing Glacier archive can take a number of hours, compared to the nearly instant access S3 provides. That last feature really defines the purpose of Glacier: to provide inexpensive long-term storage for data that will be needed only in unusual and infrequent circumstances.
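
As a rough sketch (the bucket and key names are hypothetical), if an object has already been moved to the Glacier storage class by a lifecycle rule, you don't read it directly; you first request a temporary restored copy and wait for the retrieval job to finish. The request below asks for a copy that remains available for seven days using the Standard retrieval tier.

$ aws s3api restore-object \
    --bucket my-archive-bucket \
    --key backups/2021-sales.zip \
    --restore-request '{"Days": 7, "GlacierJobParameters": {"Tier": "Standard"}}'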

Storage Pricing
To give you a sense of what S3 and Glacier might cost you, here's a typical usage scenario. Imagine you make weekly backups of your company sales data that generate 5 GB archives. You decide to maintain each archive in the S3 Standard class for its first 30 days and then transition it to S3 One Zone-Infrequent Access (One Zone-IA), where it will remain for 90 more days. At the end of those 120 days, you will move each archive once again, this time to Glacier, where it will be kept for another 730 days (two years) and then deleted.

Once your archive rotation is in full swing, you'll have a steady total of approximately 20 GB in S3 Standard (about four weekly 5 GB archives), 65 GB in One Zone-IA (about 13 archives), and 520 GB in Glacier (about 104 archives). Of course, storage is only one part of the mix. You'll also be charged for operations including data retrievals; PUT, COPY, POST, or LIST requests; and lifecycle transition requests. Full, up-to-date details are available at https://aws.amazon.com/s3/pricing/.

Other Storage-Related Services
It’s worth being aware of some other storage-related AWS services that, while perhaps not as common as the others you’ve seen, can make a big difference for the right deployment.

Amazon Elastic File System
The Elastic File System (EFS) provides automatically scalable and shareable file storage. EFS-based file systems are designed to be accessed from within a virtual private cloud (VPC) via Network File System (NFS) mounts on EC2 instances or from your on-premises servers through AWS Direct Connect connections. The goal is to make it easy to enable secure, low-latency, and durable file sharing among multiple instances.
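
As a rough sketch (the file system ID, Region, and mount point are placeholders, and the instance needs an NFS client installed plus a security group that permits NFS traffic), mounting an EFS file system from an EC2 instance typically looks like a standard NFSv4.1 mount:

$ sudo mkdir -p /mnt/efs
$ sudo mount -t nfs4 -o nfsvers=4.1 \
    fs-0123456789abcdef0.efs.us-east-1.amazonaws.com:/ /mnt/efs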

AWS Storage Gateway
Integrating the backup and archiving needs of your local operations with cloud storage services can be complicated. AWS Storage Gateway provides software gateway appliances (based on VMware ESXi, Microsoft Hyper-V, or EC2 images) with multiple virtual connectivity interfaces. Local devices can connect to the appliance as though it were a physical backup device like a tape drive, while the data itself is saved to AWS platforms like S3 and EBS.

AWS Snowball
Migrating large data sets to the cloud over a normal Internet connection can sometimes require far too much time and bandwidth to be practical. If you're looking to move terabyte- or even petabyte-scale data for backup or active use within AWS, ordering a Snowball device might be the best option. When requested, AWS will ship you a physical Snowball storage device protected by 256-bit encryption, onto which you'll copy your data. You then ship the device back to Amazon, where its data will be uploaded to your S3 bucket(s).

AWS CLI Example
This example will use the AWS CLI to create a new bucket and recursively copy the sales-docs directory to it. Then, using the low-level s3api CLI (which should have been installed along with the regular AWS CLI package), you'll check for the current lifecycle configuration of your new bucket with the get-bucket-lifecycle-configuration subcommand, specifying your bucket name. This will return an error, of course, as there currently is no configuration.

Next, you'll run the put-bucket-lifecycle-configuration subcommand, specifying the bucket name and adding some JSON code to the --lifecycle-configuration argument. The code (which could also be passed as a file) will transition all objects using the sales-docs prefix to the STANDARD_IA class after 30 days and to GLACIER after 60 days. The objects will be deleted (or "expire") after a full year (365 days). Finally, you can run get-bucket-lifecycle-configuration once again to confirm that your configuration is active. Here are the commands you would need to run to make all this work:

$ aws s3 mb s3://bucket-name
$ aws s3 cp --recursive sales-docs/ s3://bucket-name
$ aws s3api get-bucket-lifecycle-configuration \
    --bucket bucket-name
$ aws s3api put-bucket-lifecycle-configuration \
    --bucket bucket-name \
    --lifecycle-configuration '{
      "Rules": [
        {
          "Filter": {
            "Prefix": "sales-docs/"
          },
          "Status": "Enabled",
          "Transitions": [
            {
              "Days": 30,
              "StorageClass": "STANDARD_IA"
            },
            {
              "Days": 60,
              "StorageClass": "GLACIER"
            }
          ],
          "Expiration": {
            "Days": 365
          },
          "ID": "Lifecycle for bucket objects."
        }
      ]
    }'
$ aws s3api get-bucket-lifecycle-configuration \
    --bucket bucket-name
