The Definitive Guide to Amazon S3

Amazon S3, or Simple Storage Service, is probably the most widely used and best understood service provided by AWS. The console interface feels familiar because it looks just like your average file explorer. However, S3 is anything but a simple way to store files: it provides an outstanding number of features, many of which come free of charge, to help you comply with the strictest regulations and to implement complex, scalable applications with ease.

Since I'm now studying to become an AWS Certified Solutions Architect, I will use this blog to collect notes and examples from books and websites around the web, trying to give you the most complete and dense information about the AWS services I come across.

What Amazon S3 is

The Amazon Simple Storage Service is

Simple, durable, massively scalable object storage.

Let's ignore the official AWS definition for a second. My definition for S3 is

a secure, massive place to store your data, with powerful tools like versioning, event triggers, automatic replication and a truly impressive data durability. Oh and it's incredibly cheap too!

The first and probably most important characteristic of a storage system is its guaranteed durability. By default S3 provides 99.999999999% (that's eleven 9s) object durability per year. This means that if you store 1,000,000,000 (1 billion) objects, you can expect to lose, on average, one object every 100 years.
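
To make the arithmetic concrete, here is a minimal back-of-the-envelope calculation in Python (the object count is just an example figure):

```python
# Expected object losses per year at eleven 9s of durability.
durability = 0.99999999999          # 99.999999999%
objects_stored = 1_000_000_000      # 1 billion objects (example figure)

expected_losses_per_year = objects_stored * (1 - durability)
print(expected_losses_per_year)     # 0.01 -> roughly one object lost every 100 years
```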

If that seems unreasonable to you, well, I'm sorry, but hardly any other storage system on Earth provides the same guarantees.

S3 achieves that impressive number of 9s by automatically replicating each object across multiple Availability Zones, which I want to cover in another post.

Another very important characteristic of a Web service today is availability. AWS S3 guarantees 99.99% availability, which translates to ~52 minutes of downtime per year. If you are building a large-scale e-commerce website you may want to protect yourself from that downtime by putting a CDN like CloudFront or CloudFlare in front of it. For many other use cases you can tolerate 52 minutes.

Going back to the official definition, the most important part to understand is object storage. Many have compared S3 to a large FTP server, NFS or other Internet-based file storage solutions. The issue here is that Amazon S3 does not store files, but objects.

It may look like a semantic issue, but there is more to it than mere words!

S3 is not a network filesystem and as such it provides no hierarchy of directories. Rather, it stores objects labeled with a key. Objects are created and updated atomically and in their entirety. You cannot partially overwrite an object; you must replace the whole object.

When you store an object into S3 you assign it a key, just like you would give a file a name. Often the keys are constructed to simulate a Unix filesystem path, but still, there's no directory there. If an object is keyed with dir/subdir/file.txt, no dir or subdir directory has been created. We just have an object whose name (key) is literally dir/subdir/file.txt.

Almost all S3 clients, and the AWS S3 console itself, provide the illusion of directories, which is very nice and very deceiving at the same time.
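
A minimal boto3 sketch of this idea (the bucket name is hypothetical): the key contains slashes, but no directory is ever created; the console and the Delimiter parameter merely group keys by prefix.

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # hypothetical bucket name

# Create an object whose key merely *looks* like a path.
s3.put_object(Bucket=bucket, Key="dir/subdir/file.txt", Body=b"hello")

# The "directory" illusion: listing with a delimiter groups keys by prefix,
# but no directory object exists anywhere.
resp = s3.list_objects_v2(Bucket=bucket, Prefix="dir/", Delimiter="/")
print([p["Prefix"] for p in resp.get("CommonPrefixes", [])])  # e.g. ['dir/subdir/']
```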

Why you should avoid thinking in directories

Let's say you have a bunch of images with the following keys:

  • files/images/image1.png
  • files/images/image2.png
  • files/images/image3.png
  • files/images/[...]

After a while, with thousands of objects stored, you realize that you really want to rename the files "directory" to media. On a normal filesystem you would just navigate to the root directory and rename files to media. On S3, instead, you just have a bunch of keys whose prefix is files, and there is no rename operation: you will have to copy every single object to a new key with the media prefix and delete the old one, as sketched below.
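
A minimal sketch of such a bulk "rename" with boto3 (the bucket name is hypothetical); every object is copied and then deleted, which also means you briefly pay for two copies:

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # hypothetical bucket name

# "Rename" the files/ prefix to media/ by copying and deleting each object.
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket=bucket, Prefix="files/"):
    for obj in page.get("Contents", []):
        old_key = obj["Key"]
        new_key = "media/" + old_key[len("files/"):]
        s3.copy_object(Bucket=bucket, Key=new_key,
                       CopySource={"Bucket": bucket, "Key": old_key})
        s3.delete_object(Bucket=bucket, Key=old_key)
```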

For the same reason, there is no way to have an "empty directory" in S3. If you let yourself be deceived by the tree structure shown in the console, you will notice that as soon as you delete all the objects in a "directory", you lose the directory itself.

Buckets and Objects

Files - I mean, objects - are logically stored in a huge flat container named a Bucket. You can have up to 100 buckets per account. A bucket and its objects are entirely contained in a single AWS Region.

When creating a new bucket you must choose a region and a name, which must be globally unique across all AWS accounts and must be DNS friendly. These restrictions exist because the bucket name is then used to address single objects with URLs like this:

https://_bucketname_.s3.amazonaws.com/file1.txt

You share the bucket namespace with all other users. If you delete a bucket, another user may take its name and you will not be able to take it back.

The region you choose for a bucket cannot be changed without deleting the bucket and its contents and creating it again, so it is very important that you make a sensible choice. You may want to select a region that is geographically close to your users to deliver content as fast as possible, or you may be legally required to store sensitive data in a particular region for compliance reasons.

Once you have a bucket you can start uploading files. As said before, you assign an object a key during the upload, along with metadata and access control settings.

S3 does not analyze or alter the content of the uploaded file in any way (except when applying encryption). The file content is treated as a meaningless blob of bytes, no matter whether it is a video of your cat or a 300 GB virtual machine image.

You can have an unlimited number of objects in a bucket, but a single object cannot exceed 5 TB in size. This means that an S3 bucket is virtually unlimited in its capacity. Object keys cannot be longer than 1024 bytes (which puts a theoretical upper bound of roughly 256^1024 distinct keys on a bucket, but for the purpose of the certification, you can have infinite objects).

Amazon S3 use cases

  • Media repository for the web: while there are better solutions to this particular problem like CloudFront, S3 can be used to store image and video uploads for your web application and to distribute them to clients.
  • Backup and log archive: probably the most common use of S3 is to store database and disk backups along with your server log files after rotation. By leveraging the Infrequent Access storage class, you can get prices as low as $0.0135 per GB.
  • Personal data backup and sharing: once upon a time all data stored in Dropbox was kept in S3. It is a great way to maintain your own data and provides interesting solutions to sharing objects with third parties in a secure way.
  • Bundle distribution: it can also be used to serve compiled/packaged software like Rubygems or .deb files for your Ubuntu desktop.
  • Static website hosting: if you have a plain boring website, with no authentication or server side dynamics whatsoever, you can upload the static HTML, CSS and JS files to a bucket and serve your whole website from S3.

Wrong uses of Amazon S3

  • Mount it as a network file system: there exist drivers for Windows and Linux that will mount an S3 bucket on your machine, like a DVD or a network share. While this can work for a single user and for particularly small files, things will fall apart very fast if you try to do this on many servers as a means of having a shared directory. You may want to check out Amazon Elastic File System instead.
  • Use it as a CDN: while you can surely use it to distribute web content, it's not the best tool for the job. First of all, you may want to consider geography: if you run a global website with customers all over the world, you don't want to serve everyone from the same bucket located in London. Second, while very flexible, S3 starts to slow down around 100 GET requests per second. As said before, you may want to use CloudFront.

Data Consistency

Like many other AWS services, S3 is eventually consistent. This means that on some occasions it may respond with data that is not what you expected. In particular:

  • For new objects, read-after-write consistency is provided. This means that if you create a new object and then try to download it immediately after, you will succeed 100% of the time.
  • When you replace or delete an existing object, you may get stale copies for a few seconds. A deleted object may still appear to be present, for example.

In any case, all operations are atomic. If you download an object while it's being replaced, you will not get half of the old file and half of the new, as would happen on a traditional file system. You either get the old one or the new one, in full.

S3 API Operations

S3 presents a REST interface to its internals. You can use this API to perform CRUD operations on buckets and objects, as well as to change permissions and configurations on both. The standard HTTP verbs GET/POST/PUT/DELETE are used exactly as you would expect, and requests are authenticated with the custom AWS Signature scheme.

Besides the standard operations, there are 3 non-standard operations on object entities that you must know, and that may come in handy from time to time:

  • First, and most important, Multipart Upload. While objects in S3 are atomically present (meaning either they're present in their entirety, or they're not), you can upload a single object in pieces. This allows you to open multiple connections and use parallel transfers, as well as to pause and resume. You are advised to use Multipart Uploads for files larger than 100 MB and you must use them for files larger than 5 GB.
  • Range GETs: the only place where the API breaks atomicity. You can download part of an object using standard HTTP Range headers, which allows you to use S3 for video streaming or to fetch only a portion of a very large file.
  • Pre-Signed URLs: sometimes you want to share a link with someone. Sharing a link with someone outside AWS normally requires you to make the object public, which means making it available to anyone on the Internet who can find the URL. With Pre-Signed URLs you can generate a URL that is valid for a limited amount of time and scoped only to chosen objects and request methods. The generated URLs do not contain your security credentials. (A short sketch of these operations follows this list.)
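
Here is a minimal boto3 sketch of these three operations, assuming a hypothetical bucket and key; boto3's transfer manager handles the multipart mechanics for you:

```python
import boto3
from boto3.s3.transfer import TransferConfig

s3 = boto3.client("s3")
bucket, key = "my-example-bucket", "videos/big-file.mp4"  # hypothetical names

# Multipart upload: upload_file switches to multipart above the threshold.
config = TransferConfig(multipart_threshold=100 * 1024 * 1024)  # 100 MB
s3.upload_file("big-file.mp4", bucket, key, Config=config)

# Range GET: fetch only the first kilobyte of the object.
first_kb = s3.get_object(Bucket=bucket, Key=key, Range="bytes=0-1023")["Body"].read()

# Pre-Signed URL: shareable link valid for one hour.
url = s3.generate_presigned_url(
    "get_object", Params={"Bucket": bucket, "Key": key}, ExpiresIn=3600
)
print(url)
```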

As is the case with all services, AWS provides SDKs for many common programming languages that will hide all the nitty gritty details of HTTP calls and authentication.

All S3 endpoints are served over both HTTP and HTTPS. Remember to configure your client to use the HTTPS version.

Object Metadata

Besides the key and the content, objects in S3 also have metadata. Every object gets some S3-generated metadata by default, like the MD5 hash or the last update date. Some default metadata can be overridden, for example the Content-Type header that will be returned on GET requests so that clients can parse the object content appropriately.

When you upload a new object to a bucket you can send custom metadata as HTTP headers by prefixing them with x-amz-meta-. Metadata can then be retrieved as HTTP headers using the GET and HEAD HTTP methods.

Once an object is created, its metadata cannot be modified in place. You must re-upload the object, or "copy" it onto itself with the new metadata.
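
A minimal sketch of both ideas with boto3 (hypothetical bucket and key names): custom metadata is attached at upload time, and "changing" it later is really a copy of the object onto itself.

```python
import boto3

s3 = boto3.client("s3")
bucket, key = "my-example-bucket", "reports/2017.pdf"  # hypothetical names

# Upload with custom metadata (sent as x-amz-meta-* headers under the hood).
s3.put_object(Bucket=bucket, Key=key, Body=open("2017.pdf", "rb"),
              ContentType="application/pdf",
              Metadata={"department": "finance"})

# "Update" the metadata by copying the object onto itself with REPLACE.
s3.copy_object(Bucket=bucket, Key=key,
               CopySource={"Bucket": bucket, "Key": key},
               MetadataDirective="REPLACE",
               ContentType="application/pdf",
               Metadata={"department": "finance", "reviewed": "true"})
```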

Users Access Control

When it comes to authorization, S3 is probably one of the most complex services provided by AWS. The reason for these intricacies is that S3 buckets can be shared between AWS accounts. Yes, you read that right: you can have another AWS account upload and manage objects in a bucket owned by your account.

You can set permissions at the user level, at the bucket level or at the object level. For this purpose, an S3 request goes through different layers that can allow or deny the operation:

  • The user must be allowed to perform the operation by their IAM policy.
  • Bucket policies are used to set generic permissions on all objects in a bucket. For example, if you want to host your static website here or use the bucket as media storage for your website, you may want a bucket policy that allows anonymous read access to all objects. You can also use them to grant access to third-party AWS accounts. Bucket-level permissions can be specified either with an ACL-like interface or with an actual IAM-like JSON policy; you can use both, and they are merged to allow or deny a request (see the sketch after this list).
  • Object ACLs are a coarse-grained access control mechanism for setting object-specific permissions.
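
As an illustration, this is roughly what the classic "public read" bucket policy looks like when applied with boto3 (the bucket name is hypothetical); treat it as a sketch, not a security recommendation:

```python
import json
import boto3

s3 = boto3.client("s3")
bucket = "my-website-bucket"  # hypothetical bucket name

# Allow anonymous read access to every object in the bucket.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "PublicReadGetObject",
        "Effect": "Allow",
        "Principal": "*",
        "Action": "s3:GetObject",
        "Resource": f"arn:aws:s3:::{bucket}/*",
    }],
}
s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))
```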

From the official guidebook:

ACLs are a legacy access control mechanism, created before IAM existed. ACLs are best used today for a limited set of use cases, such as enabling bucket logging or making a bucket that hosts a static website be world-readable.

If you do not specify any access control policy, S3 buckets are tightly locked: only the bucket owner (the account that created the bucket) has access to it and its objects.

S3 Storage Classes

At this point you may think that Amazon S3 is just a dumb repository for your data, and while you could keep on using it like this, you would be missing the best parts. S3 has a number of features that make it unique and invaluable. One of these is Storage Classes.

When you store an object in a bucket, you should think about its access pattern and choose the Storage Class that suits it best to reduce costs. You can also change the storage class of previously created objects if you picked the wrong one.

The Standard Storage Class is the default. It is great as it is and is best suited to hot data (frequently accessed objects).

The Infrequent Access Storage Class was introduced a few years ago. This is the storage class you should use for backups, logs and cold data (rarely accessed objects). You retain the same eleven 9s of durability, but only 99.9% availability (up to ~8h of downtime per year). The big difference is the pricing: you get a ~45% discount on the cost per GB (depending on the AWS region), while you pay double for GET and POST requests. Objects stored in this class must be kept for a minimum of 30 days; if you delete an object earlier, you are billed for 30 days anyway. So, again, great for long-lived, rarely accessed data.

The Reduced Redundancy Storage Class promises to make you pay much less by sacrificing data durability, losing on average 1 object in 10,000 every year (99.99% durability). In this storage class the object is not replicated across as many Availability Zones. While the official AWS documentation and the certification course material insist on this storage class, it is de facto deprecated, since its pricing is now higher than the Standard storage class, giving it no real advantage. It was suited to derivative data like size-reduced versions of an image. If you decide to take the AWS certification, remember that officially this storage class is still alive and kicking and it is, or is supposed to be, cheaper!

Glacier is the final step in low-cost data retention. When you have data that you are legally obliged to retain, like financial records, but you don't really care how long it takes to access it, Glacier is the answer. It provides an extremely low cost per GB (~20% of Standard pricing), but it may take up to 12 hours to access a file. Amazon Glacier is an AWS service of its own, but it is tightly integrated with S3: you can move data between S3 and Glacier both manually and automatically. And that brings us to the next section.

Object Lifecycle Management

The access patterns to data change wildly depending on the age of the data itself. A very common pattern is:

  • Newly created data is stored, and it's frequently accessed;
  • As time goes by, the information gets less valuable or out of date and it's thus used less and less;
  • The information is completely out of date and can be deleted. Sometimes you must retain the data for compliance reasons anyway.

You can see the parallel with S3 Storage Classes. It is not uncommon for an object to fit the intended use case of each Storage Class at different points in its life, and issuing hundreds of thousands of API calls every week to move each object to the most appropriate storage class would be quite a complex task.

Luckily, S3 provides Lifecycle Management. This is simply a configuration on the bucket that moves objects from one storage class to another according to their age and key prefix.

Let's say we have a bucket where we store nginx access logs. We want the data to be readily accessible in the first period; after that, the main concern is reducing the cost of keeping old log files. You may use the following Lifecycle configuration (a sketch of how to apply it follows the list):

  1. Upload using the Standard Storage class;
  2. After 30 days, move to Infrequent Access;
  3. After 90 days expire (delete) the object.
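
Here is roughly how that rule could be expressed with boto3, assuming a hypothetical bucket and a logs/ key prefix (it also aborts incomplete multipart uploads, which is discussed further below):

```python
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-nginx-logs",  # hypothetical bucket name
    LifecycleConfiguration={
        "Rules": [{
            "ID": "archive-and-expire-logs",
            "Filter": {"Prefix": "logs/"},
            "Status": "Enabled",
            # After 30 days, move to Infrequent Access.
            "Transitions": [{"Days": 30, "StorageClass": "STANDARD_IA"}],
            # After 90 days, delete the object.
            "Expiration": {"Days": 90},
            # Also clean up dangling multipart uploads after a week.
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        }]
    },
)
```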

In some countries, financial records must be kept for 10 years. Here's a Lifecycle configuration suited for the case:

  1. Upload using the Standard Storage class;
  2. After 3 months move to Infrequent Access;
  3. After 1 year move to Glacier;
  4. After 10 years expire the object.

You get the idea. One of the responsibilities of a Solution Architect is to find a good balance of costs management, compliance and data access.

Cleanup Incomplete Multipart Uploads

For a number of reasons, Multipart Uploads will sometimes fail. It may happen because you hit Ctrl-C or because your server crashed. While the underlying cause does not matter, what does matter is that you will be paying for the uploaded parts as if they were stored. An incomplete object is not visible in the console or by any other means, but you will still be paying for the associated storage.

To avoid paying for dangling uploads for years, always set a Lifecycle Policy to abort incomplete Multipart Uploads after a number of days (as in the example above).

S3 Event Notifications

Wow, this guide is getting very long and I still haven't told you about S3 Events.

Amazon does its best not only to create incredible tools, but also takes time to make sure that these services integrate nicely with each other. That's where S3 Event Notifications come into play.

Event notifications bring your bucket's objects to life, triggering events towards other AWS services and causing a chain reaction that you can leverage to build highly scalable infrastructures.

Let's explore one of the use cases I have encountered in my career. We had an application receiving hundreds of PDF files per hour, each of which could be of any size from a single page to hundreds. We needed to split each file into many JPG images, one per page. Traditionally, I would have used ghostscript on one or more very powerful dedicated servers and uploaded the results to a bucket.

By leveraging events we ended up with this solution:

  • Upload the source PDF file to an S3 bucket. The job of the server ends here.
  • S3 sends an ObjectCreated event, triggering a Lambda function that splits the PDF into many single-page PDF files and uploads each new page to a second bucket (a minimal handler sketch follows this list).
  • On the second bucket, S3 sends an event for every incoming page, triggering a second Lambda function that converts the one-page PDF to JPG and uploads it to a third bucket.
  • The third bucket is configured to trigger an SNS notification informing the server that processing for a given page has finished.
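
For illustration, this is roughly what the skeleton of such a Lambda handler looks like in Python; the output bucket name and the split_pdf helper are hypothetical placeholders for the real processing logic:

```python
import urllib.parse
import boto3

s3 = boto3.client("s3")
OUTPUT_BUCKET = "pdf-pages-bucket"  # hypothetical second bucket

def handler(event, context):
    """Triggered by an S3 ObjectCreated event."""
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])

        # Fetch the newly uploaded PDF.
        body = s3.get_object(Bucket=bucket, Key=key)["Body"].read()

        # split_pdf is a hypothetical placeholder for the page-splitting logic.
        for page_number, page_bytes in split_pdf(body):
            s3.put_object(
                Bucket=OUTPUT_BUCKET,
                Key=f"{key}/page-{page_number}.pdf",
                Body=page_bytes,
            )
```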

This is just an example, but you can see how these events can be used to build complex architectures. Another use case can be transcoding a large video to make it available on many devices.

Static Website Hosting

As listed in the common use cases, you can serve a static website - that is, any website that can work 100% using only HTML, CSS and JS - from an S3 bucket.

To do so you need a bucket named exactly like your website, e.g. www.example.com. Once you have it, use your DNS provider to point the www subdomain to the CNAME www.example.com.s3-website-<aws-region>.amazonaws.com.

When you upload these static assets to S3, make sure to grant public read access so that anyone can download the objects. Once you have uploaded the entire website, you can enable Static Website Hosting: you have to choose an Index document (the homepage) and an Error document (Page Not Found - 404).
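
The same configuration can be applied programmatically; a minimal boto3 sketch, assuming the website files are already public and the bucket/document names are just examples:

```python
import boto3

s3 = boto3.client("s3")

# Turn the bucket into a static website endpoint.
s3.put_bucket_website(
    Bucket="www.example.com",  # the bucket name must match the domain
    WebsiteConfiguration={
        "IndexDocument": {"Suffix": "index.html"},
        "ErrorDocument": {"Key": "404.html"},
    },
)
```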

Going beyond Static Websites

More than once, when talking with a friend or colleague about this feature, I received this answer: what is the purpose of a completely static website in replace-with-current-year? You will always need some kind of backend processing.

That is completely true, but serving your website only from static files does not mean not having a backend at all. While at first this may sound like an oxymoron, it's actually very doable, and you are probably doing it already if you are one of those Single-Page-Application developers.

Imagine this: you can build the backend of a web application using AWS Lambda functions, paying nearly $0 until your website starts growing like crazy, and serve the frontend part from S3, where carefully written JavaScript will interact with the Lambda-provided APIs.

Congratulations, you have an extremely scalable and almost free hosting for your SPA website. Consider adding CloudFront to distribute your static files across the globe if needed.

Bucket Logging

S3 is excellent for storing server logs, thanks to its storage classes that make it ridiculously cheap. But S3 can also generate logs of its own, which may also be required for compliance and auditing purposes.

When you activate logging on an S3 bucket, it will record every API request (downloads, uploads and any other operation), along with the originating IP address.

And guess what, S3 logs are stored in S3 itself. You can choose a secondary bucket, or store them in the same bucket, with a prefix (or directory?).

Object Encryption

The AWS Cloud provides a high level of security. If used correctly, by setting strict access control policies, you can rest assured that no unauthorized party will access your data.

Still you may want to further secure your data by applying encryption for data at rest. This can be for regulatory or compliance purposes, or you may simply not trust Amazon employees enough. *

When using encryption on S3 you have to decide where it will be applied: on the client or on the server side.

Client-Side Encryption

The first naive approach is to handle all of the encryption yourself. This means that you encrypt your binary blobs before sending them to AWS, and decrypt them accordingly when you download them. For the purpose of AWS Certifications, this is called Client Side Encryption.

This solution imposes some cost on your side, both in terms of development and of actual hardware. A clear disadvantage is that you can no longer simply open a URL to download an object, as you would get a meaningless blob of random-looking bytes.

This also means that you have to manage keys and, most importantly, key rotation.

The only cases in which you want to use CSE are legal and compliance obligations, or data that you don't want even state-level actors to read. Let's say the FBI asks AWS to access an object in your bucket. With SSE, Amazon could potentially help the FBI and find a way to decrypt your data. With CSE, Amazon never comes into contact with the encryption keys, which means that your data is safer.

Server-Side Encryption (SSE)

If you decide to follow the easy path of SSE, you have multiple choices regarding the Keys.

You can use SSE-S3, which is a completely managed solution. You just enable it on a bucket and S3 takes care of the rest. AWS encrypts each file with a different key, which is in turn encrypted with a master key. Master keys are rotated monthly.

Alternatively, SSE-KMS provides a good balance of security and control. You generate and rotate master keys in the AWS Key Management Service and S3 uses the provided master key as in SSE-S3. By using KMS you gain features like audit logs of key usage and of failed access attempts.

Finally, with SSE-C (Customer-provided key), you generate a key yourself and provide it to AWS. You retain full control over the keys and only let S3 perform the encryption.

All server-side encryption operations use the AES-256 algorithm.
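
A minimal sketch of requesting SSE at upload time with boto3 (the bucket, keys and KMS key alias are hypothetical):

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # hypothetical bucket name

# SSE-S3: let S3 manage the keys entirely.
s3.put_object(Bucket=bucket, Key="secret.txt", Body=b"data",
              ServerSideEncryption="AES256")

# SSE-KMS: encrypt with a customer-managed KMS master key.
s3.put_object(Bucket=bucket, Key="secret-kms.txt", Body=b"data",
              ServerSideEncryption="aws:kms",
              SSEKMSKeyId="alias/my-app-key")  # hypothetical key alias
```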

* Apart from adhering to compliance requirements and laws, it is very likely that encrypting data at rest on AWS S3 provides no real protection from feasible real-world attacks against an AWS datacenter. You can read more about the question on the Security StackExchange.

Bucket Versioning

Another advanced feature of buckets is versioning. When versioning is enabled on a bucket, updating an object does not delete the previous version, which is kept hidden. By default only the latest version is visible, but you can restore an object to a previous version.

Bucket versioning is also a nice protection against accidental deletion, since it preserves all of the deleted object's versions, allowing you to recover it. You will pay for the storage used by all of your versions; keep this in mind before blindly enabling it.

You can use Lifecycle management to manage old versions expiration to prevent storage pricing from exploding.

Once you enable versioning on a bucket it cannot be removed. It may be suspended, which means that new updates will not preserve the previous copy, but already versioned objects will keep their old versions.
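
A small boto3 sketch of enabling versioning and inspecting an object's history (bucket and key names are hypothetical):

```python
import boto3

s3 = boto3.client("s3")
bucket = "my-example-bucket"  # hypothetical bucket name

# Enable versioning (can later only be suspended, never removed).
s3.put_bucket_versioning(Bucket=bucket,
                         VersioningConfiguration={"Status": "Enabled"})

# List all versions kept for a given key.
versions = s3.list_object_versions(Bucket=bucket, Prefix="reports/2017.pdf")
for v in versions.get("Versions", []):
    print(v["VersionId"], v["IsLatest"], v["LastModified"])
```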

Cross-Region Replication

The new kid on the block among S3 features is Cross-Region Replication. While at first it may look like a backup feature to save yourself from the 10^-9 % chance of file loss, it is actually not. The intended use cases for CRR are:

  • bringing your content closer to your users, much like a CDN;
  • obeying data management laws that require a second copy of the data to be kept far away from the first (multi-AZ replication within one region is thus not enough).

When using CRR, you must create a new, empty bucket in the target region. You will pay for storage in both buckets, as well as for data transfer between the two.

To use CRR you must enable Versioning on the source bucket first.

Object Tags

You can use S3 Object Tags to categorize your objects into one or more groups. When you tag objects, you receive detailed cost reports grouped by tag.

You can also specify Lifecycle Policies for objects carrying a given tag, which allows you to set Storage Class transitions and expirations with complex patterns.
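
Tagging an existing object is a single API call; a minimal sketch with hypothetical names and tag values:

```python
import boto3

s3 = boto3.client("s3")

# Attach tags to an existing object; Lifecycle rules and cost reports
# can then be scoped to these tags.
s3.put_object_tagging(
    Bucket="my-example-bucket",           # hypothetical bucket name
    Key="logs/2017-06-01.gz",             # hypothetical key
    Tagging={"TagSet": [
        {"Key": "project", "Value": "blog"},
        {"Key": "retention", "Value": "short"},
    ]},
)
```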

Object tags are a paid feature, although very cheap.

Advanced features

If you were thinking that S3 was just a Dropbox alternative for developers, at this point of the article you will have changed your mind. Here are some even more advanced features that have somewhat limited use cases, but may be required for the AWS Certified Solutions Architect exam. And they definitely make S3 different from any other storage you've heard of.

Dual-Stack Endpoints (IPv6)

Until a few months ago, S3 endpoints were only available over IPv4. This may have created a few limitations for your IPv6-only servers.

Luckily, AWS released the new Dual-Stack Endpoints. These endpoints are available over both IPv4 and IPv6, which means you can use them with both your old and new network stacks. To use them, change the endpoints you are using to match:

_bucketname_.s3.dualstack._aws-region_.amazonaws.com

Transfer Acceleration

Are you uploading very large files from the other side of the globe? You may appreciate this somewhat expensive feature. Instead of establishing a connection over 15,000 km to the region, Transfer Acceleration leverages the CloudFront global network: your upload is received by a nearby CloudFront edge location and routed over the blazing-fast AWS internal network to the target S3 bucket.
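
Acceleration has to be enabled on the bucket, and clients must opt in to the accelerated endpoint; a minimal boto3 sketch (the bucket and file names are hypothetical):

```python
import boto3
from botocore.config import Config

# Enable Transfer Acceleration on the bucket (one-time setup).
boto3.client("s3").put_bucket_accelerate_configuration(
    Bucket="my-example-bucket",  # hypothetical bucket name
    AccelerateConfiguration={"Status": "Enabled"},
)

# Then create a client that uses the accelerated endpoint for transfers.
s3_accel = boto3.client("s3", config=Config(s3={"use_accelerate_endpoint": True}))
s3_accel.upload_file("huge-file.bin", "my-example-bucket", "uploads/huge-file.bin")
```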

BitTorrent support

This little gem looks more like an easter egg than a feature; most developers I've talked with have never heard of it. If you take any public S3 object URL and append ?torrent to it, you will download a .torrent file to be opened in your favorite BitTorrent client.

The file will be downloaded in pieces as per the BitTorrent protocol. The best part is that once you have completed the first download, you automatically become part of the P2P swarm and help other clients download the file faster.

This can be pretty useful if you distribute large public files. The only example I can think of is the Ubuntu website that distributes ISO images for the latest version.

Object MFA Delete

An advanced feature of S3 is Multi-Factor Authenticated Delete. Its purpose is to prevent accidental deletion of objects at any cost. The cost here is that, to delete a file, a time-based one-time password must be provided; you would usually generate it with a smartphone app or, better, a hardware token.

If you are using the APIs to delete files, the API client must ask the user for the TOTP and pass it to the S3 API server.

This feature is also enabled on a per-bucket basis.

Requester Pays

Usually the owner of a bucket pays for all the costs associated with it, be it storage, transfer or API requests. With Requester Pays you can offload the price of transfers and requests to the client making the request.

This implies that all objects in your bucket are private (if they were public, there would be no way to identify the requester) and that the client downloading the data must be an Authenticated AWS user, which somewhat restricts the utility of the feature itself.

Recap

  • The Simple Storage Service provides a fast, distributed and robust solution to store your binary blobs, which you humans call files, but should really call objects.
  • Objects are stored into buckets.
  • A bucket has a name, which must be globally unique in a namespace shared with all AWS accounts and DNS-friendly. The name of the bucket is used to build the DNS endpoint to access objects in the bucket.
  • Objects are referenced by a key, which must be unique in the bucket. Keys are often constructed to simulate a directory structure, which is totally fictional and has no influence on how S3 actually stores the data.
  • Objects are replicated into many AZs in the same Region to achieve eleven 9s of durability.
  • Multipart Uploads allow you to parallelize uploads and to recover from interrupted partial uploads. You should use them for objects larger than 100 MB. Remember to set a Lifecycle Policy to abort incomplete uploads.
  • Range GETs allow you to stream and download parts of a larger file.
  • Pre-Signed URLs give you the possibility to share a private file with a third party in a secure, time-framed fashion.
  • Event notifications can be triggered on a number of S3 events. You can use them to trigger Lambda functions, SNS notifications or SQS jobs.
  • Use Object Tags to group logically related objects. Tags allow you to receive detailed cost reports and to set custom Lifecycle Policies.

Limits

  • Your account cannot own more than 100 buckets globally. This limit can be increased by contacting AWS Support.
  • An object cannot be larger than 5TB.
  • There's no limit on the number of files in a bucket, providing you with virtually unlimited storage (as long as you can split the data into objects no larger than the largest hard disk you've ever seen).
  • Object keys cannot be longer than 1024 bytes.
  • You must use Multipart Uploads for objects larger than 5GB.

Access Control

  • Buckets can be configured to allow access from third-party AWS Accounts.
  • The bucket owner is automatically granted all operation permissions.
  • Use classic IAM policies to allow or deny specific user operations.
  • Use Bucket Policies to specify fine-grained access control rules, for your own and third-party accounts, as well as anonymous access.
  • Use Object ACLs for coarse-grained access control at the object level.

Storage Classes

  • Storage classes provide you with the choice of sacrificing some requirements to greatly reduce storage costs.
  • Objects default to the Standard storage class, with all S3 features.
  • The Infrequent Access (IA) storage class is suited to cold data, with a lower price per GB but a higher price to read the data.
  • Glacier is a service on its own that is used for long term storage of cold data. Access times can be up to 12 hours, but you pay less than 20% compared to Standard.
  • Reduced Redundancy Storage (RRS) sacrifices durability for a lower price. Only for non-essential, reproducible data. Except it really costs more now...
  • You can use Object Lifecycle Management to setup automatic transitions of objects between Storage Classes and to expire them.

Static Website Hosting

  • Provides an easy way to distribute a static website without having to maintain a server.
  • The bucket name must match the domain name and all files must be Publicly accessible.
  • You can actually have a dynamic website using static JS making AJAX calls to dynamic APIs. Bonus points if the APIs are served by AWS Lambda.

Logging

  • Logging can be enabled on a per-bucket basis. Logs can be used to audit access to a bucket.
  • Log files can be stored in the bucket itself or another bucket.
  • Logs are provided asynchronously with a slight delay.

Bucket Versioning

  • Keeps a history of object changes and deletions;
  • Required to use Cross-Region Replication;
  • Once enabled, cannot be removed, only suspended;
  • Use Lifecycle Management to expire old versions.

MFA Delete

  • Enable it for data that must never be deleted by accident.
  • Requires a one-time password to confirm deletion.

Cross Region Replication

  • Maintain two copies of the same data in separated geographic locations.
  • Can be used to bring web content closer to your users.
  • Maintain a backup far away from the original, protecting it from a natural disaster in a single geographic location and obeying certain data management laws.
  • Costs will probably skyrocket.