Cecy Correa

Basics of Deploying a Static Site to S3

1 January 2018

I started working on the web back in the days when deploying meant uploading files to a server via an FTP client. I now use things like Heroku for Rails apps and GitHub Pages for static sites, but I wanted to venture into using Amazon S3 for hosting my sites from now on, like a big girl.

Hosting a site on S3 is pretty easy, but there are some gotchas to look out for if you’ve never done it before. This guide should give you what you need to get started.


There’s some initial setup you need to get out of the way:

  1. Install the AWS CLI
  2. Set up an IAM user with access to S3
  3. Configure the AWS CLI with that user’s key and secret

I used Homebrew to install the AWS CLI:

brew install awscli
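
Once the install finishes, you can confirm the CLI is available by asking it for its version (your exact version number will vary):

aws --version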

Best practice: You will need an AWS key and secret to use the AWS CLI. You do not want to use your root user key and secret! It is a best practice to delete any root key and secret credentials you might have and set up an IAM user instead.

Creating an IAM user

  1. From the AWS Dashboard, search for and click on “IAM” to go to the IAM Dashboard
  2. Click on “Users”
  3. Click on “Add User”
  4. Give the user a name. A good IAM name is one that is representative of the access the user has. For example, I named my user “S3-user”, meaning this user has access to S3. It is a best practice when using IAM to only give the user as much permission as it needs to do a task. No more, no less.
  5. Under “Access Type” check “Programmatic access”
  6. Click on “Next: Permissions”
  7. On this screen, you select the permissions Group the user will belong to. I am assuming here that a new permissions Group needs to be created. If you already have a permissions Group, select that instead.
  8. Click on “Create Group”
  9. Give the group a descriptive name. I named mine “S3Group”.
  10. Under “Policy Type”, search for “S3” and select “AmazonS3FullAccess” (we want to read and write)
  11. The Group should now appear in a list at the bottom of the “Set permissions for user” page.
  12. Put a checkmark next to your newly created Group
  13. Click “Next: Review” then “Create user”
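
Side note: if you already have CLI credentials with permission to manage IAM (a bit of a chicken-and-egg situation the first time around, which is why I walked through the console above), roughly the same setup can be done from the terminal. Here is a sketch using the same “S3-user” and “S3Group” names:

# create the user and the group
aws iam create-user --user-name S3-user
aws iam create-group --group-name S3Group

# give the group full S3 access, then add the user to it
aws iam attach-group-policy --group-name S3Group --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam add-user-to-group --user-name S3-user --group-name S3Group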

Configuring your AWS CLI

Now that you have an IAM User with access to S3, we need to create a key and secret to configure our AWS CLI tool. These credentials will allow us to upload data to an S3 bucket (which we will create later).

If you’re following along with the tutorial, you should be in the Users tab of the IAM Dashboard (if not, navigate to it).

  1. Click on the User you just created.
  2. From the User summary page, click on the “Security credentials” tab
  3. Under “Access keys” click on “Create access key”.
  4. A popup will appear with your key and secret. Keep this popup window open for now.
  5. In your terminal, type “aws configure”
  6. You will be prompted to enter your AWS Key and Secret. Copy and paste these values from the open popup, then close the popup.
  7. For “region”, press enter to select the default region (us-east-1). This should be good if you’re in the US. If you’re not in the US, enter your region.
  8. For “default output format” just press enter to select the default (json)
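
For reference, “aws configure” just writes these values to a couple of plain text files in your home directory. If you ever need to check or edit them, they look roughly like this (keys redacted here):

    # ~/.aws/credentials
    [default]
    aws_access_key_id = AKIA...
    aws_secret_access_key = ...

    # ~/.aws/config
    [default]
    region = us-east-1
    output = json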

Great! You have just set up a User that can access S3 and added those credentials to your AWS CLI tool. We are now ready to deploy our site.

Deploying your static files

Create a bucket

Here’s a bit of a gotcha. If you’re using a custom domain you got from Amazon Route 53, the name of your bucket must match the name of your website exactly.

For example, for this website, “cecycorrea.com”, I must name my bucket “cecycorrea.com”, yes, even with the TLD included in the bucket name! This is important because: a) you can’t rename a bucket later, and b) if you do not name your bucket to match your Route 53 domain exactly, you won’t be able to link your bucket to your Route 53 domain.

Let’s create the bucket in S3:

aws s3 mb s3://your-website.com

NOTE: Your bucket name must be unique! Bucket names are shared across all users of the system. If your bucket name is not unique, you will see this message:

An error occurred (BucketAlreadyExists) when calling the CreateBucket operation: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.

If you’re using the naming convention I mention above (naming your bucket the same as your Route 53 domain), chances are you’re not going to run into a “Bucket Already Exists” error. But if you’re just creating a test bucket to try things out, keep in mind that bucket names must be unique.

Best practice: Since bucket names must be unique across all users and regions, it is typically a best practice to use dates or even a timestamp when creating a bucket name.
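
For example, if you just want a throwaway bucket to play with, you could bake a timestamp into the name (a sketch; the “my-test-bucket” prefix is made up):

aws s3 mb "s3://my-test-bucket-$(date +%Y%m%d%H%M%S)"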

Now run:

aws s3 ls

This should list all the buckets you have. You should see the bucket we just created in this list.

Uploading content

Now in your terminal, navigate to the directory of the website you want to upload.

You can upload the entire directory of your website with the following command:

aws s3 cp . s3://your-website.com --recursive

If you’re using a static site generator like Jekyll, you’ll want to upload the contents of the “_site” directory. If you’re using something like Gatsby, you’ll want to copy the “public” directory it builds, not your “src” directory.

The command is pretty simple. We use “cp” to copy. The “.” refers to your current directory, and then we specify the name of the bucket to upload to. Lastly, we use the “--recursive” flag to copy all of the contents of your current directory, including the files inside other folders.
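
One thing worth knowing for later deploys: “aws s3 sync” only uploads files that are new or have changed, and its “--delete” flag removes files from the bucket that you’ve deleted locally. A sketch, using the same bucket:

# upload only new/changed files; --delete removes remote files that no longer exist locally
aws s3 sync . s3://your-website.com --delete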

Wrapping up your deploy

Go to your AWS dashboard and navigate to S3 to view all your buckets. Click on your newly created website bucket. You should see all of the files you just uploaded listed there.

If you try to access the files though, you won’t be able to. This is because, by default, all objects in S3 are set to private.

There are a couple of things we need to do to make our assets viewable to the public as a website: a) tell S3 that the bucket is a website, and b) set up a policy so the objects in it are publicly viewable.

Set up your S3 bucket as a website

Setting up your S3 bucket as a website is easy; simply run this command from your terminal:

aws s3 website s3://your-website.com --index-document index.html

The “website” command marks the S3 bucket specified in the command as a website. The flag “--index-document” tells S3 which file is the entry point of your site. Obviously this may vary depending on your site.
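
If your site has a custom error page, the same command also accepts an “--error-document” flag (the “404.html” file name here is just an example):

aws s3 website s3://your-website.com --index-document index.html --error-document 404.html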

Alternatively, you can do this via the AWS Dashboard by clicking on your bucket, then clicking on the “Properties” tab, then clicking on “Static website hosting”.

Set up a Public policy for your bucket

Lastly, you need to set a policy for your bucket. A policy essentially sets up the permissions for resources in AWS. There are different types of policies, and I’m not going to go into them in detail in this guide. What you need to know is that you need to set up a “Public” policy for your entire bucket so that all objects in the bucket are viewable on the internet.

To do this, go to your AWS console, make sure you are on your website bucket, then click on “Permissions”, then click on “Bucket Policy”.

Copy and paste the following in its entirety into the Bucket Policy Editor text window (swapping “your-bucket-name.com” for your actual bucket name), then click Save.

    "Version": "2012-10-17",
    "Statement": [
            "Sid": "PublicReadGetObject",
            "Effect": "Allow",
            "Principal": "*",
            "Action": "s3:GetObject",
            "Resource": "arn:aws:s3:::your-bucket-name.com/*"

You’ll notice a “Version” field. DO NOT CHANGE THIS. This refers to the AWS policy version, not the version of your bucket.

The version must be “2012-10-17”, as that is the version when variables were introduced, so it is a good default to use.
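
If you’d rather stay in the terminal for this step too, you can save the policy above to a file (say, “policy.json”) and apply it with the lower-level “s3api” command:

# apply the public-read policy saved in policy.json to the bucket
aws s3api put-bucket-policy --bucket your-website.com --policy file://policy.json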

Optional: Linking your bucket to a Route 53 domain

If you are using a custom domain you got from Route 53, linking your bucket to your DNS should be pretty easy, but it can trip you up if you don’t know what you’re looking for.

As I stated earlier, your bucket name must match your domain name in Route 53. This means if your domain is “puppyparty.com”, your bucket name must be “puppyparty.com” exactly, otherwise, AWS will not be able to recognize the bucket name and link it to your domain.

If you named your bucket name correctly, you should be able to:

  1. Navigate to the Route 53 dashboard in AWS
  2. Click on “Hosted Zones”
  3. Click on your domain, then click “Create Record Set”
  4. Leave “Name” blank, assuming you want the root of your domain to point to the root of your bucket. If you want your bucket to live at a subdomain instead, enter that for “Name”.
  5. Under “Alias” click “Yes”
  6. Under “Alias Target” you should see your bucket name listed under the available S3 website endpoints. Select it.
  7. Click “Create”

And you’re done!

It will take some time for the DNS change to propagate, depending on the TTL (Time to Live) you set when you created your DNS record.
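
While you wait on DNS, you can hit the S3 website endpoint directly to make sure the bucket itself is serving your files. The exact hostname depends on your region; for us-east-1 it typically looks like this:

curl -I http://your-website.com.s3-website-us-east-1.amazonaws.com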

Enjoy your new S3 hosted website!