A crash course on AWS

Ramp up on AWS in minutes via a lightning quick, hands-on crash course where you learn by doing.

Yevgeniy Brikman
Gruntwork
Jul 11, 2022 · 16 min read

This is part 3 of the Docker, Kubernetes, Terraform, and AWS crash course series. In part 2, you learned how to run Kubernetes locally, which is great for learning and testing. This post will teach you the basics of AWS, including how to run Kubernetes in AWS for production use cases, by going through a lightning quick crash course where you learn by doing. This course is designed for newbies, starting at zero, and building your mental model step-by-step through simple examples you run on your computer to do something useful with AWS — in minutes. If you want to go deeper, there are also links at the end of the post to more advanced resources.

  1. What is AWS (the 60 second version)
  2. Create an AWS account
  3. Deploy a virtual server
  4. Deploy a Kubernetes cluster
  5. Deploy apps into your Kubernetes cluster
  6. Further reading
  7. Conclusion

What is AWS (the 60 second version)

Amazon Web Services (AWS) is an on-demand cloud computing platform. Instead of building your own data center—which requires a huge amount of time and expense (both up-front and on-going) to set up servers, networking, cooling, etc—you can run your software in the cloud: that is, in one of the hundreds of data centers around the world managed by a public cloud provider such as AWS.

AWS Global Infrastructure

The most basic way to use AWS is to rent virtual servers: in just a few clicks in the AWS console, you can spin up a Linux or Windows server in the Elastic Compute Cloud (EC2), connect to it over the network (e.g., via SSH or RDP), and use that server to run whatever software you want. However, AWS goes far beyond just replacing servers, as it also offers dozens of higher-level services: for example, instead of running your own load balancer, you can use an Elastic Load Balancer; instead of building your own data stores, you can use RDS, Redshift, or ElastiCache; instead of building your own file stores, you can use S3, EBS, or EFS; instead of writing your own machine learning algorithms, you can use Rekognition, Textract, or Transcribe; instead of building your own Kubernetes cluster, you can use EKS; and so on.

All of these services offer pay-as-you-go pricing. For example, EC2 allows you to rent virtual servers for as little as $0.0059 per hour; there’s also a generous free tier. Moreover, most services are elastic, which means you can scale up or down quickly: for example, if traffic spikes, you can spin up hundreds of EC2 instances in just a few minutes; when traffic dies down, you can scale back down to just a handful of instances to save money. So not only is AWS cheaper than building your own data center, but the powerful high-level services and elasticity let you go far beyond what most companies can do in their own data centers.

AWS is the leader in cloud computing, as per Gartner.

AWS is widely considered the leader in cloud computing: they have the biggest market share, the most mature offering (AWS was the first modern cloud provider), the most complete offering (more services with more features than any other provider), the most secure offering, and the largest community. So it’s well worth your time to learn how to use it. Let’s get started!

Create an AWS account

If you don’t already have an AWS account, head over to https://aws.amazon.com and sign up. When you first register for AWS, you initially sign in as the root user. This user account has permissions to do absolutely anything in the account, so from a security perspective, it’s not a good idea to use the root user on a day-to-day basis. In fact, the only thing you should use the root user for is to create other user accounts with more-limited permissions, and then switch to one of those accounts immediately.

To create a more-limited user account, you will need to use the Identity and Access Management (IAM) service. IAM is where you manage user accounts as well as the permissions for each user. To create a new IAM user, go to the IAM Console, click Users, and then click the Add Users button. Enter a name for the user and make sure both “Access key — Programmatic access” and “Password — AWS Management Console access” are selected (note, AWS occasionally makes changes to its web console, so what you see may look slightly different than the screenshots in this blog post).

Click the Next button. AWS will ask you to add permissions to the user. By default, new IAM users have no permissions whatsoever, and cannot do anything in an AWS account. To give your IAM user the ability to do something, you need to associate one or more IAM Policies with that user’s account. An IAM Policy is a JSON document that defines what a user is or isn’t allowed to do. You can create your own IAM Policies or use some of the predefined IAM Policies built into your AWS account, which are known as Managed Policies.
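For example, here is a minimal IAM Policy (a sketch for illustration, not one of the built-in Managed Policies) that allows a user to list the S3 buckets in the account and nothing else:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListAllMyBuckets",
      "Resource": "*"
    }
  ]
}
```

Each Statement grants (or denies, with "Effect": "Deny") a set of Actions on a set of Resources; a user's effective permissions are the union of all the policies attached to them.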

To run the examples in this blog post, the easiest way to get started is to add the AdministratorAccess Managed Policy to your IAM user (search for it and click the checkbox next to it):

Add AdministratorAccess

Click Next a couple more times and then the Create user button. AWS will show you the security credentials for that user, which include:

  1. Web Console credentials: console URL, username, and password.
  2. Command line credentials: Access Key ID, Secret Access Key.

You must save these immediately because they will never be shown again, and you’ll need both sets of credentials later on in this tutorial. Remember that these credentials give access to your AWS account, so store them somewhere secure (e.g., a password manager such as 1Password, LastPass, or macOS Keychain) and never share them with anyone.

After you’ve saved your credentials, click the Close button. Log out of your AWS account as the root user and log back in as the IAM user by using the console URL, username, and password you just saved.

Deploy a virtual server

To get a feel for using AWS, let’s start with one of the most popular services, EC2, and use it to launch a virtual server.

A note on cost: The examples in this part of the tutorial use the AWS free tier, so as long as you haven’t used up your free tier allowance, and you clean up as instructed, this shouldn’t cost you anything.

First, head over to the EC2 Console, and in the top-right corner, pick Ohio (us-east-2) as the AWS region to use (this is a relatively new region that is convenient for this tutorial):

Next, create a Key Pair, which will allow you to SSH to your virtual server. Head over to the Key Pairs page in the EC2 Console and click Create Key Pair:

Give the Key Pair a unique name and click Create Key Pair:

AWS will store the public key and prompt you to save the private key: make sure to save the private key in a secure location on your computer.
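As an aside, the same Key Pair could be created from the command line with the aws CLI (which you’ll install later in this post); the key name and region below are example values:

```shell
# Create a Key Pair in EC2 and write the private key to a local .pem file
# (key name and region are example values)
aws ec2 create-key-pair \
  --region us-east-2 \
  --key-name aws-learning-key \
  --query 'KeyMaterial' \
  --output text > aws-learning-key.pem

# SSH refuses private keys that other users can read, so lock down permissions
chmod 400 aws-learning-key.pem
```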

Next, head over to the instances page in the EC2 Console and click the Launch Instances button:

On the following page, configure your EC2 instance as follows (note: the AWS Console UI recently collapsed all this content onto a single page; if you’re still using the older Console UI, it may be spread across multiple pages):

Add a Name tag. This is useful so you can tell at a glance what this server is being used for.

Select Ubuntu as the AMI. The AMI (Amazon Machine Image) specifies the operating system and other software that will be installed on your virtual server. For this tutorial, I recommend picking the Ubuntu AMI from Canonical, which is free to use, and gives us a nice Linux operating system to experiment with:

Select t2.micro as the instance type. The instance type determines the CPU, memory, networking, and other (virtualized) hardware that will be available on your server, as well as the price you will pay for that server (see this page for more info). For this tutorial, I recommend picking t2.micro, which is part of the AWS free tier (in fact, it may already be selected for you by default):

Pick the Key Pair you created earlier. AWS will add the public key for your selected Key Pair to the authorized_keys file on your server, so you’ll be able to SSH to that server with your private key:

Create a security group that allows SSH traffic. By default, EC2 instances don’t allow any inbound or outbound network traffic. A security group is a firewall that controls what network traffic can go in and out of your server. For this tutorial, I recommend creating a new security group for your server that allows inbound traffic on port 22 (the default SSH port) from all IPs (0.0.0.0/0 in CIDR notation). Note that for production use cases, you’ll want to lock down SSH access much more, but for this simple, short-lived learning exercise, this will suffice, and make it easy for you to SSH to your server from your own computer:
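For reference, an equivalent security group could also be created outside the launch wizard with the aws CLI (a sketch; the group name and region are example values, and this assumes your account’s default VPC):

```shell
# Create a security group in the default VPC (name/region are example values)
aws ec2 create-security-group \
  --region us-east-2 \
  --group-name aws-learning-ssh \
  --description "Allow inbound SSH from anywhere"

# Open port 22 (SSH) to all IPs, i.e., 0.0.0.0/0 in CIDR notation
aws ec2 authorize-security-group-ingress \
  --region us-east-2 \
  --group-name aws-learning-ssh \
  --protocol tcp \
  --port 22 \
  --cidr 0.0.0.0/0
```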

Leave all other settings at their defaults and click Launch Instance. On the next page, click View All Instances and you should see your new server booting up:

Give it a minute or so to boot, then click the checkmark to the left of the server, and click the Connect button. You should see SSH instructions:

Copy the example command, open a terminal, go into the folder where you saved the private key from your Key Pair, and run the command (note, if you’re on Windows, you may need to use an SSH client such as PuTTY):

ssh -i <PRIVATE KEY> ubuntu@<IP ADDRESS>

Your SSH client will tell you it can’t establish the authenticity of the host and prompt you if you want to continue. Enter yes and you should see something like this:

Welcome to Ubuntu 22.04.2 LTS!
ubuntu:~$

Alright, you are now connected to a virtual server running in AWS! Try some commands such as ls or ps, or use apt-get install to install and run whatever software you want (the ubuntu user has passwordless sudo access).
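For example, the tree utility is not installed on Ubuntu by default, but you can install and run it in seconds (run these on the EC2 instance over SSH, not on your own computer):

```shell
# Update the package index, then install and run the tree utility
sudo apt-get update
sudo apt-get install -y tree
tree /etc/ssh
```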

When you’re done experimenting with your server, to clean up (and ensure you don’t accidentally rack up an AWS bill), go back to the instances page in the EC2 Console, click the checkbox to the left of your instance, click the Instance State button, and select Terminate Instance:
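If you’d rather script the cleanup, the same termination can be done with the aws CLI (a sketch; this assumes the instance has the Name tag aws-learning from earlier and that the CLI is configured):

```shell
# Find the running instance by its Name tag (tag value is an example)
INSTANCE_ID=$(aws ec2 describe-instances \
  --region us-east-2 \
  --filters "Name=tag:Name,Values=aws-learning" \
            "Name=instance-state-name,Values=running" \
  --query 'Reservations[].Instances[].InstanceId' \
  --output text)

# Terminate it; EC2 stops billing once the instance enters the terminated state
aws ec2 terminate-instances --region us-east-2 --instance-ids "$INSTANCE_ID"
```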

And there you go, you’re now able to fire up and shut down a virtual server in the cloud in minutes!

Deploy a Kubernetes cluster

You’ve now seen how you can launch basic AWS services with a few clicks. Let’s now try something a bit more complicated and launch a Kubernetes cluster. If you’re not familiar with Kubernetes, make sure to check out part 2 of this series for a crash course. The easiest way to run Kubernetes in AWS is to use Amazon’s managed Elastic Kubernetes Service (EKS). Let’s give it a shot.

A note on cost: EKS is not part of the AWS free tier. Moreover, the worker nodes you’ll deploy in this tutorial are also not part of the AWS free tier. Therefore, this part of the tutorial may cost you money, albeit, not too much: as of July, 2022, EKS costs $0.10 per hour, and the two worker nodes you’ll launch in this tutorial are about $0.01 per hour each, so even if you run this code for 4 hours, it’ll still cost you less than 50 cents.

The first step is to create two IAM roles: one to give the Kubernetes cluster’s control plane permissions to make certain API calls in your AWS account and one to give the cluster’s worker nodes permissions to make a different subset of API calls. To create the IAM role for the control plane, head over to the IAM Roles Page and click Create Role:

Pick “AWS service” as the Trusted Entity Type, select “EKS” as the Use Case in the drop-down, and then pick the “EKS — Cluster” radio button:

Click the Next button a couple times, enter a name for this IAM role such as aws-learning-eks-cluster-role, and then click Create Role:

Now, go through the same process a second time to create an IAM role for the worker nodes. This time, pick “AWS service” as the entity type and “EC2” as the use case and click Next:

On the Add Permissions page, you will need to add three policies:

  • AmazonEKSWorkerNodePolicy
  • AmazonEC2ContainerRegistryReadOnly
  • AmazonEKS_CNI_Policy

Working with this page is a little awkward: you need to enter each of these policy names into the filter box, hit enter, click the checkbox to the left of the policy, then click Clear Filters, and repeat the process with the next policy name. In the end, you should have 3 policies selected:

Click Next, enter a name for this IAM role such as aws-learning-eks-nodes-role and click Create Role:
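For reference, attaching those three policies is less fiddly from the command line; here is a sketch with the aws CLI, assuming the role already exists:

```shell
# Attach the three AWS Managed Policies the worker nodes need
# (role name is the one created in this tutorial)
for POLICY in AmazonEKSWorkerNodePolicy \
              AmazonEC2ContainerRegistryReadOnly \
              AmazonEKS_CNI_Policy; do
  aws iam attach-role-policy \
    --role-name aws-learning-eks-nodes-role \
    --policy-arn "arn:aws:iam::aws:policy/$POLICY"
done
```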

Once you’ve created these two IAM roles, you can move on to using EKS to fire up a fully-managed Kubernetes control plane for you. Head over to the EKS Console and click Add Cluster and then click Create:

Enter a name for the cluster such as aws-learning, pick the IAM role you created earlier for the control plane (aws-learning-eks-cluster-role) in the Cluster Service Role drop-down, and click Next:

Leave the networking and logging settings at their defaults, clicking Next a few more times, and then click the Create button to create the control plane:

It can take 5–10 minutes for AWS to spin up the control plane, so be patient, and wait until the Status switches from Creating to Active.
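If you don’t feel like refreshing the Console, the aws CLI (installed later in this post) can block until the control plane is ready:

```shell
# Poll EKS until the cluster status becomes ACTIVE, then return
aws eks wait cluster-active \
  --region us-east-2 \
  --name aws-learning
```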

Once the control plane shows Active, the next step is to deploy the worker nodes. EKS supports several different types of worker nodes: in this tutorial, you’ll use a Managed Node Group, where AWS deploys and manages a set of EC2 instances for you. Click the Compute tab and then click the Add Node Group button:

Enter a name for the node group such as aws-learning, pick the IAM role you created for worker nodes (aws-learning-eks-nodes-role), and click Next:

On the next page, use Amazon Linux 2 as the AMI, pick t3.small as the instance type, and set the cluster min/max/desired size to two. Ideally, we’d use t2.micro as the instance type, as it’s part of the AWS free tier, but the micro instance types are limited to just 4 IP addresses, all of which will be used up by Kubernetes system services (e.g., kube-proxy), leaving nothing to actually deploy any Pods, so the smallest instance type that works with EKS is t3.small:

Leave all other settings (including networking settings) at their defaults, click Next a few times, and then click Create to create the node group:

It can take 2–3 minutes for AWS to spin up the worker nodes, so be patient, and wait until the Status switches from Creating to Active.

Once the status is Active, you can start deploying apps into your Kubernetes cluster, as described next.

Deploy apps into your Kubernetes cluster

Let’s now deploy the same “Hello, World” app from part 2 of this series into the EKS cluster. The first step is to authenticate kubectl to the EKS cluster, rather than your local Kubernetes cluster running in Docker Desktop.

To do this, first, install the aws CLI. Next, you need to authenticate to AWS from the command line using the command line credentials (Access Key ID, Secret Access Key) you saved earlier. One of the easiest ways to do this is to configure those credentials using the environment variables AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY (see here for other ways to authenticate to AWS from the CLI). For example, here is how you do it in a Unix/Linux/macOS terminal:

$ export AWS_ACCESS_KEY_ID=(your access key id)
$ export AWS_SECRET_ACCESS_KEY=(your secret access key)

And here is how you do it in a Windows command terminal:

C:\> set AWS_ACCESS_KEY_ID=(your access key id)
C:\> set AWS_SECRET_ACCESS_KEY=(your secret access key)

A note on authentication: Initially, EKS grants access to the EKS cluster solely to the IAM entity who created it—and no one else. Therefore, when you first start using the cluster, you must authenticate on the command line as the exact same IAM entity you used in the AWS Web Console (after authenticating, you can grant access to additional users): e.g., if you authenticated as an IAM user when creating the EKS cluster in the Console, you must authenticate as that same IAM user on the command-line; if you used MFA in the Console, you must use MFA on the command-line; if you used SSO in the Console, you must use SSO on the command-line; and so on. If there is any difference in how you authenticate, the kubectl commands below won’t work, giving you an error like this: error: You must be logged in to the server (Unauthorized).
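A quick way to check which identity your command-line credentials map to is sts get-caller-identity; the Arn field in its output should match the IAM user you used in the Web Console:

```shell
# Print the account id, user id, and ARN of the currently-authenticated identity
aws sts get-caller-identity
```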

Now you can use the aws eks update-kubeconfig command to automatically update your $HOME/.kube/config file to authenticate to your EKS cluster. The basic form of the command is:

$ aws eks update-kubeconfig \
    --region <REGION> \
    --name <EKS_CLUSTER_NAME>

Where REGION is the AWS region you deployed into and EKS_CLUSTER_NAME is the name of your EKS cluster. At the start of the tutorial, you picked Ohio as the region, which uses the identifier us-east-2. Therefore, to authenticate to the EKS cluster called aws-learning, you would run:

$ aws eks update-kubeconfig \
    --region us-east-2 \
    --name aws-learning

To see if things are working, use the get nodes command to explore your EKS cluster:

$ kubectl get nodes
NAME                             STATUS   AGE     VERSION
xxx.us-east-2.compute.internal   Ready    3m24s   v1.22.9-eks
yyy.us-east-2.compute.internal   Ready    3m19s   v1.22.9-eks

If everything is working, you should see the two worker nodes you deployed in the node group.

You can now use the deployment.yml and service.yml from part 2 of this series to deploy a “Hello, World” app in your EKS cluster:

$ kubectl apply -f deployment.yml
deployment.apps/simple-webapp created

$ kubectl apply -f service.yml
service/simple-webapp created

After a few seconds, use the get deployments command to check on the status of the Deployment:

$ kubectl get deployments
NAME            READY   UP-TO-DATE   AVAILABLE   AGE
simple-webapp   2/2     2            2           3m

Next, run get pods to see the Pods:

$ kubectl get pods
NAME                             READY   STATUS    RESTARTS   AGE
simple-webapp-7b4cc85b88-r6m2l   1/1     Running   0          3m21s
simple-webapp-7b4cc85b88-xnb9c   1/1     Running   0          3m21s

As expected, there are two Pods running. Run get services to see the Service:

$ kubectl get services
NAME            TYPE           EXTERNAL-IP                       AGE
kubernetes      ClusterIP      <none>                            15m
simple-webapp   LoadBalancer   xxx.us-east-2.elb.amazonaws.com   3m

Copy the EXTERNAL-IP entry for simple-webapp and try it out using curl or a web browser:

$ curl xxx.us-east-2.elb.amazonaws.com
Hello world!

And there you go! Your Dockerized app is running in a fully-managed Kubernetes cluster in AWS, with a load balancer distributing traffic across two app replicas.

Once you’re done experimenting, make sure to clean up after yourself so AWS doesn’t keep charging you for the EKS cluster and worker nodes. First, terminate the worker nodes by going to the Compute tab in the EKS cluster, clicking the radio button to the left of your node group, and clicking the Delete button. Once that completes, click the Delete Cluster button at the top of the page:
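The same cleanup can be scripted with the aws CLI (the names and region are the ones used in this tutorial); note that the node group must be fully deleted before EKS will let you delete the cluster:

```shell
# Delete the worker nodes first...
aws eks delete-nodegroup \
  --region us-east-2 \
  --cluster-name aws-learning \
  --nodegroup-name aws-learning

# ...wait for the deletion to finish...
aws eks wait nodegroup-deleted \
  --region us-east-2 \
  --cluster-name aws-learning \
  --nodegroup-name aws-learning

# ...and only then delete the cluster itself
aws eks delete-cluster --region us-east-2 --name aws-learning
```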

Further reading

This crash course only gives you a tiny taste of AWS. There are many other things to learn: dozens of other AWS services (e.g., VPC, S3, ELB, RDS, etc.), AWS best practices (e.g., Well Architected, CIS AWS Foundations Benchmark, etc.), EKS features that you’ll need to use it in production (e.g., ingress controllers, secret envelope encryption, security groups, OIDC authentication, RBAC mapping, VPC CNI, kube-proxy, CoreDNS, etc.), and so on. If you want to go deeper, here are some recommended resources:

  1. Gruntwork Infrastructure as Code Library: Off-the-shelf, commercially supported & maintained modules for configuring AWS services for production, including production-grade modules for EKS, EC2, ELB, RDS, VPC, and much more.
  2. Gruntwork Production Framework: An opinionated, step-by-step framework for successfully going to production on the public cloud.
  3. How to Build an End to End Production-Grade Architecture on AWS: A guided tour of a modern production-grade architecture for AWS that includes Kubernetes, AWS VPCs, data stores, CI/CD, secrets management, and a whole lot more.
  4. AWS Training: Training programs and certifications built by AWS.
  5. AWS edX courses: Free online courses from AWS itself.
  6. The Open Guide to Amazon Web Services: A free, open source guide to AWS by and for engineers who use AWS.
  7. Terraform: Up & Running. Much of the content from this blog post series comes from the 3rd edition of this book.

Conclusion

You’ve now seen how to use AWS to deploy EC2 instances and even a Kubernetes cluster. But you did all of these deployments manually, by clicking around the AWS Console (ClickOps), which is great for learning and testing, but has many drawbacks for production: it’s slow, error prone, and hard to reproduce, review, or test. Let’s now move on to part 4 of this series, where you’ll learn how to solve these problems by using Terraform to deploy and manage all of your infrastructure as code.


Co-founder of Gruntwork, Author of “Hello, Startup” and “Terraform: Up & Running”