AWS vs Azure vs Google Free Tier Comparison

Whether you’re new to public cloud altogether or already use one provider and want to try another, you may be interested in a comparison of the AWS vs Azure vs Google free tier. The big three cloud providers – AWS, Azure, and Google Cloud – each offer a free tier designed to give users the cloud experience without the costs. They include free trial versions of numerous services so users can test out different products and learn how they work before making a big commitment. While they may only cover a small environment, they’re a good way to learn more about each cloud provider. For all of the cloud providers, the 12-month free trials are available only to new users.

AWS Free Tier Offerings

The AWS free tier includes more than 60 products. There are two different types of free options available, depending on the product: always free and 12 months free. The services that fall under the 12-months-free option are for new customers and can be used for free (up to a specific level of usage) for one year from the date the account was created. Keep in mind that once the free 12 months are up, your services will be charged at the normal rate. Be prepared and review this checklist of things to do when you outgrow the AWS free tier.
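To make that expiry concrete, here’s a minimal sketch of the math once usage passes the 750 free monthly EC2 micro-instance hours. The function name and the example rate are illustrative, not AWS-published constants.

```python
# Sketch: estimate the charge after exceeding the 12-month free tier's
# 750 monthly EC2 micro-instance hours. The function name and example
# rate are illustrative, not AWS-published constants.
def ec2_free_tier_overage(hours_used, on_demand_rate_per_hour, free_hours=750):
    billable = max(0, hours_used - free_hours)  # only hours past the allotment bill
    return round(billable * on_demand_rate_per_hour, 2)
```

For example, 800 instance-hours against a hypothetical $0.0116/hour rate would leave 50 billable hours.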

Azure Free Tier Offerings

The Azure equivalent of a free tier is referred to as a free account. As a new user in Azure, you’re given a $200 credit that has to be used in the first 30 days after activating your account. When you’ve used up the credit or 30 days have expired, you’ll have to upgrade to a paid account if you wish to continue using certain products. Ensure that you have a plan to reduce Azure costs in place. If you don’t need the paid products, there’s also the always free option. 

Some of the ways people choose to use their free account are to gain insights from their data, test and deploy enterprise apps, create custom mobile experiences and more. 

Google Cloud Free Tier Offerings

The Google Cloud Free Tier is essentially an extended free trial that gives you access to free cloud resources so you can learn about Google Cloud services by trying them on your own. 

The Google Cloud Free Tier has two parts – a 12-month free trial with a $300 credit to use on any Google Cloud services, and always free, which provides limited access to many common Google Cloud resources, free of charge. Google Cloud gives you more time with your credit than Azure: you have the full 12 months of the free trial to use it. Unlike free trials from the other cloud providers, Google does not automatically charge you once the trial ends – this way you’re guaranteed that the free tier is actually 100% free. Keep in mind that your trial ends after 12 months or once you’ve exhausted the $300 credit, whichever comes first; any usage beyond the free monthly usage limits draws down the $300 credit, and after that you must upgrade to a paid account to continue using Google Cloud.
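As a rough sketch of that “whichever comes first” rule – the function and parameter names are our own, and the 365-day approximation of 12 months is an assumption, not a Google API:

```python
# Sketch of the trial rule: 12 months or $300, whichever comes first,
# after which usage stops rather than converting to paid charges.
# Names and the 365-day approximation are our own, not a Google API.
def gcp_trial_active(days_since_start, credit_spent, credit=300.0, trial_days=365):
    return days_since_start < trial_days and credit_spent < credit
```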

Free Tier Limitations

It’s important to note that the always-free services vary widely between the cloud providers and there are usage limitations. Keep in mind the cloud providers’ motivations: they want you to get attached to the services so you start paying for them. So, be aware of the limits before you spin up any resources, and don’t be surprised by any charges. 

In AWS, when your free tier expires or if your application use exceeds the free tier limits, you pay standard, pay-as-you-go service rates. Azure and Google both offer credits for new users that start a free trial, which are a handy way to set a spending limit. However, costs can get a little tricky if you aren’t paying attention. Once the credits have been used, you’ll have to upgrade your account if you wish to continue using the products. Essentially, the credit that was acting as a spending limit is automatically removed, so you will have to pay for whatever you use beyond the free amounts. In Google Cloud, there is a cap on the number of virtual CPUs you can use at once – and you can’t add GPUs or use Windows Server instances.

In Azure, certain amounts of popular products are free for 12 months after you upgrade your account. After 12 months, any products you may be using will continue to run unless decommissioned, and you’ll be billed at the standard pay-as-you-go rates.

Another limitation is that commercial software and operating system licenses typically aren’t available under the free tiers.

These offerings are “use it or lose it” – unused credits and unused free usage do not roll over into future months.

Popular Services, Products, and Tools to Check Out for Free

AWS has 33 products that fall under the one-year free tier – here are some of the most popular: 

  • Amazon EC2 Compute: 750 hours per month of Linux, RHEL, or SLES t2.micro or t3.micro instance usage, plus 750 hours per month of Windows t2.micro or t3.micro instance usage, depending on region.
  • Amazon S3 Storage: 5GB of standard storage
  • Amazon RDS Database: 750 hours per month of db.t2.micro database usage using MySQL, PostgreSQL, MariaDB, Oracle BYOL, or SQL Server, 20 GB of General Purpose (SSD) database storage and 20 GB of storage for database backups and DB Snapshots. 

For the always-free option, you’ll find a number of products as well, some of these include:

  • AWS Lambda: 1 million free compute requests per month and up to 3.2 million seconds of compute time per month.
  • Amazon DynamoDB: 25 GB of database storage per month, enough to handle up to 200M requests per month.
  • Amazon CloudWatch: 10 custom metrics and alarms per month, 1,000,000 API requests, 5GB of Log Data Ingestion and Log Data Archive and 3 Dashboards with up to 50 metrics.

Azure has 19 products that are free each month for 12 months – here are some of the most popular:

  • Linux and Windows virtual machines: 750 hours (using B1S VM) of compute time 
  • Managed Disk Storage: 64 GB x 2 (P6 SSD) 
  • Blob Storage: 5GB (LRS hot block) 
  • File Storage: 5GB (LRS File Storage) 
  • SQL databases: 250 GB

For their always free offerings, you’ll find even more popular products – here are a few:

  • Azure Kubernetes Service: no charge for cluster management; you pay only for the virtual machines and the associated storage and networking resources consumed.
  • Azure DevOps: 5 users for open source projects and small projects (with unlimited private Git repos). For larger teams, the cost ranges from $6-$90 per month.
  • Azure Cosmos DB (400 RU/s provisioned throughput)

Unlike AWS and Azure, Google Cloud does not have a 12-months-free offering. However, it does still have a free tier with a wide range of always-free services – some of the most popular ones include:

  • Google BigQuery: 1 TB of queries and 10 GB of storage per month.
  • Kubernetes Engine: One zonal cluster per month
  • Google Compute Engine: 1 f1-micro instance per month only in U.S. regions. 30 GB-months HDD, 5 GB-months snapshot in certain regions and 1 GB of outbound network data from North America to all region destinations per month.
  • Google Cloud Storage: 5 GB of regional storage per month, only in the US; 5,000 Class A and 50,000 Class B operations; and 1 GB of outbound network data from North America to all region destinations per month.


Check out these blog posts on each cloud provider to see more ways you can start saving:

How AWS Reserved Instance Discounts Are Applied

AWS Reserved Instance discounts can be confusing and opaque. We talk to AWS customers who don’t know what instances their discounts are being applied to – or even whether their Reserved Instances are being utilized. Here are some notes to help you understand AWS RIs.

What Do Reserved Instances Actually Reserve?

There is an understandable misconception that a “reserved instance” always reserves capacity – a specific VM that is “yours” for the month. If that’s what you want, you’re looking for a Zonal Reserved Instance (locked to a specific instance type and Availability Zone) or an AWS Capacity Reservation. Both of these are specific to an availability zone and instance attributes, including the instance type, tenancy, and platform/OS. If you purchase a “zonal” reserved instance, you will also get a capacity reservation in the designated availability zone, but regional reserved instances do not reserve capacity. Standalone capacity reservations do not provide a discount – but then, they do not require a 1-year or 3-year contract, either. Note that even a Capacity Reservation is not guaranteed to be associated with a specific instance unless that instance was launched into that reservation.

Reserved Instances might be better named something to the effect of “pre-paid credits”. You are not purchasing a specific instance, but rather, a discount that will be applied to instances in your account, as long as they match certain attributes.

How AWS Reserved Instance Discounts Are Applied

Ultimately, you won’t know which specific instance your RI discount is applied to until you look at your bill, and if you want to find out, you’ll sift through hundreds of thousands of rows on a Cost and Usage Report. And there may well be hundreds of rows for any single RI, because the instance that receives the reserved instance discount can actually change minute to minute. Or at least, by the fraction of the hour. 

The “scope” of your Reserved Instance affects how the discount is applied. Zonal Reserved instances are for a specific availability zone, reserve capacity in that zone, and do not have instance size flexibility. Therefore, if you have reserved an m5.large in Availability Zone us-east-1a, your discount will only apply to that specific instance type, size, and zone. Zonal RIs are less expensive than Regional RIs – but since they are also much more constrained, it’s easier for them to accidentally go unused if you do not have a matching instance running.

Meanwhile, a Regional Reserved instance does not reserve capacity, but it does allow flexibility in both availability zone and instance size within the same family. That’s where the normalization factor comes in. Obviously, you wouldn’t want to be charged the same amount for a t3.nano as a t3.2xlarge. The normalization factor balances these sizes, so that a reservation for a t3.xlarge could count for one t3.xlarge instance, or two t3.large instances, or 32 t3.nano instances, or 50% of a t3.2xlarge, or some combination of sizes. The benefit is applied from the smallest to largest instance size within the family. 
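The normalization arithmetic above can be sketched as follows. The per-size factors are the ones AWS publishes for size flexibility; the helper functions themselves are an illustrative sketch, not an AWS API:

```python
# Size normalization factors AWS publishes for size-flexible RIs.
NORMALIZATION = {
    "nano": 0.25, "micro": 0.5, "small": 1, "medium": 2,
    "large": 4, "xlarge": 8, "2xlarge": 16,
}

def units(instance_type):
    # Footprint of one instance in normalized units, e.g. "t3.large" -> 4
    _, size = instance_type.split(".")
    return NORMALIZATION[size]

def ri_coverage(reserved_type, running_type, running_count):
    # Fraction of the running usage one regional, size-flexible RI covers
    used = units(running_type) * running_count
    return min(1.0, units(reserved_type) / used)
```

So a t3.xlarge reservation fully covers two t3.large instances or 32 t3.nano instances, but only half of a t3.2xlarge.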

If you use consolidated billing, your Reserved Instance discount will be applied first to applicable usage within the purchasing account, then to other accounts in the organization. 

Should You Even Use AWS Reserved Instances?

AWS is aware of this confusion – which is part of the reason they released Savings Plans last year. Instead of the instance size and normalization factor calculations, you simply have a spend commitment, which is flexible across instance family, size, OS, tenancy, and AWS Region, and which also applies to AWS Fargate and AWS Lambda usage. Hopefully, Savings Plans will soon come to the other resources that support RIs, such as RDS, ElastiCache, and so on.

Savings plans can provide the same level of discount, without the rigidity of Reserved Instances, so we recommend them. If you already have Reserved Instance commitments, just ensure that you are fully utilizing them, or else (for EC2 RIs) sell them on the RI marketplace.

And consider whether you have resources that just don’t need to be running 24×7. Turning them off can save more than either of these options.

Spot Instances Can Save Money – But Are Cloud Customers Too Scared to Use Them?

Spot instances and similar “spare capacity” models are frequently cited as one of the top ways to save money on public cloud. However, we’ve noticed that fewer cloud customers are taking advantage of this discounted capacity than you might expect.

We say “spot instances” in this article for simplicity, but each cloud provider has their own name for the sale of discounted spare capacity – AWS’s spot instances, Azure’s spot VMs and Google Cloud’s preemptible VMs.

Spot instances are a type of purchasing option that allows users to take advantage of spare capacity at a low price, with the possibility that it could be reclaimed for other workloads with just brief notice. 

In the past, AWS’s model required users to bid on Spot capacity. However, the model has since been simplified so users don’t actually have to bid for Spot Instances anymore. Instead, they pay the Spot price that’s in effect for the current hour for the instances that they launch. The prices are now more predictable with much less volatility. Customers still have the option to control costs by providing a maximum price that they’re willing to pay in the console when they request Spot Instances.
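A small illustration of how the optional max price interacts with the current spot price – the names are our own, and this models the billing behavior described above rather than an AWS API:

```python
# You pay the spot price currently in effect, not your max price; the
# request only goes unfulfilled (or the instance is interrupted) if the
# spot price rises above your max. Illustrative sketch, not an AWS API.
def spot_hourly_charge(current_spot_price, max_price=None):
    if max_price is not None and current_spot_price > max_price:
        return None  # request not fulfilled (or instance interrupted)
    return current_spot_price
```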

Spot Instances in Each Cloud

Variations of spot instances are offered across different cloud providers. AWS has Spot Instances while Google Cloud offers preemptible VMs and as of March of this year, Microsoft Azure announced an even more direct equivalent to Spot Instances, called Azure Spot Virtual Machines. 

Spot VMs have replaced the preview of Azure’s low-priority VMs on scale sets – all eligible low-priority VMs on scale sets have automatically been transitioned to Spot VMs. Azure Spot VMs provide access to unused Azure compute capacity at deep discounts. Spot VMs can be evicted at any time if Azure needs capacity. 

AWS spot instances have variable pricing. Azure Spot VMs offer the same characteristics as a pay-as-you-go virtual machine, the differences being pricing and evictions. Google Preemptible VMs offer a fixed discounting structure. Google’s offering is a bit more flexible, with no limitations on the instance types. Preemptible VMs are designed to be a low-cost, short-duration option for batch jobs and fault-tolerant workloads.

Adoption of Spot Instances 

Our research indicates that less than 20% of cloud users use spot instances on a regular basis, despite spot being on nearly every list of ways to reduce costs (including our own).

While applications can be built to withstand interruption, specific concerns remain, such as loss of log data, exhausting capacity and fluctuation in the spot market price.

In AWS, it’s important to note that while spot prices can rise as high as the on-demand price, they don’t normally reach it, since they are driven by long-term supply and demand.

A Spot Fleet is a collection of Spot Instances – and can also include On-Demand Instances – for which you specify a target capacity you want to maintain. AWS attempts to meet that target capacity by launching the number of Spot Instances and On-Demand Instances specified in the Spot Fleet request.

To help reduce the impact of interruptions, you can set up Spot Fleets to respond to interruption notices by hibernating or stopping instances instead of terminating when capacity is no longer available. Spot Fleets will not launch on-demand capacity if Spot capacity is not available on all the capacity pools specified.
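For reference, a request along these lines might be sketched as the following configuration. The field names follow the EC2 RequestSpotFleet API, but treat the specifics – especially the placeholder role ARN and empty launch specifications – as assumptions, not a tested request:

```python
# Sketch of a Spot Fleet request configuration. Field names follow the
# EC2 RequestSpotFleet API, but treat the specifics (and the placeholder
# role ARN) as assumptions rather than a tested request.
fleet_config = {
    "TargetCapacity": 10,
    "OnDemandTargetCapacity": 2,             # optional on-demand baseline
    "AllocationStrategy": "capacityOptimized",
    "InstanceInterruptionBehavior": "stop",  # stop/hibernate instead of terminate
    "IamFleetRole": "arn:aws:iam::123456789012:role/fleet-role",  # placeholder
    "LaunchSpecifications": [],              # instance types/subnets go here
}
```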

AWS also has a capability that allows you to use Amazon EC2 Auto Scaling to scale Spot Instances – this feature also combines different EC2 instance types and pricing models. You are in control of the instance types used to build your group – groups are always looking for the lowest cost while meeting other requirements you’ve set. This option may be a popular choice for some as ASGs are more familiar to customers compared to Fleet, and more suitable for many different workload types. If you switch part or all of your ASGs over to Spot Instances, you may be able to save up to 90% when compared to On-Demand Instances.

Another interesting feature worth noting is Amazon’s capacity-optimized spot instance allocation strategy. When customers diversify their Fleet or Auto Scaling group, the system will launch capacity from the most available capacity pools, effectively decreasing interruptions. In fact, by switching to capacity-optimized allocation users are able to reduce their overall interruption rate by about 75%. 

Is “Eviction” Driving People Away?

There is one main caveat when it comes to spot instances – they are interruptible. All three major cloud providers have mechanisms in place for these spare capacity resources to be interrupted, related to changes in capacity availability and/or changes in pricing.

This means workloads can be “evicted” from a spot instance or VM – if the cloud provider needs the resource at any given time, your workloads can be kicked off. You are notified when an AWS spot instance is going to be evicted: AWS emits an event two minutes prior to the actual interruption. In Azure, you can opt to receive notifications that tell you when your VM is going to be evicted; however, you get only 30 seconds to finish any jobs and perform shutdown tasks prior to the eviction, making it very difficult to manage. Google Cloud also gives you 30 seconds to shut down your instances when you’re preempted so you can save your work for later, and Google always terminates preemptible instances after 24 hours of running. All of this means your application must be designed to be interruptible, and should expect interruption to happen regularly – difficult for some applications, but less so for others that are largely stateless or process work in small chunks.
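On AWS, the two-minute warning surfaces through the instance metadata path /latest/meta-data/spot/instance-action. A sketch of parsing that payload to see how long you have to clean up – the sample JSON below is illustrative of the documented {"action", "time"} shape:

```python
import json
from datetime import datetime, timezone

# Parse the (illustrative) payload served at the EC2 metadata path
# /latest/meta-data/spot/instance-action and return seconds remaining
# before the interruption takes effect.
def seconds_until_interruption(instance_action_json, now):
    payload = json.loads(instance_action_json)
    when = datetime.strptime(payload["time"], "%Y-%m-%dT%H:%M:%SZ")
    return (when.replace(tzinfo=timezone.utc) - now).total_seconds()
```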

Companies such as Spot – recently acquired by NetApp (congrats!) – help in this regard by safely moving the workload to another available spot instance automatically.

Our research has indicated that fewer than one-quarter of users agree that their spot eviction rate was too low to be a concern – which means for most, eviction rate is a concern. Of course, it’s certainly possible to build applications to be resilient to eviction. For instance, applications can make use of many instance types in order to tolerate market fluctuations and make appropriate bids for each type. 

AWS also offers an automatic scaling feature that has the ability to increase or decrease the target capacity of your Spot Fleet automatically based on demand. The goal of this is to allow users to scale in conservatively in order to protect your application’s availability.

Early Adopters of Spot and Other Innovations May be One and the Same

People who are hesitant to build for spot are more likely to use regular VMs, perhaps with Reserved Instances for savings. Meanwhile, people open to the idea of spot instances may be the same early adopters of other tech, like serverless, who no longer have a need for spot.

For the right architecture, spot instances can provide significant savings. It’s a matter of whether you want to bother.

How to Use the New AWS Cost Categories for Better Cost Allocation

Last week, AWS announced the general release of AWS Cost Categories. Anyone involved in managing AWS costs within your organization should ensure they understand this new feature and start using it for better cost allocation and billing management. 

What are AWS cost categories?

AWS Cost Categories are now visible in the console on the billing dashboard.

AWS cost categories are a new way to create custom groups that transcend tags and allow you to better manage costs according to your organizational structure, for example, by application group, team, or location.

Cost categories allow you to write rules and create custom groups of billing line items, which you can then use in AWS Cost Explorer, AWS Budgets, the AWS Cost and Usage Report, etc. You can group by account, tag, service, and charge types. 

How are cost categories different from tags?

At first, this “category” structure may seem similar to the tagging structure you already use in Cost Explorer, but there are a few key differences. 

First, you can create logic to categorize costs from specific services so they all belong to the same team. For example, you may have RDS resources belong to a category for the DBA team, Redshift belong to a FinOps category, or CodePipeline belong to a DevOps category. Categories also allow you to include costs that are not taggable, such as AWS support costs and some Reserved Instance charges.

Why should you use AWS cost categories?

The ability to create your own categorization rules is what makes this new option powerful. You can do this through the Cost Categories rule builder, JSON editor, or API. The rule builder is straightforward and has some built-in logic options such as “is, is not, contains, starts with” and “ends with”. 
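For example, a single rule mapping RDS costs to a “DBA” category value might look like the following. The key names follow the Cost Explorer CreateCostCategoryDefinition API, but treat the exact shape as an assumption to verify against the documentation:

```python
# Hypothetical rule mapping RDS costs to a "DBA" category value. Key
# names follow the Cost Explorer CreateCostCategoryDefinition API, but
# treat the exact shape as an assumption to verify against the docs.
dba_rule = {
    "Value": "DBA",
    "Rule": {
        "Dimensions": {
            "Key": "SERVICE",
            "Values": ["Amazon Relational Database Service"],
            "MatchOptions": ["EQUALS"],
        }
    },
}
```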

For organizations with many AWS accounts, categorizing accounts into business units, products, applications, or production vs. non-production is helpful in allocating and evaluating costs.

Ensure costs are in control

Of course, whenever you take a closer look at your current costs, you’ll find areas that can be better optimized. Make sure you are only paying for the AWS resources you actually need, and schedule idle resources to turn off using ParkMyCloud – now supporting EKS and soon to support Redshift. 

8 Things You Should Know About AWS Redshift Pricing

If you use AWS, it’s likely you’ll use or at least run across Amazon Redshift – so make sure you know these eight things about how AWS Redshift Pricing works.

Amazon Redshift Overview

AWS calls Redshift the “most popular and fastest” cloud data warehouse. It is fully-managed, and scalable to petabytes of data for storage and analysis. You can use Redshift to analyze your data using SQL and business intelligence tools. It features:

  • Integration with data lakes and other AWS services that allows you to query data and write data back to the data lake in open formats
  • High performance – fast query performance, columnar storage, data compression, and zone maps. 
  • Petabyte scale – “virtually unlimited” according to AWS, with scalable number and type of nodes, with limitless concurrency.

There are three node types, two current and one older:

  • RA3 nodes with managed storage – scale and pay for compute and managed storage independently. You choose the number of nodes based on performance requirements and pay only for the storage you use. Managed storage uses SSDs in each RA3 node for local storage, and Amazon S3 for longer-term storage. If the data in your node exceeds the local SSDs, data is automatically offloaded to S3. 
  • DC2 nodes – these compute-intensive data warehouses include local SSD storage. You choose the number of nodes based on data size and performance requirements. Data is stored locally, and as it grows, you can add more compute nodes. AWS recommends this for datasets under 1TB uncompressed, otherwise, use RA3 for the S3 offloading capability to keep costs down.
  • DS2 nodes – these use HDDs, and AWS recommends you use RA3 instead.

Where did the name come from? In astronomy, “redshift” refers to the lengthening of electromagnetic radiation toward longer wavelengths as an object moves away from the observer – the light equivalent of the change in an ambulance siren pitch as it passes you, collectively known as the Doppler Effect. Or, if you’re into gossip, it’s a thumb to the nose at “big red” Oracle.

AWS Redshift Pricing Structure

So, how much does Amazon Redshift cost? Like EC2 and other services, the core cost is on-demand by the hour, based on the type and number of nodes in your cluster. 

Core On-Demand Pricing

  • RA3 – as of this writing, prices range from $3.26 per hour for an ra3.4xlarge in US East (N. Virginia) to $5.195 per hour for the same type in Sao Paulo. Price increases linearly with size, with the ra3.16xlarge costing 4 times the ra3.4xlarge.
    • Data stored in managed storage is billed separately based on actual data stored in the RA3 node types, at the same rate whether your data is in SSDs or S3.
  • DC2 – the dc2.large currently costs $0.25 per hour in US East (N. Virginia) up to $0.40 in Sao Paulo.

Note: We’ve omitted DS2 as those are no longer recommended.
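A quick sketch of the core on-demand math – rate × nodes × hours, where 730 hours approximates a month and the rates plugged in are the examples quoted above:

```python
# Core on-demand math: per-node hourly rate x node count x hours.
# 730 hours approximates one month; rates are the examples quoted above.
def redshift_on_demand_monthly(node_hourly_rate, node_count, hours=730):
    return round(node_hourly_rate * node_count * hours, 2)
```

A single dc2.large at $0.25/hour comes to about $182.50 per month; two ra3.4xlarge nodes at $3.26/hour come to about $4,759.60.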

Pricing for Additional Capabilities

Amazon Redshift Spectrum Pricing – Redshift Spectrum allows you to run SQL queries directly against Amazon S3 data. You are charged for the number of bytes scanned by Spectrum, rounded up by megabyte, with a 10MB minimum per query. Compressed and columnar data will keep costs down. Current pricing is $5 per terabyte of data scanned.
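That scan-based billing can be sketched as follows. We assume decimal megabytes and terabytes here, which may differ from AWS’s exact rounding:

```python
import math

# Sketch of Spectrum's scan-based billing: $5/TB scanned, rounded up to
# the next megabyte with a 10 MB minimum per query. Decimal MB/TB are
# assumed here and may differ from AWS's exact rounding.
def spectrum_query_cost(bytes_scanned, price_per_tb=5.0):
    mb = max(10, math.ceil(bytes_scanned / 1_000_000))
    return mb * price_per_tb / 1_000_000  # cost per MB = price_per_tb / 1e6
```

Under these assumptions, a full terabyte scanned costs $5, and even a 1-byte query bills the 10 MB minimum.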

Concurrency Scaling Pricing – you can accumulate one hour of concurrency scaling cluster credits every 24 hours while your main cluster is running. Past that, you will be charged the per-second on-demand rate.

Redshift Managed Storage Pricing – managed storage for RA3 node types is at a fixed GB-month rate for the region you are using. It is calculated per hour based on total data present in the managed storage. 

8 Things to Keep in Mind

Of course, there’s always more to know.

  • Free trial – for your first Redshift cluster, you can get a two-month free trial of a DC2.Large node. It’s actually a trial of 750 hours per month, so if you use more than 160GB of compressed SSD storage or run multiple nodes, you will exhaust it in less than one month, at which point you’ll be charged the on-demand rate unless you spin down the cluster. This is separate from the AWS Free Tier.
  • Reserved Instances are available – by paying upfront, you can pay less overall – 3% to 63% depending on the payment options you choose. But should you use them?
  • Billed per-second – for partial hours, you will only be billed at a per-second rate. Surprisingly, this was only released in February of this year.
  • You can pause – you can pause and resume to suspend billing, but you will still pay for backup storage while a cluster is paused.
  • Redshift Spectrum pricing does not include costs for requests made against your S3 buckets – see S3 pricing for those rates. 
  • Redshift managed storage pricing does not include backup storage due to snapshots, and once the cluster is terminated, you will still be charged for your backups. Don’t let these get orphaned!
  • Data transfer – there is no charge for data transferred between Amazon Redshift and Amazon S3 within the same region for backup, restore, load, and unload operations – but for all other data transfers in and out, you will be billed at standard data transfer rates. 
  • RA3 is not available in all regions.

Keep Your AWS Redshift Costs In Control

There are a few things you can do to optimize your AWS Redshift costs:

  • Use Reserved Instances where you have predictable needs, and where the savings over on-demand is high enough
  • Delete orphaned snapshots – like all backups, ensure that you are managing your snapshots and deleting when clusters are deleted
  • Schedule on/off times – for Redshift clusters used for development, testing, staging, and other purposes not needed 24×7, make sure you schedule them to turn off when not needed – now possible with the announcement from last month that Redshift clusters can now be paused. Automated scheduling coming soon in ParkMyCloud!

5 Things You Need to Know About AWS Regions and Availability Zones

Anytime you provision infrastructure from Amazon Web Services (AWS), you will need to choose which of the AWS Regions and Availability Zones it will live in. Here are 5 things you need to know about these geographic groupings, including tips on how to choose, and things to watch out for. 

1. What are AWS Regions and How Many are There? 

AWS Regions are the broadest geographic category that define the physical locations of AWS data centers. Currently, there are 22 regions dispersed worldwide across North America, South America, Europe, China, Africa, Asia Pacific and the Middle East. All Regions are isolated and independent of one another. 

Every region consists of multiple, separate Availability Zones within a geographic area. AWS Regions use a multiple-AZ design – unlike other cloud providers that treat a region as one single data center. 

AWS has a larger footprint around the globe than all the other cloud providers, and to support their customers and ensure they maintain this global footprint, AWS is constantly opening new Regions. 

Here’s a look at the different regions and their AWS codes. 

US East (Ohio) – us-east-2
US East (N. Virginia) – us-east-1
US West (N. California) – us-west-1
US West (Oregon) – us-west-2
US GovCloud West – us-gov-west-1
US GovCloud East – us-gov-east-1
Asia Pacific (Hong Kong) – ap-east-1
Asia Pacific (Mumbai) – ap-south-1
Asia Pacific (Seoul) – ap-northeast-2
Asia Pacific (Singapore) – ap-southeast-1
Asia Pacific (Sydney) – ap-southeast-2
Asia Pacific (Tokyo) – ap-northeast-1
Canada (Central) – ca-central-1
Europe (Frankfurt) – eu-central-1
Europe (Ireland) – eu-west-1
Europe (London) – eu-west-2
Europe (Paris) – eu-west-3
Europe (Stockholm) – eu-north-1
Middle East (Bahrain) – me-south-1
South America (São Paulo) – sa-east-1
China (Beijing) – cn-north-1
China (Ningxia) – cn-northwest-1

Note: AWS manages US GovCloud, AWS China, and the general AWS as distinct “partitions”. An account must be set up within the partition to get access to any of the regions in that partition.

2. What are AWS Availability Zones and How Many are There? 

An Availability Zone (AZ) consists of one or more data centers at a location within an AWS Region. Each AZ has independent cooling, power, and physical security. Additionally, they are connected through redundant, ultra-low-latency networks. 

Within AZs, customers can operate production applications and databases that are more fault tolerant, scalable, and highly available than they would be in a single data center. 

Every AZ in an AWS Region is interconnected with high-bandwidth, low-latency, fully redundant metro fiber, providing high-throughput, low-latency networking between AZs. All AZs are physically separated from one another by a significant distance, although all are within 60 miles of each other.

Around the world, there are currently 69 Availability Zones. Here’s a breakdown of the Availability Zones you can find within each Region.

Name – Code – # of Availability Zones

US East (Ohio) – us-east-2 – 3 Availability Zones
US East (N. Virginia) – us-east-1 – 6 Availability Zones
US West (N. California) – us-west-1 – 3 Availability Zones
US West (Oregon) – us-west-2 – 4 Availability Zones
US GovCloud West – us-gov-west-1 – 3 Availability Zones
US GovCloud East – us-gov-east-1 – 3 Availability Zones
Asia Pacific (Hong Kong) – ap-east-1 – 3 Availability Zones
Asia Pacific (Mumbai) – ap-south-1 – 3 Availability Zones
Asia Pacific (Seoul) – ap-northeast-2 – 3 Availability Zones
Asia Pacific (Singapore) – ap-southeast-1 – 3 Availability Zones
Asia Pacific (Sydney) – ap-southeast-2 – 3 Availability Zones
Asia Pacific (Tokyo) – ap-northeast-1 – 4 Availability Zones
Canada (Central) – ca-central-1 – 2 Availability Zones
Europe (Frankfurt) – eu-central-1 – 3 Availability Zones
Europe (Ireland) – eu-west-1 – 3 Availability Zones
Europe (London) – eu-west-2 – 3 Availability Zones
Europe (Paris) – eu-west-3 – 3 Availability Zones
Europe (Stockholm) – eu-north-1 – 3 Availability Zones
Middle East (Bahrain) – me-south-1 – 3 Availability Zones
South America (São Paulo) – sa-east-1 – 3 Availability Zones
China (Beijing) – cn-north-1 – 2 Availability Zones
China (Ningxia) – cn-northwest-1 – 3 Availability Zones

Individual AZ names append a letter to the region code – for example, us-east-2a.
There is also an option available called AWS Local Zones. This allows deployment of latency-sensitive portions of applications close to large populations where no AWS region currently exists. The first one is Los Angeles, which belongs to the Oregon region (and not the California region).

3. How to Choose a Region/AZ

So that’s what they are – now how do you choose a Region and Availability Zone for your workloads? A few factors to weigh:

  • Distance – choose Regions close to you and your customers to keep latency low
  • Service availability – as we’ll discuss more below, some Regions offer more services than others, and new services tend to be introduced in those Regions first
  • Cost – always check the AWS pricing calculator to compare costs between Regions; N. Virginia is usually among the least expensive, while São Paulo is typically the most expensive
  • Compliance – GDPR, government contracting, and other regulated industries may require a specific Region or multiple Regions
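Putting the distance and service-availability factors together, a toy Region chooser might look like the sketch below. The latency figures and the service catalog are made up purely for illustration; in practice you would measure latency yourself and check the AWS Region table for real service availability.

```python
def pick_region(latencies_ms, required_services, catalog):
    """Return the lowest-latency Region that offers every required service."""
    candidates = {
        region: ms for region, ms in latencies_ms.items()
        if required_services <= catalog.get(region, set())  # subset test
    }
    if not candidates:
        raise ValueError("no Region offers all required services")
    return min(candidates, key=candidates.get)

# Hypothetical measured latencies and a hypothetical service catalog.
latencies = {"us-east-1": 85, "eu-west-1": 20, "eu-north-1": 35}
catalog = {
    "us-east-1": {"ec2", "s3", "lambda", "fraud-detector"},
    "eu-west-1": {"ec2", "s3", "lambda"},
    "eu-north-1": {"ec2", "s3"},
}
print(pick_region(latencies, {"ec2", "lambda"}, catalog))  # eu-west-1
```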

4. What Sorts of Functions are Defined by Region and Availability Zone?

Regionless Services

Some services, like AWS IAM, do not support Regions, so the endpoints for those services do not include a Region. Other services, such as Amazon EC2, support Regions but let you specify an endpoint that does not include a Region. Additionally, Amazon Simple Storage Service (Amazon S3) supports cross-Region replication.
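You can see the difference directly in the endpoint hostnames: a global service like IAM uses a single endpoint, while regional services embed the Region. The helper below is a simplified sketch of that pattern (real endpoint resolution has exceptions, such as the China partition's `.amazonaws.com.cn` suffix and legacy S3 formats):

```python
def service_endpoint(service, region=None):
    """Assemble a public AWS endpoint hostname.

    Regionless services such as IAM use one global endpoint;
    regional services such as EC2 embed the Region in the hostname.
    """
    if region is None:
        return f"{service}.amazonaws.com"         # e.g. iam.amazonaws.com
    return f"{service}.{region}.amazonaws.com"    # e.g. ec2.us-east-1.amazonaws.com

print(service_endpoint("iam"))                # iam.amazonaws.com
print(service_endpoint("ec2", "us-east-1"))   # ec2.us-east-1.amazonaws.com
```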

AWS Regions introduced before March 20, 2019 are enabled by default. You can begin working in these Regions immediately. Regions introduced after March 20, 2019 are disabled by default – you must enable these Regions before you can use them. Administrators for an account can enable and disable Regions and use a policy condition that controls who can have access to AWS services in a particular AWS Region.
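The per-Region access control mentioned above is typically implemented with the `aws:RequestedRegion` global condition key in an IAM policy. A sketch of such a policy – the two approved Regions here are just examples – might look like this:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DenyOutsideApprovedRegions",
      "Effect": "Deny",
      "Action": "ec2:*",
      "Resource": "*",
      "Condition": {
        "StringNotEquals": {
          "aws:RequestedRegion": ["us-east-1", "eu-west-1"]
        }
      }
    }
  ]
}
```

Attached to a user or group, this denies any EC2 action whose request targets a Region outside the approved list.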

Some less popular services, such as Alexa for Business, Amazon Augmented AI (A2I), Amazon Fraud Detector, and Amazon Mobile Analytics, are only available in the US East (N. Virginia) Region.

Region Differences Across Major Services

Amazon S3

Amazon Simple Storage Service (S3) is storage for the internet. You can use Amazon S3 to store and retrieve any amount of data at any time, from anywhere on the web. 

You specify an AWS Region when you create your Amazon S3 bucket. For the S3 Standard, S3 Standard-IA, and S3 Glacier storage classes, your objects are automatically stored across multiple devices spanning a minimum of three Availability Zones, each physically separated within an AWS Region. Objects stored in the S3 One Zone-IA storage class are stored redundantly within a single Availability Zone in the AWS Region you select. 

S3 operates in a minimum of three AZs, each separated by miles to protect against local events like fires, floods, etc. S3 is available in all Regions in North America, South America, Europe, Africa, China, Asia Pacific and the Middle East.
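The AZ-redundancy difference between storage classes is worth encoding when you automate bucket policies. A small sketch, using the real S3 storage-class identifiers from the API (`STANDARD`, `STANDARD_IA`, `ONEZONE_IA`, `GLACIER`):

```python
# Multi-AZ redundancy per S3 storage class, as described above:
# only One Zone-IA keeps objects in a single Availability Zone.
MULTI_AZ = {
    "STANDARD": True,
    "STANDARD_IA": True,
    "GLACIER": True,
    "ONEZONE_IA": False,  # stored in a single AZ only
}

def survives_az_loss(storage_class):
    """True if objects remain available after the loss of one AZ."""
    return MULTI_AZ[storage_class]

print(survives_az_loss("ONEZONE_IA"))  # False
```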

Amazon Elastic Compute Cloud

Amazon Elastic Compute Cloud (EC2) provides resizable, scalable computing capacity in the cloud. Each Amazon EC2 Region is designed to be isolated from the other Amazon EC2 Regions. This achieves the greatest possible fault tolerance and stability.

When you view your resources, you see only the resources that are tied to the Region that you specified. Why does this happen? Because Regions are isolated from each other, and resources are not automatically replicated across Regions.

When you launch an EC2 instance, you must select an AMI that’s in the same Region. If the AMI is in another Region, you can copy the AMI to the Region you’re using. When you launch an instance, you can select an Availability Zone or let Amazon choose one for you.
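Pinning an instance to a specific AZ is done through the `Placement` parameter of the EC2 RunInstances call. The sketch below only builds the request parameters (the AMI ID and AZ are placeholders); with boto3 installed you would pass them as `ec2_client.run_instances(**params)`:

```python
def run_instances_params(ami_id, instance_type, availability_zone=None):
    """Build the parameters for EC2 RunInstances, optionally pinning an AZ.

    If no AZ is given, EC2 picks one for you within the client's Region.
    """
    params = {
        "ImageId": ami_id,            # must be an AMI in the same Region
        "InstanceType": instance_type,
        "MinCount": 1,
        "MaxCount": 1,
    }
    if availability_zone is not None:
        params["Placement"] = {"AvailabilityZone": availability_zone}
    return params

# Placeholder AMI ID; the AZ must belong to the client's configured Region.
params = run_instances_params("ami-0123456789abcdef0", "t3.micro", "us-east-1a")
print(params["Placement"])  # {'AvailabilityZone': 'us-east-1a'}
```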

EC2 is available in all Regions in North America, South America, Europe, Africa, China, Asia Pacific and the Middle East.

AWS Lambda

AWS Lambda runs your code in response to triggers and automatically manages the compute resources for you. Lambda maintains compute capacity across multiple AZs in each Region to help protect your code against individual machine or data center facility failures. 

AWS Lambda is available in all Regions in North America, South America, Europe, Africa, China, Asia Pacific and the Middle East, with one exception: Osaka, which is a Local Region. This type of Region is new and consists of an isolated, fault-tolerant infrastructure design located in a single data center. 

Amazon Simple Notification Service

Amazon SNS is a highly available, durable, secure, fully managed messaging service that lets you decouple distributed systems, microservices, and serverless applications. SNS stores messages across multiple Availability Zones to provide high message durability. 

Amazon SNS is available in all Regions in North America, South America, Europe, Africa, China, Asia Pacific and the Middle East.

Amazon EBS

Amazon Elastic Block Store (EBS) is AWS’s persistent, block-level storage solution for Amazon EC2. It lets you minimize data loss and recovery time while regularly backing up your data and log files across different geographic Regions.

EBS volumes are replicated within an Availability Zone (AZ) and can easily scale to petabytes of data. Each volume is designed to protect against failures by replicating within its AZ, offering 99.999% availability and an annual failure rate (AFR) of between 0.1% and 0.2%. You can also quickly restore new volumes to launch applications in new Regions.

EBS Snapshots can be used to quickly restore new volumes across a region’s Availability Zones, enabling rapid scale.

EBS is available in all Regions in North America, South America, Europe, Africa, China, Asia Pacific and the Middle East.

5. Transferring Data Between Regions Can Matter Too

Transferring data between AWS services within a Region is priced differently depending on whether the data moves within a single AZ or across AZs.

Data transfers are free if you are within the same region, same availability zone, and use a private IP address. Data transfers within the same region, but in different availability zones, have a cost associated with them.
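Those rules reduce to a simple estimator. The sketch below uses $0.01/GB in each direction as an illustrative cross-AZ rate – always confirm current pricing for your Region before relying on a number like this:

```python
def intra_region_transfer_cost(gb, same_az, private_ip,
                               rate_per_gb_each_way=0.01):
    """Estimate intra-Region data transfer cost in USD.

    Same AZ over a private IP address: free.
    Cross-AZ (or via public/Elastic IP): charged in each direction.
    The default rate is illustrative, not current AWS pricing.
    """
    if same_az and private_ip:
        return 0.0
    return gb * rate_per_gb_each_way * 2  # charged both in and out

print(intra_region_transfer_cost(100, same_az=True, private_ip=True))   # 0.0
print(intra_region_transfer_cost(100, same_az=False, private_ip=True))  # 2.0
```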

So, to summarize, AWS Regions are separate geographic areas and within these regions are isolated locations that are known as Availability Zones (AZ). It’s important to pay attention to the services offered in each Region and AZ so you can make sure you are getting the most optimal service in your area.