EC2 Instance Types Comparison (and how to remember them)

AWS offers a range of EC2 instance types optimized for various purposes. It’s great that they provide so much variety, but of course, it means one more thing that you have to learn. It’s worth taking the time to do so, as ⅔ of IaaS spend goes toward compute – that’s a lot of EC2.

Check out a brief breakdown in this video, which also compares EC2 purchasing options:

Or, read on for a look into each of the AWS instance types. Remember that within each type, you’ll still need to choose the AWS instance sizes that suit your specific needs. Additionally, older generations within each instance type are still available for purchase – for example, c5 is the latest “c” instance, but c4 and c3 remain available – but since the newer types tend to perform better at a lower price, you’ll only want to use the older types if you have an AMI or other dependency. The differences matter for some users… but you probably already know who you are. 

Note: a version of this blog was originally published in July 2018. It has been rewritten and updated for 2020. New EC2 instance types since our last writeup include A1, T3, z1d, high memory, R5, G4, and F1. 

Quick EC2 Instance Info 

This chart shows a quick summary of what we’ll cover. We’re including a brief description and mnemonic for each (hopefully helpful if you’re studying for an AWS certification!)

EC2 Instance Info

If you’ve taken a look at AWS training materials, you may have seen a couple of overall acronyms to remember all of these – perhaps Dr McGiFT Px or FIGHT Dr McPX. Whether these acronyms are useful at all is perhaps a point of discussion, but to ensure that all the instance types above are in your list, we suggest:

  • Fight Czar MXPD 
  • Fright Camp DXZ
  • March Gift PZXD

(and don’t forget high memory and Inf!)

General Purpose

These general purpose AWS EC2 instance types are a good place to start, particularly if you’re not sure what type to use. There are three general purpose types.

t instance type

The t3 family is a burstable instance type. If you have an application that needs only basic CPU and memory most of the time, you can choose t3. It also works well for applications that are used intermittently. While the instance is idle, you accrue CPU credits, which are then spent when the instance is busy. It’s useful for workloads that come and go, such as websites or development environments, and while generally inexpensive, make sure you understand how the CPU credits work before deploying these instances. There’s a little bit of math involved, and they may not be as cheap as they look at first glance. 
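
To make the credit math concrete: one CPU credit equals one vCPU running at 100% utilization for one minute. At the time of writing, a t3.micro earns 12 credits per hour, which works out to 12 ÷ 60 = 0.2 vCPUs of sustained usage – i.e. the advertised 10% baseline per vCPU across its 2 vCPUs. Run hotter than that for long enough and you’ll either drain your credit balance or (in unlimited mode) pay for the overage.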

Make sure you also understand the difference between t3 and the older t2 – t3 instances are in “unlimited mode” by default, so instead of throttling down to baseline CPU when your instance runs out of credits, you pay for overages.
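
If that default worries you, you can opt a t3 instance out of unlimited mode at launch (or afterward). Here’s a minimal sketch using boto3 – the AMI and instance IDs are placeholders, and it assumes your AWS credentials and region are already configured:

```python
import boto3

ec2 = boto3.client("ec2")

# Launch a t3.micro in "standard" credit mode, so it throttles to baseline
# instead of billing for surplus credits when the credit balance runs out.
ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    CreditSpecification={"CpuCredits": "standard"},
)

# Or flip an existing instance out of unlimited mode.
ec2.modify_instance_credit_specification(
    InstanceCreditSpecifications=[
        {"InstanceId": "i-0123456789abcdef0", "CpuCredits": "standard"}  # placeholder ID
    ]
)
```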

For each of the EC2 types we cover here, we’ll also add a mnemonic to help you remember the purpose of each instance type.

Mnemonic: t is for tiny or turbo.

m instance type

The m5 instance type is similar, but for more consistent workloads. It has a nice balance of CPU, memory, and disk. It’s not hard to see why almost half of EC2 workloads run on “m” instances. In addition to m5, you also have the option of m6g instances, which are powered by Arm-based AWS Graviton2 processors, making them more cost-efficient. There are also m5a, m5n, and m4 – most of which you can safely ignore unless you have a specific need for a processor other than the m5’s Intel Xeon Platinum 8175. If you aren’t sure what to choose, m5 is the most versatile of all the Amazon instance types. 

Mnemonic: m is for main choice or happy medium.

a1 instance type

The a1 instance type was announced in late 2018 and can be a less expensive option than other EC2 instance types. These instances are suited to scale-out workloads such as web servers, containerized microservices, caching fleets, distributed data stores, and development environments. They are powered by Arm processors and suited to Arm-based workloads.

Mnemonic: a is for Arm processor

Compute Optimized

c instance type

The c5 instance type has a high ratio of compute/CPU to memory. If you have a compute-intensive application – maybe scientific modelling, intensive machine learning, or multiplayer gaming – these instances are a good choice. There is also the c5d option, which is SSD-backed, and the c5n, which offers up to 100 Gbps of network bandwidth and more memory than the equivalent c5 instances. The c4 family is also still available.

Mnemonic: c is for compute (at least that one’s easy!)

Memory Optimized

r instance family

The r instance family is memory-optimized, which you might use for in-memory databases, real-time processing of unstructured big data, or Hadoop/Spark clusters. You can think of it as a kind of midpoint between the m5 and the x1e. In addition to r5, there are r5a instances, which deliver a lower cost per GiB of memory, and r5n instances, which have higher network bandwidth for applications that need improved throughput and packet-rate performance.

Mnemonic: r is for RAM.

x1 instance family

The x1 family has a much higher ratio of memory to compute, so this is a good choice if you have a full in-memory application or a big data processing engine like Apache Spark or Presto. The x1e variant is optimized for high-performance databases, in-memory databases, and other memory-intensive enterprise applications.

Mnemonic: x is for xtreme, as in “xtreme RAM” – that seems to be generally accepted, but we think it’s a bit weak. If you have any suggestions, comment below.

High Memory instance family

We’re not sure why these didn’t get an alphabet soup name like the rest of the AWS instances, but at least it’s easy to remember and understand. As you might guess, high memory instances run large in-memory databases, including production deployments of SAP HANA. 

Mnemonic: we’ll leave this one up to you.

z1d instance family

The z1d instances combine high compute capacity with a high memory footprint. They have a sustained core frequency of up to 4.0 GHz, the fastest of AWS’s offerings. These are best for electronic design automation (EDA) and some relational database workloads with high per-core licensing costs.

Mnemonic: z is for zippy 

Accelerated Computing

p instance type

If you need GPUs on your instances, p3 instances are a good choice. They are useful for video editing, and AWS also lists use cases of “computational fluid dynamics, computational finance, seismic analysis, speech recognition, autonomous vehicles” – so it’s fairly specialized. p2 instances are also available.

Mnemonic: p is for pictures (graphics).

Inf1 instance type

The Inf1 instances are a specialized EC2 type for machine learning inference applications, such as recommendation engines, forecasting, image and video analysis, advanced text analytics, document analysis, voice, conversational agents, translation, transcription, and fraud detection.

Mnemonic: inf is for inference

g instance type

The g instance type uses Graphics Processing Units (GPUs) to accelerate graphics-intensive workloads, and is also designed to accelerate machine learning inference. This could include adding metadata to an image, automated speech recognition, and language translation, as well as graphics workstations, video transcoding, and game streaming in the cloud. 

g4 is the latest family, and g3 are available as well.

Mnemonic: g is for graphics or GPU

F1 instance type

f1 instances offer customizable hardware acceleration with field programmable gate arrays (FPGAs) – hence the “f”. Applications could include genomics research, financial analysis, and real-time video processing.

Mnemonic: f is for FPGA 

Storage Optimized

i3 instance type

The i3 instance type is similar to h1, but it is SSD backed, so if you need an NVMe drive, choose this type. Use it for NoSQL databases, in-memory databases, Elasticsearch, and more. The i3en option has higher network bandwidth with Elastic Network Adapter (ENA)-based enhanced networking. 

Mnemonic: i is for IOPS.

d2 instance type

d2 instances have an even higher ratio of disk to CPU and memory, which makes them a good fit for Massively Parallel Processing (MPP), MapReduce and Hadoop distributed computing, and similar applications.

Mnemonic: d is for dense.

h1 instance type

The h1 type is HDD backed, with a balance of compute and memory. You might use it for distributed file systems, network file systems, or data processing applications.

Mnemonic: h is for HDD.

What EC2 instance types should you use?

As AWS has continued to add options to EC2, there are now EC2 instance types for almost any application. If you have comparison questions around pricing, run them through the AWS monthly calculator. And if you don’t know, starting with t3 or m5 is generally the way to go.
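
If you want to compare candidate types side by side before pulling up the calculator, the EC2 API can list their specs directly. A minimal sketch with boto3 (assuming credentials and a default region are configured):

```python
import boto3

ec2 = boto3.client("ec2")

# Compare a burstable t3 against a fixed-performance m5.
resp = ec2.describe_instance_types(InstanceTypes=["t3.medium", "m5.large"])

for it in resp["InstanceTypes"]:
    print(
        it["InstanceType"],
        it["VCpuInfo"]["DefaultVCpus"], "vCPUs,",
        it["MemoryInfo"]["SizeInMiB"] // 1024, "GiB,",
        "burstable" if it["BurstablePerformanceSupported"] else "fixed performance",
    )
```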

Looking for info on the other cloud providers?

AWS vs Azure vs Google Cloud Market Share 2020: What the Latest Data Shows

Q4 2019 earnings are in for the ‘big three’ cloud providers and you know what that means – it’s time for an AWS vs Azure vs Google Cloud market share comparison. Let’s take a look at all three providers side-by-side to see where they stand.

Note: a version of this post was originally published in April 2018 and updated in 2019. It has been updated again for 2020.

AWS vs. Azure vs. Google Cloud Earnings

To get a sense of the AWS vs Azure vs Google Cloud market share breakdown, let’s take a look at what each cloud provider’s reports shared.

AWS 

Amazon reported Amazon Web Services (AWS) revenue of $9.95 billion for Q4 2019, compared to $7.4 billion for Q4 2018. That’s AWS revenue growth of 34% compared to a year earlier. 
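
(You can check that growth figure directly: ($9.95 billion − $7.4 billion) ÷ $7.4 billion ≈ 0.34, or about 34%.)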

Across the business, Amazon’s quarterly sales increased to $87.4 billion, beating predictions of $86.02 billion. AWS has been a huge contributor to this growth: AWS revenue made up 11% of total Amazon sales for the quarter. AWS only continues to grow, bolstering the retail giant quarter after quarter.

One thing to keep in mind: you’ll see a couple of headlines pointing out that revenue growth is down, quoting that 34% number and comparing it to previous quarters’ growth rates, which peaked at 81% in 2015. However, that metric is of questionable value as AWS continues to increase revenue at this enormous scale, dominating the market (as we’ll see below). 

In media commentary, AWS’s numbers seem to speak for themselves:

Azure

While Amazon specifies AWS revenue, Microsoft only reports on Azure’s growth rate. That number is 62% revenue growth for the quarter, compared with the same quarter a year earlier; this time last year, growth was reported at 76%. As mentioned above, comparing growth rates to growth rates is interesting, but not necessarily as useful a metric as actual revenue numbers – which we don’t have for Azure alone.

Here are the revenue numbers Microsoft does report. Azure is under the “Intelligent Cloud” business, which grew 27% to $11.9 billion. The operating group also includes server products and cloud services (30% growth) and Enterprise Services (6% growth). 

The lack of specificity around Azure frustrates many pundits, as it simply can’t be compared directly to AWS, and it inevitably raises eyebrows about how Azure is really doing. Of course, that comparison assumes IaaS is the only piece of “cloud” that’s important – but then, that’s how AWS has grown to dominate the market. 

A victory for the cloud provider came in October, when Microsoft won the $10 billion JEDI cloud computing contract (although AWS is actively protesting the award, with claims of political interference).

Here are a few headlines on Microsoft’s reporting that caught our attention:

Google Cloud

This quarter, Google broke out revenue reporting for its cloud business for the first time. For the fourth quarter, Google Cloud generated $2.6 billion in revenue, up 53% from the previous year. For 2019 as a whole, Google Cloud brought in $8.9 billion in revenue – less than AWS generated in the fourth quarter alone.

Google CEO Sundar Pichai stated on the earnings report conference call, “The growth rate of GCP was meaningfully higher than that of Cloud overall, and GCP’s growth rate accelerated from 2018 to 2019.”

CFO Ruth Porat also highlighted Google Cloud Anthos, as Google leans into enabling the multi-cloud reality for its customers, something AWS and Azure have avoided.

Further reading on Google’s quarterly reporting:

Cloud Computing Market Share Breakdown – AWS vs. Azure vs. Google Cloud

When we originally published this blog in 2018, we included a market share breakdown from analyst Canalys, which reported AWS in the lead owning about a third of the market, Microsoft in second with about 15 percent, and Google sitting around 5 percent.

In 2019, they reported an overall growth in the cloud infrastructure market of 42%. By provider, AWS had the biggest sales gain with a $2.3 billion YOY increase, but Canalys reported Azure and Google Cloud with bigger percentage increases.

As of February 2020, Canalys reports AWS with 32.4% of the market, Azure at 17.6%, Google Cloud at 6%, Alibaba Cloud close behind at 5.4%, and other clouds with 38.5%. 

Ultimately, it seems clear that in the case of AWS vs Azure vs Google Cloud market share – AWS still has the lead.

Bezos has said, “AWS had the unusual advantage of a seven-year head start before facing like-minded competition. As a result, the AWS services are by far the most evolved and most functionality-rich.”

Our anecdotal experience talking to cloud customers often finds that true, and it says something that Microsoft isn’t breaking down their cloud numbers just yet, while Google leans into multi-cloud.

AWS remains far in the lead for now. With that said, it will be interesting to see how the actual numbers play out, especially as Alibaba catches up.

Cloud Economics: How to Overcome Human Biases to Save Money

It’s important for cloud customers to understand cloud economics. Cloud costs are dynamic – and, hopefully, optimized. However, that’s not always the case. Because optimizing cloud infrastructure is treated as a purely technological problem, the human biases at play are not always accounted for. 

What is Cloud Economics? 

Some articles you’ll find jump directly to the idea that “cloud economics” is a synonym for “saving money”. And while the economies of scale and infrastructure on demand mean that public cloud can save you money over traditional infrastructure, the two terms are not interchangeable.

Shmuel Kliger (founder of our parent company, Turbonomic) explains in this video that cloud economics “is the ability to deliver IT in a scalable way with speed, agility, new consumption models, and most importantly, with a high level of elasticity.”

He expands on this idea in another video: it’s microservices architecture taking the place of monolithic applications that enables this elasticity and rewrites the way cloud economics works.

Rational vs. Behavioral Economics in the Cloud

The concepts described above are exciting – but before assuming these benefits of speed, agility, etc. will be gained naturally upon adopting any type of cloud technology, we need to remember the human context. Taken from the perspective of rational economics, cloud users should always choose the most optimized cloud infrastructure options. If you’ve ever seen a whiteboard diagram of the cloud infrastructure your company uses, or taken a peek at your organization’s cloud bill, you’ll know this is not the case. 

To understand why, it’s beneficial to take a behavioral economics perspective. Through this lens, we can see that individuals and businesses are often not behaving in their own best interests, for a variety of reasons that will vary by the individual and the organization… and perhaps by the day. 

Economics of Cloud Costs

Cost is particularly dependent on where you sit within an organization and the particular lens you look through. For example, the CFO might have a very different view from the engineering team. Here’s a great talk and Twitter thread on the cultural issues at play from cloud economist Corey Quinn.

Examples of cognitive biases impacting cloud cost decision making include:

    • Blind spots – there are always going to be higher priorities than costs, including but not limited to speed of development and performance. Additionally, many engineering and development teams don’t believe it’s their job to care about costs – or at least, engineering departments are seen less as cost centers and more as profit centers that generate value. Cost optimization is tacked on at the end of a project and doesn’t receive much attention until spend spirals out of control. 
    • Choice Overload – the major cloud providers now offer an enormous number of services – AWS had 190 at our last count – more than any one person can easily evaluate to determine if they’re using the best option. Similarly, most users have a poor understanding of the total cost of ownership of their cloud environment and don’t actually know what cloud infrastructure exists. 
    • The IKEA Effect – people place a disproportionately high value on products they partially created. Developers may hang on to unoptimized infrastructure, because they created it, and it would hurt to let it go, even if it’s unnecessary to keep. 

(There are plenty more, but perhaps we’re falling prey to the bias and some of these decisions are perfectly rational.)

The point is that despite the buzz around AI and robotic process automation, the cloud doesn’t inherently manage itself to optimize costs. You need to do that. 

Cloud providers’ management environments are confusing, and they do not always encourage users to make good decisions. Luckily, the wind has started to blow the other way on this front, as cloud providers realize that offering cost optimization options makes for a better user experience and retains more customers in the long run. We’ve started to see more options, like Google’s sustained use discounts and AWS’s new Savings Plans, that make it easier to reduce costs without impacting operations. However, it’s up to the customer to find, master, and implement these solutions – and to know when cloud-native tools don’t do enough. 

How to Set Yourself Up for Success & Start Saving 

The good news is that being aware of natural tendencies that impact cost optimization is the first step to reducing costs. 

Determine Your Priorities 

First, determine what your goals are. What does “cost saving” mean to you? Does it mean reducing the overall bill by 20%? Does it mean being able to allocate every instance in your AWS account to a team or project so you can budget appropriately? Does it mean eliminating unused infrastructure?

Understand Your Bill

No matter what your goal, you need to understand your cloud bill before you can take action to reduce costs. The best way to do this is with a thorough tagging strategy. All resources need to be tagged. Ideally, you will create a set of tags that is applied to every resource, such as team, environment, application, and expiration date. To enforce this, some organizations have policies to terminate non-compliant instances, effectively forcing users to add these essential tags.  
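
You don’t need a third-party tool to get started on enforcement. Here’s a minimal sketch, using boto3, that reports EC2 instances missing a set of required tags – the tag keys are examples, and whether you merely report, stop, or terminate offenders is a policy decision for your organization:

```python
import boto3

# Example required tag keys – adjust to your own tagging policy.
REQUIRED_TAGS = {"team", "environment", "application", "expiration-date"}

ec2 = boto3.client("ec2")
paginator = ec2.get_paginator("describe_instances")

for page in paginator.paginate():
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            tag_keys = {t["Key"] for t in instance.get("Tags", [])}
            missing = REQUIRED_TAGS - tag_keys
            if missing:
                # Report only; a stricter policy might stop or terminate here.
                print(f"{instance['InstanceId']} is missing tags: {sorted(missing)}")
```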

Then, you can start to slice and dice by tag to understand what your resources are being used for, and where your money is going. 
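
Once the tags are in place, the Cost Explorer API will do much of the slicing for you. A rough sketch, assuming you’ve activated a “team” key as a cost allocation tag in the billing console:

```python
import boto3

ce = boto3.client("ce")  # Cost Explorer

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2020-01-01", "End": "2020-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "team"}],
)

# One group per tag value (e.g. "team$platform"), plus one for untagged spend.
for group in resp["ResultsByTime"][0]["Groups"]:
    cost = group["Metrics"]["UnblendedCost"]
    print(group["Keys"][0], round(float(cost["Amount"]), 2), cost["Unit"])
```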

Review Cost Saving Options 

Once you have a better picture of the resources in your cloud environment, you can start to review opportunities to save: use pricing options such as Reserved Instances or Savings Plans; eliminate unneeded resources such as orphaned volumes and snapshots; schedule non-production resources to turn off outside of working hours; upgrade and resize instances; and so on.
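
Orphaned volumes are one of the easiest wins to spot programmatically. A quick boto3 sketch that lists EBS volumes not attached to any instance (their status is “available”):

```python
import boto3

ec2 = boto3.client("ec2")

# "available" means the volume exists but isn't attached to anything.
resp = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)

for vol in resp["Volumes"]:
    print(vol["VolumeId"], f"{vol['Size']} GiB", "created", vol["CreateTime"])
```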

Designate a Cost-Responsible Party

While engineering teams can do these reviews as part of their normal processes, many organizations choose to create a “cloud center of excellence” or a similar department, solely focused on cloud expertise and cost management. Sysco shared a great example of how this worked for them, with gamification and a healthy dose of bagels as motivating factors for users throughout the organization to get on board with the team’s mission.

Automate Where You Can

On the flip side, there’s only so far food bribery can go. Since, as we’ve outlined in our cloud economics model, changing user behavior and habits is difficult, the best way to ensure change is by sidestepping the human element altogether. Those on/off schedules for dev and test environments? Automate them. Governance? Automate it. Resizing? Automate.
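
As an example of what that automation can look like, here’s a minimal sketch of an off-hours job (it could run as a scheduled Lambda function or cron task) that stops running instances tagged environment=dev – the tag key and value are assumptions, so match them to your own conventions:

```python
import boto3

ec2 = boto3.client("ec2")

# Find running instances tagged as dev environments.
resp = ec2.describe_instances(
    Filters=[
        {"Name": "tag:environment", "Values": ["dev"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)

instance_ids = [
    instance["InstanceId"]
    for reservation in resp["Reservations"]
    for instance in reservation["Instances"]
]

if instance_ids:
    # Stop (not terminate) so the instances can be started again in the morning.
    ec2.stop_instances(InstanceIds=instance_ids)
    print("Stopped:", instance_ids)
```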

Interview: Cyber Security Orchestration Vendor Syncurity Optimizes Cloud Costs with ParkMyCloud

We chatted with JP Bourget, founder and CSO of Syncurity, about how his cybersecurity orchestration company uses ParkMyCloud. 

Hi JP. Can you start off by telling us about Syncurity, what you do, and how big your team is? 

Sure. We’re a cybersecurity orchestration vendor, in the product space of SOAR – security orchestration, automation, and response. What we do is facilitate security alert handling, sometimes called triage, and then use automation to help decide if an alert is concerning and, if necessary, kick off a response process for the security operations center or incident response team. We usually launch these processes with alert polling, as well as run our automated analysis/enrichment as alerts are ingested via security product APIs. 

I’m the founder and CSO. There’s about 25 of us on the team.

What clouds do you use, and how are you using those clouds?

We use Amazon, Azure, Google, Oracle, and Digital Ocean. We do a lot of CI using CircleCI, Travis, and some others.

The reason that we use all those clouds is because we ship images on the different cloud providers for consumption by customers. Our product is subscription-based and we share a private image with our customers, they can then go deploy our product in their environment.

Most of our work is done on Azure VMs and Amazon EC2. We also have another cloud environment, hosted on bare metal servers, that we use for VMware – I don’t get billed per VM guest in that scenario; it’s a per-bare-metal-server cost model. We also now use spot instances quite often, based on ParkMyCloud helping us understand their benefit, even for longer-running instances. 

As for how we’re using them, most of our QA and Proof of Concepts are done in Amazon. Because we do all this automation, we have a huge integration lab up in Amazon. We also do POCs in all the other vendors based on customer requirements.

How did you decide to start using ParkMyCloud? 

We’ve been using ParkMyCloud right from the beginning – we know the team that helped build the product. 

The key benefit of ParkMyCloud for me is that I have about 75 instances at any one time that don’t need to be running all the time because it’s the lab. In some cases, I need to turn on a lab in a fashion that gives me a stack of tools, or I need to run a lab in a fashion where the machines run a schedule. 

There’s certain stuff that is dummy infrastructure or lab infrastructure like windows servers and domains that we want running most of the time, but we turn them off on the weekend. But there are other things that only ever need to be turned on when we’re using them. So what ParkMyCloud gives me is the ability to essentially have an interface that’s multi-cloud for anybody to go in and turn a box on as needed and then automatically turn them off.

How would you describe your experience using ParkMyCloud?

I like being able to see my projected savings right on the platform. The other thing that I really like is the fact that I can see how much a box costs a month instead of hourly. It’s one of those small things that provides huge value. Amazon provides that hourly information but you have to calculate the monthly cost.

We use ParkMyCloud as an alternative to having some users log directly into the AWS console – it’s a lot easier. 

How much have you saved using ParkMyCloud?

Our total savings to date is more than $70,000.

Great to hear – thanks JP! 

More interviews with ParkMyCloud users:

If you use AWS RIs, you need to use the new queuing option

The AWS reserved instance (AWS RI) offerings got a recent upgrade with the release of a “queue” function. This means that you can now purchase reserved instances that, rather than going into effect immediately, are scheduled for a future purchase date. (Yes – despite the fact that RIs have been available for a decade, this is a new feature!)

Back up – what was released? 

If you haven’t used AWS RIs before, it’s worth a brief primer. When you purchase a reservation, you’re not buying a specific instance or even capacity: it’s a billing function. In exchange for a commitment over 1 or 3 years, you get an attractive discount. These discounts are applied on the back end of the billing process, and are allocated against specific instances on an hour-by-hour basis over the course of the month. 
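
As a purely illustrative example (the rates here are hypothetical – check current pricing for real numbers): if an instance costs $0.10 per hour on demand and a 1-year reservation brings the effective rate to about $0.06 per hour, the commitment saves roughly $0.04 × 8,760 hours ≈ $350 per instance per year, or about 40% – provided the instance actually runs around the clock for the whole term.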

There are a few variations within the AWS RI purchasing options, such as the term; how much you pay upfront vs. monthly; the option for them to be scheduled; whether the scope of the discount covers instances in a single region or in a particular availability zone; etc.

More on those options and whether you should actually be using Reserved Instances, in this post. (TL;DR: RIs are the right choice when you have 24×7 long-term production workloads; otherwise they’re usually not.) 

So, the new feature is the option to purchase these reservation discounts to begin on a future date rather than immediately. This is designed to make it easier for users to maintain uninterrupted reserved instance coverage. Previously, at the end of a 1- or 3-year term, many users would be unaware that their reservation had expired and would see a spike in cost… which they may or may not notice. 

How does queuing work?

Now, when planned correctly, you can avoid the lapse of Reserved Instance coverage for your workloads by scheduling a new reservation purchase to go into effect as soon as the previous one expires. The furthest in advance you can schedule a purchase is three years, which is also the longest RI term available. 

Before queuing was available, customers could either go ahead and purchase a new reservation a few hours, days, or weeks before the previous RI was due to expire, or set a reminder to buy a new reservation after the previous one had lapsed. Either way, there was an extra cost – either a window with too many RIs, or one with too few. So it’s easy to see that RI queuing can save you money. Queuing can also save you some hassle, as you no longer have to set reminders and build your daily or weekly schedule around going in to buy a new RI. (Reminiscent of some late-night eBay sessions, waiting for the end of an auction to roll around.)

There are a few limitations. AWS RI purchases can be queued for regional Reserved Instances, but not zonal Reserved Instances. Regional RIs are the broader option as they cover any availability zone in a region, while zonal RIs are for a specific availability zone and actually reserve capacity as well. 

Cancellation is an option: since payment is processed only at the scheduled purchase time in the queue, you can cancel a purchase at any time before it is processed. 
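
In practice, a queued purchase is just a normal reservation purchase with a future purchase time attached. A rough boto3 sketch – the offering and reservation IDs are placeholders (you’d get a real offering ID from describe_reserved_instances_offerings), and this assumes the queued-purchase parameters behave as documented at the time of writing:

```python
import datetime
import boto3

ec2 = boto3.client("ec2")

# Queue a purchase of 2 Reserved Instances to go into effect on a future date.
ec2.purchase_reserved_instances_offering(
    ReservedInstancesOfferingId="00000000-0000-0000-0000-000000000000",  # placeholder
    InstanceCount=2,
    PurchaseTime=datetime.datetime(2021, 1, 1, 0, 0, 0),  # e.g. when the current RI expires
)

# Queued purchases show up alongside your other reservations...
for ri in ec2.describe_reserved_instances()["ReservedInstances"]:
    print(ri["ReservedInstancesId"], ri["InstanceType"], ri["State"])

# ...and can be cancelled any time before the scheduled purchase is processed.
ec2.delete_queued_reserved_instances(
    ReservedInstancesIds=["00000000-0000-0000-0000-000000000000"]  # placeholder RI ID
)
```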

We find it interesting that these are designed as new purchases rather than “renewable” RIs – likely on the idea that users may want to queue an evolving RI type or purchase profile, instead of the same instance type, duration, and payment terms over time.

Beware the AWS RI Black Hole

Of course, the downside to queuing a purchase in advance is that you now have a new commitment to track – and one that may not meet your needs by the time the purchase goes into effect. 

It’s already difficult to shine light on your existing reservations, especially with options in place such as instance size flexibility and the broad applicability of regional RIs.

That’s why ParkMyCloud has released our first support for Reserved Instances this week. You told us that RIs are the next biggest thing that needs optimization help on your cloud bills, and we listened. Now, you can see all your AWS RIs – past, present, and queued future purchases – in one place in ParkMyCloud. Next, we’ll be working on more recommendations and optimization – stay tuned!