The Rise of the Enterprise Cloud Manager

There is a growing job function among companies using public cloud: the Enterprise Cloud Manager. A study of ParkMyCloud users showed that a growing proportion have “cloud,” or the name of their cloud provider such as “AWS,” in their job titles, indicating a growing degree of specialization among the people who manage cloud infrastructure. And in some companies, there is a dedicated role for cloud management – such as an Enterprise Cloud Manager.

Why would you need an Enterprise Cloud Manager?

The world of cloud management is constantly changing and becoming increasingly complex, even for the best cloud manager. The growing adoption of hybrid and multi-cloud environments, as organizations pursue best-of-breed solutions, makes cloud environments more confusing, more expensive, and harder to control. Someone who is not fully versed in this field may not know how to handle problems related to governance, security, and cost control, so it is important to dedicate resources in your organization to cloud management and related cloud job roles. This chart from Gartner shows everything involved in cloud management, and illustrates how many parts need to come together for it to run smoothly.

Having a role in your organization that is dedicated to cloud management allows others, who are not specialized in that field, to focus on their jobs, while also centralizing responsibility.  With the help of an Enterprise Cloud Manager, responsibilities are delegated appropriately to ensure cloud environments are handled according to best practices in governance, security, and cost control.

After all, adopting public cloud infrastructure does not mean you have addressed governance or cost issues. That seems obvious when you consider the sub-industries that have grown up around these problems, but you’d be surprised how often eager adopters assume the technology will do the work, forgetting that cloud management is not a technological problem but a human behavior problem.

And someone has to be there to bring the motivational bagels to the “you really need to turn your instances off” meeting.

A Larger Approach: The Cloud Center of Excellence

Businesses with a presence in the cloud, regardless of their size, should also consider adopting a Cloud Center of Excellence (CCoE) – which, if the resources are available, can function like an entire department of Enterprise Cloud Managers. Essentially, a CCoE brings together cross-functional teams to manage cloud strategy, governance, and best practices, and to serve as cloud leaders for the entire organization.

The role of an Enterprise Cloud Manager or Cloud Center of Excellence (or cloud operations center, or cloud enablement team – whatever you want to call it) is to oversee cloud operations. They know the ins and outs of cloud management, so they are able to create processes for resource provisioning and services. Their focus is on optimizing infrastructure, which helps streamline cloud operations, improve productivity, and optimize cloud costs.

Moreover, the Enterprise Cloud Manager can systematize the foundation of a CCoE with key guiding principles, like those outlined by AWS for its Cloud Center of Excellence.

With the Enterprise Cloud Manager’s leadership, the DevOps, CloudOps, Infrastructure, and Finance teams within the CCoE can ensure that the organization’s diverse business units use a common set of best practices to spearhead their cloud efforts, while maintaining the balanced working relationships, operational efficiency, and innovative thinking needed to achieve organizational goals.

A Note on Job Titles

It’s worth noting that while descriptive, the “Enterprise Cloud Manager” title isn’t necessarily widely adopted. We’ve run across folks with titles like Cloud Manager, Cloud Operations Manager, Cloud Project Manager, Cloud Infrastructure Manager, Cloud Delivery Manager, etc.

If you’re on the job hunt, we have a few other ideas for cloud and AWS jobs for you to check out.

Automation Tools are Essential

With so much going on in this space, you can’t expect one person or even a team to manage all of this by hand – you need automation tools. The great thing is that these tools deliver tangible results that make automation a key component of successful enterprise cloud operations, and they work for companies of any size. Primary users can be people dedicated to this full time, such as an Enterprise Cloud Manager, as well as people managing cloud infrastructure on top of other responsibilities.

Why are these tools important? They provide two main things: visibility into your environment, and the ability to act on what that visibility reveals. (That is, unless you’re willing to let go of the steering wheel and let the platform make the decisions – but most folks aren’t, yet.) Customers who were once managing resources manually are now saving time and money with automation tools. Take a look at the automation tools available through your cloud vendor, as well as third-party tools for cost optimization and beyond. Setting up these tools will lessen the need for routine check-ins and maintenance while keeping your infrastructure optimized.

Do we really need this role?

To put it simply, if you have more than a handful of cloud instances: yes. If you’re small, it may be part of someone’s job description. If you’re large, it may be a center of excellence. 

But if you want your organization to be well informed and up to date, then it is important that you have the organizational roles in place to oversee your cloud operations – an Enterprise Cloud Manager, a CCoE, and automation tools.

EC2 Instance Types Comparison (and how to remember them)

AWS offers a range of EC2 instance types optimized for various purposes. It’s great that they provide so much variety, but of course, it means one more thing that you have to learn. It’s worth taking the time to do so, as ⅔ of IaaS spend goes toward compute – that’s a lot of EC2.

Check out a brief breakdown in this video, which also compares EC2 purchasing options:

Or, read on for a look at each of the AWS instance types. Remember that within each type, you’ll still need to choose the AWS instance size that suits your specific needs. Additionally, older generations within each instance type are available for purchase – for example, c5 is the latest “c” instance, but c4 and c3 are still available. Since the newer generations tend to perform better at a lower price, you’ll only want to use the older types if you have an AMI or other dependency. The differences matter for some users… but you probably already know who you are.

Note: a version of this blog was originally published in July 2018. It has been rewritten and updated for 2020. New EC2 instance types since our last writeup include A1, T3, z1d, high memory, R5, G4, and F1. 

Quick EC2 Instance Info 

This chart shows a quick summary of what we’ll cover. We’re including a brief description and mnemonic for each (hopefully helpful if you’re studying for an AWS certification!)

EC2 Instance Info

If you’ve taken a look at AWS training materials, you may have seen a couple of overall acronyms to remember all of these – perhaps Dr McGiFT Px or FIGHT Dr McPX. Whether these acronyms are useful at all is perhaps a point of discussion, but to ensure that all the instance types above are in your list, we suggest:

  • Fight Czar MXPD 
  • Fright Camp DXZ
  • March Gift PZXD

(and don’t forget high memory and Inf!)

General Purpose

These general purpose AWS EC2 instance types are a good place to start, particularly if you’re not sure what type to use. There are three general purpose types.

t instance type

The t3 family is a burstable instance type. If you have an application that needs to run with some basic CPU and memory usage, you can choose t3. It also works well for applications that are used intermittently. When the resource is idle, you’ll earn CPU credits, which you’ll spend when the resource is busy. It’s useful for workloads that come and go, such as websites or development environments. While generally inexpensive, make sure you understand how CPU credits work before deploying these instances. There’s a little bit of math involved, and they may not be as cheap as they look at first glance.

Make sure you also understand the difference between t3 and the older t2: t3 instances run in “unlimited mode” by default, so instead of throttling down to baseline CPU when your instance runs out of credits, you pay for overages.

For each of the EC2 types we cover here, we’ll also add a mnemonic to help you remember the purpose of each instance type.

Mnemonic: t is for tiny or turbo.

m instance type

The m5 instance type is similar, but for more consistent workloads. It has a nice balance of CPU, memory, and disk. It’s not hard to see why almost half of EC2 workloads are on “m” instances. In addition to m5, you also have the option of m6g, which are powered by Arm-based AWS Graviton2 processors, making them more cost-efficient. There’s also m5a, m5n, and m4 – most of which are safe to ignore unless you have a specific use case for one of the other processors besides m5’s Intel Xeon Platinum 8175 processors. If you aren’t sure what to choose, m5 is the most versatile of all the Amazon instance types. 

Mnemonic: m is for main choice or happy medium.

a1 instance type

The a1 instance type was announced in late 2018 and can be a less expensive option than other EC2 instance types. These instances are suited for scale-out workloads such as web servers, containerized microservices, caching fleets, distributed data stores, and development environments. They are powered by Arm processors and suited for Arm-based workloads.

Mnemonic: a is for Arm processor

Compute Optimized

c instance type

The c5 instance type has a high ratio of compute/CPU versus memory. If you have a compute-intensive application – maybe scientific modelling, intensive machine learning, or multiplayer gaming – these instances are a good choice. There is also the c5d option, which is SSD-backed. See also the c5n, which has up to 100 Gbps of network bandwidth and increased memory compared to equivalent c5 instances. The c4 family is also still available.

Mnemonic: c is for compute (at least that one’s easy!)

Memory Optimized

r instance family

The r instance family is memory-optimized, which you might use for in-memory databases, real-time processing of unstructured big data, or Hadoop/Spark clusters. You can think of it as a kind of midpoint between the m5 and the x1e. In addition to r5, there are r5a which deliver lower cost per GiB memory and r5n which have higher bandwidth for applications that need improved network throughput and packet rate performance.

Mnemonic: r is for RAM.

x1 instance family

The x1 family has a much higher ratio of memory, so this is a good choice if you have a full in-memory application or a big data processing engine like Apache Spark or Presto. The x1e variant is optimized for high-performance databases, in-memory databases, and other memory-intensive enterprise applications.

Mnemonic: “x is for xtreme,” as in “xtreme RAM,” seems to be generally accepted, but we think this is a bit weak. If you have any suggestions, comment below.

High Memory instance family

We’re not sure why these didn’t get an alphabet soup name like the rest of the AWS instances, but at least it’s easy to remember and understand. As you might guess, high memory instances run large in-memory databases, including production deployments of SAP HANA. 

Mnemonic: we’ll leave this one up to you.

z1d instance family

The z1d instances combine high compute capacity with a high memory footprint. They have a sustained core frequency of up to 4.0 GHz, the fastest of AWS’s offerings. These are best for electronic design automation (EDA) and some relational database workloads with high per-core licensing costs.

Mnemonic: z is for zippy 

Accelerated Computing

p instance type

If you need GPUs on your instances, p3 instances are a good choice. They are useful for video editing, and AWS also lists use cases of “computational fluid dynamics, computational finance, seismic analysis, speech recognition, autonomous vehicles” – so it’s fairly specialized. p2 instances are also available.

Mnemonic: p is for pictures (graphics).

Inf1 instance type

The Inf1 instances are a specialized EC2 type for machine learning inference applications, such as recommendation engines, forecasting, image and video analysis, advanced text analytics, document analysis, voice, conversational agents, translation, transcription, and fraud detection.

Mnemonic: inf is for inference

g instance type

The g instance type uses Graphics Processing Units (GPUs) to accelerate graphics-intensive workloads, and is also designed to accelerate machine learning inference. This could include adding metadata to an image, automated speech recognition, and language translation, as well as graphics workstations, video transcoding, and game streaming in the cloud.

g4 is the latest family, and g3 are available as well.

Mnemonic: g is for graphics or GPU

F1 instance type

f1 instances offer customizable hardware acceleration with field programmable gate arrays (FPGAs) – hence the “f”. Applications could include genomics research, financial analysis, and real-time video processing.

Mnemonic: f is for FPGA 

Storage Optimized

i3 instance type

The i3 instance type is similar to h1, but it is SSD-backed, so if you need an NVMe drive, choose this type. Use it for NoSQL databases, in-memory databases, Elasticsearch, and more. The i3en option has higher network bandwidth with Elastic Network Adapter (ENA)-based enhanced networking.

Mnemonic: i is for IOPS.

d2 instance type

d2 instances have an even higher ratio of disk to CPU and memory, which makes them a good fit for Massively Parallel Processing (MPP), MapReduce and Hadoop distributed computing, and similar applications.

Mnemonic: d is for dense.

h1 instance type

The h1 type is HDD backed, with a balance of compute and memory. You might use it for distributed file systems, network file systems, or data processing applications.

Mnemonic: h is for HDD.

What EC2 instance types should you use?

As AWS has continued to add options to EC2, there are now EC2 instance types for almost any application. If you have pricing comparison questions, run them through the AWS monthly calculator. And if you’re not sure what to pick, starting with t3 or m5 is generally the way to go.

Why AWS CPU Credits Mean T-Series Instances Aren’t as Cheap as You Think

AWS CPU credits are unique to T-series instances – and they can be a bit tricky to figure out. Whether you’re using the AWS free tier or just trying to use the smallest EC2 compute instance you can, you’ll need to keep track of these credits. They are both generated and used by the T2 and T3 instance families to determine how much CPU power you can actually use on those EC2 instances. This can be confusing if you aren’t expecting your virtual machine to have its CPU power throttled, or if you’re wondering why the cost is much higher than you thought it would be.

T-series History

AWS first released a “burstable” instance type in the form of the t1.micro instance size in 2010, which was four years after the first EC2 instance size was released (m1.small in 2006, for you historians). Up until 2010, new instance sizes had always been bigger than the m1.small size, but there was demand for a VM size that could accommodate low-throughput or inconsistent workloads.

The t1.micro was the only burstable instance size for another four years, until the t2.medium was released in 2014. Soon, there was a whole range of t2 instances to cover the use case of servers that were low-powered while idle, but could have lots of potential compute resources available for the couple minutes each hour they were needed. In 2018, AWS introduced the t3 family that uses more modern CPUs and the AWS Nitro system for virtualization.

AWS CPU Credits 101

The key reason why T-series instances have a lower list price than corresponding M-series instances (in standard mode, more on that later) is the CPU credits that are tracked and used on each resource. The basic premise is that an idle instance earns credits, while a busy instance spends those credits. A “credit” corresponds to 1 minute’s worth of full 100% CPU usage, but this can be broken down in different ways if the usage is less than 100%.  For instance, 10% of CPU usage for 10 minutes also uses 1 credit. Each T-series machine size not only has a number of CPUs available, but also earns credits at different rates.

Here’s where the math starts getting a little tricky. A t2.micro instance earns 6 credits per hour with 1 available CPU. If you run that instance at 10% utilization for a full hour, it’ll spend 6 credits per hour (or 1 credit every 10 minutes). This means that any time spent under 10% utilization is a net increase in CPU credits, while any time spent above 10% utilization is a net decrease in CPU credits. A t3.large instance has 2 CPUs and earns 36 credits per hour, which means the balancing point where the net credit use is zero will be at 30% utilization per CPU.
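The earn-versus-spend arithmetic above can be sketched in a few lines. This is a minimal illustration, not an AWS API: the earn rates (6 credits/hr for t2.micro, 36 credits/hr for t3.large) come from the examples above, and one credit equals one vCPU-minute at 100% utilization.

```python
# Minimal sketch of the CPU-credit math described above.
# One credit = one vCPU-minute at 100% utilization.

def net_credits_per_hour(earn_rate, vcpus, avg_util_pct):
    """Net CPU credits gained (+) or spent (-) over one hour."""
    # Spending: an hour at avg_util_pct across all vCPUs costs
    # 60 * vcpus * (avg_util_pct / 100) credits.
    spent = 60 * vcpus * avg_util_pct / 100
    return earn_rate - spent

# t2.micro (1 vCPU, earns 6/hr) breaks even at 10% utilization:
print(net_credits_per_hour(6, 1, 10))    # 0.0
# t3.large (2 vCPUs, earns 36/hr) breaks even at 30% per vCPU:
print(net_credits_per_hour(36, 2, 30))   # 0.0
```

Anything below the break-even utilization banks credits; anything above it draws the balance down.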

So what happens when you run out of credits or never use your credits?

Standard Mode vs. Unlimited Mode

One of the differences between the t2 family and the t3 family is the default way each handles running out of credits. The t2 family defaults to Standard Mode, which means that once the instance has run out of credits to use, the CPU is throttled to the baseline value we calculated above (so 10% for t2.micro) and will continue maxing out at that value until credits have built back up. In practice, this means that your process or application that has burst up to use a lot more CPU than normal will soon be slow and unusable if the load remains high.

In 2017, AWS introduced Unlimited Mode as an option for t2 instances – and later, in 2018, as the default for t3 instances when they were introduced. Unlimited mode means that instead of throttling down to the baseline CPU when your instance runs out of credits, you can continue to run at a high CPU load and just pay for the overages. This price is 5¢ per CPU hour for Linux and 9.6¢ per CPU hour for Windows. In practice, this means that a t2.micro that has run out of credits and is running at 35% CPU utilization for a full 24 hours would cost an additional 30¢ that day on top of the normal 27.84¢ for 24hr usage, meaning the price is more than doubled.
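The t2.micro example above can be reproduced with a short calculation. This is a sketch under the figures stated in the text (5¢ per vCPU-hour Linux overage, 10% baseline, $0.0116/hr on-demand); check current AWS pricing before relying on these numbers.

```python
# Sketch of the Unlimited Mode overage cost from the t2.micro example above.
# Rates are taken from the text, not queried from AWS.

def unlimited_overage_cents(avg_util_pct, baseline_pct, hours,
                            vcpus=1, cents_per_vcpu_hour=5):
    """Unlimited-mode overage cost in cents for sustained CPU above baseline."""
    surplus_vcpu_hours = max(avg_util_pct - baseline_pct, 0) / 100 * vcpus * hours
    return surplus_vcpu_hours * cents_per_vcpu_hour

on_demand_cents = 1.16 * 24                        # ~27.84 cents for a full day
overage_cents = unlimited_overage_cents(35, 10, 24)
print(overage_cents)                               # 30.0 -- more than doubling the bill
```

At 35% sustained utilization, the 25 points above baseline add up to 6 vCPU-hours of overage per day, which is how a "cheap" burstable instance ends up costing more than its list price suggests.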

Using T-series Instead of M-series

These overage charges for the Unlimited Mode of t2 and t3 instances mean that while the list price of the instance is much cheaper than corresponding m4 and m5 instances, you need to figure out whether your workload’s utilization pattern makes sense for a burstable instance family. For example, an m5.large in us-east-1 costs 9.6¢/hr, and a t3.large with similar specs costs 8.32¢/hr with a 30% CPU baseline. If your t3.large server will consistently run above 55.6% CPU for the hour, then the m5.large is actually cheaper.
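The break-even point above can be derived from the price gap. A hedged sketch, using the prices quoted in the text (us-east-1, Linux); note the 55.6% figure appears to assume overage accruing on a single vCPU, which the calculation mirrors.

```python
# Break-even utilization for t3.large (Unlimited Mode) vs. m5.large,
# using the prices quoted above. Illustrative only.

M5_LARGE_CENTS_HR = 9.6
T3_LARGE_CENTS_HR = 8.32
BASELINE_PCT = 30
OVERAGE_CENTS_PER_VCPU_HR = 5

def breakeven_util_pct(vcpus_bursting=1):
    """CPU % at which t3.large (with overage) matches m5.large hourly cost."""
    headroom = M5_LARGE_CENTS_HR - T3_LARGE_CENTS_HR          # ~1.28 cents/hr
    extra_pct = headroom / (OVERAGE_CENTS_PER_VCPU_HR * vcpus_bursting) * 100
    return BASELINE_PCT + extra_pct

print(round(breakeven_util_pct(), 1))   # 55.6
```

In other words, the 1.28¢/hr list-price savings buys about 25.6 percentage points of overage headroom above the 30% baseline before the m5.large becomes the cheaper choice.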

When to Stop T-series and When to Let Them Run

One perk of using the t2 instances in Standard mode is that each time you start the server, you receive 30 launch credits that allow a high level of CPU usage when you first start the instance from a stopped state. These launch credits are tracked separately from accrued credits and are used first, so servers that only need to run short-lived processes when first starting can take advantage of this fact. The downside of stopping t2 servers is that accrued credits are lost when you stop the instance.

On the other hand, t3 servers persist earned credits for 7 days after stopping the instance, but don’t earn launch credits when they are first started. This is useful to know for longer-running processes that don’t have huge spikes, as they can build up credits but you don’t need to worry about losing the credits if you stop the server. 
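The stop/start rules from the two paragraphs above can be condensed into a small lookup. A toy sketch, not an AWS API: the 30 launch credits and the persist-vs-lose behavior come from the text (the 7-day persistence window for t3 is ignored here for simplicity).

```python
# Toy lookup condensing the t2 vs. t3 stop/start credit behavior above.

CREDIT_RULES = {
    "t2": {"launch_credits_on_start": 30, "credits_survive_stop": False},
    "t3": {"launch_credits_on_start": 0,  "credits_survive_stop": True},
}

def credits_after_restart(family, accrued):
    """Credits available right after a stop/start cycle (7-day window ignored)."""
    rules = CREDIT_RULES[family]
    kept = accrued if rules["credits_survive_stop"] else 0
    return kept + rules["launch_credits_on_start"]

print(credits_after_restart("t2", 100))  # 30: accrued credits lost, launch credits granted
print(credits_after_restart("t3", 100))  # 100: accrued credits persist, no launch credits
```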

At ParkMyCloud, we specialize in scheduling servers and databases to turn off on a schedule, which is perfect for non-production servers. We find that lots of users have t2 and t3 instances for these dev and test workloads, but want to know what happens to credits if you park those servers overnight. As we discussed, accrued AWS CPU credits go away when you stop a t2 instance in standard mode (though you get launch credits on start), but persist for 7 days on t3 instances. Knowing this, you can pick the right instance size for the workload you’re running and confidently save money using ParkMyCloud.

  • Best for non-production instances that have a quick burst of usage when starting = T2 instance with ParkMyCloud parking schedule
  • Best for non-production instances with unpredictable, but sporadic spikes = T3 instance with ParkMyCloud parking schedule

Try it for free to see how we can make the cost of your t2 and t3 servers even lower.

7 Favorite AWS Training Resources

When it comes to AWS training resources, there’s no shortage of information out there. Considering the wide range of videos, tutorials, blogs, and more, it’s hard to know where to look or how to begin. Finding the best resource depends on your learning style, your needs for AWS, and getting the most updated information available. Whether you’re just getting started in AWS or consider yourself an expert, there’s an abundance of resources for every learning level. With this in mind, we came up with our 7 favorite AWS training resources, sure to give you the tools you need to learn AWS:

1. AWS Self-Paced Labs

What better way to learn than at your own pace? AWS self-paced labs give you hands-on learning in a live AWS environment, with real AWS cloud services and scenarios you would encounter in the cloud. There are two different ways to learn with these labs: you can either take an individual lab or follow a learning quest. Individual labs are intended to help users get familiar with an AWS service in as little as 15 minutes. Learning quests guide you through a series of labs so you can master any AWS scenario at your own pace. Once completed, you will earn a badge that you can boast on your resume, LinkedIn, website, etc.

Whatever your experience level may be, there are plenty of different options offered. Among the recommended labs you’ll find an Introduction to Amazon Elastic Compute Cloud (EC2), and for more advanced users, a lab on Maintaining High Availability with Auto Scaling (for Linux).

2. AWS Free Tier

Sometimes the best way to learn something is by jumping right in. With the AWS Free Tier, you can try AWS services for free. This is a great way to test out AWS for your business, or for the developers out there, to try services like AWS CodePipeline, AWS Data Pipeline, and more. While you still get a hands-on opportunity to learn a number of AWS services, the only downside is that there are certain usage limits. You can track your usage with a billing alarm to avoid unwanted charges, or you can try ParkMyCloud and park your instances when they’re not in use so you get the most out of your free tier experience. In fact, ParkMyCloud started its journey by using AWS’s free tier!

3. AWS Documentation and Whitepapers

AWS Documentation is like a virtual encyclopedia of tools, terms, training, and everything AWS. You’ll find case studies, tutorials, cloud computing basics, and so much more. This resource is a one-stop-shop for all of your AWS documentation needs, whether you’re a beginner or advanced user. No matter where you are in your AWS training journey, AWS documentation is always a useful reference and certainly deserves a spot in your bookmarks. 

Additionally, you’ll find whitepapers that give users access to technical AWS content that is written by AWS and individuals from the AWS community, to help further your knowledge of their cloud. These whitepapers include things from technical guides, reference material, and architecture diagrams.

4. YouTube

So far, we’ve gone straight to the source for 3 out of 7 of our favorite AWS training resources. Amazon really does a great job of providing hands-on training, tutorials, and documentation for users with a range of experience. However, YouTube opens up a whole new world of video training that includes contributions from not only Amazon, but other great resources as well. Besides the obvious Amazon Web Services channel, there are also popular and highly rated videos by Edureka, Simplilearn, Eli the Computer Guy, and more.

5. Bloggers

As cloud technology usage continues to expand and evolve, blogs are a great way to stay up to speed with AWS and the world of cloud computing. Of course, in addition to labs, a free tier, extensive documentation, and their own YouTube channel, AWS also has a blog. Since AWS actually has a number of blogs that vary by region and technology, we recommend starting with Jeff Barr – Chief Evangelist at Amazon Web Services and primary contributor. Edureka, mentioned among our recommended YouTube channels, also has a blog that covers plenty of AWS topics. The CloudThat blog is an excellent resource for AWS and all things cloud, and was co-founded by Bhavesh Goswami, a former member of the AWS product development team. Additionally, AWS Insider is a great source for all things AWS: blogs, webcasts, how-tos, tips, tricks, news articles, and even more hands-on guidance for working with AWS. If you prefer newsletters straight to your inbox, check out Last Week in AWS and Inside Cloud.

6. Online Learning Platforms

As public cloud computing continues to grow – and AWS continues to dominate the market – people have become increasingly interested in this CSP and what it has to offer. In the last 8-10 years, two massive learning platforms have emerged: Coursera and Udemy. These platforms offer online AWS courses, specializations, training, and degrees. The abundance of courses on these platforms can help you learn all things AWS and give you a wide array of resources to prepare for different AWS certifications and degrees.

7. GitHub

GitHub is a developer platform where users work together to review and host code, build software, and manage projects. It hosts a number of materials that can help further your AWS training. In fact, here’s a great list of AWS training resources that can help you prepare for an Amazon Cloud certification. The great thing about this site is the collaboration among its users: the large community brings together people from all different backgrounds who share knowledge from their own specialties and experiences. With access to everything from ebooks, video courses, free lectures, and sample tests, posts like these can help you get on the right certification track.


There’s plenty of information out there when it comes to AWS training resources. We picked our 7 favorite resources for their reliability, quality, and range of information. Whether you’re new to AWS or consider yourself an expert, these resources are sure to help you find what you’re looking for.

AWS vs Azure vs Google Cloud Market Share 2020: What the Latest Data Shows

Q4 2019 earnings are in for the ‘big three’ cloud providers and you know what that means – it’s time for an AWS vs Azure vs Google Cloud market share comparison. Let’s take a look at all three providers side-by-side to see where they stand.

Note: a version of this post was originally published in April 2018 and 2019. It has been updated for 2020.

AWS vs. Azure vs. Google Cloud Earnings

To get a sense of the AWS vs Azure vs Google Cloud market share breakdown, let’s take a look at what each cloud provider’s reports shared.

AWS 

Amazon reported Amazon Web Services (AWS) revenue of $9.95 billion for Q4 2019, compared to $7.4 billion for Q4 2018. AWS revenue grew 34% in the quarter, compared to a year earlier.

Across the business, Amazon’s quarterly sales increased to $87.4 billion, beating predictions of $86.02 billion. AWS has been a huge contributor to this growth, making up 11% of total Amazon sales for the quarter. AWS only continues to grow, and to bolster the retail giant time after time.

One thing to keep in mind: you’ll see a couple of headlines pointing out that revenue growth is down, quoting that 34% number and comparing it to previous quarters’ growth rates, which peaked at 81% in 2015. However, that metric is of questionable value as AWS continues to increase revenue at this enormous scale, dominating the market (as we’ll see below). 

Azure

While Amazon specifies AWS revenue, Microsoft only reports Azure’s growth rate: 62% revenue growth year over year. This time last year, growth was reported at 76%. As mentioned above, comparing growth rates is interesting, but not necessarily as useful a metric as actual revenue numbers – which we don’t have for Azure alone.

Here are the revenue numbers Microsoft does report. Azure is under the “Intelligent Cloud” business, which grew 27% to $11.9 billion. The operating group also includes server products and cloud services (30% growth) and Enterprise Services (6% growth). 

The lack of specificity around Azure frustrates many pundits as it simply can’t be compared directly to AWS, and inevitably raises eyebrows about how Azure is really doing. Of course, it also assumes that IaaS is the only piece of “cloud” that’s important, but then, that’s how AWS has grown to dominate the market. 

The cloud provider scored a victory in October when it won the $10 billion JEDI cloud computing contract (although AWS is actively protesting the award with claims of political interference).

Google Cloud

This quarter, Google broke out revenue reporting for its cloud business for the first time. For the fourth quarter, Google Cloud generated $2.6 billion in revenue, a growth of 53% from the previous year. For 2019 as a whole, Google Cloud brought in $8.9 billion in revenue, which is less than AWS generated in the fourth quarter alone.

Google CEO Sundar Pichai stated on the earnings report conference call, “The growth rate of GCP was meaningfully higher than that of Cloud overall, and GCP’s growth rate accelerated from 2018 to 2019.”

CFO Ruth Porat also highlighted Google Cloud Anthos, as Google leans into enabling the multi-cloud reality for its customers, something AWS and Azure have avoided.

Cloud Computing Market Share Breakdown – AWS vs. Azure vs. Google Cloud

When we originally published this blog in 2018, we included a market share breakdown from analyst Canalys, which reported AWS in the lead owning about a third of the market, Microsoft in second with about 15 percent, and Google sitting around 5 percent.

In 2019, they reported an overall growth in the cloud infrastructure market of 42%. By provider, AWS had the biggest sales gain with a $2.3 billion YOY increase, but Canalys reported Azure and Google Cloud with bigger percentage increases.

As of February 2020, Canalys reports AWS with 32.4% of the market, Azure at 17.6%, Google Cloud at 6%, Alibaba Cloud close behind at 5.4%, and other clouds with 38.5%. 

Ultimately, it seems clear that in the case of AWS vs Azure vs Google Cloud market share – AWS still has the lead.

Bezos has said, “AWS had the unusual advantage of a seven-year head start before facing like-minded competition. As a result, the AWS services are by far the most evolved and most functionality-rich.”

Our anecdotal experience talking to cloud customers often finds that true, and it says something that Microsoft isn’t breaking down their cloud numbers just yet, while Google leans into multi-cloud.

AWS remains far in the lead for now. With that said, it will be interesting to see how the actual numbers play out, especially as Alibaba catches up.

AWS Compute Optimizer Review: Not Quite Rightsized for Rightsizing


In December, AWS announced a new service called AWS Compute Optimizer that provides recommendations with the goal of properly sizing EC2 virtual machines. Rightsizing is one of AWS’s five listed pillars of cost optimization, and it’s good to see AWS following the trend of cloud providers making it easier for customers to optimize for cost and performance. Notably, this is not the first “rightsizing tool” they’ve promoted: early last year, they pushed what was essentially a collection of Python scripts, published in the AWS Solutions Portal as “AWS Right Sizing”. 

As cloud cost optimizers here at ParkMyCloud, rightsizing is high on the list of optimization strategies we focus on. The ParkMyCloud platform offers rightsizing recommendations and actions, along with two other cost optimization pillars: “Increase Elasticity” through scheduled shutdown of idle resources, and “Measure, monitor, and improve” through cost and savings reports and an RBAC-enabled user portal. Let’s take a look at what the AWS Compute Optimizer offers, and how it compares to ParkMyCloud’s rightsizing.

AWS Compute Optimizer Overview

The AWS Compute Optimizer service generates size-change recommendations for your existing EC2 servers, including those in Auto Scaling groups. Each EC2 virtual machine can receive up to 3 recommendations for different families and sizes, along with the performance risk and cost associated with each option. While browsing the options, the interface shows what performance would have looked like over the past 2 weeks if you had been running on the selected instance size instead of the current one, which is useful for weighing the options against your organization’s risk profile. However, there is no direct way to take the rightsizing action, so you must adjust the instance settings manually.
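To make the recommendation structure concrete, here is a minimal sketch of picking the lowest-risk option from a Compute Optimizer-style payload. The field names mirror the service’s GetEC2InstanceRecommendations response, but the sample data below is illustrative, not real API output:

```python
# Hypothetical sketch: choosing among Compute Optimizer-style options.
# Field names follow the GetEC2InstanceRecommendations response shape;
# the sample payload itself is invented for illustration.

def best_option(recommendation):
    """Return the option with the lowest performance risk (ties broken by rank)."""
    options = recommendation["recommendationOptions"]
    return min(options, key=lambda o: (o["performanceRisk"], o["rank"]))

sample = {
    "currentInstanceType": "m5.2xlarge",
    "finding": "OVER_PROVISIONED",
    "recommendationOptions": [
        {"instanceType": "m5.xlarge", "performanceRisk": 1.0, "rank": 1},
        {"instanceType": "t3.xlarge", "performanceRisk": 2.0, "rank": 2},
        {"instanceType": "m5.large",  "performanceRisk": 3.0, "rank": 3},
    ],
}

print(best_option(sample)["instanceType"])  # m5.xlarge
```

In practice you would fetch the real payload with the compute-optimizer API and then, because the service takes no action itself, apply the chosen size manually.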

AWS Compute Optimizer is free of charge and available on all AWS accounts regardless of support level, though you must opt in before recommendations are generated. A major limiting factor is availability: as of February 4, 2020, AWS Compute Optimizer is offered in 16 regions and supports the M, C, R, T, and X instance families. It also uses only the past 2 weeks of CloudWatch data to generate recommendations, a small window that can produce odd recommendations if those two weeks include any anomalies. 
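Why does a 2-week window matter? A quick sketch with invented utilization numbers (not AWS data) shows how a single anomalous day skews a short window’s average far more than a long one’s:

```python
# Illustrative sketch of the short-window problem: a one-day load spike
# inflates the 2-week average a rightsizing tool sees, while a longer
# window mostly washes it out. All numbers are hypothetical.

def mean(xs):
    return sum(xs) / len(xs)

quiet = 5.0    # hourly CPU utilization (%) on a normal day
spike = 90.0   # hourly CPU utilization (%) during a one-day batch job

two_weeks = [quiet] * (13 * 24) + [spike] * 24                # 14 days, 1 anomalous
twenty_four_weeks = [quiet] * (24 * 7 * 24 - 24) + [spike] * 24

print(round(mean(two_weeks), 1))          # 11.1 -- spike more than doubles the average
print(round(mean(twenty_four_weeks), 1))  # 5.5  -- spike barely moves the average
```

A tool looking only at the 2-week average could easily conclude the instance needs more headroom than its steady-state workload warrants.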

If your EC2 instances line up with this subset of instance types and regions, then the AWS Compute Optimizer can provide some suggestions for cost savings. However, if your needs are a little more diverse or robust, read on.

ParkMyCloud Rightsizing Overview

ParkMyCloud has offered scheduling of idle cloud resources since 2015. Last year we announced a major advancement in the platform’s cost optimization capabilities with the release of Rightsizing. 

Similar to the AWS Compute Optimizer, ParkMyCloud’s Rightsizing offers up to 3 recommendations for different sizes your instances could be, based on CloudWatch data. Additionally, ParkMyCloud’s Rightsizing offers several advantages:

  • ParkMyCloud is multi-cloud, multi-account, and multi-region in a single pane of glass, so you can view recommendations across all of your cloud accounts in one place (including all AWS regions, not just those listed above, as well as Azure and Google Cloud).
  • ParkMyCloud can take the Rightsizing action for you once you accept a recommendation, including scheduling that resize action for a future time (such as during a maintenance window).
  • ParkMyCloud’s recommendations are based on data from a period of up to 24 weeks, providing much more robust recommendations than the 2-week CloudWatch data set used by Compute Optimizer. 
  • ParkMyCloud makes recommendations for and resizes RDS databases, including Aurora instances. RDS databases cost on average 75% more than EC2 instances, making this a significant opportunity for cost savings.
  • All AWS instance sizes are supported, not just the M, C, R, T, and X families.
  • Users can reject a recommendation and give an explanation, so administrators know why actions weren’t taken.
  • Savings from Rightsizing (and parking) are tracked and reported in ParkMyCloud, so you can show management or the CFO just how much money you’re saving the company.
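To give a feel for what tracked rightsizing savings look like, here is a back-of-the-envelope sketch. The hourly rates are assumed on-demand prices for illustration, not quoted from AWS pricing:

```python
# Rough savings estimate for a single rightsizing action.
# The hourly rates below are assumptions for illustration;
# check current AWS on-demand pricing before relying on them.

HOURS_PER_MONTH = 730  # common approximation for a month of always-on compute

def monthly_savings(current_rate, target_rate, hours=HOURS_PER_MONTH):
    """Estimated monthly savings from moving to a cheaper instance size."""
    return (current_rate - target_rate) * hours

# Assumed rates: m5.xlarge at $0.192/hr down to m5.large at $0.096/hr
print(f"${monthly_savings(0.192, 0.096):.2f}/month saved per instance")
```

Multiply that across a fleet of over-provisioned instances and the case for tracking (and acting on) rightsizing recommendations makes itself.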


Optimize Your Rightsizing 

The AWS Compute Optimizer is a great feature that AWS offers its cloud users for free, but its limitations and the inability to act directly on recommendations make it less useful for serious cost optimization. ParkMyCloud’s features make it the right choice for saving money on your cloud bill while optimizing performance, and the free trial makes it easy to get started today. Feel free to contact us if you have any questions.