Google Sustainability is an effort that ranges across the company’s business, from Global Fishing Watch to environmental consciousness in the supply chain. Cloud computing has been a major draw on global energy in recent years: the amount of computing done in data centers more than quintupled between 2010 and 2018. Yet the amount of energy consumed by the world’s data centers grew only six percent during that period, thanks to improvements in energy efficiency. That’s still a lot of power, which is why Google’s sustainability efforts for data centers and cloud computing are especially important.
Google Cloud Sustainability Efforts – As Old as Their Data Centers
Reducing energy usage has been an initiative for Google for more than 10 years. Google has been carbon neutral since 2007, and 2019 marked the third year in a row that it matched its energy usage with 100 percent renewable energy purchases. Google’s innovation in the data center market also comes from building facilities from the ground up instead of buying existing infrastructure, and from using machine learning to monitor and improve power usage effectiveness (PUE) and find new ways to save energy in its data centers.
When comparing the big three cloud providers in terms of sustainability efforts, AWS is by far the largest source of carbon emissions from the cloud globally, due to its dominance. However, AWS’s sustainability team is investing in green energy initiatives and has committed to an ambitious goal of 100% renewable energy use by 2040, to become as carbon-neutral as Google has been. Microsoft Azure, on the other hand, has run on 100 percent renewable energy since 2014 and is considered a low-carbon electricity consumer, in part because it runs less of the world’s infrastructure than Amazon or Google.
Nonetheless, data centers from the big three cloud providers, wherever they are, all run on electricity. How the electricity is generated is the important factor in whether they are more or less favorable for the environment. For Google, reaching 100% renewable energy purchasing on a global and annual basis was just the beginning. In addition to continuing their aggressive move forward with renewable energy technologies like wind and solar, they wanted to achieve the much more challenging long-term goal of powering operations on a region-specific, 24-7 basis with clean, zero-carbon energy.
Why Renewable Energy Needs to Be the Norm for Cloud Computing
It’s no secret that cloud computing is a drain on resources, consuming roughly three percent of all electricity generated on the planet. That’s why it’s important for Google and other cloud providers to be part of the solution to global climate change. Renewable energy is an important element, as is matching the energy used by operations and helping to create pathways for others to purchase clean energy. However, it’s not just about fighting climate change. Purchasing energy from renewable resources also makes good business sense, for two key reasons:
- Renewables are cost-effective – The cost to produce renewable energy technologies like wind and solar has come down precipitously in recent years. By 2016, the levelized cost of wind had come down 60% and the levelized cost of solar had come down 80%. In fact, in some areas, renewable energy is the cheapest form of energy available on the grid. Reducing the cost to run servers reduces the cost for public cloud customers – and we’re in favor of anything that does that.
- Renewable energy inputs like wind and sunlight are essentially free – Having no fuel input for most renewables allows Google to eliminate exposure to fuel-price volatility, which is especially helpful when managing a global portfolio of operations in a wide variety of markets.
Google Sustainability in the Cloud Goes “Carbon Intelligent”
In keeping with its goal of having data centers consume more energy from renewable resources, Google recently announced that it will also time-shift workloads to take advantage of these resources and make data centers run harder when the sun shines and the wind blows.
“We designed and deployed this first-of-its-kind system for our hyperscale (meaning very large) data centers to shift the timing of many compute tasks to when low-carbon power sources, like wind and solar, are most plentiful,” Google announced.
Google’s latest advancement in sustainability is a newly developed carbon-intelligent computing platform that works by using two forecasts – one indicating the future carbon intensity of the local electrical grid near its data centers, and another of its own capacity requirements – and using that data to “align compute tasks with times of low-carbon electricity supply.” The result is that workloads run when Google believes it can do so while generating the lowest possible CO2 emissions.
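Google hasn’t published the internals of the platform, but the core idea – placing flexible work in the hours the grid forecast says are cleanest – can be sketched in a few lines of Python. All numbers and the task model here are made up for illustration:

```python
# Toy sketch of carbon-aware scheduling: given an hourly carbon-intensity
# forecast (gCO2/kWh) and a number of flexible task-hours to place, run the
# tasks in the lowest-carbon hours. Forecast values are hypothetical.

def schedule_flexible_tasks(carbon_forecast, task_hours):
    """Return the hour indices in which flexible tasks should run."""
    # Rank hours from cleanest to dirtiest, then take as many as needed.
    ranked_hours = sorted(range(len(carbon_forecast)),
                          key=lambda h: carbon_forecast[h])
    return sorted(ranked_hours[:task_hours])

# Hypothetical 8-hour forecast: solar output peaks mid-window, so carbon
# intensity dips in hours 3-5.
forecast = [450, 420, 380, 210, 180, 230, 400, 440]
print(schedule_flexible_tasks(forecast, 3))  # -> [3, 4, 5]
```

The real system also weighs its own capacity forecast and job deadlines, but the greedy "fill the cleanest hours first" shape is the essence of time-shifting.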
The carbon-intelligent computing platform’s first version focuses on shifting tasks to different times of the day, within the same data center. But Google already has plans to expand its capability: in addition to shifting tasks in time, the platform will also move flexible compute tasks between data centers so that more work is completed when and where doing so is more environmentally friendly. As the platform continues to generate data, Google will document its research and share it with other organizations in hopes that they can develop similar tools and follow suit.
Forecasting with artificial intelligence and machine learning is a powerful combination, and Google is using it in this platform to anticipate workloads and improve the overall health and efficiency of its data centers. Combined with efforts to use cloud resources efficiently – running VMs only when needed and not oversizing them – improved resource utilization can reduce your carbon footprint and save money.
There is an abundance of great resources that cover Google Cloud best practices. To give a little more insight into the most recent guidance offered for Google Cloud, here’s a list of 17 best practices with tips and tricks to help you fully utilize and optimize your Google Cloud environment.
1. Ensure You Have Total Visibility of Data
- “Without a holistic view of data and its sources, it can be difficult to know what data you have, where data originated from, and what data is in the public domain that shouldn’t be.”
2. Design Data Loss Prevention Policies in G Suite
- “Data Loss Prevention in G Suite is a set of policies, processes, and tools that are put in place to ensure your sensitive information isn’t leaked, misused, or exposed to unauthorized users. You never know when an incident will occur, which is why you should invest in prevention policies before it’s too late.”
3. Have a Logging Policy in Place
- “It is important to create a comprehensive logging policy within your cloud platform to help with auditing and compliance. Access logging should be enabled on storage buckets so that you have an easily accessible log of object access. Administrator audit logs are created by default, but you should enable Data Access logs for Data Writes in all services.”
4. Use Display Names in your Dataflow Pipelines
- “Always use the name field to assign a useful, at-a-glance name to the transform. This field value is reflected in the Cloud Dataflow monitoring UI and can be incredibly useful to anyone looking at the pipeline. It is often possible to identify performance issues without having to look at the code using only the monitoring UI and well-named transforms.”
5. Automate Cost Optimizations
- “One of the best practices for cost optimization is to automate tasks and reduce manual intervention. Automation is simplified using labels – key-value pairs applied to various Google Cloud services. You can attach a label to each resource (such as Compute instances), then filter the resources based on their labels.”
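In practice this means tagging resources at creation time, then filtering on those labels before automating an action (for example, stopping all dev instances overnight). A minimal sketch of the filtering step – the instance data here is hypothetical, where a real script would fetch it from the Compute Engine API or `gcloud` with a `--filter` expression:

```python
# Filter cloud resources by label before automating an action.
# Instance data is hypothetical; a real pipeline would fetch it
# from the Compute Engine API.

instances = [
    {"name": "web-prod-1",  "labels": {"env": "prod", "team": "web"}},
    {"name": "web-dev-1",   "labels": {"env": "dev",  "team": "web"}},
    {"name": "batch-dev-2", "labels": {"env": "dev",  "team": "data"}},
]

def by_label(resources, key, value):
    """Return the names of resources whose label `key` equals `value`."""
    return [r["name"] for r in resources if r["labels"].get(key) == value]

print(by_label(instances, "env", "dev"))  # -> ['web-dev-1', 'batch-dev-2']
```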
6. Take Advantage of Committed & Sustained Use Discounts
- “With a commitment of up to 3 years and no upfront payment, customers can save up to 57% off the normal price. Taking advantage of these discounts is a GCP best practice, as they apply to standard, high-CPU, high-memory, and custom machine types, as well as sole-tenant node groups.”
- “GCP also offers ‘Sustained Use Discounts,’ which apply automatically when you run certain resources for a significant portion of a billing month. Because these discounts cover many resources – sole-tenant nodes, GPU devices, custom machine types, and more – taking advantage of them is another GCP best practice.”
7. Use Preemptible VMs
- “As with most trade-offs, the biggest reason to use a preemptible VM is cost. Preemptible VMs can save you up to 80% compared to a normal on-demand virtual machine. This is a huge savings if the workload you’re trying to run consists of short-lived processes or things that are not urgent and can be done any time.”
8. Purchase Commitments
- “The sustained use discounts are a major differentiator for GCP. They apply automatically once your instance is online for more than 25% of the monthly billing cycle and can net you a discount of up to 30% depending on instance (“machine”) type. Sustained and committed use discounts don’t stack on the same usage, but sustained use discounts apply automatically to any usage not covered by a commitment. Committed use can get you a discount of up to 57% for most instance types and up to 70% for memory-optimized types.”
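For N1 machine types, the sustained use discount works by billing each successive quarter of the month at a lower rate – 100%, 80%, 60%, then 40% of the base price (the tier rates as Google documented them at the time) – which is where the “up to 30%” figure comes from. A quick check of the math:

```python
# Effective price for an N1 instance running some fraction of the month,
# using the published sustained-use tiers: each successive 25% block of
# the month is billed at 100%, 80%, 60%, then 40% of the base rate.

TIERS = [1.0, 0.8, 0.6, 0.4]  # rate multiplier per 25% block of the month

def effective_price_fraction(usage_fraction):
    """Average fraction of the on-demand rate actually billed."""
    billed = 0.0
    for i, rate in enumerate(TIERS):
        block_start = i * 0.25
        block_usage = min(max(usage_fraction - block_start, 0.0), 0.25)
        billed += block_usage * rate
    return billed / usage_fraction if usage_fraction else 0.0

# A full month: 0.25*(1.0 + 0.8 + 0.6 + 0.4) = 0.70 -> a 30% discount.
print(round(1 - effective_price_fraction(1.0), 2))  # -> 0.3
# Half a month: blocks billed at 100% and 80% -> a 10% discount.
print(round(1 - effective_price_fraction(0.5), 2))  # -> 0.1
```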
9. Apply Compute Engine Rightsizing Recommendations
- “Compute Engine provides machine type rightsizing recommendations to help you optimize the resource utilization of virtual machine (VM) instances. These recommendations are generated automatically based on system metrics gathered by the Stackdriver Monitoring service over the previous eight days. Use these recommendations to resize your compute instance’s machine type to more efficiently use the instance’s resources.”
10. Utilize Cost Management Tools That Take Action
- “Using third-party tools for cloud optimization helps with cost visibility, governance, and cost optimization. Make sure you aren’t just focusing on cost visibility and recommendations, but find a tool that takes that extra step and takes those actions for you…This automation reduces the potential for human error and saves organizations time and money by allowing developers to reallocate their time to more beneficial tasks.”
11. Ensure You’re Only Paying for the Compute Resources You Need
- When adopting or optimizing your public cloud use, it’s important to eliminate wasted spend from idle resources – which is why you need to include an instance scheduler in your plan. An instance scheduler ensures that non-production resources – those used for development, staging, testing, and QA – are stopped when they’re not being used, so you aren’t charged for compute time you’re not actually using.
12. Optimize Performance and Storage Costs
- “In the cloud, where storage is billed as a separate line item, paying attention to storage utilization and configuration can result in substantial cost savings. And storage needs, like compute, are always changing. It’s possible that the storage class you picked when you first set up your environment may no longer be appropriate for a given workload.”
13. Optimize Persistent Disk Performance
- “When you launch a Compute Engine virtual machine in GCP, a disk is attached to serve as local storage for the application. When you terminate the VM, the disk can remain behind, and Google continues to charge the full price of the disk even though it is no longer attached to anything. This can significantly increase your cloud costs. Make sure you don’t have any unattached disks you no longer need.”
14. Apply Least Privilege Access Controls / Identity and Access Management
- “The principle of least privilege is a critical foundational element in GCP security and security more broadly. The principle is the concept of only providing employees with access to applications and resources they need to properly do their jobs.”
15. Manage Unrestricted Traffic and Firewalls
- “Limit the IP ranges that you assign to each firewall to only the networks that need access to those resources. GCP’s advanced VPC features allow you to get very granular with traffic by assigning targets by tag and Service Accounts. This allows you to express traffic flows logically in a way that you can identify later, such as allowing a front-end service to communicate to VMs in a back-end service’s Service Account.”
16. Ensure Your Bucket Names are Unique Across the Whole Platform
- “It is recommended to append random characters to the bucket name and not include the company name in it. An example is “prod-logs-b7b12b36511ac3462d12e62164dfff4e”. This will make it harder for an attacker to locate buckets in a targeted attack.”
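A name like that is easy to generate, for example (the `prod-logs` prefix is just an illustration):

```python
# Generate a hard-to-guess, globally unique bucket name by appending a
# random hex suffix. The "prod-logs" prefix is illustrative; note it
# deliberately avoids the company name.
import secrets

def make_bucket_name(prefix="prod-logs", random_bytes=16):
    name = f"{prefix}-{secrets.token_hex(random_bytes)}"
    # GCS bucket names must be 3-63 characters and lowercase.
    assert 3 <= len(name) <= 63
    return name

print(make_bucket_name())  # e.g. prod-logs-b7b12b36511ac3462d12e62164dfff4e
```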
17. Set Up a Google Cloud Organizational Structure
- “When you first log into your Google Admin console, everything will be grouped into a single organizational unit. Any settings you apply to this group will apply to all the users and devices in the organization. Planning out how you want to organize your units and hierarchy before diving in will help you save time and create a more structured security strategy.”
You can use the best practices listed above as a quick reference of things to keep in mind when using Google Cloud. Have any Google Cloud best practices you’ve learned recently? Let us know in the comments below!
Google Cloud offers services worldwide at locations across 200+ countries and territories, and it’s up to you to pick which of the Google Cloud Regions and Zones your applications will live in. Google Cloud resources and services can be zonal, regional, or multi-regional (managed by Google across regions). Here’s what you need to know about these geographic locations, along with some tips to help you pick the right one for you.
What are Google Cloud Regions and How Many are There?
In Google Cloud, regions are independent geographic areas that are made up of one or more zones where users can host their resources. There are currently 22 regions around the world, scattered across North America, South America, Europe, Asia, and Australia.
Since regions are independent geographic areas, spreading your resources and applications across different regions and zones provides isolation from many kinds of hardware, software, and infrastructure failures. This gives you a higher level of failure independence, meaning the failure of one resource will not affect resources in other regions and zones.
Within a region, you will find regional resources. Regional resources are resources that are redundantly deployed across all the zones within a region, giving them higher availability.
Here’s a look at the different region names and their region descriptions:
| Region Name | Region Description |
| --- | --- |
| us-west3 | Salt Lake City |
What are Google Cloud Zones and How Many are There?
Zones are isolated deployment areas for your resources within a region, and each zone should be considered a single failure domain. To deploy fault-tolerant applications with high availability and help protect against unexpected failures, deploy your applications across multiple zones in a region. Around the world, there are currently 67 zones.
Zones have high-bandwidth, low-latency network connections to other zones in the same region. As a best practice, Google suggests deploying applications across numerous zones and multiple regions so users can deploy high availability, fault-tolerant applications. This is a key step as it helps protect against unexpected failures of components. Within a Zone, you will find zonal resources that operate within a single zone. If a zone becomes unavailable, all zonal resources in that zone are unavailable until service is restored.
Here’s a closer look at the available Zones broken down by region.
| Region Name | Region Description | Zones |
| --- | --- | --- |
| us-west3 | Salt Lake City | us-west3-a |
Here are Some Things to Keep in Mind When Choosing a Region or Zone
Now that we know what regions and zones are, here are some things to be aware of when you are selecting which region or zone would be the best fit for your infrastructure.
- Distance – choose zones based on the location of your customers and where your data is required to live. It makes more sense to store your resources in zones that are closer to your point of service in order to keep network latency low.
- Communication – It’s important to be mindful of the fact that communication across and within regions will incur different costs and happen at different speeds. Typically, communication within a region will be cheaper than communication across different regions.
- Redundant Systems – As we mentioned above, Google is big on deploying fault-tolerant systems with high availability in case there are unexpected failures. Therefore, you should design any important systems with redundancy across multiple zones and regions. This mitigates the impact if your instances experience an unexpected failure.
- Resource Distribution – Zones are designed to be independent of one another so if one zone fails or becomes unavailable, you can transfer traffic to another zone in the same region to keep your services running.
- Cost – always check the pricing to compare the cost between regions.
What Sorts of Features are Defined by Region and Zone?
Each zone supports a combination of Sandy Bridge, Ivy Bridge, Haswell, Broadwell, Skylake, and Cascade Lake CPU platforms. Once you’ve created an instance within a zone, it will use the default processor supported in that zone, though you can optionally choose a specific CPU platform.
For example, take a look at the features offered in the europe-west6 region and us-east4-a Zones to see the similarities and differences.
These are the features that you can find in the europe-west6 region:
- Available CPU Platforms
- Intel Xeon (Skylake) (default)
- N1 machine types with up to 96 vCPUs when using Skylake
- E2 machine types up to 16 vCPUs and 128 GB of memory
- Local SSDs
- Sole-tenant nodes
And in the us-east4-a Zone the features include:
- Available CPU Platforms
- Intel Xeon E5 v4 (Broadwell)(default)
- Intel Xeon (Skylake)
- N1 machine types with up to 96 vCPUs when using Skylake platform
- N2 machine types with up to 80 vCPUs and 640 GB of memory
- E2 machine types up to 16 vCPU and 128 GB of memory
- C2 machine types with up to 60 vCPUs and 240 GB of memory
- M1 ultramem memory-optimized machine types with 160 vCPUs and 3.75 TB of memory
- M2 ultramem memory-optimized machine types with 416 vCPUs and 11.5 TB of memory
- Local SSDs
- Sole-tenant nodes
As you can see, the europe-west6 region doesn’t offer quite as many features as the us-east4-a zone.
There are a handful of Google Cloud services that are managed to be redundant and distributed across and within regions. These services optimize performance, resource efficiency, and availability. However, they do require a trade-off – users must choose between consistency and latency.
Note: *These trade-offs are documented on a product-specific basis.*
A key feature of multiregional resources is that the data associated with them isn’t tied to a specific region, and can therefore be moved between regions. The multiregional products are:
- Cloud Storage
- Cloud Spanner
- Cloud Firestore
- Container Registry
- Cloud Key Management Service
- Cloud EKM
Google Cloud’s expansion shows no sign of slowing down. The company continues to announce new regions and services to serve its customers worldwide and to advance its position in the public cloud market.
Google Cloud has always had a knack for non-standard virtual machines, and their option of creating Google preemptible VMs is no different. Traditional virtual machines are long-running servers with standard operating systems that are only shut down when you say they can be shut down. On the other hand, preemptible VMs last no longer than 24 hours and can be stopped on a moment’s notice (and may not be available at all). So why use them?
Google Cloud Preemptible VM Overview
Preemptible VMs are designed to be a low-cost, short-duration option for batch jobs and fault-tolerant workloads. Essentially, Google is offering up extra capacity at a huge discount – with the tradeoff that if that capacity is needed for other (full-priced) resources, your instances can be terminated or “preempted”. Of course, if you’re using them for batch processing, being preempted will slow down your job without completely stopping it.
You can create your preemptible VMs in a managed instance group in order to easily manage a collection of VMs as a single entity – and, if a VM is preempted, the VM will be automatically recreated. Alternatively, you can use Kubernetes Engine container clusters to automatically recreate preempted VMs.
Preemptible VM Pricing
Pricing is fixed, not variable, and you can view the preemptible price alongside on-demand prices in Google’s compute pricing list and pricing calculator. Prices are 70-80% off on-demand rates, and upward of 50% off even compared to a 3-year committed use discount.
Google does not charge you for instances if they are preempted in the first minute after they start running.
Note: Google Cloud Free Tier credits for Compute Engine do not apply to preemptible instances.
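To make the discount concrete, here’s the arithmetic with a hypothetical hourly rate (check Google’s pricing calculator for real, per-region numbers):

```python
# Rough monthly cost comparison for one VM. The on-demand rate is
# hypothetical; real prices vary by machine type and region.

HOURS_PER_MONTH = 730
on_demand_rate = 0.0475                    # $/hour, hypothetical
preemptible_rate = on_demand_rate * 0.20   # ~80% discount

on_demand_monthly = on_demand_rate * HOURS_PER_MONTH
preemptible_monthly = preemptible_rate * HOURS_PER_MONTH

print(f"on-demand:   ${on_demand_monthly:.2f}/month")
print(f"preemptible: ${preemptible_monthly:.2f}/month")
print(f"savings:     ${on_demand_monthly - preemptible_monthly:.2f}/month")
```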
Use Cases for Google Preemptible VMs
As with most trade-offs, the biggest reason to use a preemptible VM is cost. Preemptible VMs can save you up to 80% compared to a normal on-demand virtual machine. (By the way – AWS users will want to use Spot Instances for the same reason, and Azure users can check out Low Priority VMs). This is a huge savings if the workload you’re trying to run consists of short-lived processes or things that are not urgent and can be done any time. This can include things like financial modeling, rendering and encoding, and even some parts of your CI/CD pipeline or code testing framework.
How to Create a Google Preemptible VM
To create a preemptible VM, you can use the Google Cloud Platform console, the ‘gcloud’ command line tool, or the Google Cloud API. The process is the same as creating a standard VM: you select your instance size, networking options, disk setup, and SSH keys, with the one minor change that you enable the ‘preemptible’ flag during setup. The other change you’ll want to make is to create a shutdown script to decide what happens to your processes and data if the instance is stopped without your knowledge. This script can even perform different actions if the instance was preempted as opposed to shut down from something you did.
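For example, with the `gcloud` CLI the only difference from a standard instance create is the `--preemptible` flag (the instance name, zone, and machine type below are placeholders):

```shell
# Create a preemptible VM. Identical to a normal instance create except
# for the --preemptible flag; name, zone, and machine type are placeholders.
gcloud compute instances create my-batch-vm \
    --zone=us-central1-a \
    --machine-type=n1-standard-1 \
    --preemptible
```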
One nice benefit of Google preemptible VMs is the ability to attach local SSD drives and GPUs to the instances. This means you can get added extensibility and performance for the workload that you are running, while still saving money. You can also have preemptible instances in a managed instance group for high scalability when the instances are available. This can help you process more of your jobs at once when the preemptible virtual machines are able to run.
FAQs About Google Preemptible Instances
How long do GCP preemptible VMs last?
These instances can last up to 24 hours. If you stop and then start an instance, the 24-hour counter is reset because the instance transitions into a terminated state. If you reset an instance or perform other actions that keep it in a running state, the 24-hour clock is not reset.
Is pricing variable?
No, pricing for preemptible VMs is fixed, so you know in advance what you will pay.
What happens when my instance is preempted?
When your instance is preempted, you get a 30-second graceful shutdown period. The instance receives a preemption notice in the form of an ACPI G2 Soft Off signal, and you can use a shutdown script to complete cleanup actions before the instance stops. If the instance has not stopped after 30 seconds, Google sends an ACPI G3 Mechanical Off signal to the operating system and terminates it. You can simulate what this looks like by stopping the instance yourself.
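A shutdown script can tell a preemption apart from a user-initiated stop by asking the metadata server for the instance’s `preempted` flag. A sketch (the checkpoint action is a placeholder):

```shell
#!/bin/bash
# Hypothetical shutdown script: check the metadata server's "preempted"
# flag and checkpoint work only when the VM is actually being preempted.

METADATA_URL="http://metadata.google.internal/computeMetadata/v1/instance/preempted"

cleanup() {
  # $1 is "TRUE" when the VM is being preempted, "FALSE" on a normal stop.
  if [ "$1" = "TRUE" ]; then
    echo "preempted: checkpointing job state"
    # e.g. gsutil cp /tmp/job-state gs://my-checkpoint-bucket/  (placeholder)
  else
    echo "normal shutdown: nothing to checkpoint"
  fi
}

# Outside GCP the curl fails, so default to a normal shutdown.
PREEMPTED=$(curl -s -m 2 -H "Metadata-Flavor: Google" "$METADATA_URL" || echo "FALSE")
cleanup "$PREEMPTED"
```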
By using managed instance groups, you can automatically recreate your instances if capacity is available.
How often are you actually preempted?
Google reports an average preemption rate from 5-15% per day per project, with occasional spikes depending on time and zone. This is not a guarantee, though, and you can be preempted at any time.
How does Google choose which instances to preempt?
Google avoids preempting too many instances from a single customer, and preempts new instances over older instances whenever possible – this is to avoid losing work across your cluster.
How to Use Google Preemptible VMs to Optimize Costs
Our customers who have the most cost-effective use of Google resources often mix Google preemptible VMs with other instance types based on the workloads. For instance, production systems that need to be up 24/7 can buy committed-use discounts for up to 57% savings on those servers. Non-production systems, like dev, test, QA, and staging, can use on-demand resources with schedules managed by ParkMyCloud to save 65%. Then, any batch workloads or non-urgent jobs can use Google preemptible instances to run whenever available for up to 80% savings. Questions about optimizing cloud costs? We’re happy to help – email us or use the chat client on this page (staffed by real people, including me!).
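As a rough sketch of how that mix plays out, here’s the blended math – the monthly spend figures are invented, and the discount rates are the ones cited above:

```python
# Blended savings across a hypothetical monthly bill, applying the cited
# discount to each workload type: committed use for production, scheduling
# for non-production, preemptible for batch. Spend figures are invented.

portfolio = [
    # (workload, on-demand monthly spend in $, discount applied)
    ("production (committed use)", 10_000, 0.57),
    ("dev/test/QA (scheduling)",    5_000, 0.65),
    ("batch (preemptible)",         3_000, 0.80),
]

total = sum(spend for _, spend, _ in portfolio)
optimized = sum(spend * (1 - discount) for _, spend, discount in portfolio)

print(f"on-demand total: ${total:,}")                 # -> $18,000
print(f"optimized total: ${optimized:,.0f}")          # -> $6,650
print(f"overall savings: {1 - optimized/total:.0%}")  # -> 63%
```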
Google Cloud credits are an incentive offered by Google that help you get started on Google’s Cloud Platform for free. Like Amazon and Microsoft, Google is trying to make it easy and in some cases free to get started using their Cloud Platform or certain services on their platform that they believe are “sticky” – which is beneficial if you’d like to try the services out for personal use or for a proof-of-concept. There is both a spend and a time limit for Google’s free credits, but then they also offer “always free” products that do not count against the free credit and can be used forever, or until Google decides to pull the plug, with usage limits.
1. Google Cloud Free Tier
The most basic way to use Google Cloud products is the Google Cloud Free Tier. This extended free trial gives you access to free cloud resources so you can learn about Google Cloud services by trying them on your own.
The Google Cloud Free Tier has two parts:
- A 12-month free trial with a $300 credit to use with any Google Cloud services.
- Always Free, which provides limited access to many common Google Cloud resources, free of charge.
12-Month Free Trial
The Google Cloud 12-month free trial and $300 credit is for new customers/trialers. Be sure to check through the full list of eligibility requirements on Google’s website. (No cryptomining – sorry!)
Before you start spinning up machines, be sure to note the following limitations:
- You can’t have more than 8 cores (or virtual CPUs) running at the same time.
- You can’t add GPUs to your VM instances.
- You can’t request a quota increase.
- You can’t create VM instances that are based on Windows Server images.
Your free trial ends when 12 months have elapsed since you signed up or when you have spent your $300 in Google Cloud credit, whichever comes first. When you use resources covered by Always Free during your free trial period, those resources are not charged against your free trial credit.
At the end of the free trial you either begin paying or you lose your services and data – it’s pretty black and white. You can upgrade at any time during your free trial, with any remaining credits being applied against your bill.
Google Cloud Always Free
The Always Free program is essentially the “next step” of free usage after a trial. These offerings provide limited access to many Google Cloud resources. The resources are usually provided at monthly intervals, and they are not credits – they do not accumulate or roll over from one interval to the next; it’s use it or lose it. Unlike the free trial, Always Free is a regular part of your Google Cloud account.
Not all Google Cloud services offer resources as part of the Always Free program. For a full list of the services and usage limits, please see here – a few of the more popular services include Compute Engine, Cloud Storage, Cloud Functions, Google Kubernetes Engine (GKE), BigQuery, and more. Be sure to check the usage limits before spinning up resources, as usage above the Always Free tier will be billed at standard rates.
2. Google Cloud for Startups
Google is motivated to get startups to build their infrastructure on Google Cloud while they’re still early stage, to gain long-term customers. If you work for an early-stage startup, reach out to your accelerator, incubator, or VC about Google Cloud credit. You can get up to $100,000 in credit – but qualifying typically means joining an accelerator or taking on investment, which comes at the price of a large percentage of equity.
Options that don’t require you to give up equity include Founder Friendly Labs, or StartX if you happen to be affiliated with Stanford.
3. Education Offerings
Google offers several options for students, teachers, and researchers to get up and running with Google Cloud.
- GCP Credits for Learning – Faculty can apply for $100 in credits and $50 per student. This offering is intended for students who are learning GCP for career purposes.
- Research credits – Research faculty can apply for $5,000 in credits for Google Cloud resources to support academic research, or $1,000 for PhD candidates. The research can be in any field. Learn more here.
There are also several offerings related to making education accessible without associated credits. See more on the Google Cloud Education page.
4. Vendor Promotions and Events
Various vendors that are Google Cloud partners run occasional promotions, typically in the form of a credit greater than $300 for the Google Cloud Free Trial, although we’ve also seen straight credits offered. For example, CloudFlare offers a credit program for app developers.
Also check out events that might offer credit – for example, TechStars startup weekends offers $3,000 in Google Cloud credits for attendees. Smaller awards of a few hundred dollars can be found through meetups and other events.
Google Cloud Credits do offer people and companies a way to get started quickly, and the Always Free program is a unique way to entice users to try different services at no cost, albeit in a limited way. Be sure to check out the limitations before you get started, and have fun!
Q4 2019 earnings are in for the ‘big three’ cloud providers and you know what that means – it’s time for an AWS vs Azure vs Google Cloud market share comparison. Let’s take a look at all three providers side-by-side to see where they stand.
Note: a version of this post was originally published in April 2018 and 2019. It has been updated for 2020.
AWS vs. Azure vs. Google Cloud Earnings
To get a sense of the AWS vs Azure vs Google Cloud market share breakdown, let’s take a look at what each cloud provider’s reports shared.
Amazon reported Amazon Web Services (AWS) revenue of $9.95 billion for Q4 2019, compared to $7.4 billion for Q4 2018. AWS revenue grew 34% in the quarter, compared to a year earlier.
Across the business, Amazon’s quarterly sales increased to $87.4 billion, beating predictions of $86.02 billion. AWS has been a huge contributor to this growth: AWS revenue made up 11% of total Amazon sales for the quarter. AWS only continues to grow, bolstering the retail giant time after time.
One thing to keep in mind: you’ll see a couple of headlines pointing out that revenue growth is down, quoting that 34% number and comparing it to previous quarters’ growth rates, which peaked at 81% in 2015. However, that metric is of questionable value as AWS continues to increase revenue at this enormous scale, dominating the market (as we’ll see below).
While Amazon specifies AWS revenue, Microsoft only reports on Azure’s growth rate. That number is 62% revenue growth year over year. This time last year, growth was reported at 76%. As mentioned above, comparing growth rates to growth rates is interesting, but not necessarily as useful a metric as actual revenue numbers – which we don’t have for Azure alone.
Here are the revenue numbers Microsoft does report. Azure is under the “Intelligent Cloud” business, which grew 27% to $11.9 billion. The operating group also includes server products and cloud services (30% growth) and Enterprise Services (6% growth).
The lack of specificity around Azure frustrates many pundits as it simply can’t be compared directly to AWS, and inevitably raises eyebrows about how Azure is really doing. Of course, it also assumes that IaaS is the only piece of “cloud” that’s important, but then, that’s how AWS has grown to dominate the market.
A victory for Microsoft was winning the $10 billion JEDI cloud computing contract in October (although AWS is actively protesting the award with claims of political interference).
This quarter, Google broke out revenue reporting for its cloud business for the first time. For the fourth quarter, Google Cloud generated $2.6 billion in revenue, a growth of 53% from the previous year. For 2019 as a whole, Google Cloud brought in $8.9 billion in revenue, which is less than AWS generated in the fourth quarter alone.
Google CEO Sundar Pichai stated on the earnings report conference call, “The growth rate of GCP was meaningfully higher than that of Cloud overall, and GCP’s growth rate accelerated from 2018 to 2019.”
CFO Ruth Porat also highlighted Google Cloud Anthos, as Google leans into enabling the multi-cloud reality for its customers, something AWS and Azure have avoided.
Cloud Computing Market Share Breakdown – AWS vs. Azure vs. Google Cloud
When we originally published this blog in 2018, we included a market share breakdown from analyst Canalys, which reported AWS in the lead owning about a third of the market, Microsoft in second with about 15 percent, and Google sitting around 5 percent.
In 2019, they reported an overall growth in the cloud infrastructure market of 42%. By provider, AWS had the biggest sales gain with a $2.3 billion YOY increase, but Canalys reported Azure and Google Cloud with bigger percentage increases.
As of February 2020, Canalys reports AWS with 32.4% of the market, Azure at 17.6%, Google Cloud at 6%, Alibaba Cloud close behind at 5.4%, and other clouds with 38.5%.
Ultimately, it seems clear that in the case of AWS vs Azure vs Google Cloud market share – AWS still has the lead.
Bezos has said, “AWS had the unusual advantage of a seven-year head start before facing like-minded competition. As a result, the AWS services are by far the most evolved and most functionality-rich.”
Our anecdotal experience talking to cloud customers often finds that true, and it says something that Microsoft isn’t breaking down their cloud numbers just yet, while Google leans into multi-cloud.
AWS remains far in the lead for now. With that said, it will be interesting to see how the actual numbers play out, especially as Alibaba catches up.