Q2 2020 earnings are in for the ‘big three’ cloud providers and you know what that means – it’s time for an AWS vs Azure vs Google Cloud market share comparison. Let’s take a look at all three providers side-by-side to see where they stand.
Note: several previous versions of this article have been published. It has been updated for August 2020.
AWS vs. Azure vs. Google Cloud Earnings
To get a sense of the AWS vs Azure vs Google Cloud market share breakdown, let’s take a look at what each cloud provider’s earnings report shared.
Amazon reported Amazon Web Services (AWS) revenue of $10.8 billion for Q2 2020, compared to $8.3 billion for Q2 2019 – 29% growth year over year.
Across the business, Amazon’s quarterly sales increased to $88.9 billion, beating predictions of $81.5 billion. The net income of $5.2 billion was the company’s highest yet in a single quarter, driven by online shopping during COVID-19 – though Amazon is careful to note the $4 billion in COVID-19-related costs. And AWS? It made up 12.1% of Amazon’s revenue for the quarter – and 64% of its profit.
AWS continues to grow, and to bolster the retail giant, quarter after quarter.
One thing to keep in mind: you’ll see a couple of headlines pointing out that revenue growth is down, quoting that 29% number and comparing it to previous quarters’ growth rates, which peaked at 81% in 2015. However, that metric is of questionable value as AWS continues to increase revenue at this enormous scale, dominating the market (as we’ll see below).
While Amazon specifies AWS revenue, Microsoft only reports on Azure’s growth rate. That number is 47% revenue growth year over year; this time last year, growth was reported at 64%. As mentioned above, comparing growth rates to growth rates is interesting, but not necessarily as useful a metric as actual revenue numbers – which we don’t have for Azure alone.
Here are the revenue numbers Microsoft does report. Azure is under the “Intelligent Cloud” business, which grew 17% to $13.4 billion. The operating group also includes server products and cloud services (19% growth) and Enterprise Services (flat).
The lack of specificity around Azure frustrates many pundits as it simply can’t be compared directly to AWS, and inevitably raises eyebrows about how Azure is really doing. Of course, it also assumes that IaaS is the only piece of “cloud” that’s important, but then, that’s how AWS has grown to dominate the market. Microsoft’s release noted that “cloud usage and demand increased as customers continued to work and learn from home. Transactional license purchasing continued to slow, particularly in small and medium businesses, and LinkedIn was negatively impacted by the weak job market and reductions in advertising spend.”
Overall, though, Microsoft exceeded analyst expectations in the first full quarter of the COVID-19 pandemic, with total revenue coming in at $38 billion vs. $35.5 billion expected, and Intelligent Cloud revenue at $13.4 billion vs. $13.1 billion expected.
This is the second quarter in which Alphabet has broken out revenue reporting for its cloud business. This quarter, Google Cloud, which includes Google Compute Engine and G Suite, generated $3 billion in revenue – 43% growth year-over-year.
Overall, Alphabet’s revenue decreased 2% year-over-year to $38.3 billion. CFO Ruth Porat said, “year-on-year declines in our advertising revenues from search and network were offset by growth in Google other and Google Cloud revenues,” continuing their ongoing messaging that cloud is important to the business as a whole. This comes as Google Cloud leans into product offerings aimed at capturing the multi-cloud audience, such as the recent release of BigQuery Omni, which aims to provide data analytics capabilities for workloads that live in AWS and Azure as well as Google Cloud.
Cloud Computing Market Share Breakdown – AWS vs. Azure vs. Google Cloud
When we originally published this blog in 2018, we included a market share breakdown from analyst Canalys, which reported AWS in the lead owning about a third of the market, Microsoft in second with about 15 percent, and Google sitting around 5 percent.
In 2019, they reported an overall growth in the cloud infrastructure market of 42%. By provider, AWS had the biggest sales gain with a $2.3 billion YOY increase, but Canalys reported Azure and Google Cloud with bigger percentage increases.
As of July 2020, Canalys reports AWS with 31% of the market, Azure at 20%, Google Cloud at 6%, Alibaba Cloud close behind at 5%, and other clouds with 37%.
It seems clear that in the case of AWS vs Azure vs Google Cloud market share, AWS still has the lead. However, its overall share of the market is slowly shrinking while Azure’s grows.
Bezos has said, “AWS had the unusual advantage of a seven-year head start before facing like-minded competition. As a result, the AWS services are by far the most evolved and most functionality-rich.”
Our anecdotal experience talking to cloud customers often finds that true, and it says something that Microsoft isn’t breaking down their cloud numbers just yet, while Google leans into multi-cloud.
AWS remains far in the lead for now. With that said, it will be interesting to see how the actual market share numbers play out over the coming years.
During its virtual Google Cloud Next ’20 “On Air” series, Google announced the introduction of BigQuery Omni. This is an extension of its existing BigQuery data analytics solution to now analyze data in multiple public clouds, currently including Google Cloud and Amazon Web Services (AWS), with Microsoft Azure coming soon. Powered by Google Cloud’s Anthos, and using a unified interface, BigQuery Omni allows developers to analyze data locally without having to move data sets between the platforms.
BigQuery Engine to Analyze Multi-Cloud Data
Google Cloud’s general manager and VP of engineering, Debanjan Saha, says “BigQuery Omni is an extension of Google Cloud’s continued innovation and commitment to multi-cloud that brings the best analytics and data warehouse technology, no matter where the data is stored.” And that, “BigQuery Omni represents a new way of analyzing data stored in multiple public clouds, which is made possible by BigQuery’s separation of compute and storage.”
According to Google Cloud, this provides scalable storage that can reside in Google Cloud or other public clouds, and stateless, resilient compute that executes standard SQL queries.
Google Cloud reports that BigQuery Omni will:
- Break down silos and gain insights with a flexible, multi-cloud analytics solution that doesn’t require moving or copying data from other public clouds into Google Cloud for analysis.
- Provide a consistent data experience across clouds and datasets – in Google Cloud, AWS, and Azure (coming soon) – using standard SQL and BigQuery’s familiar interface (see the sketch after this list). BigQuery Omni supports Avro, CSV, JSON, ORC, and Parquet.
- Securely run analytics in another public cloud on fully managed infrastructure, powered by Anthos, so you can query data without worrying about the underlying infrastructure. Users choose the public cloud region where their data is located and run the query.
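To make that interface concrete, here’s a minimal sketch of running a standard SQL query with the official BigQuery Python client – the same query interface BigQuery Omni extends to data in other clouds. The project, dataset, and table names are hypothetical.

```python
# A minimal sketch of BigQuery's standard SQL interface using the official
# Python client (google-cloud-bigquery). The table below is hypothetical.
from google.cloud import bigquery

client = bigquery.Client()  # picks up project and credentials from the environment

# Standard SQL; BigQuery's separation of compute and storage means this query
# runs on stateless compute against wherever the table's storage lives.
sql = """
    SELECT region, SUM(spend) AS total_spend
    FROM `my-project.billing.cloud_spend`  -- hypothetical table
    GROUP BY region
    ORDER BY total_spend DESC
"""

for row in client.query(sql).result():
    print(f"{row.region}: {row.total_spend}")
```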
Why Is Google Aiming at Multi-Cloud?
Many organizations leveraging public cloud are doing so with multiple clouds: 55% of organizations are multi-cloud according to a recent survey from IDG, and 80% according to a recent Gartner survey. (Is this actually necessary? Maybe.)
Google Cloud has been the most open to supporting this multi-cloud reality, and perhaps implicit in releases like Anthos and BigQuery Omni is Google’s recognition that it’s #3 in the market, and many of its customers have a presence in AWS or Azure.
BigQuery Omni works by physically running BigQuery clusters in the cloud where the remote data resides – something that previously was possible only if your data was stored in Google Cloud. With Kubernetes-powered Anthos, along with the visualization tool gained in the Looker acquisition, Google is moving toward a middleware strategy: offering services that bridge data silos as a way to gain market share from its bigger competitors. Expect to see more service offerings like this from Google as it looks to break AWS’s lead on public cloud.
Whether you’re new to public cloud altogether or already use one provider and are interested in trying another, you may be interested in a comparison of the AWS vs Azure vs Google free tiers. The big three cloud providers – AWS, Azure, and Google Cloud – each have a free tier designed to give users the cloud experience without all the costs. They include free trial versions of numerous services so users can test different products and learn how they work before making a huge commitment. While they may only cover a small environment, they’re a good way to learn more about each cloud provider. For all of the cloud providers, the free trials are available only to new users.
AWS Free Tier Offerings
The AWS free tier includes more than 60 products. There are two types of free options, depending on the product: always free and 12 months free. The 12-months-free services are intended to help new customers get started on AWS, and can be used free of charge (up to a specific level of usage) for one year from the date the account was created. Keep in mind that once the free 12 months are up, your services will start to be charged at the normal rate – be prepared, and review this checklist of things to do when you outgrow the AWS free tier.
Azure Free Tier Offerings
The Azure equivalent of a free tier is referred to as a free account. As a new user in Azure, you’re given a $200 credit that has to be used in the first 30 days after activating your account. When you’ve used up the credit or 30 days have expired, you’ll have to upgrade to a paid account if you wish to continue using certain products. Ensure that you have a plan to reduce Azure costs in place. If you don’t need the paid products, there’s also the always free option.
Some of the ways people choose to use their free account are to gain insights from their data, test and deploy enterprise apps, create custom mobile experiences and more.
Google Cloud Free Tier Offerings
The Google Cloud Free Tier is essentially an extended free trial that gives you access to free cloud resources so you can learn about Google Cloud services by trying them on your own.
The Google Cloud Free Tier has two parts: a 90-day free trial with a $300 credit to use on any Google Cloud services, and always free, which provides limited access to many common Google Cloud resources free of charge. Google gives you a little more time with your credit than Azure – you get the full 90 days of the free trial to use it. Unlike free trials from the other cloud providers, Google does not automatically charge you once the trial ends, so you’re guaranteed that the free tier is actually 100% free. Keep in mind that the trial ends after 90 days or once you’ve exhausted the $300 credit; any usage beyond the free monthly limits draws down that credit, and you must upgrade to a paid account to continue using Google Cloud afterward.
Free Tier Limitations
It’s important to note that the always-free services vary widely between the cloud providers and there are usage limitations. Keep in mind the cloud providers’ motivations: they want you to get attached to the services so you start paying for them. So, be aware of the limits before you spin up any resources, and don’t be surprised by any charges.
In AWS, when your free tier expires or if your application use exceeds the free tier limits, you pay standard, pay-as-you-go service rates. Azure and Google both offer credits for new users that start a free trial, which are a handy way to set a spending limit. However, costs can get a little tricky if you aren’t paying attention. Once the credits have been used you’ll have to upgrade your account if you wish to continue using the products. Essentially, the credit that was acting as a spending limit is automatically removed so whatever you use beyond the free amounts, you will now have to pay for. In Google Cloud, there is a cap on the number of virtual CPUs you can use at once – and you can’t add GPUs or use Windows Server instances.
In Azure, certain amounts of popular products remain free for 12 months after you upgrade your account. After 12 months, any products you’re still using will continue to run unless decommissioned, and you’ll be billed at the standard pay-as-you-go rates.
Another limitation is that commercial software and operating system licenses typically aren’t available under the free tiers.
These offerings are “use it or lose it” – if you don’t use all of your credits or free usage in a given month, there’s no rollover into future months.
Popular Services, Products, and Tools to Check Out for Free
AWS has 33 products that fall under the one-year free tier – here are some of the most popular:
- Amazon EC2 Compute: 750 hours per month of Linux, RHEL, or SLES t2.micro or t3.micro instance usage, and 750 hours per month of Windows t2.micro or t3.micro instance usage, depending on region.
- Amazon S3 Storage: 5GB of standard storage
- Amazon RDS Database: 750 hours per month of db.t2.micro database usage (MySQL, PostgreSQL, MariaDB, Oracle BYOL, or SQL Server), 20 GB of General Purpose (SSD) database storage, and 20 GB of storage for database backups and DB Snapshots.
For the always-free option, you’ll find a number of products as well, some of these include:
- AWS Lambda: 1 million free compute requests per month and up to 3.2 million seconds of compute time per month.
- Amazon DynamoDB: 25 GB of database storage per month, enough to handle up to 200M requests per month.
- Amazon CloudWatch: 10 custom metrics and alarms per month, 1,000,000 API requests, 5GB of Log Data Ingestion and Log Data Archive and 3 Dashboards with up to 50 metrics.
Azure has 19 products that are free each month for 12 months – here are some of the most popular:
- Linux and Windows virtual machines: 750 hours (using B1S VM) of compute time
- Managed Disk Storage: 64 GB x 2 (P6 SSD)
- Blob Storage: 5GB (LRS hot block)
- File Storage: 5GB (LRS File Storage)
- SQL databases: 250 GB
For their always free offerings, you’ll find even more popular products – here are a few:
- Azure Kubernetes Service: no charge for cluster management – you only pay for the virtual machines and the associated storage and networking resources consumed.
- Azure DevOps: 5 users for open source projects and small projects (with unlimited private Git repos). For larger teams, the cost ranges from $6-$90 per month.
- Azure Cosmos DB: 400 RU/s of provisioned throughput
Unlike AWS and Azure, Google Cloud does not have a 12-months-free offering. However, it does have a free tier with a wide range of always free services – some of the most popular include:
- Google BigQuery: 1 TB of queries and 10 GB of storage per month.
- Kubernetes Engine: One zonal cluster per month
- Google Compute Engine: 1 f1-micro instance per month (U.S. regions only), 30 GB-months of HDD, 5 GB-months of snapshot storage in certain regions, and 1 GB of outbound network data from North America to all region destinations per month.
- Google Cloud Storage: 5 GB of regional storage per month (US only), 5,000 Class A operations, 50,000 Class B operations, and 1 GB of outbound network data from North America to all region destinations per month.
Check out our blog posts on free credits for each cloud provider to see how you can start saving.
Spot instances and similar “spare capacity” models are frequently cited as one of the top ways to save money on public cloud. However, we’ve noticed that fewer cloud customers are taking advantage of this discounted capacity than you might expect.
We say “spot instances” in this article for simplicity, but each cloud provider has their own name for the sale of discounted spare capacity – AWS’s spot instances, Azure’s spot VMs and Google Cloud’s preemptible VMs.
Spot instances are a type of purchasing option that allows users to take advantage of spare capacity at a low price, with the possibility that it could be reclaimed for other workloads with just brief notice.
In the past, AWS’s model required users to bid on Spot capacity. However, the model has since been simplified so users don’t actually have to bid for Spot Instances anymore. Instead, they pay the Spot price that’s in effect for the current hour for the instances that they launch. The prices are now more predictable with much less volatility. Customers still have the option to control costs by providing a maximum price that they’re willing to pay in the console when they request Spot Instances.
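As a concrete illustration of that model, here’s a hedged boto3 sketch of launching a Spot Instance with an optional price cap. The AMI ID and price below are placeholders, not recommendations.

```python
# A sketch of launching a Spot Instance with boto3. You no longer bid; you pay
# the Spot price in effect, and MaxPrice is only an optional cap.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",   # placeholder AMI
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
    InstanceMarketOptions={
        "MarketType": "spot",
        "SpotOptions": {
            # Optional cap on what you're willing to pay per hour;
            # omit MaxPrice to default to the On-Demand price.
            "MaxPrice": "0.0035",
            "SpotInstanceType": "one-time",
        },
    },
)
print(response["Instances"][0]["InstanceId"])
```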
Spot Instances in Each Cloud
Variations of spot instances are offered across different cloud providers. AWS has Spot Instances while Google Cloud offers preemptible VMs and as of March of this year, Microsoft Azure announced an even more direct equivalent to Spot Instances, called Azure Spot Virtual Machines.
Spot VMs have replaced the preview of Azure’s low-priority VMs on scale sets – all eligible low-priority VMs on scale sets have automatically been transitioned to Spot VMs. Azure Spot VMs provide access to unused Azure compute capacity at deep discounts. Spot VMs can be evicted at any time if Azure needs capacity.
AWS spot instances have variable pricing. Azure Spot VMs offer the same characteristics as a pay-as-you-go virtual machine, the differences being pricing and evictions. Google Preemptible VMs offer a fixed discounting structure. Google’s offering is a bit more flexible, with no limitations on the instance types. Preemptible VMs are designed to be a low-cost, short-duration option for batch jobs and fault-tolerant workloads.
Adoption of Spot Instances
Our research indicates that less than 20% of cloud users use spot instances on a regular basis, despite spot being on nearly every list of ways to reduce costs (including our own).
While applications can be built to withstand interruption, specific concerns remain, such as loss of log data, exhausting capacity and fluctuation in the spot market price.
In AWS, it’s important to note that while spot prices can reach the on-demand price, they don’t normally do so, since they’re driven by long-term supply and demand.
A Spot Fleet is a collection of Spot Instances – and, optionally, On-Demand Instances – in which you specify a target capacity of instances you want to maintain. AWS then attempts to meet that target by launching the number of Spot and On-Demand Instances specified in the Spot Fleet request.
To help reduce the impact of interruptions, you can set up Spot Fleets to respond to interruption notices by hibernating or stopping instances instead of terminating when capacity is no longer available. Spot Fleets will not launch on-demand capacity if Spot capacity is not available on all the capacity pools specified.
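Here’s what that looks like in practice – a sketch of a Spot Fleet request via boto3, with a target capacity and an interruption behavior of “stop” instead of the default “terminate.” The role ARN, AMI, and subnet ID are placeholders.

```python
# A sketch of a Spot Fleet request with boto3: declare a target capacity and
# AWS launches Spot (and optionally On-Demand) instances to meet it.
import boto3

ec2 = boto3.client("ec2")

response = ec2.request_spot_fleet(
    SpotFleetRequestConfig={
        "IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role",
        "TargetCapacity": 4,
        "OnDemandTargetCapacity": 1,        # optional On-Demand baseline
        # Stop (or hibernate) instead of terminating on interruption;
        # stop/hibernate require EBS-backed instances.
        "InstanceInterruptionBehavior": "stop",
        "LaunchSpecifications": [
            {
                "ImageId": "ami-0123456789abcdef0",  # placeholder
                "InstanceType": "m5.large",
                "SubnetId": "subnet-0abc1234",       # placeholder
            },
        ],
    }
)
print(response["SpotFleetRequestId"])
```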
AWS also has a capability that allows you to use Amazon EC2 Auto Scaling to scale Spot Instances – this feature also combines different EC2 instance types and pricing models. You control the instance types used to build your group, and groups always look for the lowest cost while meeting the other requirements you’ve set. This option may be a popular choice, as ASGs are more familiar to customers than Spot Fleet and suit many different workload types. If you switch part or all of your ASGs over to Spot Instances, you may be able to save up to 90% compared to On-Demand Instances.
Another interesting feature worth noting is Amazon’s capacity-optimized spot allocation strategy. When customers diversify their Fleet or Auto Scaling group, the system launches capacity from the most available capacity pools, effectively decreasing interruptions. In fact, by switching to capacity-optimized allocation, users have been able to reduce their overall interruption rate by about 75%.
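A sketch of how that combination might look with boto3 – an Auto Scaling group mixing a small On-Demand baseline with Spot capacity, diversified across instance types and using the capacity-optimized strategy. The group name, subnets, and launch template ID are placeholders.

```python
# A sketch of an Auto Scaling group mixing On-Demand and Spot with the
# capacity-optimized allocation strategy described above.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="my-spot-asg",                   # placeholder name
    MinSize=2,
    MaxSize=10,
    VPCZoneIdentifier="subnet-0abc1234,subnet-0def5678",  # placeholder subnets
    MixedInstancesPolicy={
        "LaunchTemplate": {
            "LaunchTemplateSpecification": {
                "LaunchTemplateId": "lt-0123456789abcdef0",  # placeholder
                "Version": "$Latest",
            },
            # Diversify across instance types to deepen the capacity pools:
            "Overrides": [
                {"InstanceType": "m5.large"},
                {"InstanceType": "m5a.large"},
                {"InstanceType": "m4.large"},
            ],
        },
        "InstancesDistribution": {
            "OnDemandBaseCapacity": 1,
            "OnDemandPercentageAboveBaseCapacity": 0,  # everything else on Spot
            "SpotAllocationStrategy": "capacity-optimized",
        },
    },
)
```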
Is “Eviction” Driving People Away?
There is one main caveat when it comes to spot instances – they are interruptible. All three major cloud providers have mechanisms in place for these spare capacity resources to be interrupted, related to changes in capacity availability and/or changes in pricing.
This means workloads can be “evicted” from a spot instance or VM: if the cloud provider needs the resource at any given time, your workloads can be kicked off. You are notified when an AWS spot instance is going to be evicted – AWS emits an event two minutes prior to the actual interruption. In Azure, you can opt to receive notifications that tell you when your VM is going to be evicted, but you get only 30 seconds to finish any jobs and perform shutdown tasks prior to the eviction, making it almost impossible to manage. Google Cloud also gives you 30 seconds to shut down your instances when you’re preempted so you can save your work for later, and Google always terminates preemptible instances after 24 hours of running. All of this means your application must be designed to be interruptible, and should expect interruption to happen regularly – difficult for some applications, but less so for others that are largely stateless or process work in small chunks.
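On AWS, that two-minute warning is exposed through the instance metadata service, so an application can watch for it and checkpoint its work. A minimal sketch (using the IMDSv1-style endpoint; IMDSv2 additionally requires a session token):

```python
# Watch for the two-minute Spot interruption notice by polling the EC2
# instance metadata service. The endpoint returns 404 until an interruption
# is scheduled, then 200 with the action and time.
import time
import urllib.request
import urllib.error

METADATA_URL = "http://169.254.169.254/latest/meta-data/spot/instance-action"

def interruption_pending() -> bool:
    """True once AWS has scheduled this instance for interruption."""
    try:
        with urllib.request.urlopen(METADATA_URL, timeout=2) as resp:
            return resp.status == 200
    except urllib.error.HTTPError:
        return False  # 404: no interruption scheduled
    except urllib.error.URLError:
        return False  # metadata service unreachable (e.g., not on EC2)

while not interruption_pending():
    time.sleep(5)  # the notice arrives roughly two minutes ahead

# Checkpoint state and drain work here before the instance is reclaimed.
print("Interruption notice received - beginning shutdown tasks")
```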
Companies such as Spot – recently acquired by NetApp (congrats!) – help in this regard by safely moving the workload to another available spot instance automatically.
Our research has indicated that fewer than one-quarter of users agree that their spot eviction rate is too low to be a concern – which means that for most, the eviction rate is a concern. Of course, it’s certainly possible to build applications to be resilient to eviction. For instance, applications can make use of many instance types in order to tolerate market fluctuations and make appropriate bids for each type.
AWS also offers an automatic scaling feature that can increase or decrease the target capacity of your Spot Fleet based on demand. The goal is to let users scale in conservatively in order to protect their application’s availability.
Early Adopters of Spot and Other Innovations May Be One and the Same
People who are hesitant to build for spot are more likely to use regular VMs, perhaps with Reserved Instances for savings. Meanwhile, those open to the idea of spot instances may be the same people who are early adopters of other tech, like serverless – and who therefore no longer have a need for spot.
For the right architecture, spot instances can provide significant savings. It’s a matter of whether you want to bother.
Google Cloud IAM (Identity and Access Management) is the core component of Google Cloud that keeps you secure. By adopting the “principle of least privilege” methodology, you can work towards having your infrastructure be only accessible by those who need it. As your organization grows in size, the idea of keeping your IAM permissions correct can seem daunting, so here’s a checklist of what you should think about prior to changing permissions. This can also help you as you continuously enforce your access management.
1. Who? (The “Identity”)
Narrowing down the person or thing that will be accessing resources is the first step in granting IAM permissions. This can be one of several options, including:
- A Google account (usually used by a human)
- A service account (usually used by a script/tool)
- A Google group
- A G Suite domain
Our biggest recommendation for this step is to keep this limited to as few identities as possible. While you may need to assign permissions to a larger group, it’s much safer to start with a smaller subset and add permissions as necessary over time. Also consider whether the identity is an automated task or a real person, since service accounts with distinct uses are easier to track and limit.
2. What Access? (The “Role”)
Google Cloud permissions often correspond directly with a specific Google Cloud REST API method. These permissions are named based on the GCP service, the specific resource, and the verb that is being allowed. For example, ParkMyCloud requires a permission named “compute.instances.start” in order to issue a start command to Google Compute Engine instances.
These permissions are not granted directly, but instead are included in a role that gets assigned to the identity you’ve chosen. There are three different types of roles:
- Primitive Roles – These roles (Owner, Editor, and Viewer) include a huge number of permissions across all GCP services, and should be avoided in favor of more specific roles based on need.
- Predefined Roles – Google provides many roles that describe a collection of permissions for a specific service, like “roles/cloudsql.client” (which includes the permissions “cloudsql.instances.connect” and “cloudsql.instances.get”). Some roles are broad, while others are limited.
- Custom Roles – If a predefined role doesn’t exist that matches what you need, you can create a custom role that includes a list of specific permissions.
Our recommendation for this step is to use a predefined role where possible, but don’t hesitate to use a custom role. The ParkMyCloud setup has a custom role that specifically lists the exact REST API commands that are used by the system. This ensures that there are no possible ways for our platform to do anything that you don’t intend. When following the “least privilege” methodology, you will find that custom roles are often used.
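As a hedged sketch of what that looks like programmatically, here’s how a minimal custom role could be created with the google-api-python-client library. The project ID, role ID, and permission list are illustrative – the start permission echoes the ParkMyCloud example above.

```python
# A sketch of creating a "least privilege" custom role via the IAM API.
# Uses application default credentials; project and role ID are placeholders.
from googleapiclient import discovery

service = discovery.build("iam", "v1")

role = service.projects().roles().create(
    parent="projects/my-project",       # placeholder project
    body={
        "roleId": "instanceScheduler",  # hypothetical role ID
        "role": {
            "title": "Instance Scheduler",
            "description": "Can list, start, and stop Compute Engine instances only.",
            "includedPermissions": [
                "compute.instances.list",
                "compute.instances.start",
                "compute.instances.stop",
            ],
            "stage": "GA",
        },
    },
).execute()

print(role["name"])  # projects/my-project/roles/instanceScheduler
```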
3. Which Item? (The “Resource”)
Once you’ve decided on the identity and the permissions, you’ll need to assign those permissions to a resource using a Cloud IAM policy. A resource can be very granular or very broad, including things like:
- GCP Projects
- Single Compute Engine instances
- Cloud Storage buckets
Each predefined role has a “lowest level” of resource that can be set. For example, the “App Engine Admin” role must be set at the project level, but the “Compute Load Balancer Admin” can be set at the compute instance level. You can always go higher up the resource hierarchy than the minimum. In the hierarchy, you have individual service resources, which all belong to a project, which can either be a part of a folder (in an organization) or directly a part of the organization.
Our recommendation, as with the Identity question, is to limit this to as few resources as possible. In practice, this might mean making a separate project to group together resources so you can assign a project-level role to an identity. Alternatively, you can just select a few resources within a project, or even an individual resource if possible.
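To tie the three questions together, here’s a sketch of granting a custom role to a service account at the project level with the Cloud Resource Manager API, using the standard read-modify-write pattern for IAM policies. The project, role, and service account names are placeholders.

```python
# A sketch of binding a role to an identity at the project level:
# read the current policy, append a binding, write the policy back.
from googleapiclient import discovery

crm = discovery.build("cloudresourcemanager", "v1")
project_id = "my-project"  # placeholder

policy = crm.projects().getIamPolicy(resource=project_id, body={}).execute()

policy["bindings"].append({
    "role": "projects/my-project/roles/instanceScheduler",  # custom role from above
    "members": ["serviceAccount:scheduler@my-project.iam.gserviceaccount.com"],
})

crm.projects().setIamPolicy(
    resource=project_id,
    body={"policy": policy},
).execute()
```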
And That’s All That IAM
These three questions provide the crucial decisions that you must make regarding Google Cloud IAM assignments. By thinking through these items, you can ensure that security is higher and risks are lower. For an example of how ParkMyCloud recommends a custom role assigned to a new service account in order to schedule and resize your VMs and databases, check out the documentation for ParkMyCloud GCP access, and sign up for a free trial today to get it connected securely to your environment.
Google Sustainability is an effort that ranges across the business, from Global Fishing Watch to environmental consciousness in the supply chain. Cloud computing has been a major draw on global energy in recent years: the amount of computing done in data centers more than quintupled between 2010 and 2018, yet the amount of energy consumed by the world’s data centers grew only six percent during that period, thanks to improvements in energy efficiency. Still, that’s a lot of power – which is why Google’s sustainability efforts for data centers and cloud computing are especially important.
Google Cloud Sustainability Efforts – As Old as Their Data Centers
Reducing energy usage has been an initiative for Google for more than 10 years. Google has been carbon neutral since 2007, and 2019 marked the third year in a row that it matched its energy usage with 100 percent renewable energy purchases. Google’s innovation in the data center market also comes from building facilities from the ground up instead of buying existing infrastructure, and from using machine learning to monitor and improve power usage effectiveness (PUE) and find new ways to save energy in its data centers.
When comparing the big three cloud providers on sustainability, AWS is by far the largest source of carbon emissions from the cloud globally, due to its dominance. However, AWS’s sustainability team is investing in green energy initiatives and is striving toward an ambitious goal of 100% renewable energy use by 2040 to become as carbon-neutral as Google has been. Microsoft Azure, on the other hand, has run on 100 percent renewable energy since 2014, but it would be considered a low-carbon electricity consumer in part because it runs less of the world than Amazon or Google.
Nonetheless, data centers from the big three cloud providers, wherever they are, all run on electricity. How the electricity is generated is the important factor in whether they are more or less favorable for the environment. For Google, reaching 100% renewable energy purchasing on a global and annual basis was just the beginning. In addition to continuing their aggressive move forward with renewable energy technologies like wind and solar, they wanted to achieve the much more challenging long-term goal of powering operations on a region-specific, 24-7 basis with clean, zero-carbon energy.
Why Renewable Energy Needs to Be the Norm for Cloud Computing
It’s no secret that cloud computing is a drain on resources, consuming roughly three percent of all electricity generated on the planet. That’s why it’s important for Google and other cloud providers to be part of the solution to global climate change. Renewable energy is an important element – matching operational energy use with renewable purchases and helping create pathways for others to buy clean energy. But it’s not just about fighting climate change; purchasing energy from renewable resources also makes good business sense, for two key reasons:
- Renewables are cost-effective – The cost to produce renewable energy technologies like wind and solar has come down precipitously in recent years. By 2016, the levelized cost of wind had fallen 60% and the levelized cost of solar 80%. In fact, in some areas renewable energy is the cheapest form of energy available on the grid. Reducing the cost to run servers reduces the cost for public cloud customers – and we’re in favor of anything that does that.
- Renewable energy inputs like wind and sunlight are essentially free – Having no fuel input for most renewables lets Google eliminate exposure to fuel-price volatility, which is especially helpful when managing a global portfolio of operations across a wide variety of markets.
Google Sustainability in the Cloud Goes “Carbon Intelligent”
In keeping with its goal of powering data centers with more renewable energy, Google recently announced that it will also time-shift workloads to take advantage of these resources, making data centers run harder when the sun shines and the wind blows.
“We designed and deployed this first-of-its-kind system for our hyperscale (meaning very large) data centers to shift the timing of many compute tasks to when low-carbon power sources, like wind and solar, are most plentiful,” Google announced.
Google’s latest advancement in sustainability is a newly developed carbon-intelligent computing platform that works by using two forecasts – one indicating the future carbon intensity of the local electrical grid near its data center, and another of its own capacity requirements – and using that data to “align compute tasks with times of low-carbon electricity supply.” The result is that workloads run when Google believes it can complete them while generating the lowest possible CO2 emissions.
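As a toy illustration only – not Google’s actual system, whose forecasts and scheduler are far more sophisticated – the core idea can be sketched in a few lines: given an hourly carbon-intensity forecast, place deferrable work in the greenest hours. All numbers below are made up.

```python
# A toy illustration of carbon-aware time-shifting: run flexible tasks in the
# hours with the lowest forecast grid carbon intensity. Values are invented.
forecast = [420, 390, 310, 220, 180, 210, 350, 400]  # gCO2/kWh, next 8 hours

flexible_task_hours = 3  # hours of deferrable compute to place

# Pick the greenest hours within the window:
greenest = sorted(range(len(forecast)), key=lambda h: forecast[h])[:flexible_task_hours]

for hour, intensity in enumerate(forecast):
    action = "run" if hour in greenest else "idle"
    print(f"hour {hour}: {intensity:>3} gCO2/kWh -> {action}")
```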
The first version of the carbon-intelligent computing platform focuses on shifting tasks to different times of the day within the same data center. But Google already has plans to expand its capability: in addition to shifting tasks in time, the platform will also move flexible compute tasks between data centers, so that more work is completed when and where doing so is more environmentally friendly. As the platform continues to generate data, Google will document its research and share it with other organizations in hopes they can develop similar tools and follow suit.
Forecasting with artificial intelligence and machine learning is a powerful combination, and Google is using it in this platform to anticipate workloads and improve the overall health, performance, and efficiency of its data centers. Combined with efforts to use cloud resources efficiently – only running VMs when needed, and not oversizing them – resource utilization can be improved to reduce your carbon footprint and save money.