10 Things You Should Know Before Buying Azure Reserved Instances

Azure Reserved Instances are a way to reduce Azure costs by committing to a one- or three-year term for a virtual machine, in exchange for a discount of up to 72% compared to pay-as-you-go. Of course, before you lock in such a commitment, there are a few things you should know about this purchasing option – here are 10.

1. Azure Reserved Instances are a purchasing option.

First, you should understand that what you’re “reserving” is the pricing and purchasing option – the virtual machines themselves are the same ones you can pay for through pay-as-you-go pricing. (If this seems counterintuitive to the idea of “that virtual machine I reserved,” recall that a reservation works more like a billing credit applied retroactively to matching usage than a specific VM with your name on it.)

2. Reservations are “use it or lose it”. 

Important: reservation discounts are “use it or lose it.” If no resources match your reservation for a given hour, you lose the value of the reservation for that hour. This is why you should always ensure that you have predictable, full-time usage planned before reserving capacity.
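
To see what “use it or lose it” means for your budget, here is a rough break-even sketch. The hourly rate and discount below are hypothetical placeholders, not published Azure prices; the point is simply that a reservation only pays off once the matching VM runs more than (1 − discount) of the hours in the term.

```python
# Rough break-even sketch for a reservation vs. pay-as-you-go.
# All rates and the discount are hypothetical examples, not Azure prices.

PAYG_RATE = 0.10             # $/hour, pay-as-you-go (hypothetical)
RESERVATION_DISCOUNT = 0.40  # 40% off pay-as-you-go (hypothetical)
HOURS_PER_YEAR = 8760

reserved_cost = PAYG_RATE * (1 - RESERVATION_DISCOUNT) * HOURS_PER_YEAR

def payg_cost(utilization):
    """Pay-as-you-go cost if the VM only runs `utilization` of the hours."""
    return PAYG_RATE * HOURS_PER_YEAR * utilization

# The reservation is billed for every hour whether you use it or not,
# so it only wins once utilization exceeds (1 - discount).
print(f"Reservation cost/year: ${reserved_cost:,.0f}")
print(f"Break-even utilization: {1 - RESERVATION_DISCOUNT:.0%}")
for u in (0.25, 0.50, 0.60, 0.75, 1.00):
    print(f"  {u:.0%} utilization -> PAYG ${payg_cost(u):,.0f} "
          f"vs reserved ${reserved_cost:,.0f}")
```

With a hypothetical 40% discount, the reservation only wins if the VM runs more than about 60% of the time – anything less and pay-as-you-go would have been cheaper.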

3. They’re not available for everything… but perhaps more than you’d guess.

Reservations are available for several services, including virtual machines, SQL Database compute capacity, and Azure Cosmos DB throughput.

Keep in mind what services are covered by your reservation:

  • Reserved Virtual Machine Instance – the reservation covers compute costs, but not software, networking, or storage costs.
  • Azure Cosmos DB reserved capacity – reservations are for the provisioned throughput – not storage or networking charges. 
  • SQL Database reserved vCore – the reservation covers the compute costs, but not licenses.
  • SQL Data Warehouse – reservations cover “compute Data Warehouse Units” (cDWU), or units of CPU, memory, and IO – but not storage or networking charges.
  • App Service stamp fee – reservations cover the stamp usage, but not workers or other resources associated with the stamp.

There are some limitations to availability. You cannot purchase reservations for A-series, Av2-series, or G-series VMs; for any VM series or size in preview; or in the Germany or China regions. In some cases, reservations may also be limited due to low capacity in a region.

4. You need to set a “scope” for the Reserved Instance to apply.

Another concept to be familiar with is the reservation’s “scope” – in other words, which subscriptions or resource groups are eligible for the discount you are purchasing. Scope can be limited to a single resource group, a single subscription, or shared across multiple eligible subscriptions, as long as billing is tied together.

5. Instance sizes are flexible, automatically.

When you purchase Azure Reserved Instances, there is an option to “optimize for instance size flexibility,” which is selected by default. This means the reservation can apply to other VM sizes in the same instance size flexibility group, which makes each reservation a bit more broadly applicable.
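
To make the idea concrete, here is a small sketch of how proportional coverage works within a size-flexibility group. The sizes and ratios below are illustrative only – check Azure’s published ratio table for the real values.

```python
# Illustrative sketch of instance size flexibility: a reservation for a
# larger size can cover several smaller VMs in the same size-flexibility
# group, proportionally to their ratios. The sizes and ratios below are
# examples only -- check Azure's published ratio table for real values.

SIZE_RATIOS = {"D2s_v3": 1, "D4s_v3": 2, "D8s_v3": 4}

def covered_fraction(reserved_size, reserved_qty, running_vms):
    """Fraction of the running VMs' footprint covered by the reservation."""
    reserved_units = SIZE_RATIOS[reserved_size] * reserved_qty
    running_units = sum(SIZE_RATIOS[size] for size in running_vms)
    return min(1.0, reserved_units / running_units)

# One D8s_v3 reservation fully covers four running D2s_v3 VMs...
print(covered_fraction("D8s_v3", 1, ["D2s_v3"] * 4))  # 1.0
# ...but only half of the footprint of two running D8s_v3 VMs.
print(covered_fraction("D8s_v3", 1, ["D8s_v3"] * 2))  # 0.5
```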

6. Whether you pay upfront or monthly, the cost is the same.

Payment options: in September 2019, Azure released the ability to pay for reservations in monthly installments – at the same total cost you would pay up front, with no extra fees. There is no “partial upfront” option. This is in contrast to, say, AWS’s Reserved Instance options, which offer a variable discount depending on how much you pay upfront. The difference in approach may come down to the cancellation options – AWS users can resell unused capacity on the Reserved Instance marketplace, while Azure users pay a cancellation fee. Google Cloud offers only a billed-monthly option – with no option to cancel.

7. Azure recommends Reserved Instances based on your usage history.

Reservation recommendations and suggested quantities are shown when you purchase a VM reserved instance in the Azure portal, based on your last 30 days of usage and your savings potential. You can also see recommendations for individual subscriptions in Azure Advisor. For shared scope, you can use the API to get purchase recommendations.

8. Azure Reserved Instance purchases are used immediately, and don’t renew.

There are two important things to understand regarding terms and renewal. First, the term for your reservation starts immediately: you can’t schedule it to start at a future date. Second, Azure Reserved Instances do not automatically renew, and when the billing term expires, you’ll pay the pay-as-you-go rate. (We’ll be blogging next week on an option AWS has recently released to queue new reservations in advance.)

9. There are two solid options if you no longer need a reservation you already purchased.

What happens if you determine that you no longer need an Azure Reserved Instance you’ve purchased? There are two main options:

Exchange – you can exchange a reservation for another of the same type – that is, you can’t return a VM reservation to purchase a SQL reservation. This is only allowed if the total lifetime cost of the new purchase is greater than the leftover payments that are canceled for the returned reservation.

Cancel – instead, you can choose to cancel the reservation contract and request a refund. However, you are subject to an early termination fee of 12%. Note also that there’s a total refund limit of $50,000 in a rolling 12-month window. 
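
As a back-of-the-envelope illustration of how the 12% fee and the refund cap interact (the amounts are hypothetical, and the actual proration is calculated by Microsoft):

```python
# Back-of-the-envelope refund estimate when cancelling a reservation.
# Amounts are hypothetical; the 12% early termination fee and the
# $50,000 rolling 12-month refund limit come from the text above.

EARLY_TERMINATION_FEE = 0.12
ROLLING_REFUND_LIMIT = 50_000

def estimated_refund(unused_prepaid_amount, refunds_last_12_months=0.0):
    """Unused prorated amount minus the 12% fee, capped by the rolling limit."""
    refund = unused_prepaid_amount * (1 - EARLY_TERMINATION_FEE)
    remaining_cap = max(0.0, ROLLING_REFUND_LIMIT - refunds_last_12_months)
    return min(refund, remaining_cap)

print(estimated_refund(10_000))           # 8800.0
print(estimated_refund(60_000))           # capped at 50000
print(estimated_refund(10_000, 45_000))   # only 5000 of the cap remains
```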

10. Azure Reserved Instances make sense… in some situations.

For predictable production workloads, where you know you’ll have VMs running 24×7, Azure Reserved Instances can make sense. However, for your non-production workloads, this is likely not the case. You’ll save far more by using pay-as-you-go pricing, and scheduling those VMs to turn off when they’re not needed (ParkMyCloud can help with that.) 
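
A quick, hypothetical comparison shows why. With example rates and a made-up 40% reservation discount, parking a dev/test VM on a 12-hours-a-day, weekdays-only schedule saves more than the reservation would:

```python
# Hypothetical weekly comparison: a reserved instance discount vs. simply
# parking a non-production VM nights and weekends. Rates are examples only.

PAYG_RATE = 0.10            # $/hour (hypothetical)
RI_DISCOUNT = 0.40          # hypothetical one-year reservation discount
HOURS_PER_WEEK = 168
ON_HOURS_PER_WEEK = 12 * 5  # 12 hours/day, weekdays only

always_on_payg = PAYG_RATE * HOURS_PER_WEEK
reserved = always_on_payg * (1 - RI_DISCOUNT)
parked_payg = PAYG_RATE * ON_HOURS_PER_WEEK

print(f"24x7 pay-as-you-go:    ${always_on_payg:.2f}/week")
print(f"24x7 on a reservation: ${reserved:.2f}/week ({RI_DISCOUNT:.0%} saved)")
print(f"Parked 12x5 on PAYG:   ${parked_payg:.2f}/week "
      f"({1 - ON_HOURS_PER_WEEK / HOURS_PER_WEEK:.0%} saved)")
```

In this example the parking schedule saves roughly 64%, well beyond the hypothetical 40% reservation discount.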

Further reading:

How much do the differences between cloud providers actually matter?

How to save money with Microsoft Azure Enterprise Agreements

Can Azure Dev/Test pricing save you money?

How to Evaluate AWS RDS Pricing and Features

AWS RDS pricing – like all Amazon cloud pricing – can be a bit confusing. In this post, we’ll walk through how RDS pricing works and other features of RDS you should know about.

Since its release, Amazon’s Relational Database Service (RDS) has become increasingly popular among organizations looking to simplify setting up, operating, and scaling relational databases in AWS. RDS is a managed, automated service that takes over time-consuming, mundane administrative tasks. The ability to automate relational databases in the cloud has made RDS a cost-efficient option for those looking to control their cloud spend.

Traditional systems administration of servers, applications, and databases used to be a little simpler when it came to choices and costs.  For a long time, there was no other choice than to hook up a physical server, put on your desired OS, and install the database or application software that you needed.  Eventually, you could choose to install your OS on a physical server or on a virtual machine running on a hypervisor. Then, large companies started running their own hypervisor and allowed you to rent your VM for as long as you needed it on their servers.

In October 2009, Amazon started offering the ability to rent databases directly – without having to worry about the underlying OS – in a platform as a service (PaaS) offering called RDS. This service quickly became one more thing systems administrators should take into consideration when looking at their choices for infrastructure management. Let’s take a look at some of the features that come with AWS RDS and explore RDS pricing to get a better understanding of all the RDS costs.

RDS Basics

AWS RDS gives users the ability to run and manage cloud relational databases, changing the way users once interacted with cloud infrastructure. A great thing about RDS is that users don’t have to manage the infrastructure the database is running on: RDS takes over many of the once tedious tasks that were necessary to manage an AWS relational database. This allows system administrators to focus their time on other, more important projects. Another great feature is that you don’t have to worry about patching of the database software itself.

The most essential part of Amazon RDS is an AWS DB instance. These instances are isolated database environments in the cloud. The computation and memory capacity of an RDS DB instance depends on its DB instance class. In AWS RDS, you can choose to use on-demand instances or reserved instances. Pricing and features will vary depending on the database engine and instance class you use.

You can currently run RDS on six database engines: MySQL, Aurora (MySQL on steroids), Oracle, Microsoft SQL Server, PostgreSQL, and MariaDB. The DB instance classes are grouped into three categories: Standard (e.g., m4/m5), Memory Optimized (e.g., r4/r5), and Burstable (t2/t3). Each family has multiple sizes with varying numbers of vCPUs, GiBs of memory, and levels of network performance, and some can be input/output optimized.

Each RDS instance can be set up to be “multi-AZ”, leveraging replicas of the database in different availability zones within AWS.  This is often used for production databases. If a problem arises in one availability zone, failover to one of the replica databases happens automatically behind the scenes. You don’t have to manage it.  Along with multi-AZ deployments, Amazon offers “Aurora”, which has more fault tolerance and self-healing beyond multi-AZ, as well as additional performance features.
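
If you want to see what enabling Multi-AZ looks like in practice, here is a minimal sketch using boto3, the AWS SDK for Python. The identifier, credentials, and sizing are placeholders, and a real deployment would also configure networking, parameter groups, and backups.

```python
# Minimal sketch: creating a Multi-AZ MySQL RDS instance with boto3.
# The identifier, credentials, and sizing are placeholders; a real
# deployment would also configure subnet groups, security groups,
# parameter groups, and backup settings.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="example-db",      # placeholder name
    DBInstanceClass="db.r5.large",
    Engine="mysql",
    AllocatedStorage=100,                   # GiB
    MasterUsername="admin",
    MasterUserPassword="change-me-please",  # store secrets properly in practice
    MultiAZ=True,                           # standby replica in another AZ
)
```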

It’s important to ensure that you’re matching your workloads to the instance types that best meet their needs, so you have the best and most cost-efficient option for your database. There are different pricing options for the different RDS instance sizes and the database engines they run. Here’s a breakdown of AWS RDS instance types:

  • General Purpose
    • T3 instances – the latest burstable, general purpose instance type that provides a baseline level of CPU performance, plus the ability to burst CPU usage. Balance of compute, memory, and network.
    • T2 instances – similar to T3, T2 instances are burstable general-purpose performance instances that provide a baseline level of CPU performance with the ability to burst.
    • M5 instances – the latest general purpose instances with a balance of compute, memory, and network resources.
    • M4 instances – balance of compute, memory, and network resources.
  • Memory Optimized
    • R5 instances – latest generation of memory-optimized instances.
    • R4 instances – previous generation of memory-optimized instances.
    • X1e instances – optimized for high-performance databases, offering one of the lowest prices per GiB of RAM.
    • X1 instances – optimized for large-scale, enterprise-class and in-memory applications.

RDS Pricing

RDS is essentially a service running on top of EC2 instances, but you don’t have access to the underlying instances. Therefore, Amazon prices an RDS instance in a very similar way to an EC2 instance, which will be familiar once you’ve gotten a grasp on the structure already in place for compute. There are multiple components to the price of an instance, including the underlying instance size, data storage, multi-AZ capability, and outbound data transfer (inbound transfer is free). To add another layer of complexity, each database engine (MySQL, Oracle, etc.) has different prices for each of these factors. Aurora also charges for I/O on top of the other costs.

When you add all this up, the cost of an RDS instance can go through the roof for a high-volume database.  It also can be hard to predict the usage, storage, and transfer needs of your database, especially for new applications.  Also, the raw performance might be a lot less than what you might expect running on your own hardware or even on your own instances. What makes the price worth it?

What are the Actual Costs?

AWS offers a number of instance types that fit different engines and databases. Once you determine which instance you want and which engine you will be running, you can find a more specific price for your instance. With AWS RDS you only pay for what you use, and you can try RDS at no cost through the AWS Free Tier.

Pricing of instance types depends on the RDS database engine you are running. As an example of AWS RDS instance pricing, here’s how a memory-optimized R5 Large compares to an R5 Extra Large across engines, using pricing for the US East (N. Virginia) region:

You can see how the cost of an AWS RDS instance doubles, or more, just by going up one size – this is the same across instance types, sizes, and regions.

To further break down what you would be paying, here’s what Amazon will bill you based on:

  • DB instance hours  
  • Storage (per GB per month) 
  • I/O requests per month 
  • Provisioned IOPS per month 
  • Backup Storage
  • Data transfer 

You can use this AWS Monthly Calculator to help calculate what your costs would be. 
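
If you prefer to sanity-check the math yourself, here is a back-of-the-envelope estimator that simply sums those meters. All rates are placeholders rather than current AWS prices – plug in the numbers from the pricing page for your engine, instance class, and region.

```python
# Back-of-the-envelope RDS monthly estimate. Rates are placeholders,
# not current AWS prices -- the point is that the bill is a sum of meters.

def estimate_rds_monthly_cost(
    instance_hours,
    hourly_rate,               # depends on engine + instance class + region
    storage_gb,
    storage_rate_per_gb,
    provisioned_iops=0,
    iops_rate=0.0,
    backup_gb_beyond_free=0,
    backup_rate_per_gb=0.0,
    data_out_gb=0,
    data_out_rate_per_gb=0.0,
):
    return (
        instance_hours * hourly_rate
        + storage_gb * storage_rate_per_gb
        + provisioned_iops * iops_rate
        + backup_gb_beyond_free * backup_rate_per_gb
        + data_out_gb * data_out_rate_per_gb
    )

# Example: an r5.large-style instance running 24x7 for a 730-hour month.
print(estimate_rds_monthly_cost(
    instance_hours=730, hourly_rate=0.25,
    storage_gb=200, storage_rate_per_gb=0.115,
    data_out_gb=50, data_out_rate_per_gb=0.09,
))
```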

RDS vs. Installing a Database on EC2

We often see that the choice comes down to either using RDS for your database backend, or installing your own database on an EC2 instance the “traditional” way. From a purely financial perspective, installing your own database is almost guaranteed to be cheaper if you focus on AWS direct costs alone.  However, there’s more to factor into the decision than just the cost of the services. 

What often gets lost in the use of a service is the time-to-value savings (which includes your time, and potentially the opportunity cost/benefit of bringing services online faster). For example, by using RDS instead of your own database, you avoid the need to install and manage the OS and database software, as well as the ongoing patching of both. You also get automatic backups and recovery through the AWS console or AWS API. You avoid having to configure storage LUNs and worrying about optimizing striping for better I/O. Resizing instances is much simpler with RDS, whether you need to go smaller or bigger. High availability (either cold or warm) is available at the click of a button. All of this means less management for you and faster deployment times, though at a higher price point. If your company competes in a highly competitive market, these faster deployment times can make all the difference in the world to your bottom line.

Keep in mind that, as of fall 2017, you can start/stop RDS instances, which is particularly useful for dev/test environments. With this functionality, businesses can stop RDS instances so they are not running 24/7. However, while you are not charged for database instance hours when stopped, you will still pay for provisioned storage, manual snapshots, and automated backup storage.
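
As a minimal sketch of what that start/stop automation can look like with boto3 (the instance identifier is a placeholder, and in practice you would trigger these calls from a scheduler such as cron or Lambda, or let a tool like ParkMyCloud manage the schedules for you):

```python
# Minimal sketch: parking a non-production RDS instance with boto3.
# The identifier is a placeholder; in practice these calls would run from
# a scheduler (cron, Lambda, etc.) or be handled by a tool like ParkMyCloud.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

def park(db_instance_id):
    """Stop the instance; storage, snapshots, and backups are still billed."""
    rds.stop_db_instance(DBInstanceIdentifier=db_instance_id)

def unpark(db_instance_id):
    rds.start_db_instance(DBInstanceIdentifier=db_instance_id)

park("dev-reporting-db")    # e.g. Friday evening
unpark("dev-reporting-db")  # e.g. Monday morning
```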

How to Manage RDS with ParkMyCloud

ParkMyCloud makes “parking” public cloud compute resources – a.k.a. starting and stopping them on a schedule – as simple as possible. That includes the ability to park RDS instances, helping you save money on non-production databases.

By using our Logical Groups feature, you can create a simple “stack” containing both compute instances and RDS databases to represent a particular application. A logical group can be used to manage all the constituent parts of an application.

The start/stop times can be sequenced within the group, and a single schedule can be applied to the group for simplified management. If access is needed during scheduled stop times, you can override the schedules as needed through the web app or via commands in your chat provider, such as Slack or Microsoft Teams. You can also set start or stop delays within the Logical Group to customize the order – so if databases need to be started first and stopped last, you can set that level of granularity. This helps with cost optimization because you can organize and manage your RDS instances in one place, at one time.

Conclusion

AWS RDS pricing can get a bit tricky and really requires you to know the details of your database in order to accurately predict the bill. However, there are a ton of benefits to using the service, and it can really help streamline your systems administration by handling the management and deployment of your backend database. For companies that are moving to the cloud, or born in the cloud, RDS might be your choice when compared to running a database on a separate compute instance or on your own hypervisor, as it allows you to focus on your business and application rather than on being a database administrator. For larger, established companies with a large team of DBAs and well-established automation, or for I/O-intensive applications, an alternative may be a better option. By knowing the features, benefits, drawbacks, and cost factors, you can make the most informed decision for your AWS database needs.

3 Things to Look Forward to at Microsoft Ignite 2019

The ParkMyCloud team is looking forward to attending our first Microsoft Ignite conference this year! The sold-out event, which will take place November 4-8 in Orlando, is a gathering of more than 25,000 Microsoft users focused on building solutions and managing infrastructures. Here are three things to look forward to at the conference. 

1. Announcements

As with other tech conferences, Microsoft will make plenty of product and service announcements at Ignite 2019. At the 2018 conference, more than 150 announcements covered product and roadmap highlights across AI/Machine Learning, Analytics, Blockchain, Compute, Containers, Databases, Developer Tools, DevOps, Identity, Integration, IoT, Management and Governance, Microsoft Azure Stack, Migration, Mobile, Networking, Security, Storage, Web, and Windows Virtual Desktop. 

Highlights from last year include doing away with passwords using Microsoft Azure Active Directory, Surface Hub 2 whiteboards, Microsoft Teams updates, Azure Digital Twins, and more – so we’re sure 2019 will have some exciting releases in store. 

2. Speakers & Sessions

Featured speakers at the event include leaders from throughout Microsoft – but it doesn’t stop there. There are currently 1,445 sessions on the calendar – more than 500 of which are on Azure. When confronted with this volume of options, we typically recommend picking one or two goals – things you would like to learn or questions you would like answered for your business – and looking for relevant sessions from there. Many sessions will be recorded and posted online, so keep that in mind if you are interested in sessions at conflicting times – you can always come back to them.

That said, here are a few sessions we thought looked particularly interesting:

  • THR1004 –  A real-world smart city: How Richmond VA is transforming citizen services
  • WRK 3017 – Accelerating natural language processing development with Azure Machine Learning
  • UNC1010 – Achieving zero downtime deployments with Azure DevOps and Kubernetes
  • BRK3181 – Advanced monitoring: Five Azure Monitor best practices you should know
  • BRK3190 – Analyze, manage, and optimize your cloud cost with Azure Cost Management
  • BRK1074 – Announcing Bing Maps Geospatial Analytics Platform Preview for Enterprise Business Planning
  • BRK3062 – API management for microservices in a hybrid and multi-cloud world
  • BRK2021 – Architecting and implementing governance across your Azure subscriptions
  • THR2186 – Azure Databricks and Azure Machine Learning better together

Also check out the 140 (!) podcasts that will be broadcast live during the event.

3. Fun

Of course, part of the conference experience is the fun surrounding all the sessions. Be sure to spend some time in the expo hall to meet vendors, see product demos, get swag, and enter drawings for the chance to win cool prizes.

Don’t miss the Thursday evening after-party – this year, it’s at Universal Studios Florida and Universal’s Islands of Adventure, which means you can explore Hogsmeade and more, with access to the parks and rides, food, and drink.

See You at Microsoft Ignite 2019

We hope to see you at the event! We’ll be joining our parent company Turbonomic at booth #1713 in the expo hall. Schedule a time to stop by – we’d love to chat cost optimization for Azure and hear what you think of the event.

AWS vs Google Cloud Pricing – A Comprehensive Look

Since ParkMyCloud provides cost control for Amazon Web Services (AWS) along with Google Cloud Platform (GCP) resources, we thought it might be useful to compare AWS vs Google Cloud pricing. Additionally, we will take a look at the terminology and billing differences. While other services, such as networking, storage, and load balancing, factor into your overall bill, this article focuses mainly on compute charges.

Note: a version of this post was originally published in 2017. It has been completely rewritten and updated to include the latest AWS pricing and GCP pricing as of October 2019.

AWS and GCP Terminology Differences

In AWS, the compute service is called “Elastic Compute Cloud” (EC2), and the virtual servers are called “instances.”

In GCP, the service is referred to as “Google Compute Engine” (GCE), and the servers are also called “instances.”

A notable difference in terminology is that GCP distinguishes between “preemptible” and non-preemptible instances. Non-preemptible instances are the equivalent of AWS “on demand” instances.

Preemptible instances are similar to AWS “spot” instances, in that they are a lot less expensive but can be preempted with little or no notice. GCP preemptible instances can be stopped without being terminated; in November 2017, AWS introduced a similar feature with spot instance hibernation. Flocks of instances spun up from a snapshot according to scaling rules are called “auto scaling groups” in AWS.

A similar concept can be created within GCP using “instance groups.” However, instance groups are really more of a “stack,” created from an “instance template.” As such, they are more closely related to AWS CloudFormation stacks.

AWS vs. GCP Compute Sizing

Both AWS and GCP have a dizzying array of instance sizes to choose from, and doing an apples-to-apples comparison between them can be quite challenging. These predefined instance sizes are based upon number of virtual cores, amount of virtual memory and amount of virtual disk.

The two providers group their instance types into different categories.

AWS offers:

  • Free tier – inexpensive, burst performance (t3 family)
  • General purpose (m4/m5 family)
  • Compute optimized (c5 family)
  • GPU instances (p3 family)
  • FPGA instances (f1 family)
  • Memory optimized (x1, r5 family)
  • Storage optimized (i3, d2, h1 family)

GCP offers the following predefined types:

  • Free tier – inexpensive, burst performance (f1/g1 family)
  • Standard (n1-standard family)
  • High memory (n1-highmem family)
  • High CPU (n1-highcpu family)

However, GCP also allows you to make your own custom machine types, if none of the predefined ones fit your workload. You pay for uplifts in CPU/Hr and memory GiB/Hr. You can also add GPUs and premium processors as uplifts.
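
As a quick sketch of how custom machine type pricing adds up (the per-vCPU and per-GiB rates below are placeholders, not published GCP prices):

```python
# Sketch of how a custom machine type is priced: you pay per vCPU-hour plus
# per GiB-hour of memory. The rates below are placeholders, not GCP prices.

VCPU_RATE_PER_HOUR = 0.033   # $/vCPU-hour (placeholder)
MEM_RATE_PER_HOUR = 0.0045   # $/GiB-hour (placeholder)

def custom_machine_hourly_cost(vcpus, memory_gib):
    return vcpus * VCPU_RATE_PER_HOUR + memory_gib * MEM_RATE_PER_HOUR

# A 6 vCPU / 20 GiB shape that no predefined type matches exactly:
print(f"${custom_machine_hourly_cost(6, 20):.4f}/hour")
```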

With respect to pricing, here is how the two seem to compare, looking at some of the most common “workhorses” and focusing on CPU, memory, and cost.

The bottom line:

In general, for most workloads, AWS is less expensive on a CPU/hr basis. For compute-intensive workloads, GCP instances are generally less expensive.

Also, as you can see from the table, both providers charge uplifts for different operating systems, and those uplifts can be substantial. You really need to pay attention to the fine print. For example, GCP charges a 4-core minimum for all of their SQL Server uplifts (yikes!). And, in the case of Red Hat Enterprise Linux (RHEL) on GCP, they charge a 1-hour minimum for the uplift, billed in 1-hour increments after that. (We’ll talk more about how the providers charge you in the next section.)

AWS vs. Google Cloud Platform Pricing – Examining the Differences

Cost per hour is only one aspect of the cloud pricing equation, though. To better understand your monthly bill, you must also understand how the cloud providers actually charge you. AWS prices their compute time by the hour, but charges by the second, with a 1 minute minimum.

Google Compute Engine pricing is also listed by the hour for each instance, but since late 2017 GCP has billed in one-second increments, with a one-minute minimum charge. So, if you run for 30 seconds, you are charged for a full minute; if you run for 61 minutes and 10 seconds, you are charged for exactly 61 minutes and 10 seconds.

AWS Reserved Instances vs GCP Committed Use

Both providers offer deeper discounts off their normal pricing, for “predictable” workloads that need to run for sustained periods of time, if you are willing to commit to capacity consumption upfront. AWS offers Reserved Instances. Google offers Committed Use Discounts.  Both involve agreeing to pay for the life of the reservation or commitment, though some have you pay up-front versus paying per month. This model of payment can get you some significant discounts over on-demand workloads, but can limit your flexibility as a trade-off. Check out our other posts on AWS Reserved Instances and Google Committed Use Discounts.

GCP Sustained Use Discounts

In addition to the Committed Use Discounts, GCP also has a unique offering with no direct parallel in AWS: Sustained Use Discounts. These provide an automatic discount if you run a workload for more than 25% of the month, with bigger discounts for more usage. These discounts can save up to 30%, depending on your usage and instance size. The Google Cloud Pricing Calculator can help figure out how much this will affect your GCP costs.
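
Here is an approximate model of how that works for an N1-style machine type, assuming each successive quarter of the month is billed at 100%, 80%, 60%, and then 40% of the base rate. Treat the multipliers as our reading of the published tiers and verify against the calculator.

```python
# Approximate sustained use discount model for an N1-style machine type.
# Assumption: each successive quarter of the month is billed at 100%, 80%,
# 60%, then 40% of the base rate. Verify against the pricing calculator.

TIER_MULTIPLIERS = [1.00, 0.80, 0.60, 0.40]

def sustained_use_cost(base_monthly_cost, usage_fraction):
    """Cost of running `usage_fraction` (0.0-1.0) of the month."""
    cost, remaining = 0.0, usage_fraction
    for multiplier in TIER_MULTIPLIERS:
        tier = min(0.25, remaining)
        cost += base_monthly_cost * tier * multiplier
        remaining -= tier
        if remaining <= 0:
            break
    return cost

base = 100.0  # full-month on-demand cost (placeholder)
for frac in (0.25, 0.50, 0.75, 1.00):
    cost = sustained_use_cost(base, frac)
    print(f"{frac:.0%} of the month -> ${cost:.0f} "
          f"({1 - cost / (base * frac):.0%} off the on-demand rate)")
```

Running the full month works out to roughly 30% off the on-demand rate, which lines up with the maximum discount mentioned above.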

Conclusion

If you are new to public cloud, once you get past all the confusing jargon, the creative approaches to pricing, and the different ways providers charge for usage, the actual cloud services themselves are much easier to use than legacy on-premises services.

The public cloud services do provide much better flexibility and faster time-to-value. 

When comparing AWS vs. Google Cloud pricing, AWS EC2 on-demand pricing may on the surface appear to be more competitive than GCP pricing for comparable compute engines. However, when you examine specific workloads and factor in Google’s approach to charging for CPU/Hr time and their use of Sustained Use Discounts, GCP may actually be less expensive. 

In the meantime, ParkMyCloud will continue to help you turn off non-production cloud resources when you don’t need them, saving you money on your monthly cloud bills regardless of which public cloud provider you use.

Like this post? See also: How much do the differences between cloud providers actually matter? 

How to Save Money with Microsoft Azure Enterprise Agreements

As more large enterprises adopt Azure cloud, especially those that have traditionally used Microsoft tools, we have observed growing interest in Microsoft Azure Enterprise Agreements, commonly known as EAs. We thought it would be useful to understand more about Microsoft EAs, how they work with Azure, and what they mean to both the enterprise and the ISV.

What is an Azure Enterprise Agreement?

While you can create an Enterprise Agreement with Microsoft specifically for Azure, most companies using this option already have an EA in place for use of their software assets like Windows, Office, Sharepoint, System Center, etc. If you have an EA for other products, then you can simply add Azure to that existing agreement by making an upfront monetary commitment. You can then use eligible Azure cloud services throughout the year to meet the commitment. And you can pay for additional usage beyond the commitment, at the same rates. So, like any Enterprise License Agreement (ELA), including AWS’s EDP, you are committing to a contract term and volume to gain additional discounts.

According to Microsoft, the Enterprise Agreement is designed for organizations that want to license software and cloud services for a minimum three-year period. The Enterprise Agreement offers built-in savings ranging from 15 percent to 45 percent based on committed spend – and given how these commitments typically work, it is likely that the more you buy, the better your discount. The minimum listed commitment for an EA is 500 or more users or devices for commercial companies (250 for public sector), and Microsoft specifically states this minimum does not apply to the Server and Cloud Enrollment, an offering aimed at companies with EAs in place to help them standardize on Microsoft server and cloud technologies.

As it turns out, the Azure Enterprise commitment minimum is very low. You are required to make an upfront monetary commitment for each of the three years of the agreement, with a minimum order value of one “Monetary Commitment SKU” of $100 per month ($1,200/year). This low commitment makes sense: once an enterprise is on a cloud platform, it’s sticky – land and expand is the name of the game for Azure, AWS, and Google. They expect infrastructure to grow significantly beyond the minimum, and just need to get a foot in the door. And of course, the starting point in the cloud is supposed to be much cheaper and more flexible than on-prem infrastructure.

Benefits of an Azure Enterprise Agreement… Beyond Pricing

There are certain Azure-specific EA benefits besides just price to entice users to move off of Pay-As-You-Go. You can create and manage multiple Azure subscriptions with a single EA. You can also roll up and manage all your subscriptions, giving you an enterprise view of how many resource minutes you’re using per subscription. In addition, you can assign subscription burn to accounting departments and cost centers so you can more easily manage budgets and see spend at various roll up levels.

EAs give you access to certain features that you’d otherwise be required to purchase separately.  For example, an Azure EA gives you the option to purchase Azure Active Directory Premium, which will give you access to multi-factor authentication, 99.99% guaranteed uptime, and other features.  Pay-As-You-Go only gives you access to the free version of Azure AD.

Besides getting the best pricing and discounts, what are some of the other added benefits an EA might provide to an enterprise?

  • A common IT platform deployed across the organization.
  • Minimal up-front costs and the ability to budget more effectively by locking in pricing and spreading payments over three years. 
  • Flexibility to choose from Microsoft cloud services, on-premises software, or a mix of both and migrate on your own terms. 
  • Simplified purchasing with predictable payments through a single agreement for cloud services and software. 
  • Managed licensing throughout the life of your agreement with the help of a Microsoft Certified Partner or a Microsoft representative.

Now, for vendors like ParkMyCloud that need Azure pricing data to deliver our service, how are we affected by the EA? Not adversely: the good news is that Microsoft makes EA pricing available through dedicated APIs and the Azure Price Sheet. We can match this information to a customer by using their Offer ID, which identifies their EA subscription and its corresponding pricing (discounts).
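
For the curious, here is a rough sketch of what pulling that price sheet can look like. The endpoint path, API version, and response field names are our assumptions based on the Azure Consumption Price Sheet API – verify them against the current Azure documentation before relying on this.

```python
# Sketch: pulling EA/subscription pricing from the Azure Consumption
# Price Sheet API. The endpoint path, api-version, and response field names
# are assumptions -- verify against current Azure documentation.
# Assumes you already have a valid bearer token and subscription ID.
import requests

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder
TOKEN = "<bearer-token>"                    # placeholder

url = (
    "https://management.azure.com/subscriptions/"
    f"{SUBSCRIPTION_ID}/providers/Microsoft.Consumption/pricesheets/default"
)
resp = requests.get(
    url,
    params={"api-version": "2019-10-01"},   # assumed API version
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()
# Inspect the raw JSON -- the exact shape of the price sheet may differ.
for item in resp.json().get("properties", {}).get("pricesheets", []):
    print(item.get("meterId"), item.get("unitPrice"))
```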

How Else Can You Save Money on Azure?

Whether an Azure Enterprise Agreement makes sense for your organization is up to you to decide. Luckily, it’s not the only way to keep Azure costs in check – you can also explore Azure Reserved Instances, Azure Dev/Test pricing, and scheduling non-production resources to turn off when they’re not needed.

What is Google Cloud Anthos’ promise for hybrid & multi-cloud environments?

Earlier this year at the Google Cloud Next event, Google announced the launch of its new managed service offering for multi-cloud environments, Google Cloud Anthos. 

The benefits of public cloud, like cost savings and higher levels of productivity, are often presented as an “all or nothing” choice to enterprises. However, with this offering, Google is acknowledging that multi-cloud environments are the reality as organizations see the value of expanding their cloud platform portfolios. Anthos is Google’s answer to the challenges enterprises face when adopting cloud solutions alongside their on-prem environments. It aims to enable customers to evolve into a hybrid and multi-cloud environment to take advantage of scalability, flexibility, and global reach. In the spirit of “write once, run anywhere,” Anthos also promises to give developers the ability to build once and run apps anywhere across their multi-cloud environments.

Anthos embraces open-source technology

Google Cloud Anthos is based on the Cloud Services Platform that Google introduced last year. Google’s vision is to integrate this family of cloud services into one platform.

Anthos is generally available both on Google Cloud Platform (GCP) with Google Kubernetes Engine (GKE) and in data centers with GKE On-Prem. So how does Google aim to deliver on the multi-cloud promise? It embraces open-source technology standards to let you build, manage, and run modern hybrid applications on existing on-prem environments or in public cloud. Moreover, Anthos offers a flexible way to shift workloads from third-party clouds, such as Amazon Web Services (AWS) and Microsoft Azure, to GCP and vice versa, so users don’t have to worry about getting locked in to a single provider.

As a 100% software solution, Anthos gives businesses operational consistency by running quickly on any existing hardware. Anthos leverages open APIs, giving developers the freedom to modernize. And because it is based on GKE, it automatically receives the latest feature updates and security patches.

Rapid cloud transformation from Anthos

Google also introduced Migrate for Anthos at Cloud Next, which automates the process of migrating virtual machines (VMs) into containers in GKE, regardless of whether the VM runs on-prem or in the cloud. Migrate for Anthos makes workload portability less difficult, both technically and in terms of the developer skills required to migrate.

Though most digital transformations are a mix of different strategies, Google says that for the workloads that will benefit most from containers, Migrate for Anthos (currently in beta) delivers a fast, smooth path to modernization.

Streamlining multi-cloud management with Anthos

Another piece of the offering is Anthos Config Management, which lets users streamline configuration: they can create multi-cluster policies out of the box, set and enforce role-based access controls and resource quotas, and create namespaces. The ability to automate policy and security also works with Istio, the open-source service mesh for microservices.

The management platform also lets users create common configurations for all administrative policies that apply to their Kubernetes clusters, both on-prem and in the cloud. Users can define and enforce configurations globally, validate them with the built-in validator that reviews every change before it reaches the repository, and actively monitor them.

Expanded Services for Anthos

Google Cloud is expanding its Anthos platform with Anthos Service Mesh and Cloud Run for Anthos serverless capabilities, announced last week and currently in beta. 

The first, Anthos Service Mesh, is built on Istio APIs and is designed to connect, secure, monitor, and manage microservices running in containerized environments, all through a single administrative dashboard that tracks the application’s traffic. The new service aims to improve the developer experience by making it easier to manage and troubleshoot the complexities of a multi-cloud environment.

Another update Google introduced was Cloud Run for Anthos. This managed service for serverless computing allows users to easily run stateless workloads on a fully managed Anthos environment without having to manage the underlying cloud resources, and it only charges when the application is actively using resources. Cloud Run for Anthos can run workloads on Google Cloud or on-premises through GKE.

Anthos Compared

Both AWS and Azure have hybrid cloud offerings, but they are not the same as Anthos, mostly for one reason.

AWS Outposts brings native AWS services, infrastructure, and operating models to virtually any data center, co-location space, or on-premises facility – a similar operating idea to Anthos – using the same AWS APIs, tools, and infrastructure across on-prem and the AWS cloud to deliver a seamless and consistent hybrid experience.

Azure Stack, an extension of Azure for consistently building and running hybrid applications across cloud and on-prem environments, delivers a solution for workloads wherever they reside and connects them to Azure cloud services.

As you can see, the main difference is that both AWS Outposts and Azure Stack are limited to combining on-premises infrastructure and the respective cloud provider itself, with no support for other cloud providers, unlike Anthos. Google Cloud Anthos manages hybrid multi-cloud environments, not just hybrid cloud environments, making it a unique offering for multi-cloud environment users.