Is this the one use case where a multi-cloud architecture actually makes sense?

There’s a lot of talk about multi-cloud architecture – and apparently, a lot of disagreement about whether there is any logical use case for using multiple public clouds.

How many use multi-cloud already?

First question: are companies actually using a multi-cloud architecture?

According to a recent survey by IDG: yes. More than half (55%) of respondents use multiple public clouds: 34% use two, 10% use three, and 11% use more than three. IDG did not define the term “multi-cloud,” and given the limited list of major public clouds, the “more than three” set might be counting smaller providers. Or respondents could be counting combinations such as AWS EC2 and Google G-Suite or Microsoft 365.

There certainly are some using multiple major providers – as one example, ParkMyCloud has at least one customer using compute infrastructure in AWS, Azure, Google Cloud, and Alibaba Cloud concurrently. In our observation, this is frequently manifested as separate applications architected on separate cloud providers by separate teams within the greater organization. 

Why do organizations (say they) prefer multi-cloud?

With more than half of IDG’s respondents reporting a multi-cloud architecture, now we wonder: why? Or at least – since we humans are poor judges of our own behavior – why do they say they use multiple clouds? In the survey, public cloud users indicated they adopted a multi-cloud approach to get best-of-breed platform and service options; other goals included cost savings, risk mitigation, and flexibility.

Are these good reasons to use multiple clouds? Maybe. The idea of mixing service options from different clouds within a single application is more a dream than reality. Even with Kubernetes. (Stay tuned for a rant post on this soon). 

Cloud economist Corey Quinn discussed this on a recent livestream with ParkMyCloud customer Rob Weaver. He asked Rob why his team at Workfront hadn’t yet completed a full Kubernetes architecture. 

Rob said,  “we had everything in a datacenter, and we decided, we’re going to AWS. We’re going there as fast as we can because it’s going to make us more flexible. Once we’re there, we’ll figure out how to make it save us money. We did basically lift and shift. …. Then, all of the sudden, we had an enormous deal come up, and we had to go into another cloud. Had we taken the approach of writing our own Lambdas to park this stuff, now GCP comes along. We would have to have written a completely different language, a completely different architecture to do the same thing. The idea of software-as-a-service and making things modular where I don’t really care what the implementation is has a lot of value.”

Corey chimed in, “I tend to give a lot of talks, podcasts, blog posts, screaming at people in the street, etc. about the idea that multi-cloud as a best practice is nuts and you shouldn’t be doing it. Whenever I do that, I always make it a point to caveat that, ‘unless you have a business reason to do it.’ You just gave the perfect example of a business reason that makes sense – you have a customer who requires it for a variety of reasons. When you have a strategic reason to go multi-cloud, you go multi-cloud. It makes sense. But designing that from day one doesn’t always make a lot of sense.” 

So, Corey would say: Rob’s situation is the one use case where a multi-cloud architecture actually makes sense. Do you agree?

Where are you on the Cloud Spend Optimization Maturity Curve?

Cloud spend optimization is always top of mind for public cloud users. It’s usually up there with Security, Governance, and Compliance – and now in 2020, 73% of respondents to Flexera’s State of the Cloud report said that ‘Optimize existing use of cloud (cost savings)’ was their #1 initiative this year. 

So – what the heck does that mean? There are many ways to spin it, and while “cost optimization” is broadly applicable, the strategies and tactics to get there will vary widely based on your organization and the maturity of your cloud use. 

Having this discussion within enterprises can be challenging, and perspectives change depending on who you talk to within an organization – FinOps? CloudOps? ITOps? DevOps? And outside of operations, what about the Line of Business (LoB) or the application owners? They may care about optimization not in terms of cost but in terms of performance. In reality, optimization can mean something different to cloud owners and users based on their roles and responsibilities.

Ultimately though, there are a number of steps that are common no matter who you are. To facilitate this discussion and understand where enterprises are in their cloud cost optimization journey, we created a framework called the Cloud Spend Optimization Maturity Curve to identify these common steps.

Cloud Spend Optimization Maturity Curve

While cloud users could be doing any combination of these actions, this is a representation of actions you can take to control cloud spend in order of complexity. For example, Visibility in and of itself does not necessarily save you money but can help identify areas ripe for optimization based on data. And taking scaling actions on IaaS may or may not save you money, but may help you improve application performance through better resource allocation, scaling either up (more $$$) or down (less $$$). 

Let’s dig into each in a little more detail:

  1. Visibility – visibility of all costs across clouds, accounts, and applications. This is cloud cost management 1.0, the ability to see cost data better through budgeting, chargeback, and showback.
  2. Schedule suspend – turn off idle resources like virtual machines, databases, scale groups, and container services when they are not being used, such as nights and weekends, based on usage data. This is most common for non-production resources but can deliver big savings – 65% is a good target that many ParkMyCloud customers achieve even during a free trial.
  3. Delete unused resources – this includes identifying orphaned resources and volumes and then deleting them. Even though you may not be using them, your cloud provider is still charging you for them.
  4. Sizing IaaS (non-production) – many enterprises overprovision their non-production resources, using only 5-10% of the capacity of a given resource – meaning 90% or more is unused (really!). By leveraging usage data, you can get recommendations to resize those underutilized resources and save 50% or more.
  5. RI / Savings Plan Management – AWS, Azure, and Google provide the ability to pre-buy capacity and get discounts ranging from 20-60% based on your commitments in both spend and terms. While the savings make it worthwhile, this is not a simple process (though it’s improved with AWS’s savings plans)  and requires a very good understanding of the services you will need 12-36 months out.
  6. Scaling IaaS (prod) – this requires collecting data and understanding both the infrastructure and application layers and taking sizing actions up or down to improve both performance and cost. Taking these actions on production resources requires strong communication between Operations and LoB.
  7. Optimizing PaaS – virtual machines, databases, and storage map to discrete resources that can be turned off and resized, but PaaS services top the maturity curve because many of them must be optimized in other ways, like scaling the service up or down based on usage or rearchitecting parts of your application.
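
To make the schedule-suspend math concrete, here’s a quick sketch in plain Python. The 12-hour weekday schedule is an assumption for illustration, not a prescribed policy:

```python
# Illustrative: estimate savings from "parking" a non-production resource
# on a nights-and-weekends schedule (assumed: on 12 hours/day, Mon-Fri).

HOURS_PER_WEEK = 7 * 24  # 168

def parked_savings(on_hours_per_day: float, on_days_per_week: int) -> float:
    """Fraction of compute hours saved by only running on the given schedule."""
    running = on_hours_per_day * on_days_per_week
    return 1 - running / HOURS_PER_WEEK

# A 12-hour weekday schedule leaves the resource off about 64% of the week,
# which is where the ~65% savings target for parked resources comes from.
savings = parked_savings(on_hours_per_day=12, on_days_per_week=5)
print(f"{savings:.0%}")  # → 64%
```

Since parked hours simply aren’t billed for on-demand compute, the fraction of off-hours maps directly to the savings fraction.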

For more ways to reduce costs, check out the cloud waste checklist for 26 steps to take to eliminate wasted spend at a more granular level.

Azure Classic vs. ARM VMs: It’s Time to Migrate

We get requests from customers occasionally about whether ParkMyCloud can manage Microsoft Azure Classic vs. ARM VMs. Short answer: no. Since Azure has already announced the deprecation of Azure classic resources – albeit not until March 2023 – you’ll find similar answers from other third-party services. Microsoft advises using only Resource Manager VMs. In fact, unless you already had classic VMs as of February 2020, you cannot create new classic VMs.

As of February, though, 10% of IaaS VMs still used the classic deployment model – so there are a lot of users with workloads that need to be migrated in order to use third-party tools and new services, and to avoid the 2023 deprecation.

Azure Classic vs. ARM VM Comparison

Azure Classic and Azure Resource Manager (ARM) are two different deployment models for Azure VMs. In the classic model, resources exist independently, without groups for applications: resource states, policies, and tags are all managed individually, and if you need to delete resources, you do so one by one. This quickly becomes a management challenge, with individual VMs liable to be left running, untagged, or with the wrong access permissions.

Azure Resource Manager, on the other hand, provides a deployment model that allows you to manage resources in groups, which are typically divided by application with sub-groups for production and non-production, although you can use whatever groupings make sense for your workloads. Groups can consist of VMs, storage, virtual networks, web apps, databases, and/or database servers. This allows you to maintain consistent role-based access controls, tagging, cost management policies, and to create dependencies between resources so they’re deployed in the correct order. Read more: how to use Azure Resource Groups for better VM management.
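
To make the management difference concrete, here’s a toy sketch – plain Python, not the Azure SDK – contrasting per-resource cleanup under the classic model with group-scoped cleanup under ARM (resource names are made up):

```python
# Toy model (not the Azure API): classic resources are tracked individually,
# while ARM resources belong to a group that can be operated on as a unit.

classic_resources = {"vm-web-01", "vm-web-02", "disk-web-01"}  # no grouping

arm_groups = {
    "webapp-prod": {"vm-web-01", "vm-web-02", "disk-web-01"},
    "webapp-dev": {"vm-dev-01"},
}

# Classic: deleting an application means remembering every piece yourself.
for name in ["vm-web-01", "vm-web-02", "disk-web-01"]:
    classic_resources.discard(name)  # forget one and it keeps billing

# ARM: deleting the resource group removes everything it contains.
deleted = arm_groups.pop("webapp-prod")
print(f"Deleted {len(deleted)} resources in one operation")
```

The same group-level scoping is what makes consistent tagging, access control, and cost policies practical at scale.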

How to Migrate to Azure Resource Manager VMs

For existing classic VMs that you wish to migrate to ARM, Azure recommends planning and a lab test in advance. There are four ways to migrate various resources:

  • Migration of VMs, not in a virtual network – they will need to be on a virtual network on ARM, so you can choose a new or existing virtual network. These VMs will need to be restarted as part of the migration.
  • Migration of VMs in a virtual network – these VMs do not need to be restarted and applications will not incur downtime, as only the metadata is migrating – the underlying VMs run on the same hardware, in the same network, and with the same storage. 
  • Migration of storage accounts – you can deploy Resource Manager VMs in a classic storage account, so that compute and network resources can be migrated independently of storage. Then, migrate over storage accounts.
  • Migration of unattached resources – the following may be migrated independently: storage accounts with no associated disks or VMs, and network security groups, route tables, and reserved IPs that are not attached to VMs or networks. 
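
As a hypothetical summary of the four paths above – this is illustrative logic only, not an Azure tool or API – you could encode the decision like so:

```python
# Hypothetical helper (illustrative only) summarizing the four classic-to-ARM
# migration paths and whether a VM restart is expected along the way.

def migration_path(resource: str, in_vnet: bool = False, attached: bool = True):
    """Return (path, requires_restart) for a classic resource."""
    if not attached:
        # storage accounts with no disks/VMs, NSGs, route tables, reserved IPs
        return ("migrate independently", False)
    if resource == "vm":
        if in_vnet:
            # metadata-only: same hardware, network, and storage underneath
            return ("migrate within existing virtual network", False)
        return ("attach to a new or existing virtual network", True)
    if resource == "storage_account":
        return ("migrate after compute and network resources", False)
    raise ValueError(f"unknown resource type: {resource}")

print(migration_path("vm", in_vnet=False))  # restart required for this path
```

The key takeaway it encodes: VMs already in a virtual network migrate without downtime, while VMs outside one need a restart.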

However you choose to migrate, we recommend that you move toward ARM resources and eliminate or migrate classic resources as soon as you can. Farewell, classic!

AWS vs Azure vs Google Free Tier Comparison

Whether you’re new to public cloud altogether or already use one provider and are interested in trying another, you may be interested in a comparison of the AWS vs Azure vs Google free tier.  The big three cloud providers – AWS, Azure and Google Cloud – each have a free tier available that’s designed to give users the cloud experience without all the costs. They include free trial versions of numerous services so users can test out different products and learn how they work before they make a huge commitment. While they may only cover a small environment, it’s a good way to learn more about each cloud provider. For all of the cloud providers, the free trials are available only to new users.

AWS Free Tier Offerings

The AWS free tier includes more than 60 products. There are two different types of free options available depending on the product: always free and 12 months free. The 12-months-free services are for new customers and can be used free of charge (up to a specific level of usage) for one year from the date the account was created. Keep in mind that once the free 12 months are up, your services will start to be charged at the normal rate. Be prepared and review this checklist of things to do when you outgrow the AWS free tier.

Azure Free Tier Offerings

The Azure equivalent of a free tier is referred to as a free account. As a new user in Azure, you’re given a $200 credit that must be used in the first 30 days after activating your account. When you’ve used up the credit or the 30 days have expired, you’ll have to upgrade to a paid account if you wish to continue using certain products – so ensure that you have a plan to reduce Azure costs in place. If you don’t need the paid products, there’s also the always free option.

Some of the ways people choose to use their free account are to gain insights from their data, test and deploy enterprise apps, create custom mobile experiences and more. 

Google Cloud Free Tier Offerings

The Google Cloud Free Tier is essentially an extended free trial that gives you access to free cloud resources so you can learn about Google Cloud services by trying them on your own. 

The Google Cloud Free Tier has two parts: a 90-day free trial with a $300 credit to use with any Google Cloud services, and always free, which provides limited access to many common Google Cloud resources, free of charge. Google Cloud gives you a little more time with your credit than Azure – you get the full 90 days of the free trial to use it. Unlike free trials from the other cloud providers, Google does not automatically charge you once the trial ends, so you’re guaranteed that the free tier is actually 100% free. Keep in mind that your trial ends after 90 days or once you’ve exhausted the $300 credit. During the trial, any usage beyond the free monthly usage limits is covered by the credit; once the trial ends, you must upgrade to a paid account to continue using Google Cloud.

Free Tier Limitations

It’s important to note that the always-free services vary widely between the cloud providers and there are usage limitations. Keep in mind the cloud providers’ motivations: they want you to get attached to the services so you start paying for them. So, be aware of the limits before you spin up any resources, and don’t be surprised by any charges. 

In AWS, when your free tier expires or if your application use exceeds the free tier limits, you pay standard, pay-as-you-go service rates. Azure and Google both offer credits for new users that start a free trial, which are a handy way to set a spending limit. However, costs can get a little tricky if you aren’t paying attention. Once the credits have been used you’ll have to upgrade your account if you wish to continue using the products. Essentially, the credit that was acting as a spending limit is automatically removed so whatever you use beyond the free amounts, you will now have to pay for. In Google Cloud, there is a cap on the number of virtual CPUs you can use at once – and you can’t add GPUs or use Windows Server instances.

In Azure, certain amounts of popular products remain free for 12 months after you upgrade your account. After 12 months, any products you’re still using will continue to run unless decommissioned, and you’ll be billed at the standard pay-as-you-go rates.

Another limitation is that commercial software and operating system licenses typically aren’t available under the free tiers.

These offerings are “use it or lose it” – if you don’t use all of your credits or monthly usage allowances, there is no rollover into future months.
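
As a rough illustration of how monthly allowances and trial credits interact, here is a hedged sketch – the allowance, usage, and unit price are made-up example numbers, not quoted from any provider:

```python
# Illustrative sketch of "use it or lose it" free tier billing: monthly free
# allowances reset each month with no rollover, and any overage draws down
# the trial credit before you are charged (example numbers only).

def monthly_bill(usage, free_allowance, credit, unit_price):
    """Return (charge, remaining_credit) for one month of usage."""
    overage = max(0.0, usage - free_allowance)  # unused allowance is lost
    cost = overage * unit_price
    covered = min(cost, credit)                 # credit absorbs cost first
    return cost - covered, credit - covered

credit = 300.0  # e.g. a Google Cloud-style trial credit
charge, credit = monthly_bill(usage=900, free_allowance=750,
                              credit=credit, unit_price=0.05)
print(charge, credit)  # the credit covers the 150-hour overage
```

Once the credit hits zero (or the trial window closes), the same overage becomes a real charge at pay-as-you-go rates.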

Popular Services, Products, and Tools to Check Out for Free

AWS has 33 products that fall under the one-year free tier – here are some of the most popular: 

  • Amazon EC2 Compute: 750 hours per month of Linux, RHEL, or SLES t2.micro or t3.micro instance usage, plus 750 hours per month of Windows t2.micro or t3.micro instance usage (which instance type is free depends on region).
  • Amazon S3 Storage: 5GB of standard storage
  • Amazon RDS Database: 750 hours per month of db.t2.micro database usage using MySQL, PostgreSQL, MariaDB, Oracle BYOL, or SQL Server, 20 GB of General Purpose (SSD) database storage and 20 GB of storage for database backups and DB Snapshots. 

For the always-free option, you’ll find a number of products as well, some of these include:

  • AWS Lambda: 1 million free compute requests per month and up to 3.2 million seconds of compute time per month.
  • Amazon DynamoDB: 25 GB of database storage per month, enough to handle up to 200M requests per month.
  • Amazon CloudWatch: 10 custom metrics and alarms per month, 1,000,000 API requests, 5GB of Log Data Ingestion and Log Data Archive and 3 Dashboards with up to 50 metrics.
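
For a back-of-the-envelope check against those Lambda limits, here’s a small sketch. The 400,000 GB-second allowance is AWS’s published free tier figure at the time of writing; treat the exact numbers as assumptions to verify against current pricing:

```python
# Rough check of whether a workload fits in the Lambda always-free tier.
# AWS grants 1M requests and 400,000 GB-seconds per month; at the minimum
# 128 MB memory size that works out to the "up to 3.2 million seconds" above.

FREE_REQUESTS = 1_000_000
FREE_GB_SECONDS = 400_000

def within_lambda_free_tier(requests: int, avg_duration_s: float,
                            memory_mb: int) -> bool:
    gb_seconds = requests * avg_duration_s * (memory_mb / 1024)
    return requests <= FREE_REQUESTS and gb_seconds <= FREE_GB_SECONDS

# 400,000 GB-s divided by 128/1024 GB = 3.2M seconds of compute at 128 MB
assert FREE_GB_SECONDS / (128 / 1024) == 3_200_000

print(within_lambda_free_tier(800_000, 0.3, 128))  # → True
```

Note that raising the memory size eats the GB-second allowance proportionally faster, so the “3.2 million seconds” only holds at 128 MB.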

Azure has 19 products that are free each month for 12 months – here are some of the most popular:

  • Linux and Windows virtual machines: 750 hours (using B1S VM) of compute time 
  • Managed Disk Storage: 64 GB x 2 (P6 SSD) 
  • Blob Storage: 5GB (LRS hot block) 
  • File Storage: 5GB (LRS File Storage) 
  • SQL databases: 250 GB

For their always free offerings, you’ll find even more popular products – here are a few:

  • Azure Kubernetes Service: no charge for cluster management; you only pay for the virtual machines and the associated storage and networking resources consumed.
  • Azure DevOps: 5 users for open source projects and small projects (with unlimited private Git repos). For larger teams, the cost ranges from $6-$90 per month.
  • Azure Cosmos DB (400 RU/s provisioned throughput)

Unlike AWS and Azure, Google Cloud does not have a 12-months-free offering. However, Google Cloud does still have a free tier with a wide range of always free services – some of the most popular include:

  • Google BigQuery: 1 TB of queries and 10 GB of storage per month.
  • Kubernetes Engine: One zonal cluster per month
  • Google Compute Engine: 1 f1-micro instance per month only in U.S. regions. 30 GB-months HDD, 5 GB-months snapshot in certain regions and 1 GB of outbound network data from North America to all region destinations per month.
  • Google Cloud Storage: 5 GB of regional storage per month, only in the US. 5,000 Class A, and 50,000 Class B operations, and 1 GB  of outbound network data from North America to all region destinations per month.


Check out our blog posts on free credits for each cloud provider to see how you can start saving.

ServiceNow Governance Should Enable Users, It Usually Constrains Them

When most enterprise users hear that their organization will start heavily using ServiceNow governance, they assume that their job is about to get much harder, not easier. This stems from admins putting overly restrictive policies in place, even with the good intentions of preventing security or financial problems. The negative side effect often manifests itself as a huge limitation for users who are just trying to do their job. Ultimately, this can lead to “shadow IT”, angry users, and inefficient business processes. So how can you use ServiceNow governance to increase efficiency rather than hinder it?

What is ServiceNow governance?

One of the main features of ServiceNow is the ability to implement processes for approvals, requests, and delegation. Governance in ServiceNow includes the policies and definitions of how decisions are made and who can make those decisions. For example, if a user needs a new virtual machine in AWS, they can be required to request one through the ServiceNow portal. Depending on the choices made during this request, cloud admins or finance team members can be alerted to this request and be asked to approve the request before it is carried out. Once approved, the VM will have specific tags and configuration options that match compliance and risk profiles.
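
Here’s an illustrative sketch of that request-and-approval flow in Python. It is not the ServiceNow API – just a toy model of the policy it implements, with hypothetical roles and tag names:

```python
# Toy model of a governance workflow: a VM request is routed to approvers
# and only provisioned - with required tags - once approved.

from dataclasses import dataclass, field

@dataclass
class VMRequest:
    requester: str
    instance_type: str
    tags: dict = field(default_factory=dict)
    status: str = "pending"  # pending -> approved -> provisioned

def approve(request: VMRequest, approver: str, approvers: set) -> VMRequest:
    if approver not in approvers:
        raise PermissionError(f"{approver} cannot approve requests")
    request.status = "approved"
    return request

def provision(request: VMRequest, required_tags: set) -> VMRequest:
    if request.status != "approved":
        raise RuntimeError("request must be approved before provisioning")
    missing = required_tags - request.tags.keys()
    if missing:
        raise ValueError(f"missing required tags: {missing}")
    request.status = "provisioned"
    return request

req = VMRequest("dev-user", "t3.micro", tags={"owner": "dev-user", "env": "dev"})
approve(req, "cloud-admin", approvers={"cloud-admin", "finance-lead"})
provision(req, required_tags={"owner", "env"})
print(req.status)  # → provisioned
```

The gates are the point: an unapproved request or one missing required tags never reaches provisioning, which is exactly the compliance guarantee the governance policy is after.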

What Drives Governance?

Governance policies are implemented with some presumably well-intentioned business goal in mind. Some organizations are trying to manage risk through approvals and visibility. Others are trying to rein in IT spending by guiding users to lower-cost alternatives to what they were requesting.

Too often, to the end user, the goal gets lost behind the frustration of actions being slowed, blocked, or redirected by the (beautifully automated) red tape. Admins lose sight of the central business needs while implementing a governance structure that is trying to protect those business needs. For users to comply with these policies, it’s crucial that they understand the motivations behind them – so they don’t work around them. 

In practice, this turns into a balancing act. The guiding question that needs to be answered by ServiceNow governance is, “How can we enable our users to do their jobs while preventing costly (or risky) behavior?” 

Additionally, it’s critical that new policies are clearly communicated and that they hook into existing processes. That’s not to say this is easy. To be done well, it requires a team of technical and business stakeholders to provide their needs and perspectives. The technical possibilities and limitations must match up with the business needs and overall organizational plans, while avoiding roadblocks and managing edge cases. There’s a lot to mesh together, and each organization has unique challenges and desires, which makes this whole process hard to generalize.

The End Result

At ParkMyCloud, we try to help facilitate these kinds of governance frameworks.  The ParkMyCloud application allows admins to set permissions on users and give access to teams. By reading from resource tags, existing processes for tagging and naming can be utilized. New processes around resource schedule management can be easily communicated via chat or email notifications. Users get the access they need to keep doing their job, but don’t get more access than required.  Employing similar ideas in your ServiceNow governance policies can make your users successful and your admins happy.