Has Azure Chargeback Improved with the New Cost Allocation Capabilities?


Microsoft Azure recently announced an addition designed to help with Azure chargeback: cost allocation, now in preview in Azure Cost Management + Billing. We’re always glad to see cloud providers making an effort to improve their native cost management capabilities for customers, so here’s a quick look at this update.

Chargeback for Cost Accountability

Cost allocation for cloud services is an ongoing challenge, and every organization handles it a bit differently depending on its structure and its decisions about billing and budgets. In some cases, separating by Azure subscription can make allocation easier, but in others, your organization may have shared costs such as networking or databases that need to be divided by business unit or customer. Either way, it is an obstacle that must be addressed for organizations to gain visibility, address inefficiencies, and climb the cloud spend optimization curve to actually reduce and optimize costs. 

Many IT organizations address this via an Azure chargeback setup, in which the IT department provisions and delivers services, and each department or group submits internal payment back to IT based on usage. Thus, it becomes an exercise in determining how to tag and define “usage”.

In some cases, showback can be used as an alternative or stepping stone toward chargeback. The content and dollar amounts are the same – but without the accountability driven by chargeback. For this reason, it can be difficult to motivate teams to reduce costs with showback alone. We have heard of teams using a variation on showback – “shameback” – in which IT takes the costs they’re showing back and gamifies savings, coupled with a public shame/reward mechanism, to drive cost-saving behavior.  

What Azure Added with the Preview Cost Allocation Capabilities

The cost allocation capabilities are currently in preview for Enterprise Agreement (EA) and Microsoft Customer Agreement (MCA) accounts. The feature allows users to identify costs that need to be split by subscription, resource group, or tag, and then move and allocate them in any of the following ways: distribute evenly, distribute proportional to total costs, distribute proportional to network, compute, or storage costs, or apply a custom distribution percentage. 
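
To make the "distribute proportionally" option concrete, here is a minimal sketch of the math behind it (the team names and dollar amounts are hypothetical, not from the Azure feature itself):

```python
def allocate_shared_cost(shared_cost, basis):
    """Split a shared cost (e.g. a networking bill) across business units,
    proportional to each unit's own costs: one of the distribution
    methods the preview offers."""
    total = sum(basis.values())
    return {unit: shared_cost * cost / total for unit, cost in basis.items()}

# Hypothetical numbers: a $1,000 shared networking bill split across two teams
shares = allocate_shared_cost(1000.0, {"team-a": 3000.0, "team-b": 1000.0})
print(shares)  # team-a carries 75% of the shared cost, team-b 25%
```

The even split and custom-percentage options are the same idea with a different `basis` (equal weights, or weights you choose yourself).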

Cost allocation does not affect your Azure invoice, and costs must stay within the original billing account. So, Azure did not actually add chargeback, but they did add visualization and reporting tools to facilitate chargeback processes within your organization, outside of Azure.

Improvements in the Right Direction – or Too Little, Too Late? 

Azure and AWS are slowly iterating and improving on their cost visibility, reporting, and management capabilities – but for many customers, it’s too little, too late. The lack of visibility and reporting within the cloud providers’ native offerings is what has led to many of the third-party platforms in the market. We suspect there is still a way to go before customers’ billing and reporting needs are fully met by the CSPs themselves. 

And of course, for organizations with a multi-cloud presence, the cloud costs generally need to be managed separately or via a third-party tool. There are some movements within the CSPs to at least acknowledge that their customers are using multiple providers, particularly on the part of Google Cloud. Azure Cost Management has done so in part as well, with the AWS connector addition to the platform, but it’s unclear whether the 1% charge of managed AWS spend is worth the price – especially when you may be able to pay a similar amount for specialized tools that have more features.

Looking for a Google Cloud Instance Scheduling Solution? As You Wish


Like other cloud providers, the Google Cloud Platform (GCP) charges for compute virtual machine instances by the amount of time they are running — which may lead you to search for a Google Cloud instance scheduling solution. If your GCP instances are only busy during or after normal business hours, or only at certain times of the week or month, you can save money by shutting these instances down when they are not being used. So can you set up this scheduling through the Google Cloud console? And if not – what’s the best way to do it?

This post was originally written by Bill Supernor in 2018. I have revised and updated it for 2020.

Why bother scheduling a Google VM to turn off?

As mentioned, depending on your purchasing option, Google Cloud pricing is based on the amount of time an instance is running, charged at a per-second rate. We find that at least 40% of an organization’s cloud resources (and often much more) are for non-production purposes such as development, testing, staging, and QA. These resources are only needed when employees are actively using them – so every second they are left running unused is wasted spend. Non-production VM instances often have predictable workloads, such as 7 AM to 7 PM, five days a week; on that schedule, the instance is needed only about 36% of the week, and the other 64% of spend is completely wasted. Inconceivable!

The good news is, that means these resources can be scheduled to turn off during nights and weekends to save money. So, let’s take a look at a couple of cloud scheduling options.

Scheduling Option 1: GCP set-scheduling Command

If you were to do a Google search on “google cloud instance scheduling,” hoping to find out how to shut your compute instances down when they are not in use, you would see numerous promising links. The first couple of references appear to discuss how to set instance availability policies and mention a gcloud command line interface for “compute instances set-scheduling”. However, a little digging shows that these interfaces and commands simply describe what happens when the underlying hardware for your Google virtual machine goes down for maintenance. The options in this case are to migrate the VM to another host (which appears to be a live migration) or to terminate the VM, and whether the instance should be restarted if it is terminated. The documentation goes so far as to say that the command is intended to let you set “scheduling options.” While it is great to have control over these behaviors, I feel I have to paraphrase Inigo Montoya: you keep using that word “scheduling” – I do not think it means what you think it means…

Scheduling Option 2: GCP Compute Task Scheduling

The next thing that looks schedule-like is the GCP Cron Service. This is a highly reliable networked version of the Unix cron service, letting you leverage the Google App Engine service to do all sorts of interesting things. One article describes how to use the Cron Service and Google App Engine to schedule tasks to execute on your Compute Instances. With some App Engine code, you could use this system to start and stop instances as part of regularly recurring task sequences. This could be an excellent technique for controlling instances for scheduled builds, or calculations that happen at the same time of a day/week/month/etc.

While very useful for certain tasks, this technique lacks flexibility. Google Cloud Cron Service schedules are configured by creating a cron.yaml file inside the App Engine application. The Cron Service triggers events in the application, and getting the application to do things like start or stop instances is left as an exercise for the developer. If you need to modify the schedule, you need to go back in and edit cron.yaml. It can also be non-intuitive to build a schedule around your working hours: you need one event for when you want to start an instance and another for when you want to stop it, and instances on different schedules each need their own event pairs. This brings us to the final issue, which is that any given application is limited to 20 scheduled tasks for free, up to a maximum of 250 for a paid application. Those sound like some eel-infested waters.
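
For reference, a start/stop schedule in cron.yaml looks roughly like this. The URL paths are hypothetical handlers your App Engine application would have to implement itself; the schedule syntax is App Engine's:

```yaml
cron:
- description: "start dev instances for the workday"
  url: /tasks/start-instances
  schedule: every mon,tue,wed,thu,fri 07:00
  timezone: America/New_York
- description: "stop dev instances overnight"
  url: /tasks/stop-instances
  schedule: every mon,tue,wed,thu,fri 19:00
  timezone: America/New_York
```

Note how each group of instances needs its own start and stop pair – which is exactly where the event limits start to bite.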

Scheduling Option 3: ParkMyCloud Google Cloud Instance Scheduling

Google Cloud Platform and ParkMyCloud – mawwage – that dweam within a dweam….

Given the lack of other viable instance scheduling options, we at ParkMyCloud created a SaaS application to automate instance scheduling, helping organizations cut their monthly cloud bills by 65% or more on AWS, Azure, and, of course, Google Cloud.

We aim to provide a number of benefits that you won’t find with, say, the GCP Cron Service. ParkMyCloud’s cloud management software:

  • Automates the process of switching non-production instances on and off with a simple, easy-to-use platform – more reliable than the manual process of switching GCP Compute instances off via the GCP console. Automatic on/off schedules make resource states easy to manage.
  • Provides a single-pane-of-glass view, allowing you to consolidate multiple clouds, multiple accounts within each cloud, and multiple regions within each account, all in one easy-to-use interface.
  • Does not require a developer background, coding, or custom scripting. It is also more flexible and cost-effective than having developers write scheduling scripts.
  • Can be used with a mobile phone or tablet.
  • Avoids the hard-coded schedules of the Cron Service. Users of ParkMyCloud’s GCP scheduler can temporarily override schedules if they need to use an instance on short notice.
  • Supports Teams and User Roles (with optional SSO), ensuring users will only have access to the resources you grant.
  • Helps you identify idle instances by monitoring instance performance metrics, displaying utilization heatmaps, and automatically generating utilization-based “SmartParking” schedule recommendations, which you can accept or modify as you wish...
  • Provides “rightsizing” recommendations to identify resources that are routinely underutilized and can be converted to a different Google Cloud server size to save 50-75% of the cost of the resource.  These recommendations incorporate custom GCP sizes, so you can adjust specifics around memory and CPU independent of each other.
  • Has a 14-day free trial, so you can try the GCP cloud scheduler platform out in your own environment. There’s also a free-forever tier, useful for startups and those on the Google Cloud free tier, as well as paid tiers with more advanced options for enterprises with a larger Google Cloud footprint.
  • Supports multiple GCP products, including Virtual Machines, CloudSQL databases, Autoscaling Groups, and GKE clusters and nodes.
  • Notifies users and admins of resource shutdowns, startups, and actions taken via Google Hangouts, Slack, Microsoft Teams, or Email.

How Much Can You Save with Google Cloud Scheduling?

While it depends on your exact schedule, many non-production Google Cloud VMs – those used for development, testing, staging, and QA – can be turned off for 12 hours per day on weekdays, and 24 hours per day on weekends. For example, the resource might be running from 7 AM to 7 PM Monday through Friday, and “parked” the rest of the week. This comes out to about 64% savings per resource.
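
The arithmetic behind that 64% figure is simple enough to sketch:

```python
# A non-production VM running 7 AM to 7 PM, Monday through Friday:
hours_per_week = 24 * 7      # 168 hours in a week
running_hours = 12 * 5       # 60 hours actually needed

running_fraction = running_hours / hours_per_week
savings_fraction = 1 - running_fraction

print(f"Running {running_fraction:.0%} of the week; "
      f"parking the rest saves about {savings_fraction:.0%}.")
```

Adjust the running hours to your own schedule to estimate your savings per resource.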

Currently, the average savings per scheduled VM in the ParkMyCloud platform is about $245/month. That does not account for any additional savings gained from rightsizing. 

How Enterprises Are Benefitting from ParkMyCloud’s Google Cloud Scheduler + Optimizer

If you’re not quite ready to start your own trial, check out this interview with Workfront, a work management software provider. Workfront uses both AWS and Google Cloud Compute Engine, and needed to coordinate cloud management software across both public clouds. They required automation in order to optimize and control cloud resource costs, especially given users’ tendency to leave resources running when they weren’t being used.

Workfront found that ParkMyCloud would meet their automatic scheduling needs. Now, 200 users throughout the company use ParkMyCloud to:

  • Get recommendations of resources that are not being used 24×7, and use policies to automatically apply on/off schedules to them
  • Get notifications and control the state of their resources through Slack
  • Easily report savings to management
  • Save hundreds of thousands per year

Ways to Save on Google Cloud VMs, Beyond Scheduling

Google has done a great job of creating offerings for customers to save money through regular cloud usage. The two you’ll see mentioned the most are sustained use discounts and committed use discounts. Sustained use discounts give Google Cloud users automatic discounts the longer an instance is run. This post outlines the break-even points between letting an instance run for the discount vs. parking it. Sustained use discounts have also been expanded with resource-based pricing, which allows the sustained use to be applied based on your use of individual vCPUs and GB of memory regardless of the machine type you use.
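
As a sketch of how that break-even math works: for N1 machine types, sustained use discounts bill each successive quarter of the month at a lower incremental rate (100%, 80%, 60%, 40%), which is where the maximum 30% discount comes from. The function below models that schedule – treat the rates as illustrative and check Google's pricing documentation for current numbers:

```python
def sustained_use_cost_fraction(usage_fraction):
    """Cost of a partially-used month as a fraction of the full on-demand
    monthly price, under N1-style incremental sustained use rates."""
    incremental_rates = [1.0, 0.8, 0.6, 0.4]  # one rate per quarter of the month
    cost = 0.0
    remaining = usage_fraction
    for rate in incremental_rates:
        quarter = min(remaining, 0.25)
        cost += quarter * rate
        remaining -= quarter
    return cost

full_month = sustained_use_cost_fraction(1.0)    # 0.7: the maximum 30% discount
parked = sustained_use_cost_fraction(60 / 168)   # ~36% usage: 7 AM-7 PM weekdays
print(f"Full month: {full_month:.0%} of list price; parked: {parked:.0%}")
```

Even with the full sustained use discount applied, a parked workweek schedule still comes out well ahead of running the instance around the clock.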

Committed use discounts, on the other hand, require an upfront commitment to 1 or 3 years’ usage. We have found that they’re best suited to predictable workloads such as production environments. There are also preemptible VMs, which are offered at a discount from on-demand VMs in exchange for being short-lived – up to 24 hours.

In addition to these discounts, you can also save money by rightsizing your instances. Provisioning your resources to be larger than you need is another form of cloud waste. On average, rightsizing a resource reduces the cost by 50%. Google Cloud makes it easy to change the CPU or the Memory amounts using custom instance sizes. If you’d rather use standard sizing, they offer that as well. By keeping an eye on the usage patterns of your servers, you can make sure that you’re getting the most use of the resources you are paying for.
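
To put numbers on that: with custom machine types, cost scales roughly linearly with vCPU count and memory, so halving an oversized instance halves its cost. The per-hour rates below are illustrative placeholders, not official Google Cloud pricing:

```python
VCPU_RATE = 0.033      # illustrative $/vCPU-hour, not an official rate
MEMORY_RATE = 0.0045   # illustrative $/GB-hour, not an official rate
HOURS_PER_MONTH = 730

def monthly_cost(vcpus, memory_gb):
    """Approximate monthly cost of a custom machine type left running 24/7."""
    return (vcpus * VCPU_RATE + memory_gb * MEMORY_RATE) * HOURS_PER_MONTH

oversized = monthly_cost(8, 32)   # hypothetical over-provisioned server
rightsized = monthly_cost(4, 16)  # half the vCPUs and half the memory
print(f"Rightsizing saves {1 - rightsized / oversized:.0%} per month")
```

Because custom sizes let you tune vCPU and memory independently, you can also shrink only the dimension that is underutilized rather than both at once.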

How to Create a Google Cloud Schedule with ParkMyCloud 

Getting started with ParkMyCloud is easy. Simply register for a free trial with your email address and connect your Google Cloud Platform account to allow ParkMyCloud to discover and manage your resources. The 14-day free trial gives your organization the opportunity to evaluate the benefits of ParkMyCloud while you only pay for the cloud computing power you use. At the end of the trial, there is no obligation to continue with our service, and all the money your organization has saved is, of course, yours to keep.

If you do choose to continue, our Google Cloud scheduler/optimizer pricing is straightforward. You will choose a functionality tier and pay per resource per month. There is a free forever tier available – so have at it. 

Have fun storming the castle!

Future Trends in Cloud Computing All Point to Optimization


Given our focus on public cloud cost control, we here at ParkMyCloud are always trying to understand more about the future trends in cloud computing, specifically the public cloud infrastructure (IaaS) and platform (PaaS) market. Now that public cloud has become ubiquitous, there’s a common theme. While new services and products continue to develop, more and more of them are focusing on not just creating capabilities that were previously lacking – they’re focused on optimizing what already exists.

Are Cloud Services Still Growing?

Before we dive into optimization, let’s take a look at how the cloud market continues to grow in 2020 and beyond. Gartner estimates that $257.9B will be spent on public cloud services in 2020, up 6.3% from 2019.

And according to IDC, almost half of IT spending is cloud-based, “reaching 60% of all IT infrastructure and 60-70% of all software, services and technology spending in 2020.” These projections came out in mid-2020, showing that even given the disruption this year, neither Gartner nor IDC expects cloud adoption and spending to slow down any time soon. So what’s driving this growth, and what are the future trends in cloud computing we should be on the lookout for in 2020 and beyond?

Trends in Cloud Computing You’ve Probably Heard About

There is definitely a lot of hype around Blockchain, Quantum Computing, Machine Learning, and AI, as there should be. But at a more basic level, cloud computing is changing businesses in many ways. Whether it is the way they store their data, improvements to agility and go-to-market for faster release of new products and services, or how they develop and operate services remotely in today’s “locked-down world”, cloud computing is benefitting all businesses in every sector. Smart businesses are always looking for the most innovative ways to improve and accomplish their business objectives, i.e., make money.

When it comes to cloud technology, more and more businesses are realizing the benefits the cloud can provide and are beginning to seek more cloud solutions for their business activities. And obviously, Amazon, Microsoft, Google, Alibaba, IBM, Cisco, VMware, and Oracle plan to capture this spend by providing a dizzying array of IaaS, PaaS, and DaaS offerings to help enterprises build and run their services.

How These Trends Make Cloud Computing Better

Cloud Automation Tools: as modern IT environments become more diverse and distributed in the pursuit of key business goals, they also create new challenges for the operations teams responsible for keeping everything running smoothly. The go-to strategy for taming the associated complexity can be summed up in one word – automation.

Automation tools, including some that incorporate AI, are on the rise in 2020. These new automation capabilities, along with comprehensive dashboards that provide a holistic view into multi-cloud operations, will become increasingly important for cloud and IT operations to support the lines of business regardless of where they place their workloads. These tools can help put the right workloads in the right place, manage costs, improve security and governance, and ensure application performance.

Desktop as a service (DaaS): DaaS is expected to have the most significant growth in 2020, increasing 95.4% to $1.2 billion. DaaS offers an inexpensive option for enterprises that are supporting the surge of remote workers due to the global pandemic and their need to securely access enterprise applications from multiple devices and locations.

Multi-Cloud and Hybrid Cloud: Once predicted as the future, the multi- and hybrid cloud world has arrived and will continue to grow. Most enterprises (93 percent) described their strategy as multi-cloud in 2020 according to a Flexera report (up 21% from 2018) and 87% have a hybrid cloud strategy. In addition, 71 percent of public cloud adopters are using 2+ unique cloud environments/platforms. These numbers will only go up in 2021. While this offers plenty of advantages to organizations looking to benefit from different cloud capabilities, using more than one CSP complicates governance, cost optimization, and cloud management further as native CSP tools are not multi-cloud. As cloud computing costs remain a primary concern, it’s crucial for organizations to stay ahead with insight into cloud usage trends to manage spend (and prevent waste) and optimize application performance. 

It’s a complex problem, but we do see many organizations adopting a multi-cloud strategy with cost control and governance in mind, as it avoids vendor lock-in and allows flexibility for deploying workloads in the most cost-efficient manner (and at a high level, keeps the cloud providers competitive against each other to continually lower prices).

Growth of Managed Services: The global cloud managed services market is growing rapidly and is expected to reach $116 billion by 2025, up from $62.4 billion in 2020, according to a study by Markets and Markets. Enterprises are focusing on their primary business operations, which results in higher adoption of cloud managed services. Business services, security services, network services, data center services, and mobility services are the major categories in this market. Implementing these services helps enterprises reduce IT and operations costs and also enhances their productivity. 

Managed service providers – the good ones, anyway – are experts in their field and some of the most informed consumers of public cloud. By handing cloud operations off to an outside provider, companies are not only optimizing their own time and human resources – they’re also pushing MSPs to become efficient cloud managers so they can remain competitive and keep costs down for themselves and their customers.

Cloud Trends Are Always Evolving

While today it sometimes seems like we’ve seen the main components of cloud operations and all that’s left to do is optimize them, history tells us that’s not the case. Cloud has been, and will continue to be, a disruptive force in enterprise IT for years to come – as will the global pandemic of 2020 – and future technology trends in cloud computing will continue to shape the way enterprises leverage public, private, and hybrid cloud. Remember: AWS was founded in 2006, the cloud infrastructure revolution is still in its early days, and there is plenty more XaaS to be built.

Tips for Purchasing Software Through Cloud Marketplaces


One of the channels we participate in is the cloud marketplaces run by the large public cloud providers. These marketplaces give software and services companies an alternative to the more typical direct sales or reseller/distributor models. Many customers ask about options to buy our product via one of these marketplaces – which has given us some tips for others interested in purchasing this way. 

Given the “app store model” that has been so widely embraced by consumers (the App Store, Google Play Store, etc.), it’s not surprising that the cloud providers see an opportunity to leverage their customer footprint and derive revenue share when customers choose to purchase via their marketplaces. For customers, it can be a way to consolidate bills, get discounts, and simplify administration.

How Cloud Marketplaces Work 

The business model is simple. The Cloud Service Providers (CSPs) charge a percentage of the purchase price paid by the customer. Companies list their products to reach buyers they would not otherwise reach, or to provide a purchasing method that better suits customers’ needs: buyers can add the cost of the purchased product to their monthly cloud bill, avoiding complex new procurement and purchasing arrangements. The CSPs hope the value proposition is strong enough to warrant sellers giving up some margin in exchange for net additional sales, or for sales that would otherwise be overly complex to close and bureaucratically burdensome. 

Currently, we only participate in the AWS Marketplace, but there are similar options available in the Azure Marketplace and the Google Cloud Marketplace. The largest and seemingly best stocked is AWS’s, with close to 10,000 listings from a broad range of software vendors. In fact, Gartner estimates that some 25% of global 1,000 organizations are using such online marketplaces. Forrester reports that as much as 40% of all B2B transactions go through digital sales channels. And a 2112 Group survey reported that 11% of channel executives believe marketplaces will drive the majority of their indirect revenue as soon as 2023. 

These organizations claim the benefits as being: lower sales/buying cycle times; ease of use; increased buyer choice; and ease of provisioning and deployment. There is also the promise of leveraging the CSPs’ own account managers to support co-selling on specific opportunities, and the potential for them to act as lead sources – albeit we imagine these need to be larger deals and part of a broader relationship between the CSPs and their most valuable ISV customers. Still, finding and aligning with CSP sales reps who get to retire quota by selling your product via the marketplace can be worthwhile, especially if it means those same reps get to sell more of their core cloud services.

The marketplace can also enable alternate sales models. For example, sellers can charge on a metered basis, where the customer only pays for what is used and has this cost added to the bill (rather than paying a fixed monthly fee), or via longer-term contracts secured over two or three years at discounted rates.

Those companies that have managed to optimize their offerings in partnership with CSPs and have developed co-developed or co-branded products have the potential for a lot of upside. Databricks’ partnership with Azure, and Snowflake’s and Datadog’s with AWS, have driven enormous growth and helped them build unicorn-sized businesses within a few years. 

One area which has been somewhat frustrating is the ability for customers to discover appropriate software products within the marketplaces. In part this is the same challenge faced in consumer-facing app marketplaces, where there is an overabundance of products and the categorization and search algorithms are often weak. This leaves sellers (particularly the lesser known ones) frustrated and customers unable to determine what software best meets their needs. Our own cost optimization space has many different dimensions and lots of offerings, which only makes discovery harder.

Tips for Purchasing on the Marketplace

So what do buyers need to know about these marketplaces and making them work to your advantage? To help answer this we have included a short checklist of tips and considerations.

  • Always check carefully whether the products you wish to research or purchase are listed in the marketplace. Despite the likes of Amazon and Google running these, listings can be hidden or categorized in unusual ways, so if you do not find a product listed, contact the vendor and ask.
  • Marketplace pricing can often differ from buying directly from the vendor. Products might be bundled in certain ways or for different time periods (e.g. multi-year) which are not offered via a direct purchase. Additionally, all three of the large CSPs allow for a concept called Private Offers. These are uniquely negotiated between buyer and seller and allow for custom agreements such as additional discounts, different payment schemes, etc.
  • The vendor’s pricing model can sometimes differ from buying directly given the availability of metering options i.e. paying only for what you use. If this is something available it will typically require some analysis to determine which model might deliver the greatest ROI.
  • If you have an existing relationship with your account manager at the CSP it might be worth seeing what additional discretionary incentives might be available for use of the marketplace. 
  • Determining the potential reduction in administrative burden from adding the product cost to your monthly bill can be a worthwhile exercise. Minimizing purchasing and procurement team involvement, as well as monthly invoice processing by the finance team, can be advantageous on its own, even if there is not a significant cost saving when buying in the marketplace.

Depending on your situation, there may be other considerations but what is for sure is that managing multiple marketplaces requires time and resources. If you have not already investigated these, either as a buyer or a seller, now might be the time to have a look.

Your Guide to Azure SQL Pricing


To understand how Azure SQL pricing works, we’ll first talk about how the Azure SQL service is offered. Expanding from one limited offering to a set of services, Azure SQL is a family of managed products built upon the familiar SQL Server database engine, useful for migrating SQL workloads, modernizing existing applications, and more. 

Running Azure SQL database

When Azure SQL Database first launched in 2010, it had a single pricing option. Now the Azure SQL portfolio has a more complex service model, with many possible combinations of deployment options, purchasing models, service tiers, and compute tiers. It has grown from “Azure SQL” into a multi-faceted service. 

To run Azure SQL databases, you’ll first need to choose your deployment option – how you’ll structure the SQL server and its databases. Then, choose your purchasing model to pay for the service, and your service tier for the level of compute power you want. Finally, choose your compute tier, depending on whether you need compute available 24/7 or on an on-demand basis. 

Azure SQL Deployment Models

Azure SQL deployment options differ primarily in their cost and the amount of control they give you over the underlying platform. Deployment options determine how to structure the “SQL Server” and its databases. The three options available are:

  • Azure SQL Database – a general-purpose relational database, provided as a managed service. 
  • Azure SQL Managed Instance – modernizes existing SQL Server applications at scale, with the managed instance provided as a service.
  • SQL Server on Azure VMs – for lifting and shifting SQL Server workloads, with full control over the SQL Server instance.

Azure SQL Pricing Models

Depending on the deployment model you’ve chosen, there are two purchasing models available: the vCore-based model, which lets you scale compute and storage resources independently, and the DTU-based model, which bundles compute, memory, and I/O into preconfigured performance levels.

To better understand the related storage costs and compare different storage options, calculate Azure SQL costs for your specific scenario using Azure’s pricing calculator.

Azure SQL Service Tiers 

There are two service tiers used by Azure SQL Database and Azure SQL Managed Instance, each with a different architectural model. These service tiers include: 

  • A General Purpose tier for common workloads 
  • A Business Critical tier for high throughput OLTP applications requiring low latency and high resilience 

And, Azure SQL Database offers an additional service tier called:

  • A Hyperscale tier for very large OLTP systems with faster auto-scaling, backup and restore support.

Azure SQL Compute Tiers 

Under the Azure SQL Database deployment option, with the vCore purchasing model and General Purpose storage, you’ll find two options for your compute resources: 

  • Provisioned: Azure resources run your database with a fixed amount of compute for a fixed hourly price.
  • Serverless: the database is provisioned as a serverless component, with auto-scaling compute and per-second billing for use.
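
A rough way to compare the two tiers: provisioned compute bills a fixed hourly rate around the clock, while serverless bills per vCore only while the database is active (storage is billed either way, and serverless has a minimum compute charge, which this sketch ignores). With purely hypothetical rates, the break-even point looks like this:

```python
PROVISIONED_RATE = 0.50  # hypothetical $/vCore-hour, billed 24/7
SERVERLESS_RATE = 0.60   # hypothetical $/vCore-hour, billed only while active
HOURS_PER_MONTH = 730

def provisioned_monthly(vcores):
    return vcores * PROVISIONED_RATE * HOURS_PER_MONTH

def serverless_monthly(vcores, active_hours):
    return vcores * SERVERLESS_RATE * active_hours

# Serverless wins while the database is active for less than this
# fraction of the month (the ratio of the two rates):
break_even = PROVISIONED_RATE / SERVERLESS_RATE
print(f"Break-even at about {break_even:.0%} active time")
```

Plug in the actual rates from the Azure pricing calculator for your region and tier to find your own break-even point.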

Optimizing Costs on Azure SQL

The choice to mix and match Azure SQL deployment options depends on your application and migration requirements. If you are still not sure which Azure SQL deployment option is right for your workloads, here are some tips from Microsoft on how to choose. 

Now, to monitor and control your storage expenses and optimize usage in your SQL databases, yes, you can use Azure Cost Management. However, even though cloud efficiency is a core tenet of the Microsoft Azure Cost Management tool, optimization is not its strongest suit.

Another way to save money on Azure SQL Database and SQL Managed Instance is by committing to a reservation for compute resources, at a discount compared to pay-as-you-go prices. With reserved capacity, you commit to SQL Database and/or SQL Managed Instance use for a period of one or three years in exchange for a significant discount on compute costs. Or, in the provisioned compute tier of the vCore-based purchasing model, you can exchange your existing licenses for discounted rates on Azure SQL Database and Azure SQL Managed Instance using Azure Hybrid Benefit. 

ParkMyCloud continues to add ways to optimize cloud environments no matter what cloud service you use. Azure SQL database types are just the latest cloud resources you can manage in the ParkMyCloud platform. Scheduling and parking recommendations will be available soon on these resources so you can optimize your costs more efficiently and automatically. 

If you’re new to ParkMyCloud, you can get started with a free trial.

Pro Tip: You Can Scale AWS ASGs Down to Zero


It sounds obvious when you first say it: you can scale AWS ASGs (Auto Scaling Groups) down to zero. This can be a cost-savings measure: zero servers means zero cost. But most people do not do this! 

Wait – Why Would You Want to?

Maybe you’ve heard the DevOps saying: servers should be cattle, not pets. This outlook would say that you should have no single server that is indispensable – a special “pet”. Instead, servers should be completely replaceable and important only in herd format, like cattle. One way to adhere to this framework is by creating all servers in groups.

Some of our customers follow this principle: they use Auto Scaling Groups for everything. When they create a new app, they create a new ASG – even if it has a group size of one. This makes it easier to scale up in the future. However, it also leaves these users with built-in wasted spend.

Here’s a common scenario: a production environment is built with Auto Scaling Groups of EC2 instances and RDS databases. A developer or QA specialist copies production to their testing or staging environment, and soon enough, there are three or four environments of ASGs with huge servers and databases mimicking production, all running, and costing money, when no one is using them.

By setting an on/off schedule on your Auto Scaling Groups, you can set the min/max/desired number of instances to 0 overnight, on weekends, or whenever else these groups are not needed.
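A minimal sketch of such a schedule might look like the following. The working hours, group size, and date examples are assumptions for illustration; the returned dictionary uses the same `MinSize`/`MaxSize`/`DesiredCapacity` keyword arguments that boto3's `update_auto_scaling_group()` accepts, so a scheduler could pass it through directly (AWS credentials and the ASG name omitted here):

```python
from datetime import datetime

# Sketch of an on/off schedule for an Auto Scaling Group: scale to zero
# outside working hours and on weekends. Hours and sizes are assumptions.
WORK_START, WORK_END = 8, 18  # working hours, 8:00-18:00 local time

def asg_size_for(now: datetime, working_size: int = 4) -> dict:
    """Return the min/max/desired ASG sizing for the given time."""
    is_weekday = now.weekday() < 5  # Monday=0 .. Friday=4
    on_hours = is_weekday and WORK_START <= now.hour < WORK_END
    n = working_size if on_hours else 0  # zero instances -> zero cost
    return {"MinSize": 0, "MaxSize": n, "DesiredCapacity": n}

# Tuesday 10am -> full size; Saturday 10am -> scaled to zero
print(asg_size_for(datetime(2023, 6, 6, 10)))
print(asg_size_for(datetime(2023, 6, 10, 10)))
```

Keeping `MinSize` at 0 in both states means the scale-down call never conflicts with the group's minimum, and scaling back up is just the same call with the working-hours sizing.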

In essence, this is just like parking a single EC2 instance when not in use. Even for an EC2 instance, users are unlikely to go into the AWS console at the end of a workday to turn off their non-production servers overnight. For ASGs, it’s even less likely: where stopping an EC2 instance is a single right-click, an AWS ASG requires you to go to the ASG settings, edit them, and modify the min/max/desired number of instances – and then remember to do the opposite when you need to turn the group back on.

How You Can “Scale to Zero” in Practice

 

One ParkMyCloud customer, Workfront, is using this daily to keep costs in check. Here’s how Rob Weaver described it in a recent interview with Corey Quinn:

Scaling environments are a perfect example. If we left scaling up the entire time 24/7 – it would cost as much as a production instance. It’s a full set of databases, application servers, everything. For that one, we’ve got it set so the QA engineers push a button [in ParkMyCloud], they start it up. For a certain amount of time before it shuts back down.  

In other cases, we’ve got people who go in and use the [ParkMyCloud] UI, push the little toggle that says “turn this on”, choose how long to turn it on, and they’re done.

How else does Workfront apply ParkMyCloud’s automation to reduce costs for a 5X ROI? Find out here.

Another Fun Fact About AWS ASGs

One gripe some users have about Auto Scaling Groups is that they terminate resources when scaling down (one could argue that those users are pro-pet, anti-cattle, but I digress). If your needs require servers in AWS ASGs to be temporarily stopped instead of terminated, ParkMyCloud can do that too, with the “Suspend ASG Processes” option when parking a scale group. This will suspend the automation of an ASG and stop the servers without terminating them, and reverse this process when the ASG is being “unparked”. 

Try both scaling to zero and suspending ASGs – start a free trial of ParkMyCloud to try it out.