How to Reduce One of the Largest (Non-Labor) Costs in Your IT Budget

Analysts report that IT budget cuts are expected to continue, with overall budgets dropping 5-8% this year. That puts IT departments in a difficult position: what should they cut, and how? While there is no magic bullet, there are places to trim the fat that require no sacrifice and have no impact on operations.

Public Cloud Spend is High – And Users Want to Optimize

The largest cost in many enterprises’ IT budgets is, of course, labor. You already know that layoffs are happening and that engineering and operations departments are not immune. Whether you’re trying to avoid layoffs or to make the most of a reduced budget and workforce after them, you can look at other portions of your budget, including public cloud – often ranked the third-highest area of spend.

Even before COVID-19 wreaked havoc on businesses the world over, cloud customers ranked cloud cost optimization as a priority. Like water and electricity in your home, public cloud is a utility. It needs to be turned off when not being used.

This is made more urgent by today’s economic climate. There’s a lot of pressure in certain verticals, industries, and enterprises to reduce cloud spend and overall operational expenditures. 

The Least Controversial Fix: Wasted Cloud Spend

There’s a reason “optimization” is so important: it implies waste. That faucet running when no one’s in the room – there’s simply no reason for the spend, which makes it an “easy” fix. No one will miss it.

The first step is identifying the waste. We estimate that almost $18 billion will be wasted this year in two major categories. The first is idle resources – resources paid for by the hour, minute, or second that are not actually being used every hour, minute, or second. The most common type is non-production resources provisioned for development, staging, testing, and QA, which are often only used during a 40-hour work week. That means that for the other 128 hours of the week, the resources sit idle, but are still paid for.

The second-largest swath of wasted spend is overprovisioned infrastructure – that is, paying for resources with more capacity than needed. About 40% of instances are oversized. Reducing an instance by one size cuts its cost by 50%. Or look at it the other way – every size up costs you double.
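
To make those numbers concrete, here is a minimal back-of-the-envelope sketch; the hourly rate is an arbitrary illustration, not a quoted price.

```python
HOURS_PER_WEEK = 168
WORK_HOURS = 40          # typical dev/test usage window
hourly_rate = 0.20       # illustrative on-demand rate in USD, not a real quote

# Idle waste: hours paid for but not used each week
idle_hours = HOURS_PER_WEEK - WORK_HOURS        # 128 hours
weekly_waste = idle_hours * hourly_rate         # $25.60 per instance
idle_share = idle_hours / HOURS_PER_WEEK        # ~76% of the weekly bill

# Oversizing: each size up roughly doubles the price,
# so stepping down one size cuts the cost in half
downsized_rate = hourly_rate / 2

print(f"Idle waste per instance per week: ${weekly_waste:.2f} ({idle_share:.0%} of spend)")
print(f"Hourly rate after downsizing one size: ${downsized_rate:.2f}")
```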

Other sources of waste not included in this calculation include orphaned volumes, inefficient containerization, underutilized databases, instances running on legacy resource types, unused reserved instances, and more. 

How to Activate Optimization

Cutting this waste from your budget is an opportunity to keep the spend you actually need and invest more in applications that produce revenue for your business. The people who use this infrastructure on a daily basis need to get on board, and that can be challenging.

The key to taking action to address this wasted spend is to bridge the gap between the people who care about the cloud bill – Finance, IT, etc. –  and the people working in the cloud infrastructure every day – the app owners, the lines of business, developers, engineers, testers, people in QA, DevOps, SREs, etc. Those internal “end users” need a self-service tool or platform to take action.

However, app owners have a stack of priorities ahead of cost, and little time to evaluate solutions. Ideally, the cloud operations team will administer a platform that enables app owners or lines of business to take action and make changes based on its recommendations. Then Finance and IT see a shrinking – or at least flat – cloud bill, with optimized costs.

For an example of how enterprise Cloud Operations departments can approach this, learn from Sysco. They deployed ParkMyCloud to hundreds of app owners and end users across the globe, and used gamification to get them all on board with reducing costs. 

Is Azure Surging? Azure Growth in 2020

Microsoft Azure has long held the silver medal in public cloud. As of Q1 2020, Azure held 17% of the public cloud market, behind AWS’s 32%. But much of the adaptation to COVID-19 happened after Q1, which means those figures miss some dramatic activity: the drop in usage for businesses with lower demand, the massive increase in usage for those with high demand, and the infrastructure changes to support the at-home workforce.

Azure Growth Trends

Market reports comparing Azure to its IaaS competitors show steady growth and market-share gains. Microsoft reported that Azure grew 59% year-over-year last quarter, and it has been growing at similar rates for the past year.

While Microsoft reports these Azure growth rates, it does not break out Azure revenue; the actual numbers are reported as part of the “Intelligent Cloud” segment, which includes Azure, other private and hybrid server products, GitHub, and enterprise services.

Keep in mind that it’s easy to equate growth with net new customers – however, much of the growth comes from increased resources and usage within existing customers. As just one example, among ParkMyCloud users, the average number of resources per Azure account increased 30-fold over the six-month period ending in February this year.

COVID-19 and Azure Usage

Back in March, Microsoft shared that, given any capacity constraints within a region, it would give resource priority to certain types of customers: first responders, health and emergency management services, critical government infrastructure, and Microsoft Teams to enable remote work. Even as it shared that, some customers were already running up against capacity constraints in certain regions and were unable to create or restart VMs.

Whether customers experienced these shortages themselves or not, we’ve heard anecdotally that the possibility of capacity constraints has instilled enough fear in some that they’ve chosen to leave resources running when not being used as an (expensive) guarantee of availability for the next time they’re needed. 

Microsoft Teams and Windows Virtual Desktop (Microsoft’s VDI offering) are also seeing rapid adoption. As of last month, Teams daily active users were up to 75 million, from 32 million in early March. Teams is part of the Productivity and Business Processes segment and does not contribute to Intelligent Cloud revenue. However, it is integrated with Office 365 products, making it the platform of choice for many new users right now almost by default, much as many enterprise users adopt Azure as part of larger Microsoft agreements.

So – is Azure experiencing growth? Certainly, yes. But is it growing faster than competitors? Right now, there’s no evidence that it is. 

New to Azure?

Are you among the newest batch of Azure users? There’s a lot to learn. Here are a few resources other new users have found helpful. 

  • Make sure you take advantage of free training resources.
  • See if you’re eligible for Azure credits.
  • Ensure your IAM roles are in order before adding users or granting third-party access.
  • Know the difference between “deallocating” a VM and “stopping” a VM.
  • …which matters because a stopped VM still incurs compute charges even when you’re not using it, while a deallocated VM does not (see the sketch after this list). Next up, get wasted spend from always-running and oversized resources under control.

And use this checklist to find other ways you might be wasting money.
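
To illustrate the stop-vs-deallocate distinction, here is a minimal sketch using the azure-mgmt-compute Python SDK; the subscription ID, resource group, and VM name are placeholders, and this is an illustration of the API rather than a production script.

```python
# pip install azure-identity azure-mgmt-compute
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="<your-subscription-id>",  # placeholder
)

RG, VM = "my-resource-group", "my-vm"  # placeholders

# "Stop" (power off): the guest OS shuts down, but the compute
# allocation is retained -- Azure continues to bill for the VM.
client.virtual_machines.begin_power_off(RG, VM).result()

# "Deallocate": the compute allocation is released and VM compute
# billing stops (disks and static public IPs may still incur charges).
client.virtual_machines.begin_deallocate(RG, VM).result()
```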

New in ParkMyCloud: GKE Cost Optimization!

Today we are happy to announce that ParkMyCloud now offers GKE cost optimization! You can now capitalize on your utilization data to automatically schedule Google Kubernetes Engine (GKE) resources to turn off when not needed, reducing costs.

GKE Cost Control is a Priority 

GKE is the third Kubernetes service ParkMyCloud has rolled out support for in the past six weeks, following Amazon’s EKS and Azure’s AKS. Inbound requests for container cost control have been on the rise this year, and cloud users continue to tell us that container cost control is a major priority.

For example, Flexera’s 2020 State of the Cloud report found that the #1 cloud initiative for this year is to optimize existing use of cloud, and the #3 initiative is to expand use of containers. The report also found that 58% of cloud users use Kubernetes, and container-as-a-service offerings from AWS, Azure, and Google Cloud are all growing. 451 Research predicts that container spending will rise from $2.7 billion this year to $4.3 billion by 2022.

Wasted spend on inefficient containerization is among the problems contributing to $17.6 billion in cloud waste this year alone. Sources of waste include: nonproduction pods that are idle outside of working hours, oversized pods, oversized nodes, and overprovisioned persistent storage.

How to Reduce GKE Costs with ParkMyCloud

ParkMyCloud now offers optimization of GKE clusters and nodepools through scheduling. As with other cloud resources such as Google Cloud VM instances, preemptible VMs, SQL Databases, and Managed Instance groups – as well as resources in AWS, Azure, and Alibaba Cloud – you can create on/off schedules based on your team’s working hours and automatically assign those schedules with the platform’s policy engine. Better yet, get recommended schedules from ParkMyCloud based on your resources’ utilization data.
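
Under the hood, “parking” a GKE node pool off-hours amounts to scaling its node count to zero and back up when the team returns. Here is a minimal sketch of that idea with the google-cloud-container Python client – the project, location, cluster, and node pool names are placeholders, and this illustrates the underlying API rather than ParkMyCloud’s implementation.

```python
# pip install google-cloud-container
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()

# Placeholder identifiers -- substitute your own project/location/cluster/pool
POOL = "projects/my-project/locations/us-central1-a/clusters/dev-cluster/nodePools/default-pool"

def set_pool_size(node_count: int) -> None:
    """Resize the node pool; a count of 0 effectively 'parks' it."""
    client.set_node_pool_size(request={"name": POOL, "node_count": node_count})

set_pool_size(0)    # park at the end of the workday
# set_pool_size(3)  # unpark the next morning (back to the usual size)
```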

This comes with full user governance, self-service management of all projects in a single view, and flexible features such as schedule overrides (which you can even do through Slack!). Manage your application stacks with intuitive resource grouping and ordered scheduling.

Get Started

If you haven’t yet tried out ParkMyCloud, please start a free trial and connect to your Google Cloud account through a secure limited access role.

If you already use ParkMyCloud, you will need to update your Google Cloud IAM policy to allow scheduling actions for GKE. Details available in the release notes.

Questions? Requests for features or more cloud services ParkMyCloud should optimize? Let us know – comment below or contact us directly. 

How to Use the New AWS Cost Categories for Better Cost Allocation

Last week, AWS announced the general release of AWS Cost Categories. Anyone involved in managing AWS costs within your organization should ensure they understand this new feature and start using it for better cost allocation and billing management. 

What are AWS cost categories?

AWS Cost Categories are now visible in the console on the billing dashboard.

AWS cost categories are a new way to create custom groups that transcend tags and allow you to better manage costs according to your organizational structure, for example, by application group, team, or location.

Cost categories allow you to write rules and create custom groups of billing line items, which you can then use in AWS Cost Explorer, AWS Budgets, the AWS Cost and Usage Report, etc. You can group by account, tag, service, and charge types. 

How are cost categories different from tags?

At first, this “category” structure may seem similar to the tagging structure you already use in Cost Explorer, but there are a few key differences. 

First, you can create logic to categorize costs from specific services as belonging to the same team. For example, you might assign RDS resources to a category for the DBA team, Redshift to a FinOps category, or CodePipeline to a DevOps category. Categories also allow you to include resources that are not taggable, such as AWS support costs and some Reserved Instance charges.
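
As a minimal sketch of that service-to-team mapping using boto3’s Cost Explorer client – the category name and team values are placeholders chosen for this example:

```python
# pip install boto3
import boto3

ce = boto3.client("ce")  # Cost Explorer, which also serves the Cost Categories API

# Placeholder category mapping costs from specific services to owning teams
response = ce.create_cost_category_definition(
    Name="Owning-Team",
    RuleVersion="CostCategoryExpression.v1",
    Rules=[
        {"Value": "DBA",
         "Rule": {"Dimensions": {"Key": "SERVICE",
                                 "Values": ["Amazon Relational Database Service"]}}},
        {"Value": "FinOps",
         "Rule": {"Dimensions": {"Key": "SERVICE",
                                 "Values": ["Amazon Redshift"]}}},
        {"Value": "DevOps",
         "Rule": {"Dimensions": {"Key": "SERVICE",
                                 "Values": ["AWS CodePipeline"]}}},
    ],
)
print(response["CostCategoryArn"])  # ARN of the new cost category
```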

Why should you use AWS cost categories?

The ability to create your own categorization rules is what makes this new option powerful. You can do this through the Cost Categories rule builder, JSON editor, or API. The rule builder is straightforward and has built-in logic options such as “is”, “is not”, “contains”, “starts with”, and “ends with”.

For organizations with many AWS accounts, grouping accounts into business units, products, applications, or production vs. non-production is helpful for allocating and evaluating costs.

Ensure costs are in control

Of course, whenever you take a closer look at your current costs, you’ll find areas that can be better optimized. Make sure you are only paying for the AWS resources you actually need, and schedule idle resources to turn off using ParkMyCloud – now supporting EKS and soon to support Redshift. 

8 Things You Should Know About AWS Redshift Pricing

If you use AWS, it’s likely you’ll use or at least run across Amazon Redshift – so make sure you know these eight things about how AWS Redshift Pricing works.

Amazon Redshift Overview

AWS calls Redshift the “most popular and fastest” cloud data warehouse. It is fully managed, and scalable to petabytes of data for storage and analysis. You can use Redshift to analyze your data using SQL and business intelligence tools. It features:

  • Integration with data lakes and other AWS services, allowing you to query data and write it back to the data lake in open formats
  • High performance – fast query performance, columnar storage, data compression, and zone maps. 
  • Petabyte scale – “virtually unlimited” according to AWS, with scalable number and type of nodes, with limitless concurrency.

There are three node types, two current and one older:

  • RA3 nodes with managed storage – scale and pay for compute and managed storage independently. You choose the number of nodes based on performance requirements and pay only for the storage you use. Managed storage uses SSDs in each RA3 node for local storage, and Amazon S3 for longer-term storage. If the data in your node exceeds the local SSDs, it is automatically offloaded to S3.
  • DC2 nodes – these compute-intensive data warehouses include local SSD storage. You choose the number of nodes based on data size and performance requirements. Data is stored locally, and as it grows, you can add more compute nodes. AWS recommends this for datasets under 1TB uncompressed, otherwise, use RA3 for the S3 offloading capability to keep costs down.
  • DS2 nodes – these use HDDs, and AWS recommends you use RA3 instead.

Where did the name come from? In astronomy, “redshift” refers to the lengthening of electromagnetic radiation toward longer wavelengths as an object moves away from the observer – the light equivalent of the change in an ambulance siren’s pitch as it passes you, both examples of the Doppler effect. Or, if you’re into gossip, it’s a thumb to the nose at “big red” Oracle.

AWS Redshift Pricing Structure

So, how much does Amazon Redshift cost? Like EC2 and other services, the core cost is on-demand by the hour, based on the type and number of nodes in your cluster. 

Core On-Demand Pricing

  • RA3 – as of this writing, prices range from $3.26 per hour for an ra3.4xlarge in US East (N. Virginia) to $5.195 for the same type in Sao Paulo. Price increases linearly with size, with the ra3.16xlarge costing 4 times the ra3.4xlarge.
    • Data stored in managed storage is billed separately based on actual data stored in the RA3 node types, at the same rate whether your data is in SSDs or S3.
  • DC2 – the dc2.large currently costs $0.25 per hour in US East (N. Virginia) up to $0.40 in Sao Paulo.

Note: We’ve omitted DS2 as those are no longer recommended.
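
To turn those hourly rates into a monthly estimate, the arithmetic is simply nodes × hourly rate × hours per month. A minimal sketch, using the US East (N. Virginia) rates quoted above and an average of 730 hours per month:

```python
HOURS_PER_MONTH = 730  # average hours in a month

def monthly_compute_cost(node_count: int, hourly_rate: float) -> float:
    """On-demand compute cost for a cluster, excluding storage and Spectrum."""
    return node_count * hourly_rate * HOURS_PER_MONTH

# US East (N. Virginia) on-demand rates quoted above
print(f"4-node dc2.large:   ${monthly_compute_cost(4, 0.25):,.2f}/month")  # $730.00
print(f"2-node ra3.4xlarge: ${monthly_compute_cost(2, 3.26):,.2f}/month")  # $4,759.60
```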

Pricing for Additional Capabilities

Amazon Redshift Spectrum Pricing – Redshift Spectrum allows you to run SQL queries directly against Amazon S3 data. You are charged for the number of bytes scanned by Spectrum, rounded up to the next megabyte, with a 10MB minimum per query. Compressed and columnar data will keep costs down. Current pricing is $5 per terabyte of data scanned.

Concurrency Scaling Pricing – you can accumulate one hour of concurrency scaling cluster credits every 24 hours while your main cluster is running. Past that, you will be charged the per-second on-demand rate.

Redshift Managed Storage Pricing – managed storage for RA3 node types is billed at a fixed GB-month rate for the region you are using, calculated per hour based on the total data present in managed storage.

8 Things to Keep in Mind

Of course, there’s always more to know.

  • Free trial – for your first Redshift cluster, you can get a two-month free trial of a dc2.large node. It’s actually a trial of 750 hours per month, so if you use more than 160GB of compressed SSD storage or run multiple nodes, you will exhaust it in less than one month, at which point you’ll be charged the on-demand rate unless you spin down the cluster. This is separate from the AWS Free Tier.
  • Reserved Instances are available – by paying upfront, you can pay less overall – 3% to 63% depending on the payment options you choose. But should you use them?
  • Billed per-second – for partial hours, you will only be billed at a per-second rate. Surprisingly, this was only released in February of this year.
  • You can pause – you can pause and resume to suspend billing, but you will still pay for backup storage while a cluster is paused.
  • Redshift Spectrum pricing does not include costs for requests made against your S3 buckets – see S3 pricing for those rates. 
  • Redshift managed storage pricing does not include backup storage due to snapshots, and once the cluster is terminated, you will still be charged for your backups. Don’t let these get orphaned!
  • Data transfer – there is no charge for data transferred between Amazon Redshift and Amazon S3 within the same region for backup, restore, load, and unload operations – but for all other data transfers in and out, you will be billed at standard data transfer rates. 
  • RA3 is not available in all regions.

Keep Your AWS Redshift Costs In Control

There are a few things you can do to optimize your AWS Redshift costs:

  • Use Reserved Instances where you have predictable needs, and where the savings over on-demand is high enough
  • Delete orphaned snapshots – like all backups, ensure that you are managing your snapshots and deleting when clusters are deleted
  • Schedule on/off times – for Redshift clusters used for development, testing, staging, and other purposes not needed 24×7, schedule them to turn off when not needed – possible since last month’s announcement that Redshift clusters can be paused (see the sketch below). Automated scheduling is coming soon in ParkMyCloud!
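
As a minimal sketch of that pause/resume idea with boto3 – the cluster identifier is a placeholder, and this illustrates the underlying API calls rather than ParkMyCloud’s automated scheduling:

```python
# pip install boto3
import boto3

redshift = boto3.client("redshift")
CLUSTER_ID = "dev-cluster"  # placeholder

# Pause at the end of the workday: compute billing stops, though
# backup storage charges continue while the cluster is paused
redshift.pause_cluster(ClusterIdentifier=CLUSTER_ID)

# Resume at the start of the next workday
redshift.resume_cluster(ClusterIdentifier=CLUSTER_ID)
```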