ParkMyCloud Expands Cloud Cost Optimization to Containers

ParkMyCloud’s Amazon EKS Scheduling Enables Enterprises to Identify and Eliminate Billions in Wasted Cloud Spend

March 31, 2020 (Dulles, VA) ParkMyCloud, provider of the leading enterprise platform for continuous cost control in public cloud, has expanded its public cloud cost optimization capabilities to container technology, starting with Amazon Elastic Kubernetes Service (Amazon EKS). 

Wasted cloud spend is a significant problem, draining IT budgets in companies of all sizes and across industry verticals. ParkMyCloud reports that cloud waste will exceed $17.6 billion this year, with wasted container spend accounting for $2.7 billion of that total and projected to grow 60% to $4.3 billion by 2022. 

As is the case with compute and database resources, inefficient use of containers can cause significant wasted cloud spend. Sources of waste include: nonproduction pods that are idle outside of working hours, oversized pods, oversized nodes, and overprovisioned persistent storage. 

Since 2015, ParkMyCloud users have reduced cloud costs by identifying idle and over-provisioned compute and database resources, and programmatically scheduling and resizing them, saving enterprises around the world tens of millions of dollars. Now, that same scheduling is available to reduce EKS costs. Users can set schedules based on working hours and automatically assign those schedules to resources with policy-driven actions. ParkMyCloud recommends schedules for resources based on actual utilization data. 

“Customers have an immediate need to optimize their Kubernetes spend across the three major cloud providers,” said ParkMyCloud Founder Jay Chapel. “There is a significant pain point around growing wasted spend here, which needs to be addressed through automation and data-driven action – and that’s what ParkMyCloud is now providing.”

Following today’s Amazon EKS support release, ParkMyCloud will soon also support scheduling for Amazon ECS, AWS Fargate, Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE), with cluster rightsizing to follow scheduling.

To get started, public cloud users should visit www.parkmycloud.com/free-trial to start a free 14-day trial of the product and receive recommendations for Amazon EKS resources, as well as compute and database resources in AWS, Azure, and Google Cloud. 

About ParkMyCloud

ParkMyCloud, a Turbonomic company, provides a self-service SaaS platform that helps enterprises automatically identify and eliminate wasted cloud spend. More than 1,400 enterprises around the world – including Sysco, Workfront, Hitachi ID Systems, Sage Software, and National Geographic – trust ParkMyCloud to cut their cloud spend by tens of millions of dollars annually. ParkMyCloud allows enterprises to easily manage, govern, and optimize their spend across multiple public clouds. For more information, visit www.parkmycloud.com.

Contact

Katy Stalcup, ParkMyCloud

kstalcup@parkmycloud.com 

AWS EBS Volume Types & What to Use Them For

AWS offers several EBS volume types that you can use for your storage needs. Here’s a quick overview of what options are available and how they differ. 

What is EBS?

Amazon Elastic Block Store (EBS) is AWS’s block-level, persistent storage solution for Amazon EC2, used for workloads such as relational and NoSQL databases, data warehousing, Big Data processing, and backup and recovery.

Each network-attached block is presented as a simple volume. Because the volumes are distributed, EBS scales easily (hence the “elastic”), and volumes are easily backed up with snapshots.

It is just one of many AWS storage options, which also include:

  • Amazon Elastic File System (EFS) – scalable file system for Linux-based workloads, for use with AWS cloud services and on-premises resources. It scales on demand automatically as you add and remove files.
  • Amazon Simple Storage Service (S3) – general purpose object store for user-generated content, active archive, serverless, etc.
  • Amazon S3 Glacier & Amazon S3 Glacier Deep Archive – inexpensive long-term storage for infrequently accessed data, which also assists with compliance in highly regulated fields.

Types of EBS Volumes

Amazon EBS volume types are broken into two main categories: 

  • SSD-backed volumes are optimized for IOPS and are best for workloads involving frequent read/write operations with small I/O size.
  • HDD-backed volumes are optimized for throughput (measured in MiB/s) and are best for large streaming workloads. They cannot be used as boot volumes.

Within each of those categories are two options, for four volume types in total. The default type is General Purpose SSD (gp2), and three others are available:

  • General Purpose SSD (gp2) – general purpose, balances price and performance.
    • Use cases: Most workloads such as virtual desktops, dev and test environments, and low-latency interactive apps.
  • Provisioned IOPS SSD (io1) – highest-performance SSD volume for mission-critical, low-latency, or high-throughput workloads that require sustained IOPS performance, or more than 16,000 IOPS or 250 MiB/s of throughput per volume.
    • Use cases: Mission-critical applications and large database workloads such as MongoDB, Microsoft SQL Server, Cassandra, Oracle, MySQL, and PostgreSQL.
  • Throughput Optimized HDD (st1) – low-cost HDD volume for frequently accessed workloads with high throughput.
    • Use cases: Streaming workloads, big data, data warehouses, log processing.
  • Cold HDD (sc1) – lowest-cost HDD volume for less frequently accessed workloads.
    • Use cases: Throughput-oriented storage for large volumes of data that is infrequently accessed.

You may see references to Magnetic HDD volumes in older articles about EBS volume types – those are now considered a “previous generation”. 
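If you create volumes programmatically, the volume type is just a parameter. Here’s a minimal boto3 sketch (the region, availability zone, and size are illustrative placeholders, not recommendations):

```python
# Minimal sketch: creating an EBS volume with boto3.
# Region, AZ, and size below are placeholders - adjust for your environment.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

volume = ec2.create_volume(
    AvailabilityZone="us-east-1a",
    Size=100,             # GiB
    VolumeType="gp2",     # or "st1" / "sc1"; "io1" also requires Iops=...
)
print(volume["VolumeId"])
```

Note that io1 volumes also require a provisioned Iops value, and that st1/sc1 volumes can’t be used to boot an instance, as noted above.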

Interested in managing costs for your EBS volumes and snapshots? Stay tuned for announcements from ParkMyCloud coming soon on new ways the platform can optimize your costs. 

Wasted Cloud Spend to Exceed $17.6 Billion in 2020, Fueled by Cloud Computing Growth

More than 90% of organizations will use public cloud services this year, fueled by record cloud computing growth. In fact, public cloud customers will spend more than $50 billion on Infrastructure as a Service (IaaS) from providers like AWS, Azure, and Google. While this growth is due in large part to wider adoption of public cloud services, much of it is also due to growth of infrastructure within existing customers’ accounts. Unfortunately, the growth in spending often exceeds the growth in business. That’s because a huge portion of what companies are spending on cloud is wasted.

Cloud Computing Growth in 2020

Before we get to the waste, let’s look a little closer at that growth in the cloud market. Gartner recently predicted that cloud services spending will grow 17% in 2020, to reach $266.4 billion.

While Software as a Service (SaaS) makes up the largest market segment at $116 billion, the fastest growing portion of cloud spend will continue to be Infrastructure as a Service (IaaS), growing 24% year-over-year to reach $50 billion in 2020. 

Typically, we find that about two-thirds of an enterprise’s average public cloud bill is spent on compute, which means about $33.3 billion this year will be spent on compute resources. 

Unfortunately, this portion of a cloud bill is particularly vulnerable to wasted spend. 

Growth of Cloud Waste

As cloud computing growth continues and cloud users mature, you might hope that this $50 billion is being put to optimal use. While we do find that cloud customers are more aware of the potential for wasted spending than they were just a few years ago, that awareness rarely translates into cost-optimized infrastructure from the start – it’s simply not a default human behavior. We frequently run potential savings reports for companies interested in using ParkMyCloud, to find out whether they will benefit from using the product. Invariably, we find wasted spend in these customers’ accounts. For example, one healthcare IT provider was wasting up to $5.24 million annually on their cloud spend – an average of more than $1,000 per resource per year.

Here’s where the total waste is coming from:

Idle Resources

Idle resources are VMs and instances paid for by the hour, minute, or second that are not actually in use 24/7. Typically, these are non-production resources used for development, staging, testing, and QA. Based on data collected from our users, about 44% of their compute spend is on non-production resources. Most non-production resources are only used during a 40-hour work week and do not need to run 24/7. That means that for the other 128 hours of the week (76%), the resources sit idle – but are still paid for.

So, we find the following wasted spend from idle resources:

$33.3 billion in compute spend * 0.44 non-production * 0.76 of week idle = $11 billion wasted on idle cloud resources in 2020.
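To make the arithmetic easy to check, here’s the same back-of-the-envelope estimate as a short Python sketch – the inputs are the assumptions stated above, not measured data:

```python
# Estimate of 2020 spend wasted on idle cloud resources,
# using the assumptions described above.
compute_spend = 33.3e9          # ~2/3 of $50B IaaS spend goes to compute
non_production_share = 0.44     # share of compute spend on non-production
idle_share = (168 - 40) / 168   # ~0.76: hours outside a 40-hour work week

idle_waste = compute_spend * non_production_share * idle_share
print(f"Idle resource waste: ${idle_waste / 1e9:.1f} billion")  # ~$11 billion
```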

Overprovisioned Resources

Another source of wasted cloud spend is overprovisioned infrastructure – that is, paying for resources that are larger in capacity than needed. That means you’re paying for resource capacity you’re rarely, or never, using. 

About 40% of instances are sized at least one size larger than needed for their workloads. Just by reducing an instance by one size, the cost is reduced by 50%. Downsizing by two sizes saves 75%.

The data we see in ParkMyCloud users’ infrastructure confirms this, and the problem may be even larger. Infrastructure managed in our platform has an average CPU utilization of 4.9%. Of course, this could be skewed by the fact that resources managed in ParkMyCloud are more commonly non-production resources. However, it still paints a picture of gross underutilization, ripe for rightsizing and optimization.

If we take a conservative estimate of 40% of resources oversized by just one size, we find the following:

$33 billion in compute spend * 0.4 oversized * 0.5 overspend per oversized resource = $6.6 billion wasted on oversized resources in 2020.
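The same sketch works for the rightsizing math:

```python
# Estimate of 2020 spend wasted on oversized resources,
# again using the assumptions above.
compute_spend = 33e9             # compute spend, rounded
oversized_share = 0.4            # instances at least one size too large
overspend_per_oversized = 0.5    # one size down cuts the cost in half

oversized_waste = compute_spend * oversized_share * overspend_per_oversized
print(f"Oversized resource waste: ${oversized_waste / 1e9:.1f} billion")  # $6.6 billion
```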

The Extent of Wasted Cloud Spend

Between idle and overprovisioned resources alone, that’s $17.6 billion in cloud spend that will be completely wasted this year. And the potential is even higher. Other sources of waste include orphaned volumes, inefficient containerization, underutilized databases, instances running on legacy resource types, unused reserved instances, and more. Some of these result in significant one-off savings (such as deleting unattached volumes and old snapshots) whereas others can deliver regular monthly savings. 

That’s a minimum of about $48 million wasted per day, every day this year, that could be reallocated toward other areas of the business. 

It’s time to end wasted cloud spend. Join ParkMyCloud in taking a stand against it today.

How to Communicate Software Development Costs to Your Finance Department

If you’re in engineering or development, communicating about cloud infrastructure and other software development costs with your finance department is tricky. For one thing, those costs are almost certainly rising.

Also, you are in different roles with different priorities, which naturally creates barriers to communication. You may think your development costs are perfectly reasonable while your CFO thinks there’s a problem – or you may be focused on different parts of the bill than your colleagues in finance are. 

Here are some ways to break down that communication barrier and make your software development costs sound a little less scary. 

Use the CFO’s Language 

Engineering and finance use different language to talk about the same things – which means there’s going to be an element of translation involved. Before meeting with someone who lives in a different day-to-day world than you do, consider how they may talk about cost areas in a way that’s meaningful to their role. For example: 

  • Dev-speak: “Non-production workloads” – or dev, test, or stage
  • Finance-speak: R&D costs

 

  • Dev-speak: “Production workloads”
  • Finance-speak: Cost of goods sold or COGS

Focus on Business Growth Impact

So your software development costs are probably going up. There will be some wasted spend that can be eliminated, but for the most part, this growth is unavoidable for a growing business. Highlight the end results that drove decisions to increase spending on software development, for example:

  • We increased our headcount and sprint velocity to speed time to market and beat our competition for offering A.
  • We are developing multiple applications in parallel.
  • Our user base is growing, which is increasing our infrastructure costs. 
  • Our open bug count is down by 50% YOY, increasing customer satisfaction and retention.

Know the Details, But Don’t Get Bogged Down in Them

Are your S3 costs surging? Did you just commit to a bunch of 3-year reserved instances upfront? (Wait – did you really?) Did your average salary per developer increase due to specialized skill requirements, or by moving outsourced QA in-house?

You should know the answers to all of these questions, but there’s no need to lead with them in a conversation. Use them as supporting information to answer questions, but not the headline.

Share Your Cost Control Plans – and Automate

Everybody likes an action plan. Identify the areas where you can reduce costs:

  • Consider roles where outsourcing may be prudent – such as apps outside your core offering
  • Automate where you can – you’re not going to replace human software developers with bots (yet), but automation can reduce costs in a few areas, such as QA testing
  • Optimize your existing infrastructure – automatically turn resources off when not needed, and size them to match demand based on utilization metrics
  • Reduce other wasted infrastructure spend – for example, by decommissioning legacy systems

Like many things in business, effective communication and collaboration can go a long way. While it’s important to optimize so your software development budget goes the furthest, those costs are going to continue to rise. And that’s okay.

EC2 Instance Types Comparison (and how to remember them)

AWS offers a range of EC2 instance types optimized for various purposes. It’s great that they provide so much variety, but of course, it means one more thing that you have to learn. It’s worth taking the time to do so, as two-thirds of IaaS spend goes toward compute – that’s a lot of EC2.

Check out a brief breakdown in this video, which also compares EC2 purchasing options:

Or, read on for a look into each of the AWS instance types. Remember that within each type, you’ll still need to choose the AWS instance size that suits your specific needs. Additionally, older generations within each instance type are still available for purchase – for example, c5 is the latest “c” instance, but c4 and c3 remain available. Since newer generations tend to perform better at a lower price, you’ll only want the older types if you have an AMI or other dependency. The differences matter for some users… but you probably already know who you are. 

Note: a version of this blog was originally published in July 2018. It has been rewritten and updated for 2020. New EC2 instance types since our last writeup include A1, T3, z1d, high memory, R5, G4, and F1. 

Quick EC2 Instance Info 

This chart shows a quick summary of what we’ll cover. We’re including a brief description and mnemonic for each (hopefully helpful if you’re studying for an AWS certification!).

[Chart: quick summary of each EC2 instance type, with a brief description and mnemonic]

If you’ve taken a look at AWS training materials, you may have seen a couple of overall acronyms to remember all of these – perhaps Dr McGiFT Px or FIGHT Dr McPX. Whether these acronyms are useful at all is perhaps a point of discussion, but to ensure that all the instance types above are in your list, we suggest:

  • Fight Czar MXPD 
  • Fright Camp DXZ
  • March Gift PZXD

(and don’t forget high memory and Inf!)

General Purpose

These general purpose AWS EC2 instance types are a good place to start, particularly if you’re not sure what type to use. There are three general purpose types.

t instance type

The t3 family is a burstable instance type. If you have an application that needs to run with some basic CPU and memory usage, you can choose t3. It also works well for applications that are used intermittently: while the resource sits idle, it accrues CPU credits, which are then spent when the resource is in use. This makes t3 useful for things that come and go a lot, such as websites or development environments. While generally inexpensive, make sure you understand how the CPU credits work before deploying these instances – there’s a little bit of math involved, and they may not be as cheap as they look at first glance. 

Make sure you also understand the difference between t3 and the older t2 – t3 instances are in “unlimited mode” by default, so instead of throttling down to baseline CPU when your instance runs out of credits, you pay for overages.
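To illustrate that “little bit of math,” here’s a hedged sketch of how unlimited-mode overages accrue. The t3.micro figures (2 vCPUs, 12 credits earned per hour, a 10% per-vCPU baseline, and $0.05 per surplus vCPU-hour on Linux) reflect AWS’s published numbers at the time of writing; the workload profile is hypothetical:

```python
# Sketch of t3 "unlimited" overage math for a hypothetical workload.
# One CPU credit = one vCPU at 100% for one minute, i.e. 60 credits per vCPU-hour.
VCPUS = 2                     # t3.micro
CREDITS_EARNED_PER_HOUR = 12  # equivalent to the 10% per-vCPU baseline
SURPLUS_PRICE = 0.05          # $ per surplus vCPU-hour (Linux)

avg_utilization = 0.25        # hypothetical sustained 25% per vCPU

credits_spent_per_hour = avg_utilization * VCPUS * 60               # 30 credits
deficit = max(credits_spent_per_hour - CREDITS_EARNED_PER_HOUR, 0)  # 18 credits

overage_per_hour = (deficit / 60) * SURPLUS_PRICE
print(f"Overage: ~${overage_per_hour:.3f}/hour on top of the instance price")
```

Run above the baseline around the clock and those overages add up – which is why t3 instances may not be as cheap as they look.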

For each of the EC2 types we cover here, we’ll also add a mnemonic to help you remember the purpose of each instance type.

Mnemonic: t is for tiny or turbo.

m instance type

The m5 instance type is similar, but for more consistent workloads. It has a nice balance of CPU, memory, and disk. It’s not hard to see why almost half of EC2 workloads are on “m” instances. In addition to m5, you also have the option of m6g, which is powered by Arm-based AWS Graviton2 processors, making it more cost-efficient. There are also m5a, m5n, and m4 – most of which are safe to ignore unless you have a specific use case for a processor other than the m5’s Intel Xeon Platinum 8175. If you aren’t sure what to choose, m5 is the most versatile of all the Amazon instance types. 

Mnemonic: m is for main choice or happy medium.

a1 instance type

The a1 instance type was announced in late 2018 and can be a less expensive option than other EC2 instance types. Powered by Arm processors, a1 instances are suited for Arm-based, scale-out workloads such as web servers, containerized microservices, caching fleets, distributed data stores, and development environments.

Mnemonic: a is for Arm processor.

Compute Optimized

c instance type

The c5 instance type has a high ratio of compute/CPU to memory. If you have a compute-intensive application – maybe scientific modelling, intensive machine learning, or multiplayer gaming – these instances are a good choice. There is also the c5d option, which is SSD-backed. See also c5n, which has up to 100 Gbps of network bandwidth and increased memory compared to equivalent c5 instances. The c4 family is also still available.

Mnemonic: c is for compute (at least that one’s easy!)

Memory Optimized

r instance family

The r instance family is memory-optimized, which you might use for in-memory databases, real-time processing of unstructured big data, or Hadoop/Spark clusters. You can think of it as a kind of midpoint between the m5 and the x1e. In addition to r5, there are r5a instances, which deliver a lower cost per GiB of memory, and r5n instances, which offer higher bandwidth for applications that need improved network throughput and packet rate performance.

Mnemonic: r is for RAM.

x1 instance family

The x1 family has a much higher ratio of memory to compute, so this is a good choice if you have a full in-memory application or a big data processing engine like Apache Spark or Presto. The x1e variant is optimized for high-performance databases, in-memory databases, and other memory-intensive enterprise applications.

Mnemonic: x is for xtreme, as in “xtreme RAM” – that seems to be generally accepted, but we think it’s a bit weak. If you have any suggestions, comment below.

High Memory instance family

We’re not sure why these didn’t get an alphabet soup name like the rest of the AWS instances, but at least it’s easy to remember and understand. As you might guess, high memory instances run large in-memory databases, including production deployments of SAP HANA. 

Mnemonic: we’ll leave this one up to you.

z1d instance family

The z1d instances combine high compute capacity with a high memory footprint. They have a sustained core frequency of up to 4.0 GHz, the fastest of AWS’s offerings. These are best for electronic design automation (EDA) and some relational database workloads with high per-core licensing costs.

Mnemonic: z is for zippy.

Accelerated Computing

p instance type

If you need GPUs on your instances, p3 instances are a good choice. They are useful for video editing, and AWS also lists use cases of “computational fluid dynamics, computational finance, seismic analysis, speech recognition, autonomous vehicles” – so it’s fairly specialized. p2 instances are also available.

Mnemonic: p is for pictures (graphics).

Inf1 instance type

The Inf1 instances are a specialized EC2 type for machine learning inference applications, such as recommendation engines, forecasting, image and video analysis, advanced text analytics, document analysis, voice, conversational agents, translation, transcription, and fraud detection.

Mnemonic: inf is for inference.

g instance type

The g instance type uses Graphics Processing Units (GPUs) to accelerate graphics-intensive workloads, and is also designed to accelerate machine learning inference. This could include adding metadata to an image, automated speech recognition, and language translation, as well as graphics workstations, video transcoding, and game streaming in the cloud. 

g4 is the latest family, and g3 are available as well.

Mnemonic: g is for graphics or GPU.

F1 instance type

f1 instances offer customizable hardware acceleration with field programmable gate arrays (FPGAs) – hence the “f”. Applications could include genomics research, financial analysis, and real-time video processing.

Mnemonic: f is for FPGA.

Storage Optimized

i3 instance type

The i3 instance type is similar to h1 (covered below), but SSD-backed – so if you need an NVMe drive, choose this type. Use it for NoSQL databases, in-memory databases, Elasticsearch, and more. The i3en option has higher network bandwidth with Elastic Network Adapter (ENA)-based enhanced networking. 

Mnemonic: i is for IOPS.

d2 instance type

d2 instances have an even higher ratio of disk to CPU and memory, which makes them a good fit for Massively Parallel Processing (MPP), MapReduce and Hadoop distributed computing, and similar applications.

Mnemonic: d is for dense.

h1 instance type

The h1 type is HDD backed, with a balance of compute and memory. You might use it for distributed file systems, network file systems, or data processing applications.

Mnemonic: h is for HDD.

What EC2 instance types should you use?

As AWS has continued to add options to EC2, there are now EC2 instance types for almost any application. If you have pricing comparison questions, run them through the AWS pricing calculator. And if you still aren’t sure, starting with t3 or m5 is generally the way to go.
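Whatever you pick, the instance type is ultimately just a string in your launch request. Here’s a minimal boto3 sketch (the AMI ID is a placeholder, not a real image):

```python
# Minimal sketch: launching an instance of a chosen type with boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder - use a real AMI for your region
    InstanceType="t3.medium",         # or "m5.large" for steadier workloads
    MinCount=1,
    MaxCount=1,
)
print(response["Instances"][0]["InstanceId"])
```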

Looking for info on the other cloud providers?