Multi-Cloud, Hybrid Cloud, and Cloud Spend – Statistics on Cloud Computing

The latest statistics on cloud computing all point to multi-cloud and hybrid cloud as the reality for most companies. This is confirmed by what we see in our customers’ environments, as well as by what industry experts and analysts report. At last week’s CloudHealth Connect18 in Boston we heard from Dave Bartoletti, VP and Principal Analyst at Forrester Research, who broke down multi-cloud and hybrid cloud by the numbers:

  • 62% of public cloud adopters are using 2+ unique cloud environments/platforms
  • 74% of enterprises describe their strategy as hybrid/multi-cloud today
  • But only:
    • 42% regularly optimize cloud spending
    • 41% maintain an approved service catalog
    • 37% enforce capacity limits or expirations

More often than not, public cloud users and enterprises have adopted a multi-cloud or hybrid cloud strategy to meet their cloud computing needs. Taking advantage of features and capabilities from different cloud providers can be a great way to get the most out of the benefits that cloud services can offer, but if not used optimally, these strategies can also result in wasted time, money, and computing capacity.

The data is telling – but we won’t stop there. For more insight on the rise of multi-cloud and hybrid cloud strategies, and to demonstrate the impact on cloud spend (and waste) – we have compiled a few more statistics on cloud computing.

Multi-Cloud and Hybrid Cloud Adoption Statistics

The statistics on cloud computing show that companies not only use multiple clouds today, but they have plans to expand multi- and hybrid cloud use in the future:

  • According to a 451 Research survey, 69% of organizations plan to run a multi-cloud environment by 2019. As they said, “the future of IT is multi-cloud and hybrid” – but with this rise, cloud spending optimization also becomes more of a challenge.
  • In a survey of nearly 1,000 tech executives and cloud practitioners, over 80% of companies were utilizing a multi-cloud strategy, commonly including a hybrid cloud model consisting of both public and private clouds.
  • And by multi-cloud, we don’t mean just two. On average, the number of private and public clouds used by companies to run applications and test out new services is 4.8.
  • On hybrid cloud strategy:
    • 83% of workloads are virtualized today (IDC)
    • 60% of large enterprises run VMs in the public cloud (IDC)
    • 65% of organizations have a hybrid cloud strategy today (IDC)

Cloud Spend Statistics

As enterprises’ cloud footprints expand, so too does their spending:

  • It’s not just public – the rise in cloud spend is happening on all fronts. According to IDC, 62.3 percent of private cloud spending went to on-premise private clouds in 2017.
  • The increase in cloud use, along with the rise of multi-cloud and hybrid cloud strategies, also correlates with increased investment in cloud services. In a survey of nearly 1,000 tech executives and cloud practitioners, 20% of enterprises plan to more than double their cloud spend, and another 17% plan to increase their cloud spending by 50-100%.
  • 75% of participants said that one of their primary concerns was the challenge of managing cloud spend. Cloud cost optimization was a priority for the majority of participants, and average cloud waste was reported at 35%.
  • In another study from 451 Research, 38.8% of CIOs said that “cost savings” was their biggest motivator in migrating to the cloud, but post-migration, cloud costs were the biggest challenge they faced. Here’s what else they had to say:

“Cloud is an inexpensive and easily accessible technology. People consume more, thereby spending more, and forget to control or limit their consumption. With ease of access, inevitably some resources get orphaned with no ownership; these continue to incur costs. Some resources are overprovisioned to provide extra capacity as a ‘just in case’ solution. Unexpected line items, such as bandwidth, are consumed. The IT department has limited visibility or control of these items.”

What Does ParkMyCloud User Data Tell Us?

We’ve noticed some interesting patterns in the cloud platforms adopted by ParkMyCloud users as well, which highlight the multi-cloud trends discussed above as well as correlations between the types of companies that are attracted to each of the major public clouds. We observed:

  • A high rate of growth in the number of Google Cloud Platform (GCP) customers over the past several months. While Amazon Web Services still holds the lion’s share among organizations using ParkMyCloud, the rate of growth is much higher for GCP. We believe that as more and larger organizations become enmeshed in GCP’s infrastructure, they are finding a greater need for cost optimization.
  • Among our customers using a multi-cloud strategy, the majority use AWS in combination with Azure, while the rest are using AWS with Google Cloud Platform.
  • The company profiles of AWS and GCP users are similar – we find these to be tech-forward small/medium businesses, whereas Azure attracts a larger proportion of big enterprises.

What These Statistics on Cloud Computing Mean for Cloud Management  

Upon examining these statistics on cloud computing, it’s clear that multi-cloud and hybrid cloud approaches are not just the future, they’re the current state of affairs. While this offers plenty of advantages to organizations looking to benefit from different cloud capabilities, using more than one CSP further complicates governance, cost optimization, and cloud management, since native CSP tools are not multi-cloud. As cloud costs remain a primary concern, it’s crucial for organizations to stay ahead with insight into cloud usage trends to manage spend (and prevent waste). To keep costs in check in a multi-cloud or hybrid cloud environment, optimization tools that can track usage and spend across different cloud providers are a CIO’s best friend.


3 Things Candy Crush Can Do To Make Cloud Migration Sweeter

Candy Crush is migrating to Google Cloud, marking the first major cloud migration for its maker, the online game company King. Starting in early 2019, Candy Crush will be hauling a substantial amount of big data from on-premise to Google Cloud Platform.

A cloud migration is no easy feat, and for a company that provides online gaming to over 270 million people globally, choosing the right cloud provider to navigate the challenges of such a move is crucial. Aside from “even richer online gaming experiences,” Sunil Rayan, managing director of gaming at Google Cloud, makes a good case for why Google was the best choice for Candy Crush:

“It will continue to innovate and demonstrate its leadership position as a global innovator by utilising our big data, AI and machine learning capabilities to give its engineers the next generation of tools to build great experiences.”

But with the potential for better gaming, higher speed, and scalability, a cloud migration also comes with a few big risks. Here are 3 things Candy Crush can do to make their cloud migration sweeter:

1. Don’t rush data transfer

Transferring data from on-premise to the cloud is a huge undertaking, especially for a company that claims to have the largest Hadoop cluster in Europe. Moving massive amounts of data all at once is not recommended because it throttles transfer speeds, so it would be best for Candy Crush to make the move in parts, over time, and with the anticipation of potentially massive transfer costs associated with moving data out of or into a cloud.

2. Prepare for potential downtime

Downtime is a huge risk for any application, let alone a game played by millions across the world. Candy Crush can’t afford downtime on a game users say is downright addictive, so it’s important to account for inconsistencies in data, examine network connections, and prepare for the real possibility of applications going down during the cloud migration process.

3. Adapt to technologies for the new cloud

Since choosing a cloud provider means committing a heavy amount of time to reconfiguring an application for the move, it’s important to evaluate whether the technology is the best fit. Technology is a big reason for Candy Crush moving its monolithic, on-premise environment to Google Cloud. Asa Bresin, FVP of technology at King, listed innovations in machine learning, query processing, and speed as drivers for the cloud migration, and with a platform known for speed and scalability, Google met their requirements.

Bonus: Keep costs in check. Whether it’s heavy transfer costs, losing money during downtime periods, or the time and manpower needed to reconfigure an application to the cloud – cloud migrations come with costs. The time and costs of a cloud migration are easily misunderstood or drastically understated. For ease and efficiency of keeping costs in check throughout and after the migration process, it’s important to have an understanding of cloud service offerings, pricing models, and the complexity of a cloud adoption budget. Evaluate all of these costs and look into options that will help you save post-migration, like optimization tools.

With a gradual shift, planning for risks of downtime, and the patience and flexibility to reconfigure for Google Cloud, Candy Crush can win at cloud migration.   


How to Create Your 2018 AWS re:Invent Schedule

It’s time to plan your 2018 AWS re:Invent schedule! This will be our team’s fourth re:Invent, so we’ve put together some tips for planning out your conference experience.

First up, if you have not yet registered for re:Invent, do that now! Tickets sold out last year, so don’t wait.

Choose Your Sessions in Advance

The key to a great AWS re:Invent schedule is to plan in advance. The essential part of this planning is to register for sessions in advance. There will be a session registration open date, which has not yet been announced for 2018. When that date is released, though, put it on your calendar and reserve some time for registration – it can be competitive and sessions fill up quickly. Last year, session registration opened on October 19, so expect a similar date this year.

What you can get started with today is reading through the re:Invent agenda and, especially, the immense event catalog. Note the sessions you’re interested in. Here are some tips to keep in mind:

  • Focus – what do you most hope to gain at re:Invent? You can sort sessions based on subject areas and industries – would a “focus path” help you gain more out of your experience?
  • Value of In-Person vs. Session Videos – Many sessions will be online afterward, so prioritize sessions with an element that is more valuable in person – that may be chalk talks, workshops, and others with interactive elements. You’ll be able to watch any sessions you missed and catch up on the information on others with videos. This can put you more at ease and let you have some fun while in Vegas.
  • Travel time – This won’t be the first or the last time you hear this, but it’s worth saying again: the re:Invent campus is big. HUGE. Plan your schedule accordingly, with as few travel periods up and down The Strip as possible. If there are multiple sessions you’re interested in at the same time, prioritize ones with the least travel time. You should also plan to arrive to sessions early.

Once dates, times, and locations have been announced for sessions, we recommend putting them into your calendar for a clean visual of your day, and reminders. Once it’s available, you’ll be able to view your AWS re:Invent schedule in the mobile app, along with maps and more.

Set Aside Time for the Expo Hall

Make sure you plan on time to visit the expo hall! Actually, there are now two expos – the main one at The Venetian and another at the Aria.

The Welcome Reception from 4-7 PM on Monday is a great time to visit the expo and kick off your re:Invent experience with food, drinks, and giveaways. However, it will be crowded. You’ll want to come back again later in the week to check out vendor products and services, chat with vendors whose products you already use, get swag, and enter drawings. The expo is open from 8 AM – 6 PM Tuesday, 10 AM – 6 PM Wednesday, and 10 AM – 4 PM Thursday.

You won’t be disappointed by the swag. Just search #reinventswag for examples — sponsors go all out. By the way, if you’re aiming to maximize swag, definitely stop by after lunch on Thursday. Sponsors will practically beg you to take stuff off their hands so they don’t have to ship it home. You can grab toys, stickers, and keychains for your kids, or build an entire wardrobe of t-shirts and socks for yourself.

And of course, stop by and visit ParkMyCloud at the Venetian expo, booth #1709! Mention this post and we’ll hook you up with some secret bonus swag.

(Also, what secret bonus swag would you want? Asking for a friend…)

Activities and Parties

Round out your Vegas experience with some partying! The great thing about a conference like this is that you can often drink your way through for free, courtesy of vendors with bigger marketing budgets than mine. Outside of Tuesday’s pub crawl, many parties require you to register ahead of time, so keep an eye on your email for invitations. You’ll want to bookmark this list of 2018 re:Invent parties. As of this writing, it’s a bit sparse, but check out last year’s party list for an idea of the multitude of options to come.

Obviously, you don’t want to miss re:Play, the centerpiece of the conference (you know, besides the keynotes.) More free food, drink, an EDM concert, retro arcade, laser escape room, drone obstacle course, climbing wall, dodgeball, bounce castle, archery tag, and/or whatever else they come up with for this year.

Or venture out beyond the conference hall walls and try your luck or catch a show – it’s hard to be bored in Vegas.

 

Do you have any other tips for planning the perfect AWS re:Invent schedule? Let us know in the comments. Cheers, and see you there!

 

More on re:Invent: 2017 recap.


Google Cloud Machine Types Comparison

Google Cloud Platform offers a range of machine types optimized to meet various needs. Machine types provide virtual hardware resources that vary by virtual CPU (vCPU), disk capability, and memory size, giving you a breadth of options. But with so much to choose from, finding the right Google Cloud machine type for your workload can get complicated.

In the spirit of our recent blog on EC2 instance types, we’re doing an overview of each Google Cloud machine type. This image shows the basics of what we will cover, but remember that you’ll want to investigate further to find the right machine type for your particular needs.

Predefined Machine Types

Predefined machine types are a fixed pool of resources managed by Google Compute Engine. They come in five “classes” or categories:

Standard (n1-standard)

Standard machine types work well with workloads that require a balance of CPU and memory. The n1-standard family comes with 3.75 GB of memory per vCPU. There are 8 machine types in the series, ranging from 3.75 to 360 GB of memory and 1 to 96 vCPUs.

High-Memory (n1-highmem)

High-memory machine types work for just what you’d think they would – tasks that require more system memory as opposed to vCPUs. The n1-highmem family comes with 6.50 GB of memory per vCPU, offering 7 varieties ranging from 13 to 624 GB of memory and 2 to 96 vCPUs.

High-CPU (n1-highcpu)

If you’re looking for the most compute power, the n1-highcpu series is the way to go, offering 0.90 GB of memory per vCPU. There are 7 options within the high-CPU machine type family, ranging from 1.80 to 86.4 GB of memory and 2 to 96 vCPUs.

Shared-Core (f1-micro)

Shared-core machine types are cost-effective and work well with small or batch workloads that only need to run for a short time. They provide a single vCPU that runs on one hyper-thread of the host CPU running your instance.

The f1-micro machine type family provides bursts of physical CPU for brief periods of time in moments of need. They’re like spikes in compute power that can only happen in the event that your workload requires more CPU than you had allocated. These bursts are only possible periodically and are not permanent.

Memory Optimized (n1-ultramem or n1-megamem)

For more intense workloads that require high memory but also more vCPUs than you’d get with the high-memory machine types, memory-optimized machine types are ideal. With more than 14 GB of memory per vCPU, Google suggests memory-optimized machine types for in-memory databases and analytics, genomics analysis, SQL analysis services, and more. Availability of these machine types varies by zone and region.

Custom Machine Types

Predefined machine types vary to meet needs based on high memory, high vCPU, a balance of both, or both high memory and high vCPU. If that’s not enough to meet your needs, Google has one more option for you – custom machine types. With custom machine types, you can define exactly how many vCPUs you need and what amount of system memory for the instance. They’re a great fit if your workloads don’t quite match up with any of the available predefined types, or if you need more compute power or more memory, but don’t want to get bogged down by upgrades you don’t need that come with predefined types.
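
If you want to see exactly which vCPU and memory combinations are available in a zone before you pick, you can query the Compute Engine API directly. Below is a minimal sketch using the google-cloud-compute Python client; the project ID and zone are placeholders, and the custom-{vCPUs}-{memory in MB} name at the end simply illustrates how a custom machine type is requested.

```python
# Sketch: list the predefined machine types in a zone and show the
# custom machine type naming convention. Assumes the google-cloud-compute
# client library and default application credentials.
from google.cloud import compute_v1

PROJECT = "my-project"   # placeholder project ID
ZONE = "us-central1-a"   # placeholder zone

client = compute_v1.MachineTypesClient()

# Print name, vCPU count, and memory for each predefined type in the zone.
for mt in client.list(project=PROJECT, zone=ZONE):
    print(f"{mt.name}: {mt.guest_cpus} vCPU, {mt.memory_mb / 1024:.2f} GB")

# A custom machine type is not listed; you request it by name instead,
# e.g. 6 vCPUs and 22.5 GB (23040 MB) of memory:
custom_type = f"zones/{ZONE}/machineTypes/custom-6-23040"
print(custom_type)
```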

About GPUs and machine types

On top of your virtual machine instances, Google also offers graphics processing units (GPUs) that can be used to boost workloads for processes like machine learning and data processing. GPUs typically can only be attached to predefined machine types, but in some cases can also be paired with custom machine types depending on zone availability. In general, the more GPUs attached to your instance, the more vCPUs and system memory available to you.

What Google Cloud machine type should you use?

Between the predefined options and the ability to create custom Google Cloud machine types, Google offers enough variety for almost any application. Cost matters, but with the new resource-based pricing structure, the actual machine you choose matters less when it comes to pricing.

With good insight into your workload, usage trends, and business needs, you have the resources available to find the machine type that’s right for you.


AWS Reserved Instance Marketplace – Seller’s FAQ

As we continue to dive into AWS Reserved Instances, today we want to take a look at the AWS Reserved Instance Marketplace.

Reserved Instances are a great way to save money – unless they don’t get used, and you won’t really know until you get the bill. But just because you’re locked into that contract doesn’t mean that your unused RIs have to be a total waste of money. AWS has given users a place to sell them –  the Reserved Instance Marketplace.

Using the Reserved Instance Marketplace, you can list your reservation for other users to purchase.  Of course, like any online marketplace, there’s no guarantee that you’ll actually sell them, but at least you have a shot at getting some of your money back.

AWS has some solid documentation for all the ins and outs of buying and selling in the Reserved Instance Marketplace, but we decided to highlight answers to some of the questions we most commonly see about  how to get started with selling unused RIs. Read our FAQ below.

Selling on the Reserved Instance Marketplace

AWS customers and third parties are free to use the marketplace to sell unused Standard RIs, regardless of term length or original pricing option.

When is it a good idea to sell unused RIs?

If you’re changing instance types (perhaps to rightsize or better match the instance type to its load or application), moving regions, or your business or capacity needs have changed – or you just don’t need that instance type anymore – use the marketplace.

How do I become a seller?

To register as a seller, you’ll need to provide bank account and tax information. Once you’ve completed registration, you’ll receive a confirmation email.

Are there any restrictions or limitations to what I can sell?

  • Once you’ve registered as a seller, you’re free to sell any EC2 Standard Reserved Instances as long as your term length has at least one month remaining.
  • Convertible instances cannot be sold in the marketplace.
  • You can sell Standard RIs regardless of the purchasing plan (No Upfront, Partial Upfront, or All Upfront), but in the case of All Upfront – you must have made the full payment before you can sell, and the reservation must be active for at least 30 days before listing. AWS also charges a 12% service fee for upfront pricing.
  • Pricing is flexible – the minimum sale price is $0.00
  • You can’t modify or change a listing once it’s been made, but you can cancel it and create a new one.

What information does AWS share with buyers?

According to US regulations, buyers will be able to see your legal name on the buyer’s statement. In the event that AWS Support is contacted regarding invoices or tax purposes, the buyer may receive your email address to be able to communicate with you directly, along with your ZIP code and country.   

How does selling work?

Once you list the RIs you want to sell in the marketplace, buyers will be able to see them. Instances are grouped by remainder of term length and hourly rate. The cheapest reservations are sold first, followed by the next cheapest, and so on until the buyer’s order is fulfilled. AWS handles the transaction and transfer of ownership. The instances are yours until they’re sold, and once you make a sale, you’ll go back to paying the on-demand rate whenever you use that instance type moving forward.

How do I list my RIs in the marketplace?

There are a few ways you can list your unused RIs in the AWS Reserved Instances Marketplace. You can sell them all at once, in parts, or by instance type, platform, and scope. You can also cancel your listing, but you won’t get anything back on any portions that have already been sold. There are also several routes you can take for where and how to list your RIs: the AWS Management Console, the AWS CLI, or the Amazon EC2 API; you can then track each listing’s state from the My Listings tab of the Reserved Instances page.
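
For the API route, here’s a minimal boto3 sketch of creating a marketplace listing. The reservation ID, instance count, and prices are placeholders, and each price schedule term is the number of months remaining at which that upfront price starts to apply.

```python
# Sketch: list an unused Standard RI on the Reserved Instance Marketplace
# via the EC2 API. The reservation ID and prices are placeholders.
import uuid
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.create_reserved_instances_listing(
    ReservedInstancesId="11111111-2222-3333-4444-555555555555",  # placeholder
    InstanceCount=2,                # how many instances from the reservation to sell
    PriceSchedules=[                # upfront price by months remaining in the term
        {"Term": 11, "Price": 1200.0, "CurrencyCode": "USD"},
        {"Term": 8, "Price": 900.0, "CurrencyCode": "USD"},
        {"Term": 5, "Price": 500.0, "CurrencyCode": "USD"},
    ],
    ClientToken=str(uuid.uuid4()),  # idempotency token
)

for listing in response["ReservedInstancesListings"]:
    print(listing["ReservedInstancesListingId"], listing["Status"])
```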

How do I price my RIs in the marketplace?

When selling an RI, the only fee that you can decide on is the upfront fee – the one-time fee that the buyer is charged for purchasing your instance. Usage and recurring fees cannot be specified – the buyer will pay what was charged for the original purchase. The minimum sales price allowed is $0.00 and the maximum you can sell per year is $50,000 (although AWS can grant you permission to sell more on a case-by-case basis).

AWS also sets a default pricing schedule for your listed RIs. Pricing decreases incrementally over a month-to-month period to account for the value of the RI decreasing over time. What you can do, however, is set upfront prices based on the point of sale for your RI (a set price if it’s sold with 5 months remaining in the term, 3 months remaining, etc.).

What happens after I make a sale?

You’ll get an email notification any time an RI is sold, as well as on each day there is activity on your account, such as creating or selling a listing. Once the buyer pays AWS for your RIs, you’ll get a message to your email account about the sold reservation. AWS sends a wire transfer to the bank account provided, typically 1-3 days from the date of sale, but you won’t be able to receive funds until after AWS has verified the account with your bank, which can take up to 2 weeks. You can also see your sales in the Reserved Instance disbursement report, where you can check the status of everything you’ve listed. Or you can track the status of your RI listings in the console (Reserved Instances > My Listings > Listing State) for a full breakdown of available, pending, sold, and canceled listings.
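
If you’d rather check on your listings programmatically than click through the console, a short boto3 sketch against the same EC2 API does the job:

```python
# Sketch: check the status of your marketplace listings with boto3.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

for listing in ec2.describe_reserved_instances_listings()["ReservedInstancesListings"]:
    # InstanceCounts breaks each listing down by state:
    # available, pending, sold, and cancelled.
    counts = {c["State"]: c["InstanceCount"] for c in listing["InstanceCounts"]}
    print(listing["ReservedInstancesListingId"], listing["Status"], counts)
```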

Conclusion

Reserved Instances can save money on your AWS bill, but can just as easily waste money by going unused. Luckily, the AWS Reserved Instances Marketplace can help by giving you a place to sell your unused RIs. Did we miss any of your questions in this AWS Reserved Instances Marketplace FAQ? Let us know!


How to Analyze Google Cloud Committed Use Discounts

In this blog we will look at the Google Cloud Committed Use discount program for customers that are willing to “commit” to a certain level of usage of the GCP Compute Engine.

The Committed Use purchasing option is particularly useful if you are certain that you will be continually operating instances in a region and project over the course of a year or more. If your instance usage does not add up to a full month, you may instead want to look at the Google Cloud Sustained Use discounts, which we discussed in a previous blog.

The Google Cloud Committed Use discount program has many similarities to the AWS and Azure Reserved Instances (RI) programs, and a couple unique aspects as well. A comparison of the various cloud providers’ reservation programs is probably worth a blog in itself, so for now, let’s focus on the Google Cloud Committed Use discounts, and the best times and places to use them.

Critical Facts about Google Cloud Committed Use

  • The Committed Use discount is best for a stable and predictable workload (you are committed to pay – regardless of whether you use the resources or not!)
  • Commitment periods are for either 1 or 3 years.
  • Commitments are for a specific region and a specific project. Zone assignment within a region is not a factor.
  • Discounts apply to the total number of vCPUs and amount of memory – not to a specific machine or machine type.
  • No pre-payment – the commitment cost is billed monthly.
  • GCP Committed Use discounts are available for all of the GCP instance families except the shared-core machine types, such as f1-micro and g1-small.
  • Committed Use discounts do not apply to the premium charges for sole-tenants, nor can they be used for Preemptible instances.
  • The commitments for General Purpose instances are distinct from those for Memory Optimized instances. If you have some of both types, you must buy two different types of Commitment. These types are:
    • General Purpose: Standard (n1-standard), High Memory (n1-highmem), High CPU (n1-highcpu), Custom, and general purpose sole-tenant
    • Memory Optimized: n1-ultramem

How much does it cost?

Each Committed Use purchase must include a specific number of vCPUs and the amount of memory per vCPU. This combination of needing to commit to both a number of vCPUs and amount of Memory can make the purchase of a commitment a bit more complicated if you use a variety of machine types in your environment. The following table illustrates some GCP machine types and the amount of memory automatically provided per vCPU:

Machine Type Memory per vCPU
n1-standard 3.75 GB
n1-highmem 6.50 GB
n1-highcpu 0.90 GB
n1-ultramem 14-24 GB
custom 0.9 – 6.5 GB

While the vCPU aspect is fairly straightforward, the memory commitment to purchase requires a bit of thought. Since it is not based on a specific machine type (like AWS and Azure), you must decide just how much memory to sign up for. If your set of machine types is homogeneous, this is easy – just match the vCPU/memory ratio to what you run. The good news here is that you are just buying a big blob of memory – you are not restricted to rigidly holding to some vCPU/memory ratio. The billing system will “use up” a chunk of memory for one instance and then move on to the next.

Looking at a specific example, the n1-standard-8 in the Oregon region that we discussed in the Sustained Usage Blog, we can see that the Committed Use discount does amount to some savings, but one must maintain a usage level throughout the month to fully consume the commitment.

[Chart: Google Cloud Committed Use discount vs. Sustained Use discount break-even point]

Recall from the earlier blog that the base price of this instance type in the GCP price list already assumes a full month’s Sustained Use discount: the actual “list price” of the instance type is $277.40, and Sustained Use provides a maximum discount of 30%. With that as a basis, we can see that the net savings for the Committed Use discount over 1 year is 37%, rising to 55% over 3 years. This is close to the advertised discount of 57% in the GCP pricing information, which varies by region.

The break-even points in this graph are about 365 hours/month for a 3 year commitment, and 603 hours/month for a 1 year commitment. In other words, if you are sure you will be using a resource less than 365 hours/month over the course of a year, then you probably want to avoid purchasing a 3 year Commitment.
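
To see where that break-even math comes from, here is a minimal Python sketch of the comparison logic. It assumes the four incremental Sustained Use tiers (100%, 80%, 60%, and 40% of the base rate for each quarter of the month) and illustrative prices based on the example above; published rates vary by region and change over time, so plug in current numbers before acting on the output.

```python
# Sketch: compare the monthly cost of an instance under sustained use
# discounts vs. a flat committed use price, and find the approximate
# break-even usage level. All prices are illustrative assumptions, so
# the result will be in the same ballpark as, but not identical to,
# the figures quoted above.

HOURS_IN_MONTH = 730
BASE_HOURLY = 0.38               # assumed list rate for an n1-standard-8
COMMITTED_MONTHLY_1YR = 174.76   # assumed flat monthly cost of a 1-year commitment

# Incremental sustained use tiers: each quarter of the month is billed
# at a lower fraction of the base rate.
TIER_FRACTIONS = [1.00, 0.80, 0.60, 0.40]
TIER_HOURS = HOURS_IN_MONTH / 4

def sustained_use_cost(hours_used: float) -> float:
    """Monthly cost for `hours_used` hours under the sustained use tiers."""
    cost, remaining = 0.0, hours_used
    for fraction in TIER_FRACTIONS:
        tier = min(remaining, TIER_HOURS)
        cost += tier * BASE_HOURLY * fraction
        remaining -= tier
        if remaining <= 0:
            break
    return cost

# Find the lowest usage level at which the commitment becomes cheaper.
for hours in range(HOURS_IN_MONTH + 1):
    if sustained_use_cost(hours) >= COMMITTED_MONTHLY_1YR:
        print(f"1-year commitment breaks even at roughly {hours} hours/month")
        break
```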

Allocation of Commitments

Because Commitments are assigned on a vCPU/RAM basis, you cannot simply point at a specific instance, and say THAT instance is assigned to my Committed Use discount. Allocation of commitments is handled when your bill is generated, and your discount is applied in a very specific order:

  1. Custom machine types
  2. Sole-tenant node groups
  3. Predefined machine types

This sequence is generally good for the customer, in that it applies the Commitment to the more expensive instances first. For example, an n1-standard-4 instance in Northern Virginia normally costs $109.35. If an equivalent server was constructed as a Custom instance, it would cost $114.76.

For sole-tenant node groups, you are typically paying for an entire physical machine, and the Committed Use discount serves to offset the normal cost for that node. For a sole-tenant node group that is expected to be operating 7x24x365, it makes the most sense to buy Committed Use for the entire system, as you will be paying for the entire machine, regardless of how many instances are running on it.

Commitments are allocated over the course of each hour in a month, distributing the vCPUs and RAM to all of the instances that are operating in that hour. This means you cannot buy a Commitment for 1 vCPU and 3.75 GB of RAM, and run two n1-standard-1 instances for the first half of the month, and then nothing for the second half of the month, expecting it all to be covered by the Commitment. In this scenario, you would be charged for one month at the committed rate, and two weeks at the regular rate (subject to whatever Sustained Usage discount you might accumulate for the second instance).

Thank you for….not…sharing?

Unlike AWS, where Reserved Instances are automatically shared across multiple linked accounts within an organization, GCP Commitments cannot be shared across projects within a billing account. For some companies, this can be a major decision point as to whether or not they commit to Commitments. Within the ParkMyCloud platform, we see customers with as many as 60 linked AWS accounts, all of which share in a pool of Reserved Instances. GCP customers do not have this flexibility with Commitments, being locked-in to the Project in which they were purchased. A number of our customers use AWS Accounts as a mechanism to track resources for teams and projects; GCP has Projects and Quotas for this purpose, and they are not quite as flexible for committed resource sharing. For a larger organization, this lack of sharing means each project needs to be much more careful about how they purchase Commitments.

Conclusions

Google Cloud Committed Use discounts definitely offer great savings for organizations that expect to maintain a certain level of usage of GCP and that expect to keep those resources within a stable set of regions and projects. Since GCP Commitments are assigned at the vCPU/Memory level, they provide excellent flexibility over machine-type-based assignments. With the right GCP usage profile over a year or more, purchase of Google Cloud Committed Use discounts is a no-brainer, especially since there are no up-front costs!


EC2 Instance Types Comparison (and how to remember them)


AWS offers a range of EC2 instance types optimized for various purposes. It’s great that they provide so much variety, but of course, it means one more thing that you have to learn.

We broke this down in a new video, which also compares EC2 purchasing options. Check it out here:

Or, read on for a look into each instance type. Remember that within each type, you’ll still need to choose instance sizes for your specific needs. Additionally, older generations within each instance type are available for purchase – for example, c5 is the latest “c” instance, but c4 and c3 are still available. Since the newer types tend to perform better at a lower price, you’ll only want to use the older types if you have an AMI or other dependency.

This image shows a quick summary of what we’ll cover:

[Image: EC2 instance types comparison chart with mnemonics]

General Purpose

These general purpose EC2 instance types are a good place to start, particularly if you’re not sure what type to use. There are two general purpose types.

t2 instance type

The t2 family is a burstable instance type. If you have an application that needs to run with some basic CPU and memory usage, you can choose t2. It also works well if you have an application that gets used at some times but not others. When the resource is idle, you’ll accrue CPU credits, which you’ll use when the resource is busy. It’s a cheaper option that’s useful for things that come and go a lot, such as websites or development environments.

We’ll also add a mnemonic to help you remember the purpose of each instance type.

Mnemonic: t is for tiny or turbo.

m5 instance type

The m5 instance type is similar, but for more consistent workloads. It has a nice balance of CPU, memory, and disk. It’s not hard to see why almost half of EC2 workloads are on “m” instances.

There’s also an m5d option, which uses solid state drives (SSD) for the instance storage.

Mnemonic: m is for main choice or happy medium.

Compute Optimized

c5 instance type

The c5 instance type has a high ratio of compute/CPU versus memory. If you have a compute-intensive application – maybe scientific modelling, intensive machine learning, or multiplayer gaming – these instances are a good choice. There is also the c5d option, which is SSD-backed.

Mnemonic: c is for compute (at least that one’s easy!)

Memory Optimized

r4 instance family

The r4 instance family is memory-optimized, which you might use for in-memory databases, real-time processing of unstructured big data, or Hadoop/Spark clusters. You can think of it as a kind of midpoint between the m5 and the x1e.

Mnemonic: r is for RAM.

x1e instance family

The x1e family has a much higher ratio of memory, so this is a good choice if you have a full in-memory application or a big data processing engine like Apache Spark or Presto.

Mnemonic: x is for xtreme, as in “xtreme RAM” – that seems to be generally accepted, but we think it’s a bit weak. If you have any suggestions, comment below.

Accelerated Computing

p3 instance type

If you need GPUs on your instances, p3 instances are a good choice. They are useful for video editing, and AWS also lists use cases of “computational fluid dynamics, computational finance, seismic analysis, speech recognition, autonomous vehicles” – so it’s fairly specialized.

Mnemonic: p is for pictures (graphics).

Storage Optimized

h1 instance type

The h1 type is HDD backed, with a balance of compute and memory. You might use it for distributed file systems, network file systems, or data processing applications.

Mnemonic: h is for HDD.

i3 instance type

The i3 instance type is similar to h1, but it is SSD backed, so if you need an NVMe drive, choose this type. Use it for NoSQL databases, in-memory databases, Elasticsearch, and more.

Mnemonic: i is for IOPS.

d2 instance type

d2 instances have an even higher ratio of disk to CPU and memory, which makes them a good fit for Massively Parallel Processing (MPP), MapReduce and Hadoop distributed computing, and similar applications.

Mnemonic: d is for dense.

What EC2 instance types should you use?

As AWS has continued to add options to EC2, there are now EC2 instance types for almost any application. If you have comparison questions around pricing, run them through the AWS monthly calculator. And if you don’t know, then generally starting with t2 or m5 is the way to go.
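
If you’d rather compare the raw specs than memorize mnemonics, the EC2 API can pull them for you. Here’s a minimal boto3 sketch; the instance types listed are just examples.

```python
# Sketch: pull vCPU and memory specs for a few instance types so you
# can compare families side by side. The types listed are examples only.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

types = ["t2.medium", "m5.large", "c5.large", "r4.large", "x1e.xlarge"]
response = ec2.describe_instance_types(InstanceTypes=types)

for it in sorted(response["InstanceTypes"], key=lambda t: t["InstanceType"]):
    vcpus = it["VCpuInfo"]["DefaultVCpus"]
    mem_gib = it["MemoryInfo"]["SizeInMiB"] / 1024
    print(f'{it["InstanceType"]}: {vcpus} vCPU, {mem_gib:.1f} GiB')
```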


Don’t Waste Money on Unused Reserved Instances

Among the various ways to lose money on idle or orphaned resources in the cloud, here’s another for AWS users to add to the list: unused Reserved Instances. At first, the idea of wasting money on AWS Reserved Instances seems counterintuitive. After all, aren’t RIs meant to save money? The short answer is yes – but only if you use them efficiently.

How Unused Reserved Instances Occur

To understand how unused Reserved Instances contribute to cloud waste, consider how they work. With AWS Reserved Instances, you’re making a commitment of usage by renting instances for a fixed amount of time in exchange for a lower rate (per-hour or per-second) than on-demand. You’re still free to use all the same families, OS types, and instance sizes with either one, except with RIs your ability to use certain instance types is limited to the purchasing plan you choose.

The only real difference between an AWS On-Demand instance and an AWS Reserved Instance is how you get billed for them on the backend – and this is where it gets tricky. You don’t know if your Reserved Instances have been used until you get the bill. Instead, you run your instances as you always would, with no insight into what will get billed as reserved instances. It’s only when your bill is created the following month that AWS reviews your reservations alongside your usage to apply the Reserved Instances that match up with your workload. This leaves you with little visibility into what your costs will be, forcing you to track usage on your own, and running the risk of unused reservations that result in, you guessed it – wasted money.

Ways to Avoid Losing Money on Unused Reserved Instances

Reserved Instances require commitment of usage, ongoing awareness and insight into your future costs, and the possibility of going unused if AWS can’t apply them sufficiently. But that doesn’t mean you should shy away from using them. Reserved Instances can be cost-effective if used with a few things in mind:

Pick the RI type that suits your usage and workload. The best plan is one of prevention. Before you get started with purchasing reservations, get a detailed look at your usage and the most optimal instance types for your workload (something you should already be doing as part of your cost control measures). By design, Reserved Instances work best with steady state workloads and consistent usage. Once you confirm that your usage makes you a good candidate, you’ll want to choose the RI instance type that will benefit your needs most:

  • Standard RIs – Recommended for steady-state usage, and provide the most savings.
  • Convertible RIs – a smaller discount from On-Demand instances, but in return provide flexibility to change families, OS types, and tenancies.
  • Scheduled RIs – similar to Standard RIs, but only apply to instances launched within the time windows you select, which can recur on a daily, weekly, or monthly schedule.

Sell unused reserved instances on the Reserved Instance Marketplace. Using the marketplace allows you to list your reservation for purchase by other users. The cheapest reservations are sold first, and once someone purchases yours, you’ll be charged the on-demand rate whenever you use that instance type moving forward.

Purchase convertible reservations. With convertible reservations, you have the option to convert your reserved instances to other types, so long as the new type is more expensive. You won’t get as much of a discount, but flexibility and more options for use make up for the smaller savings.

The Lesson to be Learned

Just like any other idle or unused cloud resource, unused reserved instances can only do one thing – waste your money. Cloud services were meant to help you keep infrastructure costs in check, but only if you use them smartly. Optimize your cloud spend with awareness of your usage, ongoing insight into your infrastructure needs, and running instances only when you need them.

More questions answered in our recent blog, AWS Reserved Instance FAQs.


How Cloud Trends are Changing (& Happy Birthday, ParkMyCloud!)

ParkMyCloud just turned 3 years old, and from here, the future looks great. The market is growing, cloud is the norm, and cost control is always top of mind for companies big and small. In fact, over 600 enterprises in 25+ countries now use our platform to “park” idle cloud resources (including instances, databases and scale groups) in AWS, Azure, GCP and now Alibaba.

As we look to the future, we’re taking a moment to consider current cloud trends and how cost control needs are changing. To provide context, let’s take a quick look at where the market was three years ago.

The Problem that Got Us Started

When we founded the company three years ago, we set out to build a self-service, SaaS platform which would allow DevOps users to automate cloud cost control and integrate it into their cloud operations. We saw a need for this platform as we were talking to enterprises using AWS about broader cloud management needs as a service play. They wanted a self-service, purpose-built easy button for instance scheduling that could be centrally managed and governed but left up to the end user to control – enter ParkMyCloud.

Our value proposition started simply and has stayed relatively constant: save 20% on your cloud bill in 15 minutes or less (it’s 65% per parked resource). The ease of use, verifiable ROI, and richness of our platform capabilities allow global companies like McDonald’s, Unilever, Sysco, Sage and many others to adopt ParkMyCloud on their own, with no services, and begin to automate their cloud cost control in minutes – not days or weeks.

I went back and looked at our pre-launch pitch decks. At that time, the cloud Infrastructure-as-a-Service (IaaS) market was $10B or so, and dominated by AWS, and others like Rackspace and HP were in the game with the other usual suspects. Today, Gartner estimates enterprises will spend $41B on IaaS in 2018, and it’s still dominated by AWS, but the number of players is really down to 4 or 6 depending on where you want to put IBM and Oracle.

But the cloud waste problem is still prominent and growing: most analysts and industry pundits estimate that 25% or more of your bill is wasted on unused, idle, or overprovisioned resources – based on 2018 IaaS predictions, that equates to $10B+ being wasted. That’s a BIG nut. In fact, if you break that down, that’s over $1MM in wasted cloud spend every hour. And it’s important: most enterprises rank cloud security/governance and cost management as their primary concerns with cloud adoption.

Cloud Trends Driving the Market

So how are things changing? We see three key trends that will drive our company and platform vision over the next 3 years:

  1. Multi-cloud – it’s been long discussed, but it’s now a reality: 20% of the enterprises using PMC manage 2 or more CSPs in the platform, and that number is growing. As always, cost control is an important factor in a multi-cloud strategy.  
  2. PaaS – Platform as a Service (PaaS) use is growing, so users are looking to optimize these resources. ParkMyCloud offers optimization for databases, scale groups, and logical groups. We plan to expand into containers and stacks to meet this need.
  3. Data-driven automation (AIOps) – our customers, large and small, are pushing us to expand our data-driven policies and automation – everyone is becoming more comfortable with the idea of automation. Our first priority on this front is to optimize overprovisioned resources – often referred to as RightSizing … RightSizeMyCloud!

 

Cloud trends are not always easy to predict, but one thing is for certain: costs will need to be controlled. Good fun ahead.


6 Types of Overprovisioned Resources Wasting Money on Your Cloud Bill

In our ongoing discussion on cloud waste, we recently talked about orphaned resources eating away at your cloud budget, but there’s another type of resource that’s costing you money needlessly, and this one is hidden in plain sight – overprovisioned resources. When you looked at your initial budget and made your selection of cloud services, you probably had some idea of what resources you needed and in what sizes. Now that you’re well into your usage, have you taken the time to look at those metrics and analyze whether or not you’ve overprovisioned?

One of the easiest ways to waste money is by paying for more than you need and not realizing it. Here are 6 types of overprovisioned resources that contribute to cloud waste.  

Unattached/Underutilized Volumes

As a rule of thumb, it’s a good idea to delete volumes that are not attached to instances or VMs. Take the example of AWS EBS volumes unattached to EC2 instances – if you’re not using them, then all they’re doing is needlessly accruing charges on your monthly bill. And even if your volume is attached to an instance, it’s billed separately, so you should also make a practice of deleting volumes you no longer need (after you backup the data, of course).
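
Finding those stray volumes is straightforward, since an EBS volume that isn’t attached to anything reports a status of “available”. Here’s a minimal boto3 sketch; it only lists candidates, so review (and snapshot) before deleting anything.

```python
# Sketch: find EBS volumes that are not attached to any instance
# (status "available"). This only lists cleanup candidates; snapshot
# anything you need before deleting.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

paginator = ec2.get_paginator("describe_volumes")
pages = paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}])

for page in pages:
    for vol in page["Volumes"]:
        print(vol["VolumeId"], f'{vol["Size"]} GiB', vol["CreateTime"])
```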

Underutilized data warehouses

Data warehouses like Amazon Redshift, Google BigQuery, and Microsoft Azure SQL Data Warehouse were designed as a simple and cost-effective way to analyze data using standard SQL and your existing Business Intelligence (BI) tools. But to get the most cost savings benefits, you’ll want to identify any clusters that appear to be underutilized and rightsize them to lower costs on your monthly bill.

Underutilized relational databases

Relational databases such as Amazon RDS, Azure SQL, and Google Cloud SQL offer the ability to directly run and manage a relational database without managing the infrastructure that the database is running on or having to worry about patching of the database software itself.

As a best practice, Amazon recommends that you check the configuration of your RDS for any idle DB instances. You should consider a DB instance idle if it has not had a connection for a prolonged period of time, and proceed by deleting the instance to avoid unnecessary charges. If you need to keep storage for data on the instance, there are other cost-effective alternatives to deleting altogether, like taking snapshots. But remember – manual snapshots are retained, taking up storage and costing you money until you delete them.
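
One way to spot idle DB instances in the first place is to check the DatabaseConnections metric in CloudWatch; if it has stayed at zero for an extended window, the instance is probably idle. A boto3 sketch along those lines (the 14-day window is an arbitrary choice):

```python
# Sketch: flag RDS instances with no connections over the last 14 days
# using the CloudWatch DatabaseConnections metric. The window is an
# arbitrary choice; tune it for your environment.
from datetime import datetime, timedelta
import boto3

rds = boto3.client("rds", region_name="us-east-1")
cw = boto3.client("cloudwatch", region_name="us-east-1")

end = datetime.utcnow()
start = end - timedelta(days=14)

for db in rds.describe_db_instances()["DBInstances"]:
    db_id = db["DBInstanceIdentifier"]
    stats = cw.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName="DatabaseConnections",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": db_id}],
        StartTime=start,
        EndTime=end,
        Period=86400,              # one datapoint per day
        Statistics=["Maximum"],
    )
    peak = max((p["Maximum"] for p in stats["Datapoints"]), default=0)
    if peak == 0:
        print(f"{db_id} looks idle (no connections in the last 14 days)")
```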

Underutilized Instances/VMs

We often preach about idle instances and how they waste money, but sizing your instances incorrectly is just as detrimental to your monthly bill. It’s easy to overspend on large instances or VMs that you don’t need. With any cloud service, whether it’s AWS, Azure, or GCP, you should always “rightsize” your instances and VMs by picking the instance size that is optimized for the size of your workload – be it compute optimized, memory optimized, GPU optimized, or storage optimized.

Once your instance has been running for some time, you’ll have a better idea of whether or not the chosen size is optimal. Review your usage and make cost estimates with the AWS Management Console, Amazon CloudWatch, and AWS Trusted Advisor if you’re using AWS. Azure users can review their metrics from Azure Monitor data, and Google users can review GCP metrics data for their virtual machines. Use this information to find underutilized resources that can be resized to better optimize costs.
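
As a concrete starting point on AWS, here’s a boto3 sketch that pulls each running instance’s average CPU utilization from CloudWatch and flags anything sitting below an arbitrary 10% threshold as a rightsizing candidate:

```python
# Sketch: flag running EC2 instances whose average CPU over the past two
# weeks is below 10%, making them likely rightsizing candidates. The
# window and threshold are arbitrary; adjust them for your workloads.
# Pagination is omitted for brevity.
from datetime import datetime, timedelta
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
cw = boto3.client("cloudwatch", region_name="us-east-1")

end = datetime.utcnow()
start = end - timedelta(days=14)

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for res in reservations:
    for inst in res["Instances"]:
        stats = cw.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": inst["InstanceId"]}],
            StartTime=start,
            EndTime=end,
            Period=86400,
            Statistics=["Average"],
        )
        points = stats["Datapoints"]
        avg = sum(p["Average"] for p in points) / len(points) if points else 0.0
        if avg < 10:
            print(f'{inst["InstanceId"]} ({inst["InstanceType"]}): avg CPU {avg:.1f}%')
```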

Inefficient Containerization

Application containerization allows multiple applications to be distributed across a single host operating system without requiring their own VM, which can lead to significant cost savings. It’s possible that developers will launch multiple containers and fail to terminate them when they are no longer required, wasting money. Due to the number of containers being launched compared to VMs, it will not take long for container-related cloud waste to match that of VM-related cloud waste.

The problem with controlling cloud spend using cloud management software is that many solutions fail to identify unused containers because the solutions are host-centric rather than role-centric.  

Idle hosted caching tools (Redis)

Hosted caching tools like Amazon ElastiCache offer high performance, scalable, and cost-effective caching. ElastiCache also supports Redis, an open source (BSD licensed), in-memory data structure store used as a database, cache, and message broker. While caching tools are highly useful and can save money, it’s important to identify idle cluster nodes and delete them from your account to avoid accruing charges on your monthly bill. Keep an eye on average CPU utilization and get into the practice of deleting a node if its average utilization falls below the minimum criteria you set.

How to Combat Overprovisioned Resources (and lower your cloud costs)

Now that you have a good idea of ways you could be overprovisioning your cloud resources and needlessly running up your cloud bill – what can you do about it? The end-all-be-all answer is “be vigilant.” The only way to be sure that your resources are cost-optimal is with constant monitoring of your resources and usage metrics. Luckily, optimization tools can help you identify and automate some of these best practices and do a lot of the work for you, saving time and money.


Is Cloud to Cloud Migration Worth the Effort?

When we talk about cloud migration challenges, the conversation is about a company switching their workloads from an on-premise datacenter to a public cloud environment. But what about cloud to cloud migration?

The Benefits of Cloud to Cloud Migration

Why would a company go through the trouble of moving its entire infrastructure to the cloud, investing in one cloud service provider only to switch to another?

The cloud shift is no longer anything new. Companies have accepted cloud adoption and are becoming more comfortable with using cloud services. Now with AWS, Azure, and Google Cloud Platform currently leading the market (plus others growing rapidly), and constantly offering new and better options in terms of pricing and services, switching providers could prove to be fruitful.

Choosing a cloud provider to begin with is a monumental task. Businesses have to make choices regarding a number of factors – cost, reliability, security, and more. But even with all factors considered, business environments are always changing. Cost can become more or less important, your geographical region might evolve (which affects cost and availability of services), and priorities can shift to the point where another platform might be a better fit.  

Perhaps your migration to AWS a few years ago was driven mainly by reliability and risk mitigation. While other providers were up and coming, you wanted to go with the gold standard. A few years later, productivity tools like Google’s G Suite became useful to your business. You now have business partners using other platforms like Azure or Google Cloud. You realize that your needs for software have changed, business partnerships have influence, and it becomes clear that another provider could be of greater benefit. Not to mention, cloud services themselves are ever-changing, and you might find better pricing, service-level agreements, scalability, and improved performance with another provider as offerings change over time.

While all of this makes sense, theoretically speaking, let’s take a look at a real example:

The Case of GitLab

A number of users were up in arms over Microsoft’s acquisition of Github, so much so that hundreds of thousands have already moved to another Git-repository manager – GitLab. And in a twist of fate, GitLab has made the announcement that they’ve decided to swap Microsoft Azure for another cloud provider – Google Cloud Platform.

Ask Andrew Newdigate, the Google Cloud Platform Migration Project Lead at GitLab, about why they’re making the move to GCP and he’ll likely mention service performance, reliability, and something along the lines of Kubernetes is the future.

Kubernetes, the open source project first released by Google and designed for application management of multiple software containers “makes reliability at massive scale possible.” What’s also appealing is that GitLab gets to use Google Kubernetes Engine, a service designed to simplify operating a Kubernetes cluster, as part of their cloud migration. The use of GKE has been cited as another driving factor for GitLab, looking to focus on “bumping up the stability of scalability of GitLab.com, by moving our worker fleet across to Kubernetes using GKE.”

Sid Sijbrandij, CEO of GitLab, adds better pricing and superior performance as reasons behind the migration. In an interview with VentureBeat, he said:

“Google as a public cloud, they have more experience than the other public cloud providers because they basically made a cloud for themselves […] And you find that in things such as networking, where their network quality is ahead of everyone else. It’s more reliable, it has less jitter, and it’s just really, really impressive how they do that, and we’re happy to start hosting Gitlab.com on that.”

The Challenges of Cloud to Cloud Migration

There’s a long list of factors that influence a company’s decision in selecting a cloud provider, and they don’t stop once you start building infrastructure in a particular cloud. Over time, other providers may prove to be better for the needs of your business. But just as there are challenges with cloud adoption in the first place, similar challenges apply when making the switch from cloud to cloud:

  • Data transfer. Transferring data between different cloud service providers is a complex task, to say the least. As with data transfer from the enterprise to the cloud, information is transferred over the internet, but between cloud providers instead of from server to cloud. This presents the issue of the speed at which data downloads, and as a rule of thumb you should avoid transferring large chunks of data at a time. There can also be massive transfer costs for moving data out of or into a cloud.

 

  • Potential downtime. Downtime is also a risk. It’s important to account for inconsistencies in data, examine network connections, and prepare for the real possibility of applications going down during the migration process.

 

  • Adapting to technologies for the new cloud. You built an application for Azure, but now you’re going Google – it’s not as simple as picking it up from one platform and expecting it to run on another (and with the same benefits). Anticipate a heavy amount of time spent reconfiguring the application code to get the most out of your new platform.

 

  • Keeping costs in check. Consider the time and costs to migrate to the cloud, which tend to be misunderstood or drastically understated. Again, the same applies for cloud to cloud migration. By now, you have a better understanding of cloud service offerings, pricing models, and the complexity of a cloud adoption budget – for the service you were using. Once again, you’ll have to evaluate all of these costs and look into options that will help you save post-migration, like optimization tools.

Cloud to Cloud Migration – Is it worth it?

Before shifting to the cloud, you probably asked yourself the same thing. And just like before, you’ll have to dive deeply into factors like costs, technologies, and risk versus reward to assess whether or not a cloud to cloud migration is the right move for your business.

At first glance, a cloud to cloud migration is just as complicated and time-consuming as moving to the cloud in the first place, and it might seem like it’s just not worth the effort. But why did you move to the cloud? If you did it to save costs over time, create better business opportunities, and improve reliability and performance – then why would you NOT go with another provider that will benefit your business more in those areas? Not to mention, the more time you spend with one provider, building more applications as you go, the harder it will be to make the switch.

So cloud to cloud migration – is it worth it? Yes – but only if you’ve considered all the factors to determine whether or not another cloud is better for your business.


AWS Reserved Instances FAQ

This AWS Reserved Instances FAQ is a culmination of questions we are often asked about using Reserved Instances to save on your Amazon Web Services (AWS) costs.

First, let’s make sure you’re all on the same page. AWS Reserved Instances are a way to save money on your cloud bill by reserving capacity in advance, which you can pay for all upfront, partially upfront, or with no upfront payment. Due to the different billing model and variety of options, these often raise questions. Let’s explore a few:

Q1: How do I know which instances are reserved?

Because of how AWS billing works, Reserved Instances aren’t applied until you’re actually being charged for the instances. Think of it more like a “voucher” than a specific virtual machine.

This means that you run your instances as you normally would throughout the month, without seeing which will be billed as reserved vs. on-demand. When your bill comes on the 5th of the next month, AWS will look at your reservations and your usage and automatically apply the reservations that match the instances.

This means that you don’t have to decide which instances are reserved, but it also means you don’t have visibility into what your true costs are going to be. You’ll have to track this on your own, or use tools to help you manage your Reserved Instances. AWS includes some reports in Cost Explorer that can help you visualize your month-to-date usage of your reservations.
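
If you prefer to pull those numbers programmatically, here is a minimal sketch using the Cost Explorer API via boto3. The date range is just an example, Cost Explorer must be enabled on the account, and each API request carries a small charge.

```python
# Minimal sketch: check Reserved Instance utilization with the Cost Explorer API.
import boto3

# Cost Explorer is served from us-east-1 regardless of where your resources run
ce = boto3.client("ce", region_name="us-east-1")

response = ce.get_reservation_utilization(
    TimePeriod={"Start": "2018-08-01", "End": "2018-09-01"},  # example dates
    Granularity="MONTHLY",
)

for period in response["UtilizationsByTime"]:
    start = period["TimePeriod"]["Start"]
    end = period["TimePeriod"]["End"]
    pct = period["Total"]["UtilizationPercentage"]
    print(f"{start} to {end}: {pct}% of reserved hours used")
```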

Q2: What if I don’t need that instance type anymore?

There are a couple of options for unused reservations, depending on why you aren’t using them anymore. The first option is to sell the reservation on the Reserved Instance Marketplace. This marketplace lets you list your reservation and allows others to purchase it from you. The reservations are grouped together, with the cheapest in the group being sold first. Once someone purchases your reservation, you start getting charged at the on-demand rate (if you still use that instance type).

The other option is to purchase convertible reservations, which allow you to change the attributes of the reservation (as long as it’s to a more expensive instance type). These convertible types give you much more flexibility, especially as your company or application grows, but they come with a smaller savings rate than standard Reserved Instances.

Q3: How do Reserved Instances work if I have multiple AWS accounts?

Many users of AWS in large organizations have started using a multi-account strategy, either for security reasons or billing reasons. When managing multiple AWS accounts, billing and reservations can get confusing. The good news is that reservations can “float” across multiple linked accounts, even if they weren’t purchased in the master account. Amazon is smart enough to attempt to apply the reservation to the account it was purchased in, but if a suitable instance is not found, it can apply it to an instance in a different account that matches. You can also choose to exclude linked accounts from receiving these floating reservations if you’d like.

Q4: Are AWS Reserved Instances better than On Demand?

Great question. This is actually the subject of the most popular post on the ParkMyCloud blog. In short: usually yes, but only for production environments. Otherwise, On Demand instances with on/off schedules are typically better value.
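
As a quick back-of-the-envelope comparison, here is a sketch of the trade-off. The $220 list price, 40% Reserved Instance discount, and 65% parked time are illustrative assumptions – check current AWS pricing for your instance type before deciding.

```python
# Back-of-the-envelope: 1-year standard RI vs. On Demand with an on/off schedule.
# The $220 list price, 40% RI discount, and 65% parked time are assumptions.

on_demand_monthly = 220.0   # assumed average list price per instance per month
ri_discount = 0.40          # assumed savings of a standard RI vs. On Demand
parked_fraction = 0.65      # nights-and-weekends schedule for non-production

ri_cost = on_demand_monthly * (1 - ri_discount)          # reserved, runs 24/7
parked_cost = on_demand_monthly * (1 - parked_fraction)  # only billed while running

print(f"Reserved Instance (24/7):     ${ri_cost:,.2f}/month")
print(f"On Demand + on/off schedule:  ${parked_cost:,.2f}/month")
# Steady-state production can't be parked, so the RI wins there.
# Parkable non-production comes out cheaper on a schedule.
```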

Conclusion

Reserved Instances can help save a lot of money on your AWS bill, but require a different way of thinking about your instances than the normal pay-as-you-go that you may be used to with cloud computing. These reservations are typically used for steady-state workloads with consistent usage. Did we miss any of your questions in this AWS Reserved Instances FAQ? Let us know!


Announcing Alibaba Cloud Cost Control with ParkMyCloud

We’re happy to announce that ParkMyCloud now supports Alibaba Cloud!

Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) users have saved millions of dollars on their cloud bills using ParkMyCloud’s automated cloud cost optimization platform. Customers like McDonald’s, Sysco, and Unilever use ParkMyCloud to automatically turn off idle cloud resources as part of their DevOps process.

Now, Alibaba Cloud customers can do the same.

Why Alibaba?

Alibaba Cloud is experiencing rapid customer adoption and growth – in the 4th quarter of last year, they saw over 100% growth, with more than 300 products and features launched. The company is clearly expanding their horizons beyond retail and putting a focus on innovation and development in the cloud space – both in China where their core customer base is located, and throughout the world as companies globally choose Alibaba as their primary cloud provider or as part of a multi-cloud strategy.

But the real reason we’re here is to help cloud users solve the enormous problem of cloud waste.

We estimate that Alibaba users will waste $552 million on idle cloud resources this year – that’s $1.5 million per day that could easily be saved with automated cost optimization in place. There’s no time to lose in getting cost control measures in place.

See it In Action

Get a preview of ParkMyCloud – watch this 2-minute demo to see how it works. To see a full demo and get your questions answered, schedule a personalized demo now.

Try Now for Free

You can get started with Alibaba Cloud cost control now with a free 14-day trial of ParkMyCloud, with full access to premium features.

After your trial expires, you can choose to continue using the free tier, or upgrade to use premium features such as SmartParking, full API access, advanced reporting and SSO.

Cheers, and happy parking.


Interview: Hitachi ID Systems Optimizes Training Infrastructure with ParkMyCloud

Hitachi ID Systems recently reached their first ParkMyCloud birthday – to celebrate, we chatted with Patrick McMaster about how they optimize training infrastructure and why he and his team said “We honestly couldn’t be happier with ParkMyCloud.”

Can you start by telling me about Hitachi ID Systems and what you and your team do within the company?

Hitachi ID Systems makes identity and access management (IAM) software. I am the training coordinator, so I handle getting clients and potential partners up to speed with how our software works, how to install it, how to administrate it, etc. For those who are more interested in learning about software, we set them up with a virtual environment and course materials or an instructor to get them up to speed with how the software works.

Can you describe how you’re using public cloud?

We use AWS exclusively. When we advertise that we’re running a course a few months ahead of time, our infrastructure starts seeing the registrations and will start creating VMs, applying patches, getting the latest version of the appropriate software installers on the desktop and getting everything ready for the students, who will be accessing geographically-local AWS infrastructure.

In the past, everything for this online training was very manual. We on the training team would spin up the VMs manually, do the updates manually, and send the information to the potential students. Then when the course was over we would go through and do the reverse – shutting the elements down and turning off the virtual machines on AWS.

What does the supporting training infrastructure in AWS look like?

We have a number of VMs running per student or team that only need to be active during the team’s local business hours, plus some additional supporting infrastructure which is required 24/7. As we got more students and began offering self-paced training, we started to realize that our AWS fees were increasing from the 24/7 access we were providing, and so was the overhead of keeping track of which students are where, when they should be brought into the system, when they should be shut down, etc. We needed to find a solution pretty quickly as we experienced that period of rapid growth.

How did that lead you to finding ParkMyCloud?

We knew we needed to automate the manual processes for this. Of course, lots of organizations tend to come up with solutions internally first. We’re a software company, so we had the talent for that, but we never have enough time. I’ve come to terms more and more every year with the benefits of delegating to outside sources. I realized that we are probably not the first organization to have this problem, so I Googled and found ParkMyCloud.

It became quickly evident that the features that you offered were exactly what we were looking for.

Can you describe your experience as a ParkMyCloud user so far?

Sure! So just before our demo of ParkMyCloud, we were fighting with this issue of trying to figure out how we can manage multiple time zones and multiple geographic locations, and not pay for that time that VMs are just spun up.

Then we went through the ParkMyCloud demo process and started our trial. We connected to our system and looked at ways to set up different schedules and pull information from AWS. There was definitely a moment where everyone in the room looked at each other and said, “we must be missing something” – there had to be some additional steps we hadn’t thought of because it seemed too easy. But it really was that easy.

It just took a week of monitoring to make sure everything was turning off when it was supposed to – the bulk of our effort was really in that first week, and the time we need to spend in the interface is so small. We can go into ParkMyCloud’s dashboard and make exceptions to the schedule when needed, but the time that we actually spend thinking about these things is now about 2 hours a week, whereas before it was something that members of our staff might struggle with for 1-2 days. It’s been a huge improvement.

We did some calculations just in terms of uptime versus what we were doing before, and having the different schedules at our disposal and being able to spend that one week covering every scenario we could think of was time well-spent. Now there are very few exceptions. I don’t think we’ve had to create a new schedule in a long time. Everything is organized logically, and it’s very easy for us to find everything we need.

Who is responsible for tracking your AWS spending in the organization? Have they had any feedback?

Our finance department. Since we started using ParkMyCloud, it’s been very very quiet. No news is good news from finance. We are saving about 40% of our bill.

Do you have any other cloud cost savings measures in place?

Not for this training infrastructure. We have a pretty unique use case here. Our next steps are going to be more towards automatic termination, automatic spinning things up, more time-saving measures rather than cost.

This summer ParkMyCloud is working on instance rightsizing, if that’s something that would be helpful for you.

That’s definitely something that we could use. We are always trying to find better ways of doing things.

OK, great, thanks Patrick! Appreciate your time.

Thank you, have a good one!


4 Types of Idle Cloud Resources That Are Wasting Your Money

We have been talking about idle cloud resources for several years now. Typically, we’re talking about instances purchased On Demand that you’re using for non-production purposes like development, testing, QA, staging, etc. These resources can be “parked” when they’re not being used, such as on nights and weekends, saving 65% or more per resource each month. What we haven’t talked much about is how the problem of idle cloud resources extends beyond just your typical virtual machine.

Why Idle Cloud Resources are a Problem

If you think about it, the problem is pretty straightforward: if a resource is idle, you’re paying your cloud provider for something you’re not actually using. This adds up.

Most non-production resources can be parked about 65% of the time, that is, parked 12 hours per day and all day on weekends (this is confirmed by looking at the resources parked in ParkMyCloud – they’re scheduled to be off just under 65% of the time.) We see that our customers are paying their cloud providers an average list price of $220 per month for their instances. If you’re currently paying $220 per month for an instance and leaving it running all the time, that means you’re wasting $143 per instance per month.

Maybe that doesn’t sound like much. But if that’s the case for 10 instances, you’re wasting $1,430 per month. One hundred instances? You’re up to a bill of $14,300 for time you’re not using. And that’s just a simple micro example. At a macro level that’s literally billions of dollars in wasted cloud spend.
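
Here’s that arithmetic as a quick sketch, so you can plug in your own instance count and average list price:

```python
# The arithmetic above: monthly waste from always-on non-production instances
# that could be parked ~65% of the time.

def monthly_waste(instance_count, list_price_per_month=220.0, parkable_fraction=0.65):
    """Spend on hours the instances didn't actually need to be running."""
    return instance_count * list_price_per_month * parkable_fraction

for count in (1, 10, 100):
    print(f"{count:>4} instances: ${monthly_waste(count):,.0f} wasted per month")
# ->    1 instances: $143 wasted per month
# ->   10 instances: $1,430 wasted per month
# ->  100 instances: $14,300 wasted per month
```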

4 Types of Idle Cloud Resources

So what kinds of resources are typically left idle, consuming your budget? Let’s dig into that, looking at the big three cloud providers — Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP).

  • On Demand Instances/VMs – this is the core of the conversation, and what we’ve addressed above. On demand resources – and their associated scale groups – are frequently left running when they’re not being used, especially those used for non-production purposes.
  • Relational Databases – there’s no doubt that databases are frequently left running when not needed as well, in similar circumstances to the On Demand resources. The problem is whether you can park them to cut back on wasted spend. AWS allows you to stop certain types of its RDS service; however, you cannot park the equivalent idle database services in Azure (SQL Database) or GCP (Cloud SQL). In that case, you should review your database infrastructure regularly and terminate anything unnecessary – or change to a smaller size if possible.
  • Load Balancers – AWS Elastic Load Balancers (ELB) cannot be stopped (or parked), so to avoid being billed for one you’re not using, you need to remove it. The same goes for Azure Load Balancer and GCP load balancers. Alerts can be set up in CloudWatch/Azure Monitor/Google Stackdriver when you have a load balancer with no instances, so be sure to make use of those alerts (see the sketch after this list for one way to find them).
  • Containers – optimizing container use is a project of its own, but there’s no doubt that container services can be a source of waste. In fact, we are evaluating the ability for ParkMyCloud to park container services including ECS and EKS from AWS, ACS and AKS from Azure, and GKE from GCP, and the ability to prune and park the underlying hosts. In the meantime, you’ll want to regularly review the usage of your containers and the utilization of the infrastructure, especially in non-production environments.
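
For the classic ELB case mentioned above, here is a minimal boto3 sketch that finds load balancers with no registered instances. ALBs and NLBs use the elbv2 API and target groups instead, and the delete call is commented out so nothing is removed without review.

```python
# Find classic ELBs with no registered instances (boto3).
# ALBs/NLBs use the 'elbv2' client and target groups instead.
import boto3

elb = boto3.client("elb")

for lb in elb.describe_load_balancers()["LoadBalancerDescriptions"]:
    if not lb["Instances"]:  # nothing registered behind this load balancer
        print(f"No instances registered: {lb['LoadBalancerName']}")
        # elb.delete_load_balancer(LoadBalancerName=lb["LoadBalancerName"])
```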

Cloud waste is a billion-dollar problem facing businesses today. Make sure you’re turning off idle cloud resources in your environment, by parking those that can be stopped and eliminating those that can’t, to do your part in optimizing cloud spend.


4 Orphaned Cloud Resources Eating Away at Your AWS Budget – and How to Avoid Them

We recently discussed how orphaned volumes and snapshots contribute to cloud waste and what you can do about it, but those are just two examples of orphaned cloud resources that result in unnecessary charges. The public cloud is a pay-as-you-go utility, requiring full visibility of specific infrastructure – you don’t want to be charged for resources you aren’t using. Here are other types of orphaned cloud resources that contribute to cloud waste (and cost you money):

Unassociated Elastic IPs  

Elastic IPs are reserved public IP addresses designed for dynamic cloud computing in AWS. As a static IPv4 address associated with your AWS account, an Elastic IP lets an EC2 instance keep the same public address even if it is stopped and restarted, and lets you quickly remap the address to another one of your instances. You can allocate an Elastic IP address to any EC2 instance in a given region, until you decide to release it.

The advantage of having an Elastic IP (EIP) is the ability to mask the failure of an EC2 instance, but if the address is not associated with a running instance – you’re still getting charged. To avoid incurring a needless hourly charge from AWS, remember to release any unassociated EIPs you no longer need.
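
Here is one way to spot them with boto3 – a minimal sketch that lists addresses with no association. The release call is commented out so nothing is removed without review.

```python
# List Elastic IPs that aren't associated with anything (boto3).
import boto3

ec2 = boto3.client("ec2")

for addr in ec2.describe_addresses()["Addresses"]:
    if "AssociationId" not in addr:  # not attached to an instance or network interface
        alloc = addr.get("AllocationId", "EC2-Classic")
        print(f"Unassociated EIP: {addr['PublicIp']} ({alloc})")
        # ec2.release_address(AllocationId=addr["AllocationId"])  # uncomment to release
```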

Elastic Load Balancers (with no instances)

Cloud load balancing allows users to distribute workloads and traffic with the benefit of the cloud’s scalability. All major cloud providers offer some type of load balancing – AWS users can balance workloads and distribute traffic among EC2 instances with its Elastic Load Balancer, Google Cloud can distribute traffic between VM instances with Google Cloud Load Balancing, and Azure’s Load Balancer distributes traffic across VMs.

An AWS Elastic Load Balancer (ELB) will incur charges on your bill as long as it’s configured in your account. As with Elastic IPs, whether you’re using it or not – you’re paying. If you have no instances associated with your ELB, delete it to avoid paying needless charges on your monthly bill.

Unused Machine Images (AMIs)

A Machine Image provides the information required to launch an instance, which is a virtual server in the cloud. In AWS they’re called AMIs, in Azure they’re Managed Images, and in Google Cloud Platform they’re Custom Images.

As part of your measures to reduce unnecessary costs from orphaned volumes, delete unused machine images when you no longer need them. In AWS, note that deregistering an AMI does not automatically remove its backing snapshot – unless deleted, the snapshot created along with the image will continue to incur storage costs.

Object Storage

One of the growing pains that organizations face is the management of isolated pools of data in their cloud environment. Fragmented storage can result from data coming from a number of sources used by applications and business processes. Object Storage was designed to break down silos into scalable, cost-effective storage to store data in its native format. AWS offers object storage solutions like Amazon S3 and Amazon Glacier, Google has Google Cloud Storage, and Azure calls its solution Azure Blob Storage. All options help you manage your storage in one place, keeping your data organized and your business more cost effective.  

Although object storage in and of itself is a cost-effective solution, there are still ways to optimize and reduce costs within it. Delete files you no longer need so that you’re not paying for them, including unused files which can be recreated. In S3, use the “lifecycle” feature to expire or transition older versions of data. Clean up incomplete uploads that were interrupted, since they leave partial objects taking up needless space. And compress your data before storing it to get better performance and reduce your storage requirements.
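
Here is a minimal sketch of those lifecycle rules via boto3 – one rule expires old object versions and one cleans up incomplete multipart uploads. The bucket name and day counts are placeholders.

```python
# Minimal sketch: S3 lifecycle rules that expire old object versions and
# abort incomplete multipart uploads. Bucket name and day counts are placeholders.
import boto3

s3 = boto3.client("s3")

s3.put_bucket_lifecycle_configuration(
    Bucket="my-example-bucket",  # placeholder
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "expire-old-versions",
                "Filter": {"Prefix": ""},  # apply to the whole bucket
                "Status": "Enabled",
                "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
            },
            {
                "ID": "abort-incomplete-uploads",
                "Filter": {"Prefix": ""},
                "Status": "Enabled",
                "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
            },
        ]
    },
)
```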

How to Avoid Wasted Spend on Orphaned Cloud Resources

Don’t let forgotten resources waste money on your cloud bill. Put a stop to cloud waste by eliminating orphaned cloud resources and inactive storage, saving space, time, and money in the process. Remember to:

  • Release unassociated IPs you no longer need.
  • Remove Elastic Load Balancers with no instances attached.
  • Delete unused machine images when you no longer need them.
  • Keep object storage minimal – optimize by “cleaning up” regularly, removing files you don’t need.

The cloud and the resources available to you are meant to be cost effective, but it’s up to you to keep costs in check. Optimize your cloud spend with visibility, management, and cost automation tools like ParkMyCloud to get the most out of your cloud environment.


How to Keep Costs in Check After Converting a Monolith to Microservices

You’ve gone full-blown DevOps, drank the Agile Kool-Aid, cloudified everything, and turned your monolith to microservices — so why have all of your old monolith costs turned into even bigger microservices costs? There are a few common reasons this happens, and some straightforward first steps to get microservices cost control in place.

Why Monolith to Microservices Drives Costs Up

As companies and departments adapt to modern software development processes and utilize the latest technologies, they assume they’re saving money – or forget to think about it altogether. Smaller applications and services should come with more savings opportunities, but complexity and rapidly-evolving environments can actually make the costs skyrocket. Sometimes, it’s happening right under your nose, but the costs are so hard to compile that you don’t even know it’s happening until it’s too late.

The same thing that makes microservices attractive — smaller pieces of infrastructure that can work independently from each other — can also be the main reason that costs spiral out of control. Isolated systems, with their own costs, maintenance, upgrades, and underlying architecture, can each look cheaper than the monolithic system you were running before, but can skyrocket in cost when aggregated.

How to Control Microservices Costs

If your microservices costs are already out of control, there are a few easy first steps to reining them in.

Keep It Simple

As with many new trends, there is a tendency to jump right in and switch everything to the new hotness. Having a drastic cutover, while scrapping all of your old code, can be refreshing and damaging all at the same time. It makes it hard to keep track of everything, so costs can run rampant while you and your team are struggling just to comprehend what pieces are where. By keeping some of what you already have, but slowly creating new functionality in a microservices model, you can maintain a baseline while focusing on costs and infrastructure of your new code.

The other way to keep it simple is to keep each microservice extremely limited in scope. If a microservice does just one thing, without a bunch of bells and whistles, it’s much easier to see if costs are rising and make the infrastructure match the use case. Additional opportunities for using PaaS or picking a cloud provider that fits your needs can really help maximize utilization.

Scalability and Bursting

Microservices architectures, by the very nature of their design, allow you to optimize individual pieces to minimize bottlenecks. This optimization can also include cost optimization of individual components, even to the point of having idle pieces turned completely off until they are needed. Other pieces might be on, but scaled down to the bare minimum, then rapidly scale out when demand runs high. A fluctuating architecture sounds complex, but can really help keep costs down when load is low.

User Governance

Along with a microservices architecture, you may start having certain users and departments be responsible for just a piece of the system. With that in mind, cloud providers and platform tools can help you separate users to only access the systems and infrastructure they are working on so they can focus on the operation (and costs) of that piece. This allows you to give individual users the role that is necessary for minimal access controls, while still allowing them to get their jobs done.

Ordered Start/Stop and Automation with ParkMyCloud

ParkMyCloud is all about cost control, so we’ve started putting together a cost-savings plan for our customers who are moving from monolith to microservices.

First, they should use ParkMyCloud’s Logical Groups to put multiple instances and databases into a single entity with an ordered list. This way, your users do not have to remember which servers to start for their application – instead, they can start one group with a single click. This can help eliminate the support tickets that come from parts of the system not running.

Additionally, use Logical Groups to set start delays and stop delays between nodes of the group. With delays, ParkMyCloud will know to start database A, then wait 10 minutes before starting instance B, to ensure the database is up and ready to accept connections. Similarly, you can make sure other microservices are shut down before finally shutting down the database.

Everything you can do in the ParkMyCloud user interface can also be done through the ParkMyCloud REST API. This means that you can temporarily override schedules, toggle instances to turn off or on, or change team memberships programmatically. In a microservices setup, you might have certain pieces that are idle for large portions of the day. With the ParkMyCloud API, you could have those nodes turned off on a schedule to save money, then have a separate microservice call the API to turn the node on when it’s needed.
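
As an illustration of that pattern, here is a sketch of a microservice toggling a node through a REST call. The base URL, route, payload shape, and authentication header below are assumptions for illustration – not the documented ParkMyCloud API – so consult the actual API reference for the real endpoints.

```python
# Illustrative sketch only: the base URL, route, payload, and auth header are
# assumptions, NOT the documented ParkMyCloud API. Check the real API docs.
import requests

BASE_URL = "https://parkmycloud.example.com/api"  # placeholder
API_KEY = "YOUR_API_KEY"                          # placeholder

def toggle_resource(resource_id, desired_state):
    """Ask the cost-control service to start or stop a parked resource."""
    resp = requests.put(
        f"{BASE_URL}/resources/{resource_id}/state",  # hypothetical route
        json={"state": desired_state},                # e.g. "running" or "stopped"
        headers={"Authorization": f"Bearer {API_KEY}"},
        timeout=30,
    )
    resp.raise_for_status()

# A batch microservice could call toggle_resource("db-node-1", "running") just
# before it needs the database, and "stopped" once the job finishes.
```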

The Goal: Continuous Cost Control

Moving from monolith to microservices can be a huge factor in a successful software development practice. Don’t let cost be a limiting factor – practice continuous cost control, no matter what architecture you choose. By putting a few cost control measures in place with ParkMyCloud, along with some automation and user management, you can make sure your new applications are not only modern, but also cost-effective.


Sabre Chose Multi-Cloud for Cost Control – But They Need to Get it Right

Travel technology company Sabre announced a strategic agreement with Microsoft last week, weeks after a similar agreement with AWS. There are a lot of factors contributing to these decisions, but among them, it seems likely they’ve chosen multi-cloud for cost control.

The company has been under the leadership of CEO Sean Menke for a year and a half, and in that time has already downsized its workforce by 10% – saving the company $110 million in annual costs. Against such a backdrop, clearly, cost control will be front of mind.

So how will a multi-cloud strategy contribute to controlling costs as Sabre aims to “reimagine the business of travel”, in their words?

Why Multi-Cloud for Cost Control Makes Sense

As Sabre moves into AWS and Azure, they plan to write new applications with a microservices architecture deployed on Docker containers. Containerization can be an effective cost-saving strategy: it reduces the amount of infrastructure needed – and thereby reduces wasted spend – and simplifies software delivery processes to increase productivity and reduce maintenance.

Plus, containerization has the advantage of ease of portability. With a large and public account like Sabre’s, this becomes a cost reduction strategy as AWS and Azure are forced into competition for their business against each other. “We want to have incentives for (cloud providers) not to take our business for granted,” said CIO Joe DiFonzo.

Avoiding vendor lock-in and optimizing workloads are the top two cited reasons for companies to choose a multi-cloud strategy – both of which contribute to cost control.

Either Way, Cost Has to Be a Factor

Aside from the reasons listed above, Sabre may have chosen to make deals with both AWS and Azure due to each cloud provider’s technological strengths, support offerings, developer familiarity, or for other reasons. Whether they’ve chosen multi-cloud for cost control as the primary reason is debatable, but they certainly need to control costs now that they’re there.

First of all, most cloud migrations go over budget – not to mention that 62% of first-attempt cloud migrations take longer than expected or fail outright, wasting money directly and through opportunity cost.

Second, Sabre’s legacy system of local, on-premises infrastructure means their IT and development staff is used to the idea of resources that are always available. Users need to be re-educated to learn a “cloud as utility” mindset – as a Director of Infrastructure at Avid put it, users need to learn “that there’s a direct monetary impact for every hour that an idle instance is running.” Of course, this is an issue we see every day.

For companies new to the cloud, we recommend providing training and guidelines to IT Ops, DevOps and Development teams about proper use of cloud infrastructure. This should include:

  • Clear governance structures – which users can make infrastructure purchases? How are these purchases controlled?
  • Turning resources off when not needed – automating non-production resources to turn off when not needed can reduce the cost of those resources by 65% or more (happy to help, Joe DiFonzo!)
  • Regular infrastructure reviews – especially as companies get started in the cloud, it’s easy to waste money on orphaned resources, oversized resources, and resources you no longer need. We recommend regular reviews of all infrastructure to ensure every unused item is caught and eliminated.

Cheers to you, Sabre, and best of luck in your cloud journey.


AWS vs Alibaba Cloud Pricing: A Comparison of Compute Options

More cloud users are starting to investigate Alibaba Cloud, so the time is ripe for a comparison of AWS vs Alibaba Cloud pricing. Commonly recognized as the #4 cloud provider (from a revenue perspective anyway), Alibaba is one of the fastest growing companies in the space today.

Alibaba has been getting a lot of attention lately, given its rapid growth, and its opportunities for cloud deployments within mainland China. ParkMyCloud is preparing to release support for Alibaba, and that of course has let us focus on pricing and cost savings – our forte. In this article I am going to dive a bit into the pricing of the Alibaba Elastic Compute Service (ECS), and compare it with that of the AWS EC2 service.

Alibaba vs Aliyun

Finding actual pricing for comparison purposes can be a bit complicated, as the prices are listed in a couple of different places and do not exactly match up. If one searches for Alibaba pricing, one ends up here, which I am going to call the “Alibaba Cloud” site. However, when you actually get an account and want to purchase an instance, you end up here or here, both of which I will call the “Aliyun” site. [Note that you may not be able to see the Aliyun sites without signing up for an account and actually logging in.]

Aliyun (literally translated “Ali Cloud”) was the original name of the company, and the name was changed to Alibaba Cloud in July 2017. Unsurprisingly, the Aliyun name has stuck around on the actual operational guts of the company, reflecting that it is probably hard-coded all over the place, both internally and externally with customers. (Supernor’s 3rd Conjecture: Engineering can never keep up with Marketing.)

Both sites show that like the other major cloud providers, Alibaba’s pricing model includes a Pay-As-You-Go (PAYG) offering, with per-second billing. Note, however, that in order to save money on stopped instances, one must specifically enable a “No fees for stopped instances” feature. Luckily, this is a global one-time setting for instances operating under a VPC, and you can set it and forget it. Unlike AWS, this feature is not available for any instances with local disks (this and other aspects of the description lead me to believe that Alibaba instances tend to be “sticky” to the underlying hardware instance). On AWS, local disks are described as ephemeral, and are simply deallocated when they are not in use. Like AWS, system/data disks continue to accrue costs even when an instance is stopped.

Both sites also show that Alibaba also has a one-month prepaid Subscription model. Based on a review of the pricing listed for the us-east-1 region on the Alibaba Cloud site, the monthly subscription discount reflects a substantial 30-60% discount compared to the cost of a PAYG instance that is left up for a full month. For a non-production environment that may only need to be up during normal business hours (say, 9 hours per day, weekdays only), one can easily see that it may be more cost-effective to go with the PAYG pricing, and use the ParkMyCloud service to shut the instances down during off-hours, saving 73%.
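
For reference, here is where that ~73% figure comes from – a 9-hour weekday schedule compared with leaving the instance up 24/7:

```python
# Where the ~73% figure comes from: a 9-hour weekday schedule vs. running 24/7.
hours_running_per_week = 9 * 5   # 9 hours/day, weekdays only
hours_in_a_week = 24 * 7

savings = 1 - hours_running_per_week / hours_in_a_week
print(f"PAYG + parking saves ~{savings:.0%} vs. leaving the instance up 24/7")
# -> PAYG + parking saves ~73% vs. leaving the instance up 24/7
```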

But this is where the similarities between the sites end. For actual pricing, instance availability, and even the actual instance types, one really needs to dive into a live Alibaba account. In particular, if PAYG is your preference, note that the Alibaba public site appears to have PAYG pricing listed for all of their available instance types, which is not consistent with what I found in the actual purchasing console.

Low-End Instance Types – “Entry Level” and “Basic”

The Alibaba Cloud site breaks down the instance types into “Entry Level” and “Enterprise”, listing numerous instance types under both categories. All of the Entry Level instance types are described as “Shared Performance”, which appears to mean the underlying hardware resources are shared amongst multiple instances in a potentially unpredictable way, or as described by Alibaba: “Their computing performance may be unstable, but the cost is relatively low” – an entertaining description to say the least. I did find these instance types on the internal purchasing site, but did not delve any further with them, as they do not offer a point of reference for our AWS vs. Alibaba Cloud pricing comparison. They may be an interesting path for additional investigation for non-production instance types where unstable computing performance may be OK in exchange for a lower price.

That said…after logging in to the Alibaba management console, reaching the Aliyun side of the website, there is no mention of Entry Level vs Enterprise. Instead we see the top-level options of “Basic Purchase” vs “Advanced Purchase”. Under Basic Purchase, there are four “t5” instance types. The t5 types appear to directly correspond to the first four AWS t2 instance types, in terms of building up CPU credits.

These four instance types do not appear to support the PAYG pricing model. Pricing is only offered on a monthly subscription basis. A 1-year purchase plan is also offered, but the math shows this is just the monthly price x12. It is important to note that the Aliyun site itself has issues, as it lists the t5 instance types in all of the Alibaba regions, but I was unable to purchase any of them in the us-east-1 region – “The configuration for the instance you are creating is currently not supported in this zone.”  (A purchase in us-west-1, slightly more expensive, was fine).

The following shows a price comparison for Alibaba vs AWS for “t” instance prices in a number of regions. The AWS prices reflect the hourly PAYG pricing, multiplied by an average 730 hour month. I was not able to get pricing for any AWS China region, so the Alibaba pricing is provided for reference.

While the AWS prices are higher, the AWS instances are PAYG, and thus could be stopped when not being used, common for t2 instances used in a dev-test environment, and potentially saving over 73%. One can easily see that this kind of savings is needed to compete with the comparatively low Alibaba prices. I do have to wonder what is up with that Windows pricing in China….does Microsoft know about this??

Aliyun “Advanced Purchase”

Looking at the “Advanced” side of the Aliyun purchasing site, we get a lot more options, including Pay-As-You-Go instances. To keep the comparison simple, I am going to limit the scope here to a couple of instance types, trying to compare a couple m5 and i3 instances with their Alibaba equivalents. I will list PAYG pricing where offered.

In this table, the listed monthly AWS prices reflect the hourly pay-as-you-go price, multiplied by an average 730 hour month.

The italicized/grey numbers under Alibaba indicate PAYG numbers that had to be pulled from the public-facing website, as the instance type was not available for PAYG purchase on the internal site. From a review of the various options on the internal Aliyun site, it appears the PAYG option is not actually offered for very many standalone instance types on Alibaba…

The main reason I pulled in the PAYG prices from the second source was for auto scaling, which is normally charged at PAYG prices. In Alibaba, “all ECS instances that Auto Scaling automatically creates, or manually adds, to a scaling group will be charged according to their instance types. Note that you will still be charged for Pay-As-You-Go instances even after you stop them.”  It is possible, however, to manually add subscription-based instances to an auto scaling group, and configure them to be not removed when the group scales-down.

In general, the full price of the AWS Linux instances over a month is 22-35% higher than that of an Alibaba 1-month subscription. A full-price AWS Windows instance over a month is 9-25% higher than that of an Alibaba subscription. (And once again, it appears Windows licensing fees are not a factor in China.)

AWS vs Alibaba Cloud Pricing: Alibaba is cheaper, but…

Alibaba definitely comes out as less expensive in this AWS vs Alibaba cloud pricing comparison – the one-month subscription has a definite impact. However, for longer-lived instances, AWS Reserved Instances will certainly be less expensive, running about 40-75% less expensive than AWS PAYG, and thus less than some if not all of the Alibaba monthly subscriptions. AWS RI’s are also more easily applicable to auto scaling groups than a monthly subscription instance.

For non-production instances that can be shut down when not in use, PAYG is less expensive for both cloud providers, and ParkMyCloud can help you schedule the downtime. The difficulty with Alibaba will be finding instance types that can actually be purchased with the PAYG option.


Don’t Let Orphaned Volumes and Resources Contribute to Cloud Waste

Maybe you’re familiar with the ways idle instances contribute to cloud waste, but orphaned volumes and other resources are also easily missed, needlessly increasing your monthly bill. Since the cloud is a pay-as-you-go utility, it’s easy to lose visibility of specific infrastructure costs and discover charges for resources you aren’t even using. Here’s how orphaned resources contribute to cloud waste, and what you can do about it.

How Orphaned Volumes are Eating Your Budget

The gist of it: when you shut down or terminate an instance or VM, you can be left with orphaned volumes and snapshots of those or other volumes – unattached to any server but still incurring monthly $/GB charges.

Let’s take the example of AWS EC2. You’ve stopped all of your AWS EC2 instances, but you’re still getting charged monthly for Amazon EBS storage. This happens because even though you didn’t leave your instances running (*high five*), you’re still charged for EBS storage, in GB per month, for the capacity provisioned to your account. While EC2 instances only accrue charges while they’re running, EBS volumes attached to those instances retain their data and continue charging you even after an instance has been stopped.

How to Reduce Waste from Orphaned Volumes

To save your data without paying monthly for the storage volume, you can take a snapshot of the volume as a backup and then delete the original volume. You’ll still be charged for EBS snapshots, but they’re billed at a lower rate and you still have the option to restore the volume from the snapshot if you need it later. EBS volume snapshots are backed up to S3. They’re compressed and therefore save storage, but do keep in mind that the initial snapshot is of the entire volume, and depending on how frequently you take subsequent (incremental) snapshots, your total could end up taking as much space as the first snapshot.

When you no longer need these snapshots, Amazon’s user guide has instructions for how to delete EBS volumes and EBS snapshots.
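
Here is a minimal boto3 sketch of that workflow for volumes that are already unattached (status “available”): snapshot each one, wait for the snapshot to complete, then delete the original. The delete call is commented out because it is destructive – review the volume list before enabling it.

```python
# Minimal sketch: snapshot unattached ("available") EBS volumes, then delete
# the originals once the snapshots complete. The delete is destructive, so
# review the list before uncommenting it.
import boto3

ec2 = boto3.client("ec2")

available = ec2.describe_volumes(
    Filters=[{"Name": "status", "Values": ["available"]}]
)["Volumes"]

for vol in available:
    vol_id = vol["VolumeId"]
    snap = ec2.create_snapshot(
        VolumeId=vol_id,
        Description=f"Backup of orphaned volume {vol_id}",
    )
    ec2.get_waiter("snapshot_completed").wait(SnapshotIds=[snap["SnapshotId"]])
    print(f"Snapshotted {vol_id} as {snap['SnapshotId']}")
    # ec2.delete_volume(VolumeId=vol_id)  # uncomment once the snapshot is verified
```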

Similar to EBS, Azure offers Managed Disks as a storage service for VMs and provides backups of persistent disks. But while EBS volume snapshots are compressed and incremental, and therefore take up less storage, Azure only takes full point-in-time snapshots, which can become costly since you can take as many snapshots as you want from the same Managed Disk.

If you’re using Google Cloud Platform, then Compute Engine also provides backups of persistent disks, with instructions for creating, restoring, and deleting snapshots. Like EBS snapshots, Google’s persistent disk snapshots are automatically compressed and incremental, saving storage space. The benefits (and risks) are the same as with the other cloud providers – e.g., lower bills and reduced storage costs – but you will still need to ensure that your snapshotting strategy does not leave you exposed to risk.

Watch Out for Other Orphaned Resources

Moral of the story: delete snapshots that you don’t need from terminated instances and VMs. It’s easy to see how a small feature that is supposed to save you money can end up forgotten, costing you money for resources you’re not using.

Orphaned volumes and snapshots are just one example of how orphaned resources can result in unnecessary charges. Others include:

  • Unassociated IPs (AWS – Elastic IPs);
  • Load Balancers (with no instances);
  • Unused machine images; and
  • Object Storage.

Don’t let orphaned volumes, snapshots, and other forgotten resources drive up your cloud bill. Put a stop to cloud waste by eliminating orphaned resources and inactive storage, saving space, time, and money in the process.
