Up to $2.6 Billion in Cloud Spend Wasted on Orphaned Volumes and Snapshots Annually

Wasted spend on orphaned volumes and snapshots compounds the broader problem of cloud waste. We have previously estimated that $17.6 billion will be wasted this year on idle and oversized resources in public cloud. Today we’re going to dive into so-called orphaned resources.

A resource becomes “orphaned” when it is detached from the infrastructure it was created to support, such as a volume left behind by a terminated instance or a snapshot whose source volume has been deleted. Whether or not you are aware that these resources remain in your cloud environment, they can continue to incur costs, wasting money and driving up your cloud bill.

How Resources Become Detached

One form of orphaned resource comes from storage. Volumes or disks, such as Amazon EBS volumes, are created and attached to an EC2 instance; you can attach multiple volumes to a single instance to add storage space. If an instance is terminated but the volumes attached to it are not deleted, “orphaned volumes” have been created. Note that by default, the boot disk attached to every instance is deleted when the instance is terminated (although it is possible to deselect this option), but any additional disks that have been attached do not necessarily follow this same behavior.
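
One way to spot these is to look for volumes in the “available” state, meaning they are attached to nothing. Here is a minimal boto3 sketch of that check; the region is an assumption, and you would adapt it to your own environment:

```python
# A minimal sketch: list unattached ("available") EBS volumes in one region.
# Assumes boto3 is installed and AWS credentials are configured.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

paginator = ec2.get_paginator("describe_volumes")
pages = paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}])

for page in pages:
    for vol in page["Volumes"]:
        tags = {t["Key"]: t["Value"] for t in vol.get("Tags", [])}
        print(vol["VolumeId"], f'{vol["Size"]} GiB', tags.get("Name", "<untagged>"))
```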

Snapshots can also become orphaned resources. A snapshot is a point-in-time image of a volume; in the case of Amazon EBS, snapshots are stored in Amazon S3. EBS snapshots are incremental, meaning only the blocks on the device that have changed since your most recent snapshot are saved. If the associated instance and volume are deleted, a snapshot can be considered orphaned.
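
A similar sketch can flag snapshots whose source volume no longer exists. Treat the output as candidates for review rather than a delete list, since (as discussed below) snapshots backing machine images or kept as cheap backups may legitimately outlive their volumes:

```python
# A minimal sketch: flag self-owned snapshots whose source volume is gone.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Collect the IDs of volumes that still exist in this region.
live_volumes = set()
for page in ec2.get_paginator("describe_volumes").paginate():
    live_volumes.update(v["VolumeId"] for v in page["Volumes"])

# Any snapshot pointing at a volume that is gone is a candidate orphan.
for page in ec2.get_paginator("describe_snapshots").paginate(OwnerIds=["self"]):
    for snap in page["Snapshots"]:
        if snap.get("VolumeId") not in live_volumes:
            print("candidate orphan:", snap["SnapshotId"], snap["StartTime"])
```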

Are All Detached Resources Unnecessary?

Just because a resource is detached does not mean it should be deleted. For example, you may want to keep:

  • The most recent snapshots backing up a volume
  • Machine images used to create other machines
  • Snapshots used to inexpensively store the state of a machine you intend to use later, rather than keeping a volume around

However, like the brownish substance in the Tupperware at the back of your freezer, anything you want to keep needs to be clearly labeled in order to be useful. Snapshots and volumes are not automatically tagged with enough information to tell what they actually are. In ParkMyCloud, we see exabytes of untagged storage in our customers’ environments, with no way of knowing whether it is safe to delete. In the AWS console, metadata is not cleanly propagated from the parent instance, and you have to go out of your way to tag snapshots before the parent instances are terminated. Once the parent instance is terminated, it can be impossible to identify the source of an orphaned volume or snapshot without actually re-attaching it to a running instance and looking at the data. Tag early and tag often!
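
One lightweight way to “tag early” is to copy identifying tags from an instance down to its attached volumes before the instance goes away. Here is a hedged boto3 sketch; the instance ID is a placeholder, and the SourceInstance tag key is our own invention:

```python
# A minimal sketch: copy an instance's Name tag down to its attached volumes
# so that, if the instance is terminated, the volumes remain identifiable.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption
instance_id = "i-0123456789abcdef0"                 # placeholder instance ID

resp = ec2.describe_instances(InstanceIds=[instance_id])
instance = resp["Reservations"][0]["Instances"][0]
tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}

volume_ids = [
    m["Ebs"]["VolumeId"]
    for m in instance.get("BlockDeviceMappings", [])
    if "Ebs" in m
]
if volume_ids:
    ec2.create_tags(
        Resources=volume_ids,
        Tags=[
            {"Key": "Name", "Value": tags.get("Name", instance_id)},
            {"Key": "SourceInstance", "Value": instance_id},  # our own convention
        ],
    )
```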

The Size of Wasted Spend 

To estimate the size of the orphaned volume and snapshot problem, we’ll start with aggregate data from ParkMyCloud customers. ParkMyCloud customers spend approximately 15% of their bills on storage, and we found that 35% of that storage spend was on unattached volumes and snapshots. As detailed above, this doesn’t mean all of it is wasted, but the lack of tagging and the excess of snapshots per volume indicate that much of it is.

Overall, an average of 5.25% of our customers’ bills is being spent on unattached volumes and snapshots. Applying that percentage to the $50 billion estimated to be spent on Infrastructure as a Service (IaaS) this year gives up to $2.6 billion wasted this year on orphaned volumes and snapshots. This is a huge problem.
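
The arithmetic behind that estimate is simple enough to check:

```python
# Back-of-the-envelope math behind the $2.6 billion figure.
iaas_spend = 50e9        # estimated IaaS spend this year, in dollars
storage_share = 0.15     # ~15% of customer bills go to storage
unattached_share = 0.35  # ~35% of that storage spend is unattached

waste = iaas_spend * storage_share * unattached_share
print(f"{storage_share * unattached_share:.2%} of bills")  # 5.25% of bills
print(f"${waste / 1e9:.2f} billion")                       # $2.62 billion
```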

Based on the size of this waste and customer demand, ParkMyCloud is developing capabilities to add orphaned volume and snapshot management to our cost control platform. 

Interested? Let us know here and we’ll notify you when this capability is released. 

Your Guide to Microsoft Ignite 2020: Now Free, Online, and Split in Two

As we look forward to this year’s Microsoft Ignite 2020, we can’t help but also reflect on our first visit to the sold-out live event last year. Part of the live conference experience is the fun surrounding meeting new people, having conversations, attending sessions, spending some time at the expo hall meeting vendors, checking out product demos, plus the swag and cool prizes. However, Microsoft Ignite 2020 is going to look a little bit different this year in its new format as a free digital event. 

September is Only The First Part of Ignite

In response to the current global health crisis, Microsoft announced that Ignite, its conference for developers and IT professionals, will follow the company’s other upcoming events and shift to a digital-only format, instead of the in-person conference scheduled to be held in New Orleans. In addition, Microsoft will split Ignite into two events. The first event will take place on September 22-24, while the second one is planned for early 2021.

Announcements, Speakers, and More!

Microsoft has yet to release the full agenda for Ignite, but one thing it has revealed is the introduction of TableTalks and TableTopics to drive community conversation during the digital event. TableTopics will feature multiple tables with designated topics, hosted on the Microsoft tech community, where you can comment on a conversation or start your own; built-in AI translation will enable a global conversation, giving everyone the opportunity to network with peers around the world. TableTalks, meanwhile, will be moderator-hosted, face-to-face conversations (a.k.a. team meetings) held in real time over video chat.

You can expect in-depth sessions on how to use Azure, Teams, GitHub, and other Microsoft products, along with announcements of new capabilities across its major platforms to enhance cloud computing and productivity.

How to Get the Most Out of Digital-Only events

Last year, Microsoft announced Azure Arc and Azure Synapse Analytics, along with other new capabilities in Azure and Power Platform. While you wait for this year’s digital event, you can revisit last year’s highlights and sessions, now available on demand from the MyIgnite community website.

Microsoft plans to make all of its events digital-only at least through mid-2021. Earlier this year, Build, Microsoft’s annual developer conference, was also held as a virtual-only event focused on practical tools, services, and resources for developers, with some sessions live and others pre-recorded, as was its partner conference, Inspire.

While it won’t be the same as a live event, here are a few ways to maximize the experience:

  • Create a schedule – block off the full days in your calendar now, so you don’t get overbooked with meetings. Once the schedule is released, plan in advance which sessions you’ll attend and put them on your calendar. 
  • Find a watch party – it can actually be easier with a digital event to find other folks to discuss and chat with. If coworkers are tuning in, create a Teams or Slack channel to chat about sessions and announcements. Or, use the #MSIgnite hashtag on Twitter. Many local meetup groups will have their own mechanisms to watch together. And don’t count out Reddit groups and other forums.
  • Look for offers from would-be sponsors – if there are Microsoft product/service-related vendors you’re interested in, sign up for their mailing lists now. There will likely be many online swag/prize giveaways to make up for the loss of the conference hall, which can be a fun way to win cool stuff and of course, learn about potential solutions. (You can always unsubscribe!) We’ll keep an eye out for giveaways and update here. 

Registration is set to open on September 3rd; you can check Ignite’s website for more updates. Both Ignite and Build are expected to be hosted virtually once again in the early part of 2021.

Cloud Financial Management – The New Focus of the AWS Well-Architected Cost Optimization Pillar

In July, AWS updated the cost optimization pillar of their Well-Architected Framework to focus on cloud financial management. This change is a rightful acknowledgment of the importance of functional ownership and cross-team collaboration in order to optimize public cloud costs.

AWS Well-Architected Framework and the Cost Optimization Pillar

If you use AWS, you are probably familiar with the Well-Architected Framework. This is a guide of best practices to help you understand the impact of the decisions you make while designing and building systems on AWS. AWS Well-Architected allows users to learn best practices for building high-performing, resilient, secure, and efficient infrastructure for their workloads and applications. 

This framework is based on five pillars — operational excellence, security, reliability, performance efficiency, and cost optimization. Overall, AWS has done a great job with these particular resources, making them clear and accessible with links to further detail. 

The Cost Optimization pillar generally covers principles we have been preaching for a long time: expenditure and usage awareness; choosing cost-effective resources; managing demand and supply resources; and regularly reviewing your environments and architectural decisions for cost. 

Now, AWS has added Cloud Financial Management to this pillar. Cloud Financial Management is a set of activities that enables finance and technology organizations to manage, optimize, and predict costs as they run workloads on AWS.

Why Do Businesses Need Cloud Financial Management? 

Incorporating Cloud Financial Management into cost optimization plans allows organizations to accelerate the realization of business value and to optimize cost, usage, and scale to maximize financial success.

This is an important part of the cost optimization pillar, as it dedicates resources and time to building capability across specific industries and technology domains. As with the other pillars, users need to build capability through resources, programs, knowledge building, and processes to become a cost-efficient organization.

The first step AWS proposes for CFM is functional ownership. (Further reading: Who Should Manage App Development Costs? and 5 Priorities for the Cloud Center of Excellence.) This matters because most organizations are composed of different units with different priorities, so there is no single set of objectives for everyone to follow. By aligning your organization on a set of financial objectives, and providing it with the means to meet them, you make the organization more efficient. A more efficient organization innovates and builds faster, and is more agile and better able to adjust to outside factors.

What You Need to Keep in Mind

When most people think of cost optimization, they think of cutting costs, but that’s not exactly what AWS is getting at by adding cloud financial management to their framework. It’s about assigning responsibility, partnering between finance and technology, and creating a cost-aware culture.

In a survey conducted earlier this year, 451 Research found that adopting Cloud Financial Management practices doesn’t only lower IT costs. Enterprises that adopted these practices also saw benefits across the organization, such as revenue growth through increased business agility, greater operational resilience and lower risk, improved profitability, and the potential for increased staff productivity.

The benefits of Cloud Financial Management increase with cloud maturity, so it’s important to be patient with the process and remember that small changes can have huge impacts, and that the benefits grow as time goes on.

Amazon provides a few services to help manage cloud costs, such as Cost Explorer, AWS Budgets, the AWS Cost and Usage Report (CUR), Reserved Instance recommendations and reporting, and EC2 rightsizing recommendations. But it’s important to note that while many CFM tools are free to use, there can be labor costs associated with building their ongoing use into continuous organizational processes, so it may be in your best interest to look into a tool that can optimize costs on an ongoing basis. Ensure your people and/or tools are able to scale applications to address new demands.
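
As a concrete example of putting these services to work programmatically, here is a minimal boto3 sketch against the Cost Explorer API. The dates are placeholders; note that Cost Explorer must be enabled for the account, and each API request carries a small charge:

```python
# A minimal sketch: pull one month's cost by service via the Cost Explorer API.
import boto3

ce = boto3.client("ce", region_name="us-east-1")  # Cost Explorer's API region

resp = ce.get_cost_and_usage(
    TimePeriod={"Start": "2020-08-01", "End": "2020-09-01"},  # placeholder dates
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],
)

for group in resp["ResultsByTime"][0]["Groups"]:
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{group['Keys'][0]}: ${amount:,.2f}")
```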

By using the framework to evaluate and implement your cloud financial management practices, you’ll not only achieve cost savings, but more importantly, you’ll see business value increase across operational resilience, staff productivity and business agility.

Public Cloud Adoption Statistics & Market Shares through the PMC Lens

While we monitor the market as a whole, cloud adoption statistics are indicators of market share among the large providers. The ParkMyCloud platform sees a very large volume of data flow through it each day, month, and quarter, which affords an interesting and helpful perspective on our users. When that data also lets you examine their usage of third-party services, it can be downright enlightening.

For this post, rather than examining the granular detail of specific user preferences for certain products and services, I thought it might be interesting to roll the numbers up and see what they say about the world of public cloud as a whole. In particular, given that we are now at the end of earnings season (see our recent post here), we thought it would be interesting to compare the earnings reports with what we see in our customer base. While we obviously have only a tiny percentage of the overall user base of public cloud, in recent years I have increasingly come to believe that it is fairly representative of the overall market. In fact, I would go as far as to say that what we observe in our data often appears in public announcements some months later. Big trends, such as increased usage of very short-lived instances (especially for data analytics workloads) or increased use of custom instances, have caught our eye only to be affirmed more broadly by the market.

Some of the things we look at each quarter include:

  • Relative changes in the proportions of customers exclusively using one of our supported cloud providers;
  • Relative changes in the proportion of customers using multiple clouds (typically two or three different providers); and
  • The total number of accounts each customer has with each provider.

Over the last year or so, it has been interesting to see the shifts in the first of these measures: customers making exclusive use of a single cloud. Putting aside the obvious caveat that customers could have other cloud accounts not brought into the PMC platform, we have observed that some 95% of users are exclusively using a single cloud. There have been some shifts in the relative proportions using AWS and Azure, AWS and GCP, or Azure and GCP together, but those numbers are so small compared to those using a provider exclusively that it is hard to draw any strong conclusions.

Figure 1: Changes in Customers Making Exclusive Use of Cloud Provider (Source: PMC).

However, once again putting the overall representativeness of the PMC user base to one side, we can without doubt see some meaningful changes over the last six quarters in the clouds our customers use exclusively. Figure 1 above shows the relative changes over the last 18 months, with Q6 being March-May 2020. It is therefore likely that we picked up some of the COVID-19-related shifts, and we will see more in the coming quarters.

To show these changes, I have rebased the data (Q0) and then looked at the relative changes over the period. For example, you can see that exclusive Azure users grew their footprint amongst our customer base by some 9.2% by Q5 and ended the period up 6.2%. There is a clear upward trendline for Azure over these last six quarters, versus AWS and GCP, which show a flat to slightly downward trajectory. As mentioned above, the proportion using multiple clouds has stayed fairly static.
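
For readers unfamiliar with rebasing, here is a minimal sketch of the calculation: each quarter is expressed as a percentage change from the starting quarter (Q0). The series below is illustrative; only the +9.2% and +6.2% endpoints come from our data:

```python
# Rebase a quarterly series against its starting value (Q0).
# Illustrative numbers; only the Q5/Q6 endpoints match the figures quoted above.
azure_exclusive = [100.0, 101.5, 103.0, 105.8, 107.1, 109.2, 106.2]  # Q0..Q6

base = azure_exclusive[0]
rebased = [(value / base - 1) * 100 for value in azure_exclusive]
for quarter, change in enumerate(rebased):
    print(f"Q{quarter}: {change:+.1f}%")  # e.g. Q5: +9.2%, Q6: +6.2%
```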

It will be interesting to see how these cloud adoption trends play out. Based upon what we are hearing anecdotally from our customers, there has been a lot of growth in the market for virtual desktop infrastructure (VDI) in the move to remote working, and Azure has been the largest beneficiary of that shift. With employers increasingly alerting staff to the possible realities of home working throughout the winter months, we think it likely that the trend continues.

Figure 2: Cloud Revenue Growth: AWS, Azure and GCP (Source: Venturebeat)

What we do know from the earnings numbers is that cloud revenue growth is slowing for all three providers, with the steepest decline reported by AWS (although actual earned revenue is still increasing for all three). In such an environment, competition for market share is likely to get even more intense, and these shifts will become even more important to track.

Why Kubernetes If It Makes Your Life Worse?

The bleeding-edge tech community is full of fast-moving recommendations telling you why Kubernetes, blockchain, serverless, or the latest Javascript library is the pinnacle of technology and you are going to be left behind if you don’t drop what you’re doing RIGHT NOW and fully convert all of your applications. Inevitably, after the initial rush of posts expounding upon the life-changing benefits of these hot new technologies, eager followers will wake up to the fact that perhaps the new fad isn’t a magic bullet after all. This is how new tech holy wars begin, with both sides yelling that the other side is a member of a cult or trying to sell you something. So how do you weed through the noise and decide which technologies might actually improve your operations? And: is Kubernetes one of them?

Kubernetes – The Latest Holy War

Kubernetes v1.0 was released in 2015, a year after Google announced the project, which drew on Google’s years of experience running containers internally on its Borg system. It quickly emerged as the most popular way to manage and orchestrate large numbers of containers, despite competition from Docker Swarm, Apache Mesos, HashiCorp’s Nomad, and various other software from the usual suspects at IBM, HP, and Microsoft. Google partnered with the Linux Foundation in 2015 to form the Cloud Native Computing Foundation (CNCF), with Kubernetes as one of the seed technologies.

This public dedication to open-source technology gave Kubernetes instant nerd bonus points, and its pedigree at one of the largest software companies in the world made it the hot new thing. Soon, there were thousands of articles, tweets, blog posts, and conference talks about moving to a microservices architecture built on containers, with Kubernetes managing the pods and services. It wasn’t long before misguided job postings were looking for 10+ years of Kubernetes experience, and every startup had pivoted overnight from a blockchain-based app to a Kubernetes-based one.

As with any tech fad, the counterarguments started to mount as new fads rose up. Serverless became the new thing, but those who had put their eggs in the Kubernetes basket resisted the shift from containers to functions. Zealots on the serverless side argued that you could ditch your entire code base and move to Lambda or Azure Functions, while Kubernetes fanatics said you didn’t need functions-as-a-service if you just packaged your entire OS into a container and spun up a million of them. So, do you need Kubernetes?

You’re (Probably) Not Google

Here’s the big thing that gets missed when a huge company open-sources their internal tooling: you’re most likely not at their scale. You don’t have the same resources or the same problems as that huge company. Sure, you are working your hardest to make your company so big that you have the same scaling problems as Google, but you’re probably not there yet. Don’t get me wrong: I love when large enterprises open-source some of their internal tooling (such as Netflix or Amazon), as it’s beneficial to the open-source community and it’s a great learning opportunity, but I have to remind myself that they are solving a fundamentally different problem than I am.

While I’m not suggesting that you avoid planning ahead for scalability, getting something like Kubernetes set up and configured instead of developing your main business application can waste valuable time and funds. There’s a lot of overhead with learning, deploying, and managing Kubernetes that companies like Google can afford. If you can get the same effect from an autoscaling group of VMs with less headache, why wouldn’t you go that route? Remember: something like 60% of global AWS spend is on EC2, and with good reason. You can get surprisingly far using tried-and-true technologies and basics without having to rip everything out and implement the latest fad, which is why Kubernetes (or serverless, or blockchain, or multi-cloud…) shouldn’t be your focus. Kubernetes certainly has its place, and can be the tool you need. But most likely, it’s making things more complex for you without a corresponding benefit.