ParkMyCloud just turned 3 years old, and from here, the future looks great. The market is growing, cloud is the norm, and cost control is always top of mind for companies big and small. In fact, over 600 enterprises in 25+ countries now use our platform to “park” idle cloud resources (including instances, databases and scale groups) in AWS, Azure, GCP and now Alibaba.
As we look to the future, we’re taking a moment to consider current cloud trends and how cost control needs are changing. To provide context, let’s take a quick look at where the market was three years ago.
The Problem that Got Us Started
When we founded the company three years ago, we set out to build a self-service SaaS platform that would allow DevOps users to automate cloud cost control and integrate it into their cloud operations. We saw the need for this platform while talking to enterprises using AWS about broader cloud management needs as a service play. They wanted a self-service, purpose-built easy button for instance scheduling that could be centrally managed and governed but left up to the end user to control – enter ParkMyCloud.
Our value proposition started simply and has stayed relatively constant: save 20% on your cloud bill in 15 minutes or less (savings average 65% per parked resource). The ease of use, verifiable ROI, and richness of our platform capabilities allow global companies like McDonald’s, Unilever, Sysco, Sage and many others to adopt ParkMyCloud on their own, with no services, and begin to automate their cloud cost control in minutes – not days or weeks.
I went back and looked at our pre-launch pitch decks. At that time, the cloud Infrastructure-as-a-Service (IaaS) market was $10B or so and dominated by AWS, with others like Rackspace and HP in the game alongside the usual suspects. Today, Gartner estimates enterprises will spend $41B on IaaS in 2018, and the market is still dominated by AWS, but the number of players is really down to 4 to 6, depending on where you want to put IBM and Oracle.
But the cloud waste problem is still prominent and growing. Most analysts and industry pundits estimate that 25% or more of your bill is wasted on unused, idle, or overprovisioned resources – based on the 2018 IaaS predictions, that equates to $10B+ being wasted. That’s a BIG nut: break it down and it comes to more than $1MM in wasted cloud spend every hour. And it matters. Most enterprises rank cloud security/governance and cost management as their primary concerns with cloud adoption.
Cloud Trends Driving the Market
So how are things changing? We see three key trends that will drive our company and platform vision over the next 3 years:
- Multi-cloud – it’s been long discussed, but it’s now a reality: 20% of the enterprises using ParkMyCloud manage 2 or more cloud service providers (CSPs) in the platform, and that number is growing. As always, cost control is an important factor in a multi-cloud strategy.
- PaaS – Platform as a Service (PaaS) use is growing, so users are looking to optimize these resources. ParkMyCloud offers optimization for databases, scale groups, and logical groups. We plan to expand into containers and stacks to meet this need.
- Data-driven automation (AIOps) – our customers, large and small, are pushing us to expand our data-driven policies and automation – everyone is becoming more comfortable with the idea of automation. Our first priority on this front is to optimize overprovisioned resources – often referred to as RightSizing … RightSizeMyCloud!
Cloud trends are not always easy to predict, but one thing is for certain: costs will need to be controlled. Good fun ahead.
In our ongoing discussion on cloud waste, we recently talked about orphaned resources eating away at your cloud budget, but there’s another type of resource that’s costing you money needlessly and this one is hidden in plain sight – overprovisioned resources. When you looked at your initial budget and made your selection of cloud services, you probably had some idea of what resources you needed and in what sizes. Now that you’re well into your usage, have you taken the time to look at those metrics and analyze whether or not you’ve overprovisioned?
One of the easiest ways to waste money is by paying for more than you need and not realizing it. Here are 6 types of overprovisioned resources that contribute to cloud waste.
Unattached volumes
As a rule of thumb, it’s a good idea to delete volumes that are not attached to instances or VMs. Take the example of AWS EBS volumes unattached to EC2 instances – if you’re not using them, then all they’re doing is needlessly accruing charges on your monthly bill. And even if your volume is attached to an instance, it’s billed separately, so you should also make a practice of deleting volumes you no longer need (after you back up the data, of course).
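As a minimal sketch of how this check might be automated, the function below filters volume descriptions shaped like the AWS `describe_volumes` response down to those with no attachments. The boto3 call itself is shown only in a comment, since credentials, regions, and accounts vary:

```python
def find_unattached_volumes(volumes):
    """Return IDs of volumes with no attachments (EBS state 'available')."""
    return [
        v["VolumeId"]
        for v in volumes
        if v.get("State") == "available" and not v.get("Attachments")
    ]

# With boto3 (not run here), the input would come from:
#   import boto3
#   ec2 = boto3.client("ec2")
#   volumes = ec2.describe_volumes()["Volumes"]

# Example with response-shaped sample data:
sample = [
    {"VolumeId": "vol-1", "State": "in-use",
     "Attachments": [{"InstanceId": "i-123"}]},
    {"VolumeId": "vol-2", "State": "available", "Attachments": []},
]
print(find_unattached_volumes(sample))  # ['vol-2']
```

Anything this returns is a candidate for a snapshot-then-delete workflow, not an automatic deletion.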
Underutilized data warehouses
Data warehouses like Amazon Redshift, Google BigQuery, and Microsoft Azure SQL Data Warehouse were designed as a simple and cost-effective way to analyze data using standard SQL and your existing Business Intelligence (BI) tools. But to get the most cost savings benefits, you’ll want to identify any clusters that appear to be underutilized and rightsize them to lower costs on your monthly bill.
Underutilized relational databases
Relational databases such as Amazon RDS, Azure SQL, and Google Cloud SQL offer the ability to directly run and manage a relational database without managing the infrastructure that the database is running on or having to worry about patching of the database software itself.
As a best practice, Amazon recommends that you check the configuration of your RDS for any idle DB instances. You should consider a DB instance idle if it has not had a connection for a prolonged period of time, and proceed by deleting the instance to avoid unnecessary charges. If you need to keep storage for data on the instance, there are other cost-effective alternatives to deleting altogether, like taking snapshots. But remember – manual snapshots are retained, taking up storage and costing you money until you delete them.
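One way to sketch this idleness check is with the CloudWatch `DatabaseConnections` metric: treat an instance as idle if no datapoint in the lookback window shows a connection. The zero-connection threshold and the window length are illustrative assumptions, not AWS-prescribed values:

```python
def is_idle_db(datapoints, max_connections=0):
    """Treat a DB instance as idle if every CloudWatch 'DatabaseConnections'
    datapoint is at or below max_connections.
    (Threshold and lookback window are assumptions you should tune.)"""
    if not datapoints:
        return True  # no metric data at all in the window
    return all(dp["Maximum"] <= max_connections for dp in datapoints)

# With boto3 (not run here), datapoints would come from:
#   cw = boto3.client("cloudwatch")
#   cw.get_metric_statistics(
#       Namespace="AWS/RDS", MetricName="DatabaseConnections",
#       Dimensions=[{"Name": "DBInstanceIdentifier", "Value": "mydb"}],
#       StartTime=start, EndTime=end, Period=3600, Statistics=["Maximum"])

busy = [{"Maximum": 4.0}, {"Maximum": 0.0}]
quiet = [{"Maximum": 0.0}, {"Maximum": 0.0}]
print(is_idle_db(busy), is_idle_db(quiet))  # False True
```

As with volumes, an “idle” flag should trigger a snapshot-and-review step before any deletion.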
Oversized instances and VMs
We often preach about idle instances and how they waste money, but sizing your instances incorrectly is just as detrimental to your monthly bill. It’s easy to overspend on large instances or VMs that you don’t need. With any cloud service, whether it’s AWS, Azure, or GCP, you should always “rightsize” your instances and VMs by picking the instance size that is optimized for the size of your workload – be it compute optimized, memory optimized, GPU optimized, or storage optimized.
Once your instance has been running for some time, you’ll have a better idea of whether or not the chosen size is optimal. Review your usage and make cost estimates with the AWS Management Console, Amazon CloudWatch, and AWS Trusted Advisor if you’re using AWS. Azure users can review their metrics from Azure Monitor data, and Google users can import GCP metrics data for GCP virtual machines. Use this information to find underutilized resources that can be resized to better optimize costs.
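The underutilization test behind these tools can be sketched simply: average a window of CPU datapoints and flag the instance if the average falls below a threshold. The 10% default here is an illustrative choice, and the datapoint shape mirrors CloudWatch `get_metric_statistics` output with `Statistics=["Average"]`:

```python
def underutilized(cpu_datapoints, threshold=10.0):
    """Flag an instance whose average CPU over the sampled period is below
    `threshold` percent. (Threshold is an illustrative assumption.)"""
    if not cpu_datapoints:
        return False  # no data: don't flag automatically, investigate instead
    avg = sum(dp["Average"] for dp in cpu_datapoints) / len(cpu_datapoints)
    return avg < threshold

quiet_host = [{"Average": 3.2}, {"Average": 4.8}]
hot_host = [{"Average": 62.0}]
print(underutilized(quiet_host), underutilized(hot_host))  # True False
```

In practice you would look at memory, network, and disk alongside CPU before downsizing, since a low-CPU instance may still be memory-bound.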
Unused containers
Application containerization allows multiple applications to be distributed across a single host operating system without requiring their own VM, which can lead to significant cost savings. It’s possible that developers will launch multiple containers and fail to terminate them when they are no longer required, wasting money. Due to the number of containers being launched compared to VMs, it will not take long for container-related cloud waste to match that of VM-related cloud waste.
The problem with controlling cloud spend using cloud management software is that many solutions fail to identify unused containers because the solutions are host-centric rather than role-centric.
Idle hosted caching tools (Redis)
Hosted caching tools like Amazon ElastiCache offer high performance, scalable, and cost-effective caching. ElastiCache also supports Redis, an open source (BSD licensed), in-memory data structure store, used as a database, cache and message broker. While caching tools are highly useful and can save money, it’s important to identify idle cluster nodes and delete them from your account to avoid accruing charges on your monthly bill. Be cognizant of average CPU utilization and get into the practice of deleting the node if your average utilization is under designated minimum criteria that you set.
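That “designated minimum criteria” can be expressed as a small filter over per-node CPU averages. The 5% default and the node-ID-to-average mapping are assumptions for illustration; the averages themselves would come from the CloudWatch `CPUUtilization` metric for each cache node:

```python
def idle_cache_nodes(node_cpu_averages, min_cpu=5.0):
    """Given {node_id: average CPU %}, return nodes falling below the
    minimum utilization criterion you set (5% is an illustrative default)."""
    return sorted(
        node for node, cpu in node_cpu_averages.items() if cpu < min_cpu
    )

averages = {"node-a": 2.1, "node-b": 40.0, "node-c": 4.9}
print(idle_cache_nodes(averages))  # ['node-a', 'node-c']
```

Flagged nodes are candidates for deletion after you confirm nothing depends on their cached data.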
How to Combat Overprovisioned Resources (and lower your cloud costs)
Now that you have a good idea of ways you could be overprovisioning your cloud resources and needlessly running up your cloud bill – what can you do about it? The be-all and end-all answer is “be vigilant.” The only way to be sure that your resources are cost-optimal is constant monitoring of your resources and usage metrics. Luckily, optimization tools can help you identify and automate some of these best practices and do a lot of the work for you, saving time and money.
When we talk about cloud migration challenges, the conversation is usually about a company moving its workloads from an on-premises datacenter to a public cloud environment. But what about cloud to cloud migration?
The Benefits of Cloud to Cloud Migration
Why would a company go through the trouble of moving its entire infrastructure to the cloud, investing in one cloud service provider only to switch to another?
The cloud shift is no longer anything new. Companies have accepted cloud adoption and are becoming more comfortable with using cloud services. Now with AWS, Azure, and Google Cloud Platform currently leading the market (plus others growing rapidly), and constantly offering new and better options in terms of pricing and services, switching providers could prove to be fruitful.
Choosing a cloud provider to begin with is a monumental task. Businesses have to make choices regarding a number of factors – cost, reliability, security, and more. But even with all factors considered, business environments are always changing. Cost can become more or less important, your geographical region might evolve (which affects cost and availability of services), and priorities can shift to the point where another platform might be a better fit.
Perhaps your migration to AWS a few years ago was driven mainly by reliability and risk mitigation. While other providers were up and coming, you wanted to go with the gold standard. A few years later, productivity tools like Google’s G Suite became useful to your business. You now have business partners using other platforms like Azure or Google Cloud. You realize that your needs for software have changed, business partnerships have influence, and it becomes clear that another provider could be of greater benefit. Not to mention, cloud services themselves are ever-changing, and you might find better pricing, service-level agreements, scalability, and improved performance with another provider as offerings change over time.
While all of this makes sense, theoretically speaking, let’s take a look at a real example:
The Case of GitLab
A number of users were up in arms over Microsoft’s acquisition of GitHub, so much so that hundreds of thousands have already moved to another Git repository manager – GitLab. And in a twist of fate, GitLab has announced that it is swapping Microsoft Azure for another cloud provider – Google Cloud Platform.
Ask Andrew Newdigate, the Google Cloud Platform Migration Project Lead at GitLab, why they’re making the move to GCP and he’ll likely mention service performance, reliability, and something along the lines of “Kubernetes is the future.”
Kubernetes, the open source project first released by Google and designed for managing applications across multiple software containers, “makes reliability at massive scale possible.” What’s also appealing is that GitLab gets to use Google Kubernetes Engine (GKE), a service designed to simplify operating a Kubernetes cluster, as part of its cloud migration. GitLab has cited the use of GKE as another driving factor, looking to focus on “bumping up the stability and scalability of GitLab.com, by moving our worker fleet across to Kubernetes using GKE.”
Sid Sijbrandij, CEO of GitLab, adds better pricing and superior performance as reasons behind the migration. In an interview with VentureBeat, he said:
“Google as a public cloud, they have more experience than the other public cloud providers because they basically made a cloud for themselves […] And you find that in things such as networking, where their network quality is ahead of everyone else. It’s more reliable, it has less jitter, and it’s just really, really impressive how they do that, and we’re happy to start hosting Gitlab.com on that.”
The Challenges of Cloud to Cloud Migration
There’s a long list of factors that influence a company’s decision in selecting a cloud provider, and they don’t stop once you start building infrastructure in a particular cloud. Over time, other providers may prove to be better for the needs of your business. But just as there are challenges with cloud adoption in the first place, similar challenges apply when making the switch from cloud to cloud:
- Data transfer. Transferring data between different cloud service providers is a complex task, to say the least. As with data transfer from enterprise to cloud, information is transferred over the internet, but between cloud providers instead of from server to cloud. This raises the issue of transfer speed, and as a rule of thumb you should avoid transferring large chunks of data at a time. There can also be massive transfer costs for moving data out of or into a cloud.
- Potential downtime. Downtime is also a risk. It’s important to account for inconsistencies in data, examine network connections, and prepare for the real possibility of applications going down during the migration process.
- Adapting to technologies for the new cloud. You built an application for Azure, but now you’re going Google – it’s not as simple as picking it up from one platform and expecting it to run on another (and with the same benefits). Anticipate a heavy amount of time spent reconfiguring the application code to get the most out of your new platform.
- Keeping costs in check. Consider the time and costs to migrate to the cloud, which tend to be misunderstood or drastically understated. Again, the same applies for cloud to cloud migration. By now, you have a better understanding of cloud service offerings, pricing models, and the complexity of a cloud adoption budget – for the service you were using. Once again, you’ll have to evaluate all of these costs and look into options that will help you save post-migration, like optimization tools.
Cloud to Cloud Migration – Is it worth it?
Before shifting to the cloud, you probably asked yourself the same thing. And just like before, you’ll have to dive deeply into factors like costs, technologies, and risk versus reward to assess whether or not a cloud to cloud migration is the right move for your business.
At first glance, a cloud to cloud migration is just as complicated and time-consuming as moving to the cloud in the first place, and it might seem like it’s just not worth the effort. But why did you move to the cloud? If you did it to save costs over time, create better business opportunities, and improve reliability and performance – then why would you NOT go with another provider that will benefit your business more in those areas? Not to mention, the more time you spend with one provider, building more applications as you go, the harder it will be to make the switch.
So cloud to cloud migration – is it worth it? Yes – but only if you’ve considered all the factors to determine whether or not another cloud is better for your business.