$12.9 Billion in wasted cloud spend this year.

Wake up and smell the wasted cloud spend. The cloud shift is not exactly a shift anymore; it's a full-blown transition. It's less of a "disruption" to the IT market and more of an expectation. And with enterprises following a visible path to the cloud, it's clear that their IT spend is headed in the same direction: up.

Enterprises have a unique advantage as their cloud usage continues to grow and evolve: visibility. Seeing where IT spend is going is a great opportunity to optimize resources, and one of the best ways to seize it is by identifying and preventing cloud waste.

So, how much cloud waste is out there and how big is the problem? What difference does this make to the enterprises adopting cloud services at an ever-growing rate? Let’s take a look.

The State of the Cloud Market in 2018

The numbers don’t lie. For a real sense of how much wasted cloud spend there is, the first step is to look at how much money enterprises are spending in this space at an aggregate level.

Gartner’s latest IT spending forecast predicts that worldwide IT spending will reach $3.7 trillion in 2018, up 4.5 percent from 2017. Of that number, the portion spent in the public cloud market is expected to reach $305.8 billion in 2018, up $45.6 billion from 2017.

The last time we examined the numbers back in 2016, the global public cloud market was sitting at around $200 billion, and Gartner had predicted that the cloud shift would affect $1 trillion in IT spending by 2020. With an updated forecast and more than $100 billion later, growth could very well exceed those predictions.

The global cloud market and the portion attributed to public cloud spend are what give us the ‘big picture’ of the cloud shift, and it just keeps growing, and growing, and growing. You get the idea. To start understanding wasted cloud spend at an organizational level, let’s break this down further by looking at an area that Gartner says is driving a lot of this growth: infrastructure as a service (IaaS).

Wasted Cloud Spend in IaaS

As enterprises increasingly turn to cloud service providers like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) to provide compute resources for hosting components of their infrastructures, IaaS plays a significant role in both cloud spend and cloud waste.

Of the forecasted $305.8 billion public cloud market for 2018, $45.8 billion will be spent on IaaS, two-thirds of which goes directly to compute resources. This is where we get into the waste part:

  • 44% of compute resources are used for non-production purposes (i.e. development, staging, testing, QA)
  • The majority of servers used for these functions only need to run during the typical 40-hour work week (Monday through Friday, 9 to 5) and do not need to run 24/7
  • Cloud service providers are still charging you by the hour (or minute, or even by the second) for providing compute resources

The bottom line: for the other 128 hours of the week (or 7,680 minutes, or 460,800 seconds), you're getting charged for resources you're not even using. And that's a large share of your waste!
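To put rough numbers on that, here's a back-of-the-envelope sketch in Python (the $0.10/hour rate is just an assumed example for illustration, not a quote from any provider's price list):

    # Back-of-the-envelope math for parking non-production instances:
    # run them only during the 40-hour work week instead of 24/7.
    HOURS_PER_WEEK = 7 * 24                      # 168
    WORK_HOURS = 40                              # Monday-Friday, 9 to 5

    idle_hours = HOURS_PER_WEEK - WORK_HOURS     # 128 hours per week
    idle_fraction = idle_hours / HOURS_PER_WEEK  # ~76% of the week

    hourly_rate = 0.10                           # assumed rate, for illustration only
    monthly_24x7 = hourly_rate * HOURS_PER_WEEK * 52 / 12    # ~$72.80
    monthly_parked = hourly_rate * WORK_HOURS * 52 / 12      # ~$17.33

    print(f"Idle {idle_hours} hours/week -> up to {idle_fraction:.0%} of the bill is waste")
    print(f"24/7: ${monthly_24x7:.2f}/month vs. parked: ${monthly_parked:.2f}/month")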

What You Can Do to Prevent Wasted Cloud Spend

Turn off your cloud resources.

The easiest and fastest way to save money on your idle cloud resources is simply not to use them. In other words, turn them off. If you think of the cloud as a utility like electricity, it's as simple as turning off the lights every night and whenever you're not at home. With ParkMyCloud you can automatically schedule your cloud resources to turn off when you don't need them, like nights and weekends, and eliminate 65% or more of your monthly bill with AWS, Azure, and Google. Wham, bam.

Turn on your SmartParking.

You already know that you don’t need your servers to be on during nights and weekends, so you shut them off. That’s great, but what if you could save even more with valuable insight and information about your exact usage over time?

With ParkMyCloud's new SmartParking feature, the platform tracks your utilization data, looks for patterns, and creates recommended schedules for each instance, allowing you to turn them off when they're typically idle.

There’s a lot of cloud waste out there, but there’s also something you can do about it: try ParkMyCloud today.

Yeah, Yeah, Yeah we Park %$#@, but what really matters to Enterprises? – Frequently Asked Questions


Here at ParkMyCloud we get to do product demos for a lot of great companies all over the world, from startups to Fortune 500s, across many industries – software, IT, financial, media, food and beverage, and more. When we talk to industry analysts and venture capitalists, they always ask about vertical selling and the like; we used to do this back at Micromuse, where we had Federal, Enterprise, Service Provider, and SMB sales teams, for example. But here at ParkMyCloud we notice that the questions from enterprises are generally vertical-agnostic, and since cloud is the great IT equalizer in my book, we decided to summarize the 8 most frequently asked questions we get from prospects of all shapes and sizes.

These are the more common questions we get beyond turning cloud resources off / on:

How does ParkMyCloud handle system patching?

Answer: The most common way of dealing with patching is to use our API. The workflow would be to log in through the API, get a list of the resources, choose the ones you want, and "snooze" their schedules (a temporary override of the schedule, if you haven't played with that yet) for a couple of hours, or however long the patching takes. Once the schedule is snoozed, you can toggle the instance on and do the patching. After the patching is complete, you can either cancel the snooze to go back to the original schedule or wait for the snooze to time out.

If your patching is done on a weekly basis, you could also build the patch times into the schedules so the instances turn on at, say, 3 a.m. on Sunday.
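For illustration only, here is a minimal sketch of that snooze-and-patch workflow in Python. The endpoint paths, field names, and auth scheme are hypothetical placeholders, not the documented ParkMyCloud API – check the actual API documentation for the real routes:

    import requests

    BASE = "https://console.parkmycloud.com/api"       # hypothetical base URL
    HEADERS = {"Authorization": "Bearer <API_KEY>"}    # auth scheme assumed

    # 1. Get the list of resources and pick the ones to patch.
    resources = requests.get(f"{BASE}/resources", headers=HEADERS).json()
    targets = [r for r in resources if r["name"].startswith("web-")]

    for r in targets:
        # 2. Snooze the schedule (temporary override) for 2 hours.
        requests.post(f"{BASE}/resources/{r['id']}/snooze",
                      json={"hours": 2}, headers=HEADERS)
        # 3. Toggle the instance on so it can be patched.
        requests.post(f"{BASE}/resources/{r['id']}/start", headers=HEADERS)

    # ... run the patching itself here ...

    for r in targets:
        # 4. Cancel the snooze to return to the original schedule
        #    (or simply let the snooze time out on its own).
        requests.delete(f"{BASE}/resources/{r['id']}/snooze", headers=HEADERS)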

How do I start and stop instances in a sequential order?

Answer: ParkMyCloud has a feature we call 'Logical Groups': you group cloud resources into a group or cluster within the platform and assign the order in which they stop and start. You can also set how long to wait before resource 1 starts/stops, then resource 2, and so forth. This way your web server can stop first and the database second, so all the connections close properly. This feature is very popular, and we have had many requests to fully automate it using our policy engine and tags – a work in progress that will be way cool.
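As a toy illustration of the ordering concept (not the actual Logical Groups implementation – the group contents, delays, and stop call are made up):

    import time

    # Stop the web server first, then the database, with a drain delay in
    # between so connections close properly; starting runs in reverse order.
    stop_order = [
        ("web-server", 0),     # stop immediately
        ("database", 120),     # wait 120 seconds for connections to drain
    ]

    def stop_resource(name):
        print(f"stopping {name}")   # stand-in for a real cloud API call

    for name, delay_seconds in stop_order:
        time.sleep(delay_seconds)
        stop_resource(name)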

My developers hate UIs – how do they manage the schedules without using your UI?

Answer: Yes, this is an easy one, but it always gets asked. If you are anti-UI or just don't want to use yet another UI, you can manage your resources in ParkMyCloud through other channels, such as the API described above.

Can I govern user access and permissions?

Answer: Yes, we support Single Sign-On (SSO) and have a full Role-Based Access Control (RBAC) model in the platform that allows you to import users, add them to teams, and assign them roles. The common scenario is 'I only want my SAP QA team to have access to the cloud resources they need for that project and nothing else, with limited permissions' – handled.

Can I automatically assign schedules based on tags?

Answer: Yes, and in general this is what most companies do with ParkMyCloud. We have a Policy Engine where you can create policies that fully automate your cloud resource scheduling. The policy reads the AWS, Azure, or Google Cloud metadata brought into the platform, and based on those tags (or other data like resource name, size, region, etc.) and the corresponding policy, we automatically assign schedules to cloud resources. We take that a step further: those resources can also be automatically assigned to Teams and Users based on their roles (see RBAC above).
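Here's a minimal sketch of the tag-to-schedule idea (the tag key, values, and schedule names are made up for illustration; the real Policy Engine is configured in the platform, not in code):

    # Map an "environment" tag to a parking schedule; production is never parked.
    POLICIES = {
        "dev":  "weekdays-9-to-5",   # parked nights and weekends
        "test": "weekdays-9-to-5",
        "prod": None,                # no schedule assigned
    }

    def schedule_for(resource):
        env = resource.get("tags", {}).get("environment", "").lower()
        return POLICIES.get(env)

    resource = {"name": "build-agent-3", "tags": {"environment": "dev"}}
    print(schedule_for(resource))    # -> "weekdays-9-to-5"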

You can only park stuff based on tags? That’s so weak!

Answer: Not so fast, my friend … I must admit we sort of threw this one in there, but it does come up quite often, and we recently solved it with our release of SmartParking, which lets you bring in metric data, trend it for a period of time, and then automatically create schedules based on those usage patterns – cool stuff.

Can we pick which instances we bring into ParkMyCloud?

Answer: Sort of. Through their APIs, the cloud providers don't let you choose which cloud resources in an account come into the platform: if you link a cloud account to ParkMyCloud, all the cloud resources in that account will populate (assuming our API supports those resources and the cloud provider allows you to 'park' them). But we do let you choose which accounts you bring into ParkMyCloud, so link as many or as few accounts as you wish. By the way, AWS recommends creating accounts based on function – Production, Dev, Test, QA, etc. – and then breaking that down even more granularly into Dev 1, Dev 2, Dev 3, and so on. This is ideal for ParkMyCloud.

Where is ParkMyCloud located?

Answer: Northern Virginia, of course – in Sterling, at Terminal 68 to be precise. It's a co-working space we share with several other startups. We would be remiss not to mention that this area is also one of the finalist locations for Amazon's HQ2 – it's a hotbed of cloud and data center activity.

We hope this was helpful, and we would value your feedback on the 8 most frequently asked questions we get – whether yours are the same or different, or, our favorite: "Have you thought of XYZ as a feature?" Let us know at info@parkmycloud.com.

The Cost of Cloud Computing Is, in Fact, Dropping Dramatically


You might read the headline statement that the cost of cloud computing is dropping and say, "Well, duh!" Or maybe you're on the other side of the fence. A coworker recently referred me to a very interesting blog on the Kapwing site stating that cloud costs aren't actually dropping dramatically. The author defines "dramatically" based on the targets set by Moore's Law or the more recently proposed Bezos' Law, which states that "a unit of [cloud] computing power price is reduced by 50 percent approximately every three years." The blog focused on the cost of the Google Cloud Platform (GCP) n1-standard-8 machine type, and illustrated historical data for the Iowa region:

Date            n1-standard-8 cost per hour
January 2016    $0.40
January 2017    $0.40
January 2018    $0.38

The Kapwing blog also illustrates that the GCP storage and network egress costs have not changed at all in three years. These figures certainly add up to a conclusion that Bezos’ Law is not working…at least not for GCP.

Whose law is it anyway?

If we turn this around and try to apply Bezos’ Law to, well, Bezos’ Cloud we see a somewhat different story.

The approach to measuring AWS pricing changes needs to be a bit more systematic than for GCP, as the AWS instance types have evolved quite a bit over their history. This evolution is shown by the digit that follows the first character in the instance type, indicating the version or generation number of the given instance type – for example, m1.large vs. m5.large. These are similar virtual machines in terms of specifications, with 2 vCPUs and about 8GB RAM, but the m1.large was released in October 2007 and the m5.large in November 2017. While the "1" in the GCP n1-standard-8 could also be a version number, it is the only version I can see going back to at least 2013. For AWS, changes in these generation numbers happen more frequently and likely reflect new generations of the underlying hardware on which the instances run.

Show me the data!

In any event, when we use the Internet Archive to look at pricing changes of the specific instance type, as well as of the instance type "family" as it evolves, we see the following (all prices are USD cost per hour for Linux on-demand in the us-east-1 region, from the earliest available archived month of data for the quoted year):

Year    m1.large   m3.large   m4.large   m5.large   Reduction from previous year/generation   3-year reduction
2008    $0.40
2009    $0.40                                        0%
2010    $0.34                                       -18%
2011    $0.34                                        0%                                       -18%
2012    $0.32                                       -6%                                       -25%
2013    $0.26                                       -23%                                      -31%
2014    $0.24      $0.23                            -13%                                      -46%
2015    $0.175     $0.14                            -64%                                      -103%
2016    $0.175     $0.133     $0.120                -17%                                      -80%
2017    $0.175     $0.133     $0.108                -11%                                      -113%
2018*   $0.175     $0.133     $0.100     $0.096     -13%                                      -46%

*Latest Internet Archive data from Dec 2017 but confirmed to match current Jan 2018 AWS pricing.

FWIW: The second-generation m2.large instance type was skipped, though in October 2012 AWS released the "Second Generation Standard" instances for Extra Large and Double Extra Large – along with about an 18% price reduction for the first generation.

To confirm that we can safely compare these prices, we need to look at how the mX.large family has evolved over the years:

Instance type                                       Specifications
m1.large (originally the "Standard Large" type)     2 vCPU w/ECU of 4, 7.5GB RAM
m3.large                                            2 vCPU w/ECU of 6.5, 7.5GB RAM
m4.large                                            2 vCPU w/ECU of 6.5, 8GB RAM
m5.large                                            2 vCPU w/ECU of 10, 8GB RAM

A couple of notes on this:

  • ECU is "Elastic Compute Unit" – a standardized measure AWS uses to support comparison between CPUs on different instance types. At one point, 1 ECU was defined as the compute power of a 1GHz CPU circa 2007.
  • I realize that the AWS mX.large family is not equivalent to the GCP n1-standard-8 machine type mentioned earlier, but I was looking for an AWS machine type family with a long history and a fairly consistent configuration (and this is not intended to be a GCP vs. AWS cost comparison).

The drop in the cost of cloud computing looks kinda dramatic to me…

The average of the 3-year reduction figures is -58%, so Bezos' Law (a 50 percent reduction every three years) is looking pretty good. (And there is probably an interesting grad-student dissertation somewhere about how serverless technologies fit into Bezos' Law…) When you factor in the m1.large ECU of 4 versus the m5.large ECU of 10, more than doubling the net computing power, one could easily argue that Bezos' Law significantly understates the situation. Overall, the trend here is not just significantly declining prices but also greatly increased capability (higher ECU and more RAM), reflecting an increased value to the customer.
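For the curious, that average comes straight from the 3-year reduction column in the table above:

    # Average of the 3-year reduction figures from the pricing table above.
    three_year_reductions = [-18, -25, -31, -46, -103, -80, -113, -46]
    average = sum(three_year_reductions) / len(three_year_reductions)
    print(f"average 3-year reduction: {average:.0f}%")   # -> -58%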

So why has the pricing of the older m1 and m3 generations gone flat while remaining so much more expensive? On the one hand, one could imagine that the older generations of underlying hardware consume more rack space and power, and thus cost Amazon more to operate. On the other hand, that hardware cost has LONG since been amortized, so maybe they could drop the prices. The reality is probably somewhere in between: Amazon is likely trying to motivate customers to migrate to newer hardware, allowing them to eventually retire the old hardware and reuse the rack space.

Intergenerational Rightsizing

There is definite motivation here to do a lateral, intergenerational "rightsizing" move. We most commonly think of rightsizing as moving an over-powered/under-utilized virtual machine from one instance size to another, like m5.xlarge to m5.large, but intergenerational rightsizing can add up to some serious savings very quickly. For example, an older m3.large instance could be moved to an m5.large instance in about a minute or less (I just did it in 55 seconds: Stop Instance, Change Instance Type, Start Instance), immediately saving 39%. This can frequently be done without any impact to the underlying OS – I essentially just pulled out my old CPU and RAM chips and dropped in new ones. Note that it is not necessarily this easy for all instance types: some older AMIs can break the transition to a newer instance type because of network or other drivers, but it is worth a shot, and the AWS Console should let you know if the transition is not supported (of course, as always, make a snapshot first!).
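If you want to script that stop/change-type/start sequence, here is a minimal boto3 sketch (the instance ID is a placeholder; as noted above, snapshot first and verify the new type is supported for your AMI):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    instance_id = "i-0123456789abcdef0"   # placeholder instance ID

    # 1. Stop the instance (the type can only be changed while it is stopped).
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

    # 2. Change the instance type, e.g. from m3.large to m5.large.
    ec2.modify_instance_attribute(InstanceId=instance_id,
                                  InstanceType={"Value": "m5.large"})

    # 3. Start it back up on the newer-generation hardware.
    ec2.start_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])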

Conclusion

For the full view of cloud compute cost trends, we need to look at both the cost of specific instance types, and the continually evolving generations of that instance type. When we do this, we can see that the cost of cloud computing is, in fact, dropping dramatically…at least on AWS.

Why Serverless Computing Will Be Bigger Than Containers


One of the more popular trends in public cloud adoption is the use of serverless computing in AWS, Microsoft Azure, and Google Cloud. All of the major public cloud vendors offer serverless computing options, including databases, functions/scripts, load balancers, and more. When designing new or updated applications, many developers are looking at serverless components as an option. This new craze is coming at a time when the last big thing, containers, is still around and a topic of conversation. So, when users are starting up new projects or streamlining applications, will they stick with traditional virtual machines or go with a new paradigm? And out of all these buzzy trends, will anything come out on top and endure?

Virtual Machines: The Status Quo

The “traditional” approach to deployment of an application is to use a fleet of virtual machines running software on your favorite operating system. This approach is what most deployments have been like for 20 years, which means that there are countless resources available for installation, management, and upkeep. However, that also means you and your team have to spend the time and energy to install, manage, and keep that fleet going. You also have to plan for things like high availability, load balancing, and upgrades, as well as decide if these VMs are going to be on-prem or in the cloud. I don’t see the use of virtual machines declining anytime soon, but there are better options for some use cases.

Containers: The New Hotness, But Too Complex to be Useful

Containerization involves isolating an application by making it think it’s the only application on a server, with only the hardware available that you allow. Containers can divide up a virtual machine in a similar way that virtual machines can divide up a physical server. This idea has been around since the early 1980s, but has really started to pick up steam due to the release of Docker in 2013. The main benefits of containerization are the ability to maximize the utilization of physical hardware while deploying pieces of a microservices architecture that can easily run on any OS.

This sounds great in theory, but there are a couple of downsides to this approach. The primary problem is the additional operational complexity: you still have to manage the physical hardware and the virtual machines, along with the container orchestration, without much of a performance boost. The added complexity, without removing any of your current orchestration, means that you now have to think about more, not less. You also need to build in redundancy, train your users and developers, and ensure communication between pieces on top of your existing physical and virtual infrastructure.

Speaking of container orchestration, the other main downside is the multitude of options surrounding containers and their management, as there's no one clear choice of what to use (and it's hard to tell whether any of the existing ones will just go away one day and leave you with a mess). Kubernetes seems to be the front runner in this area, but Apache Mesos and Docker Swarm are big players as well. Which do you choose, and do you force all users and teams to use the same one? What if the company that manages those tools makes a change you didn't plan for? There are a lot of questions and unknowns, on top of having to make a choice that could have ramifications for years to come.

Serverless Computing: Less Setup, More Functionality

When users or developers are working on a project that involves a database and some Python scripts, they just want the database and the scripts, not a server that runs database software and a server that runs scripts. The main idea behind serverless architecture is to eliminate all the overhead that comes along with those requests for specific software. This is a big benefit to those who just want to get something up and running without installing operating systems, tweaking configuration files, and worrying about redundancy and uptime.

This isn't all sunshine and rainbows, however. One of the big downsides to serverless comes hand-in-hand with that reduced complexity: you also typically have reduced customization. Running an older database version or a long-running Python function might not be possible using serverless services. Another downside is that you are typically locked into a vendor once you start developing your applications around serverless architecture, as the APIs are often vendor-specific.

That being said, the reduced complexity is a big deal for users who want things to "just work". Dealing with fewer headaches and less management so they can get creative and deploy some cool applications is one of the main goals of folks who are trying to push the boundaries of what's possible. If Amazon, Microsoft, or Google want to handle database patching and Python versioning so you don't have to, then let them deal with it and move on to the fun stuff!

Here at ParkMyCloud, we're using a mix of serverless and traditional virtual machines to maximize the benefits and minimize the overhead for what we do. By using serverless where it makes sense, without forcing a square peg into a round hole, we can run virtual machines for the code we've already written while using serverless architecture for things like databases, load balancing, and email messages. We're starting to see more customers going with this approach as well, who then use ParkMyCloud to keep the costs of their virtual machines low when they aren't in use. (If you'd like to do the same, check out a trial of ParkMyCloud to get your hybrid infrastructure optimized.)

When it comes to development and operations, there are numerous decisions to make, all with pros and cons. Serverless architecture is the newest deployment option, and it clearly reduces complexity and takes care of things that might otherwise give you headaches. The reduced portability is something that containers handle really well, but they bring more complexity in deployment and ongoing management. Software installed on virtual machines is a tried-and-true method, but it means you are doing a lot of the work yourself. It's the fact that serverless computing is so simple to implement that makes it more than a trend: this is a paradigm that will endure, where containers won't.

ParkMyCloud Reviews – Customer Video Testimonials


A few weeks ago at the 2017 AWS re:Invent conference in Las Vegas, we had the opportunity to meet some of our customers at the booth and get their product feedback, and a few of them shared ParkMyCloud reviews as video testimonials. As part of our ongoing efforts to save money on cloud costs with a fully automated, simple-to-use SaaS platform, we rely on our customers to give us insight into how ParkMyCloud has helped them. Here's what they had to say:

TJ McAteer, ProSight Specialty Insurance

“It’s all very well documented. We got it set up within an afternoon with our trial, and then it was very easy to differentiate and show that value – and that’s really the most attractive piece of it.”

As the person responsible for running the cloud engineering infrastructure at ProSight Specialty Insurance, TJ found that ParkMyCloud had everything he was looking for. Not only that, it was easy to use, well managed, and demonstrated its value right away.

James LaRocque, Decision Resources Group

“What’s nice about it is the ability to track financials of what you’re actually saving, and open it up to different team members to be able to suspend it from the parked schedules and turn it back on when needed.”

As a Senior DevOps engineer at Decision Resources Group, James LaRocque discovered ParkMyCloud at the 2016 AWS re:Invent and has been a customer ever since. He noted that while he could have gone with scripting, ParkMyCloud offered the increased benefits of financial tracking and user capabilities.

“The return on investment is huge.”

Kurt Brochu, Sysco Foods

“We had instant gratification as soon as we enabled it.”

Kurt Brochu, Senior Manager of the Cloud Enablement Team at Sysco Foods, was immediately pleased to see ParkMyCloud saving money on cloud costs as soon as they put it into action. Once he was able to see how much they could save on their monthly cloud bill, the next step was simple.   

“We were able to save over $500 in monthly spend by just using it against one team. We are rolling out to 14 other teams over the course of the next 2 weeks.”

Mark Graff, Dolby Labs

“The main reason why we went for it was that it was easy to give our users the ability to start and stop instances without having to give them access to the console.”

Mark Graff, the Senior Infrastructure Manager at Dolby Labs, became a ParkMyCloud customer thanks to one of his engineers in Europe.

“We just give them credentials, they can hop into ParkMyCloud and go to start and stop instances. You don’t have to have any user permissions in AWS – that was a big win for us.”


We continue to innovate and improve our platform's cloud cost management capabilities with the addition of SmartParking recommendations, SmartSizing, Alicloud support, and more. Customer feedback is essential to making sure we're saving our customers time and money, and it gives us valuable insight into what makes ParkMyCloud a great tool.

If you use our platform, we'd love to get a ParkMyCloud review from you and hear how ParkMyCloud has helped your business – there's a hoodie in it for you! Feel free to participate in the comments below or send a direct email to info@parkmycloud.com.

 
