July 2017 - ParkMyCloud

Cloud Nine, Ten or Eleven: What do all those cloud computing growth statistics really mean?


Photo by Abigail Keenan on Unsplash

 

Growth in the various cloud platforms has become a dinner-party conversation staple for those in the tech industry, in much the same way that house price appreciation was in the mid-2000s. It’s interesting: everyone has an opinion about cloud computing growth statistics, and it’s not entirely clear how it ends.

So let’s start with some industry projections. According to Gartner, the global infrastructure as a service (IaaS) market will grow by 39% in 2017 to reach $35 billion by the end of the year. IaaS growth shows no sign of slowing down and is expected to reach $72 billion by 2021, a CAGR of 30%. This revenue is principally split among the big four players in public cloud: Amazon Web Services (AWS), Microsoft Azure (Azure), Google Cloud Platform (GCP) and IBM.

The approximate market share of these four public cloud platforms at the end of the first quarter of 2017 can be seen in the Canalys chart below. The reason these numbers are only approximate is that each of these vendors includes (or excludes) different facets of its cloud business, and each seeks to ensure its growth remains opaque to the investor community.

However, Amazon reported its earnings in April 2017, showing revenue growing 43 percent in the quarter to $3.66 billion, an annualized run rate of some $14.6 billion. Meanwhile, Microsoft reported its cloud earnings in July 2017, with an annualized revenue run rate of just under $19 billion. However, this includes a lot more than just IaaS; once non-IaaS revenue is removed, analysts suggest the run rate is likely around $6 billion. Google’s cloud business is even harder to separate, but its cloud revenue was estimated at some $1 billion at the end of 2015, and although Google seems to have hit its stride in the last year or so, it clearly has a lot of ground to make up. Current estimates are for approximately $2.5 billion in 2017. Lastly, IBM is estimated to be of a similar size to Google but appears to have a lot less momentum than the others; certainly, based on the requests we hear from our customer base, IBM is not often, if ever, referenced.

OK, so other than guessing at the winners and losers, why does this matter? In our humble opinion, it matters because this scenario creates increased competition, and competition is good for consumers. It’s also relevant because companies have a choice, and many are looking at more than one cloud platform, even if they have not yet done anything about it. But what is really interesting, and what keeps us awake at night, is how much of this consumption is being wasted. We think and talk about this waste in terms of three buckets:

1) Always on means always paying – 44% of workloads are classified as non-production (i.e., test, development, etc.) and don’t need to run 24×7

2) Over-provisioning – 55% of all public cloud resources are not correctly sized for their workloads

3) Inventory waste – 15% of spend goes to resources that are no longer used.

Combine these three buckets and, by our reckoning, you are looking at an estimated $6 billion in wasted cloud spend in 2016, growing to $20 billion by 2020. Now that is something to really care about.
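To put the first bucket in concrete terms, here is a quick back-of-the-envelope calculation; the 12-hours-a-day, weekdays-only schedule is an illustrative assumption, not a figure from the studies above:

```python
# Bucket 1 illustration: a non-production instance parked nights and weekends.
# The 12x5 schedule below is an assumed example, not a measured statistic.
hours_running = 12 * 5   # weekday working hours only
hours_in_week = 24 * 7
savings = 1 - hours_running / hours_in_week
print(f"Parking avoids about {savings:.0%} of the always-on cost")  # ~64%
```

In other words, simply parking a non-production instance outside working hours can cut its bill by roughly two-thirds.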

Few tools exist to actively monitor and manage this waste; today there is no cloud waste management industry per se, and tech analysts currently tend to lump everything under ‘cloud management’. We think that will change in the near future as cloud cost control becomes top-of-mind and the industry is able to leverage cloud computing growth statistics to calculate the scale of this problem. If you are in the cloud, this is definitely something you should know about. Maybe you should consider optimizing your cloud spend now (before your CTO, CIO or CFO asks you to do so).


Real Time Cases – How AWS Free Credits Are Helping This Startup


We sat down with Brian Park from Real Time Cases to talk to him about his company, how he uses AWS, and AWS free credits. We found out that the AWS startup package is a crucial part of making his business run.

Can you start by telling us about Real Time Cases and what you guys do?

So Real Time Cases is an education tech startup that is a new-generation experiential learning platform. The new form of learning for today’s student is “learning by doing” and not just learning by reading antiquated textbooks. So Real Time Cases, through our partners, approaches high-level executives and asks: “if you could hire 70-80 students to solve any problem in your department, what would it be?” This forms the foundation for a “Real Time Case”. We film and document the issue, and professors can use that to drive the concepts, theories and frameworks they are trying to teach in the classroom, using current, real-life examples. Our cases are ongoing and happen “in real time”, so they are like mini projects. This also opens the door for students to pitch some of these ideas to local business executives, which is exciting.

What is your role in the company?

I am the Director of Product. We have a platform that hosts the cases; videos are the primary content, since most students would prefer to watch rather than read – think YouTube and Netflix. I am responsible for overseeing the technical team, both developers and designers. Amazon Web Services (AWS) is our cloud provider of choice, and our entire infrastructure is hosted there.

Why AWS over others?

We chose AWS because of the startup package: we get $10,000 of AWS free credits to use as we wish – compute, databases, and storage, all for free! As with any startup, we have to bootstrap operations by keeping costs as low as possible, and in addition AWS services are easy to use and access. If we had launched this company 10 years ago, we couldn’t have operated at this cost point. So the credits and service offerings were very important to getting us successfully off the ground and to market quickly and cost effectively. We have both domestic and international customers, and we can host and publish content for any university in the cloud at negligible cost, which translates into affordable price points for students; at our current cloud burn we can sustain our operations for many months to come.

What technologies do you use in AWS?

We don’t have an official DevOps team, but we use GitHub for our code repository, Jira for agile processes, and Slack for communication. These low-cost SaaS tools plus AWS have been very productive for us. We are able to push code out in either 1- or 2-week cycles depending on the size of our stories. Our output used to be a 2-week sprint, but is now a 1-week sprint due to improved tools and processes. We follow agile development practices, participate in scrums and try to utilize the latest DevOps tools. Since we have a distributed development and QA team, it’s best to use a tool like Jira to coordinate across time zones and accomplish the harder logistical tasks. We don’t have an overly complex architecture in AWS and use EC2, RDS and S3. S3 is used to store and host the video content we create for the professors and students.

 

Do you have any cost control measures in place for AWS?

Right now, no. When our AWS free credits expire we don’t expect our costs to be very high, but as a startup, being able to leverage cost control tools like ParkMyCloud to save 20-30% will be important – every dollar counts in a startup. We have been using AWS since our inception and haven’t had to move into the paid tier yet – Bezos has created a truly disruptive business model that enables the startup community to rapidly prototype and test their theses by quickly and inexpensively getting to market.


Implementing a Cloud Cost Management Tool


What do people say when they evaluate and implement a cloud cost management tool? Are they concerned with automation? Projected savings? Or are they interested in the ease of access of the product? An experience that I had when I started with ParkMyCloud shed some light on these questions for me.

One of the first tasks that I was assigned as an intern this summer at ParkMyCloud was to go through our Capterra reviews and pull out some compelling customer quotes that helped answer those questions. What I found interesting in reading the quotes is that what’s important to you depends on the size and type of company, your role in that company, and the outcome you’re looking for. We went back and talked to several of the people who left reviews:

One customer was excited to see how easy it was to start saving with ParkMyCloud:

“Try it out as soon as you can if you’re running on AWS and watch the savings add up!”

-John K. Manager of Solutions Analytics

He even followed up with the long-term savings he was able to get:

“ParkMyCloud is an excellent service that allows us to easily manage our AWS instances so that we’re only paying for our AWS instances when we’re actively using them. We were able to save almost 50% off of our monthly bill after about only 20 minutes of setup!”

-John K. Manager of Solutions Analytics

Other customers were excited about the usability of ParkMyCloud. They viewed it as incredibly important that just about anyone can use the product for cloud cost management – you don’t have to be an IT pro. In fact, it usually takes only around 15 minutes to get going with ParkMyCloud!

“As a tool that you can give to ANYONE in your organization, and have them be responsible for their own AWS costs, it is certainly unmatched. I’ve given it to execs who had no technical ability at all, and told them “here you go – you can only control your specific servers, design a power schedule that works for you”, and they’ve done it with zero assistance.”

-Reed S.

Our role-based access controls, which allow multiple members of a team, or different teams, to dictate their own schedules, were worth mentioning for some of our reviewers:

“The ability to distribute rights to groups has made the ability for our teams to take advantage of individual application sleep schedules.”

-Edward P. Software Engineer

So what do people say when they are implementing a cloud cost management tool? Every CFO says that it needs to happen today, because those cloud bills aren’t getting any smaller. Every manager says that the tool needs to make it easy to implement governance on a per-team basis. Every developer says they need something that works right out of the box without getting in their way. Whatever your role might be, ParkMyCloud will have you saying “It’s about time!” Try it out for free today!


AWS vs Google Cloud Pricing – A Comprehensive Look


Back in May 2017 I wrote a very popular blog post, “Cutting Through the AWS and Azure Cloud Pricing Confusion.”

Since ParkMyCloud also provides cost control for Google Cloud Platform (GCP) resources, I thought it might be useful to compare AWS vs Google Cloud pricing. In addition, I will take a look at the terminology and billing differences. NOTE: there are other “services” involved, such as networking, storage and load balancing, when looking at your overall bill. I am going to focus mainly on compute charges in this article.

AWS and GCP Terminology Differences

As mentioned before, in AWS the compute service is called “Elastic Compute Cloud” (EC2). The virtual servers are called “instances”.

In GCP, the service is referred to as “Google Compute Engine” (GCE). The servers are also called “instances”. However, in GCP there are “preemptible” and non-preemptible instances. Non-preemptible instances are the same as AWS “on demand” instances.

Preemptible instances are similar to AWS “spot” instances, in that they are a lot less expensive, but can be preempted with little or no notice. The difference is that GCP preemptible instances can actually be stopped without being terminated. That is not true for AWS spot instances.

Flocks of these instances spun up from a snapshot according to scaling rules are called “auto scaling groups” in AWS.

A similar concept can be created within GCP using “instance groups”. However, instance groups are really more of a “stack”, created using an “instance group template”. As such, they are more closely related to AWS CloudFormation stacks.

 


AWS and GCP Compute Sizing

Both AWS and GCP have a dizzying array of instance sizes to choose from, and doing an apples-to-apples comparison between them can be quite challenging. These predefined instance sizes are based upon number of virtual cores, amount of virtual memory and amount of virtual disk.

Each provider groups its instance types into different categories.

AWS offers:

  • Free tier – inexpensive, burst performance (t2 family)
  • General purpose (m3/m4 family)
  • Compute optimized (c4 family)
  • GPU instances (p2 family)
  • FPGA instances (f1 family)
  • Memory optimized (x1, r3/r4 family)
  • Storage optimized (i3, d2 family)

 

GCP offers the following predefined types:

  • Free tier – inexpensive, burst performance (f1/g1 family)
  • Standard (n1-standard family)
  • High memory (n1-highmem family)
  • High CPU (n1-highcpu family)

 

However, GCP also allows you to make your own custom machine types, if none of the predefined ones fit your workload. You pay for uplifts in CPU/Hr and memory GiB/Hr. You can also add GPUs and premium processors as uplifts.

Both providers take marketing liberties with things like memory and disk sizes.  For example, AWS lists its memory size in GiB (base2) and disk size in GB (base10).
GCP reports its memory size and disk size as GB. However, to make things really confusing this is what they say on their pricing page: “Disk size, machine type memory, and network usage are calculated in gigabytes (GB), where 1 GB is 2^30 bytes. This unit of measurement is also known as a gibibyte (GiB).”

This, of course, is pure nonsense. A gigabyte (GB) is 10^9 bytes. A gibibyte (GiB) is 2^30 bytes. The two are definitely NOT equal. It was probably just a typo.
If you look at what is actually delivered, neither seems to match what is shown on their pricing pages. For example, an AWS t2.micro is advertised as having 1 GiB of memory. In reality, it is 0.969 GiB (using “top”).

For GCP, their f1-micro is advertised as “0.6 GB”. Assuming they simply have their units mixed up and “GB” should really be “GiB”, they actually deliver 0.580 GiB. So, both round up, as marketing/sales people are apt to do.
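To put numbers on that unit gap, here are a few lines of Python using the figures quoted above:

```python
GB, GiB = 10**9, 2**30

# The two units differ by about 7.4%
print(f"GiB/GB = {GiB / GB:.4f}")  # 1.0737

# Shortfall between advertised and observed memory (figures from above)
print(f"t2.micro: {1 - 0.969 / 1.0:.1%} short of its advertised 1 GiB")
print(f"f1-micro: {1 - 0.580 / 0.6:.1%} short of its advertised 0.6 GB")
```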

With respect to pricing, this is how the two seem to compare, by looking at some of the most common “work horses” and focusing on CPU, memory and cost. (One would have to run actual benchmarks to more accurately compare):

 

[Table: AWS vs. Google Cloud pricing for comparable instance types, including operating system uplifts]

The bottom line:

In general, for most workloads, AWS is less expensive on a CPU/Hr basis. For compute-intensive workloads, GCP instances are less expensive.

Also, as you can see from the table, both providers charge uplifts for different operating systems, and those uplifts can be substantial! You really need to pay attention to the fine print. For example, GCP charges a 4-core minimum for all their SQL uplifts (yikes!). And, in the case of Red Hat Enterprise Linux (RHEL) licensing in GCP, they charge you a 1-hour minimum for the uplifts, and in 1-hour increments after that. (We’ll talk more about how the providers charge you in the next section.)

AWS vs. Google Cloud Pricing – Examining the Differences

Cost/Hr is only one aspect of the equation, though. To better understand your monthly bill, you must also understand how the cloud providers actually charge you. AWS prices its compute time by the hour and rounds each partial hour up to a full hour. If you start an instance, run it for 61 minutes and then shut it down, you get charged for 2 hours of compute time.

Google Compute Engine pricing is also listed by the hour for each instance, but they charge you by the minute, rounded up to the nearest minute, with a 10-minute minimum charge. So, if you run for 1 minute, you get charged for 10 minutes. However, if you run for 61 minutes, you get charged for 61 minutes. On the surface, this sounds very appealing (and makes me want to wag my finger at AWS and say, “shame on you, AWS”).
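Here is a minimal sketch of these two per-session billing rules in Python, using the m4.large ($0.10/hr) and n1-standard-2 ($0.1070/hr) on-demand rates discussed below; Sustained Use Discounts are ignored for now:

```python
import math

def aws_session_cost(minutes, hourly_rate):
    """AWS (2017): each run session is rounded up to the next full hour."""
    return math.ceil(minutes / 60.0) * hourly_rate

def gcp_session_cost(minutes, hourly_rate):
    """GCP: billed per minute, rounded up, with a 10-minute minimum."""
    return max(10, math.ceil(minutes)) * hourly_rate / 60.0

for minutes in (1, 61, 120):
    print(minutes, "min:",
          "AWS", round(aws_session_cost(minutes, 0.10), 4),      # m4.large
          "GCP", round(gcp_session_cost(minutes, 0.1070), 4))    # n1-standard-2
```

For a single 61-minute run, AWS bills $0.20 while GCP bills about $0.11. That per-session rounding is exactly why a frequently started and stopped instance can rack up extra billed hours on AWS, as the example below shows.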

You also really need to pay attention to the use case and the comparable instance prices. Let me give you a concrete example. Here is a graph of six months’ worth of data from an m4.large instance. Remember that our goal at ParkMyCloud is to help you “park” non-production instances automatically, when they are not being used, to save you money.

This instance is on a ParkMyCloud parking schedule, where it is RUNNING from 8:00 a.m. to 7:00 p.m. on weekdays and PARKED evenings and weekends. This instance, assuming Linux pricing, costs $0.10 per hour in AWS. From November 6, 2016 until May 9, 2017, this instance ran for 111,690 minutes. This is actually about 1,862 hours, but AWS charged for 1,922 hours and it cost $192.20 in compute time.

 

[Graph: six months of usage data for the m4.large instance on its ParkMyCloud parking schedule]

Why the difference? ParkMyCloud has a very fast and accurate orchestration engine, but when you start and stop instances, the cloud provider and network response can vary from hour-to-hour and day-to-day, depending on their load, so occasionally things will run that extra minute. And, even though this instance is on a parking schedule, when you look at the graph, you can see that the user took manual control a few times, perhaps to do maintenance. Stuff happens!

What would it have cost to run a similar instance in GCP? If you look at the comparable GCP instance (the n1-standard-2), it costs $0.1070/hour. So, this workload running in GCP would have cost $199.18 (not including Sustained Use Discounts). Since this instance really only ran 42.6% of the time (111,690 minutes out of 262,140 minutes), it would qualify for a partial Sustained Use Discount. With those discounts, the actual cost would have been about $182.72. This is about $10 cheaper than AWS, even though the per-hour cost for AWS was lower. That may not seem like much, but if you have hundreds or thousands of instances, it adds up.
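For the curious, GCP’s 2017 Sustained Use Discount billed each successive quarter of a billing period at 100%, 80%, 60% and 40% of the base rate. The sketch below reproduces the ~$182.72 figure, with the simplifying assumption that the whole six-month window is treated as a single billing period (in reality the discount is computed month by month):

```python
def sustained_use_cost(hours_used, hours_in_period, hourly_rate):
    """GCP Sustained Use Discount (2017 tiers): successive quarters of the
    period are billed at 100%, 80%, 60% and 40% of the base rate."""
    quarter = hours_in_period / 4.0
    cost, remaining = 0.0, hours_used
    for multiplier in (1.0, 0.8, 0.6, 0.4):
        block = min(remaining, quarter)
        cost += block * multiplier * hourly_rate
        remaining -= block
    return cost

# 111,690 minutes used out of a 262,140-minute window, at the n1-standard-2 rate
print(round(sustained_use_cost(111690 / 60, 262140 / 60, 0.1070), 2))  # 182.72
```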

AWS Reserved Instances vs GCP Committed Use

Both providers offer deeper discounts off their normal pricing, for “predictable” workloads that need to run for sustained periods of time, if you are willing to commit to capacity consumption upfront. AWS offers Reserved Instances. Google offers Committed Use Discounts (currently in beta). An in-depth comparison of these is beyond the intent of this blog (and you have already been very patient, if you made it this far). Therefore, I’ll reserve that discussion for a future blog.

Conclusion

If you are new to public cloud, once you get past all the confusing jargon, the creative approaches to pricing and the different ways providers charge for usage, the actual cloud services themselves are much easier to use than legacy on-premise services.

The public cloud services do provide much better flexibility and faster time-to-value. The cloud providers simply need to get out of their own way. Pricing is but one example where AWS and GCP could stand to make things a lot simpler, so that newcomers can make informed decisions.

When comparing AWS vs. Google Cloud pricing, AWS EC2 on-demand pricing may on the surface appear more competitive than GCP pricing for comparable compute engines. However, when you examine specific workloads and factor in Google’s more enlightened approach to charging for CPU/Hr time and their use of Sustained Use Discounts, GCP may actually be less expensive. AWS really needs to get in line with both Azure and Google, who charge by the minute and have much smaller minimums. Nobody likes being charged extra for something they don’t use.

In the meantime, ParkMyCloud will continue to help you turn off non-production cloud resources when you don’t need them, and help save you a lot of money on your monthly cloud bills, regardless of which public cloud provider you use.


Was the Acquisition of Cloudyn About the Need to Manage Microsoft Azure? Sort of.


Perhaps you heard that Microsoft recently acquired Cloudyn in order to manage Microsoft Azure cloud resources, along with, of course, Amazon Web Services (AWS), Google Cloud Platform (GCP), and others. Why? Well, the IT landscape is becoming more and more a multi-cloud landscape. Originally this multi-cloud (or hybrid cloud) approach was about private and public cloud, but as we recently wrote here, among the large enterprises we talk to the strategy is becoming more about leveraging multiple public clouds, for a variety of reasons – risk management, vendor lock-in, and workload optimization seem to be the three main ones.

 

That said, according to TechCrunch and quotes from Microsoft executives, the acquisition is meant to provide Microsoft with a cloud billing and management solution that gives it an advantage over competitors (particularly AWS and GCP) as companies continue to pursue, drum roll please … a multi-cloud strategy. Additional benefits for Microsoft include visibility into usage patterns, adoption rates, and other cloud-related data points that it can leverage in the ‘great cloud war’ to come … a GOT reference, of course.

 

Why are we writing about this? A couple of reasons. One, of course, is that this is a relevant event in the cloud management platform (CMP) space, as it is really the first big cloud visibility and governance acquisition to date. The other acquisitions – by Dell (Enstratius), Cisco (CliQr), and CSC (ServiceMesh), for example – were more orchestration and infrastructure platforms than reporting tools. Second, this points to the focus enterprises have on cost visibility, cost management and governance as they look to optimize their spend and usage, as one does with any utility. And third, this confirms that a common pushback from enterprises on adopting Azure more widely has been, “I am already using AWS, I don’t want to manage through yet another screen/console”, and that multi-cloud visibility and governance helps solve that problem.

 

Now, taking this one step further: the visibility, recommendations, and reporting are all well and good, but what about the actions that must be taken off those reports, and integration into enterprise DevOps processes for automation and continuous cost control? That’s where something like Cloudyn falls short, and where a platform like ParkMyCloud kicks in:

 

  • Multi-cloud Visibility and Governance – check
  • Single Sign-On (SSO) – check
  • REST API for DevOps Automation – check
  • Policy Engine for Automated Actions (parking) – check
  • Real-time Usage and Savings data – check
  • Manage Microsoft Azure (plus AWS and GCP) – check

 

The next step in cloud cost control is automation and action, not just visibility and reporting. Let technology automate these tasks for you instead of just telling you about it.


AWS Slack Integration for Interactive Cost Control


Today we’re happy to announce a new chatbot for AWS Slack integration that allows you to fully interact with ParkMyCloud without having to access the GUI.  Combined with the recent addition of Notifications in ParkMyCloud, you can manage your continuous cost control from the Slack channels you live in every day!

 

Developers and operations engineers are increasingly utilizing ChatOps to manipulate their environments and help users self-manage the servers and databases they require for their work. There are a few different chat systems and bot platforms available, but the most commonly used today is Slack. By setting up the SlackBot to interact with your ParkMyCloud account, you can allow users to assign schedules, temporarily override parked instances, or toggle instances off or on as needed.

 

Combine this with notifications from ParkMyCloud, and you can have full visibility into your cost control initiatives right from your standard Slack chat channels.  Notifications allow you to have ParkMyCloud post messages for things like schedule changes or instances that are being turned off automatically.  Now, with the new ParkMyCloud Slackbot, you can reply back to those notifications to snooze the schedule, turn a system back on temporarily, or assign a new schedule.

 

The chatbot is open-source, so you can feel free to modify the bot as necessary to fit your environment or use cases.  It’s written in Python using the slackclient library, but even if you’re not a Python expert, you’ll find it easy to modify to suit your needs.  We’d love to have you send your ideas and modifications back to us for rapid improvement.
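For a flavor of the pattern (a minimal sketch, not the bot’s actual code – see the open-source repository for that), here is a slackclient 1.x loop that watches for a keyword and replies; the trigger word and reply text are placeholders:

```python
import time
from slackclient import SlackClient

sc = SlackClient("xoxb-your-bot-token")  # placeholder bot token

if sc.rtm_connect():
    while True:
        for event in sc.rtm_read():
            # React to channel messages that mention "park"
            if event.get("type") == "message" and "park" in event.get("text", "").lower():
                sc.api_call(
                    "chat.postMessage",
                    channel=event["channel"],
                    text="Got it – I would trigger a ParkMyCloud action here.",
                )
        time.sleep(1)  # avoid a busy loop between RTM reads
```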

 

If you haven’t already signed up for ParkMyCloud, then start a free trial and get the Slackbot hooked up for easy AWS Slack integration.  You’ll find that ParkMyCloud can make continuous cost control easy and help reduce your cloud spend, all while integrating with your favorite DevOps tools!


New on ParkMyCloud: Notifications via Slack and Email

New on ParkMyCloud: you can now receive notifications about your environment and ParkMyCloud account via email as well as Slack and other webhooks. We’re happy to deliver this user-requested feature, and look forward to an improved user experience.

The notifications are divided into system-level notifications and user-level notifications, as outlined below.

Administrators: Configure Notifications of Account-Level Actions via Slack/Webhooks

Administrators can now set up shared account-level notifications for parking actions and/or system errors. You can choose to receive these actions via Slack or a custom webhook.

These notifications include information about:

  • Parking Actions
    • Resource stop/start as a result of a schedule
    • Manual resource start/stop via toggles
    • Manual schedule snoozes
    • Attach/detach of schedules to resources
    • Manual changes to schedules
  • System Errors
    • Permissions issues, such as a lack of permissions on an instance or credential that prevents parking actions
    • Errors related to your cloud service provider, for example, errors due to service outages.

For instructions on how to configure these notifications, please see this article on our support portal.
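If you go the custom webhook route, your endpoint simply needs to accept the POSTed notifications. Here is a minimal Flask sketch; the payload structure is not documented here, so treat the handler body as a placeholder and consult the support article above for the actual format:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/parkmycloud-notifications", methods=["POST"])
def receive_notification():
    event = request.get_json(force=True)
    # Placeholder handling: log the notification; route or alert as needed
    print("ParkMyCloud notification:", event)
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```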

All Users: Get Notified via Email

While system-level notifications must be configured by an administrator, individual ParkMyCloud users can choose to set up email notifications as well. These notifications include the same information listed above for the teams you choose.

Email notifications will be sent as a rollup every 15 minutes. If no actions occur, you will not receive an email. For instructions on how to configure these notifications, please see this article on our support portal.

Let Us Know What You Think

To our current users: we look forward to your feedback on the notifications, and welcome any suggestions you have to improve the functionality and usability of ParkMyCloud.

If you aren’t yet using ParkMyCloud, you can get started here with a free trial.
