We are happy to share the latest ParkMyCloud release: you can now see resource utilization data for your AWS EC2 instances, displayed in customizable heatmaps!
This update gives you information about how your resources are being used – and it also provides the necessary information that will allow ParkMyCloud to make optimal parking and rightsizing recommendations when this feature is released next month. This is part of our ongoing efforts to do what we do best – save you money, automatically.
Utilization metrics that ParkMyCloud will now report on include:
Average CPU utilization
Peak CPU utilization
Total instance store read operations
Total instance store write operations
Average network data in
Average network data out
Average network packets in
Average network packets out
Here is an example of an instance utilization heatmap, which allows you to see when your instances are used most often:
In a few weeks, we will release the ability for ParkMyCloud to recommend parking schedules for your instances based on these metrics. To take advantage of this, you will need several weeks’ worth of CloudWatch data already logged, so that we can base recommendations on your typical usage. Start your ParkMyCloud trial today to begin tracking your usage patterns and get usage-based parking recommendations.
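To give a rough feel for how usage metrics can drive a parking schedule, here is a minimal sketch in Python. This is not ParkMyCloud's actual recommendation algorithm; the idle-CPU threshold and the shape of the input data are assumptions made purely for illustration:

```python
# Sketch: derive "parkable" hours of the day from hourly average CPU data.
# Illustrative only -- the real recommendation logic is ParkMyCloud's own,
# and the 5% idle threshold is an assumed cutoff.

CPU_THRESHOLD = 5.0  # percent; assumed cutoff for "idle"

def parkable_hours(hourly_avg_cpu):
    """hourly_avg_cpu: dict mapping hour-of-day (0-23) to average CPU %.
    Returns the hours where the instance looks idle enough to park."""
    return sorted(h for h, cpu in hourly_avg_cpu.items() if cpu < CPU_THRESHOLD)

# Example: an instance busy during the work day (9am-6pm), idle otherwise.
usage = {h: (40.0 if 9 <= h < 18 else 1.5) for h in range(24)}
print(parkable_hours(usage))  # overnight and evening hours
```

In practice the input would come from several weeks of CloudWatch data rather than a single synthetic day, which is why a history of logged metrics is needed before recommendations can be made.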
If you are an existing customer, you will need to update your AWS policies to enable ParkMyCloud to access your AWS CloudWatch data. Detailed instructions can be found in our support portal.
Feedback? Anything else you’d like to see ParkMyCloud do? Let us know!
When making a cloud service provider comparison, you would probably think of the “big three” providers: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). Thus far, AWS has led the cloud market, but the other two are gaining market share, driving us to make comparisons between Azure vs AWS and Google vs AWS. But that’s not the whole story.
In recent years, a few other “secondary” cloud providers have made their way into the market, offering more options to choose from. Are they worth looking at, and could one of them become the next big provider?
Andy Jassy, CEO of AWS, says: “There won’t be just one successful player. There won’t be 30 because scale really matters here in regards to cost structure, as well as the breadth of services, but there are going to be multiple successful players, and who those are I think is still to be written. But I would expect several of the older guard players to have businesses here as they have large installed enterprise customer bases and a large sales force and things of that sort.”
So for our next cloud service provider comparison, we are going to do an overview of what could arguably become the next biggest provider in the public cloud market (after all, we may need to add a fourth cloud provider to the ParkMyCloud arsenal):
Alibaba is a cloud provider not widely known in the U.S., but it’s taking China by storm and giving Amazon a run for its money in Asia. It’s hard to imagine a cloud provider (or e-commerce giant) more successful than Amazon, let alone a provider that isn’t part of the big three, but Alibaba has its sights set on surpassing AWS to dominate the worldwide cloud computing market.
In 2016, cloud revenue was $675 million, surpassing Google Cloud’s $500 million; first-quarter revenue was $359 million, rising to $447 million in the second quarter.
Alibaba was dubbed the highest-ranking cloud provider in terms of revenue growth, with sales increasing 126.5 percent from 2015 ($298 million) to 2016
Gartner research places Alibaba’s cloud in fourth place among cloud providers, ahead of IBM and Oracle
Alibaba Cloud entered the cloud computing market just three years after Amazon launched AWS. Since then, Alibaba has grown at a faster pace than Amazon, largely due to its domination of the Chinese market, and is now the 5th largest cloud provider in the world.
Alibaba’s growth is attributed in part to the booming Chinese economy, as the Chinese government continues digitizing, bringing its agencies online and into the cloud. In addition, as the principal e-commerce system in China, Alibaba holds the status as the “Amazon of Asia.” Simon Hu, senior vice president of Alibaba Group and president of Alibaba Cloud, claims that Alibaba will surpass AWS as the top provider by 2019.
For the time being, Amazon is still the dominant player, with a market value exceeding $400 billion compared to Alibaba’s $250 billion. Still, Alibaba Cloud is growing at incredible speed, with triple-digit year-over-year growth over the last several quarters. As the dominant cloud provider in China, Alibaba is positioned to continue growing, and is still in the early stages of its expansion in the cloud computing market. Only time will reveal what Alibaba Cloud will do, but in the meantime, we’ll definitely be keeping a lookout. After all, we have customers in 20 countries around the world, not just in the U.S.
Next Up: IBM & Oracle
Apart from the big three cloud providers, Alibaba is clearly making a name for itself with a fourth-place ranking in the world of cloud computing. While this cloud provider is clearly gaining traction, a few more have made their introduction in recent years. Here’s a snapshot of the next two providers in our cloud service provider comparison:
IBM:
At the end of June 2017, IBM made waves when it outperformed Amazon in total cloud computing revenue, at $15.1 billion to $14.5 billion over a year-long period
Oracle:
In fiscal Q1 of 2018, cloud revenue growth was 51 percent, down from a 60 percent average over the previous four quarters
Growth in Q4 of fiscal 2017 was 58 percent
Shares have fallen about 10 percent since last quarter
When making a cloud service provider comparison, don’t limit yourself to the “big three” of AWS, Azure, and GCP. They may dominate the market now, but other providers continue to grow, innovate, and build their followings in the cloud wars – and we’ll continue to track and compare them as earnings are reported.
Blue-green deployments are a great way to minimize downtime and risk; however, users should also keep cost in mind when optimizing deployments.
Why You Should Use Blue-Green Deployments
One approach to continuous deployment of applications that has surged in popularity recently is the blue-green deployment.
The main idea behind this system is to have two full production deployments in existence that are running the last two versions of code, with only the latest version actively in use. For instance, if the current version of your software is running in your “blue” environment, your next deployment would take place in the “green” environment. When you’re ready to flip the switch, you start pointing users at green instead of blue.
This deployment method has a few great benefits. First, it minimizes downtime when cutting over to newly deployed code: instead of upgrading a live system and making users wait until the upgrade completes, you simply redirect traffic once the new environment is ready. Second, along the same lines, you get a fresh deployment each time instead of upgrading an existing system repeatedly. Third, you keep a known-working environment that you can roll back to if necessary.
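The cutover-and-rollback mechanics described above can be sketched as a tiny state machine. This is a conceptual illustration only (the class name and labels are made up for this example); in a real deployment the "switch" would be a load balancer, DNS record, or router change:

```python
# Minimal sketch of blue-green cutover state: two environments, one live.
# Purely illustrative -- in production, flipping traffic means updating a
# load balancer target, DNS record, or similar.

class BlueGreenRouter:
    def __init__(self):
        self.active = "blue"    # environment currently serving users
        self.previous = None    # roll-back target after a cutover

    def standby(self):
        return "green" if self.active == "blue" else "blue"

    def cut_over(self):
        """Point users at the freshly deployed standby environment."""
        self.previous, self.active = self.active, self.standby()

    def roll_back(self):
        """Return traffic to the last known-good environment."""
        if self.previous is None:
            raise RuntimeError("nothing to roll back to")
        self.active, self.previous = self.previous, self.active

router = BlueGreenRouter()
router.cut_over()        # new version landed in green; flip traffic to it
print(router.active)     # green
router.roll_back()       # something broke; go back to blue
print(router.active)     # blue
```

The key property is that a rollback is just another pointer swap, which is what makes blue-green cutovers fast in both directions.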
How to Optimize Costs With Two Production Deployments
Of course, running two production environments means that you are paying twice the cost for your infrastructure. ParkMyCloud users have asked how they can optimize costs while using the blue-green deployment strategy. We use AWS internally for our blue-green deployments, so we’ll discuss some options in AWS terminology, but the same concepts apply to Azure and Google Cloud as well.
One approach is to use AWS Auto-Scaling Groups as your deployment mechanism. With ASGs, you decide how many instances you want as a minimum, a maximum, and a desired amount for your environment. When setting up ASGs in ParkMyCloud, you can have two different settings for min/max/desired for when the ASG is “on” and “off”. This way, you can have an ASG for blue and one for green, then use ParkMyCloud to set the min/max/desired as needed, so each of these environments is only running when necessary, and not wasting money.
Another option is to use Logical Groups in ParkMyCloud. This allows you to group together instances into one entity, so you could have a database and a web server start and stop together. If you go this route, you can put all of your blue instances together in a group, then start the whole group up when you are ready to switch over. When going between blue and green, you can just update the logical group to have the newest instances as you deploy. Again, this allows you to park the inactive environment, saving its cost.
If your continuous deployment is fully automated, a third option is to utilize the ParkMyCloud API to change schedules and toggle servers as deployments are completed. Typically, you’ll want your current active deployment on an “always on” schedule, so ParkMyCloud will turn things on even if someone tries to turn them off, and the standby deployment on an “always off” schedule so you are saving money.
This idea of using ParkMyCloud with blue-green deployments is one way to start implementing Continuous Cost Control in your pipeline. This way, you can save money while delivering software quickly and automatically. Try it out with ParkMyCloud today and get the most out of your cloud!
Analyst firm 451 Research has released a new report on ParkMyCloud, highlighting that “ParkMyCloud continues to build out its multi-cloud scheduling software, maintaining the clean interface but adding functionality with a reporting dashboard, single sign-on and notifications, including a Slackbot for automated parking.”
It’s true! We’ve been steadily adding features to ParkMyCloud as our customers ask for them. Recent examples include:
Mobile app – easy access to your ParkMyCloud account for cost management on the go
Slack integration – get notifications and manage your continuous cost control via Slack
Here’s the full “451 take” on ParkMyCloud:
“ParkMyCloud is one of a handful of products that automate cloud resource scheduling via a lightweight SaaS application. With support for Azure and Google Cloud Platform as well as AWS, it offers a bird’s-eye view of provisioned public cloud resources and a slick interface for ‘parking’ idle capacity, either according to a schedule or ad hoc. With a clear ROI story and plans to improve the user experience with a mobile app and a more robust policy engine, the company benefits from a focus on doing one thing and doing it well.”
That “clear ROI story” that 451 Research noted is clear to our customers, too. In fact, most customers see a full return on investment within their first two months of using the product. The savings rapidly pay for the cost of premium features.
They also noted that the number of instances managed in the platform has tripled, just from Q2 to Q3 this year. More and more AWS, Azure, and GCP users are relying on ParkMyCloud for continuous cost control.
It has been a little over a month since Amazon and Google switched some of their cloud services to per-second billing, so the first invoices reflecting the change are hitting your inboxes right about now. If you are not seeing the cost savings you hoped for, it may be a good time to look again at which services were slated for the pricing change, and how you are using them.
Google Cloud Platform
Starting with the easiest one, Google Cloud Platform (GCP), you may not be seeing a significant change, as most of their services were already billing at the per-minute level, and some were already at the per-second level. The services moved to per-second billing (with a one-minute minimum) included Compute Engine, Container Engine, Cloud Dataproc, and App Engine VMs. Moving from per-minute billing to per-second billing is not likely to change a GCP service bill by more than a fraction of a percent.
Let’s consider the example of an organization that has ten GCP n1-standard-8 Compute Engine machines in Oregon, at a base cost of $0.3800 per hour as of the date of this blog. Under per-minute billing, the worst-case scenario would be to shut a system down one second into the next minute, for a cost difference of about $0.0063. Even if each of the ten systems were assigned to the QA or development organization, and they were shut down at the end of every work day, say 22 days out of the month, your worst-case scenario would be an extra charge of 22 days x 10 systems x $0.0063 = $1.386. Under per-second billing, the worst case is to shut down at the beginning of a second, with a highest possible cost for these same machines (sparing you the math) being about $0.02. So, the most this example organization can hope to save over a month on these machines with per-second billing is about $1.37.
Amazon Web Services
On the Amazon Web Services (AWS) side of the fence, the change is both bigger and smaller. It is bigger in that they took the leap from per-hour to per-second billing for On-Demand, Reserved, and Spot EC2 instances and provisioned EBS, but smaller in that it applies only to Linux-based instances; Windows instances are still billed per-hour.
Still, if you are running a lot of Linux instances, this change can be significant enough to notice. Looking at the same example as before, let’s run the same calculation with the roughly equivalent t2.2xlarge instance type, charged at $0.3712 per hour. Under per-hour billing, the worst-case scenario is to shut a system down even a second into the next hour. In this example, the cost would be an extra charge of 22 days x 10 systems x $0.3712 = $81.664. Under per-second billing, the worst case is the same $0.02 as with GCP (with fractions of cents difference lost in the noise). So, under AWS, one can hope to see significantly different numbers in the bill.
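The arithmetic in these two examples can be checked directly. A quick sketch using the same quoted prices and the 22-day, ten-instance scenario, where the "overage" is the worst-case billable-but-unused time at each billing granularity:

```python
# Worst-case monthly "rounding" overage under different billing
# granularities: 10 instances, each stopped once per day, 22 days/month.
# Prices are the per-hour rates quoted in the examples above.

STOPS = 22 * 10  # stop events per month across the fleet

def worst_case_overage(hourly_price, granularity_seconds):
    """Max billable-but-unused cost per stop, summed over the month."""
    per_second = hourly_price / 3600
    return STOPS * per_second * granularity_seconds

gcp_per_minute = worst_case_overage(0.3800, 60)    # ~ $1.39
gcp_per_second = worst_case_overage(0.3800, 1)     # ~ $0.02
aws_per_hour   = worst_case_overage(0.3712, 3600)  # ~ $81.66

print(round(gcp_per_minute, 2), round(gcp_per_second, 2), round(aws_per_hour, 2))
```

The contrast is stark: shaving a minute of rounding saves pennies, while shaving an hour of rounding saves tens of dollars a month even for this small fleet.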
The scenario above is equally relevant to other situations where instances get turned on and off on a frequent basis, driving those fractions of an hour or a minute of “lost” time. Another common example would be auto-scaling groups that dynamically resize based on load, and see enough change over time to bring instances in and out of the group. (Auto-scale groups are frequently used as a high-availability mechanism, so their elastic growth capabilities are not always used, and so savings will not always be seen.) Finally, Spot instances are built on the premise of bringing them up and down frequently, and they will also enjoy the shift to per-second billing.
However, as you look at your cloud service bill, do keep in mind some of the nuances that still apply:
Windows: GCP applies per-second billing to Windows; AWS is still on one-hour billing for Windows.
Marketplace Linux: Some Linux instances in the AWS Marketplace that have a separate hourly charge are also still on hourly billing (perhaps due to contracts or licensing arrangements with the vendors?), so you may want to reconsider which flavor of Linux you want to use.
Reserved instances: AWS does strive to “use up” all of the pre-purchased time for reserved instances, spreading it across multiple machines with fractions of usage time, and per-second billing can really stretch the value of these instances.
Minimum one-minute charge: Both GCP and AWS charge for at least one minute from instance start before per-second billing comes into play.
Overall, per-second billing is a great improvement for consumers of cloud resources…and will probably drive us all more than ever to make each second count.