We attended AWS re:Invent last fall (and we’re already looking forward to the 2016 event in December!), but this was ParkMyCloud’s first AWS Summit.
Although the event was substantially smaller than re:Invent, we were pleasantly surprised by the number of people who visited our booth and by the number of meaningful conversations we had with companies across the board, from those just assessing AWS as an option to those with established and growing AWS footprints.
Keynote from Dr. Matt Wood
The event’s keynote was given by Dr. Matt Wood, the General Manager for Product Strategy at AWS. You can watch the entire keynote recording below:
Amazingly, AWS continues to grow at 70% year-over-year and is now a $10B business. EC2 makes up more than two-thirds of that business and, at last report, was growing at 95% year-over-year.
There were a few announcements of new AWS products in the keynote (full list here).
AWS now uses SSD storage as the default for EBS (Elastic Block Store). SSD is wonderful for small-block, random I/O (for example, OLTP databases), but it is not cost-effective for large-block sequential workloads, such as video or image processing.
Therefore, AWS announced two new magnetic EBS storage offerings:
A throughput optimized EBS offering (ST1) providing up to 500 MB/sec of sequential performance. It costs $0.045 / GB / month.
There is also a cold storage version (SC1), providing up to 250 MB/sec, but costing only $0.025 / GB / month.
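To make the tradeoff concrete, here is a quick back-of-the-envelope comparison of the two magnetic offerings at the per-GB rates above (a simple sketch; an actual bill also depends on snapshots, region, and the rest of your EBS usage):

```python
# Monthly EBS storage cost at the published per-GB rates
# for the two new magnetic volume types.
ST1_RATE = 0.045  # throughput optimized (ST1), $/GB/month
SC1_RATE = 0.025  # cold storage (SC1), $/GB/month

def monthly_cost(size_gb, rate_per_gb):
    """Return the monthly storage cost for a volume of size_gb gigabytes."""
    return size_gb * rate_per_gb

# A 2 TB volume for a large-block sequential workload:
print(monthly_cost(2000, ST1_RATE))  # 90.0 (ST1)
print(monthly_cost(2000, SC1_RATE))  # 50.0 (SC1)
```

For a multi-terabyte video-processing scratch volume, the cold-storage discount adds up quickly if you can live with the lower throughput.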
Amazon Inspector also became generally available. Inspector is an automated security and vulnerability assessment service that checks the security posture of your application against defined best practices.
AWS also announced non-disruptive, automatic platform (OS) updates for Elastic Beanstalk deployments. This will definitely save time on managing Beanstalk implementations.
To do this non-disruptively, they leverage “blue/green” auto scaling groups behind the load balancer: blue is the current production environment, and green is the version being updated. Once the update is complete, operation is cut over to green.
AWS announced a beta of the ability to create managed identity pools for Cognito. I am intrigued by how this, combined with multi-factor authentication (MFA), would compare with something like SAML 2.0 plus MFA.
As usual, there were some interesting customer talks. I found the one from Duolingo particularly interesting. They offer 80 different language courses for free, supporting over 18 million users per month running 6 billion exercises. Their whole environment is managed by just two DevOps folks!
Not only is it eye-opening to see the innovations that AWS is continuously releasing into their ecosystem, it’s also great to meet and talk with AWS customers face-to-face and talk shop.
I have nothing but the utmost respect for DevOps (development and operations) people. They are unsung heroes in my opinion. Living in that precarious place between the developers, IT operations, and the business people, their job is to streamline and stabilize operations related to the rollout of new applications and code updates to support the business.
When everything is working well, most people forget they are there. Much like offensive linemen in football, the only time people seem to notice them is on those rare occasions when something goes wrong. It doesn’t seem fair, but such is the life of DevOps.
To achieve near continuous deployment for applications, a high degree of automation is essential from the time new code changes hit the source code repository until they are pushed through test, QA, staging and into production. To accomplish that, DevOps teams require a working knowledge of their applications at a system level, as well as a deep understanding of the IT infrastructure (servers, storage, databases and network), to properly marry the two.
Inherent in this process is constant optimization to streamline the process and keep costs low. They are constantly evaluating build vs. buy for the tools they use in their trade. The preference is to use commercial off-the-shelf products if they are more cost-effective. This frees up their team to focus on keeping the “main thing the main thing”.
/* Begin Shameless Plug */
The whole idea of ParkMyCloud is to help out that part of the DevOps community who run their environments in Amazon Web Services (AWS).
With ParkMyCloud, you can schedule on/off times for development, testing, QA and staging environments without AWS scripting for as little as $1-$2 per instance per month.
A number of our larger customers have walked away from their own scripted solutions to do this in favor of ParkMyCloud for a few reasons:
It was costing their team more to maintain their AWS scripts
The time spent working on those scripts was time that could have been spent on mainline business applications (a huge opportunity cost)
Their scripts provided no reporting on cost savings, so they had no idea whether they were getting a return on their investment. (With ParkMyCloud, the payback is usually within 2-3 months.)
/* End Shameless Plug*/
/* Begin Rant */
So, I told you all of that to air a real pet peeve that I have.
Imagine my surprise, then, when I still talk to potential customers bent on writing their own AWS scripts to turn instances on and off. It just doesn’t make sense.
When they tell me, “Well, we can do that,” my response is, “Does your DevOps team also clean toilets?”
Then they give me this weird look (kind of like the look on your face right now), and respond, “Well, no.”
“Why not?” I ask. “Are they not smart enough to clean toilets?”
“Well of course they are smart enough, but it is not worth their time. We hire a janitorial service to clean our restrooms.”
“So, let me get this straight: You are enlightened enough to realize that cleaning toilets would be a waste of your team’s time, so you hired a janitorial service. Why on earth would you waste your precious DevOps resources to do the moral equivalent of this in IT, by having them waste time writing scripts to schedule on/off times for EC2 instances?”
“They should be spending that time on your main business applications. Leave that to us!”
Increasingly, they get the point.
/* End Rant */
In closing, please remember: Friends don’t let DevOps friends waste time on AWS scripting for things not related to application delivery (especially when there are more cost-effective commercial products available to help save time and money). Friends do tell their DevOps friends about ParkMyCloud.
Today, we reach the fifth and final post in our series of ways to save money on AWS. Way #5 to save is… ParkMyCloud! Read on, or watch the video version of this post:
ParkMyCloud is purpose-built to do one thing well, and that’s to schedule on/off times for EC2 instances in non-production environments, without scripting. We call that “parking.” You can think of “parked” as a new instance state between running and stopped.
Depending on the schedule you use, ParkMyCloud can achieve savings of up to 50-73%, making it better than Reserved Instances for non-production, without the annual commitment, without concerns about price cuts, and without having to pay upfront. It provides the same savings as Spot Instances without the risk of abrupt termination.
ParkMyCloud vs. Reserved Instances
So let’s look at the comparison with Reserved Instances a little more closely. I’ll show you why I think ParkMyCloud is better than Reserved Instances for non-production.
You’ll notice that I’ve added ParkMyCloud this time. The first blue ParkMyCloud column shows the costs when you use a typical schedule of running 12 hours a day, parked 12 hours a day, and parked on weekends. That savings is around 64%. To match that with Reserved Instances, you would actually have to pay the three-year upfront cost for Reserved Instances. Most people, because of the concern over price cuts, don’t use the three-year contracts; they stick to the one-year term.
The concern over price cuts does not apply to ParkMyCloud. If you have the 30% cut that you see in the “On-Demand w/ Price Cut” column, with ParkMyCloud, you would still get that 64% savings, but it’s against that new price point — see the “PMC w/ Price Cut” column.
How to Use ParkMyCloud
Unlike managing Reserved Instances, using ParkMyCloud is very simple. Essentially, all you do is create a schedule, name it, and save it (and that’s only if you don’t want to use one of the schedules we provide):
Then you go to the dashboard and attach that schedule to one or more of your non-production instances.
Once you’ve attached those, we will predict your 30-day savings. Leave the schedules on for a while, and once instances start parking, you’ll see the savings you’ve achieved to date.
So let’s summarize the 5 ways to save on AWS EC2 we discussed.
Reserved Instances will save you about 31-43%, but the downside is, you’re locked in for 1-3 years. Additionally, there’s no protection against future AWS price cuts, and it’s definitely a use-it-or-lose-it situation. Although they can be used in both production and non-production, production is probably the best place for them.
We also talked about Spot Instances, which can routinely save 70-90%. However, these are much higher risk because of long delays in request fulfillment, termination of instances on short notice, and the need for complex mitigation strategies. Still, there are definite use cases for both production and non-production, as we saw.
We looked at Auto Scaling, which can take advantage of all of these instance purchasing options. That makes the savings and the risk hard to pin down, since they depend on the purchasing options and the scaling rules you have in place.
We talked about scripting, but honestly, I don’t think scripting is particularly cost-effective. It can deliver savings similar to ParkMyCloud’s, but at a greater cost to get there.
Lastly, today we talked about ParkMyCloud, where you can get 50-73% savings pretty simply, without the risks of Spot or Reserved Instances. One limitation is that we don’t recommend using ParkMyCloud on production instances. You can’t park Spot Instances, because you can’t stop them – they just terminate. And we don’t currently park Auto Scaling groups, although we are looking into doing this. But those limitations are small compared to the amount and simplicity of the savings.
If this is of interest to you, I encourage you to sign up for a 30-day free trial. You can create an account and pocket the savings for the next 30 days while you try it out.
So far in the “How to Save Money on AWS” series, we’ve looked at three ways to save using different AWS purchasing options. Today, let’s look at the first way outside of AWS: scheduling on/off times for your idle EC2 instances.
Read on, or watch the video version below:
Using AWS Scripting to Schedule On/Off Times
Often, non-production environments like development and staging run 24×7 even though they are not being used during off-peak hours, like nights and weekends. Therefore, the simplest way to save money on these environments is to turn them off when not in use. Of course, that’s easier said than done.
For example, in AWS, you could use Data Pipeline and script it up, but there isn’t anything native to the platform, and AWS has no plans to add anything.
You might also turn to any number of the cloud analytics platforms on the market. One thing they’ll recommend is that you turn off these non-production instances when not in use. However, they’re basically tattletales: they tell you what’s wrong but don’t fix it for you.
Now, there are some cloud management platforms that can not only identify these instances but actually take action. However, they can be bloated, expensive, and quite complicated, with a steep learning curve.
So what do people do when there’s a lack of viable options? Well, as a recovering command-line guy, I know that when the going gets tough, the tough start scripting. Scripting tends to be in many of our comfort zones: I know I’ve scripted a lot of things, and I’m sure I’m not alone. I get the appeal: you’re in control of your own destiny, you get to get your hands dirty, and at the end of the process, you get the satisfaction of having built something from start to finish.
The Problem with AWS Scripting
However, AWS scripting is not cost-effective, for several reasons.
First, creating the script is only half the battle. Once it’s built, you have to maintain it as your environment changes; if you have a large environment that changes frequently, even with the help of Chef, Puppet, or SaltStack, there’s added cost to keep up with it. Second, if you don’t maintain your scripts, you miss instances you could have turned off, and that carries a cost of its own, perhaps even more than you’re spending to maintain the scripts.
Lastly, how do you justify the fact that you’re working on these scripts to your boss? That requires some way of being able to show the cost benefit and do the tracking, and that will take time. Heaven forbid that your boss actually likes the report, thus requiring you to implement a more formal cost-tracking system.
While scripting on/off times for your instances might be a relatively easy fix in the short term, it’s not a sustainable long-term solution to the cost problem.
Stay tuned for next week’s post, the final in our series on how to save money on AWS.
Today, let’s take a look at how you can manage parking recommendations in ParkMyCloud to ensure you’re getting the maximum savings on your AWS EC2 environment.
What are Parking Recommendations, and What Instances Does ParkMyCloud Recommend as Parkable?
In ParkMyCloud, you can park any On-Demand EC2 instance simply by selecting it – which is great when you already know what you want to park. However, sometimes, especially in larger environments, it may be difficult to determine what should be parked — or you may simply miss an instance or two. That’s why ParkMyCloud provides parking recommendations; that is, we highlight instances that may be good candidates to be assigned a parking schedule of on/off times.
Today, parking recommendations are based on keywords searched for in the instance name and tags. When you create your ParkMyCloud account, we’ll automatically recommend instances based on a few common keywords: dev, test, staging, QA, and sandbox. These particular keywords are usually associated with non-production environments that are not required to run 24×7. When those keywords are matched in instance names or tags, we’ll recommend that instance to be parked. We only recommend instances that don’t already have a schedule attached to them.
Coming soon, we will also recommend instances to park based on usage metrics, such as CPU utilization, and allow you to create policies to automatically park instances when they match your keyword or usage parameters. (Let us know if you’d like to be updated when these capabilities are rolled out!)
How to Manage What’s Recommended
Watch the video below for a demonstration of how to manage your parking recommendations:
To edit your recommendation keywords:
Log in to your ParkMyCloud account and start at the dashboard. On the Parking Recommendations bar above the list of instances, click “show” on the right side to see your recommendations. They will come up highlighted in yellow.
Click “edit keywords” to see what keywords you currently have in place. Add or remove keywords to match your environment and click “save.”
(Optional) If you want certain instances to be omitted from parking recommendations in the future, select those instances and click “ignore recommendations” in the parking recommendations bar.
To park your recommended instances, simply select them via the Bulk Action column on the left side (or individually via the Schedule column on the right side) and park as usual.