At the halfway mark of each year, CRN takes a look back at the previous six months in the channel and throughout the IT world. The Cloud Application Startups category recognizes Software as a Service (SaaS) applications that broaden notions of what can be achieved by deploying applications in the cloud while solving diverse real-world problems for all types of business users.
ParkMyCloud was selected for our ability to slash AWS bills by more than 60 percent, through our simple app with no up-front costs.
Amazon Web Services (AWS) has grown immensely and recently cracked a $10 billion annual run rate. This means AWS is growing 70 percent year-over-year, which is staggering for a company of their size. Moreover, AWS recently doubled their compute power and now boasts 10 times more power than their nearest 14 competitors combined.
Elastic Compute Cloud (EC2) makes up nearly 70 percent of AWS’s revenue. That is almost $7 billion, and it is growing at 88 percent year-over-year. A little over half of that compute power supports nonproduction workloads, such as development, testing, QA, staging, training and sandbox environments. Many of these environments run 24 x 7, even though they are not being used. It would be like leaving your car running all the time, even when it’s at home in your garage, which is insane.
So how can companies unlock savings from these non-production environments? Here are four key ways…
We pushed out a release over the weekend – most notably including the ability to park Auto Scaling Groups! Here’s what’s new.
Parking Auto Scaling Groups
You now have the ability to park full Auto Scaling Groups. You can control Auto Scaling Groups in the same way that you can individual instances – with schedules and on/off toggles.
You will see these groups included on your dashboard:
When an Auto Scaling Group is scheduled to be parked, ParkMyCloud will store the existing settings for minimum, desired, and maximum number of instances. Then, it will change those settings to Min, Desired, Max = (0,0,0) in AWS (while keeping the original values in our system) for the scheduled parking duration. Note that this means the individual instances will be terminated. When the instances are scheduled to run, we’ll set the parameters back to their original values and new instances will be spun up.
By the way, if you are using spot instances in those Auto Scaling Groups, we will be able to park those as well.
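Under the hood, the park/unpark flow described above can be sketched with a few boto3-style calls. This is a hedged illustration, not ParkMyCloud’s actual implementation – the Auto Scaling client is injected so the bookkeeping logic stands on its own, and the function names are made up:

```python
SIZE_KEYS = ("MinSize", "DesiredCapacity", "MaxSize")

def park_asg(autoscaling, asg_name):
    """Remember the group's sizing, then scale it down to (0, 0, 0)."""
    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[asg_name])["AutoScalingGroups"][0]
    saved = {k: group[k] for k in SIZE_KEYS}
    # Setting (0, 0, 0) causes the group's instances to be terminated.
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName=asg_name,
        MinSize=0, DesiredCapacity=0, MaxSize=0)
    return saved  # persist this so the group can be restored later

def unpark_asg(autoscaling, asg_name, saved):
    """Restore the saved sizing; replacement instances are launched."""
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName=asg_name, **saved)
```

The key point is that the original min/desired/max values are stored outside AWS before the group is zeroed out, so unparking can restore them exactly.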
Since you can now manage your Auto Scaling Groups in ParkMyCloud, your downloadable reports will now be based on “resources” rather than “instances” – instances and Auto Scaling Groups will be considered resources. You’ll now be able to see parking savings for your Auto Scaling Groups.
Detailed Information on Your Resources
You now have a much more detailed amount of information about each of your resources – whether instances or Auto Scaling Groups. Just click the “i” information button on the right side of the dashboard, and you’ll see a modal screen like the following:
Plus, you now have access to our shiny new User Guide!
Our CTO, Dale, presented the other day on 5 Ways to Control Your AWS Spend (or, How to Make Your CFO Happy). Check out the recording below, and comment if you have any questions we didn’t already address!
0:29 Dale’s intro to the webinar
1:11 AWS Elastic Compute Cloud (EC2)
AWS has grown immensely, and cracked a $10 billion/year run rate, growing 70% year-over-year, which is staggering for a company of their size. They recently doubled their compute power, and now boast 10x more compute power than their nearest 14 competitors combined.
EC2 makes up almost 70% of AWS’s revenue, or almost $7 billion, and is growing at 88% year-over-year. A little over half of that compute power supports nonproduction workloads – development, testing, QA, staging, training and sandbox environments – which comes to about $3.5 billion, or almost one-third of AWS’s revenue.
Many of these environments run 24 x 7, even though they are not being used. It would be like leaving your car running all the time, even when it’s at home in your garage, which is insane. Are there services which AWS provides to help you reduce your spend? Are there other ways outside of AWS?
That’s the focus of this webinar: how can we unlock savings from these non-production environments?
2:34 AWS Has Reduced Prices to Drive Demand
EC2 has come a long way since it was first introduced in 2006. There was one Purchasing Option (Pay as you go, or On-Demand), and there were only a couple of Instance Types and 1 Region. Interestingly, most of AWS’ customers were developers and most of the workloads were non-production.
Today, looking at the current generation of EC2 instances, AWS has 40 Instance Types running in 13 Regions. They actually have 4 additional regions that are about to become generally available. And, as you saw, about half of their workloads are production environments, which is pretty amazing in that short amount of time.
AWS has done a great job of making their services, especially EC2, easy to adopt. Perhaps a little too good! As customer adoption increased, so did their “monthly sticker shock”.
AWS realized early on that it’s important to keep their customers happy and their services “sticky”. So, besides a number of price cuts on their On-Demand instances, AWS introduced Reserved Instances, Spot Instances and Auto Scaling Groups to help their customers save money. These moves, along with the wealth of new services every year, has rocketed EC2 to its current levels.
AWS also has a vested interest in having their customers save money. How? By reducing waste they [AWS] avoid building new data centers, allowing them to oversubscribe the infrastructure they currently have in place, making them more profitable.
Reserved Instances, Spot Instances and Auto Scaling Groups will definitely save money in both production and non-production environments, but like many things there are tradeoffs to consider.
Each of these Instance Purchasing Options and Auto Scaling Groups could easily fill its own series of seminars. Time only permits me to cover these options at a high level, as they relate to non-production cost savings.
So, with that backdrop, let’s look at our first way to save money: EC2 Reserved Instances.
4:36 How Reserved Instances Work
A Reserved Instance is a contract or a commitment – where you agree to pay now to reserve capacity for a set period of time: 1 year or 3 years.
With that commitment, and your agreement to pay that contract upfront, partially upfront or monthly, AWS agrees to give you a discount. In general, that discount, as you will see on the next slide, increases the more you pay upfront and the longer the commitment time period.
There are also some caveats you must be aware of, if you are not already:
It is very much a use-it-or-lose-it proposition. It’s like those annoying gift cards that have a hidden monthly fee and a zero balance by the time you get around to using them. Reserved Instances can end up the same way if you don’t manage them properly.
Managing these contracts can be very complex. In fact, a whole industry of analytics applications has grown up around helping you track Reserved Instances.
These contracts are specific to a Region, Availability Zone, Instance Type (e.g., m4.large), Platform Type (e.g., Linux or Windows) and Tenancy. As you launch instances, AWS automatically (and randomly) attempts to match what is launched to the contracts you have in place.
If there is a match, they apply the benefit. If there is no match, the reservation sits unused while you still pay for it, and your ROI decreases.
The nightmare scenario is when your users are launching the types of instances for which you DON’T have contracts. You end up essentially paying twice: once for the RI’s you paid for and then for the new instances.
That said, how much can you save with Reserved Instances?
6:42 Reserved Instance Savings
I have two graphs here. These are for an m4.large Instance Type, running Linux, in the US-East-1 Region. The graph on the left is for a 1-year commitment; the graph on the right is for a 3-year commitment. Notice that there is no green bar for the 3-year term, because AWS only allows the No Upfront option for the 1-year term.
The purple bar shows the On-Demand pricing in both.
Notice that for the 1-year commitment, you’ll save between 31% and 43% in this particular case, and that the savings improves as you pay more upfront.
For example, for the longer 3-year commitment, the savings improves to between 60% and 64%, which is not bad.
However, what happens if AWS cuts their On-Demand price as they have done in the past? For example, in 2014, AWS dropped their price by 30%.
If that happens, then the savings you hoped to achieve evaporates. The longer the commitment, the greater the chance that this will happen, which is why more companies make the shorter 1-year commitment and settle for the lower savings.
That said, as long as you are aware of these caveats, the best use for Reserved Instances is in Production.
However, we can do better than that for non-production!
8:00 How Spot Instances Work
Here’s how they work:
AWS runs a “spot market” on spare EC2 capacity. Anyone familiar with derivatives markets or energy trading should be familiar with the concept.
This spare capacity is bundled in Spot Pools, based on Instance Type, Platform and Availability Zone.
You place a bid. If there is spare capacity and your bid price is above the current spot price, your request is fulfilled and your instances start. They will keep running as long as there is capacity and your bid price stays above the market price.
As we will see on the next slide, you can reap some great savings. However, there are risks involved.
If there is no spare capacity for the instance type you want, you may have to wait a very long time before your request is filled.
As soon as the market price rises above your bid price, or if they run out of capacity, then your instances terminate abruptly after a 2 minute warning.
Of course, building applications that can withstand instance termination is the best way to mitigate this. However, the spot market has led to all sorts of creative mitigation strategies, including:
Using a mixture of on-demand and spot instances with persistent requests
Avoiding the latest and greatest EC2 types and settling for older types, with more stable pricing
Being flexible in the Instance Types you use. In fact, AWS has come out with Spot Fleets: the ability to launch a mix of different Spot Instances in one request
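As a rough illustration of the bidding flow, here is what placing a spot request might look like with boto3’s `request_spot_instances` call. The client is injected, and the price, AMI and instance type are illustrative, not values from the webinar:

```python
def place_spot_bid(ec2, bid_price, ami_id, instance_type="m4.large", count=1):
    """Request spot capacity. The request is fulfilled only while spare
    capacity exists and the market price stays at or below bid_price."""
    resp = ec2.request_spot_instances(
        SpotPrice=str(bid_price),
        InstanceCount=count,
        Type="persistent",  # re-submit the request after interruptions
        LaunchSpecification={
            "ImageId": ami_id,
            "InstanceType": instance_type,
        },
    )
    return [r["SpotInstanceRequestId"] for r in resp["SpotInstanceRequests"]]
```

Note the `"persistent"` request type – that is the mechanism behind the “persistent requests” mitigation strategy mentioned above: AWS keeps re-evaluating the request after an interruption instead of dropping it.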
9:46 Spot Instance Savings
What are the potential savings with Spot?
Here is an example of an m4.large Instance Type running in Northern Virginia, running Linux.
You are looking at a 3-month price history, where the price has been quite low – about $0.014 per hour – for months.
If you had been willing to pay $0.03 per hour, your instance would have run for over 3 months.
You are billed on the Spot Price, not the Bid Price, so, mitigation strategies aside, you would have saved about 89%.
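The math behind that figure is simple. Assuming an On-Demand price of about $0.12 per hour for an m4.large at the time (an assumption for illustration, not a number from the slide):

```python
on_demand = 0.12   # $/hour, assumed m4.large Linux On-Demand price
spot      = 0.014  # $/hour, observed spot price from the history above

# You pay the spot price, not your bid, so savings compare spot to On-Demand.
savings = 1 - spot / on_demand
print(f"{savings:.0%}")  # close to the ~89% figure, under these assumed prices
```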
In fact, that’s pretty typical. Savings in the 70% to 90% range are not uncommon for Spot Instances.
10:27 Where Are Spot Instances Being Used?
So, even with the risks mentioned, Spot instances are used in both production and non-production.
They are NOT generally used in interactive production workloads, such as web applications, nor are they used in real-time production workloads.
However, they are used a lot in scientific research such as high-performance scientific computing, analytics batch jobs and in batch video processing.
In non-production, Spot Instances are used for performance and scalability testing.
11:04 How Auto Scaling Groups Work
The third way you can save money in AWS is by leveraging Auto Scaling Groups.
An auto scaling group allows you to scale the number of instances up or down automatically:
To the right is a simple web application, leveraging several web servers, sitting behind an elastic load balancer
You provide a launch configuration (one or more AMIs)
You set the minimum, desired and maximum number of instances. In my example, I set the minimum to 2 nodes, the desired to 4 and the max to 10.
You provide the CloudWatch metrics to use (e.g., CPU utilization, disk I/O, etc.) and the thresholds you want, and the system scales up and down automatically, by either a number of nodes or the percent capacity you want
The cool thing about Auto Scaling Groups is that they can leverage any or all of the Purchasing Options discussed, making them quite flexible – On-Demand, Reserved Instances, and Spot.
They can quickly scale-up to meet demand, for example if you’re a web commerce company and Christmas hits, you can scale up to hit the Christmas rush, then when it’s over, quickly scale down again to save money.
They can be used to provide fault tolerance for applications.
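The setup steps above might look roughly like this in boto3-style code. The client is injected, the sizing is the example from the slide (min 2, desired 4, max 10), and the names and policy values are made up for illustration:

```python
def build_web_asg(autoscaling, name, launch_config):
    """Create a group sized min=2 / desired=4 / max=10, plus a scale-out policy."""
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName=name,
        LaunchConfigurationName=launch_config,
        MinSize=2, DesiredCapacity=4, MaxSize=10)
    # Add one node whenever the chosen CloudWatch alarm (e.g. high CPU) fires.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName=name,
        PolicyName=f"{name}-scale-out",
        AdjustmentType="ChangeInCapacity",
        ScalingAdjustment=1)
```

In a real deployment you would also wire a CloudWatch alarm to the policy and add a matching scale-in policy, but this captures the launch-configuration / sizing / policy shape described above.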
While they can help save money in both production and non-production, the amount of savings is rather hard to pin down, as it depends heavily on the Instance Types used, whether they are On-Demand, Reserved Instances or Spot, and what the scaling policies/rules are.
That said, for Production, Auto Scaling Groups + On-Demand (backed by Reserved Instances) is probably the best bet.
For non-production, particularly if you’re doing performance and scalability testing, Auto-Scaling Groups and spot instances are the way to go.
12:58 Scheduling “On/Off” Times with Scripting
We talked about the fact that people often leave non-production environments running, even when they are not being used.
The best way to save money is to simply turn this stuff off when not in use. That is our fourth approach – and it’s easier said than done.
Why? AWS does not offer a “parked” state that’s off by default, and from our discussions with them, they have no plans to do so.
Despite the variety of Cloud Analytics platforms out there telling people to stop doing that, those platforms don’t actually do anything to act on those recommendations, and they can be costly if you don’t already own one.
So, what do people do when there is a lack of viable options?
When the going gets tough, the tough start scripting!
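A typical homegrown scheduler boils down to something like the following sketch. The on/off window logic is the whole trick; the `ec2` client is injected here and stands in for boto3’s `start_instances`/`stop_instances` calls, and the 7am–7pm weekday window is just an example:

```python
from datetime import datetime

WEEKDAYS = range(0, 5)  # Monday through Friday

def should_be_running(now, on_hour=7, off_hour=19):
    """On 7am-7pm on weekdays; off nights and weekends."""
    return now.weekday() in WEEKDAYS and on_hour <= now.hour < off_hour

def enforce_schedule(ec2, instance_ids, now=None):
    """Run periodically (e.g. from cron) to start or stop the instances."""
    now = now or datetime.now()
    if should_be_running(now):
        ec2.start_instances(InstanceIds=instance_ids)
    else:
        ec2.stop_instances(InstanceIds=instance_ids)
```

Simple enough to write in an afternoon – which is exactly the trap the next section describes.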
13:56 The Problem with Scripting
I used to be one of those people – as a recovering command line interface guy, who has done his fair share of scripting, I get the appeal:
You’re in control of your own destiny
It’s an opportunity to get your hands dirty
At the end of the process you have the satisfaction of building something from start to finish that can actually provide some cost savings.
However, scripting is just NOT cost-effective. Why?
Building it is only half the battle. You now have to maintain those scripts as the environment changes, even with the help of something like Chef or Puppet there’s added cost
If you don’t keep up with your environment, then you miss stuff you could have turned off. You then miss out on a lot of savings.
Then, your boss taps you on the shoulder and wants to know why you are wasting your time scripting this stuff when you are supposed to be working on more mission-critical work, like the applications that actually earn money for your company. So you have to come up with a way to quantify the savings, which takes time.
Heaven forbid that he likes your report, because now he is going to want to see a report every week. Who knows how long that will take.
Are you really going to have time to keep up with the changes in AWS, or add other service providers, like if your company decides to expand beyond AWS to something like Azure or Google?
And, what is the opportunity cost of not having you work on the company’s main mission? That probably makes these other costs pale in comparison!
15:38 ParkMyCloud: Purpose-Built to Save Money
So, let’s talk about the fifth and better way to save money in non-production environments, let’s talk about my company and our application, ParkMyCloud.
ParkMyCloud is purpose-built to do one thing really well: Schedule on/off times for EC2 instances WITHOUT SCRIPTING and without being a DevOps expert. We call that “Instance Parking”. It’s like NEST for the Cloud, which is what we see ourselves becoming for the public cloud.
Think of “Parked” as a new instance “state” between Running and Stopped, and it’s under scheduled control.
Depending on the instance and schedule used, ParkMyCloud can achieve savings of between 50% and 73%, making it better than Reserved Instances for non-production, without an annual commitment or an upfront payment.
It provides almost the savings of Spot Instances without the risk of abrupt instance termination.
Let’s look at this in comparison with Reserved Instances a little more closely, to show you why I think ParkMyCloud is better for non-production environments.
16:34 Reserved Instance Savings
Here is that graph I showed you before, except now we have added the ParkMyCloud savings.
Here we used a ParkMyCloud schedule where instances were ON 12 hours & OFF 12 hours on weekdays and OFF on weekends. When you do the math, that results in a downtime of 64%.
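For reference, the 64% figure falls out of simple arithmetic on that schedule:

```python
hours_per_week = 7 * 24   # 168 hours in a week
on_hours = 5 * 12         # on 12 hours each weekday, off nights and weekends

downtime = 1 - on_hours / hours_per_week
print(f"{downtime:.0%}")  # about 64% of the week spent parked
```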
To achieve that level of savings with Reserved Instances, you would have had to commit to a 3-year contract and pay the whole thing upfront.
Also, remember what happens when AWS cuts their On-Demand prices? Your Reserved Instance savings is decimated.
Not so with our application: Since there is no annual commitment nor upfront payment, you would just ride the new price curve, which would be 64% below the new On-Demand price.
And unlike Reserved Instance management, ParkMyCloud is simple to use.
17:25 Create Schedules
You create a parking schedule – as I mentioned, you can do this without scripting. You just click on the on/off times, set the proper timezone, give it a name and description, and save it…
17:41 Apply them to Non-Production Instances
Then attach it to one or more of your non-production instances in your dashboard.
In fact, we even recommend instances to park, based on criteria you provide.
17:53 Reap the Savings
Once the parking schedules are applied, we predict your savings for the next 30 days.
Leave the schedules in place and we’ll also show you your actual savings month-to-date.
18:08 Without Breaking the Bank
We do this for about $3 per instance per month.
For the folks who script, do you think you can maintain your scripts that inexpensively? From everyone we’ve talked to, from our large customers, the answer is no.
18:23 ParkMyCloud Product Demo
So I come into my environment here. The first thing, if you’ve already parked something, you’ll see what your savings is, projected over the next 30 days.
This is an environment where I’ve got 126 instances. There are a couple of things I want you to notice right away: what you’re looking at is an environment that’s not just one AWS account and not just one region. You’re actually looking at 126 instances spread out over 4 AWS accounts, spread out over all the regions, and you get all that in one view. That’s one thing that’s different between ParkMyCloud and the AWS console, that single-pane-of-glass view.
The other thing you’ll notice is that these things are organized in teams. I have 4 teams in here. You have an unlimited number of teams you can add to the platform. We use teams to organize users and instances.
19:54 Demo: Keyword Recommendations
The other thing I want you to notice is that we’re showing you live, the 30-day projected savings – that’s based on a schedule configuration we currently have in place. Here’s an example with the demo team. I’ve got some keywords here recommending that I can park some things, so let me go ahead and show you this. We give you a series of keywords to help you determine candidate instances when you first log in. We give you 6-7 keywords like dev, test, QA, staging, training, demo, things like that. I’ve added a “parkmycloud-yes” because I know my instances have that, and you can change these, delete our keywords if you don’t want them, and add your own.
20:44 Demo: Creating and Attaching Parking Schedules
In my particular case, I’ve got 96 instances here recommended for parking, and I know right away there’s a bunch of demo instances. So I’m going to go ahead and select the demo team, and take all these instances, and put them on a schedule using a bulk action. We give you a few schedules in the platform, and we allow you to add your own. In this particular case, suppose I don’t like any of these schedules. I’ll call this the acme webinar. I’ll select the time zone, and let’s say I want it on 7am – 7 pm on weekdays. I’ll come down here and select what I want off and what I want on, in this case I want it to start at 7 am and go off at 7 pm, and off on weekends. I can go ahead and create and attach that.
Immediately, you can see my forward prediction of savings has gone up commensurate with that.
22:02 Demo: Snoozing Schedules to Work Outside Normal Hours
Now suppose you have an instance that’s parked in here, and I come in on the weekend and I need to do work. I can log in to the application here, click on the toggle button. It will warn me that there’s a schedule attached and give me the option to do something, which is snooze it. I can snooze the schedule, and pick a set amount of time until this time and date. In this case, I want to run it for an hour, select that, hit okay. It’s snoozed the schedule to move it out of the way, and now it’s going out and starting the instance so I can do work. The cool thing about it is, if you then are done with your work, you can just walk away if you want to. The snooze will expire, and the schedule will kick in and park it again.
One of the unintended consequences is that some of our large customers have decided to use an “off” calendar – a schedule that makes “off” the default state for their non-production environments 24×7, rather than “on”. They’ve told their developers to just log in to the platform and snooze the schedule for the amount of time they’re going to work. As a result, they’re maximizing their savings. We thought that was so cool that we actually added that “always off” schedule as one of the defaults.
In the schedule menu, or in the tooltip for the schedule, you’ll see whether a schedule is snoozed and when the snooze will expire.
24:07 Multiple Teams Demo
We handle multiple teams but also multiple users and multiple accounts. That’s a big benefit, because we can use that construct to hide instances from people so they only see what they need to see. We can also add multiple accounts – here I have 4 AWS accounts – and at any time, do a manual ingest. We do an automatic ingest once every 6 hours.
Here’s an example. I can take this instance, select it, and move it to any of the other teams if I want to. I’ll move it from the dev team to the demo team. Now any users on the demo team can see that amongst the other instances they have.
In summary, we have talked about 5 ways to reduce costs in AWS, focusing on non-production environments:
Reserved Instances, as compared to On-Demand. They’re a little riskier – if AWS cuts their On-Demand prices, your savings erode, so there’s no protection against price drops. But the savings are pretty good: even with a 1-year contract the savings are 31-43%.
Spot Instances can save a lot, routinely 70-90%. However, because of potentially long delays in request fulfillment, termination of instances on short notice and the need for complex mitigation strategies, these are much higher risk, but there are definite use cases in both production and non-production.
Auto-Scaling Groups allow you to leverage all of the other instance options, allowing you to drive up availability and scalability in both production and non-production. However, the cost savings are difficult to pin down, as they are very configuration specific.
We talked about scheduling on/off times with scripting, but suggested that approach is not cost-effective, so it is not shown here.
We talked about ParkMyCloud, which provides better savings than Reserved Instances in non-production environments, without the need for annual commitments or upfront payments, like Reserved Instances; and without risks of Spot. The downside is that it is limited to non-production, on-demand instances. It cannot park auto scaling groups (yet).
I hope you found the information presented here to be of use.
If you haven’t tried ParkMyCloud, we offer a no-strings-attached, free trial.
Question: How easy is the configuration for ParkMyCloud? Is this something I need to install in our AWS server?
Answer: ParkMyCloud is a SaaS application that runs inside of AWS, and you don’t install anything. When you start the 30-day trial, you just enter contact information, enter an AWS credential – either IAM user or IAM role – and you’re up and running. Customers can be up and running and parking within 7 minutes.
Question: If I were using ParkMyCloud, how would I make sure other people on my team don’t park instances that I don’t want them to park?
Answer: Here’s an example. I have 4 teams here. If I looked at the environment on this Sandbox team, I would see that when Jon logs in, he would see just a few instances that he’s been allowed to see. You can use teams to hide instances from people.
Question: Can I override a schedule? For example, if ParkMyCloud shut down a server but my team is working on a release over the weekend, can I override the schedule?
Answer: Yes, if there is a parking schedule on the system right now and it’s running, and you wanted to prevent the schedule from shutting down the server, you can use the snooze button to delay the schedule action for a certain period of time. The instance stays in whatever state it was in for that period of time.
Question: What reporting do you offer? For example, if I want to show a proof of savings we’ve achieved with ParkMyCloud and I want to make sure I know which of my team members are parking which instances, what do you provide?
Answer: We allow you to download a few Excel spreadsheets that show reports of the savings. We have 4 reports in the system with customizable start and end dates. You can get a detailed cost by resource, cost summary by team, cost summary by AWS account/credential, and also a roster of your team members.
Question: You mentioned that you’re adding parking for Auto Scaling Groups. What will you do to Auto Scaling Groups?
Answer: We’re rolling out the ability to park Auto Scaling Groups. You’ll be able to click on each group and see the instances running within it, along with the group’s tags and other information. The same controls you have for individual instances – parking, on/off toggling, and schedule snoozing – will be available at the group level.
Below is the transcript of an interview with our friend Jonathan Chashper of Product Savvy about his experience in rapidly building an app, Wolfpack, using various AWS tools. From getting his team in a room and unpacking laptops, to releasing a minimum viable product (MVP) for beta testing took 14 weeks, which Jonathan attributes not only to the skill of his team but to the ease-of-use and agility they gained from AWS.
Thanks for speaking with us, Jonathan! First of all, can you tell us a little bit about Wolfpack? What is it, and why did you decide to start it?
I am a motorcycle rider. A few years ago, I went on a group ride, and very quickly, the group broke apart. Some people missed a turn, some people got stuck at a red light, and a group of six suddenly became three groups of two. It took us about half an hour to figure out where everyone was, since you need to pull over, call everyone, and then – since everyone is riding their motorcycles – wait for them to pull over and call you back. It’s one big mess.
So I thought, there has to be a technical solution to this. I decided we should build a system that would allow me to track everyone I’m riding with, so I could see where the people riding with me are at any given time. If I got disconnected from the group, I could see where they are and pull over to gather back together. This was Eureka #1.
Eureka #2 was understanding that communication is the second big problem for moving in groups. When you ride in a group, on motorcycles, you’re usually riding in a column. Let’s say you’re rider #4 and you need gas. You cannot just pull over into a gas station, because you will get separated from the group. So usually what happens is that you speed up, you try to signal to the guy at the head of the column, and you point to the gas tank, you hope he understands and actually pulls into a gas station. It’s dangerous. So this is the second problem that people have when they move in packs, and these are the two problems that Wolfpack is solving: Keeping the group together and allowing for communication during the ride.
Wolfpack is a system for moving in groups. It doesn’t have to be motorcycles, but that’s the first niche we’re releasing it for. It’s also relevant for a group of cars, or even walking on foot with ten people around you, people get separated, and so on.
So we built a system that allows you as a user to install an app on a mobile device (both iOS and Android), that will allow you to manage the groups you want to travel with. Then, once you have the groups defined, you can define a trip with a starting point and an ending point. Everyone in the group then gets a map, and everyone can hop on it and start traveling together.
Here’s WolfPack’s About video, if you’re interested:
What AWS tools did you leverage when building Wolfpack?
Wolfpack is built on AWS, and we’re using CloudFront, we’re using SNS, we’re using S3 buckets, we’re using RDS, and of course EC2 instances, load balancing, Auto Scaling Groups, all the pretty buzzwords. We use them all – even AWS IoT, actually.
Have you had any interaction with AWS?
No, we’ve done it 100% ourselves. We’ve never talked to any solutions architects or anyone at AWS. It’s that easy to use.
What Amazon is doing is unbelievable. Things that used to take months or years to accomplish, you can now accomplish in days by clicking a couple of buttons and writing a little bit of code.
Why did you choose to develop on AWS?
The ecosystem they’ve created. This is why I think AWS is awesome: they’ve identified the pain points for people who want to build software.
The basic problem they identified is the need to buy servers. That’s the very basic solution they’ve given you: you can stand up a server in two minutes, you don’t need to buy or pay ten thousand dollars out of pocket, and so on and so forth, these are the good old EC2 Instances.
Then they went step by step and they said, okay, the next problem is managing databases. Before RDS, I had to have my own database from Oracle, and you’d have to buy a solution for load balancing, a solution for failover, back-up, recovery, etc., and this would cost tens of thousands, if not hundreds of thousands of dollars. AWS took that pain away by providing RDS.
The next step was message queues. Again, in the past, we would go to IBM, we would go to Oracle, back in the day, and you would use their message queues. It was complex, one message queue didn’t work with the other, and it was a mess. So AWS created SNS to solve that.
And so on and so forth, like a domino. They have the buckets to solve the storage issue. Now the newest thing is IoT, where they understand that there’s billions of devices out there trying to send messages to each other, and very quickly, you clog the system. So AWS said, “okay, we’ll solve that problem now.” And they created the AWS IoT system which allows you to connect any device you want, very quickly, and support, I don’t know, probably billions and billions of messages. Almost for free, it doesn’t really cost anything. It’s a great system.
Have you had any challenges with AWS so far?
No, actually, no technological challenges so far. What they offer is really easy to use and understand. The one thing we do want to do is pay as little as we can for the EC2 servers, which is why we're using ParkMyCloud to schedule on/off times for our non-production servers.
Are you using any other tools for automation and DevOps?
Yes, we are using Jenkins – we have a continuous integration machine. Our testing is still manual, unfortunately.
Continuous integration is the idea that every time someone completes a piece of code, they submit it to a repository. Jenkins has a script that takes that code out of the repository, compiles everything, and deploys it. So at any given time, every time someone submits something, it's immediately ready for my QA guy to test. The need for "integration sessions" went down drastically.
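The flow described above can be sketched as a simple post-commit pipeline. This is an illustrative stand-in for what a Jenkins job script does, not the team's actual setup; the step commands (`git pull`, `make build`, `make deploy`) are hypothetical:

```python
import subprocess

# Hypothetical commands a CI job runs after each commit; in a real Jenkins
# setup these would live in the job configuration or a Jenkinsfile.
PIPELINE = [
    ["git", "pull"],    # take the latest code out of the repository
    ["make", "build"],  # compile everything
    ["make", "deploy"], # deploy to the environment QA tests against
]

def run_pipeline(runner=subprocess.run):
    """Run each step in order; stop at the first failure."""
    for step in PIPELINE:
        result = runner(step)
        if result.returncode != 0:
            return False  # red build: nothing reaches QA
    return True           # green build: ready for manual testing
```

The `runner` parameter is just a seam so the pipeline logic can be exercised without actually shelling out.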
How long has the development taken?
From the minute we put the team together until we had an MVP, we had seven two-week sprints, which is just 14 weeks. And when I say "putting the team together," I mean they went into a room and unpacked their laptops on March 1st. Fourteen weeks later, we had our MVP, which we're now using for beta testing.
And did your team have deep AWS experience, or were some of them beginners?
Some of them had a little bit of AWS experience, but most of it came from on-the-job training. If you're a software engineer, it's really easy to pick up.
On your non-production servers, where you’re using ParkMyCloud, do you know what percent of savings you’re getting?
We're running those instances 12 hours a day, 5 days a week, so 60 hours out of the 168 hours in a week. Let's see, that works out to about 65% savings. That's pretty awesome.
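As a quick sanity check on that arithmetic, a week has 24 × 7 = 168 hours, so running 60 of them means parking the other 108:

```python
# Hours the non-production instances actually run each week.
hours_per_day = 12
days_per_week = 5
running_hours = hours_per_day * days_per_week  # 60

# Instances left on around the clock are billed 24 x 7.
total_hours = 24 * 7                           # 168

# Fraction of the always-on bill that parking eliminates.
savings = 1 - running_hours / total_hours
print(f"Running {running_hours} of {total_hours} hours -> {savings:.0%} savings")
# -> Running 60 of 168 hours -> 64% savings
```

Roughly 64 percent, which matches the "about 65%" quoted above.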
It is hard to believe that this week, ParkMyCloud is one year old. Wow! Time has flown by so quickly that perhaps it is more appropriate to measure it in dog years. Throughout the past year, we've been plugging away in our little corner of the cloud world. But how did we get here?
One Year Ago: The Pivot
Prior to ParkMyCloud, we were part of another company called Ostrato. There, we built a hybrid cloud governance and management platform called cloudSM, which worked across several cloud providers (AWS, Azure, OpenStack, VMware, SoftLayer, etc.).
At the time, any platform offering "cloud analytics" seemed to get most of the attention and funding. I have, in the past, referred to these platforms as "Cloud Tattletales": they tell customers a lot about what is wrong with their environments, but don't actually do anything to help them fix it. The platform we built, cloudSM, actually helped customers do something about those problems. However, adoption was poor.
The fact that we were making little progress, despite our best efforts, raised the question: were we early? Did we build something the market was just not ready for? Perhaps what we were witnessing was a "cloud analytics wave". I figured that at some point, customers would wake up and realize that they needed to do something with all that wonderful analytics advice, like actually govern their hybrid cloud environments (the "cloud control wave"). If so, how long that would take was anyone's guess, but 3-5 years or longer was not out of the question.
Like most startup founders, we were impatient and wanted to start making a difference. It was time for something new. But what?
Bridging the Gap Between Cloud Analytics and Cloud Control
New cloud analytics companies were popping up all over the place, so it would be hard to differentiate ourselves in that market. Both waves did have something in common: cost control. So, instead of spreading ourselves thin trying to control all aspects of cloud, we decided to have a laser focus on cost control.
One thing we repeatedly observed, especially in the public cloud, was the tendency for people to leave things running all the time. In production environments that made sense, but not in non-production environments; those systems could be turned off when people went home at night. We decided to take our most popular cloudSM "global policy", parking VMs and instances on a schedule, and use it as the vehicle to deliver immediate cost savings. We would let people turn their instances off when not in use, and power them on when needed, without scripting and without being DevOps experts. We also set our sights on a single cloud provider (AWS), rather than boil the "digital ocean", and decided to make a dedicated product out of it, in a new company. The cost savings we could deliver would fit right in with the other purchasing options AWS developed, helping customers save money and contributing to AWS's ability to oversubscribe its infrastructure.
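The core of parking on a schedule is a simple on/off decision per instance. Here is an illustrative sketch, not ParkMyCloud's actual implementation; the schedule boundaries (weekdays, 7am to 7pm) are assumptions for the example:

```python
from datetime import datetime

# Hypothetical parking schedule: instances run weekdays 07:00-19:00
# and are parked (stopped) nights and weekends.
START_HOUR, STOP_HOUR = 7, 19

def should_run(now: datetime) -> bool:
    """Return True if a non-production instance should be on right now."""
    is_weekday = now.weekday() < 5  # Mon=0 .. Fri=4
    in_work_hours = START_HOUR <= now.hour < STOP_HOUR
    return is_weekday and in_work_hours

# A scheduler loop would call the cloud API based on this decision,
# e.g. stop an instance when should_run(...) flips to False.
print(should_run(datetime(2016, 7, 1, 10)))  # Friday 10:00
print(should_run(datetime(2016, 7, 2, 10)))  # Saturday 10:00
```

The whole point of a product here is that the user sets the schedule in a UI instead of writing and maintaining this kind of script per account and region.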
We modeled ourselves after Nest, the intelligent thermostat, seeking to become “Nest for the public cloud”. We wanted to change the default state for AWS non-production environments from ON to OFF. That was the genesis of ParkMyCloud and the rest, as they say, is history.
The Little Startup That Could … Save You Thousands
We launched our new company, ParkMyCloud, on July 1, 2015. We went live with our SaaS application, also called ParkMyCloud, in September 2015, and had our first customer by the end of that month. Fast forward a year to today and we have built a compelling, very simple-to-use cost control product. We're continuing to improve the application, adding new features all the time (e.g., parking entire Auto Scaling Groups, which is almost done). Customers love the savings we provide ($3 to $7 for every dollar spent). Our customers are truly global, with several in Europe and Asia, and even as far from our home base in the Washington, DC area as New Zealand. They all like our clean user interface, which allows them to see all their instances across all AWS regions and accounts. In fact, some use our interface as a proxy for the EC2 portion of the AWS console, obviating the need to add a lot of users in their AWS accounts.