How to save on AWS costs is a question that many project managers will have likely asked themselves since Amazon launched its Elastic Compute Cloud (EC2) in 2006. Indeed, as the number of organizations using the EC2 platform has grown – and the amount each organization spends on EC2 has ballooned – the question has probably been asked more frequently than you might imagine.

There are a number of answers to this question. Organizations can take advantage of Amazon's flexible pricing plans to assign instances to the most cost-effective option or, if they have the organizational foresight to plan three years ahead, use Amazon's Reserved Instances pricing model to save on AWS costs.

Where organizations use a high proportion of non-production instances, one way to save on AWS costs is to reassign internal resources to developing scheduling scripts. Although the effort spent building and maintaining scripts can offset some of the savings, scripting is a lot more reliable than asking teams to switch off their non-production instances before they go home at night.

Another way to save on AWS costs is to implement an AWS management solution from ParkMyCloud. ParkMyCloud is a lightweight Software-as-a-Service app that automates the scheduling process in a simple, user-friendly process called “parking”. The app even suggests which non-production instances are most suitable for parking – saving internal resources as well as AWS costs.

ParkMyCloud is also an exceptionally versatile cloud management solution. It enables administrators to set permission levels for development teams, provides a single-view dashboard across all accounts for easy governance, and allows for the schedule to be snoozed when developers want to access their development, testing and staging instances while they are parked.

To find out more about how to save on AWS costs with ParkMyCloud, contact us today.

Interview: ParkMyCloud Empowers Sysco Foods’ Cloud-Only Strategy

We talked with Kurt Brochu, Senior Manager of the Cloud Enablement Team at Sysco Foods, about how his company has been using ParkMyCloud to empower end users to keep costs in check with the implementation of their cloud-only strategy.

Thanks for taking the time to speak with us today. I know we chatted before at re:Invent, where you gave us some great feedback, and we’re excited to hear more about your use of ParkMyCloud since it rolled out to your other teams.

To get started, can you describe your role at Sysco and what you do?

I’m senior manager here in charge of the cloud enablement team. The focus is on public cloud offerings, where we function as the support tier for the teams that consume those services. I also have ownership of ensuring that cost containment and appropriateness of use is being performed, as well as security and connectivity, network services, authentication, and DNS.

We don’t consider ourselves IT; our department is referred to as Business Technology. Our CTO brought us on 3 or 4 years ago with the expectation that we understand the business needs, wants, and desires, and actually service them as they need, versus passively telling them that their server is up or down.

Beyond security and the dev team, the teams using cloud also include areas that are customer facing, like sales, or internal, like finance, business reporting, asset management, and the list goes on.

Tell us about your company’s cloud usage.

We’ve had our own private cloud since 2003, offered on-prem. We’ve been in public cloud since 2013. Now, our position has gone from a “cloud-first” to a “cloud-only” strategy in the sense that any new workload that comes along is primarily put in public cloud. We primarily use AWS and are adding workloads to Azure as well.

Talk to me about how cost control fits into your cloud-only strategy. How did you realize there was a problem?

We were seeing around 20% month over month growth in expenditure between our two public clouds. Our budget wasn’t prepared for that type of growth.

We realized that some of the teams that had ability to auto-generate workloads weren’t best managing their resources. There wasn’t an easy way to show the expenses in a visual manner to present them to Sysco, or to give them some means to manage the state of their workloads.

The teams were good at building other pipelines for bringing workloads online, but they didn’t have day-to-day management capabilities.

How did you discover ParkMyCloud as a solution to your cost control problem?

We first stumbled upon ParkMyCloud at the 2016 AWS re:Invent conference and were immediately intrigued but didn’t have the cycles to look into it until this past summer, when we made the switch from a cloud-first to a cloud-only strategy.

We’ve been running ParkMyCloud since the week before re:Invent in 2017. From there, we had our first presentation to our leadership team in December 2017, where we showed that the uptick in savings was dramatic. It’s leveled off right now because we have a lot of new workloads coming in, but the savings are still noticeable. We still have developers who think that their dev system always has to be on and available at will, but they don’t understand that now that we have ParkMyCloud, making it “at will” is as simple as an API call or the click of a button. I expect to see our savings grow over the rest of the calendar year.

We have 50+ teams and over 500 users on ParkMyCloud now.

That’s great to hear! So how much are you saving on your cloud costs with ParkMyCloud?

Our lifetime savings thus far is $28,000, and the tool has paid for itself pretty quickly.

We have one team who has over 40% savings on their workloads. They were spending on average about $10,000 a month, and now it’s at $5,800 because they leverage ParkMyCloud’s simplified scheduling start/stop capabilities.

What other benefits are you getting from your use of the platform?

What I really like is that we have given most of our senior directors, who actually own the budgets, access to the tool as well. It lets the senior directors, as well as the executives when I present to them, see the actual cost savings. It gives you the ability to shine light in places that people don’t like to have the light shine.

The development team at ParkMyCloud has also been very open to receiving suggestions and capabilities that will help us improve savings and increase user adoption.

That’s great, and please continue to submit your feedback and requests to us! And in that regard, have you tried our SmartParking feature to get recommended schedules based on your usage?

Yes, we have started to. When I’m asked by a team to show them how we suggest they use the tool, they get to decide whether or not to enforce it. I’ll say that they are exceedingly happy by the fact that they can go and see their usage. One developer is telling their team that the feature has to be on at all times.

Are there any other cost savings measures that you use in conjunction with ParkMyCloud, or in addition to it?

We pull numbers and look at Amazon’s best prices guide for sizing. We also take the recommendations from ParkMyCloud and we cross compare those.

Do you have any other feedback for us?

The magic of ParkMyCloud is that it empowers the end user to make decisions for the betterment of business, and gives us the needed visibility to do our jobs effectively. That’s the bottom line. Each user has a decision: I can spend money on wasted resources or I can save it where I can and apply the savings to other projects. Once you start to understand that, then you have that “AHA” moment.

Before using ParkMyCloud, most developers had no awareness of the expense of their workloads. This tool allows me to unfilter that data so they can see, for example: this workload is $293 a month, every month. If you look at your entire environment, you’re spending $17,000 a month, but if you take it down just for the weekend, you could be saving $2-3,000 a month or more depending on how aggressive you want to be, without hurting your ability to support the business. It’s that “AHA” moment that is satisfying to watch.

That’s what we noticed immediately when we looked at the summary reports – the uptick that appears right after you have these presentations with the team makes your heart feel good.

Well thank you Kurt, again we really appreciate you taking the time to speak with us.

Thank you.


Interview: Qcentive Saves Significant Amounts on AWS while Enabling Cloud Computing in Healthcare

We talked with Bill Gullicksen, Director of IT at Qcentive, about how the company is using ParkMyCloud to save money on their AWS costs while enabling cloud computing in healthcare.

Thanks for taking the time to speak with us today. Can you start by telling me about Qcentive and how you are using the cloud?

We are a 2-year-old healthcare startup founded through the venture capital arm of Blue Cross Blue Shield of Massachusetts (BCBSMA). We build systems for the healthcare industry to help reduce costs in healthcare and provide efficiencies. Our cloud-based payment platform will facilitate the development and management of value-based contracts between healthcare companies. We are excited to be one of the earliest vendors authorized to take healthcare information and move it securely to the cloud.

What do you think made Qcentive stand apart as the best option for moving data to the cloud?

Healthcare has historically been cloud-averse due to issues like privacy and security concerns. In order to prove the use case for cloud computing in healthcare, we needed to build out a prototype and go through many months of meetings–producing artifacts to prove that we could move data to the cloud in a HIPAA/HITECH-compliant and secure manner.

We’ve recently released our first prototype module of the application by taking years of patient and healthcare contract information, loading it all into AWS, and then putting our application on top of it. Our application allows our health plan customers and their value-based contracting provider partners to analyze healthcare claim records, emergency room visits, etc. and to quickly calculate how to potentially realize savings in those areas.

So as you’re helping healthcare companies transition to the cloud, how did you come to find ParkMyCloud as a useful tool for your mission?

We had a few architects just going to town on AWS during the first year we were in business. They were building away, and all of a sudden our monthly AWS costs began to ramp up. We were spending a lot of money on Amazon and we didn’t even have a working application yet!

Last summer I was put in charge of our AWS operations and was asked to address our AWS costs. I asked, “what can we do to get some of these costs under control?” We started out with some rightsizing exercises and scaled some stuff back and that got us some savings. We found areas where we have had some stability and used Reserved Instances there, allowing us to get a 30-40% discount, but we didn’t want to do long-term commitments so we only did those for a year.

For the remaining instances, I realized we pay by the minute and we really don’t need to be running instances 24/7. That’s when we started thinking about how to schedule instances to shut down. I could do that and turn them off with AWS tools, but then telling an instance to turn itself back on at 6 in the morning–I didn’t have a way to do that. And that’s when I found out about ParkMyCloud and said this looks perfect – we can schedule instances to get them running 12 hours a day, 5 days a week instead of 24/7 and we’ll probably cut our costs in half.

Have you discovered any other benefits while using ParkMyCloud?

ParkMyCloud was the perfect tool for what we needed at the time and it also gave us a side benefit where we could give developers, QA people, and even data analysts and business folks the ability to turn an instance off when they’re done, or turn it on without having to write a bunch of complex policies within AWS.

Before, if we only wanted certain people to be able to manipulate a handful of instances, I had to put those instance IDs in the policies. Instance IDs frequently change, so running custom policies was taking a lot of overhead and we got the benefit from ParkMyCloud of just assigning them teams. Now, whether the instance IDs change or not, there’s no extra work for our IT team.

That’s why we chose ParkMyCloud and why we’ve been using it for 6-7 months now. For me it was great, very simple to set up, simple to use, easy for non-technical users and with very little effort from me and my technical staff, so it’s been perfect.

Great. So it seems like you were using a good mix of different cost savings efforts between the reserved instances, the rightsizing, and ParkMyCloud. Is there anything else you’re doing to manage cloud infrastructure costs?

Those are the bulk of it. We have other cloud-tracking subscriptions that we use sometimes. They’re very simple, but I just use them for looking at the daily spend, seeing if there are any unexpected spikes, things like that. I can use them for finding resources that are no longer being used. It’s nice to have for identifying orphaned volumes and gives me a simple, easy way to clean some of that up, but we get our biggest use out of ParkMyCloud.

What percent of your resources are currently on ParkMyCloud schedules?

We’ve taken some schedules off just to keep some systems up for a while, but our rule of thumb has been to put a schedule and a team on everything. Even if a schedule is running 24/7/365, we want to at least have a schedule on it and know that it’s a conscious business decision we made to keep that up versus “it just slipped through the cracks and we never looked at it.”

About how many people in your team or organization are using ParkMyCloud?

Somewhere around 15 or so users.

Where do those users sit within your organization?

I’m Director of IT and we’ve got a Director of DevOps and a DevOps engineer–we are the three technical resources around infrastructure. Then we’ve got around 10 or so software developers that all have access so they can spin up their dev environments and spin them down when they’re not working.

We have a flexible schedule. Some of our software developers do their best coding at 3 in the morning. If they get up with an idea and they want to code, they need the ability to start up instances, do what they need to do, and then turn them off when they’re done. So they’re all in there, our QA department and some business analysts that do a lot of data analysis and database querying are also using ParkMyCloud.

That makes sense. So, how much are you saving on your AWS bills using ParkMyCloud?

Our initial savings with ParkMyCloud were significant and the product paid for itself quickly. Based on business needs, our costs can escalate rapidly so we estimate we’re saving up to 20% on our costs on a monthly basis.

We’ve got a lot of instances that we keep normally parked now and we only turn them on when there’s a workload to run. And then we’ve got probably another 40 or 50% of our instances that only run Monday through Friday, from 7:00 AM to 7:00 PM, so we’re getting that savings there which to me is bigger savings than dealing with Reserved Instances.

Things like Reserved Instances look great the day you buy them, but then the first time you have to change the size on something, all of a sudden you’ve got Reserved Instances that you’re not using anymore. With ParkMyCloud that never happens, it’s all savings.

How did you first hear about ParkMyCloud?

Last summer we were interviewing an external technology company, G2 Technologies in Boston, that was being brought in to augment our CI/CD process. While they were in, we asked, “hey, do you know any good methods for doing scheduling?” – and they said take a look at ParkMyCloud.

Any other feedback for us?

I was surprised how simple ParkMyCloud was to get up and running. It was a couple of hours from signing up for the trial to having most of the work done and realizing savings, which was great. The release of your mobile app has been fantastic because it’s nice if we need to turn something on for somebody that doesn’t have access on a Saturday when I’m 30 miles away from my computer. I can do it anywhere with the mobile app.

Glad to hear it! I think that wraps things up for now. Thank you Bill, I appreciate your time.

You’re welcome!


Don’t Let Your Server Patching Schedule Get in the Way of Cost Control

Don’t let your server patching schedule get in the way of saving money. The idea of minimizing cloud waste was a very new concept two years ago, but as cloud use has grown, so has the need for minimizing wasted spend. CFOs now demand that the cloud operations teams turn off idle systems in the face of rising cloud bills, but the users of these systems are the ones that have to deal with servers being off when they need them.

Users of ParkMyCloud are able to overcome some of the common objections to scheduling non-production resources. The most common objection is, “What if I need the server or database when it’s scheduled to be off?” That’s why ParkMyCloud offers the ability to “snooze” the schedule, which is a temporary override that lets you choose how long you need the system for. This snooze can be done easily from our UI, or through alternative methods like our API, mobile app, or Slackbot.

A related objection concerns how your parking schedule can work with your server patching schedule. The most common way of dealing with patching in ParkMyCloud is to use our API. The workflow would be to log in through the API, get a list of the resources, then choose which resources you want and “snooze” the schedule for a couple of hours, or however long the patching takes. Once the schedule is snoozed, you can toggle the instance on, then do the patching. After the patching is complete, you can either cancel the snooze to go back to the original schedule or wait for the snooze to finish and time out. If you have an automated patching tool that can make REST calls, this can be an easy way to patch on demand with minimal work.
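For teams that want to script this, here is a minimal sketch of that snooze-then-patch workflow in Python. The endpoint paths, auth header, and payload fields below are illustrative assumptions rather than ParkMyCloud's documented API – check the API documentation for the real routes:

```python
# A sketch of the snooze-then-patch workflow, assuming hypothetical endpoints.
import requests

BASE = "https://console.parkmycloud.com/api"   # assumed base URL
HEADERS = {"X-Auth-Token": "YOUR_API_KEY"}     # assumed auth header

# 1. List resources and pick the ones to patch (name filter is an example)
resources = requests.get(f"{BASE}/resources", headers=HEADERS).json()
targets = [r for r in resources if r["name"].startswith("web-")]

for r in targets:
    # 2. Snooze the schedule for 2 hours so parking won't fight the patch window
    requests.post(f"{BASE}/resources/{r['id']}/snooze",
                  headers=HEADERS, json={"hours": 2})
    # 3. Toggle the instance on for patching
    requests.post(f"{BASE}/resources/{r['id']}/start", headers=HEADERS)

# ...run the patching... afterwards, either cancel the snooze via the API
# or simply let it time out and return to the original schedule.
```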

If you’re on a weekly server patching schedule, you could also just implement the patch times into your pre-set schedules so that the instances turn on, say, at 2:00 a.m. on Wednesdays. By plugging this into your normal schedules, you can still save money during most off-hours, but have the instances on when the patch window is open. This can be a great way to do weekly backups as well, with minimal disruption.

Plugging ParkMyCloud into external tools and processes like this is the best way to get every developer and CloudOps engineer on board with continuous cost control. By addressing these objections, you can reduce your cloud costs and be the hero of your organization. Start up a free trial today to see these plug-ins in action!


AWS Neptune Preview – Amazon’s Graph Database Service

At the AWS DC Meetup we organized last week, we got a preview of AWS Neptune, Amazon’s new managed graph database service. It was announced at AWS re:Invent 2017, is currently in preview and will launch for general availability this summer.

What is a graph database?

A graph database is a database optimized to store and process highly connected data – in short, it’s about relationships. The data structure for these databases consists of vertices and directed links called edges.

Use cases for such highly-connected data include social networking, restaurant recommendations, retail fraud detection, knowledge graphs, life sciences, and network & IT ops. For a restaurant recommendations use case, for example, you may be interested in the relationships between various users, where those users live, what types of restaurants those users like, where the restaurants are located, what sort of cuisine they serve, and more. With a graph database, you can use the relationships between these data points to provide contextual restaurant recommendations to users.

Tired of SQL?

If you’re tired of SQL, AWS Neptune may be for you. A graph database is fundamentally different from SQL. There are no tables, columns, or rows – it feels like a NoSQL database. There are only two data types: vertices and edges, both of which have properties stored as key-value pairs.

AWS Neptune is fully managed, which means that database management tasks like hardware provisioning, software patching, setup, configuration, and backups are taken care of for you.

It’s also highly available, with replicas across multiple availability zones. This is very similar to Aurora, the relational database from Amazon, in its architecture and availability.

Neptune supports Property Graph and W3C’s RDF. You can use these to build your own web of data sets that you care about, and build networks across the data sets in the way that makes sense for your data, not with arbitrary presets. You can do this using the graph models’ query languages: Apache TinkerPop Gremlin and SPARQL.
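To make the query-language point concrete, here is a small hypothetical Gremlin traversal in Python (via the gremlinpython package), in the spirit of the restaurant-recommendation example above. The endpoint, labels, and property names are placeholders:

```python
# Sketch: querying a graph with Apache TinkerPop Gremlin from Python.
# Neptune exposes a WebSocket Gremlin endpoint on port 8182.
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

conn = DriverRemoteConnection(
    "wss://your-neptune-endpoint:8182/gremlin", "g")  # placeholder endpoint
g = traversal().withRemote(conn)

# Restaurants liked by Alice's friends (labels/properties are made up)
recs = (g.V().has("person", "name", "alice")
         .out("friend")      # follow "friend" edges to other people
         .out("likes")       # then "likes" edges to restaurant vertices
         .values("name")
         .toList())
print(recs)
conn.close()
```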

There is no cost to use Neptune during the preview period. Once it’s generally available, pricing will be based on On-Demand EC2 instances – which means ParkMyCloud will be looking into ways to assist Neptune users with cost control.

If you’re interested in the new service, you can check out more about AWS Neptune and sign up for the preview.


7 Ways Cloud Services Pricing is Confusing

Beware the sticker shock – cloud services pricing is nothing close to simple, especially as you come to terms with the dollar amount on your monthly cloud bill. While cloud service providers like AWS, Azure, and Google were meant to provide compute resources to save enterprises money on their infrastructure, cloud services pricing is complicated, messy, and difficult to understand. Here are 7 ways that cloud providers obscure pricing on your monthly bill:  

1 – They use varying terminology

For the purpose of this post, we’ll focus on the three biggest cloud service providers: AWS, Azure, and Google. Between these three cloud providers alone, different terminology is used for just about every component of the services offered.

For example, when you think of a virtual machine (VM), that’s what AWS calls an “instance,” Azure calls a “virtual machine,” and Google calls a “virtual machine instance.” If you have a group of these different machines, or instances, in Amazon and Google they’re called “auto-scaling” groups, whereas in Azure they’re called “scale sets.” There’s also different terminology for their pricing models. AWS offers on-demand instances, Azure calls it “pay as you go,” and Google refers to it as “sustained use.” You’ve also got “reserved instances” in AWS, “reserved VM instances” in Azure, and “committed use” in Google. And you have spot instances in AWS, which are the same as low-priority VMs in Azure, and preemptible instances in Google.

2 – There’s a multitude of variables

Operating systems, compute, network, memory, and disk space are all different factors that go into the pricing and sizing of these instances. Each of these virtual machine instances also has different categories: general purpose, compute optimized, memory optimized, disk optimized, and various other types. Then, within each of these different instance types, there are different families. In AWS, the cheapest and smallest instances are in the “t2” family; in Azure, they’re called the “A” family. On top of that, there are different generations within each of those families – so in AWS there’s t2, t3, m2, m3, m4 – and within each of those processor families, different sizes (small, medium, large, and extra large). So there’s lots of different options available. Oh, and lots of confusion, too.

3 – It’s hard to see what you’re spending

If you aren’t familiar with AWS, Azure, or Google Cloud’s consoles or dashboards, it can be hard to find what you’re looking for. To find specific features, you really need to dig in, but even just figuring out the basics – how much you’re currently spending, and predicting how much you will be spending – can be very hard. You can go with the option of building your own dashboard by pulling in from their APIs, but that takes a lot of upfront effort, or you can purchase an external tool to manage overall cost and spending.

4 – It’s based on what you provision…not what you use

Cloud services can be priced on a per-hour, per-minute, or per-second basis. If you’re used to the on-prem model where you just deploy things and leave them running 24/7, then you may not be used to this kind of pricing model. But when you move to the cloud’s on-demand pricing models, everything is based on the amount of time you use it.

When you’re charged per hour, it might seem like 6 cents per hour is not that much, but after running instances for 730 hours in a month, it turns out to be a lot of money. This leads to another sub-point: the bill you get at the end of the month doesn’t come until 5 days after the month ends, and it’s not until that point that you get to see what you’ve used. As you’re using instances (or VMs) during the time you need them, you don’t really think about turning them off, and it’s easy to lose track of servers. We’ve had customers who have servers in different regions, or on different accounts that don’t get checked regularly, and they didn’t even realize they’d been running all this time, charging up bill after bill.
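The arithmetic is worth spelling out. A quick back-of-the-envelope calculation (the fleet size here is just an example):

```python
# Back-of-the-envelope math for the "6 cents per hour" example above.
hourly_rate = 0.06          # USD per hour
hours_per_month = 730       # ~24 hours x ~30.4 days

one_instance = hourly_rate * hours_per_month
fleet_of_50 = one_instance * 50   # hypothetical fleet size

print(f"One instance:  ${one_instance:,.2f}/month")   # $43.80
print(f"Fifty of them: ${fleet_of_50:,.2f}/month")    # $2,190.00
```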

You might also be overprovisioning or oversizing resources – for example, provisioning multiple extra large instances thinking you might need them someday or use them down the line. If you’re used to overprovisioning everything by twice as much as you need, it can really come back to bite you when you look at the bill: you’ve been running resources without utilizing them, but are still getting charged for them – constantly.

5 – They change the pricing frequently

Cloud services pricing changes quite often. So far, prices have been trending downward, so things have been getting cheaper over time due to factors like competition and increased utilization of data centers. However, don’t jump to the conclusion that prices will never go up.

Frequent price changes make it hard to map out usage and costs over time. Amazon has already changed their prices more than 60 times since they’ve been around, making it hard for users to plan a long-term approach. And for instances that have been deployed for a long time, prices aren’t displayed in a way that is easy to track, so you may not even realize that there’s been a price change if you’ve been running the same instances on a consistent basis.

6 – They offer cost savings options… but they’re difficult to understand (or implement)

In AWS, there are some cost savings measures available for shutting things down on a schedule, but in order to run them you need to be familiar with native AWS tools like Lambda and RDS. If you’re not already familiar, it may be difficult to actually implement this just for the sake of getting things to turn off on a schedule.
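To illustrate the point, here is a rough sketch of the kind of Lambda function you would have to write and maintain yourself just to stop instances on a schedule, triggered by a CloudWatch Events cron rule. The "schedule" tag convention is made up for this example:

```python
# Sketch of a self-maintained scheduler Lambda (tag name is an assumption).
import boto3

ec2 = boto3.client("ec2")

def lambda_handler(event, context):
    # Find running instances opted in to scheduling via a tag
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:schedule", "Values": ["office-hours"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instance_ids = [i["InstanceId"]
                    for r in reservations for i in r["Instances"]]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return {"stopped": instance_ids}
```

And that only covers stopping; starting things back up, handling exceptions, and giving users visibility all require more of the same.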

One of the other things you can use in AWS is Reserved Instances, or with Azure you can pay upfront for a full year or two years. The problem: you need to plan ahead for the next 12 to 24 months and know exactly what you’re going to use over that time, which sort of goes against the nature of cloud as a dynamic environment where you can just use what you need. Not to mention, going back to point #2, the obscure terminology for spot instances, reserved instances, and what the different sizes are.

7 – Each service is billed in a different way

Cloud services pricing shifts between IaaS (infrastructure as a service), where VMs are billed one way, and PaaS (platform as a service), which is billed another way. Different mechanisms for billing can be very confusing as you start expanding into the different services that cloud providers offer.

As an example, Lambda functions in AWS are charged based on the number of requests for your functions and their duration (the time it takes for your code to execute, scaled by the memory you allocate). The Lambda free tier includes 1M free requests per month and 400,000 GB-seconds of compute time per month; beyond that, you pay $0.20 per 1M requests on the request side, and $0.00001667 for every GB-second used on the duration side – simple, right? Not so much.
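Not so much, indeed. Here is that pricing worked through in code, with made-up usage numbers (5M requests a month, 512MB of memory, 200ms average duration):

```python
# Worked example of the Lambda pricing above; usage numbers are hypothetical.
requests_per_month = 5_000_000
gb = 512 / 1024                 # memory in GB
seconds = 0.2                   # average duration per invocation

billable_requests = max(requests_per_month - 1_000_000, 0)   # 1M free
request_cost = billable_requests / 1_000_000 * 0.20

gb_seconds = requests_per_month * gb * seconds               # 500,000 GB-s
billable_gb_seconds = max(gb_seconds - 400_000, 0)           # 400K free
compute_cost = billable_gb_seconds * 0.00001667

print(f"Requests: ${request_cost:.2f}, Compute: ${compute_cost:.2f}")
# Requests: $0.80, Compute: $1.67
```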

Another example comes from the databases you can run in Azure. Databases can run as a single server or can be priced by elastic pools, each with different tables based on the type of database, then priced by storage, number of databases, etc.

With Google Kubernetes clusters, you’re getting charged per node in the cluster, and each node is charged based on size. Nodes are auto-scaled, so price will go up and down based on the amount that you need. Once again, there’s no easy way of knowing how much you use or how much you need, making it hard to plan ahead.

What can you do about it?

Ultimately, cloud service offerings are there to help enterprises save money on their infrastructures, and they’re great options IF you know how to use them. To optimize your cloud environment and save money on costs, we have a few suggestions:

    • Get a single view of your billing. You can write your own scripts (but that’s not the best answer) or use an external tool.  
    • Understand how each of the services you use is billed. Download the bill, look through it, and work with your team to understand how you’re being billed.
    • Make sure you’re not running anything you shouldn’t be. Shut things down when you don’t need them, like dev and test instances on nights and weekends.
    • Review regularly to plan out usage and schedules as much as you can in advance.
    • Put governance measures in place so that users can only access certain features, regions, and limits within the environment. 

Cloud services pricing is tricky, complicated, and hard to understand. Don’t let this confusion affect your monthly cloud bill. Try ParkMyCloud for an automated solution to cost control.


How to Use Terraform Provisioning and ParkMyCloud to Manage AWS

Recently, I’ve been on a few phone calls where I get asked about cost management of resources built in AWS using Terraform provisioning. One of the great things about working with ParkMyCloud customers is that I get a chance to talk to a lot of different technical teams from various types of businesses. I get a feel for how the modern IT landscape is shifting and trending, plus I get exposed to the variety of tools that are used in real-world use cases, like Atlassian Bamboo, Jenkins, Slack, Okta, and Hashicorp’s Terraform.

Terraform seems to be the biggest player in the “infrastructure as code” arena. If you’re not already familiar with it, usage is fairly straightforward and the benefits quickly become apparent. You take a text file, use it to describe your infrastructure down to the finest detail, then run “terraform apply” and it just happens. Then, if you need to change your infrastructure, or revoke any unwanted changes, Terraform can be updated or rolled back to a known state. By working with AWS, Azure, VMware, Oracle, and many more, Terraform can be your one place for infrastructure deployment and provisioning.

How to Use Terraform Provisioning and ParkMyCloud with AWS Autoscaling Groups

I’ve talked to a few customers recently who utilize Terraform as their main provisioning tool, while ParkMyCloud is their ongoing cloud governance and cost control tool. Using these two systems together is great, but one main point of confusion comes in with AWS’s Auto Scaling Groups. The question I usually get asked is how Terraform handles the changes that ParkMyCloud makes when scheduling ASGs, so let’s take a look at the interaction.

When ParkMyCloud “parks” an ASG, it sets the Min/Max/Desired to 0/0/0 by default, then sets the values for “started” to the values you had originally entered for that ASG. If you run “terraform apply” while the ASG is parked, Terraform will complain that the Min/Max/Desired values are 0 and will change them to the values you state. Then, when ParkMyCloud notices this the next time it pulls from AWS (which is every 10 minutes), it will see that it is started and stop the ASG as normal.
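In API terms, "parking" an ASG amounts to zeroing out its capacity values, roughly like the boto3 sketch below. The group name and restored values are placeholders; ParkMyCloud records and restores the original values for you:

```python
# Sketch: what parking/unparking an ASG looks like at the API level.
import boto3

asg = boto3.client("autoscaling")

# Park: scale the group to nothing
asg.update_auto_scaling_group(
    AutoScalingGroupName="dev-asg",       # placeholder group name
    MinSize=0, MaxSize=0, DesiredCapacity=0)

# Unpark: restore the values recorded before parking (example values)
asg.update_auto_scaling_group(
    AutoScalingGroupName="dev-asg",
    MinSize=1, MaxSize=4, DesiredCapacity=2)
```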

If you change the value of the Min/Max/Desired in Terraform, this will get picked up by ParkMyCloud as the new “on” values, even if the ASG was parked when you updated it. This means you can keep using Terraform to deploy and update the ASG, while still using ParkMyCloud to park the instances when they’re idle.

How to Use Terraform to Set Up ParkMyCloud

If you currently leverage Terraform provisioning for AWS resources but don’t have ParkMyCloud connected yet, you can also utilize Terraform to do the initial setup of ParkMyCloud. Use this handy Terraform script to create the necessary IAM Role and Policy in your AWS account, then paste the ARN output into your ParkMyCloud account for easy setup. Now you’ll be deploying your instances as usual using Terraform provisioning while parking them easily to save money!


$12.9 Billion in Wasted Cloud Spend This Year

Wake up and smell the wasted cloud spend. The cloud shift is not exactly a shift anymore, it’s an evident transition. It’s less of a “disruption” to the IT market and more of an expectation. And with enterprises following a visible path headed towards the cloud, it’s clear that their IT spend is going in the same direction: up.

Enterprises have a unique advantage as their cloud usage continues to grow and evolve. The ability to see where IT spend is going is a great opportunity to optimize resources and minimize wasted cloud spend, and one of the best ways to do that is by identifying and preventing cloud waste.

So, how much cloud waste is out there and how big is the problem? What difference does this make to the enterprises adopting cloud services at an ever-growing rate? Let’s take a look.

The State of the Cloud Market in 2018

The numbers don’t lie. For a real sense of how much wasted cloud spend there is, the first step is to look at how much money enterprises are spending in this space at an aggregate level.

Gartner’s latest IT spending forecast predicts that worldwide IT spending will reach $3.7 trillion in 2018, up 4.5 percent from 2017. Of that number, the portion spent in the public cloud market is expected to reach $305.8 billion in 2018, up $45.6 billion from 2017.

The last time we examined the numbers back in 2016, the global public cloud market was sitting at around $200 billion and Gartner had predicted that the cloud shift would affect $1 trillion in IT spending by 2020. Well, with an updated forecast and over $100 billion later, growth could very well exceed predictions.

The global cloud market and the portion attributed to public cloud spend are what give us the ‘big picture’ of the cloud shift, and it just keeps growing, and growing, and growing. You get the idea. To start understanding wasted cloud spend at an organizational level, let’s break this down further by looking at an area that Gartner says is driving a lot of this growth: infrastructure as a service (IaaS).

Wasted Cloud Spend in IaaS

As enterprises increasingly turn to cloud service providers like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) to provide compute resources for hosting components of their infrastructures, IaaS plays a significant role in both cloud spend and cloud waste.

Of the forecasted $305.8 billion public cloud market for 2018, $45.8 billion will be spent on IaaS, ⅔ of which goes directly to compute resources. This is where we get into the waste part:

  • 44% of compute resources are used for non-production purposes (i.e. development, staging, testing, QA)
  • The majority of servers used for these functions only need to run during the typical 40-hour work week (Monday through Friday, 9 to 5) and do not need to run 24/7
  • Cloud service providers are still charging you by the hour (or minute, or even by the second) for providing compute resources

The bottom line: for the other 128 hours of the week (or 7,680 minutes, or 460,800 seconds) – you’re getting charged for resources you’re not even using. And that’s a large percentage of your waste!
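The math behind that claim is simple enough to check:

```python
# The arithmetic behind the "other 128 hours" figure above.
hours_per_week = 24 * 7          # 168
work_hours = 40                  # Monday through Friday, 9 to 5
idle_hours = hours_per_week - work_hours

print(idle_hours)                            # 128
print(f"{idle_hours / hours_per_week:.0%}")  # ~76% of the week
```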

What You Can Do to Prevent Wasted Cloud Spend

Turn off your cloud resources.

The easiest and fastest way to save money on your idle cloud resources is simply not to use them. In other words, turn them off. When you think of the cloud as a utility like electricity, it’s as simple as turning off the lights every night and when you’re not at home. With ParkMyCloud you can automatically schedule your cloud resources to turn off when you don’t need them, like nights and weekends, and eliminate 65% or more on your monthly bill with AWS, Azure, and Google. Wham, bam.

Turn on your SmartParking.

You already know that you don’t need your servers to be on during nights and weekends, so you shut them off. That’s great, but what if you could save even more with valuable insight and information about your exact usage over time?

With ParkMyCloud’s new SmartParking feature, the platform will track your utilization data, look for patterns and create recommended schedules for each instance, allowing you to turn them off when they’re typically idle.

There’s a lot of cloud waste out there, but there’s also something you can do about it: try ParkMyCloud today.


The Cost of Cloud Computing Is, in Fact, Dropping Dramatically

You might read the headline statement that the cost of cloud computing is dropping and say “Well, duh!”. Or maybe you’re on the other side of the fence. A coworker recently referred me to a very interesting blog on the Kapwing site that states cloud costs aren’t actually dropping dramatically. The author defines “dramatically” based on the targets set by Moore’s Law or the more recently proposed Bezos’ Law, which states that “a unit of [cloud] computing power price is reduced by 50 percent approximately every three years.” The blog focused on the cost of the Google Cloud Platform (GCP) n1-standard-8 machine type, and illustrated historical data for the Iowa region:

Date            N1-standard-8 Cost per Hour
January 2016    $0.40
January 2017    $0.40
January 2018    $0.38

The Kapwing blog also illustrates that the GCP storage and network egress costs have not changed at all in three years. These figures certainly add up to a conclusion that Bezos’ Law is not working…at least not for GCP.

Whose law is it anyway?

If we turn this around and try to apply Bezos’ Law to, well, Bezos’ Cloud we see a somewhat different story.

The approach to measuring AWS pricing changes needs to be a bit more systematic than for GCP, as the AWS instance types have been evolving quite a bit over their history. This evolution is shown by the digit that follows the first character in the instance type, indicating the version or generation number of the given instance type. For example, m1.large vs. m5.large. These are similar virtual machines in terms of specifications, with 2 vCPUs and about 8GB RAM, but the m1.large was released in October 2007, and the m5.large in November 2017. While the “1” in the GCP n1-standard-8 could also be a version number, it is still the only version I can see back to at least 2013. For AWS, changes in these generation numbers happen more frequently and likely reflect the new generations of underlying hardware on which the instance can be run.

Show me the data!

In any event, when we make use of the Internet Archive to look at pricing changes of the specific instance type as well as the instance type “family” as it evolves, we see the following (all prices are USD cost per hour for Linux on-demand from the us-east-1 region in the earliest available archived month of data for the quoted year):

Year    m1.large   m3.large   m4.large   m5.large   Reduction from previous year/generation   3-year reduction
2008    $0.40
2009    $0.40                                         0%
2010    $0.34                                        -18%
2011    $0.34                                         0%                                       -18%
2012    $0.32                                        -6%                                       -25%
2013    $0.26                                        -23%                                      -31%
2014    $0.24      $0.23                             -13%                                      -46%
2015    $0.175     $0.14                             -64%                                      -103%
2016    $0.175     $0.133     $0.120                 -17%                                      -80%
2017    $0.175     $0.133     $0.108                 -11%                                      -113%
2018*   $0.175     $0.133     $0.100     $0.096      -13%                                      -46%

*Latest Internet Archive data from Dec 2017 but confirmed to match current Jan 2018 AWS pricing.

FWIW: The second generation m2.large instance type was skipped, though in October 2012 AWS released the “Second Generation Standard” instances for Extra Large and Double Extra Large – along with about an 18% price reduction for the first generation.

To confirm that we can safely compare these prices, we need to look at how the mX.large family has evolved over the years:

Instance type Specifications
m1.large (originally defined as the “Standard Large” type) 2vCPU w/ECU of 4, 7.5GB RAM
m3.large 2vCPU w/ECU of 6.5, 7.5GB RAM
m4.large 2vCPU w/ECU of 6.5, 8GB RAM
m5.large 2vCPU w/ECU of 10, 8GB RAM

A couple of notes on this:

  • ECU is “Elastic Compute Unit” – a standardized measure AWS uses to support comparison between CPUs on different instance types. At one point, 1 ECU was defined as the compute power of a 1GHz CPU circa 2007.
  • I realize that the AWS mX.large family is not equivalent to the GCP n1-standard-8 machine type mentioned earlier, but I was looking for an AWS machine type family with a long history and fairly consistent configuration (and this is not intended to be a GCP vs AWS cost comparison).

The drop in the cost of cloud computing looks kinda dramatic to me…

The 3-year reduction figures average -58%, so Bezos’ Law is looking pretty good. (And there is probably an interesting grad-student dissertation somewhere about how serverless technologies fit into Bezos’ Law…) When you factor the m1.large ECU of 4 versus the m5.large ECU of 10 into the picture, more than doubling the net computing power, one could easily argue that Bezos’ Law significantly understates the situation. Overall, there is a trend here of not just significantly declining prices, but also greatly increased capability (higher ECU and more RAM), certainly reflecting an increased value to the customer.

So, why has the pricing of the older m1 and m3 generations gone flat while remaining so much more expensive? On the one hand, one could imagine that the older generations of underlying hardware consume more rack space and power, and thus cost Amazon more to operate. On the other hand, they have LONG since amortized this hardware cost, so maybe they could drop the prices. The reality is probably somewhere in between, where they are trying to motivate customers to migrate to newer hardware, allowing them to eventually retire the old hardware and reuse the rack space.

Intergenerational Rightsizing

There is definite motivation here to do a lateral inter-generation “rightsizing” move. We most commonly think of rightsizing as moving an over-powered/under-utilized virtual machine from one instance size to another, like m5.xlarge to m5.large, but intergenerational rightsizing can add up to some serious savings very quickly. For example, an older m3.large instance could be moved to an m5.large instance in about 1 minute or less (I just did it in 55 seconds: Stop Instance, Change Instance Type, Start Instance), immediately saving 39%. This can frequently be done without any impact to the underlying OS. I essentially just pulled out my old CPU and RAM chips and dropped in new ones. Note that it is not necessarily this easy for all instance types – some older AMIs can break the transition to a newer instance type because of network or other drivers, but it is worth a shot, and the AWS Console should let you know if the transition is not supported (of course, as always, make a snapshot first!).
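That stop/change/start sequence is easy to script, too. A boto3 sketch with a placeholder instance ID (and again: snapshot first):

```python
# Sketch: intergenerational rightsizing via the EC2 API.
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"   # placeholder

ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Swap m3.large for m5.large -- the type can only be changed while stopped
ec2.modify_instance_attribute(
    InstanceId=instance_id, InstanceType={"Value": "m5.large"})

ec2.start_instances(InstanceIds=[instance_id])
```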

Conclusion

For the full view of cloud compute cost trends, we need to look at both the cost of specific instance types, and the continually evolving generations of that instance type. When we do this, we can see that the cost of cloud computing is, in fact, dropping dramatically…at least on AWS.


ParkMyCloud Reviews – Customer Video Testimonials

A few weeks ago at the 2017 AWS re:Invent conference in Las Vegas, we had the opportunity to meet some of our customers at the booth, get their product feedback, and a few shared their ParkMyCloud reviews as video testimonials. As part of our ongoing efforts to save money on cloud costs with a fully automated, simple-to-use SaaS platform, we rely on our customers to give us insight into how ParkMyCloud has helped them. Here’s what they had to say:

TJ McAteer, Prosight Specialty Insurance

“It’s all very well documented. We got it set up within an afternoon with our trial, and then it was very easy to differentiate and show that value – and that’s really the most attractive piece of it.”

As the person responsible for running the cloud engineering infrastructure at ProSight Specialty Insurance, ParkMyCloud had everything TJ was looking for. Not only that, but it was easy to use, well managed, and demonstrated its value right away.

James LaRocque, Decision Resources Group

“What’s nice about it is the ability to track financials of what you’re actually saving, and open it up to different team members to be able to suspend it from the parked schedules and turn it back on when needed.”

As a Senior DevOps engineer at Decision Resources Group, James LaRocque discovered ParkMyCloud at the 2016 AWS re:Invent and has been a customer ever since. He noted that while he could have gone with scripting, ParkMyCloud offered the increased benefits of financial tracking and user capabilities.

“The return on investment is huge.”

Kurt Brochu, Sysco Foods

“We had instant gratification as soon as we enabled it.”

Kurt Brochu, Senior Manager of the Cloud Enablement Team at Sysco Foods, was immediately pleased to see ParkMyCloud saving money on cloud costs as soon as they put it into action. Once he was able to see how much they could save on their monthly cloud bill, the next step was simple.   

“We were able to save over $500 in monthly spend by just using it against one team. We are rolling out to 14 other teams over the course of the next 2 weeks.”

Mark Graff, Dolby Labs

“The main reason why we went for it was that it was easy to give our users the ability to start and stop instances without having to give them access to the console.”

Mark Graff, the Senior Infrastructure Manager at Dolby Labs, became a ParkMyCloud customer thanks to one of his engineers in Europe.

“We just give them credentials, they can hop into ParkMyCloud and go to start and stop instances. You don’t have to have any user permissions in AWS – that was a big win for us.”


We continue to innovate and improve our platform’s cloud cost management capabilities with the addition of SmartParking recommendations, SmartSizing, Alicloud and more. Customer feedback is essential to making sure that not only are we saving our customers time and money, but also gaining valuable insight into what makes ParkMyCloud a great tool.

If you use our platform, we’d love to get a ParkMyCloud review from you and hear about how ParkMyCloud has helped your business – there’s a hoodie in it for you! Please feel free to participate in the comments below or send a direct email to info@parkmycloud.com.

 


Introducing SmartParking: Automatic On/Off Schedules based on AWS CloudWatch Metrics

Today, we’re excited to bring you SmartParking™ – automatic, custom on/off schedules for individual resources based on AWS CloudWatch metrics!

ParkMyCloud customers have always appreciated parking recommendations based on keywords found in their instance names and tags – for example, ParkMyCloud recommends that an instance tagged “dev” can be parked, as it’s likely not needed outside of a Monday-Friday workday.

Now, SmartParking will look for patterns in your utilization data from AWS CloudWatch and create recommended schedules for each instance to turn it off when it is typically idle. This minimizes idle time to maximize savings on your resources.
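Conceptually, the idea looks something like the sketch below: pull a couple of weeks of hourly CPU data from CloudWatch and flag the hours that look idle. (This is a simplified illustration, not ParkMyCloud's actual recommendation logic – the instance ID and the 5% threshold are arbitrary.)

```python
# Simplified sketch of utilization-based idle detection via CloudWatch.
import boto3
from datetime import datetime, timedelta

cw = boto3.client("cloudwatch")
stats = cw.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(days=14),
    EndTime=datetime.utcnow(),
    Period=3600,                 # hourly datapoints
    Statistics=["Average"],
)

idle = [p["Timestamp"] for p in stats["Datapoints"] if p["Average"] < 5.0]
print(f"{len(idle)} idle hours out of {len(stats['Datapoints'])}")
```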

With SmartParking, you eliminate the extra step of checking in with your colleagues to make sure the schedules you’re putting on their workloads don’t interfere with their needs. Now you can receive automatic recommendations to park resources when you know they won’t be used.

SmartParking schedules are provided as recommendations, which you can then click to apply. This release supports SmartParking for AWS resources, with plans to add Azure and Google Cloud SmartParking.

Instance utilization report from AWS CloudWatch data

SmartParking schedule created from instance utilization data

Customize Your Recommendations like your 401K

Different users will have different preferences about what they consider “parkable” times for an instance. So, like your investment portfolios, you can choose to receive SmartParking schedules that are “conservative”, “balanced”, or “aggressive”. And like an investment, a bigger risk comes with the opportunity for a bigger reward.

If you’d like to prioritize the maximum savings amount, then choose aggressive SmartParking schedules. You will park instances – and therefore save money – for the most time, with the “risk” of occasional inconvenience by having something turned off when someone needs it. Your users can always log in to ParkMyCloud and override the schedule with the “snooze button” if they need to use the instance when it’s parked.

On the other hand, if you would like to ensure that your instances are never parked when they might be needed, choose a conservative SmartParking schedule. It will only recommend parked times when the instance is never used. Choose “balanced” for a happy medium.

What People are Saying: Save More, Easier than Ever

Several existing ParkMyCloud customers have previewed the new functionality. “ParkMyCloud has helped my team save so much on our AWS bill already, and SmartParking will make it even easier,” said Tosin Ojediran, DevOps Engineer at a FinTech company. “The automatic schedules will save us time and make sure our instances are never running when they don’t need to be.”

Already a ParkMyCloud user? Log in to your account to try out the new SmartParking. Note that you will need to have AWS CloudWatch metrics enabled for several weeks in order for us to see your usage trends and make recommendations. If you haven’t already, you will need to update your AWS policy.

New to ParkMyCloud? Start a free trial here.


ParkMyCloud’s Top 5 Blog Posts of 2017

Before we ring in the new year, ParkMyCloud is taking a look back at 2017. We get a lot of great feedback on our blogs so we decided to summarize our top 5 blog posts, as indicated by our readers (views and shares). In case you missed them, please take a moment and enjoy our most popular posts of 2017!  

Azure vs AWS 2017: Is Azure really surpassing AWS?

Azure vs AWS – what’s the deal? After both cloud providers reported their quarterly earnings at the same time, speculation grew as to whether Azure might have a shot at outpacing Amazon. Provocative headlines teased the idea that Azure is catching up with AWS, making it a great opportunity to compare two out of the ‘big three’ providers. While it may seem like AWS is the one to beat, this blog examines whether Azure is catching up, where they are gaining ground, and why the debate even matters.

AWS vs Google Cloud Pricing – A Comprehensive Look

When it comes to comparing cloud providers, a look at pricing is not only helpful, it’s imperative. AWS and Google Cloud Platform (GCP) use different terminology for their instances, different categories of compute sizing, and take marketing liberties in describing their offerings. To make matters even more confusing, each provider takes a different approach to pricing, charging you by the hour in some cases or by the minute in others, with minimums in both. This blog breaks down all of the jargon and gives you valuable insight into how AWS and GCP charge you on your monthly cloud bill.

The Cloud Waste Problem That’s Killing Your Business (And What To Do About It)

As enterprises continue shifting to the cloud, service providers like AWS, GCP, Azure, and more offer cloud services as a valuable utility for cost savings. However, as a utility, the cloud has serious potential for waste if not used optimally. What is “cloud waste” and where does it come from? What are the consequences? What can you do to reduce it? This blog answers those burning questions and tells you how to prevent waste and optimize your cloud spend.

Start and Stop RDS Instances – and Schedule with ParkMyCloud

When Amazon announced the release of start and stop RDS instances, AWS users finally had the ability to ‘turn off’ their RDS instances and save money on their cloud bill – nice! However, they would still be charged for provisioned storage, manual snapshots, and automated backup storage. What if there was a solution to starting and stopping RDS instances on an automated schedule, ensuring that they’re not left running when not needed? This blog explains how ParkMyCloud offers automated cost control on a schedule, saving you even more on your monthly cloud bill.

Why We Love the AWS IoT Button

We talk a lot about how ParkMyCloud can save you money on your cloud bill, because we can, but we also love to share the exciting, fun, and innovative offerings that the cloud brings. The AWS IoT button is a device like no other. You can program it to integrate with any internet-connected device, opening up a whole world of possibilities for what you can do with it. Make a remote control for Netflix, brew coffee in the morning without getting out of bed, or place a takeout order for lunch, all with the push of a button. This blog tells you about how the button was created, how to use it, and some ways that creative developers are using the AWS IoT button.

To another great year…

As we wrap up 2017, the ParkMyCloud team is especially thankful to those of you who have made our blog and our Cloud Cost Control platform successful. We look forward to another great year of keeping up with the cloud, sharing our posts, and of course, saving you money on your cloud bill.

Cheers to 2018! Happy New Year from the ParkMyCloud team and keep an eye open for SmartParking and several great announcements in early January.    


Spot Instance Hibernation – What It Means For You

At AWS re:Invent 2017, one of the announcements made was that spot instance hibernation is now available. This change to how AWS spot instances work can mean some tweaks to how you approach this instance type. Let’s explore the ramifications and see what it means for you and your infrastructure.

What are Spot Instances?

When you use a cloud provider like AWS, they run data centers so you don’t have to. In doing so, those data centers have similar side effects as traditional on-prem deployments, including spare compute power when utilization is low. AWS decided to let free market forces work to their advantage by offering these spare resources at auction-style prices.

How this worked in practice (prior to this recent hibernation announcement) involved naming your bid price for how much you were willing to pay for an EC2 instance. Once the price of a spot instance went below your bid price, your instance started up and began doing work. Later, when the cost was above your bid price, your instance would be terminated.

Typical Spot Instance Use Cases

As you can tell, spot instances introduce a different way of thinking about your resources.  There are some use cases that don’t make any sense for spot instances, but others that can work well. For instance, high-performance computing scenarios that need a lot of machines for a short period of time can work well with spot instances, as long as the result isn’t extremely urgent. Another possibility is batch processing, like video conversions or scientific analysis, which can typically be done in off-hours without a human present to manually tweak things.

Hibernation vs. Termination

As mentioned above, the loss of a spot instance used to result in termination of the instance, regardless of the data or state of the machine. With Amazon’s recent announcement, you can now have spot instances hibernate instead. This means the system’s memory will be saved to the root EBS volume, then reloaded when the machine is resumed. It’s like time travel, but for your cloud infrastructure!
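If you request spot instances through the API, the new behavior is opt-in. A boto3 sketch with placeholder AMI and instance type (note that hibernation has prerequisites, such as an EBS root volume with enough space to hold the instance's RAM):

```python
# Sketch: requesting a spot instance that hibernates instead of terminating.
import boto3

ec2 = boto3.client("ec2")
resp = ec2.request_spot_instances(
    InstanceCount=1,
    InstanceInterruptionBehavior="hibernate",   # instead of "terminate"
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",     # placeholder AMI
        "InstanceType": "m5.large",
    },
)
print(resp["SpotInstanceRequests"][0]["SpotInstanceRequestId"])
```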

From a practical perspective, this can change how you approach spot instances. The main benefit is that you don’t have to prepare for sudden termination of your virtual machine, so more workloads could use spot instances with less preparation. The downside to this is that while your workload will eventually finish, you can’t quite be sure of when.
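
If you want to experiment with the new behavior, here is a minimal sketch of requesting a hibernating spot instance with boto3 (the AMI ID and key name are placeholders; hibernation also requires a supported instance type and an encrypted root EBS volume large enough to hold the instance’s RAM):

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    # Bid on spare capacity; if the spot price rises above the bid,
    # the instance hibernates instead of being terminated.
    ec2.request_spot_instances(
        SpotPrice="0.10",
        InstanceCount=1,
        InstanceInterruptionBehavior="hibernate",
        LaunchSpecification={
            "ImageId": "ami-0123456789abcdef0",
            "InstanceType": "m4.large",
            "KeyName": "my-key-pair",
        },
    )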

Spot Instances vs. Parking Schedules

The “not being sure when” part is the big differentiator between spot instance hibernation and on-demand EC2 instances with parking schedules applied via ParkMyCloud. This new hibernation feature brings plenty of benefits and cost savings, but introduces a nebulous time frame that tends to make developers (and executives) nervous. By utilizing known parking schedules that are automatically applied to instances, the cost savings can be quite comparable while maintaining business-hour uptime. The additional flexibility of manual or automated overrides via ParkMyCloud’s UI or API can make all the difference to your cloud infrastructure team and the application owners who are running these workloads.

AWS claims that you can save up to 90% on your instance costs with spot instances. In the real world, various reports put the figure in the 50%-70% range, based on stats from large companies like Pinterest and Vimeo. With parking schedules, most development teams turn off systems on nights and weekends, which works out to around 65% of the hours in a week (12 hours a night on weekdays plus all of the weekend is 108 of 168 hours). This means you can get similar savings, but with different timing structures for your use. The way to save the most is to use a combination, so check out Amazon’s spot instances and try out ParkMyCloud for cost optimization for all workload types!


Amazon’s EC2 Scheduler – How Does it Compare with ParkMyCloud?

Amazon recently announced updates to their EC2 scheduler, responding to the already-answered question: “How do I automatically start and stop my Amazon EC2 instances?”

Been there, done that. ParkMyCloud has been scheduling instances and saving our customers 65% or more on their monthly cloud bills from Amazon, Azure, and Google since 2015. It looks like Amazon is stepping up to the plate with their EC2 scheduler, but are they?

The premise is simple: pay for what you use. It’s what we’ve been saying all along – cloud services are like any other utility (electricity, water, gas) – you should use them only when needed to avoid paying more than necessary. You wouldn’t leave your lights on all night, so why leave your instances running when you’re not using them?

Until now, Amazon had basic scripting suggestions for starting and stopping your instances. With the EC2 scheduler, you’re getting instructions for how to configure a custom start and stop scheduler for your EC2 instances. Implementing the solution will require some work on your part, but will inevitably reduce costs. Welcome to the club, Amazon? Sort of, not really.

EC2 Scheduler vs ParkMyCloud

While the EC2 scheduler sounds good, we think ParkMyCloud is better, and not just because we’re biased. We took a look at the deployment guide for the EC2 scheduler and noticed a few things we offer that Amazon still doesn’t:

  • This solution requires knowledge and operation of DynamoDB, Lambda, CloudWatch custom metrics, and CloudFormation templates, including Python scripting and CloudFormation coding.
    • None of that is required with our simple, easy-to-use platform. You don’t need a developer background to use ParkMyCloud – in fact, you can use it from your mobile phone or tablet.
  • There’s no UI, so it’s not obvious which instances are on what schedules.
    • ParkMyCloud has a simple UI with an icon-driven operational dashboard and reporting, so you can easily see and manage not only your AWS resources but your Azure and Google resources as well, all in a single pane.
  • Modifications – even simple overrides of schedules – require code changes and CloudFormation redeployments.
    • Again, ParkMyCloud is easy to use, no coding or custom scripting required. Users can also temporarily override schedules if they need to use an instance on short notice, but will only have access to the resources you grant. And you can use our API and Policy Engine to automate scheduling as part of your DevOps process.
  • No SSO, reporting, or notifications.
    • Check, check, check. Did we mention that ParkMyCloud added some new features recently? You can now see resource utilization data for EC2 instances, viewable through animated heatmaps.
  • Doesn’t have SmartParking – automated parking recommendations based on usage data.
    • We do.
  • You cannot “snooze” (temporarily override) schedules on parked instances. You would have to do that manually through the AWS interface.
    • You can snooze schedules in ParkMyCloud with a button click.
  • Doesn’t work with Azure or Google.
    • We do.
  • Doesn’t park Auto Scaling groups (ASGs).
    • We do.
  • No Slack integration.
    • We do.

Conclusion

Amazon – nice try.

If you’re looking for an alternative to writing your own scripts (which we’ve known for a long time is not the best answer), you’re purely using AWS and EC2 instances, and are comfortable with all the PaaS offerings mentioned, then you might be okay with the EC2 scheduler. The solution works, although it comes with a lot of the same drawbacks that custom scripting has when compared to ParkMyCloud.

If you’re using more than just EC2 instances or even working with multiple providers, if you’re looking for a solution where you don’t need to be scripting, and if you’d prefer an automated tool that will cut your cloud costs with ease of use, reporting, and parking recommendations, then it’s a no-brainer. Give ParkMyCloud a try.


New in ParkMyCloud: Visualize AWS Usage Data Trends

We are excited to share the latest release in ParkMyCloud: animated heat map displays. This builds on our previous release of static heat maps displaying AWS EC2 instance utilization metrics from CloudWatch. Now, this utilization data is animated to help you better identify usage patterns over time and create automated parking schedules.

The heatmaps will display data from a sequence of weeks, in the form of an animated “video”, letting you see patterns of usage over a period of time. You can take advantage of this feature to better plan ParkMyCloud parking schedules based on your actual instance utilization.

Here is an example of an animated heatmap, which allows you to visualize when instances are used over a period of eight weeks:

The latest ParkMyCloud update also includes:

  • CloudWatch data collection improvements to reduce the number of API calls required to pull instance utilization metrics data
  • Various user interface improvements to a number of screens in the ParkMyCloud console.

As noted in our last release, utilization data also provides the necessary information that will allow ParkMyCloud to make optimal parking and rightsizing recommendations (SmartParking) when this feature is released next month – part of our ongoing efforts to do what we do best: save you money, automatically.

AWS users who sign up now can take advantage of the latest release as we ramp up for automated SmartParking. To get optimal cost control over your cloud bill, start your ParkMyCloud trial today to collect several weeks’ worth of CloudWatch data, track your usage patterns, and get recommendations as soon as the SmartParking feature becomes available in a few weeks.

If you are an existing customer, be sure to update your AWS policies to enable ParkMyCloud to access your AWS CloudWatch data. Detailed instructions can be found in our support portal.

Feedback? Anything else you’d like to see ParkMyCloud do? Let us know!


New in ParkMyCloud: AWS Utilization Metric Tracking

We are happy to share the latest release in ParkMyCloud: you can now see resource utilization data for your AWS EC2 instances! This data is viewable through customizable heatmaps.

This update gives you information about how your resources are being used – and it also provides the necessary information that will allow ParkMyCloud to make optimal parking and rightsizing recommendations when this feature is released next month. This is part of our ongoing efforts to do what we do best – save you money, automatically.

Utilization metrics that ParkMyCloud will now report on include the following (a sketch of how to pull the same raw data from CloudWatch yourself follows the list):

  • Average CPU utilization
  • Peak CPU utilization
  • Total instance store read operations
  • Total instance store write operations
  • Average network data in
  • Average network data out
  • Average network packets in
  • Average network packets out
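
All of these metrics come from AWS CloudWatch under the hood. As promised above, here is a rough sketch of querying one of them – CPU utilization – directly with boto3 (the instance ID is a placeholder):

    import datetime
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

    # Pull two weeks of average and peak CPU utilization for one
    # instance, in one-hour buckets.
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
        StartTime=datetime.datetime.utcnow() - datetime.timedelta(days=14),
        EndTime=datetime.datetime.utcnow(),
        Period=3600,
        Statistics=["Average", "Maximum"],
    )
    for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
        print(point["Timestamp"], point["Average"], point["Maximum"])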

Here is an example of an instance utilization heatmap, which allows you to see when your instances are used most often:

In a few weeks, we will release the ability for ParkMyCloud to recommend parking schedules for your instances based on these metrics. In order to take advantage of this, you will need to have several weeks’ worth of CloudWatch data already logged, so that we can recommend based on your typical usage. Start your ParkMyCloud trial today to start tracking your usage patterns so you can get usage-based parking recommendations.

If you are an existing customer, you will need to update your AWS policies to enable ParkMyCloud to access your AWS CloudWatch data. Detailed instructions can be found in our support portal.

Feedback? Anything else you’d like to see ParkMyCloud do? Let us know!


Cloud Per-Second Billing – How Much Does It Really Save?

It has been a little over a month since Amazon and Google switched some of their cloud services to per-second billing and so the first invoices with the revised billing are hitting your inboxes right about now. If you are not seeing the cost savings you hoped for, it may be a good time to look again at what services were slated for the pricing change, and how you are using them.

Google Cloud Platform

Starting with the easiest one, Google Cloud Platform (GCP): you may not see a significant change, as most GCP services were already billed per minute, and some were already billed per second. The services that moved to per-second billing (with a one-minute minimum) include Compute Engine, Container Engine, Cloud Dataproc, and App Engine VMs. Moving from per-minute billing to per-second billing is not likely to change a GCP service bill by more than a fraction of a percent.

Let’s consider the example of an organization that has ten GCP n1-standard-8 Compute Engine machines in Oregon at a base cost of $0.3800 per hour as of the date of this blog. Under per-minute billing, the worst-case scenario would be to shut a system down one second into the next minute, for a cost difference of about $0.0063. Even if each of the ten systems were assigned to the QA or development organization, and they were shut down at the end of every work day, say 22 days out of the month, your worst-case scenario would be an extra charge of 22 days x 10 systems x $0.0063 = $1.3860. Under per-second billing, the worst case is to shut down at the beginning of a second, with the highest possible cost for these same machines (sparing you the math) being about $0.02. So, the most this example organization can hope to save over a month on these machines with per-second billing is about $1.39.

Amazon Web Services

On the Amazon Web Services (AWS) side of the fence, the change is both bigger and smaller. It is bigger in that they took the leap from per-hour to per-second billing for On-Demand, Reserved, and Spot EC2 instances and provisioned EBS, but smaller in that it applies only to Linux-based instances; Windows instances are still billed per hour.

Still, if you are running a lot of Linux instances, this change can be significant enough to notice. Looking at the same example as before, let’s run the same calculation with the roughly equivalent t2.2xlarge instance type, charged at $0.3712 per hour. Under per-hour billing, the worst-case scenario is to shut a system down a second into the next hour, paying for nearly a full unused hour. In this example, that is an extra charge of 22 days x 10 systems x $0.3712 = $81.66 per month. Under per-second billing, the worst case is the same $0.02 as with GCP (with fractions of cents difference lost in the noise). So, under AWS, one can hope to see significantly different numbers in the bill.
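
To sanity-check these worst-case numbers against your own instance counts and rates, the arithmetic is easy to script – a quick sketch in Python, with constants mirroring the AWS example above:

    HOURLY_RATE = 0.3712      # t2.2xlarge on-demand Linux, per hour
    SYSTEMS = 10              # instances shut down each workday
    WORKDAYS = 22             # shutdowns per month

    # Worst case under per-hour billing: each shutdown lands just after
    # an hour boundary, so nearly a full unused hour is billed.
    per_hour_waste = WORKDAYS * SYSTEMS * HOURLY_RATE

    # Worst case under per-second billing: each shutdown lands just
    # after a second boundary, so only one extra second is billed.
    per_second_waste = WORKDAYS * SYSTEMS * HOURLY_RATE / 3600

    print(f"per-hour billing worst case:   ${per_hour_waste:.2f}")    # ~$81.66
    print(f"per-second billing worst case: ${per_second_waste:.2f}")  # ~$0.02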

The scenario above is equally relevant to other situations where instances get turned on and off on a frequent basis, driving those fractions of an hour or a minute of “lost” time. Another common example would be auto-scaling groups that dynamically resize based on load, and see enough change over time to bring instances in and out of the group. (Auto-scale groups are frequently used as a high-availability mechanism, so their elastic growth capabilities are not always used, and so savings will not always be seen.) Finally, Spot instances are built on the premise of bringing them up and down frequently, and they will also enjoy the shift to per-second billing.

However, as you look at your cloud service bill, do keep in mind some of the nuances that still apply:

  • Windows: GCP applies per-second billing to Windows; AWS is still on one-hour billing for Windows.
  • Marketplace Linux: Some Linux instances in the AWS Marketplace that have a separate hourly charge are also still on hourly billing (perhaps due to contracts or licensing arrangements with the vendors?), so you may want to reconsider which flavor of Linux you want to use.
  • Reserved instances: AWS does strive to “use up” all of the pre-purchased time for reserved instances, spreading it across multiple machines with fractions of usage time, and per-second billing can really stretch the value of these instances.
  • Minimum of one-minute charge: Both GCP and AWS will charge for at least a minute from instance start before per-second billing comes into play.

Overall, per-second billing is a great improvement for consumers of cloud resources…and will probably drive us all more than ever to make each second count.


AWS IAM Roles and Ways to Use them to Improve Security

What are AWS IAM Roles?

Within the AWS Identity and Access Management (IAM) system there are a number of different identity mechanisms that can be configured to secure your AWS environment, such as Users, Groups, and AWS IAM Roles. Users are clearly the humans in the picture, and Groups are collections of Users, but Roles can be a bit more obscure. Roles are defined as a set of permissions that grant access to actions and resources in AWS. Unlike Users, which are tied to a specific identity and a specific AWS account, an IAM Role can be used or assumed by IAM User accounts or by services within AWS, and can give access to Users from another account altogether.

To better understand Roles, I like the metaphor of a hat.  When we say a Role is assumed by a user – it is like saying someone can assume certain rights or privileges because of what hat they are wearing.  In any company (especially startups), we sometimes say someone “wears a lot of hats” – meaning that person temporarily takes on a number of different Roles, depending on what is needed. Mail delivery person, phone operator, IT support, code developer, appliance repairman…all in the space of a couple hours.

IAM Roles are similar to wearing different hats in that they temporarily let an IAM User or a service get permissions to do things they would not normally get to do. These permissions are attached to the Role itself, and are conveyed to anyone or anything that assumes the role. Like Users, Roles have credentials that can be used to authenticate the Role identity.

Here are a couple ways in which you can use IAM Roles to improve your security:

EC2 Instances

All too often, we see software products that rely on credentials (username/password) for services or accounts that are either hard-coded into an application or written into some file on disk. Frequently the developer had no choice, as the system had to be able to automatically restart and reconnect when the machine rebooted, without anyone present to manually type in credentials. If the code is examined or the file system is compromised, the credentials are exposed and can potentially be used to compromise other systems and services. In addition, such credentials make it really difficult to change the password periodically. Even in AWS we sometimes see developers hard-code API Key IDs and Keys into apps in order to get access to some AWS service. This is a security accident waiting to happen, and can be avoided through the use of IAM Roles.

With AWS, we can assign a single IAM Role to an EC2 instance. This assignment is usually made when the instance is launched, but can also be done at runtime if needed. Applications running on the server retrieve the Role’s security credentials by pulling them out of the instance metadata through a simple web command. These credentials have an additional advantage over potentially long-lived, hard-coded credentials, in that they are changed or rotated frequently, so even if somehow compromised, they can only be used for a brief period.
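
That “simple web command” is a plain HTTP GET against the instance metadata service, which only answers from inside the instance itself. A minimal sketch in Python (this is the original metadata interface; the role name returned is whatever Role you attached):

    import json
    import urllib.request

    META = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

    # Ask the metadata service which Role is attached to this instance...
    role_name = urllib.request.urlopen(META).read().decode()

    # ...then fetch that Role's temporary credentials.
    creds = json.loads(urllib.request.urlopen(META + role_name).read())
    print(creds["AccessKeyId"], creds["Expiration"])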

Another key security advantage of Roles is that they can be limited to just the privileges needed to get a specific job done. Amazon’s documentation for roles gives the example of an application that only needs to be able to read files out of S3. In this case, one can assign a Role that contains read-only permissions for a specific S3 bucket, and the Role’s configuration can say that the role can only be used by EC2 instances. This is an example of the security principle of “least privilege,” where the minimum privileges necessary are assigned, limiting the risk of damage if the credential is compromised. In the same sense that you would not give all of your users “Administrator” privileges, you should not create a single “Allow Everything” Role that you assign everywhere. Instead, create a different Role specific to the needs of each system or group of systems.
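
As a concrete sketch of that S3 read-only example, here is how such a Role might be created with boto3 (the role and bucket names are hypothetical):

    import json
    import boto3

    iam = boto3.client("iam")

    # Trust policy: only the EC2 service may assume this role.
    trust = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "ec2.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }

    # Permissions: read-only access to a single (hypothetical) bucket.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-app-bucket",
                "arn:aws:s3:::example-app-bucket/*",
            ],
        }],
    }

    iam.create_role(RoleName="S3ReadOnlyForApp",
                    AssumeRolePolicyDocument=json.dumps(trust))
    iam.put_role_policy(RoleName="S3ReadOnlyForApp",
                        PolicyName="S3ReadOnly",
                        PolicyDocument=json.dumps(policy))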

Delegation

Sometimes one company needs to give another company access to its resources. Before IAM Roles (and before AWS), the common ways to do that were to share account logins (with the same issues identified earlier with hard-coded credentials) or to use complicated PKI/certificate-based systems. If both companies are using AWS, sharing access is much easier with Role-based Delegation. There are several ways to configure IAM Roles for delegation, but for now we will just focus on delegation between accounts from two different organizations.

At ParkMyCloud, our customers use Delegation to let us read the state of their EC2, RDS, and scaling group instances, and then start and stop them per the schedules they configure in our management console.

To configure Role Delegation, a customer first creates an account with the service provider, and is given the provider’s AWS Account ID and an External ID. The External ID is a unique number for each customer generated by the service provider.

The administrator of the customer environment creates an IAM Policy with a constrained set of permissions (the principle of “least privilege” again), and then assigns that policy to a new Role (like “ParkMyCloudAccess”) that is specifically tied to the provider’s Account ID and External ID. When done, the resulting IAM Role is given a specific Amazon Resource Name (ARN), which is a unique string that identifies the role. The customer then enters that ARN in the service provider’s management console, which is then able to assume the role. As in the EC2 example, when the ParkMyCloud service needs to start a customer EC2 instance, it calls the AssumeRole API, which verifies our service is properly authenticated and returns the temporary security credentials needed to manage the customer environment.
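
Under the hood, the provider’s side of this exchange is a single STS call. A simplified sketch (the account ID, role name, and external ID are placeholders for the values exchanged during setup):

    import boto3

    sts = boto3.client("sts")

    # Assume the customer's delegation role using the shared External ID.
    resp = sts.assume_role(
        RoleArn="arn:aws:iam::111122223333:role/ParkMyCloudAccess",
        RoleSessionName="pmc-session",
        ExternalId="example-external-id",
    )

    # Use the temporary credentials to act in the customer's account.
    creds = resp["Credentials"]
    ec2 = boto3.client(
        "ec2",
        aws_access_key_id=creds["AccessKeyId"],
        aws_secret_access_key=creds["SecretAccessKey"],
        aws_session_token=creds["SessionToken"],
    )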

Conclusions

AWS IAM Roles make some tasks a lot simpler by flexibly assigning roles to instances and other accounts. IAM Roles can help make your environment more secure by:

  • Applying the principle of least privilege in IAM policies, so each system or service gets only the access needed to do its specific job.
  • Preventing hard-coding of credentials in code or files, minimizing the danger of exposure and removing the risk of long-unchanged passwords.
  • Minimizing shared accounts and passwords by allowing controlled cross-account access.

AWS Lambda + ParkMyCloud = Supercharged Automation

Among the variety of AWS services and functionality, AWS Lambda seems to be taking off with hackers and tinkerers. The idea of “serverless” architecture is quite a shift in the way we think about applications, tools, and services, but it’s a shift that is opening up some new ideas and approaches to problem solving.  

If you haven’t had a chance to check out Lambda, it’s a “function-as-a-service” platform that allows you to run scripts or code on demand, without having to set up servers with the proper packages and environments installed. Your Lambda function can trigger from a variety of sources and events, such as HTTP requests, API calls, S3 bucket changes, and more. The function can scale up automatically, so more compute resources will be used if necessary without any human intervention. The code can be written in Node.js, Python, Java, and C#.
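
At its simplest, a Lambda function is just a handler that the platform invokes with the triggering event. A trivial Python sketch:

    # Lambda calls this function once per trigger (HTTP request,
    # S3 event, schedule, etc.) and bills only for the run time.
    def lambda_handler(event, context):
        print("Invoked with event:", event)
        return {"statusCode": 200, "body": "ok"}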

Some pretty cool ideas already exist for Lambda functions to automate processes.  One example from AWS is to respond to a GitHub event to trigger an action, such as the next step in a build process.  There’s also a guide on how to use React and Lambda to make an interactive website that has no server.

For those of you who are already using ParkMyCloud to schedule resources, you may be looking to plug in to your CI/CD pipeline to achieve Continuous Cost Control.  I’ve come up with a few ideas of how to use Lambda along with ParkMyCloud to supercharge your AWS cloud savings.  Let’s take a look at a few options:

Make ParkMyCloud API calls from Lambda

With ParkMyCloud’s API available to control your schedules programmatically, you could make calls to ParkMyCloud from Lambda based on events that occur.  The API allows you to do things like list resources and schedules, assign schedules to resources, snooze schedules to temporarily override them, or cancel a snooze or schedule.

For instance, if a user logs in remotely to the VPN, it could trigger a Lambda call to snooze the schedules for that user’s instances.  Alternatively, a Lambda function could change the schedules of your Auto Scaling Group based on average requests to your website.  If you store data in S3 for batch processing, a trigger from an S3 bucket can tell Lambda to notify ParkMyCloud that the batch is ready and the processing servers need to come online.
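
To make the VPN-login idea concrete, here is a rough sketch of such a Lambda function. Note that the base URL, endpoint path, payload, and auth header below are illustrative placeholders rather than the documented ParkMyCloud API – check the API documentation for the real routes and token handling:

    import json
    import os
    import urllib.request

    # Placeholder base URL and token - not the real ParkMyCloud API.
    API_BASE = "https://api.example-parkmycloud.com"
    TOKEN = os.environ["PMC_API_TOKEN"]

    def lambda_handler(event, context):
        # Triggered by a VPN login: snooze the parking schedule on the
        # user's instance so it wakes up for a couple of hours.
        resource_id = event["resource_id"]
        req = urllib.request.Request(
            f"{API_BASE}/resources/{resource_id}/snooze",
            data=json.dumps({"hours": 2}).encode(),
            headers={"Authorization": f"Bearer {TOKEN}",
                     "Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return {"statusCode": resp.status}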

Send notifications from ParkMyCloud to Lambda

With ParkMyCloud’s notification system, you can send events that occur in the ParkMyCloud system to a webhook or email.  The events can be actions taken by schedules that are applied to resources, user actions that are done in the UI, team and schedule assignments from policies, or errors that occur during parking.

By sending schedule events, you could use a Lambda function to tell your monitoring tool when servers are being shut down from schedules.  This could also be a method for letting your build server know that the build environment has fully started before the rest of your CI/CD tools take over.  You could also send user events to Lambda to feed into a log tool like Splunk or Logstash.  Policy events can be sent to Lambda to trigger an update to your CMDB with information on the team and schedule that’s applied to a new server.
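
On the receiving side, the webhook handler can be equally small. Here is a sketch of a Lambda function (behind an API Gateway endpoint) that logs ParkMyCloud notifications to CloudWatch Logs – the payload fields shown are illustrative, since the exact notification format is defined by ParkMyCloud:

    import json

    def lambda_handler(event, context):
        # API Gateway delivers the webhook body as a JSON string.
        notification = json.loads(event["body"])

        # Anything printed here lands in CloudWatch Logs, where tools
        # like Splunk or Logstash can pick it up.
        print(json.dumps({
            "type": notification.get("type"),
            "resource": notification.get("resource"),
            "message": notification.get("message"),
        }))
        return {"statusCode": 200}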

Think outside the box!

Are you already using AWS Lambda to kick off functions and run scripts in your environment?  Try combining Lambda with ParkMyCloud and let us know what cool tricks you come up with for supercharging your automation and saving on your cloud bill! Stop by Booth 1402 at AWS re:Invent this year and tell us.


5 Favorite AWS Training Resources

When it comes to AWS training resources, there’s no shortage of information out there. Considering the wide range of videos, tutorials, blogs, and more, it’s hard to know where to look or how to begin. Finding the best resource depends on your learning style, your needs for AWS, and getting the most up-to-date information available. With this in mind, we came up with our 5 favorite AWS training resources, sure to give you the tools you need to learn AWS:

1. AWS Self-Paced Labs

What better way to learn than at your own pace? AWS self-paced labs give you hands-on learning in a live AWS environment, with real AWS services and actual scenarios you would encounter in the cloud. Among the recommended labs you’ll find an Introduction to Amazon Elastic Compute Cloud (EC2), and for more advanced users, a lab on Creating Amazon EC2 Instances with Microsoft Windows. If you’re up for an adventure, enroll in a learning quest and immerse yourself in a collection of labs that will help you master any AWS scenario at your own pace. Once completed, you will earn a badge that you can show off on your resume, LinkedIn, website, etc.

2. AWS Free Tier

Sometimes the best way to learn something is by jumping right in. With the AWS Free Tier, you can try AWS services for free. This is a great way to test out AWS for your business, or, for the developers out there, to try services like AWS CodePipeline, AWS Data Pipeline, and more. While you still get a hands-on opportunity to learn a number of AWS services, the only downside is that there are certain usage limits. You can track your usage with a billing alarm to avoid unwanted charges, or you can try ParkMyCloud and park your instances when they’re not in use to get the most out of your free tier experience. In fact, ParkMyCloud started its journey by using AWS’ free tier – we eat our own dog food!

3. AWS Documentation

AWS Documentation is like a virtual encyclopedia of tools, terms, training, and everything AWS. You’ll find white papers, case studies, tutorials, cloud computing basics, and so much more. This resource is a one-stop-shop for all of your AWS documentation needs, whether you’re a beginner or advanced user. No matter where you are in your AWS training journey, AWS documentation is always a useful reference and certainly deserves a spot in your bookmarks.

4. YouTube

So far, we’ve gone straight to the source for 3 out of 5 of our favorite AWS training resources. Amazon really does a great job of providing hands-on training, tutorials, and documentation for users with a range of experience. However, YouTube opens up a whole new world of video training that includes contributions from not only Amazon, but other great resources as well. Besides the obvious Amazon Web Services channel, there are also popular and highly rated videos by Edureka, Simplilearn, Eli the Computer Guy, and more.

5. Bloggers

As cloud technology usage continues to expand and evolve, blogs are a great way to stay up to speed with AWS and the world of cloud computing. Of course, in addition to labs, a free tier, extensive documentation, and their own YouTube channel, AWS also has their own blog. Since AWS actually has a number of blogs that vary by region and technology, we recommend that you start by following Jeff Barr – Chief Evangelist at Amazon Web Services and primary contributor. Edureka, mentioned among our recommended YouTube channels, also has a blog that covers plenty of AWS topics. In addition, the CloudThat blog – co-founded by Bhavesh Goswami, a former member of the AWS product development team – is an excellent resource for AWS and all things cloud.

There’s plenty of information out there when it comes to AWS training resources. We picked our 5 favorite resources for their reliability, quality, and range of information. Whether you’re new to AWS or consider yourself an expert, these resources are sure to help you find what you’re looking for.


3 Enterprise Cloud Management Challenges You Should Be Thinking About

Enterprise cloud management is a top priority. As the shift towards multi-cloud environments continues, so does the need to consider the potential challenges. Whether you already use the public cloud or are considering making the switch, you probably want to know what the risks are. Here are three you should be thinking about.

1. Multi-Cloud Environments

As the ParkMyCloud platform supports AWS, Azure, and Google, we’ve noticed that multi-cloud strategies are becoming increasingly common among enterprises. There are a number of reasons why it would be beneficial to utilize more than one cloud provider. We have discussed risk mitigation as a common reason, along with price protection and workload optimization. As multi-cloud strategies become more popular, the advantages are clear. However, every strategy comes with its challenges, and it’s important for CIOs to be aware of the associated risks.

Without the use of cloud management tools, multi-cloud management is complex and sometimes difficult to navigate. Different cloud providers have different pricing models, product features, APIs, and terminology. Compliance requirements are also a factor that must be considered when dealing with multiple providers. Meeting and maintaining requirements for one cloud provider is complicated enough, let alone several. And don’t forget: you need a single pane of glass to view your multi-cloud infrastructure.

2. Cost Control

Cost control is a top priority among cloud computing trends. Enterprise Management Associates (EMA) conducted a research study and identified key reasons why there is a need for cloud cost control, among them inefficient use of cloud resources, unpredictable billing, and contractual obligation or technological dependency.

Managing your cloud environment and controlling costs requires a great deal of time and strategy, taking away from the initiatives your enterprise really needs to focus on. The good news is that we offer a solution to cost control that will save 65% or more on your monthly cloud bills – simply by parking your idle cloud resources. ParkMyCloud was one of the top three vendors recommended by EMA as a Rapid ROI Utility. If you’re interested in seeing why, we offer a 14-day free trial.

3. Security & Governance

In discussing a multi-cloud strategy and its challenges, the bigger picture also includes security and governance. As we have mentioned, a multi-cloud environment is complex, complicated, and requires native or 3rd-party tools to maintain vigilance. Aside from legal compliance based on the industry your company is in, the cloud also comes with standard security issues and, of course, the possibility of cloud breaches. In this vein, as we talk to customers they often worry about too many users being granted console access to create and terminate cloud resources, which can lead to waste. A key here is limiting user access based on roles, or Role-Based Access Control (RBAC). At ParkMyCloud we recognize that visibility and control are important in today’s complex cloud world. That’s why, in designing our platform, we give the sysadmin the ability to delegate access based on a user’s role and the ability to authenticate via SSO using SAML integration. This approach brings security benefits without losing the appeal of a multi-cloud strategy.

Our Solution

Enterprise cloud management is an inevitable priority as the shift towards a multi-cloud environment continues. Multiple cloud services add complexity to the challenges of IT and cloud management. Cost control is time consuming and needs to be automated and monitored constantly. Security and governance is a must and it’s necessary to ensure that users and resources are optimally governed. As the need for cloud management continues to grow, cloud automation tools like ParkMyCloud provide a means to effectively manage cloud resources, minimize challenges, and save you money.
