Interview: ParkMyCloud Empowers Sysco Foods’ Cloud-Only Strategy

We talked with Kurt Brochu, Senior Manager of the Cloud Enablement Team at Sysco Foods, about how his company has been using ParkMyCloud to empower end users to keep costs in check with the implementation of their cloud-only strategy.

Thanks for taking the time to speak with us today. I know we chatted before at re:Invent, where you gave us some great feedback, and we’re excited to hear more about your use of ParkMyCloud since it rolled out to your other teams.

To get started, can you describe your role at Sysco and what you do?

I’m a senior manager here in charge of the cloud enablement team. The focus is on public cloud offerings, where we function as the support tier for the teams that consume those services. I also have ownership of ensuring cost containment and appropriate use, as well as security and connectivity, network services, authentication, and DNS.

We don’t consider ourselves IT; our department is referred to as Business Technology. Our CTO brought us on 3 or 4 years ago with the expectation that we understand the business’s needs, wants, and desires, and actually serve them as they need, rather than passively telling them that their server is up or down.

Along with security and the dev team, the teams using cloud also include customer-facing areas like sales and internal ones like finance, business reporting, asset management, and the list goes on.

Tell us about your company’s cloud usage.

We’ve had our own private cloud since 2003, offered on-prem. We’ve been in public cloud since 2013. Now, our position has gone from a “cloud-first” to a “cloud-only” strategy in the sense that any new workload that comes along is primarily put in public cloud. We primarily use AWS and are adding workloads to Azure as well.

Talk to me about how cost control fits into your cloud-only strategy. How did you realize there was a problem?

We were seeing around 20% month over month growth in expenditure between our two public clouds. Our budget wasn’t prepared for that type of growth.

We realized that some of the teams that had the ability to auto-generate workloads weren’t best managing their resources. There wasn’t an easy way to show the expenses in a visual manner to present them to Sysco, or to give them some means to manage the state of their workloads.

The teams were good at building pipelines for bringing workloads online, but they didn’t have the capabilities for day-to-day management.

How did you discover ParkMyCloud as a solution to your cost control problem?

We first stumbled upon ParkMyCloud at the 2016 AWS re:Invent conference and were immediately intrigued but didn’t have the cycles to look into it until this past summer, when we made the switch from cloud-first to a cloud-only strategy.

We’ve been running ParkMyCloud since the week before re:Invent in 2017. From there, we had our first presentation to our leadership team in December 2017, where we showed that the uptick in savings was dramatic. It’s leveled off right now because we have a lot of new workloads coming in, but the savings are still noticeable. We still have developers who think that their dev system has to be always on and available at will, but they don’t yet understand that now that we have ParkMyCloud, making it “at will” is as simple as an API call or the click of a button. I expect to see our savings grow over the rest of the calendar year.

We have 50+ teams and over 500 users on ParkMyCloud now.

That’s great to hear! So how much are you saving on your cloud costs with ParkMyCloud?

Our lifetime savings thus far is $28,000, and the tool has paid for itself pretty quickly.

We have one team who has over 40% savings on their workloads. They were spending on average about $10,000 a month, and now it’s at $5,800 because they leverage ParkMyCloud’s simplified start/stop scheduling capabilities.

What other benefits are you getting from your use of the platform?

What I really like is that we have given most of our senior directors, who actually own the budgets, access to the tool as well. It lets the senior directors, as well as the executives when I present to them, see the actual cost savings. It gives you the ability to shine light in places that people don’t like to have the light shine.

The development team at ParkMyCloud has also been very open to suggestions for capabilities that will help us improve savings and increase user adoption.

That’s great, and please continue to submit your feedback and requests to us! And in that regard, have you tried our SmartParking feature to get recommended schedules based on your usage?

Yes, we have started to. When a team asks me to show them how we suggest they use the tool, they get to decide whether or not to enforce the recommendations. I’ll say that they are exceedingly happy that they can go and see their usage. One developer is telling their team that the feature has to be on at all times.

Are there any other cost savings measures that you use in conjunction with or in addition to ParkMyCloud?

We pull numbers and look at Amazon’s pricing guides for sizing. We also take the recommendations from ParkMyCloud and cross-compare the two.

Do you have any other feedback for us?

The magic of ParkMyCloud is that it empowers the end user to make decisions for the betterment of business, and gives us the needed visibility to do our jobs effectively. That’s the bottom line. Each user has a decision: I can spend money on wasted resources or I can save it where I can and apply the savings to other projects. Once you start to understand that, then you have that “AHA” moment.

Before ParkMyCloud, most developers had no awareness of the expense of their workloads. This tool allows me to unfilter that data so they can see, for example: this workload is $293 a month, every month. If you look at your entire environment, you’re spending $17,000 a month, but if you take it down just for the weekend, you could be saving $2,000-3,000 a month or more depending on how aggressive you want to be, without hurting your ability to support the business. It’s that “AHA” moment that is satisfying to watch.

That’s what we noticed immediately when we looked at the summary reports – the uptick that appears right after you have these presentations with the team makes your heart feel good.

Well thank you Kurt, again we really appreciate you taking the time to speak with us.

Thank you.

Azure Region Pricing: Costs for Compute

In this blog we are going to examine how Microsoft Azure region pricing varies and how region selection can help you reduce cloud spending.

How Organizations Select Public Cloud Regions

There are many factors that go into pricing comparisons between AWS, Azure, GCP, and the rest. At the end of the day, however, most organizations select one primary cloud service provider (CSP) for most of their workloads, plus maybe another for multi-cloud redundancy of critical services. Once selected, organizations typically put many of their workloads in the region closest to their offices, plus perhaps some geographic redundancy for their production systems. In other situations, a certain region is selected because it is the first region to support some new CSP feature. As time goes by, other regions become options, either because those new features propagate through the system or because whole new regions are created.

CSP regions tend to cluster around certain larger geographic regions, which I will call “areas” for the purpose of this blog. Looking at Azure in particular, we can see that Azure has three major US areas (Western, Central, and Eastern). The Western and Eastern US areas each have two Azure regions, and the Central area has four. The UK, Europe, and Australia areas each have two Azure regions. There are a number of other Azure regions as well, but they are dispersed enough that I would consider them areas with a single region.

How Does Azure Region Pricing Vary?

With this regional distribution as a starting point, let’s look next at costs for instances. Here is a somewhat random selection of Azure region pricing data, looking at a variety of instance types (cost data as of approximately March 1, 2018).

While this graphic is a bit busy, a couple of things jump out at us:

  • Within most of the areas, there are clearly more expensive regions and less expensive regions.
  • The least expensive regions, on average across these instance types, are us-west-2, us-west-central, and korea-south.
  • The most expensive regions are asia-pacific-east, japan-east, and australia-east.
  • Windows instances are about 1.5-3 times more expensive than their Linux-based counterparts.

Let’s zoom in on the Azure Standard_DS2_v2 instance type, which comprises almost 60% of the total population of Azure instances customers are managing in the ParkMyCloud platform.

We can clearly see the relative volatility in the cost of this instance type across regions. And, while the Windows instance is about 1.5-2 times the cost of the Linux instance, the volatility is fairly closely mirrored across the regions.

Of more interest, however, is how costs can differ within a given area. From that comparison we can see that there are real savings to be gained through careful region selection within an area:

Over the course of a year, strategic region selection of a Windows DS2 instance could save up to $578 for the asia-pacific regions, $298 for the us-east regions, and $228 for the Korean regions.  
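
As a back-of-the-envelope check on those figures, the annual gap for an always-on VM is simply the hourly price difference multiplied by 8,760 hours. Here is a minimal sketch, using placeholder hourly rates rather than current Azure list prices:

```python
# Back-of-the-envelope: annual savings from picking the cheaper region in an "area".
# The hourly rates below are illustrative placeholders, not current Azure list prices.
HOURS_PER_YEAR = 24 * 365  # 8,760

def annual_region_savings(pricier_rate: float, cheaper_rate: float) -> float:
    """Yearly savings for one always-on VM run in the cheaper region instead."""
    return (pricier_rate - cheaper_rate) * HOURS_PER_YEAR

# e.g. a Windows DS2-class VM at $0.252/hr in one region vs. $0.186/hr in another:
print(f"${annual_region_savings(0.252, 0.186):,.0f} per year")  # roughly $578
```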

How to Save Using Regions

By comparing regions within your desired “area” as illustrated above, the savings across a fleet of instances can be significant. Good region selection is fundamental to controlling Azure costs, and costs across the other clouds as well.

Announcing SmartParking for Microsoft Azure: Automated On/Off Schedules Based on Azure Monitor Data

Today, we’re excited to announce the release of SmartParking™ for Microsoft Azure! SmartParking allows Azure customers to automate cloud cost optimization by creating parking schedules optimized to their actual cloud usage, based on Azure Monitor data.

Here’s how it works: ParkMyCloud analyzes your Azure Monitor data to find patterns in the usage for each of your virtual machines (VMs). Based on those patterns, ParkMyCloud creates recommended on/off schedules for each VM to turn them off when they are idle. This maximizes your savings by ensuring that no VM is running when it’s not needed — while also saving you the time and frustration of trying to figure out when your colleagues need their resources running.
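
ParkMyCloud hasn’t published the algorithm itself, but the underlying idea is pattern detection over utilization metrics. Below is a minimal conceptual sketch, assuming hourly average CPU readings pulled from Azure Monitor and an arbitrary idle threshold; it illustrates the approach, not the actual SmartParking implementation:

```python
# Conceptual sketch of usage-based schedule recommendations: find the hour-of-week
# slots where a VM never exceeded an idle threshold across several weeks of data.
# This illustrates the general idea only; it is not ParkMyCloud's actual algorithm,
# and the threshold below is an assumption.
from datetime import datetime

IDLE_CPU_PERCENT = 5.0  # assumed cutoff for "idle"

def recommend_parked_slots(samples: list[tuple[datetime, float]]) -> set[tuple[int, int]]:
    """samples: (timestamp, average CPU %) per hour. Returns idle (weekday, hour) slots."""
    seen: set[tuple[int, int]] = set()
    busy: set[tuple[int, int]] = set()
    for timestamp, avg_cpu in samples:
        slot = (timestamp.weekday(), timestamp.hour)
        seen.add(slot)
        if avg_cpu > IDLE_CPU_PERCENT:
            busy.add(slot)
    return seen - busy  # observed but never busy -> candidates for parking
```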

We released SmartParking for AWS in January, and customers have had positive feedback — and SmartParking for Google Cloud Platform is coming soon.

Customize Your Recommendations Like Your 401(k)

Is it better to park aggressively, maximizing savings, or to park conservatively, ensuring that no VM is parked when a user might need it? Everyone will have a different preference, which is why we’ve created different options for SmartParking recommendations. Like an investment portfolio, you can choose to receive SmartParking schedules that are “conservative”, “balanced”, or “aggressive”. And like an investment, a bigger risk comes with the opportunity for a bigger reward.

An aggressive SmartParking schedule prioritizes the maximum savings amount. You will park instances – and therefore save money – for the most time, with the “risk” of occasional inconvenience by having something turned off when someone needs it. Not to worry, though — users can always “snooze” these schedules to override them if they need to use the instance when it’s parked.

On the other hand, a conservative SmartParking schedule will make it more likely that your instances are never parked when they might be needed. It will only recommend parked times when the instance is never used. Choose “balanced” for a happy medium.

Customer Feedback: Making Parking Better Than Ever

ParkMyCloud customer Sysco Foods has more than 500 users across 50 teams using ParkMyCloud to manage their AWS environments. “When I’m asked by a team how they should use the tool, they’re exceedingly happy that they can go in and see when systems are idle,” Kurt Brochu, Sysco Foods’ Senior Manager of the Cloud Enablement Team, said of SmartParking. “To me, the magic is that the platform empowers the end user to make decisions for the betterment of the business.”

Already a ParkMyCloud user? Log in to your account to try out SmartParking for Azure. Note that you’ll have to update the permissions that ParkMyCloud has to access your Azure data — see the user guide for instructions on that.

Not yet a ParkMyCloud user? Start a free trial here.

Google Cloud Platform user? Not to worry — Google Cloud SmartParking is coming next month. Let us know if you’re interested and we’ll notify you when it’s released.

Don’t Let Your Server Patching Schedule Get in the Way of Cost Control

Don’t let your server patching schedule get in the way of saving money. The idea of minimizing cloud waste was a very new concept two years ago, but as cloud use has grown, so has the need to minimize wasted spend. CFOs now demand that cloud operations teams turn off idle systems in the face of rising cloud bills, but the users of those systems are the ones who have to deal with servers being off when they need them.

Users of ParkMyCloud are able to overcome some of the common objections to scheduling non-production resources. The most common objection is, “What if I need the server or database when it’s scheduled to be off?” That’s why ParkMyCloud offers the ability to “snooze” the schedule, which is a temporary override that lets you choose how long you need the system for. This snooze can be done easily from our UI, or through alternative methods like our API, mobile app, or Slackbot.

A related objection concerns how a parking schedule can work with your server patching schedule. The most common way of dealing with patching in ParkMyCloud is to use our API. The workflow is to log in through the API, get a list of resources, choose the ones you need, and “snooze” their schedules for a couple of hours, or however long the patching takes. Once the schedule is snoozed, you can toggle the instance on and do the patching. After the patching is complete, you can either cancel the snooze to go back to the original schedule or wait for the snooze to time out. If you have an automated patching tool that can make REST calls, this can be an easy way to patch on demand with minimal work.
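
As a rough sketch of that workflow in script form (the base URL, endpoint paths, and payload fields below are placeholders for illustration, not the documented ParkMyCloud API; see the API documentation for the real calls):

```python
# Illustrative "snooze, start, patch" workflow. The URL, paths, and fields are
# placeholders -- consult the ParkMyCloud API documentation for the real interface.
import requests

API_BASE = "https://example.invalid/parkmycloud/api"  # placeholder
HEADERS = {"Authorization": "Bearer <api-token>"}      # placeholder

def snooze_and_start(resource_id: str, hours: int = 2) -> None:
    # 1. Temporarily override (snooze) the parking schedule for the patch window.
    requests.post(f"{API_BASE}/resources/{resource_id}/snooze",
                  json={"hours": hours}, headers=HEADERS, timeout=30).raise_for_status()
    # 2. Toggle the instance on so the patching tool can reach it.
    requests.post(f"{API_BASE}/resources/{resource_id}/start",
                  headers=HEADERS, timeout=30).raise_for_status()
    # 3. Patching runs out-of-band; afterwards, cancel the snooze to return to the
    #    normal schedule, or simply let the snooze time out.
```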

If you’re on a weekly server patching schedule, you could also build the patch times into your pre-set schedules so that the instances turn on at, say, 2:00 a.m. on Wednesdays, as sketched below. By plugging this into your normal schedules, you can still save money during most off-hours but have the instances on when the patch window is open. This can be a great way to handle weekly backups as well, with minimal disruption.
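
For instance, a weekly on/off rule with a Wednesday patch window might look something like this (a hypothetical representation of the idea, not ParkMyCloud’s actual schedule format):

```python
# Hypothetical weekly on/off rule: business hours on weekdays, parked nights and
# weekends, except a Wednesday 02:00-04:00 window kept on for patching or backups.
def instance_should_be_on(weekday: int, hour: int) -> bool:
    """weekday: 0=Monday ... 6=Sunday; hour: 0-23."""
    if weekday == 2 and 2 <= hour < 4:   # Wednesday patch window
        return True
    if weekday >= 5:                     # parked all weekend
        return False
    return 8 <= hour < 18                # weekday business hours only
```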

Using ParkMyCloud alongside your external tools and processes is the best way to get every developer and CloudOps engineer on board with continuous cost control. By removing these objections, you can reduce your cloud costs and be the hero of your organization. Start a free trial today to see these integrations in action!

Microsoft’s Start/Stop VM Solution vs. ParkMyCloud

Microsoft recently released a preview of their Start/Stop VM solution in the Azure Marketplace. Users of Azure took notice and started looking into it, only to find that it was lacking some key functionality that they required for their business. Let’s take a look at what this Start/Stop tool offers and what it lacks, then compare it to ParkMyCloud’s comprehensive offering.

Azure Start/Stop VM Solution

The crux of this solution is the use of a few Azure services: Automation and Log Analytics to schedule the VMs, and SendGrid to email you when a system is shut down or started. Using native Azure tools can be convenient if you’re already baked into the Azure ecosystem, but it can make exploring other cloud options harder.

This solution does cost money, but it’s not very easy to estimate how much (does that surprise you?). The total cost is based on the underlying services (Automation, Log Analytics, and SendGrid), which means it could be very cheap or very expensive depending on what else you use and how often you schedule resources. Schedules can be based on time, but only for a single start and stop time. The page claims schedules can also be based on utilization, but there is no place to configure that in the initial setup. It also needs around 4 hours after setup before it can show any log or monitoring information.

The interface for setting up schedules and automation is not very user-friendly. It requires creating automation scripts that either stop or start only, each with a single time attached. To create new schedules, you have to create new scripts, which makes the interface confusing for those who aren’t used to the Azure portal. By the end of the setup, you’ll have at least a dozen new objects in your Azure subscription, a number that only grows with any significant number of VMs.

How it stacks up to ParkMyCloud

So if the Start/Stop VM Solution from Microsoft can start and stop VMs, what more do you need? Well, we at ParkMyCloud have heard from customers (ranging from day-1 startups to Fortune 100 companies) that certain features are necessary for a cloud cost optimization tool to gain widespread adoption. Here are some of the features ParkMyCloud has that are missing from the Microsoft tool:

  • Single Pane of Glass – ParkMyCloud can work with multiple clouds, multiple accounts within each cloud, and multiple regions within each account, all in one easy-to-use interface.
  • Easy to change or override schedules – Users can change schedules or temporarily “snooze” them through the UI, our API, our Slackbot, or through our iOS app.
  • User Management – Admins can delegate access to users and assign Team Leads to manage sub-groups within the organization, providing user governance over schedules and VMs.
  • No Azure-specific knowledge needed – Users don’t need to know details about setting up Automation Scripts or Log Analytics to get their servers up and running. Many ParkMyCloud administrators provide access to users throughout their organizations via the ParkMyCloud RBAC. This is useful for users who may need to, say, start and stop a demo environment on demand, but who do not have the knowledge necessary to do this through the Azure console.
  • Enterprise features – Single sign-on, savings reports, notifications straight to your email or chat group, and full support access help your large organization save money quickly.

As you can tell, the Start/Stop VM solution from Microsoft can be useful for very specific cases, but most customers will find it lacking the features they really need to make cloud cost savings a priority. ParkMyCloud offers these features at a low cost, so try out the free trial now to see how quickly you can cut your Azure cloud bill.

7 Ways Cloud Services Pricing is Confusing

Beware the sticker shock – cloud services pricing is nothing close to simple, especially as you come to terms with the dollar amount on your monthly cloud bill. While cloud service providers like AWS, Azure, and Google were meant to provide compute resources to save enterprises money on their infrastructure, cloud services pricing is complicated, messy, and difficult to understand. Here are 7 ways that cloud providers obscure pricing on your monthly bill:  

1 – They use varying terminology

For the purpose of this post, we’ll focus on the three biggest cloud service providers: AWS, Azure, and Google. Between these three providers alone, different terms are used for just about every component of the services offered.

For example, what you think of as a virtual machine (VM) is what AWS calls an “instance,” Azure calls a “virtual machine,” and Google calls a “virtual machine instance.” If you have a group of these machines, in Amazon they’re called “Auto Scaling groups,” in Google “managed instance groups” with autoscaling, and in Azure “scale sets.” There’s also different terminology for the pricing models: AWS offers “On-Demand Instances,” Azure calls it “pay as you go,” and Google’s on-demand pricing comes with automatic “sustained use” discounts. You’ve also got “Reserved Instances” in AWS, “Reserved VM Instances” in Azure, and “committed use” discounts in Google. And there are Spot Instances in AWS, which are comparable to low-priority VMs in Azure and preemptible VMs in Google. For a quick side-by-side of these terms, see the cheat sheet below.
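
To keep those terms straight, here is the same mapping condensed into a quick lookup (a convenience summary of the paragraph above, not an official glossary from any provider):

```python
# Cross-provider terminology cheat sheet (a summary of the text above, not an
# official mapping taken from any provider's documentation).
CLOUD_TERMS = {
    "virtual machine": {
        "AWS": "instance", "Azure": "virtual machine", "Google": "virtual machine instance"},
    "group of machines": {
        "AWS": "Auto Scaling group", "Azure": "scale set", "Google": "managed instance group"},
    "on-demand pricing": {
        "AWS": "On-Demand Instances", "Azure": "pay as you go", "Google": "on-demand (sustained use discounts)"},
    "committed pricing": {
        "AWS": "Reserved Instances", "Azure": "Reserved VM Instances", "Google": "committed use discounts"},
    "spare capacity": {
        "AWS": "Spot Instances", "Azure": "low-priority VMs", "Google": "preemptible VMs"},
}
```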

2 – There’s a multitude of variables

Operating systems, compute, network, memory, and disk space are all factors that go into the pricing and sizing of these instances. Each provider’s virtual machines also come in different categories: general purpose, compute optimized, memory optimized, disk optimized, and various other types. Then, within each of these categories, there are different families. In AWS, the cheapest and smallest instances are in the “t2” family; in Azure, they’re in the “A” family. On top of that, there are different generations within each of those families (in AWS: t2, t3, m2, m3, m4, and so on), and within each family, different sizes (small, medium, large, and extra large). So there are lots of different options available. Oh, and lots of confusion, too.

3 – It’s hard to see what you’re spending

If you aren’t familiar with the AWS, Azure, or Google Cloud consoles and dashboards, it can be hard to find what you’re looking for. To find specific features, you really need to dig in, and even the basics, figuring out how much you’re currently spending and predicting how much you will spend, can be hard to pin down. You can build your own dashboard by pulling from their APIs, but that takes a lot of upfront effort, or you can purchase an external tool to manage overall cost and spending.

4 – It’s based on what you provision…not what you use

Cloud services can charge on a per-hour, per-minute, or per-second basis. If you’re used to the on-prem model, where you just deploy things and leave them running 24/7, this kind of pricing model may be unfamiliar. When you move to the cloud’s on-demand pricing, everything is based on the amount of time a resource is provisioned and running, whether or not you’re actually using it.

When you’re charged per hour, 6 cents per hour might not seem like much, but after an instance runs for all 730 hours in a month, it adds up. This leads to another sub-point: the bill doesn’t arrive until about 5 days after the month ends, and it’s not until then that you see what you’ve used. While you’re using instances (or VMs) during the time you need them, you don’t really think about turning them off, or about losing track of servers. We’ve had customers with servers in different regions, or on accounts that don’t get checked regularly, who didn’t even realize those servers had been running all this time, charging up bill after bill.
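
To make that concrete (assuming a flat $0.06/hour rate, which is illustrative rather than a quote for any particular instance type):

```python
# What "6 cents per hour" becomes over a month, and what parking nights and
# weekends would save. The hourly rate is an assumed, illustrative figure.
RATE_PER_HOUR = 0.06
HOURS_PER_MONTH = 730

always_on = RATE_PER_HOUR * HOURS_PER_MONTH   # ~ $43.80 per instance
parked = RATE_PER_HOUR * (12 * 5 * 4.35)      # 12 h/day, weekdays only: ~ $15.66

print(f"Always on:        ${always_on:6.2f}/month")
print(f"Parked off-hours: ${parked:6.2f}/month (saves ${always_on - parked:.2f} per instance)")
```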

You might also be overprovisioning or oversizing resources, for example, provisioning multiple extra-large instances because you think you might need them someday. If you’re used to overprovisioning everything by twice as much as you need, it can really come back to bite you: you end up running resources without utilizing them, yet still getting charged for them, constantly.

5 – They change the pricing frequently

Cloud services pricing changes quite often. So far, prices have been trending downward, getting cheaper over time due to factors like competition and increased utilization of the providers’ data centers. However, don’t conclude that prices will never go up.

Frequent price changes make it hard to map out usage and costs over time. Amazon alone has changed its prices more than 60 times since launch, making it hard for users to plan a long-term approach. And for instances that have been deployed for a long time, prices aren’t displayed in a way that is easy to track, so you may not even realize there’s been a price change if you’ve been running the same instances on a consistent basis.

6 – They offer cost savings options… but they’re difficult to understand (or implement)

In AWS, there are some cost savings measures available for shutting things down on a schedule, but in order to run them you need to be familiar with Amazon’s internal tools like Lambda and RDS. If you’re not already familiar, it may be difficult to actually implement this just for the sake of getting things to turn off on a schedule.  

One of the other things you can use in AWS is Reserved Instances, or with Azure you can pay upfront for a full year or two years. The problem: you need to plan ahead for the next 12 to 24 months and know exactly what you’re going to use over that time, which sort of goes against the nature of cloud as a dynamic environment where you can just use what you need. Not to mention, going back to point #2, the obscure terminology for spot instances, reserved instances, and what the different sizes are.

7 – Each service is billed in a different way

Cloud services pricing also shifts between IaaS (infrastructure as a service), where VMs are billed one way, and PaaS (platform as a service), which is billed another way. These different billing mechanisms can be very confusing as you start expanding into the other services that cloud providers offer.

As an example, AWS Lambda functions are charged based on the number of requests and on duration, the time it takes your code to execute, weighted by the memory you allocate. The free tier includes 1M free requests and 400,000 GB-seconds of compute time per month; beyond that, you pay $0.20 per 1M requests plus $0.00001667 for every GB-second used – simple, right? Not so much.
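
Here is that math worked through for a hypothetical function, using the rates quoted above (real Lambda bills add further nuance, such as duration rounding):

```python
# Worked example of the Lambda pricing described above: a request charge plus a
# duration charge (GB-seconds), each with a monthly free tier. The workload
# numbers at the bottom are hypothetical.
FREE_REQUESTS = 1_000_000
FREE_GB_SECONDS = 400_000
PRICE_PER_MILLION_REQUESTS = 0.20
PRICE_PER_GB_SECOND = 0.00001667

def lambda_monthly_cost(requests: int, avg_duration_ms: float, memory_gb: float) -> float:
    gb_seconds = requests * (avg_duration_ms / 1000.0) * memory_gb
    request_cost = max(requests - FREE_REQUESTS, 0) / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    duration_cost = max(gb_seconds - FREE_GB_SECONDS, 0) * PRICE_PER_GB_SECOND
    return request_cost + duration_cost

# 5M requests/month, 200 ms average duration, 512 MB of memory:
print(f"${lambda_monthly_cost(5_000_000, 200, 0.5):.2f}")  # about $2.47
```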

Another example comes from the databases you can run in Azure. Databases can run as a single server or be priced by elastic pools, each with different pricing tables based on the type of database, and then priced by storage, number of databases, and so on.

With Google Kubernetes clusters, you’re charged per node in the cluster, and each node is charged based on its size. Nodes are auto-scaled, so the price goes up and down based on how much capacity you need. Once again, there’s no easy way of knowing how much you use or how much you need, making it hard to plan ahead.

What can you do about it?

Ultimately, cloud service offerings are there to help enterprises save money on their infrastructures, and they’re great options IF you know how to use them. To optimize your cloud environment and save money on costs, we have a few suggestions:

    • Get a single view of your billing. You can write your own scripts (but that’s not the best answer) or use an external tool.  
    • Understand how each of the services you use is billed. Download the bill, look through it, and work with your team to understand how you’re being billed.
    • Make sure you’re not running anything you shouldn’t be. Shut things down when you don’t need them, like dev and test instances on nights and weekends.
    • Review regularly to plan out usage and schedules as much as you can in advance.
    • Put governance measures in place so that users can only access certain features, regions, and limits within the environment. 

Cloud services pricing is tricky, complicated, and hard to understand. Don’t let this confusion affect your monthly cloud bill. Try ParkMyCloud for an automated solution to cost control.