5 Ways to Get Discounts on Cloud Resources

Whether you’re just getting started on public cloud, or you’ve gotten a bill that blew your budget out of the water, it’s a good idea to research ways to get discounts on cloud resources. There’s no reason to pay list price when so many cost-saving measures are available (and your peers are probably taking advantage of them!). Here are our top five ways to get discounts on cloud.

1. Buy in Advance

By purchasing your compute power in advance, you can get a discounted rate — the notable examples being AWS Reserved Instances, Azure Reserved Instances, and Google Committed Use Discounts.

So will these save you money? It depends – there are several factors that weigh into the answer:

  • How much you pay upfront (for example, AWS offers all-upfront, partial-upfront, and no-upfront options)
  • Contract term: 1-year or 3-year term – the longer term will save more, but there’s risk involved in committing for that long
  • If the cloud provider cuts their prices during your contract term (and they probably will), you’ll save less

This blog post about AWS Reserved Instances digs into these issues further. Bottom line: paying in advance can save you money, but proceed with caution.
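
To make the “will it save money” question concrete, here’s a minimal sketch comparing on-demand cost at various utilization levels against a commitment that bills for every hour of the term. All rates here are hypothetical placeholders, not real AWS, Azure, or GCP prices.

```python
# Rough break-even sketch for an upfront commitment vs. on-demand.
# All prices are illustrative placeholders, not real provider rates.

HOURS_PER_MONTH = 730

on_demand_hourly = 0.10   # hypothetical on-demand rate ($/hour)
committed_hourly = 0.062  # hypothetical effective rate with a 1-year commitment

def monthly_cost(hourly_rate, utilization):
    """Cost for one instance running `utilization` fraction of the month."""
    return hourly_rate * HOURS_PER_MONTH * utilization

# A commitment bills for the full term whether or not the instance runs,
# so the comparison hinges on how much you actually use it.
for utilization in (1.0, 0.75, 0.5, 0.25):
    od = monthly_cost(on_demand_hourly, utilization)
    ri = committed_hourly * HOURS_PER_MONTH   # billed regardless of usage
    print(f"utilization {utilization:>4.0%}: on-demand ${od:6.2f}/mo, "
          f"committed ${ri:6.2f}/mo -> {'commit' if ri < od else 'stay on-demand'}")
```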

2. Use Your Resources More

The primary example of “spending more to save more” in the cloud computing world is Google Sustained Use Discounts. This is a cool option for automatic savings – as long as you use an instance for at least 25% of the month, GCP will charge you less than list price.

But just like the advance purchasing options above, there are several factors to account for before assuming this will really save you “up to 60%” of the cost. It may actually be better to just turn off your resources when you’re not using them – more in this post about Google Sustained Use Discounts.
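
To see why, here’s a minimal sketch comparing an always-on instance under a tiered sustained use discount against simply parking the instance when it’s idle. The base rate is hypothetical, and the tier multipliers are modeled on GCP’s published sustained use schedule – treat them as illustrative, not authoritative.

```python
# Sketch: effective cost under a tiered sustained use discount vs. parking
# the instance when idle. Tier multipliers are illustrative.

HOURS_PER_MONTH = 730
base_hourly = 0.10   # hypothetical list price ($/hour)

# (fraction-of-month threshold, multiplier applied to usage in that band)
TIERS = [(0.25, 1.0), (0.50, 0.8), (0.75, 0.6), (1.00, 0.4)]

def sustained_use_cost(usage_fraction):
    """Cost when the instance runs `usage_fraction` of the month."""
    cost, prev = 0.0, 0.0
    for threshold, multiplier in TIERS:
        band = min(usage_fraction, threshold) - prev
        if band <= 0:
            break
        cost += band * HOURS_PER_MONTH * base_hourly * multiplier
        prev = threshold
    return cost

# Running 100% of the month with the discount...
print(f"always on, discounted: ${sustained_use_cost(1.0):.2f}")
# ...versus only running the 40% of hours you actually need, at list price.
print(f"parked when idle:      ${0.4 * HOURS_PER_MONTH * base_hourly:.2f}")
```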

3. If You’re Big: Enterprise Agreements and Volume Discounts

Anyone who’s shopped at Costco isn’t surprised that buying in bulk can get you a discount. Last week, Twitter announced that it will be using Google Cloud Platform for cold data storage and flexible compute Hadoop clusters — at an estimated list price of $10,000,000/month. Of course, it’s unthinkable that they would actually pay that much – as such a high-profile customer, Twitter is likely to have massive discounts on GCP’s list prices. We often hear from our Azure customers that they chose Azure due to pre-existing Microsoft Enterprise Agreements that give them substantial discounts.

If you have or foresee a large volume of infrastructure costs, make sure to ask your cloud provider about enterprise agreements and volume discounts.

4. If You’re Small: Startup Credits

Each of the major cloud providers offers free credit programs to lure startups in and lock them into their services – but that’s not necessarily a bad thing. We’ve talked to startups focused on everything from education to location services that have gotten their money’s worth out of these credits while they focus on growth.

If you work for a startup, check out the credit programs each of the major cloud providers offers.

5. Wait

So far, history tells us that if you wait a few months, your public cloud provider will drop their prices, giving you a built-in discount.

If you stick with the existing resource types, rather than flocking to the newer, shinier models, you should be all set. The same AWS m1.large instance that cost $0.40/hour in 2008 now goes for $0.175. We’ll just say that’s not exactly on pace with inflation.

It’s Okay if You Don’t Get Discounts on Cloud

What if you’re not a startup, you’re not an enterprise, and you just need some regular compute and database infrastructure now? Should you worry if you don’t get discounts on cloud list prices? No sweat. Even by paying list price, it’s still possible to optimize your spend. Make sure you’re combing through your bill every so often to find orphaned or unused resources that need to be deleted.

Additionally, right-size your resources and turn them off when you’re not using them to pay only for what you actually need – you’ll save money, even without a discount.
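
For example, here’s one quick way to sweep an AWS account for unattached EBS volumes using boto3 – a common flavor of orphaned resource. This is a sketch that assumes your AWS credentials and region are already configured; the same idea extends to unassociated Elastic IPs, stale snapshots, and so on.

```python
# Find unattached ("available") EBS volumes -- one common orphaned resource.
import boto3

ec2 = boto3.client("ec2")

paginator = ec2.get_paginator("describe_volumes")
for page in paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}]):
    for volume in page["Volumes"]:
        print(f"{volume['VolumeId']}: {volume['Size']} GiB, "
              f"created {volume['CreateTime']:%Y-%m-%d}, currently unattached")
```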

Interview: ParkMyCloud Empowers Sysco Foods’ Cloud-Only Strategy

We talked with Kurt Brochu, Senior Manager of the Cloud Enablement Team at Sysco Foods, about how his company has been using ParkMyCloud to empower end users to keep costs in check with the implementation of their cloud-only strategy.

Thanks for taking the time to speak with us today. I know we chatted before at re:Invent, where you gave us some great feedback, and we’re excited to hear more about your use of ParkMyCloud since it rolled out to your other teams.

To get started, can you describe your role at Sysco and what you do?

I’m a senior manager here in charge of the cloud enablement team. The focus is on public cloud offerings, where we function as the support tier for the teams that consume those services. I also have ownership of ensuring that cost containment and appropriateness of use is being performed, as well as security and connectivity, network services, authentication, and DNS.

We don’t consider ourselves IT; our department is referred to as Business Technology. Our CTO brought us on 3 or 4 years ago with the expectation that we understand the business’s needs, wants, and desires, and actually serve them accordingly – versus passively telling them that their server is up or down.

Besides security and the dev team, the teams using cloud also include customer-facing areas like sales and internal ones like finance, business reporting, and asset management – and the list goes on.

Tell us about your company’s cloud usage.

We’ve had our own private cloud since 2003, offered on-prem. We’ve been in public cloud since 2013. Now, our position has gone from a “cloud-first” to a “cloud-only” strategy in the sense that any new workload that comes along is primarily put in public cloud. We primarily use AWS and are adding workloads to Azure as well.

Talk to me about how cost control fits into your cloud-only strategy. How did you realize there was a problem?

We were seeing around 20% month over month growth in expenditure between our two public clouds. Our budget wasn’t prepared for that type of growth.

We realized that some of the teams that had the ability to auto-generate workloads weren’t best managing their resources. There wasn’t an easy way to show the expenses in a visual manner to present them to Sysco, or to give them some means to manage the state of their workloads.

The teams were good at building other pipelines for bringing workloads online but they didn’t have day-to-day capabilities.

How did you discover ParkMyCloud as a solution to your cost control problem?

We first stumbled upon ParkMyCloud at the 2016 AWS re:Invent conference and were immediately intrigued but didn’t have the cycles to look into it until this past summer, when we made the switch from cloud-first to a cloud-only strategy.

We’ve been running ParkMyCloud since the week before re:Invent in 2017. From there, we had our first presentation to our leadership team in December 2017, where we showed that the uptick in savings was dramatic. It’s leveled off right now because we have a lot of new workloads coming in, but the savings are still noticeable. We still have developers who think that their dev system has to always be on and available at will, but they don’t yet understand that now that we have ParkMyCloud, making it “at will” is as simple as an API call or the click of a button. I expect to see our savings grow over the rest of the calendar year.

We have 50+ teams and over 500 users on ParkMyCloud now.

That’s great to hear! So how much are you saving on your cloud costs with ParkMyCloud?

Our lifetime savings thus far is $28,000, and the tool has paid for itself pretty quickly.

We have one team who has over 40% savings on their workloads. They were spending on average about $10,000 a month, and now it’s at $5,800 because they leverage ParkMyCloud’s simplified scheduling start/stop capabilities.

What other benefits are you getting from your use of the platform?

What I really like is that we have given most of our senior directors, who actually own the budgets, access to the tool as well. It lets the senior directors, as well as the executives when I present to them, see the actual cost savings. It gives you the ability to shine light in places that people don’t like to have the light shine.

The development team at ParkMyCloud has also been very open to receiving suggestions and capabilities that will help us improve savings and increase user adoption.

That’s great, and please continue to submit your feedback and requests to us! And in that regard, have you tried our SmartParking feature to get recommended schedules based on your usage?

Yes, we have started to. When a team asks me to show them how we suggest they use the tool, they get to decide whether or not to enforce it. I’ll say that they are exceedingly happy that they can go and see their usage. One developer is telling their team that the feature has to be on at all times.

Are there any other cost savings measures that you use in conjunction with ParkMyCloud or in addition?

We pull numbers and look at Amazon’s best prices guide for sizing. We also take the recommendations from ParkMyCloud and we cross compare those.

Do you have any other feedback for us?

The magic of ParkMyCloud is that it empowers the end user to make decisions for the betterment of business, and gives us the needed visibility to do our jobs effectively. That’s the bottom line. Each user has a decision: I can spend money on wasted resources or I can save it where I can and apply the savings to other projects. Once you start to understand that, then you have that “AHA” moment.

Before using ParkMyCloud, most developers had no awareness of the expense of their workloads. This tool allows me to unfilter that data so they can see, for example: this workload is $293 a month, every month. If you look at your entire environment, you’re spending $17,000 a month, but if you take it down just for the weekend, you could be saving $2-3,000 a month or more depending on how aggressive you want to be, without hurting your ability to support the business. It’s that “AHA” moment that is satisfying to watch.

That’s what we noticed immediately when we looked at the summary reports – the uptick that appears right after you have these presentations with the team makes your heart feel good.

Well thank you Kurt, again we really appreciate you taking the time to speak with us.

Thank you.

Interview: Qcentive Saves Significant Amounts on AWS while Enabling Cloud Computing in Healthcare

We talked with Bill Gullicksen, Director of IT at Qcentive, about how the company is using ParkMyCloud to save money on their AWS costs while enabling cloud computing in healthcare.

Thanks for taking the time to speak with us today. Can you start by telling me about Qcentive and how you are using the cloud?

We are a 2-year-old healthcare startup founded through the venture capital arm of Blue Cross Blue Shield of Massachusetts (BCBSMA). We build systems for the healthcare industry to help reduce costs in healthcare and provide efficiencies. Our cloud-based payment platform will facilitate the development and management of value-based contracts between healthcare companies. We are excited to be one of the earliest vendors authorized to take healthcare information and move it securely to the cloud.

What do you think made Qcentive stand apart as the best option for moving data to the cloud?

Healthcare has historically been cloud-averse due to issues like privacy and security concerns. In order to prove the use case for cloud computing in healthcare, we needed to build out a prototype and go through many months of meetings–producing artifacts to prove that we could move data to the cloud in a HIPAA/HITECH-compliant and secure manner.

We’ve recently released our first prototype module of the application by taking years of patient and healthcare contract information, loading it all into AWS, and then putting our application on top of it. Our application allows our health plan customers and their value-based contracting provider partners to analyze healthcare claim records, emergency room visits, etc. and to quickly calculate how to potentially realize savings in those areas.

So as you’re helping healthcare companies transition to the cloud, how did you come to find ParkMyCloud as a useful tool for your mission?

We had a few architects just going to town on AWS during the first year we were in business. They were building away, and all of a sudden our monthly AWS costs began to ramp up. We were spending a lot of money on Amazon and we didn’t even have a working application yet!

Last summer I was put in charge of our AWS operations and was asked to address our AWS costs. I asked, “what can we do to get some of these costs under control?” We started out with some rightsizing exercises and scaled some stuff back and that got us some savings. We found areas where we have had some stability and used Reserved Instances there, allowing us to get a 30-40% discount, but we didn’t want to do long-term commitments so we only did those for a year.

For the remaining instances, I realized we pay by the minute and we really don’t need to be running instances 24/7. That’s when we started thinking about how to schedule instances to shut down. I could do that and turn them off with AWS tools, but then telling an instance to turn itself back on at 6 in the morning–I didn’t have a way to do that. And that’s when I found out about ParkMyCloud and said this looks perfect – we can schedule instances to get them running 12 hours a day, 5 days a week instead of 24/7 and we’ll probably cut our costs in half.

Have you discovered any other benefits while using ParkMyCloud?

ParkMyCloud was the perfect tool for what we needed at the time and it also gave us a side benefit where we could give developers, QA people, and even data analysts and business folks the ability to turn an instance off when they’re done, or turn it on without having to write a bunch of complex policies within AWS.

Before, if we only wanted certain people to be able to manipulate a handful of instances, I had to put those instance IDs in the policies. Instance IDs frequently change, so running custom policies was taking a lot of overhead and we got the benefit from ParkMyCloud of just assigning them teams. Now, whether the instance IDs change or not, there’s no extra work for our IT team.

That’s why we chose ParkMyCloud and why we’ve been using it for 6-7 months now. For me it was great, very simple to set up, simple to use, easy for non-technical users and with very little effort from me and my technical staff, so it’s been perfect.

Great. So it seems like you were using a good mix of different cost savings efforts between the reserved instances, the rightsizing, and ParkMyCloud. Is there anything else you’re doing to manage cloud infrastructure costs?

Those are the bulk of it. We have other cloud-tracking subscriptions that we use sometimes. They’re very simple, but I just use them for looking at the daily spend, seeing if there are any unexpected spikes, things like that. I can also use them for finding resources that are no longer being used. They’re nice to have for identifying orphaned volumes and give me a simple, easy way to clean some of that up, but we get our biggest use out of ParkMyCloud.

What percent of your resources are currently on ParkMyCloud schedules?

We’ve taken some schedules off just to keep some systems up for a while, but our rule of thumb has been to put a schedule and a team on everything. Even if a schedule is running 24/7/365, we want to at least have a schedule on it and know that it’s a conscious business decision we made to keep that up versus “it just slipped through the cracks and we never looked at it.”

About how many people in your team or organization are using ParkMyCloud?

Somewhere around 15 or so users.

Where do those users sit within your organization?

I’m Director of IT and we’ve got a Director of DevOps and a DevOps engineer–we are the three technical resources around infrastructure. Then we’ve got around 10 or so software developers that all have access so they can spin up their dev environments and spin them down when they’re not working.

We have a flexible schedule. Some of our software developers do their best coding at 3 in the morning. If they get up with an idea and they want to code, they need the ability to start up instances, do what they need to do, and then turn them off when they’re done. So they’re all in there, our QA department and some business analysts that do a lot of data analysis and database querying are also using ParkMyCloud.

That makes sense. So, how much are you saving on your AWS bills using ParkMyCloud?

Our initial savings with ParkMyCloud were significant and the product paid for itself quickly. Based on business needs, our costs can escalate rapidly so we estimate we’re saving up to 20% on our costs on a monthly basis.

We’ve got a lot of instances that we keep normally parked now and we only turn them on when there’s a workload to run. And then we’ve got probably another 40 or 50% of our instances that only run Monday through Friday, from 7:00 AM to 7:00 PM, so we’re getting that savings there which to me is bigger savings than dealing with Reserved Instances.

Things like Reserved Instances look great the day you buy them, but then the first time you have to change the size on something, all of the sudden you’ve got Reserved Instances that you’re not using anymore. With ParkMyCloud that never happens, it’s all savings.

How did you first hear about ParkMyCloud?

We were interviewing an external technology company, G2 Technologies in Boston, last summer; they were being brought in to augment our CI/CD process. While they were in, we asked, “hey, do you know any good methods for doing scheduling?” – and they said take a look at ParkMyCloud.

Any other feedback for us?

I was surprised how simple ParkMyCloud was to get up and running. It was a couple of hours from signing up for the trial to having most of the work done and realizing savings, which was great. The release of your mobile app has been fantastic because it’s nice if we need to turn something on for somebody that doesn’t have access on a Saturday when I’m 30 miles away from my computer. I can do it anywhere with the mobile app.

Glad to hear it! I think that wraps things up for now. Thank you Bill, I appreciate your time.

You’re welcome!

Don’t Let Your Server Patching Schedule Get in the Way of Cost Control

Don’t let your server patching schedule get in the way of saving money. The idea of minimizing cloud waste was a very new concept two years ago, but as cloud use has grown, so has the need to minimize wasted spend. CFOs now demand that cloud operations teams turn off idle systems in the face of rising cloud bills, but the users of those systems are the ones who have to deal with servers being off when they need them.

Users of ParkMyCloud are able to overcome some of the common objections to scheduling non-production resources. The most common objection is, “What if I need the server or database when it’s scheduled to be off?” That’s why ParkMyCloud offers the ability to “snooze” the schedule, which is a temporary override that lets you choose how long you need the system for. This snooze can be done easily from our UI, or through alternative methods like our API, mobile app, or Slackbot.

A related objection concerns how your parking schedule interacts with your server patching schedule. The most common way of dealing with patching in ParkMyCloud is to use our API. The workflow is to log in through the API, get a list of resources, select the ones you need, and “snooze” their schedules for a couple of hours, or however long the patching takes. Once the schedule is snoozed, you can toggle the instance on and do the patching. After the patching is complete, you can either cancel the snooze to go back to the original schedule or wait for the snooze to time out. If you have an automated patching tool that can make REST calls, this can be an easy way to patch on demand with minimal work.
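
As a rough illustration, the flow might look like the sketch below. The endpoint paths, payload fields, and authentication shown here are hypothetical placeholders – consult the ParkMyCloud API documentation for the actual calls.

```python
# Sketch of the snooze-then-patch flow against a REST API. All endpoints and
# fields below are hypothetical placeholders, not the real ParkMyCloud API.
import requests

BASE = "https://api.example.com/v1"            # placeholder base URL
HEADERS = {"Authorization": "Bearer <token>"}  # placeholder auth

def snooze_and_start(resource_id, hours=2):
    # 1. Temporarily override (snooze) the parking schedule.
    requests.post(f"{BASE}/resources/{resource_id}/snooze",
                  json={"hours": hours}, headers=HEADERS).raise_for_status()
    # 2. Toggle the instance on so the patching tool can reach it.
    requests.post(f"{BASE}/resources/{resource_id}/start",
                  headers=HEADERS).raise_for_status()

def end_snooze(resource_id):
    # 3. After patching, cancel the snooze to fall back to the normal
    #    schedule (or simply let the snooze time out).
    requests.delete(f"{BASE}/resources/{resource_id}/snooze",
                    headers=HEADERS).raise_for_status()
```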

If you’re on a weekly server patching schedule, you could also just implement the patch times into your pre-set schedules so that the instances turn on, say, at 2:00 a.m. on Wednesdays. By plugging this into your normal schedules, you can still save money during most off-hours, but have the instances on when the patch window is open. This can be a great way to do weekly backups as well, with minimal disruption.

Plugging ParkMyCloud into external tools and processes like this is the best way to get every developer and CloudOps engineer on board with continuous cost control. By removing these objections, you can reduce your cloud costs and be the hero of your organization. Start a free trial today to see these plug-ins in action!

7 Ways Cloud Services Pricing is Confusing

Beware the sticker shock – cloud services pricing is nothing close to simple, especially as you come to terms with the dollar amount on your monthly cloud bill. While cloud service providers like AWS, Azure, and Google were meant to provide compute resources to save enterprises money on their infrastructure, cloud services pricing is complicated, messy, and difficult to understand. Here are 7 ways that cloud providers obscure pricing on your monthly bill:  

1 – They use varying terminology

For the purposes of this post, we’ll focus on the three biggest cloud service providers: AWS, Azure, and Google. Between these three providers alone, different terms are used for just about every component of the services offered.

For example, when you think of a virtual machine (VM), that’s what AWS calls an “instance,” Azure calls a “virtual machine,” and Google calls a “virtual machine instance.” If you have a group of these different machines, or instances, in Amazon and Google they’re called “auto-scaling” groups, whereas in Azure they’re called “scale sets.” There’s also different terminology for their pricing models. AWS offers on-demand instances, Azure calls it “pay as you go,” and Google refers to it as “sustained use.” You’ve also got “reserved instances” in AWS, “reserved VM instances” in Azure, and “committed use” in Google. And you have spot instances in AWS, which are the same as low-priority VMs in Azure, and preemptible instances in Google.

2 – There’s a multitude of variables

Operating system, compute, network, memory, and disk space are all factors that go into the pricing and sizing of these instances. Each virtual machine instance also falls into a category: general purpose, compute optimized, memory optimized, disk optimized, and various other types. Then, within each of these instance types, there are different families. In AWS, the cheapest and smallest instances are in the “t2” family; in Azure they’re called the “A” family. On top of that, there are different generations within each of those families – in AWS there’s t2, t3, m2, m3, m4 – and within each of those families, different sizes (small, medium, large, and extra large). So there are lots of different options available. Oh, and lots of confusion, too.

3 – It’s hard to see what you’re spending

If you aren’t familiar with the AWS, Azure, or Google Cloud consoles and dashboards, it can be hard to find what you’re looking for. To find specific features, you really need to dig in – and even figuring out the basics of how much you’re currently spending, let alone predicting how much you will be spending, can be very hard. You can build your own dashboard by pulling from their APIs, but that takes a lot of upfront effort – or you can purchase an external tool to manage overall cost and spending.

4 – It’s based on what you provision…not what you use

Cloud services can be billed on a per-hour, per-minute, or per-second basis. If you’re used to the on-prem model where you just deploy things and leave them running 24/7, this kind of pricing model takes some adjustment: in the cloud’s on-demand pricing models, everything is based on how long a resource is provisioned and running – whether or not you’re actually using it.

When you’re charged per hour, 6 cents per hour might not seem like much, but after running instances for 730 hours in a month, it turns out to be a lot of money. This leads to another sub-point: the bill doesn’t arrive until about 5 days after the month ends, and it’s not until that point that you see what you’ve used. While you’re using instances (or VMs), you don’t really think about turning them off – or you lose track of servers entirely. We’ve had customers with servers in different regions, or on accounts that don’t get checked regularly, who didn’t even realize those servers had been running all that time, racking up bill after bill.
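
The arithmetic is simple but easy to underestimate – for example (the rate and instance count are made up):

```python
# "Six cents an hour" compounds quickly once instances run around the clock.
hourly_rate = 0.06        # hypothetical per-instance rate ($/hour)
hours_per_month = 730
instances = 25            # hypothetical fleet size

monthly = hourly_rate * hours_per_month * instances
print(f"${monthly:,.2f} per month")   # $1,095.00 for a modest fleet of small instances
```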

You might also be overprovisioning or oversizing resources — for example, spinning up multiple extra-large instances on the theory that you might need them someday. If you’re used to overprovisioning everything by twice as much as you need, it can really come back to bite you when you look at the bill: you’ve been running resources without utilizing them, but you’re still getting charged for them – constantly.

5 – They change the pricing frequently

Cloud services pricing changes frequently. So far, prices have been trending downward – things have been getting cheaper over time thanks to factors like competition and increased data center utilization. However, don’t jump to the conclusion that prices will never go up.

Frequent price changes make it hard to map out usage and costs over time. Amazon alone has changed its prices more than 60 times since launch, making it hard for users to plan a long-term approach. And for instances that have been deployed for a long time, prices aren’t displayed in a way that’s easy to track, so you may not even realize there’s been a price change if you’ve been running the same instances on a consistent basis.

6 – They offer cost savings options… but they’re difficult to understand (or implement)

In AWS, there are some cost savings measures available for shutting things down on a schedule, but in order to run them you need to be familiar with Amazon’s internal tools like Lambda and RDS. If you’re not already familiar, it may be difficult to actually implement this just for the sake of getting things to turn off on a schedule.  

One of the other things you can use in AWS is Reserved Instances, or with Azure you can pay upfront for a full year or two years. The problem: you need to plan ahead for the next 12 to 24 months and know exactly what you’re going to use over that time, which rather goes against the nature of cloud as a dynamic environment where you use just what you need. Not to mention, going back to points #1 and #2, the obscure terminology around spot instances and reserved instances, and the multitude of different sizes.

7 – Each service is billed in a different way

Cloud services pricing shifts between IaaS (infrastructure as a service), where VMs are billed one way, and PaaS (platform as a service), which is billed another way. These different billing mechanisms can be very confusing as you start expanding into the different services that cloud providers offer.

As an example, AWS Lambda functions are charged based on the number of requests and on duration – the time your code takes to execute, scaled by the memory you allocate. The free tier includes 1M requests and 400,000 GB-seconds of compute time per month; beyond that you pay $0.20 per 1M requests and $0.00001667 for every GB-second used – simple, right? Not so much.
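
To make that concrete, here’s a quick worked example using the rates quoted above; the workload numbers (requests, duration, memory) are invented for illustration.

```python
# Worked example of the Lambda pricing model described above, using the
# published rates quoted in the text. The workload numbers are made up.
FREE_REQUESTS = 1_000_000
FREE_GB_SECONDS = 400_000
PRICE_PER_MILLION_REQUESTS = 0.20
PRICE_PER_GB_SECOND = 0.00001667

requests_per_month = 20_000_000
avg_duration_s = 0.3
memory_gb = 0.5

gb_seconds = requests_per_month * avg_duration_s * memory_gb
request_cost = max(requests_per_month - FREE_REQUESTS, 0) / 1_000_000 * PRICE_PER_MILLION_REQUESTS
duration_cost = max(gb_seconds - FREE_GB_SECONDS, 0) * PRICE_PER_GB_SECOND

print(f"requests: ${request_cost:.2f}, duration: ${duration_cost:.2f}, "
      f"total: ${request_cost + duration_cost:.2f}")   # ~$3.80 + ~$43.34
```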

Another example comes from the databases you can run in Azure. A database can run as a single server or be priced through an elastic pool, each with different pricing tables based on the type of database, and then priced by storage, number of databases, and so on.

With Google Kubernetes clusters, you’re getting charged per node in the cluster, and each node is charged based on size. Nodes are auto-scaled, so price will go up and down based on the amount that you need. Once again, there’s no easy way of knowing how much you use or how much you need, making it hard to plan ahead.
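
A back-of-the-envelope estimate might look like this – the per-node rate and the usage profile are invented, but it shows why an autoscaled, per-node bill is hard to predict ahead of time.

```python
# Back-of-the-envelope estimate for a per-node-billed cluster whose size
# changes with autoscaling. The node rate and usage profile are invented.
node_hourly = 0.19                       # hypothetical per-node rate ($/hour)
# (hours at this size, node count) over a month -- e.g. busy days vs. nights
usage_profile = [(200, 10), (330, 4), (200, 2)]

total = sum(hours * nodes * node_hourly for hours, nodes in usage_profile)
print(f"estimated cluster compute: ${total:,.2f}/month")   # $706.80
```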

What can you do about it?

Ultimately, cloud service offerings are there to help enterprises save money on their infrastructures, and they’re great options IF you know how to use them. To optimize your cloud environment and save money on costs, we have a few suggestions:

    • Get a single view of your billing. You can write your own scripts (but that’s not the best answer) or use an external tool.  
    • Understand how each of the services you use is billed. Download the bill, look through it, and work with your team to understand how you’re being billed.
    • Make sure you’re not running anything you shouldn’t be. Shut things down when you don’t need them, like dev and test instances on nights and weekends.
    • Review regularly to plan out usage and schedules as much as you can in advance.
    • Put governance measures in place so that users can only access certain features, regions, and limits within the environment – see the sketch below for one example.
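
As one illustration of that last point, a region restriction can be expressed as an IAM policy. The sketch below uses boto3 with a hypothetical policy name and region list; real setups often do this with AWS Organizations service control policies and carve-outs for global services such as IAM.

```python
# Minimal sketch: an IAM policy that denies actions outside approved regions.
# Policy name and region list are hypothetical; global services may need
# explicit exceptions in a production policy.
import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Sid": "DenyOutsideApprovedRegions",
        "Effect": "Deny",
        "Action": "*",
        "Resource": "*",
        "Condition": {
            "StringNotEquals": {"aws:RequestedRegion": ["us-east-1", "eu-west-1"]}
        }
    }]
}

iam.create_policy(
    PolicyName="restrict-to-approved-regions",   # hypothetical name
    PolicyDocument=json.dumps(policy_document),
)
```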

Cloud services pricing is tricky, complicated, and hard to understand. Don’t let this confusion affect your monthly cloud bill. Try ParkMyCloud for an automated solution to cost control.

How to Use Terraform Provisioning and ParkMyCloud to Manage AWS

Recently, I’ve been on a few phone calls where I get asked about cost management of resources built in AWS using Terraform provisioning. One of the great things about working with ParkMyCloud customers is that I get a chance to talk to a lot of different technical teams from various types of businesses. I get a feel for how the modern IT landscape is shifting and trending, plus I get exposed to the variety of tools that are used in real-world use cases, like Atlassian Bamboo, Jenkins, Slack, Okta, and Hashicorp’s Terraform.

Terraform seems to be the biggest player in the “infrastructure as code” arena. If you’re not already familiar with it, it’s fairly straightforward to use, and the benefits quickly become apparent. You take a text file, use it to describe your infrastructure down to the finest detail, then run “terraform apply” and it just happens. Then, if you need to change your infrastructure or revert any unwanted changes, you can update the configuration or roll back to a known state. By working with AWS, Azure, VMware, Oracle, and many more providers, Terraform can be your one place for infrastructure deployment and provisioning.

How to Use Terraform Provisioning and ParkMyCloud with AWS Autoscaling Groups

I’ve talked to a few customers recently who utilize Terraform as their main provisioning tool, with ParkMyCloud as their ongoing cloud governance and cost control tool. Using the two systems together works great, but one main point of confusion comes in with AWS Auto Scaling Groups (ASGs). The question I usually get is how Terraform handles the changes that ParkMyCloud makes when scheduling ASGs, so let’s take a look at the interaction.

When ParkMyCloud “parks” an ASG, it sets the Min/Max/Desired values to 0/0/0 by default and saves the values you originally entered so it can restore them when the ASG is “started.” If you run “terraform apply” while the ASG is parked, Terraform will see that the Min/Max/Desired values are 0 and will change them back to the values you state. Then, the next time ParkMyCloud polls AWS (which happens every 10 minutes), it will see that the ASG is started and stop it again as scheduled.

If you change the value of the Min/Max/Desired in Terraform, this will get picked up by ParkMyCloud as the new “on” values, even if the ASG was parked when you updated it. This means you can keep using Terraform to deploy and update the ASG, while still using ParkMyCloud to park the instances when they’re idle.
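
If you want to check what state an ASG is in from the AWS side – for example, before running “terraform apply” – a quick boto3 query does the trick. This is a sketch with a hypothetical group name; it simply reports whether Min/Max/Desired are all zero (i.e., parked).

```python
# Check whether an ASG is currently "parked" (Min/Max/Desired all zero).
# Assumes AWS credentials are configured; the group name is a placeholder.
import boto3

autoscaling = boto3.client("autoscaling")

resp = autoscaling.describe_auto_scaling_groups(
    AutoScalingGroupNames=["my-app-asg"]   # hypothetical ASG name
)
for group in resp["AutoScalingGroups"]:
    parked = group["MinSize"] == group["MaxSize"] == group["DesiredCapacity"] == 0
    print(f"{group['AutoScalingGroupName']}: "
          f"min={group['MinSize']} max={group['MaxSize']} "
          f"desired={group['DesiredCapacity']} -> {'parked' if parked else 'running'}")
```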

How to Use Terraform to Set Up ParkMyCloud

If you currently leverage Terraform provisioning for AWS resources but don’t have ParkMyCloud connected yet, you can also utilize Terraform to do the initial setup of ParkMyCloud. Use this handy Terraform script to create the necessary IAM Role and Policy in your AWS account, then paste the ARN output into your ParkMyCloud account for easy setup. Now you’ll be deploying your instances as usual using Terraform provisioning while parking them easily to save money!