Today we have a piece of advice: don’t write a script to save money on AWS. Here at ParkMyCloud, we spend a lot of time chatting with DevOps and infrastructure teams, listening to how they manage their cloud operations.
You Can Take the DIY Approach (Scripting)
Although there is a lot of tooling out there to make their lives easier, many infrastructure engineers still drop to the command line to take control of their environments. They might use configuration-management tools like Chef or Puppet, continuous delivery tools like Jenkins, or simply fall back on Bash or PowerShell. These tools offer granular control, and it seems quick to ‘knock out a script’ that solves a problem or automates a common task.
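To make the DIY approach concrete, here is a minimal sketch of the kind of parking script such teams write, in Python. The weekday schedule window, tag-free instance records, and function names are all illustrative assumptions on our part; a real script would fetch instance state via the AWS CLI or an SDK and issue the actual stop calls, which we leave as comments.

```python
from datetime import datetime

# Illustrative "office hours" window -- an assumption for this sketch.
WORK_START_HOUR = 8   # 08:00 local time
WORK_END_HOUR = 18    # 18:00 local time

def should_be_running(now: datetime) -> bool:
    """Return True if an instance on the weekday schedule should be up."""
    is_weekday = now.weekday() < 5  # Mon=0 .. Fri=4
    in_hours = WORK_START_HOUR <= now.hour < WORK_END_HOUR
    return is_weekday and in_hours

def instances_to_stop(instances, now):
    """Pick running instances that fall outside the schedule.

    `instances` is a list of dicts like {"id": "...", "state": "running"},
    a stand-in for what a real script would fetch, e.g. with
    `aws ec2 describe-instances`. The caller would then stop each one,
    e.g. `aws ec2 stop-instances --instance-ids <id>`.
    """
    if should_be_running(now):
        return []
    return [i["id"] for i in instances if i["state"] == "running"]

if __name__ == "__main__":
    fleet = [
        {"id": "i-0aaa", "state": "running"},
        {"id": "i-0bbb", "state": "stopped"},
    ]
    saturday_night = datetime(2017, 7, 8, 22, 0)  # outside the window
    print(instances_to_stop(fleet, saturday_night))  # ['i-0aaa']
```

Simple enough to knock out in an afternoon, which is exactly the appeal; the trouble starts when hundreds of teams each need their own variant.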
We’ve talked with a number of larger AWS customers who are optimizing thousands or even tens of thousands of instances using scripting technologies. Given the needs of their businesses, their internal customers are typically geographically distributed with hundreds of teams and team members utilizing their cloud infrastructure.
But do you really want to?
Although we love scripts as much as the next guy, when it comes to cloud we see a few problems with this approach. First, cloud is the great democratizer of infrastructure. The business folks in finance, marketing, sales, and elsewhere no longer need to call IT to provision new resources. They know they can spin up infrastructure themselves – even if they don’t realize they can turn it off just as easily. But if you want cloud users to take on responsibility and ensure governance, they need tools, not scripts. Second, supporting hundreds of teams and user-managed infrastructures without embedding DevOps resources in every team quickly becomes burdensome and inefficient. The way we see it, just because you can do it doesn’t mean you should.
There is a reason that simple, single-purpose web apps are sweeping across the enterprise: users like simple UIs with little to no learning curve. Companies have realized that if you want to empower internal stakeholders with tools that optimize their workflow and resource use, it’s much easier to sustain when end users can ‘do it themselves’. That might be the crack dev team that begins to self-manage its non-production environment, or the team of data scientists whose clusters make the CFO wince.
Leave it to the folks down under to turn things on their heads
As we listen and learn how our customers optimize their cloud environments, we are always excited when we see an entirely new way of doing something. Recently, we were chatting with Foster Moore, one of our Antipodean customers in New Zealand. They too were active users of automation scripts for cloud optimization.
Once they found ParkMyCloud, however, they realized that they could free up their DevOps team for higher-value activities. With their new tool in hand, they decided to turn the way their teams thought about cloud computing upside down. They created a simple ‘always-off’ schedule, which takes newly launched instances and turns them off by default unless a user needs them on. By holding instances in a stopped state until exactly when they are needed, they avoid all unnecessary running costs.
The lesson is, you don’t need to write a script to save money on AWS. While our customer’s overall approach would have been technically possible with scripts, giving every team a simple way to lift this ‘always-off’ schedule would have required custom application development. Instead, they used ParkMyCloud’s ‘snooze’ functionality, which lets a parking schedule be temporarily removed for a user-defined period of time, whether that’s an 8-hour workday or however long the compute resources need to be available. By reversing the ‘always-on’ nature of cloud compute to ‘always-off’, they empowered their teams to cut costs by 40% – without a single script in sight.
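The ‘always-off’ default plus snooze pattern described above can be sketched in a few lines. To be clear, the class, method names, and data shapes below are our own illustration of the idea, not ParkMyCloud’s API: instances are held stopped unless a user has snoozed the schedule, and the snooze simply expires on its own.

```python
from datetime import datetime, timedelta

class AlwaysOffPolicy:
    """Sketch of an 'always-off' default with a per-instance snooze.

    Every instance is held stopped unless a user has snoozed the
    parking schedule, e.g. for an 8-hour workday. Names here are
    illustrative, not ParkMyCloud's actual API.
    """

    def __init__(self):
        self._snoozes = {}  # instance_id -> snooze expiry time

    def snooze(self, instance_id, hours, now):
        """Temporarily lift the parking schedule for `hours`."""
        self._snoozes[instance_id] = now + timedelta(hours=hours)

    def desired_state(self, instance_id, now):
        """'running' only while a snooze is active; 'stopped' otherwise."""
        expiry = self._snoozes.get(instance_id)
        if expiry is not None and now < expiry:
            return "running"
        return "stopped"

if __name__ == "__main__":
    policy = AlwaysOffPolicy()
    now = datetime(2017, 7, 10, 9, 0)
    print(policy.desired_state("i-0aaa", now))  # stopped by default
    policy.snooze("i-0aaa", hours=8, now=now)   # user needs it for a workday
    print(policy.desired_state("i-0aaa", now))  # running during the snooze
    print(policy.desired_state("i-0aaa", now + timedelta(hours=9)))  # stopped again
```

Note how the default does all the work: nobody has to remember to turn anything off, because ‘off’ is where everything returns once a snooze lapses.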