We recently discussed how orphaned volumes and snapshots contribute to cloud waste and what you can do about it, but those are just two examples of orphaned cloud resources that result in unnecessary charges. The public cloud is a pay-as-you-go utility, which means you need full visibility into your infrastructure – you don’t want to be charged for resources you aren’t using. Here are other types of orphaned cloud resources that contribute to cloud waste (and cost you money):
Unassociated Elastic IPs
Elastic IPs are reserved public IP addresses designed for dynamic cloud computing in AWS. An Elastic IP is a static IPv4 address allocated to your AWS account; because it can be quickly remapped from one EC2 instance to another, it lets your service stay reachable at the same address even if the underlying instance is stopped or fails. You can associate an Elastic IP address with any EC2 instance in a given region, and it remains allocated to your account until you decide to release it.
The advantage of an Elastic IP (EIP) is the ability to mask the failure of an EC2 instance, but if the address is not associated with a running instance, you’re still being charged for it. To avoid incurring a needless hourly charge from AWS, remember to release any unassociated EIPs you no longer need.
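Finding these addresses is easy to script. Below is a minimal Python sketch that assumes address records shaped like the `Addresses` list returned by EC2’s DescribeAddresses API (for example, from boto3’s `ec2.describe_addresses()`); actually releasing each address is left to the operator.

```python
def find_unassociated_eips(addresses):
    """Return allocation IDs of Elastic IPs not attached to anything.

    `addresses` is assumed to be shaped like the `Addresses` list from
    EC2's DescribeAddresses API: associated entries carry an
    `AssociationId`/`InstanceId`, while unassociated ones do not.
    """
    return [
        addr["AllocationId"]
        for addr in addresses
        if "AssociationId" not in addr and "InstanceId" not in addr
    ]


# Illustrative data in the DescribeAddresses shape:
addresses = [
    {"AllocationId": "eipalloc-1", "InstanceId": "i-abc", "AssociationId": "eipassoc-1"},
    {"AllocationId": "eipalloc-2"},  # reserved, but attached to nothing
]
print(find_unassociated_eips(addresses))  # ['eipalloc-2']
```

Each ID this returns is an address you are paying for without using; review the list, then release what you no longer need.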
Elastic Load Balancers (with no instances)
Cloud load balancing allows users to distribute workloads and traffic with the benefit of the cloud’s scalability. All major cloud providers offer some type of load balancing – AWS users can balance workloads and distribute traffic among EC2 instances with its Elastic Load Balancer, Google Cloud can distribute traffic between VM instances with Google Cloud Load Balancing, and Azure’s Load Balancer distributes traffic across virtual machines.
An AWS Elastic Load Balancer (ELB) will incur charges on your bill as long as it’s configured in your account. As with Elastic IPs, whether you’re using it or not – you’re paying. If no instances are associated with your ELB, delete it to avoid paying needless charges on your monthly bill.
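You can spot these the same way. The sketch below assumes load balancer records shaped like the `LoadBalancerDescriptions` list from the classic ELB DescribeLoadBalancers API, where each entry has a `LoadBalancerName` and a (possibly empty) `Instances` list.

```python
def find_empty_elbs(load_balancers):
    """Return names of load balancers with no registered instances.

    `load_balancers` is assumed to look like the
    `LoadBalancerDescriptions` list from the classic ELB
    DescribeLoadBalancers API.
    """
    return [
        lb["LoadBalancerName"]
        for lb in load_balancers
        if not lb.get("Instances")  # empty or missing list of instances
    ]


# Illustrative data:
elbs = [
    {"LoadBalancerName": "prod-web", "Instances": [{"InstanceId": "i-1"}]},
    {"LoadBalancerName": "orphaned-lb", "Instances": []},
]
print(find_empty_elbs(elbs))  # ['orphaned-lb']
```

Anything this flags is balancing traffic to nothing at all – a pure line item on your bill.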
Unused Machine Images (AMIs)
A Machine Image provides the information required to launch an instance, which is a virtual server in the cloud. In AWS they’re called AMIs, in Azure they’re Managed Images, and in Google Cloud Platform they’re Custom Images.
As part of your measures to reduce unnecessary costs from orphaned volumes, deregister unused machine images when you no longer need them. Unless it is deleted as well, the snapshot that was created along with the image will continue to incur storage costs.
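That last point is easy to miss: deregistering an AMI does not delete the EBS snapshots behind it. The sketch below, assuming image records shaped like the `Images` list from EC2’s DescribeImages API, pairs each AMI with the snapshot IDs you would also need to delete.

```python
def snapshots_backing_images(images):
    """Map each AMI ID to the EBS snapshot IDs behind it.

    `images` is assumed to be shaped like the `Images` list from EC2's
    DescribeImages API. Deregistering an AMI does NOT delete these
    snapshots, so they must be removed separately to stop storage charges.
    """
    result = {}
    for image in images:
        snaps = [
            bdm["Ebs"]["SnapshotId"]
            for bdm in image.get("BlockDeviceMappings", [])
            if "Ebs" in bdm  # skip ephemeral (instance-store) mappings
        ]
        result[image["ImageId"]] = snaps
    return result


# Illustrative data:
images = [
    {
        "ImageId": "ami-0123",
        "BlockDeviceMappings": [
            {"DeviceName": "/dev/xvda", "Ebs": {"SnapshotId": "snap-0456"}},
            {"DeviceName": "/dev/xvdb", "VirtualName": "ephemeral0"},
        ],
    }
]
print(snapshots_backing_images(images))  # {'ami-0123': ['snap-0456']}
```

Clean up both halves of each pair and the storage charges stop for good.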
Inactive Object Storage
One of the growing pains that organizations face is the management of isolated pools of data in their cloud environment. Fragmented storage can result from data coming from a number of sources used by applications and business processes. Object Storage was designed to break down silos into scalable, cost-effective storage to store data in its native format. AWS offers object storage solutions like Amazon S3 and Amazon Glacier, Google has Google Cloud Storage, and Azure calls its solution Azure Blob Storage. All options help you manage your storage in one place, keeping your data organized and your business more cost effective.
Although object storage is a cost-effective solution in and of itself, there are still ways to optimize and reduce costs within it. Delete files you no longer need so that you’re not paying for them, including unused files that can easily be recreated. In S3, use lifecycle rules to expire older versions of data automatically. Clean up incomplete multipart uploads that were interrupted, leaving partial objects that take up needless space. And compress your data before storing it to get better performance and reduce your storage requirements.
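Two of those cleanups – expiring old versions and aborting incomplete uploads – can run on autopilot via an S3 lifecycle configuration. Below is one in the shape S3’s PutBucketLifecycleConfiguration API expects; the day counts are illustrative, so tune them to your own retention needs.

```python
# A lifecycle configuration in the shape S3's
# PutBucketLifecycleConfiguration API expects. The 30-day and 7-day
# windows are illustrative assumptions, not recommendations.
lifecycle_config = {
    "Rules": [
        {
            "ID": "clean-up-waste",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # apply to the whole bucket
            # Expire old versions of overwritten or deleted objects.
            "NoncurrentVersionExpiration": {"NoncurrentDays": 30},
            # Remove partial objects left behind by interrupted uploads.
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        }
    ]
}
```

Apply a configuration like this to a bucket (for example, via boto3’s `put_bucket_lifecycle_configuration`) and S3 does the regular cleanup for you.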
How to Avoid Wasted Spend on Orphaned Cloud Resources
Don’t let forgotten resources waste money on your cloud bill. Put a stop to cloud waste by eliminating orphaned cloud resources and inactive storage, saving space, time, and money in the process. Remember to:
- Release unassociated IPs you no longer need.
- Remove Elastic Load Balancers with no instances attached.
- Delete unused machine images when you no longer need them.
- Keep object storage minimal – optimize by “cleaning up” regularly, removing files you don’t need.
The cloud and the resources available to you are meant to be cost-effective, but it’s up to you to keep costs in check. Optimize your cloud spend with visibility, management, and cost automation tools like ParkMyCloud to get the most out of your cloud environment.
You’ve gone full-blown DevOps, drank the Agile Kool-Aid, cloudified everything, and turned your monolith to microservices — so why have all of your old monolith costs turned into even bigger microservices costs? There are a few common reasons this happens, and some straightforward first steps to get microservices cost control in place.
Why Monolith to Microservices Drives Costs Up
As companies and departments adapt to modern software development processes and utilize the latest technologies, they assume they’re saving money – or forget to think about it altogether. Smaller applications and services should come with more savings opportunities, but complexity and rapidly-evolving environments can actually make the costs skyrocket. Sometimes, it’s happening right under your nose, but the costs are so hard to compile that you don’t even know it’s happening until it’s too late.
The same thing that makes microservices attractive — smaller pieces of infrastructure that can work independently from each other — can also be the main reason that costs spiral out of control. Isolated systems, with their own costs, maintenance, upgrades, and underlying architecture, can each look cheaper than the monolithic system you were running before, but can add up to far more in aggregate.
How to Control Microservices Costs
If your microservices costs are already out of control, there are a few easy first steps to reining them in.
Keep It Simple
As with many new trends, there is a tendency to jump right in and switch everything to the new hotness. Having a drastic cutover, while scrapping all of your old code, can be refreshing and damaging all at the same time. It makes it hard to keep track of everything, so costs can run rampant while you and your team are struggling just to comprehend what pieces are where. By keeping some of what you already have, but slowly creating new functionality in a microservices model, you can maintain a baseline while focusing on costs and infrastructure of your new code.
The other way to keep it simple is to keep each microservice extremely limited in scope. If a microservice does just one thing, without a bunch of bells and whistles, it’s much easier to see if costs are rising and make the infrastructure match the use case. Additional opportunities for using PaaS or picking a cloud provider that fits your needs can really help maximize utilization.
Scalability and Bursting
Microservices architectures, by the very nature of their design, allow you to optimize individual pieces to minimize bottlenecks. This optimization can also include cost optimization of individual components, even to the point of having idle pieces turned completely off until they are needed. Other pieces might be on, but scaled down to the bare minimum, then rapidly scale out when demand runs high. A fluctuating architecture sounds complex, but can really help keep costs down when load is low.
Along with a microservices architecture, you may start having certain users and departments be responsible for just a piece of the system. With that in mind, cloud providers and platform tools can help you separate users to only access the systems and infrastructure they are working on so they can focus on the operation (and costs) of that piece. This allows you to give individual users the role that is necessary for minimal access controls, while still allowing them to get their jobs done.
Ordered Start/Stop and Automation with ParkMyCloud
ParkMyCloud is all about cost control, so we’ve started putting together a cost-savings plan for our customers who are moving from monolith to microservices.
First, they should use ParkMyCloud’s Logical Groups to put multiple instances and databases into a single entity with an ordered list. This way, your users do not have to remember multiple servers to start for their application – instead, they can start one group with a single click. This can help eliminate the support tickets that are due to parts of the system not running.
Additionally, use Logical Groups to set start delays and stop delays between nodes of the group. With delays, ParkMyCloud will know to start database A, then wait 10 minutes before starting instance B, to ensure the database is up and ready to accept connections. Similarly, you can make sure other microservices are shut down before finally shutting down the database.
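The ordered start-with-delays idea is simple enough to sketch in a few lines. The following is a generic illustration of that pattern, not ParkMyCloud’s actual implementation; the node names and delay values are hypothetical.

```python
import time


def start_group(nodes, start_fn, sleep_fn=time.sleep):
    """Start nodes in order, waiting a per-node delay before the next start.

    `nodes` is a list of (name, delay_after_seconds) pairs. This is a
    generic sketch of ordered startup, not ParkMyCloud's implementation;
    `start_fn` is whatever actually boots a node in your environment.
    """
    started = []
    for name, delay in nodes:
        start_fn(name)
        started.append(name)
        if delay:
            sleep_fn(delay)  # e.g. give a database time to accept connections
    return started


# Hypothetical group: start the database, wait 10 minutes, then the app.
group = [("database-a", 600), ("instance-b", 0)]
```

Stopping is the same loop over the reversed list, so dependent services go down before the database they rely on.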
Everything you can do in the ParkMyCloud user interface can also be done through the ParkMyCloud REST API. This means that you can temporarily override schedules, toggle instances to turn off or on, or change team memberships programmatically. In a microservices setup, you might have certain pieces that are idle for large portions of the day. With the ParkMyCloud API, you could have those nodes turned off on a schedule to save money, then have a separate microservice call the API to turn the node on when it’s needed.
The Goal: Continuous Cost Control
Moving from monolith to microservices can be a huge factor in a successful software development practice. Don’t let cost be a limiting factor – practice continuous cost control, no matter what architecture you choose. By putting a few cost control measures in place with ParkMyCloud, along with some automation and user management, you can make sure your new applications are not only modern, but also cost-effective.
Travel technology company Sabre announced a strategic agreement with Microsoft last week, weeks after a similar agreement with AWS. There are a lot of factors contributing to these decisions, but among them, it seems likely they’ve chosen multi-cloud for cost control.
The company has been under the leadership of CEO Sean Menke for a year and a half, and in that time has already downsized its workforce by 10% – saving the company $110 million in annual costs. Against such a backdrop, clearly, cost control will be front of mind.
So how will a multi-cloud strategy contribute to controlling costs as Sabre aims to “reimagine the business of travel”, in their words?
Why Multi-Cloud for Cost Control Makes Sense
As Sabre moves into AWS and Azure, they plan to write new applications with a microservices architecture deployed on Docker containers. Containerization can be an effective cost-saving strategy by reducing the amount of infrastructure needed – and thereby reducing wasted spend, and simplifying software delivery processes to increase productivity and reduce maintenance.
Plus, containerization has the advantage of ease of portability. With a large and public account like Sabre’s, this becomes a cost reduction strategy as AWS and Azure are forced into competition for their business against each other. “We want to have incentives for (cloud providers) not to take our business for granted,” said CIO Joe DiFonzo.
Avoiding vendor lock-in and optimizing workloads are the top two cited reasons for companies to choose a multi-cloud strategy – both of which contribute to cost control.
Either Way, Cost Has to Be a Factor
Aside from the reasons listed above, Sabre may have chosen to make deals with both AWS and Azure due to each cloud provider’s technological strengths, support offerings, developer familiarity, or for other reasons. Whether they’ve chosen multi-cloud for cost control as the primary reason is debatable, but they certainly need to control costs now that they’re there.
First of all, most cloud migrations go over budget – not to mention that 62% of first-attempt cloud migrations take longer than expected or fail outright, wasting money directly and through opportunity cost.
Second, Sabre’s legacy system of local, on-premises infrastructure means their IT and development staff is used to the idea of resources that are always available. Users need to be re-educated to learn a “cloud as utility” mindset – as a Director of Infrastructure at Avid put it, users need to learn “that there’s a direct monetary impact for every hour that an idle instance is running.” Of course, this is an issue we see every day.
For companies new to the cloud, we recommend providing training and guidelines to IT Ops, DevOps and Development teams about proper use of cloud infrastructure. This should include:
- Clear governance structures – which users can make infrastructure purchases? How are these purchases controlled?
- Turning resources off when not needed – automating non-production resources to turn off when not needed can reduce the cost of those resources by 65% or more (happy to help, Joe DiFonzo!)
- Regular infrastructure reviews – especially as companies get started in the cloud, it’s easy to waste money on orphaned resources, oversized resources, and resources you no longer need. We recommend regular reviews of all infrastructure to ensure every unused item is caught and eliminated.
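The math behind that 65% figure is straightforward: savings are just the share of the week a resource no longer runs. A quick sketch, assuming an illustrative weekdays-only, 12-hours-a-day schedule:

```python
def parking_savings(hours_on_per_week):
    """Fraction of a 24/7 bill saved by running only `hours_on_per_week`."""
    return 1 - hours_on_per_week / (24 * 7)


# Hypothetical schedule: 12 hours a day, weekdays only.
weekday_hours = 12 * 5  # 60 running hours out of 168 in a week
print(f"{parking_savings(weekday_hours):.0%}")  # prints 64%
```

Trim the schedule a little further – shorter days, holidays off – and the savings clear 65%.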
Cheers to you, Sabre, and best of luck in your cloud journey.