Interview: Cofense uses ParkMyCloud for Multi-Cloud Cost Management

Cofense uses ParkMyCloud for multi-cloud cost management. We talked with Todd Morgan, Senior Systems Engineer, about how his team is using the platform to gain “sizable cost savings” at scale.

Thank you for taking the time to speak with us. Can you tell us about Cofense, your role, and the team you work with?

Cofense is a SaaS company in the cybersecurity world. We’ve been around for about 10 years, so we don’t have a legacy of on-prem infrastructure. The company has always leveraged the cloud for its infrastructure needs. My role is that of engineer and architect working in a traditional IT department, and I’m in charge of managing our resources across cloud service providers.

Can you describe how you’re using the cloud and tell us more about what that looks like in your cloud environments?

We are a multi-cloud customer – it gives us a lot of flexibility. We can make cost decisions around which CSP has the most attractive cost models.  Also, some solutions are a better fit for one place versus another. We leverage a wide variety of the cloud services available today, including VMs and RDS.  

What was it that drove you to look for a multi-cloud cost management tool?

Part of shopping around for cost optimization was to gain insights and be able to make informed decisions about how we use our CSPs. We had been using a cloud tool for security purposes – to identify risks that we need to mitigate. We weren’t happy with the product, so rather than finding a better product that does the same thing, we expanded our scope to include other features such as cost management and config management, hoping to find one cloud tool that does it all. The search revealed that a single tool to meet all of our requirements doesn’t exist today. So, the goal shifted to finding a couple of tools that complement each other. While focusing on cost management requirements, I landed on ParkMyCloud.

I’ve kept a running scorecard of all the other cloud tools we’ve done trials and demos for. I’ve got some winners in mind to purchase, but we’re also thinking of building our own solution while the marketplace continues to evolve. We bought into ParkMyCloud because we were satisfied with the trial, the product met our requirements, and we were pleased with how the product roadmap aligns with our goals.

How’d you hear about ParkMyCloud and how are you using it?

I learned about ParkMyCloud from networking conversations with current and former co-workers.

One of our requirements was to identify idle resources that were just sitting and not being used. I wanted a tool that would help give me insight into resource utilization and clearly report on idle resources. Where ParkMyCloud shined was by making the scheduling of resource on-hours turnkey.

We have also been using ParkMyCloud’s API to easily override schedules. For example, if someone needs to use a server over the weekend but it’s scheduled to turn off, they can self-service the request to override the schedule.

How do you determine schedules between different departments?

I started with an aggressive plan based upon the usage metrics provided by ParkMyCloud. Then I would meet with each team owning a subset of resources, looking to get their sign-off on adjusted schedules. In most cases the teams would outline valid use cases for times when resources looked idle but they did need them on. After shaving back my plan to meet their needs, we still have sizable cost savings at the end of the day.

What other benefits have you gotten from using the ParkMyCloud platform?

Something else that’s been happening is I’m finding servers that don’t need to be on at all. ParkMyCloud is proving to be a conversation starter about resource usage.  These business conversations have led me to decommission idle resources altogether.

For the resources we do schedule, the cost savings at scale are sizable. We only have a few examples of resources that need to be always-on 24x7x365. For the majority of resources, we have assigned new schedules. Also, when new resources are provisioned, we’re changing the default so they are scoped to only be on during working hours.

Anything else to add or feedback to share on your use of the platform?

We’re very happy with the tool and the engagement with your team.

Thank you Todd!

How to Use 9 Cloud DevOps Best Practices For Cost Control

Any organization with a functioning cloud DevOps practice will have some common core tenets. While those tenets are frequently applied to things like code delivery and security, a company that fails to apply them to cost control is destined to have a runaway cloud bill (or at least a series of upcoming meetings with the CFO). Here are some of those tenets, and how they apply to cost control:

1. Leadership

One common excuse for wasted cloud spend is “well that other group has cloud waste too!” By aggressively targeting and eliminating cloud waste, you can set the tone for cost control within your team, which will spread throughout the rest of the organization. This also helps to get everyone thinking about the business, even if it doesn’t seem like wasting a few bucks here or there really matters (hint: it does).

2. Collaborative Culture

By tearing down silos and sharing ideas and services, cost control can be a normal part of your cloud DevOps practice instead of a forced decree that no one wants to take part in. Writing a script that is more generally applicable, or finding a tool that others can be invited to, helps other teams save money and join in. You may also get ideas from others that you never thought of, without having to waste time or replicate work.

3. Design for DevOps

Having cost control as a central priority within your team means that you end up building it into your processes and software as you go. Attempting to control costs after-the-fact can be tough and can cause rewrites or rolling back instead of pressing forward. Also, tacked-on cost control is often less effective and saves less money than starting with it.

4. Continuous Integration

Integrating ideas and code from multiple teams with multiple codebases and processes can be daunting, which is why continually integrating as new commits happen is such a big step forward. Along the same lines, continually controlling costs during the integration phase means you can optimize your cloud spend by sharing resources, slimming down those resources, and shutting down resources until they are needed by the integration.

5. Continuous Testing

Continuous testing of software helps find bugs quickly and while developers are still working on those systems. Cost control during the testing phase can take multiple forms, including controlling the costs of those test servers, or doing continuous testing of the cost models and cost reduction strategies. New scripts and tools that are being used for cost control can also be tested during this phase.

6. Continuous Monitoring

Monitoring and reporting, like cost control, are often haphazardly tacked on to a software project instead of being a core component. For a lot of organizations, this means that costs aren’t actively being monitored and reported, which is what causes yelling from the Finance team when that cloud bill comes. By making everyone aware of how costs are trending and noting when huge spikes occur, you can keep those bills in check and help save yourself from those dreaded finance meetings.
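
As a hedged example of what continuous cost monitoring can look like on AWS, the sketch below creates a CloudWatch alarm on the estimated monthly charges metric. It assumes billing alerts are enabled in the account; the dollar threshold and SNS topic ARN are placeholders you would swap for your own:

```python
import boto3

# Billing metrics are only published in us-east-1.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Alarm when estimated monthly charges exceed the threshold.
# The SNS topic ARN is a placeholder for a topic that notifies your team.
cloudwatch.put_metric_alarm(
    AlarmName="monthly-spend-over-5000-usd",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,              # evaluate every 6 hours
    EvaluationPeriods=1,
    Threshold=5000.0,
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:cost-alerts"],
)
```

Wiring the alarm to a team-visible channel (email, chat) is what turns monitoring into the shared awareness described above.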

7. Continuous Security

Cloud cost control can contribute to better security practices. For example, shutting down Virtual Machines when they aren’t in use decreases the number of entry points for would-be hackers, and helps mitigate various attack strategies. Reducing your total number of virtual machines also makes it easier for your security teams to harden and monitor the machines that exist.

8. Elastic Infrastructure

Auto-scaling resources are usually implemented by making services scale up automatically, while the “scaling down” part is an afterthought. It can admittedly be tricky to drain existing users and processes from under-utilized resources, but having lots of systems with low load is the leading cause of cloud waste. Additionally, different scale patterns based on time of day, day of the week, and business need can be implemented, but this type of cost control requires deliberate thought and effort. A sketch of one way to do this on AWS follows below.
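
On AWS, those time-of-day patterns can be codified as scheduled scaling actions. Here is a minimal boto3 sketch; the Auto Scaling group name, times, and sizes are placeholders to adapt to your own workload:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Scale the group up for the workday ("web-asg" is a placeholder group name).
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="business-hours-scale-up",
    Recurrence="0 8 * * MON-FRI",   # 08:00 UTC, weekdays
    MinSize=4,
    MaxSize=12,
    DesiredCapacity=6,
)

# ...and back down for evenings and weekends.
autoscaling.put_scheduled_update_group_action(
    AutoScalingGroupName="web-asg",
    ScheduledActionName="off-hours-scale-down",
    Recurrence="0 20 * * MON-FRI",  # 20:00 UTC, weekdays
    MinSize=1,
    MaxSize=4,
    DesiredCapacity=1,
)
```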

9. Continuous Delivery/Deployment

Deploying your completed code to production can be exciting and terrifying at the same time. One factor that you need to consider is the size and cost of those production resources. Cost savings for those resources usually look different than for dev/test/QA resources, as production typically needs to be on 24/7 and can’t have high latency or long spin-up times. However, there are some cost control measures, like pre-paying for instances or having accurate usage patterns for your elastic environments, that should be considered by your production teams.

Full Cloud DevOps Cost Control

As you can see, there are a lot of paths to lowering your cloud bill by using some common cloud DevOps tenets. By working these ideas into your teams and weaving them throughout your processes, you can save money and help lead others to do the same. Controlling these costs can lead to fewer headaches, more time, and more money for future projects, which is what we’re all aiming to achieve with DevOps.

5 Things to Look For in an IaaS Cost Management Tool

With $39.5 billion projected to be spent on Infrastructure as a Service (IaaS) this year, many cloud users will find it’s time to optimize spend with an IaaS cost management tool. With so many different options to choose from, picking the right one can be overwhelming. While evaluating your options, you should have an idea of what would be most compatible with you and your organization. In order to cut cloud costs and waste, make sure you look for these 5 things while picking an IaaS cost management tool.

1. UI is Easy to Understand

When adopting a new piece of software, you should not be stressed out trying to figure out how it works. It should be designed around the end user in order to give them an easy user experience so they can accomplish tasks quickly. Many native tools provided by the cloud providers require specialized coding knowledge that the IaaS users in your organization may not have. A tool is only useful if it is simple and easy to follow, so that every cloud user can contribute to the task of managing IaaS cost.

2.  Improved Visibility

It is essential that you have all of your information available to you in one place – this helps make sure you didn’t overlook anything. Seeing all your resources on one screen, all at once, will allow you to pinpoint the strengths and weaknesses you need to focus on to manage your IaaS cost. Of course, cost management includes more than visibility, which leads to the next points.

3. Provides Reporting

You want your organization to be well informed, so it is important that any IaaS cost management tool you adopt includes the ability to generate cost and savings reports. You can’t change what you don’t understand; the data gathered – past and present – will help you understand what has happened and forecast what comes next. These reports will give you the information you need to make quick, informed decisions. Preferably, they also contain automated recommendations based on your resource utilization history and patterns. Additionally, it’s important for any cost optimization tool to report on the amount of money you have saved using it, so you can justify the cost of the tool as needed to your management or Finance department.

4. Implements actions

After gathering the data and making suggestions, the next step in cost optimization is to actually make these changes. Using the reports and data gathered, the tool should be able to manage your resources and implement any necessary changes without you having to do anything.  

5.  Automation and APIs

Even though they work in the background, APIs are necessary because they allow your tool to work in conjunction with your other operations and tools. With support for inbound actions and outbound notifications, this automation allows you to streamline your workflows. This will make things faster and more efficient – allowing you to cut down on time and IaaS cost. Highlights to look for include Single Sign-On, ChatOps integrations, and a well-documented API.
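
As a concrete illustration of the “outbound notifications” point, here is a minimal Python sketch that posts a savings summary to a chat channel via an incoming webhook. The webhook URL, resource name, and dollar amount are placeholders, and the exact payload format depends on your chat tool (this example uses the Slack-style {"text": ...} body):

```python
import requests

# Placeholder webhook URL provided by your chat tool (e.g. a Slack incoming webhook).
WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def notify_savings(resource_name: str, monthly_savings: float) -> None:
    """Post a simple cost-savings notification to the team channel."""
    message = (
        f"{resource_name} was parked during off-hours this month, "
        f"saving about ${monthly_savings:,.2f}."
    )
    requests.post(WEBHOOK_URL, json={"text": message}, timeout=10)

notify_savings("dev-database-01", 412.50)
```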

Keep Your Organization’s IaaS Cost Needs in Mind

These are just a few of the things you should be looking for when searching for IaaS cost optimization – but you have to find the platform that works best for you!

ParkMyCloud automatically optimizes your IaaS costs with these principles in mind – try it out with a 14-day free trial and see if it’s the right fit for you.

 

Cloud Container Services Comparison

There’s no doubt that cloud container services adoption is on the rise. A recent survey found that more than 80% of IT professionals and teams reported deploying container technologies — up from 58% in 2017.

With this rise in adoption comes a rise of options in the market, so it quickly becomes difficult to keep track of each service and what they’re best used for. We took a look at 14 container services and container-like services associated with the top cloud providers, and broke down the main use case for each. Scroll to the bottom for a comparison chart.

AWS Cloud Container Services

Amazon Elastic Container Service

Amazon Elastic Container Service (Amazon ECS) is a container orchestration service, used to manage and deploy containers distributed across many AWS virtual machines. Combined with AWS Fargate, it allows you to run containers without selecting servers. Pricing depends on the launch model: for the Fargate model, you pay for vCPU and memory that your containerized application requests. For the EC2 model, you simply pay for the EC2 instances and other resources – such as EBS volumes – you create to store and run your application.
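
To make the Fargate launch model concrete, here is a minimal boto3 sketch that runs a single containerized task without provisioning any EC2 instances. The cluster name, task definition, and subnet ID are placeholders:

```python
import boto3

ecs = boto3.client("ecs")

# Run one task on Fargate -- no EC2 instances to manage.
response = ecs.run_task(
    cluster="demo-cluster",
    launchType="FARGATE",
    taskDefinition="web-app:1",
    count=1,
    networkConfiguration={
        "awsvpcConfiguration": {
            "subnets": ["subnet-0123456789abcdef0"],
            "assignPublicIp": "ENABLED",
        }
    },
)
print(response["tasks"][0]["taskArn"])
```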

Amazon Elastic Container Registry

Amazon Elastic Container Registry (Amazon ECR) is AWS’s managed solution to store, manage, and deploy Docker container images. It is highly available, scalable, and integrated with Amazon ECS. Payment is based on the amount of data stored in repositories and data transferred to the Internet.

Amazon Elastic Container Service for Kubernetes

Amazon Elastic Container Service for Kubernetes (Amazon EKS) is AWS’s service to manage and deploy containers via Kubernetes container orchestration service. Pricing is $0.20 per hour for each EKS cluster, as well as the cost of AWS resources such as EC2 instances that you create to run your Kubernetes worker nodes.

AWS Fargate

AWS Fargate is a solution for Amazon ECS that allows you to run containers without managing servers or infrastructure, making it easier to focus on applications rather than the infrastructure that runs them. Pricing is based on the vCPU and memory resources used.

AWS Batch

AWS Batch is a way for AWS users to run large quantities of batch computing jobs — which is done by executing them as Docker containers. You pay only for the AWS resources you create to store and run your application, with no additional fees.

Azure Cloud Container Services

Azure Kubernetes Service

Azure Kubernetes Service (AKS) is Azure’s fully managed solution to manage & deploy containers via Kubernetes container orchestration service. You pay only for the VMs, storage, and networking resources used for the Kubernetes cluster, with no additional charge.

Azure Container Registry

Azure Container Registry is a way to store and manage container images for container deployment across DC/OS, Docker Swarm, Kubernetes, and Azure services including App Service, Batch, and Service Fabric. Pricing is per day, with several tiers depending on the amount of storage and web hooks needed.

Azure Container Instances

Azure Container Instances (ACI) is a service that allows you to run containers on Azure without managing servers or infrastructure, making it simpler to build applications without focusing on infrastructure. Billing is by “container groups” which are assignments of vCPU and memory resources for your running containers, and is on a per-second basis.

Azure Batch

Azure Batch is a service for running a large number of compute-intensive batch jobs, which users can choose to run directly on virtual machines or on Docker-compatible containers. You pay only for the compute and other resources used to run the batch jobs, with no additional fees for using Batch.

Azure App Service

Azure App Service is a way to create cloud-based web apps and APIs, which similarly to Azure Batch, has options for running on virtual machines or in containers. Billing is per hour, with several tiers depending on your needs for disk space, number of instances, auto scaling, and network isolation.

Azure Service Fabric

Azure Service Fabric is a way to lift, shift, and modernize .NET applications to microservices using Windows Server containers. Service Fabric is an open source project that powers core Azure infrastructure and other Microsoft services including Skype for Business, Azure SQL Database, Cortana, and more. You pay for compute, volumes, and collections used, though the complicated pricing model makes it hard to estimate.

Google Cloud Container Services

Google Kubernetes Engine

Google Kubernetes Engine (GKE) is Google Cloud’s fully managed solution to manage and deploy containers via Kubernetes container orchestration service. You pay for the Google Compute Engine instances used, with no additional charges.

Google Container Registry

Google Container Registry allows users to store and manage Docker container images for container deployment. You pay for the storage and network used by your Docker resources.

Google App Engine Flexible Environment

Google App Engine Flexible Environment is a platform for deploying web apps and APIs, which you can do on VM instances or on Docker containers. Pricing is based on the compute, storage, and other resources used for the apps.

Cloud Container Services Comparison Chart

For quick and easy reference, we’ve condensed this comparison into a chart:

It’s a great time to become familiar with the various cloud container services and try them out — this infrastructure model will only become more prominent!

Should You Use the Cloud-Native Instance Scheduler Tools?

When adopting or optimizing your public cloud use, it’s important to eliminate wasted spend from idle resources – which is why you need to include an instance scheduler in your plan. An instance scheduler ensures that non-production resources – those used for development, staging, testing, and QA – are stopped when they’re not being used, so you aren’t charged for compute time you’re not actually using.

AWS, Azure, and Google Cloud each offer an instance scheduler option. Will these fit your needs – or will you need something more robust? Let’s take a look at the offerings and see the benefits and drawbacks of each.

AWS Instance Scheduler

AWS has a solution called the AWS Instance Scheduler. AWS provides a CloudFormation template that deploys all the infrastructure needed to schedule EC2 and RDS instances. This infrastructure includes DynamoDB tables, Lambda functions, and CloudWatch alarms and metrics, and relies on tagging of instances to shut down and turn on the resources.

The AWS Instance Scheduler is fairly robust in that it allows you to have multiple schedules, override those schedules, connect to other AWS accounts, temporarily resize instances, and manage both EC2 instances and RDS databases. However, that management is done exclusively through editing DynamoDB table entries, which is not the most user-friendly experience. All of those settings in DynamoDB are applied via instance tags, which is good if your organization is tag-savvy, but can be a problem if not all users have access to change tags.
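
For illustration, once the CloudFormation stack is deployed and a schedule has been defined in its DynamoDB configuration, putting an instance under management comes down to tagging it. Here is a minimal boto3 sketch; the tag key shown is the solution’s default ("Schedule"), and the schedule name and instance ID are placeholders:

```python
import boto3

ec2 = boto3.client("ec2")

# Attach the scheduler's tag to a dev instance. The tag key ("Schedule" by default)
# and the schedule name ("office-hours") must match the solution's DynamoDB
# configuration; the instance ID is a placeholder.
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],
    Tags=[{"Key": "Schedule", "Value": "office-hours"}],
)
```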

If you will have multiple users adding and updating schedules, the Instance Scheduler does not provide good auditing or multi-user capabilities. You’ll want to strongly consider an alternative.

Microsoft Azure Automation

Microsoft has a feature called Azure Automation, which includes multiple solutions for VM management. One of those solutions is “Start/Stop VMs during off-hours”, which deploys runbooks, schedules, and log analytics in your Azure subscription for managing instances. Configuration is done in the runbook parameters and variables, and email notifications can be sent for each schedule.

This solution steps you through the setup for timing of start and stop, along with email configuration and the target VMs. However, multiple schedules require multiple deployments of the solution, and connecting to additional Azure subscriptions requires even more deployments. They do include the ability to order or sequence your start/stop, which can be very helpful for multi-component applications, but there’s no option for temporary overrides and no UI for self-service management. One really nice feature is the ability to recognize when instances are idle, and automatically stop them after a set time period, which the other tools don’t provide.
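
The runbooks this solution deploys are PowerShell, but as a rough sketch of what the stop action boils down to, here is a Python example using the azure-mgmt-compute SDK; the subscription ID, resource group, and VM name are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

credential = DefaultAzureCredential()
compute = ComputeManagementClient(credential, subscription_id="<subscription-id>")

# Deallocate (rather than just power off) so compute charges stop accruing.
poller = compute.virtual_machines.begin_deallocate(
    resource_group_name="dev-rg",
    vm_name="dev-vm-01",
)
poller.result()  # block until the VM is fully deallocated
```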

Google Cloud Scheduler

Google has also packaged some of its Cloud components together into a scheduling solution built around Google Cloud Scheduler. This includes Google Cloud Functions for running the scripts, Google Cloud Pub/Sub messages for driving the actions, and Google Cloud Scheduler jobs to actually kick off the start and stop of the VMs. Unlike AWS and Azure, this requires individual setup (instead of being packaged into a deployment), but the documentation takes you step-by-step through the process.

Google Cloud Scheduler relies on instance names instead of tags by default, though the functions are all made available for you to modify as you need. The settings are all built into those functions, which makes updating or modifying much more complicated than the other services. There’s also no real UI available, and the out-of-the-box experience is fairly limited in scope.
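
Below is a minimal sketch of what one of those Cloud Functions can look like: a Pub/Sub-triggered Python function that stops a single Compute Engine instance. The JSON payload shape (project, zone, instance) is an assumption for illustration; Google’s own sample functions work similarly but select instances by name or label:

```python
import base64
import json

from googleapiclient import discovery

# Uses the function's default service account credentials.
compute = discovery.build("compute", "v1")

def stop_instance(event, context):
    """Pub/Sub-triggered function that stops one Compute Engine VM.

    Assumes the Cloud Scheduler job publishes a JSON payload such as:
    {"project": "my-project", "zone": "us-central1-a", "instance": "dev-vm-1"}
    """
    payload = json.loads(base64.b64decode(event["data"]).decode("utf-8"))
    compute.instances().stop(
        project=payload["project"],
        zone=payload["zone"],
        instance=payload["instance"],
    ).execute()
```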

Cloud Native or Third Party?

Each of the instance scheduler tools provided by the cloud providers has a few limitations. One possible dealbreaker is that none of these tools are multi-cloud capable, so if your organization uses multiple public clouds then you may need to go for a third-party tool. They also don’t provide a self-service UI, built-in RBAC capabilities, Single Sign-On, or reporting capabilities. When it comes to cost, all of these tools are “free”, but you end up paying for the deployed infrastructure and services that are used, so the cost can be very hard to pin down.

We built ParkMyCloud to solve the instance scheduler problem (now with rightsizing too). Here’s how the functionality stacks up against the cloud-native options:

 

Comparison chart: AWS Instance Scheduler vs. Microsoft Azure Automation vs. Google Cloud Scheduler vs. ParkMyCloud, across the following capabilities:

Virtual Machine scheduling
Database scheduling
Scale Set scheduling
Tag-based scheduling
Usage-based recommendations
Simple UI
Resize instances
Override Schedules
Reporting
Start/Stop notifications
Multi-Account
Multi-Cloud

Overall, the cloud-native instance scheduler tools can help you get started on your cost-saving journey, but may not fulfill your longer-term requirements due to their limitations.

Try ParkMyCloud with a free trial — we think you’ll find that it meets your needs in the long run.