Application Containerization: Pros and Cons

What is Application Containerization?

Application containerization is more than just a new buzzword in cloud computing; it is changing the way resources are deployed into the cloud. However, many people are still coming to grips with what application containerization is, how it works, and the benefits it can deliver.

Most people understand that the term “cloud computing” refers to renting computing services over the Internet from cloud service providers (AWS, Azure, Google, etc.). Cloud computing breaks down into three broad categories – Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) – often called the “cloud computing stack” because they build on top of one another.

The benefits of cloud computing are easiest to see at the IaaS level, where, rather than building a physical, on-premises IT infrastructure, businesses can simply pay for the computing services they need, on demand. The advantages of cost, scalability, flexibility and low maintenance overheads have driven IaaS cloud computing to become a $50 billion industry in little more than a decade.

However, IaaS cloud computing also has its issues. To take advantage of the benefits, businesses have to rent virtual machines (VMs, or “instances”), which replicate the features of a physical IT environment. This means paying for a server complete with its own operating system and the software required to run that operating system, even if you only want to launch a single application.

Where Application Containerization Comes Into the Picture

By comparison, application containerization allows businesses to launch individual applications without the need to rent an entire VM. It does this by “virtualizing” an operating system and giving containers access to a single operating system kernel – each container comprising the application and the software required for the application to run (settings, libraries, storage, etc.).

The process of application containerization allows multiple applications to be distributed across a single host operating system without requiring their own VMs, which can lead to significant cost savings. Whereas a server hosting eight applications in eight VMs would previously have run eight copies of the operating system – one per VM – eight containers can share a single operating system.

In addition to significant cost savings, application containerization allows for greater portability. This can accelerate the process of testing applications across different environments because there is no waiting for an operating system to boot up. Furthermore, if an application crashes during testing, it only takes down its isolated container rather than the entire operating system.

One further benefit of application containerization is that containers can be clustered together for easy scalability or to work together as microservices. In the latter case, if an application requires updating or replacing, it can be done in isolation from other applications and without the need to stop the entire service. Lower costs, greater portability and minimal downtime are three reasons why application containerization has become more than just a buzzword in cloud computing and is changing the way resources are deployed into the cloud.

The Downsides of Application Containerization

Unfortunately, there are downsides to application containerization. Some of these – for example, container networking – are being resolved as more businesses adopt the technology. However, container security and complexity remain issues, as does the potential for costs to spiral out of control, as they often do when businesses adopt new technologies.

The security issue stems from containers sharing the same operating system. If a vulnerability in the operating system or the kernel is exploited, it affects the security of every container sharing that kernel. Consequently, security policies have to be applied to every container, with all but essential activities forbidden.

Containers also add more operational complexity than you might at first assume: there are more moving parts to orchestrate and more components to manage.

With regard to costs, the risk exists that developers will launch multiple containers and fail to terminate them when they are no longer required. Given how many more containers are launched compared to VMs, it will not take long for container-related cloud waste to match VM-related cloud waste – which we have estimated at $12.9 billion per year.

The problem with controlling cloud spend using cloud management software is that many solutions fail to identify unused containers because the solutions are host-centric rather than role-centric. For an effective way to control cloud spend, speak with ParkMyCloud about our cloud cost management software.  

6 Cost-Saving Cloud API Integrations

One common thing I hear from people evaluating tools for individual or team use is the need for a wide range of service and cloud API integrations. At ParkMyCloud, we have the same mindset, which is why we’ve recently published a whole list of API endpoints that can be used to combine cloud cost savings with your existing tools and services to achieve continuous cost control!

The updated list of APIs is available at https://prod-api.parkmycloud.com/ and is publicly documented so you can check it out now. Once you’re a ParkMyCloud customer, you can get an API key from us to start leveraging the cost-saving power of ParkMyCloud outside of our simple-to-use UI. There are tons of things you can write with these new APIs, so here are a few ideas, based on what pain points you need to automate:

1. If You Have Lots of Users: Active Directory Group-to-Role Mapping

With exposed APIs for creating Teams and Users in ParkMyCloud, pulling information from Active Directory or LDAP can make your initial setup a breeze. Each instance in ParkMyCloud is on exactly one team, while users can be a part of zero or more teams. This team membership is what determines the list of resources available to that user.

AD users typically have group memberships based on what level of access they need to AD-connected systems. By querying Active Directory programmatically, you can get a list of users and what groups they are a part of, then create corresponding teams and assign access based on those groups — saving yourself from doing it manually and keeping governance measures in place.
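As a rough illustration, here’s a minimal Python sketch of that workflow. It assumes the ldap3 library for the Active Directory query; the ParkMyCloud paths (“/teams”, “/users”), auth header, and payload fields are hypothetical placeholders, so check the published API documentation for the real routes and schemas:

    # Mirror Active Directory groups into ParkMyCloud teams.
    import requests
    from ldap3 import Server, Connection, ALL, SUBTREE

    PMC_API = "https://prod-api.parkmycloud.com"
    HEADERS = {"X-Api-Key": "YOUR_API_KEY"}  # issued to ParkMyCloud customers

    # 1. Pull users and their group memberships from AD.
    server = Server("ldap://ad.example.com", get_info=ALL)
    conn = Connection(server, user="EXAMPLE\\svc_pmc", password="secret", auto_bind=True)
    conn.search("dc=example,dc=com", "(objectClass=user)",
                search_scope=SUBTREE, attributes=["sAMAccountName", "memberOf"])

    # 2. Create one ParkMyCloud team per AD group, then attach each user.
    for entry in conn.entries:
        username = str(entry.sAMAccountName)
        for group_dn in entry.memberOf:
            team = str(group_dn).split(",")[0].removeprefix("CN=")
            requests.post(f"{PMC_API}/teams", json={"name": team}, headers=HEADERS)
            requests.post(f"{PMC_API}/users",
                          json={"username": username, "teams": [team]}, headers=HEADERS)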

2. If You Have Lots of Cloud Accounts: Programmatically Import Them for Cost Control

More and more companies are moving to separate AWS accounts, Azure subscriptions, or Google Cloud projects. This makes it much easier to separate dev from test, QA from staging, and production from everything else. I’ve talked to customers who have hundreds of accounts, to whom I’d recommend creating a programmatic way to add them.
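If your accounts live under AWS Organizations, a sketch like the following could drive that import. The Organizations calls are real boto3 APIs; the ParkMyCloud “/credentials” endpoint, auth header, and payload fields are assumptions for illustration:

    # Enumerate AWS accounts and register each one in ParkMyCloud.
    import boto3
    import requests

    PMC_API = "https://prod-api.parkmycloud.com"
    HEADERS = {"X-Api-Key": "YOUR_API_KEY"}

    org = boto3.client("organizations")
    for page in org.get_paginator("list_accounts").paginate():
        for account in page["Accounts"]:
            requests.post(
                f"{PMC_API}/credentials",
                headers=HEADERS,
                json={
                    "provider": "aws",
                    "name": account["Name"],
                    # assumes a cross-account IAM role ParkMyCloud can assume
                    "roleArn": f"arn:aws:iam::{account['Id']}:role/ParkMyCloud",
                },
            )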

3. If You’re a Managed Service Provider: Manage Customer Accounts in One Fell Swoop

If you’re a managed service provider (MSP), you’re always looking for additional value to add to the services you provide to your customers. The challenge you’ll face is that you have many customers, each with multiple cloud accounts, but you must keep a distinct separation between clients. With ParkMyCloud, you can easily add value by automating your customers’ cloud cost savings. To manage several customers at once, use the ParkMyCloud API to connect multiple cloud accounts and create policies that handle each customer’s resources automatically. As the admin, you can use teams to keep things separated for multi-tenancy and user governance.

4. If You Spend Your Day in CI/CD Tools: Integrate Continuous Cost Control, Too

Plugging into CI/CD tools like Bamboo and Jenkins has always been the bread-and-butter use case for our API. Software build management and continuous deployment often leave lots of large servers sitting idle between code commits, so ParkMyCloud can manage those instances with schedules that are easily overridden using the “snooze” API. Once an instance is snoozed (a temporary override), it can be toggled on, used for the build or deployment, and then powered back off when the snooze ends. These few REST API calls can be added to the beginning of any build job, so you get cost savings with minimal changes to your setup.
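Here’s a minimal sketch of that snooze-then-toggle pattern as it might appear at the top of a build job. The resource ID, auth header, and the “/snooze” and “/toggle” routes are illustrative assumptions rather than the documented ParkMyCloud API:

    # Open a temporary window for a build on a parked instance.
    import requests

    PMC_API = "https://prod-api.parkmycloud.com"
    HEADERS = {"X-Api-Key": "YOUR_API_KEY"}
    BUILD_SERVER = "i-0abc123"  # hypothetical ParkMyCloud resource ID

    # Snooze the schedule for 2 hours so it won't park the instance mid-build...
    requests.post(f"{PMC_API}/resources/{BUILD_SERVER}/snooze",
                  json={"hours": 2}, headers=HEADERS)
    # ...then toggle the instance on. When the snooze expires, the normal
    # schedule resumes and powers the instance back off.
    requests.post(f"{PMC_API}/resources/{BUILD_SERVER}/toggle",
                  json={"state": "on"}, headers=HEADERS)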

5. If The Higher-Ups are On Your Back: Visually Track Savings Data

The ParkMyCloud API allows you to pull the current month-to-date savings information, along with the current estimated 30-day savings that your instance schedules are creating. You can take this data and show it proudly on your existing intranet homepage or monitoring dashboard. You’ll show the bosses how much you’re saving on cloud costs and be the hero of the office, and it can really help drive more users and groups to save even more money.
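Feeding a dashboard tile could be as simple as the sketch below; the “/savings” path and the response field names are assumptions for illustration:

    # Pull savings numbers for an intranet or monitoring dashboard.
    import requests

    resp = requests.get("https://prod-api.parkmycloud.com/savings",
                        headers={"X-Api-Key": "YOUR_API_KEY"})
    data = resp.json()
    print(f"Month-to-date savings: ${data['monthToDate']:,.2f}")
    print(f"Projected 30-day savings: ${data['next30Days']:,.2f}")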

6. If You’re A Huge Slacker (or Hipchatter): Utilize ChatOps For Quick Instance Management

Slack and HipChat have changed the game in the DevOps world. Not only can you get rapid feedback (one of the core DevOps tenets) from all of your tools and colleagues, but you can also use ChatBots to control your environment and workloads. ParkMyCloud can send notifications to your chat programs via Webhooks, and you can use ChatBots to send API calls back to ParkMyCloud. Get notified in your dev Slack channel about a system being shut down, then immediately respond in the channel to override the schedule and let your team know that you’ve got it taken care of.
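As a starting point, here’s a small Flask sketch that relays a ParkMyCloud webhook notification into a Slack channel. The webhook payload fields (“resourceName”, “action”) are assumptions about the notification format, and the Slack incoming-webhook URL is a placeholder you create in Slack:

    # Relay ParkMyCloud webhook notifications into Slack.
    from flask import Flask, request
    import requests

    app = Flask(__name__)
    SLACK_HOOK = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

    @app.route("/pmc-webhook", methods=["POST"])
    def pmc_webhook():
        event = request.get_json()
        requests.post(SLACK_HOOK, json={
            "text": f":zzz: {event['resourceName']} was {event['action']} by ParkMyCloud."
        })
        return "", 204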

Got Any Cool Cloud API Integration Ideas?

We’d love to hear your ideas for how you might utilize these cloud API integrations with ParkMyCloud. Once you get that script written, open a pull request on our public GitHub repo so we can share it with the ParkMyCloud community. If you’ve got an idea but don’t know where to start, comment below or share the idea with us on Twitter and we can chat about implementation details. If you’re a programmer who wants to work on some integrations but isn’t yet saving money on your cloud bill, check out our free trial of ParkMyCloud today!

Don’t Let Your Server Patching Schedule Get in the Way of Cost Control

Don’t let your server patching schedule get in the way of saving money. The idea of minimizing cloud waste was a very new concept two years ago, but as cloud use has grown, so has the need to minimize wasted spend. CFOs now demand that cloud operations teams turn off idle systems in the face of rising cloud bills, but the users of these systems are the ones who have to deal with servers being off when they need them.

Users of ParkMyCloud are able to overcome some of the common objections to scheduling non-production resources. The most common objection is, “What if I need the server or database when it’s scheduled to be off?” That’s why ParkMyCloud offers the ability to “snooze” the schedule, which is a temporary override that lets you choose how long you need the system for. This snooze can be done easily from our UI, or through alternative methods like our API, mobile app, or Slackbot.

A related objection concerns how your parking schedule interacts with your server patching schedule. The most common way of dealing with patching in ParkMyCloud is to use our API. The workflow is to log in through the API, get a list of the resources, then “snooze” the schedules of the resources you want for a couple of hours, or however long the patching takes. Once a schedule is snoozed, you can toggle the instance on, then do the patching. After the patching is complete, you can either cancel the snooze to go back to the original schedule or wait for the snooze to time out. If you have an automated patching tool that can make REST calls, this can be an easy way to patch on demand with minimal work.
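Sketched in Python, the workflow might look like the following; the routes, auth header, and tag convention are illustrative assumptions, so consult the API documentation for the real ones:

    # Open a patch window on parked instances, then let the schedule resume.
    import requests

    PMC_API = "https://prod-api.parkmycloud.com"
    HEADERS = {"X-Api-Key": "YOUR_API_KEY"}

    # 1. List resources and pick the ones tagged for tonight's patching.
    resources = requests.get(f"{PMC_API}/resources", headers=HEADERS).json()
    to_patch = [r for r in resources if r.get("tags", {}).get("patch") == "tonight"]

    for r in to_patch:
        # 2. Snooze the schedule for the length of the patch window...
        requests.post(f"{PMC_API}/resources/{r['id']}/snooze",
                      json={"hours": 2}, headers=HEADERS)
        # 3. ...and toggle the instance on so the patch tool can reach it.
        requests.post(f"{PMC_API}/resources/{r['id']}/toggle",
                      json={"state": "on"}, headers=HEADERS)
    # Afterwards, either cancel the snooze or let it time out; the original
    # parking schedule then takes over again.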

If you’re on a weekly server patching schedule, you could also just implement the patch times into your pre-set schedules so that the instances turn on, say, at 2:00 a.m. on Wednesdays. By plugging this into your normal schedules, you can still save money during most off-hours, but have the instances on when the patch window is open. This can be a great way to do weekly backups as well, with minimal disruption.

Plugging ParkMyCloud into external tools and processes like these is the best way to get every developer and CloudOps engineer on board with continuous cost control. By overcoming these objections, you can reduce your cloud costs and be the hero of your organization. Start a free trial today to see these integrations in action!

7 Ways Cloud Services Pricing is Confusing

Beware the sticker shock – cloud services pricing is nothing close to simple, especially as you come to terms with the dollar amount on your monthly cloud bill. While cloud service providers like AWS, Azure, and Google were meant to provide compute resources to save enterprises money on their infrastructure, cloud services pricing is complicated, messy, and difficult to understand. Here are 7 ways that cloud providers obscure pricing on your monthly bill:  

1 – They use varying terminology

For the purposes of this post, we’ll focus on the three biggest cloud service providers: AWS, Azure, and Google. Between these three providers alone, different terminology is used for just about every component of the services offered.

For example, what you think of as a virtual machine (VM) is what AWS calls an “instance,” Azure calls a “virtual machine,” and Google calls a “virtual machine instance.” If you have a group of these machines, Amazon and Google call it an “auto-scaling” group, whereas Azure calls it a “scale set.” There’s also different terminology for the pricing models. AWS offers “on-demand” instances, Azure calls it “pay as you go,” and Google’s equivalent on-demand usage earns “sustained use” discounts. You’ve also got “reserved instances” in AWS, “reserved VM instances” in Azure, and “committed use” in Google. And there are “spot instances” in AWS, which are comparable to “low-priority VMs” in Azure and “preemptible instances” in Google.

2 – There’s a multitude of variables

Operating system, compute, network, memory, and disk space are all factors that go into the pricing and sizing of these instances. Each virtual machine instance also has a category: general purpose, compute optimized, memory optimized, disk optimized, and other types. Then, within each of these instance types, there are different families. In AWS, the cheapest and smallest instances are in the “t2” family; in Azure, they’re in the “A” family. On top of that, there are different generations within each family – in AWS there are t2, t3, m2, m3, and m4 – and within each of those families, different sizes (small, medium, large, extra large, and so on). So there are lots of different options available. Oh, and lots of confusion, too.

3 – It’s hard to see what you’re spending

If you aren’t familiar with the AWS, Azure, or Google Cloud consoles, it can be hard to find what you’re looking for. To find specific features you really need to dig in, and even the basics – how much you’re currently spending, and how much you’re likely to spend – can be very hard to work out. You can build your own dashboard by pulling from the providers’ APIs, but that takes a lot of upfront effort, or you can purchase an external tool to manage overall cost and spending.
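For a sense of what the do-it-yourself route involves, here’s a minimal sketch that pulls month-to-date spend from AWS Cost Explorer via boto3 (a real AWS API; Azure and Google have their own billing APIs you’d query similarly):

    # Month-to-date AWS spend: the seed of a homegrown cost dashboard.
    import boto3
    from datetime import date

    ce = boto3.client("ce")
    today = date.today()
    resp = ce.get_cost_and_usage(
        # Note: the End date is exclusive and must be later than Start,
        # so this simple version fails on the 1st of the month.
        TimePeriod={"Start": today.replace(day=1).isoformat(),
                    "End": today.isoformat()},
        Granularity="MONTHLY",
        Metrics=["UnblendedCost"],
    )
    amount = resp["ResultsByTime"][0]["Total"]["UnblendedCost"]["Amount"]
    print(f"Month-to-date AWS spend: ${float(amount):,.2f}")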

4 – It’s based on what you provision…not what you use

Cloud services can be charged on a per-hour, per-minute, or per-second basis. If you’re used to the on-prem model, where you deploy things and leave them running 24/7, this kind of pricing model can catch you out: in the cloud’s on-demand model, everything is based on the amount of time a resource is provisioned and running – whether or not you’re actually using it.

When you’re charged per hour, 6 cents per hour might not seem like much, but after running instances for the roughly 730 hours in a month, it turns out to be a lot of money. This leads to another sub-point: the bill doesn’t arrive until about 5 days after the month ends, and it’s not until then that you see what you’ve used. While you’re using instances (or VMs), you don’t really think about turning them off – or you may even lose track of servers. We’ve had customers with servers in different regions, or on accounts that don’t get checked regularly, who didn’t even realize those servers had been running all this time, racking up bill after bill.
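The arithmetic is worth spelling out:

    # What "6 cents an hour" really costs when instances never turn off.
    hourly_rate = 0.06        # $/hour
    hours_per_month = 730     # roughly 24 * 365 / 12
    per_instance = hourly_rate * hours_per_month
    print(f"One instance: ${per_instance:.2f}/month")            # $43.80
    print(f"A fleet of 100: ${per_instance * 100:,.2f}/month")   # $4,380.00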

You might also be overprovisioning or oversizing resources – for example, provisioning multiple extra-large instances on the theory that you might need them someday. If you’re used to overprovisioning everything by twice as much as you need, it can really come back to bite you when you look at the bill: you’ve been running resources without utilizing them, but you’re still being charged for them – constantly.

5 – They change the pricing frequently

Cloud services pricing changes quite often. So far, prices have been trending downward – things have been getting cheaper over time due to factors like competition and increased utilization of the providers’ data centers. However, don’t assume prices will never go up.

Frequent price changes make it hard to map out usage and costs over time. Amazon alone has changed its prices more than 60 times since launch, making it hard for users to plan a long-term approach. Prices are also not displayed in a way that is easy to track, so if you have instances deployed for a long time, you may not even realize a price change has happened while you’ve been running the same instances on a consistent basis.

6 – They offer cost savings options… but they’re difficult to understand (or implement)

In AWS, there are some cost savings measures available for shutting things down on a schedule, but in order to run them you need to be familiar with Amazon’s internal tools like Lambda and RDS. If you’re not already familiar, it may be difficult to actually implement this just for the sake of getting things to turn off on a schedule.  

One of the other things you can use in AWS is Reserved Instances, or with Azure you can pay upfront for a full year or two years. The problem: you need to plan ahead for the next 12 to 24 months and know exactly what you’re going to use over that time, which sort of goes against the nature of cloud as a dynamic environment where you can just use what you need. Not to mention, going back to point #2, the obscure terminology for spot instances, reserved instances, and what the different sizes are.
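To make the planning problem concrete, here’s the break-even math with made-up round numbers (not published prices): if a reservation effectively costs 60% of the on-demand rate, it only pays off when the instance actually runs more than about 60% of the time.

    # Break-even utilization for a reserved instance (illustrative rates).
    on_demand_hourly = 0.10      # $/hour, hypothetical
    ri_effective_hourly = 0.06   # $/hour averaged over the term, hypothetical
    hours_per_year = 8760

    ri_cost = ri_effective_hourly * hours_per_year   # paid whether you run or not
    breakeven_hours = ri_cost / on_demand_hourly
    print(f"Break-even: {breakeven_hours:.0f} hours/year "
          f"({breakeven_hours / hours_per_year:.0%} utilization)")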

7 – Each service is billed in a different way

Cloud services pricing shifts between IaaS (infrastructure as a service), where VMs are billed one way, and PaaS (platform as a service), which is billed another way. These differing billing mechanisms can be very confusing as you expand into the other services that cloud providers offer.

As an example, Lambda functions in AWS are charged based on the number of requests for your functions and the duration your code takes to execute. The free tier includes 1M requests and 400,000 GB-seconds of compute time per month; beyond that, you pay $0.20 per 1M requests plus $0.00001667 for every GB-second of duration used – simple, right? Not so much.
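A worked example makes the formula clearer. Take a hypothetical function with 1 GB of memory, averaging 200 ms per invocation, handling 10 million requests a month:

    # Monthly Lambda bill for the hypothetical function above.
    requests_per_month = 10_000_000
    memory_gb = 1.0
    seconds_per_invocation = 0.200

    request_charge = max(requests_per_month - 1_000_000, 0) / 1_000_000 * 0.20
    gb_seconds = requests_per_month * memory_gb * seconds_per_invocation
    duration_charge = max(gb_seconds - 400_000, 0) * 0.00001667
    print(f"Requests: ${request_charge:.2f}")                  # $1.80
    print(f"Duration: ${duration_charge:.2f}")                 # $26.67
    print(f"Total: ${request_charge + duration_charge:.2f}")   # $28.47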

Another example comes from the databases you can run in Azure. A database can run on a single server or be priced through elastic pools, each with different pricing tables based on the type of database, and then priced by storage, number of databases, and so on.

With Google Kubernetes clusters, you’re getting charged per node in the cluster, and each node is charged based on size. Nodes are auto-scaled, so price will go up and down based on the amount that you need. Once again, there’s no easy way of knowing how much you use or how much you need, making it hard to plan ahead.

What can you do about it?

Ultimately, cloud service offerings are there to help enterprises save money on their infrastructures, and they’re great options IF you know how to use them. To optimize your cloud environment and save money on costs, we have a few suggestions:

    • Get a single view of your billing. You can write your own scripts (but that’s not the best answer) or use an external tool.  
    • Understand how each of the services you use is billed. Download the bill, look through it, and work with your team to understand how you’re being billed.
    • Make sure you’re not running anything you shouldn’t be. Shut things down when you don’t need them, like dev and test instances on nights and weekends.
    • Review regularly to plan out usage and schedules as much as you can in advance.
    • Put governance measures in place so that users can only access certain features, regions, and limits within the environment. 

Cloud services pricing is tricky, complicated, and hard to understand. Don’t let this confusion affect your monthly cloud bill. Try ParkMyCloud for an automated solution to cost control.

How to Use Terraform Provisioning and ParkMyCloud to Manage AWS

Recently, I’ve been on a few phone calls where I get asked about cost management of resources built in AWS using Terraform provisioning. One of the great things about working with ParkMyCloud customers is that I get a chance to talk to a lot of different technical teams from various types of businesses. I get a feel for how the modern IT landscape is shifting and trending, plus I get exposed to the variety of tools that are used in real-world use cases, like Atlassian Bamboo, Jenkins, Slack, Okta, and Hashicorp’s Terraform.

Terraform seems to be the biggest player in the “infrastructure as code” arena. If you’re not already familiar with it, it’s fairly straightforward to use and the benefits quickly become apparent. You take a text file, use it to describe your infrastructure down to the finest detail, then run “terraform apply” and it just happens. If you need to change your infrastructure, or revert any unwanted changes, the configuration can be updated or rolled back to a known state. By working with AWS, Azure, VMware, Oracle, and many more providers, Terraform can be your one place for infrastructure deployment and provisioning.

How to Use Terraform Provisioning and ParkMyCloud with AWS Autoscaling Groups

I’ve talked to a few customers recently who utilize Terraform as their main provisioning tool, with ParkMyCloud as their ongoing cloud governance and cost control tool. Using the two systems together works well, but one common point of confusion is AWS’s Auto Scaling Groups (ASGs). The question I usually get asked is how Terraform handles the changes ParkMyCloud makes when scheduling ASGs, so let’s take a look at the interaction.

When ParkMyCloud “parks” an ASG, it sets the Min/Max/Desired values to 0/0/0 by default, saving the values you had originally entered for that ASG as the “started” values. If you run “terraform apply” while the ASG is parked, Terraform will notice that the Min/Max/Desired values are 0 and change them back to the values you state. The next time ParkMyCloud polls AWS (which happens every 10 minutes), it will see that the ASG is started and park it again as scheduled.

If you change the value of the Min/Max/Desired in Terraform, this will get picked up by ParkMyCloud as the new “on” values, even if the ASG was parked when you updated it. This means you can keep using Terraform to deploy and update the ASG, while still using ParkMyCloud to park the instances when they’re idle.
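For those who think in code, here’s what the parking mechanics amount to in boto3 terms (real AWS Auto Scaling APIs, but a simplified sketch of the behavior rather than ParkMyCloud’s actual implementation): parking is just setting Min/Max/Desired to zero after saving the originals, and starting restores them – the same three fields Terraform manages, which is why the two tools can coexist.

    # Simplified sketch of "parking" and "starting" an ASG.
    import boto3

    asg = boto3.client("autoscaling")

    def park(group_name: str) -> dict:
        """Scale the ASG to zero, returning the values needed to restore it."""
        group = asg.describe_auto_scaling_groups(
            AutoScalingGroupNames=[group_name])["AutoScalingGroups"][0]
        saved = {"MinSize": group["MinSize"], "MaxSize": group["MaxSize"],
                 "DesiredCapacity": group["DesiredCapacity"]}
        asg.update_auto_scaling_group(AutoScalingGroupName=group_name,
                                      MinSize=0, MaxSize=0, DesiredCapacity=0)
        return saved

    def start(group_name: str, saved: dict) -> None:
        """Restore the ASG to its pre-parking size."""
        asg.update_auto_scaling_group(AutoScalingGroupName=group_name, **saved)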

How to Use Terraform to Set Up ParkMyCloud

If you currently leverage Terraform provisioning for AWS resources but don’t have ParkMyCloud connected yet, you can also utilize Terraform to do the initial setup of ParkMyCloud. Use this handy Terraform script to create the necessary IAM Role and Policy in your AWS account, then paste the ARN output into your ParkMyCloud account for easy setup. Now you’ll be deploying your instances as usual using Terraform provisioning while parking them easily to save money!