Application Containerization: Pros and Cons

What is Application Containerization?

Application containerization is more than just a new buzzword in cloud computing; it is changing the way in which resources are deployed into the cloud. However, many people are still coming to grips with the concept of application containerization, how it works, and the benefits it can deliver.

Most people understand that the term “cloud computing” refers to renting computing services over the Internet from cloud service providers (AWS, Azure, Google, etc.). Cloud computing breaks down into three broad categories – Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) – often called the “cloud computing stack” because they build on top of one another.

The benefits of cloud computing are easily seen at the IaaS level, where, rather than building a physical, on-premises IT infrastructure, businesses can simply pay for the computing services they need as they want them, on demand. The advantages of cost, scalability, flexibility and low maintenance overheads have driven IaaS cloud computing to become a $50 billion industry in little more than a decade.

However, IaaS cloud computing also has its issues. In order to take advantage of the benefits, businesses have to rent virtual machines (VMs or “instances”) which replicate the features of a physical IT environment. This means paying for a server complete with its own operating system and the software required to run the operating system, even if you only want to launch a single application.

Where Application Containerization Comes Into the Picture

By comparison, application containerization allows businesses to launch individual applications without the need to rent an entire VM. It does this by “virtualizing” an operating system and giving containers access to a single operating system kernel – each container comprising the application and the software required for the application to run (settings, libraries, storage, etc.).

The process of application containerization allows multiple applications to be distributed across a single host operating system without requiring their own VMs, which can lead to significant cost savings. Whereas a server hosting eight applications in eight VMs would previously run eight copies of the operating system (one per VM), the same eight applications in containers can all share a single operating system.

In addition to significant cost savings, application containerization allows for greater portability. This can accelerate the process of testing applications across different operating systems because there is no waiting for the operating system to boot up. Furthermore, if the application crashes during testing, it only takes down the isolated container rather than the entire operating system.

One further benefit of application containerization is that containers can be clustered together for easy scalability or to work together as microservices. In the latter case, if an application requires updating or replacing, it can be done in isolation from other applications and without the need to stop the entire service. The lower costs, greater portability and minimal downtime are three reasons why application containerization has become more than just a new buzzword in cloud computing and is changing the way in which resources are deployed into the cloud.

The Downsides of Application Containerization

Unfortunately, there are downsides to application containerization. Some of these – for example, container networking – are being resolved as more businesses take advantage of application containerization. However, container security and complexity remain issues, as does the potential for costs to spiral out of control, as they often do when businesses adopt new technologies.

The security issue stems from containers sharing the same operating system. If a vulnerability in the operating system or the kernel is exploited, it will affect the security of every application running on that operating system. Consequently, security policies have to be turned on for every application, with all non-essential activities forbidden.

Containers also add more operational complexity than you might at first assume, adding more to orchestrate and requiring additional management.

With regard to costs, the risk exists that developers will launch multiple containers and fail to terminate them when they are no longer required. Given how many more containers are launched compared to VMs, it will not take long for container-related cloud waste to match VM-related cloud waste – estimated in this blog post at $12.9 billion per year.

The problem with controlling cloud spend using cloud management software is that many solutions fail to identify unused containers because the solutions are host-centric rather than role-centric. For an effective way to control cloud spend, speak with ParkMyCloud about our cloud cost management software.  

DevFinOps: Why Finance Needs to be Integrated with Development and Operations

The formation of DevOps brought together two distinct worlds, causing a shift in IT culture that can only be made better (and more cost effective) by the integration of financial strategy  – enter DevFinOps. We say this partially in jest… yeah, we know, you’ve had enough of the Dev-blank-blank mashups. But really, this is something that we’ve been preaching about since the start of ParkMyCloud. As long as the public cloud remains a utility, everyone should be responsible for controlling the cost of their cloud use, meaning “continuous cost control” should be integrated into the processes of continuous integration and delivery.  

What is DevFinOps?

Hear us out — you at least need to start thinking of financial management as an element in the DevOps process. Time and time again, we see DevOps teams overspend and face major organizational challenges when inevitably the Finance team (or the CTO) starts enforcing a stricter budget. Cost control becomes a project, derailing forward development motion by rerouting valuable resources toward implementing spend management processes.  

It doesn’t need to be this way.

As financial resources are finite, they should be an integrated element from the very beginning when possible, and otherwise as soon as possible. Our product manager, Andy Richman,  recently discussed this concept further in a podcast for The CloudCast.

There are a number of ways that finance can be integrated into DevOps, but one near and dear to our hearts is with automated cloud cost control. A mental disconnect between cloud resources and their costs causes strain on budgets and top-down pressure to get spending under control.

Changing the Mindset: Cloud is a Utility

The reason for this disconnect is that as development and operations have moved to the cloud, the way we assess costs has changed profoundly in the same way that infrastructure has changed. A move to the cloud is a move to pay-as-you-go compute resources.

This is due to the change in pricing structure and mindset that happened with the shift from traditional infrastructure to public cloud. As one of our customers put it:

“It’s been a challenge educating our team on the cloud model. They’re learning that there’s a direct monetary impact for every hour that an idle instance is running. The world of physical servers was all CapEx driven, requiring big up-front costs, and ending in systems running full time. Now the model is OpEx, and getting our people to see the benefits of the new cost-per-hour model has been challenging but rewarding.”

In a world where IT costs already tend to exceed budgets, there’s an added struggle to calculating long-term cost estimates for applications that are developed, built and run on a utility. But wasn’t the public cloud supposed to be more cost effective? Yes, but only if every team and individual is aware of their usage, accountable for it, and empowered with tools that will give them insight and control over what they use. The public cloud needs to be thought of like any other utility.

Take your monthly electric bill, for example. If everyone in the office left the lights on 24 hours a day, 7 days a week, those costs would add up rather quickly. Meanwhile, you’d be wasting money on all those nights and weekends when your beautifully lit office is completely empty. But that doesn’t happen, because people understand that lights cost money, and most offices automate the lights with motion sensors (usage-based) or time-based schedules. Now apply that same thinking to the cloud and it’s easy to see why cost-effectiveness goes down the drain when individuals and teams aren’t aware of or accountable for the resources they’re using.

Financial decisions regarding IT infrastructure fall into the category of IT asset management (ITAM), an area that merges the financial, contractual and inventory components of an IT project to support lifecycle management and strategic decision-making. That brings us back to DevFinOps: an expansion of ITAM that builds the financial cost and value of IT assets directly into IT infrastructure management, updating calculations in real time and simplifying the budgeting process.

Why this is important now that you’re on cloud

DevFinOps proposes a more effective way to estimate costs: break them down into smaller estimates over time as parts of the work get completed, integrating financial planning directly into IT and cloud development operations. To do this, the DevOps team needs visibility into how and when resources are being used and an understanding of the opportunities for saving.

Like we’ve been saying: the public cloud is a utility – you pay for what you use. With that in mind, the easiest way to waste money is to leave your instances or VMs running 24 hours a day, 7 days a week, and the easiest way to save money is just as simple: turn them off when they’re idle. In a future post, we’ll discuss how you can implement this process for your organization using automated cost control – stay tuned.

Dear Daniel Ek: We Made You a Playlist About Your Google Cloud Spend.

Dear Daniel Ek,

Congrats on Spotify’s IPO! It’s certainly an exciting time for you and the whole company. We’re a startup ourselves, and it’s inspiring to see you shaking up the norms and succeeding on your first day on the stock exchange.

Of course, with big growth comes big operational changes. Makes sense. As cloud enthusiasts ourselves, we were particularly interested to see that you committed to 365 million euros/$447 million in Google Cloud spend over the next three years.

Congrats on choosing an innovative cloud provider that will surely serve your infrastructure needs well.

But we’d like to issue a word of warning. No, not about competing with Google – about something that hits the bottom line more directly, which I’m sure will concern you.

Maybe a playlist on our favorite music streaming service is the best way to say this:

What do we mean when we say not to waste money on Google Cloud resources you don’t need?

In fact, we estimate that up to $90 million of that spend could be on compute hours that no one is actually using – meaning it’s completely wasted.

How did we get there? On average, ⅔ of cloud spend goes to compute. Of that, 44% is on non-production resources such as those used for development, testing, staging, and QA. Typically, those resources are only needed for about 35% of the hours in a week (a 40-hour work week plus a margin of error), meaning the other 65% of hours in the week are not needed. More here.
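Spelled out as a quick back-of-the-envelope calculation – our own sketch using the industry averages above, not Spotify-specific figures:

```python
# Rough reconstruction of the waste estimate described above. All ratios are
# the averages quoted in this post, not figures from Spotify's actual bill.

total_commitment = 447_000_000      # three-year Google Cloud commitment, USD
compute_share = 2 / 3               # average share of cloud spend that goes to compute
non_production_share = 0.44         # share of compute spend on dev/test/staging/QA
hours_not_needed = 0.65             # non-production resources are idle ~65% of the week

estimated_waste = (total_commitment * compute_share
                   * non_production_share * hours_not_needed)

print(f"Estimated avoidable spend: ${estimated_waste:,.0f}")
# -> roughly $85 million, i.e. "up to $90 million" of the commitment
```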

That’s not to mention potential waste on oversized resources, orphaned volumes, PaaS services, and more.

Companies like McDonald’s, Unilever, and Sysco have chosen ParkMyCloud to reduce that waste by automatically detecting usage and then turning those resources off when they’re not needed – all while providing simple, governed access to their end users.

Daniel, we know you won’t want your team to waste money on your Google Cloud spend.

We’re here when you’re ready.

Cheers,

Jay Chapel

CEO, ParkMyCloud

Announcing SmartParking for Google Cloud Platform: Automated, Custom On/Off Schedules Based on GCP Metric Data

Today we’re excited to announce the latest cloud provider compatible with ParkMyCloud’s SmartParking™ – Google Cloud Platform! In addition to AWS and Azure, Google users will now benefit from the use of SmartParking to get automatic, custom on/off schedules for cloud resources based on actual usage metrics.

The method is simple: ParkMyCloud will import GCP metric data to look for usage patterns for your GCP virtual machine instances. With your utilization data, ParkMyCloud creates recommended schedules for each instance to turn off when they are typically idle, eliminating potential cloud waste and saving you money on your Google Cloud bill every month. You will no longer have to go through the process of creating your own schedule or manually shutting your VMs off – unless you want to. SmartParking automates the scheduling for you, minimizing idle time and cutting costs in the process.
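To give a rough sense of the idea – this is purely illustrative and is not ParkMyCloud’s actual algorithm – a recommendation like this can be thought of as scanning per-hour utilization history and flagging the hours when an instance is never busy:

```python
# Illustrative only -- not ParkMyCloud's actual algorithm. Given several weeks
# of hourly average CPU utilization per instance (0-100%), flag the hours of
# the week that are consistently idle and could be covered by an off schedule.

from typing import Dict, List

IDLE_CPU_THRESHOLD = 5.0   # percent; hypothetical cutoff for "idle"

def recommend_off_hours(hourly_cpu: Dict[int, List[float]]) -> List[int]:
    """hourly_cpu maps hour-of-week (0-167) to CPU samples from past weeks."""
    off_hours = []
    for hour, samples in sorted(hourly_cpu.items()):
        if samples and max(samples) < IDLE_CPU_THRESHOLD:
            off_hours.append(hour)   # never busy at this hour -> candidate to park
    return off_hours

# Example: an instance busy only 9 AM - 5 PM on weekdays would get nights and
# weekends back as recommended "parked" hours.
```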

Customized Scheduling – Not “One-Size-Fits-All”

SmartParking’s benefits are not “one-size-fits-all.” The recommended schedules can be customized like an investment portfolio – choose between “conservative”, “balanced”, or “aggressive” based on your preferences.

And like an investment, a bigger risk comes with a bigger reward. When receiving recommendations based on your GCP metric data, you’ll have the power to decide which of the custom schedules is best for you. If you’re going for maximum savings, aggressive SmartParking is your best bet, since your instances will be parked most of the time, with a small “risk” of occasionally finding an instance parked when you need it. But in the event that this does happen – no fear! You can still use ParkMyCloud’s “snooze button” to override the schedule and get the instance turned back on — and you can give your team governed access to do the same.

If you’d rather completely avoid having your instances shut off when you need them, you can opt for a conservative schedule. Conservative SmartParking only recommends parking during times that an instance has never been used, ensuring that you won’t miss a beat during any hour in which you’ve ever needed it.

If you’re worried about the risk of aggressive parking for maximum savings, but want more opportunities to save than conservative schedules will give you, then a “balanced” SmartParking schedule is a happy medium.

What People are Saying: Save More, Easier than Ever

Since ParkMyCloud debuted SmartParking for AWS in January and added Azure in March, customers have given positive feedback on the new functionality:

“ParkMyCloud has helped my team save so much on our AWS bill already, and SmartParking will make it even easier,” said Tosin Ojediran, DevOps Engineer at a FinTech company. “The automatic schedules will save us time and make sure our instances are never running when they don’t need to be.”

ParkMyCloud customer Sysco Foods has more than 500 users across 50 teams using ParkMyCloud to manage their AWS environments. “When I’m asked by a team how they should use the tool, they’re exceedingly happy that they can go in and see when systems are idle,” Kurt Brochu, Sysco Foods’ Senior Manager of the Cloud Enablement Team, said of SmartParking. “To me, the magic is that the platform empowers the end user to make decisions for the betterment of the business.”

Already a ParkMyCloud user? Log in to your account to try out the new SmartParking. Note that you will need to update the permissions that ParkMyCloud has to access your GCP metric data — see the user guide for instructions on that.

Not yet a ParkMyCloud user? Start a free trial here.

Looking for a Google Cloud Instance Scheduling Solution? As You Wish

Like other cloud providers, the Google Cloud Platform (GCP) charges for compute virtual machine instances by the amount of time they are running — which may lead you to search for a Google Cloud instance scheduling solution. If your GCP instances are only busy during or after normal business hours, or only at certain times of the week or month, you can save money by shutting these instances down when they are not being used. So can you set up this scheduling through the Google Cloud console? And if not – what’s the best way to do it?

Why bother scheduling a Google VM to turn off?

As mentioned, depending on your purchasing option, Google Cloud pricing is based on the amount of time an instance is running, charged at a per-second rate. We find that at least 40% of an organization’s cloud resources (and often much more) are for non-production purposes such as development, testing, staging, and QA. These resources are only needed when employees are actively using them for those purposes — so every second that they are left running when not being used is wasted spend. Non-production VM instances often have predictable workloads, such as a 7 AM to 7 PM workday, 5 days a week — which means the other 64% of the hours in the week, and the spend that goes with them, is completely wasted. Inconceivable!
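Here’s that math spelled out – a trivial sketch of the weekday-schedule arithmetic:

```python
# Why a 7 AM - 7 PM weekday schedule leaves ~64% of always-on spend wasted.

hours_per_week = 24 * 7        # 168 hours in a week
needed_hours = 12 * 5          # 7 AM - 7 PM, Monday through Friday = 60 hours

wasted_fraction = 1 - needed_hours / hours_per_week
print(f"Wasted if left running 24/7: {wasted_fraction:.0%}")   # -> 64%
```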

The good news is that these resources can be scheduled to turn off during nights and weekends to save money. So, let’s take a look at a couple of cloud scheduling options.

Scheduling Option 1: GCP set-scheduling Command

If you were to do a Google search on “google cloud instance scheduling,” hoping to find out how to shut your compute instances down when they are not in use, you would see numerous promising links. The first couple of references appear to discuss how to set instance availability policies and mention a gcloud command line interface for “compute instances set-scheduling”. However, a little digging shows that these interfaces and commands simply describe how to fine-tune what happens when the underlying hardware for your Google virtual machine goes down for maintenance. The options in this case are to migrate the VM to another host (which appears to be a live migration) or to terminate the VM, and whether the instance should be restarted if it is terminated. The documentation for the command goes so far as to say that the command is intended to let you set “scheduling options.” While it is great to have control over these behaviors, I feel I have to paraphrase Inigo Montoya – you keep using that word “scheduling” – I do not think it means what you think it means…

Scheduling Option 2: GCP Compute Task Scheduling

The next thing that looks schedule-like is the GCP Cron Service. This is a highly reliable networked version of the Unix cron service, letting you leverage the GCP App Engine services to do all sorts of interesting things. One article describes how to use the Cron Service and Google App Engine to schedule tasks to execute on your Compute Instances. With some App Engine code, you could use this system to start and stop instances as part of regularly recurring task sequences. This could be an excellent technique for controlling instances for scheduled builds, or calculations that happen at the same time of a day/week/month/etc.

While very useful for certain tasks, this technique really lacks flexibility. Google Cloud Cron Service schedules are configured by creating a cron.yaml file inside the App Engine application. The GCP Cron Service triggers events in the application, and getting the application to do things like start or stop instances is left as an exercise for the developer. If you need to modify the schedule, you need to go back in and modify the cron.yaml. Also, it can be non-intuitive to build a schedule around your working hours, in that you would need one event for when you want to start an instance and another for when you want to stop it. If you want to set multiple instances on different schedules, each would need its own events. This brings us to the final issue, which is that any given application is limited to 20 events for free, up to a maximum of 250 events for a paid application. Those sound like some eel-infested waters.
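To make that “exercise for the developer” concrete, here is a minimal sketch of what such App Engine code might look like. This is our own illustration, not a Google sample: it uses the google-api-python-client, the project ID, zone, and instance names are placeholders, and it assumes the App Engine default service account is allowed to stop Compute Engine instances.

```python
# Sketch of an App Engine handler that the Cron Service hits on a schedule
# to stop a fixed list of dev instances. A matching cron.yaml entry (one
# event per schedule, as discussed above) might look like:
#
#   cron:
#   - description: "nightly shutdown of dev instances"
#     url: /tasks/stop-dev-instances
#     schedule: every day 19:00

from flask import Flask
from googleapiclient import discovery

app = Flask(__name__)

PROJECT = "my-project"                      # hypothetical project ID
ZONE = "us-central1-a"                      # hypothetical zone
DEV_INSTANCES = ["dev-web-1", "dev-db-1"]   # hypothetical instance names

@app.route("/tasks/stop-dev-instances")
def stop_dev_instances():
    compute = discovery.build("compute", "v1")   # uses default credentials
    for name in DEV_INSTANCES:
        compute.instances().stop(
            project=PROJECT, zone=ZONE, instance=name
        ).execute()
    return "stopped", 200
```

You would need a second handler (and a second cron event) to start the instances again in the morning, which is exactly the per-instance, per-event bookkeeping that makes this approach hard to scale.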

Scheduling Option 3: ParkMyCloud Google Cloud Instance Scheduling

Google Cloud Platform and ParkMyCloud – mawwage – that dweam within a dweam….

Given the lack of other viable instance scheduling options, we at ParkMyCloud created a SaaS app to automate instance scheduling, helping organizations cut cloud costs by 65% or more on their monthly cloud bill with AWS, Azure, and, of course, Google Cloud.

We aim to provide a number of benefits that you won’t find with, say, the GCP Cron Service. ParkMyCloud’s cloud management software:

  • Automates the process of switching non-production instances on and off with a simple, easy-to-use platform – more reliable than the manual process of switching GCP Compute instances off via the GCP console.
  • Provides a single-pane-of-glass view, allowing you to consolidate multiple clouds, multiple accounts within each cloud, and multiple regions within each account, all in one easy-to-use interface.
  • Does not require a developer background, coding, or custom scripting. It is also more flexible and cost-effective than having developers write scheduling scripts.
  • Can be used with a mobile phone or tablet.
  • Avoids the hard-coded schedules of the Cron Service. Users can temporarily override schedules if they need to use an instance on short notice.
  • Supports Teams and User Roles (with optional SSO), ensuring users will only have access to the resources you grant.
  • Helps you identify idle instances by monitoring instance performance metrics, displaying utilization heatmaps, and automatically generating utilization-based “SmartParking” schedule recommendations, which you can accept or modify as you wish...
  • Provides “rightsizing” recommendations to identify resources that are routinely underutilized and can be converted to a different Google Cloud server size to save 50-75% of the cost of the resource.
  • Has a 14-day free trial, so you can try the platform out in your own environment. There’s also a free-forever tier, useful for startups and those on the Google Cloud free tier, as well as paid tiers with more advanced options for enterprises with a larger Google Cloud footprint.

How Much Can You Save with Scheduling?

While it depends on your exact schedule, many non-production Google Cloud VMs – those used for development, testing, staging, and QA – can be turned off for 12 hours/day on weekdays, and 24 hours/day on weekends. For example, the resource might be running from 7 AM to 7 PM Monday through Friday, and “parked” the rest of the week. This comes out to about 64% savings per resource.

Currently, the average savings per scheduled VM in the ParkMyCloud platform is about $200/month.

How Enterprises Are Benefitting from ParkMyCloud’s Scheduling Software

If you’re not quite ready to start your own trial, take a look at this use case from Workfront, a work management software provider. Workfront uses both AWS and Google Cloud Compute Engine, and needed to coordinate cloud management software across both public clouds. They required automation in order to optimize and control cloud resource costs, especially given users’ tendency to leave resources running when they weren’t being used.

Workfront found that ParkMyCloud would meet their automatic scheduling needs. Now, 200 users throughout the company use ParkMyCloud to:

  • Get recommendations of resources that are not being used 24×7, and use policies to automatically apply on/off schedules to them
  • Get notifications and control the state of their resources through Slack
  • Easily report savings to management
  • Save over $200,000 per year

Ways to Save on Google Cloud VMs, Beyond Scheduling

Google has done a great job of creating offerings for customers to save money through regular cloud usage. The two you’ll see mentioned the most are sustained use discounts and committed use discounts. Sustained use discounts give Google Cloud users automatic discounts the longer an instance runs. This post outlines the break-even points between letting an instance run for the discount vs. parking it. Committed use discounts, on the other hand, require an upfront commitment to 1 or 3 years’ usage. We have found that they’re best suited to predictable workloads such as production environments. There are also preemptible VMs, which are offered at a discount from on-demand VMs in exchange for being short-lived – up to 24 hours.

How to Create a Google Cloud Schedule with ParkMyCloud 

Getting started with ParkMyCloud is easy. Simply register for a free trial with your email address and connect your Google Cloud Platform account to allow ParkMyCloud to discover and manage your resources. A 14-day free trial gives your organization the opportunity to evaluate the benefits of ParkMyCloud while you only pay for the cloud computing power you use. At the end of the trial, there is no obligation to continue with our service, and all the money your organization has saved is, of course, yours to keep.

Have fun storming the castle!

How to Choose a CI/CD Tool: Cost Scaling, Languages and Platforms, and More

How should CI/CD tool cost scaling, language support, and platform support affect your implementation decisions? In a previous post, we looked at the factors you should consider when choosing between a SaaS CI/CD tool vs. a self-hosted CI/CD solution. In this post, we will take a look at a number of other factors that should be considered when evaluating a SaaS CI/CD tool to determine if it’s the right fit for your organization, including cost scalability and language/platform support.

CI/CD Tool Cost Scaling

One thing that is important to keep in mind when deciding to use a paid subscription-based service is how the cost scales with your usage. There are a number of factors that can affect cost. In particular, some CI/CD SaaS services limit the number of build processes that can be run concurrently. For example, Codeship’s free plan allows only one concurrent build at a time. Travis CI’s travis-ci.org product offers up to 5 concurrent builds for open source projects, but (interestingly) their $69 USD/mo plan on travis-ci.com only offers 1 concurrent build. All of this means that increased throughput will likely result in increased cost. If you expect to maintain a steady level of throughput (that is, you don’t expect to add significantly more developers, which would require additional CI/CD throughput), then perhaps limits on the number of concurrent build processes are not a concern for you. However, if you’re planning on adding more developers to your team, you’ll likely end up having more build/test jobs that need to be executed, and limits may hamper your team’s productivity.

Another restriction you may run across is a limit on the total number of “build minutes” for a given subscription. In other words, the cumulative number of minutes that all build/test processes can run during a given subscription billing cycle (typically a month) is capped at a certain amount. For example, CircleCI’s free plan is limited to 1,500 build minutes per month, while their paid plans offer unlimited build minutes. Adding more developers to your team will likely result in additional build jobs, which will increase the required amount of build minutes per month, which may affect your cost. Additionally, increasing the complexity of your build/test process may result in longer build/test times, which will further increase the number of build minutes you’ll need during each billing cycle. The takeaway here is that if you have a solid understanding of how your team and your build processes are likely to scale in the future, then you should be well equipped to make a decision on whether the cost of a build minute-limited plan will scale adequately to meet your organization’s needs.
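As a quick sanity check, you can estimate whether a build-minute-capped plan will cover your team. The inputs below are hypothetical, not recommendations – plug in your own figures:

```python
# Rough capacity check (our own illustration): will a build-minute-capped
# CI/CD plan cover your team? All inputs below are hypothetical.

developers = 5
builds_per_dev_per_day = 4      # pushes/PRs that trigger a CI job
minutes_per_build = 8
workdays_per_month = 21

needed = developers * builds_per_dev_per_day * minutes_per_build * workdays_per_month
plan_limit = 1_500              # e.g. a free plan capped at 1,500 build minutes/month

print(f"Estimated build minutes per month: {needed}")        # -> 3,360
print(f"Over the capped plan by: {needed - plan_limit} min")  # -> 1,860
```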

Though not directly related to cost scaling, it’s important to note that some CI/CD SaaS providers place a limit on the length of time allowed for any single build/test job, independent of any cumulative per-billing-cycle limitations. For example, Travis CI’s travis-ci.org product limits build jobs to 50 minutes, while jobs on their travis-ci.com product are limited to 120 minutes per build. Similarly, Atlassian’s Bitbucket Pipelines limits builds to 2 hours per job. These limits are probably more than sufficient for most teams, but if you have any long-running build/test processes, you should make sure that your jobs will fit within the time constraints set by your CI/CD provider.

CI/CD Language and Platform Support

Not all languages and platforms are supported by all SaaS CI/CD providers. Support for programming languages, operating systems, containers, and third-party software installation are just a few of the factors that need to be considered when evaluating a SaaS CI/CD tool. If your team requires Microsoft Windows build servers, you are immediately limited to a very small set of options, of which AppVeyor is arguably the most popular. If you need to build and test iOS or Android apps, you have a few more options, such as Travis CI, fastlane, and Bitrise, among others.

Programming languages are another area of consideration. Most providers support the most popular languages, but if you’re using a less popular language, you’ll need to choose carefully. For instance, Travis CI supports a huge list of programming languages, but most other SaaS CI/CD providers support only a handful by comparison. If your project is written in D, Erlang, Rust, or some other less mainstream language, many SaaS CI/CD providers may be a no-go right from the start.

Further consideration is required when dealing with Docker containers. Some SaaS CI/CD providers offer first-class support for Docker containers, while other providers do not support them at all. If Docker is an integral part of your development and build process, some providers may be immediately disqualified from consideration due to this point alone.

Final Thoughts

As you can see, when it comes to determining the CI/CD tool that’s right for your team, there are numerous factors that should be considered, especially with regard to CI/CD tool cost. Fortunately, many SaaS CI/CD providers offer a free version of their service, which gives you the opportunity to test drive the service to ensure that it supports the languages, platforms, and services that your team uses. Just remember to keep cost scaling in mind before making your decision, as the cost of “changing horses” can be expensive should you find that your CI/CD tool cost scales disproportionately with the rest of your business.

In a future post, we will explore third-party integrations with CI/CD tools, with a focus on continuous delivery.