The New Infrastructure Automation: Continuous Cost Control

As applications and systems have evolved from single-host mainframes to distributed microservices architectures, infrastructure automation has become a key part of the toolkit for modern sysadmins and operations teams. That automation has grown from basic operating system installation and setup to full multi-step deployments of production code triggered by a single developer’s commit. By automating these mundane processes and eliminating human error, production systems are more stable than ever before.

But why stop at automating deployments? There are other elements that need to be automated, too – one of which is cost.

Rolling out new infrastructure over and over without ever stepping back to analyze the cost just leads to panic-driven phone calls from your finance department about the cloud bill. Instead, applying the same automation mindset behind tools like Puppet, Chef, Ansible, Terraform, and Jenkins to your cloud costs can help you save money incrementally, so you never get that giant surprise bill.

Scaling Up Without Ever Spinning Down

Developers and operations teams often use infrastructure automation early in application development and deployment to get servers and databases deployed and functioning. Modern automation tools aren’t just powerful; they’re also quick to deploy and easy to fit into your current workflow. This is fantastic, but the automation effort can start to taper off once the environments are running. Too often, teams move on to the next project before figuring out how to keep costs under control. By then it’s too late, and they simply accept that money must be poured into the deployment pipeline to keep everything running.

Easy-to-use automation is the key to spinning these environments up efficiently, and it can also be the key to keeping their costs low. Sure, you may need to keep production systems scaled up for maximum application performance and customer satisfaction, but what about the test lab, sandbox environment, dev systems, UAT servers, QA deployments, staging hosts, and other pre-production workloads? Environments sized to match production can be useful for some testing, but leaving them all running can easily double your cloud costs for each such environment – for systems that are used only a fraction of the time.
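To make this concrete, here’s a minimal sketch of what off-hours “parking” automation can look like, using boto3 to stop tagged EC2 instances. The Environment=dev tag convention, region, and schedule are illustrative assumptions, not a prescribed setup:

```python
# A minimal sketch of off-hours "parking" automation, assuming boto3
# and an Environment=dev tagging convention (both are illustrative).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def stop_tagged_instances(tag_key="Environment", tag_value="dev"):
    """Stop all running EC2 instances carrying the given tag."""
    response = ec2.describe_instances(
        Filters=[
            {"Name": f"tag:{tag_key}", "Values": [tag_value]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    instance_ids = [
        instance["InstanceId"]
        for reservation in response["Reservations"]
        for instance in reservation["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)
    return instance_ids

# Run this on a nightly schedule (cron, CloudWatch Events, etc.), with a
# matching start script each morning, to park pre-production systems.
if __name__ == "__main__":
    print("Stopped:", stop_tagged_instances())
```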

DevSecMonLogScalFinOps

As your infrastructure automation toolkit grows and evolves, there are a few things you’ll start building into all of your applications and deployments:

  • Security
  • Monitoring
  • Logging
  • Scalability

As this list grows, there’s one more thing you need: Continuous Cost Control.

By building in cost control automation from the very beginning, you can keep your cloud costs low while maintaining the flexibility required to keep up the pace of innovation. Without it, your costs are destined to rise faster than you intended, which will only cause headaches (and endless meetings) for your future self. The money may not come out of your bank account directly, but saving money at an enterprise organization is everyone’s job, and automating it is the key.

And that’s actually what thousands of customers around the world are using ParkMyCloud for today! Get started with continuous cost control today.

Google Kubernetes Engine (GKE) – The Leader in Hosted Container Orchestration

One of Google Cloud’s killer products is Google Kubernetes Engine, or GKE. Since Google was the original creator of Kubernetes, it’s fitting that the company is considered to be at the forefront of Kubernetes management and development. Although Kubernetes is now governed by the Cloud Native Computing Foundation, Google is still a major contributor to the open-source Kubernetes project on GitHub. Let’s take a look at Google’s hosted version of Kubernetes and why so many cloud users prefer it to the competition.

GKE Overview

Google Kubernetes Engine is a hosted environment for running your containerized applications. Unlike Google Compute Engine, which lets you run virtual machines with the operating system of your choice, Google Kubernetes Engine takes your application or code packaged into a Docker container and manages it according to your specifications. Ideally, the same containers that went through your testing and QA process can now run at scale in production, backed by Google’s security, availability, and management.

GKE was made publicly available in 2015, after Google had run containerized services (like Gmail and YouTube) on its internal container-management systems, the predecessors of Kubernetes, for over 10 years. After open-sourcing the Kubernetes software, Google set up a hosted version so users didn’t have to worry about running the master node themselves. This hosted master node has built-in high availability, health checks, and an easy-to-use developer dashboard.

GKE manages the virtual machines that your containers run on using its own Container-Optimized OS. These VMs can scale up or down based on container load and application requirements, and can even use preemptible VMs for batch or low-priority jobs. GKE pricing is based solely on the number of seconds those compute resources exist; there is no additional charge for the Kubernetes masters that run your clusters.
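As a sketch of what that looks like in practice, here’s how you might add an autoscaling, preemptible node pool with the google-cloud-container Python client. The project, zone, cluster, and machine type are placeholders, and the exact client surface may differ by library version:

```python
# Sketch: add a preemptible, autoscaling node pool to a GKE cluster.
# Assumes the google-cloud-container library; names are placeholders.
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()

node_pool = container_v1.NodePool(
    name="batch-pool",
    initial_node_count=1,
    config=container_v1.NodeConfig(
        machine_type="e2-standard-4",
        preemptible=True,  # cheaper, short-lived VMs for batch jobs
    ),
    autoscaling=container_v1.NodePoolAutoscaling(
        enabled=True,
        min_node_count=0,   # scale to zero when idle
        max_node_count=10,  # cap the cost under load
    ),
)

operation = client.create_node_pool(
    parent="projects/my-project/locations/us-central1-a/clusters/my-cluster",
    node_pool=node_pool,
)
print(operation.name)  # long-running operation to poll for completion
```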

GKE vs. The Competition (AKS, EKS, and ECS)

Google Kubernetes Engine is often seen as the leader in hosted Kubernetes environments, both because Google wrote the original software and because a decade of experience running containers for some of the largest-scale websites in the world is hard to discount. Google also had a two-year head start on Microsoft’s AKS service and a three-year head start on Amazon’s EKS platform, which helped work out the kinks and build brand awareness. More: cloud container services comparison.

There are also some technical reasons why GKE is a superior choice. Google deploys the latest version of Kubernetes faster than other providers, so you’re always on the bleeding edge of development. Clusters typically spin up faster, more nodes are allowed per cluster, and new workers start more quickly. SOC and ISO compliance can be a factor for large organizations, and the user experience of the Kubernetes dashboard is noticeably better than some alternatives.

You Down With GKE? (Yeah, You Know Me)

At the end of the day, the biggest question we get asked about services like Google Kubernetes Engine is, “Should I use Google Kubernetes Engine for my containers?” As always, the answer is nuanced. If you aren’t embedded in a particular cloud provider (or if you have a multi-cloud strategy), then GKE is certainly a step above other hosted Kubernetes services. Throw in the fact that you don’t pay for master nodes, and it makes financial sense as well. However, if you’re fully committed to a different cloud provider, then the native container management tools are good enough to get the job done.

Why Serverless Won’t Replace Traditional Servers

Curious why serverless is so popular – and why it won’t replace traditional servers in the cloud?

Top cloud service providers are dedicating a great deal of effort to expanding serverless architecture as a new approach to cloud solutions, one that focuses on applications rather than infrastructure. Today we’ll take a look at what serverless computing is good for, and what it can’t replace.

Understanding Serverless

For starters, “serverless” mostly refers to an application or API that depends on third-party, cloud-hosted services to manage server-side logic and state, with custom code hosted on Function as a Service (FaaS) platforms.

Even though the name “serverless” suggests that no servers are involved, there are always servers in use. Rather, serverless means developers don’t have to deal directly with those servers; their implementation and management are abstracted away. To power serverless workloads, cloud providers use automated systems that eliminate the need for server administrators, offering developers a way to run applications and services without having to handle, tweak, or scale the actual server infrastructure.

Top Serverless Providers

It is no surprise that the top cloud providers investing heavily in serverless include AWS, Microsoft Azure, and Google Cloud. In brief, here is how each approaches serverless computing.

AWS Lambda is the current leader among serverless compute implementations. Lambda runs your code as it’s triggered and automatically scales your application to match.

Microsoft Azure Functions enables you to run code on demand without having to explicitly provision or manage infrastructure.

Google Cloud Functions is a compute solution for building event-driven applications; it connects with GCP services by listening for and responding to events, without needing to provision or manage servers.
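The programming model is similar across all three: you write a small handler, and the platform invokes it once per event. Here’s a minimal sketch of an AWS Lambda handler in Python; the event shape is an illustrative assumption, since it depends on what triggers the function:

```python
# Minimal Lambda handler: the platform calls this function per event,
# so no server provisioning or scaling logic appears in your code.
import json

def lambda_handler(event, context):
    # "event" carries the trigger payload (API request, S3 upload, queue
    # message, ...); "context" carries runtime metadata like request ID.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```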

Advantages and When to Use Serverless

Let’s look at why serverless is often a good choice. It allows organizations to reduce the operational complications and costs associated with infrastructure, since charges are computed from the actual usage or work the serverless platform performs.

With serverless, the heavy lifting of implementing, maintaining, debugging, and monitoring the infrastructure and setting up your environment is done for you. Developers can focus on application development rather than complex infrastructure, which promotes team efficiency, serves customers better, and keeps the focus on business goals.

Since serverless cost models are based on execution only, serverless can reduce your operating costs and cloud spend, making it well suited to short-lived tasks in your environment. However, there are hidden costs to be aware of; what counts as an advantage here can just as easily become a disadvantage. Serverless apps rely on API calls, and heavy use of API requests can become very pricey indeed. In addition, networking costs can climb quickly when sending a lot of data, and they are generally more difficult to track in serverless cost models.
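To see how execution-based pricing adds up, here’s a back-of-the-envelope estimator. The per-request and per-GB-second rates below are illustrative placeholders, not current list prices:

```python
# Back-of-the-envelope FaaS cost estimate. Rates are illustrative
# placeholders; always check your provider's current price list.
PRICE_PER_MILLION_REQUESTS = 0.20   # USD, assumed
PRICE_PER_GB_SECOND = 0.0000166667  # USD, assumed

def monthly_function_cost(invocations, avg_duration_s, memory_gb):
    request_cost = invocations / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_cost = invocations * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost

# 50M invocations/month at 200 ms and 512 MB: roughly $93/month.
print(f"${monthly_function_cost(50_000_000, 0.2, 0.5):,.2f}")
```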

Some of the best use cases for serverless are:

  • Brand new applications that don’t already have an existing workload
  • Microservices-based architectures, with small chunks of code working together
  • Infrequently-used scripts that don’t need a server running 24/7

Disadvantages and When Not to Use Serverless

No doubt, there is increased interest in serverless, but there are limitations that come with it. Perhaps these trade-offs are the reason why some companies, though interested in serverless, are not ready to make the jump from traditional servers just yet.

Networking with serverless functions must go through the provider’s API endpoints rather than direct IP addresses, which can contribute to vendor lock-in. Execution-time limits also make serverless unsuitable for long-running tasks, for applications with variable execution times, and for services that must wait on information from an external source.

Serverless creates dependency on your cloud provider, and because of this you cannot easily port your applications between providers. Cloud providers own the burden of resource provisioning, so they are solely responsible for ensuring that the application instance has the back-end infrastructure it needs to execute when summoned.

By adopting serverless, you forfeit complete control over your infrastructure, including scaling. Scaling is done automatically, but the absence of control makes it difficult to diagnose and mitigate errors in serverless instances. This lack of control also applies to application performance, a metric developers still need to worry about in a serverless environment. After all, serverless providers depend on actual servers that need to be accessed and monitored.

Serverless is likely not a good fit for:

  • Rewriting existing apps
  • Applications with variable execution times
  • Long-term tasks
  • Monolithic applications

Why Serverless Won’t Replace Traditional Servers

Though every business has different needs when it comes to cloud infrastructure, serverless won’t completely supplant the current infrastructure of traditional servers. There are too many use cases where serverless is not applicable, or not worth the trade-off in control (or perhaps the cost – stay tuned for a future post on this). But as cloud service providers continue to invest heavily in serverless, it is fair to say that serverless usage will keep growing in the years to come.

Amazon EKS Overview: AWS’s Managed Kubernetes Service

Amazon EKS is a hosted Kubernetes solution that helps you run your container workloads in AWS without having to manage the Kubernetes control plane for your cluster. This is a great entry point for Kubernetes administrators who are looking to migrate to AWS services but want to continue using the tooling they are already familiar with. Often, users are choosing between Amazon EKS and Amazon ECS (which we recently covered, along with a full container services comparison), so in this article we’ll look at some of the basics and features of EKS that make it a compelling option.

Amazon EKS 101

The main selling point of Amazon EKS is that the Kubernetes control plane is managed for you by AWS, so you don’t have to set up and run your own. When you set up a new cluster in EKS, you can specify whether it will be available only within the current VPC or accessible to outside IP addresses. This flexibility highlights the two main deployment options for EKS:

  1. Fully within an AWS VPC, with complete integration to other AWS services you run in your account while being completely isolated from the outside world.
  2. Open and accessible, which enables hybrid-cloud, multi-cloud, or multi-account Kubernetes deployments.

Both options allow you the flexibility to use your own Kubernetes management tools, like Dashboard and kubectl, as EKS gives you the API server endpoint once you provision the cluster. The control plane spans multiple Availability Zones within the region you choose, for redundancy.
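As a sketch of how those two options map onto the API, here’s a cluster-creation call with boto3; the role ARN, subnet IDs, and security group ID are placeholders you’d supply from your own account:

```python
# Sketch: create an EKS cluster with a private-only endpoint (option 1).
# Flip endpointPublicAccess to True for option 2. IDs are placeholders.
import boto3

eks = boto3.client("eks", region_name="us-east-1")

response = eks.create_cluster(
    name="demo-cluster",
    roleArn="arn:aws:iam::123456789012:role/eks-cluster-role",
    resourcesVpcConfig={
        "subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],
        "securityGroupIds": ["sg-0123456789abcdef0"],
        "endpointPrivateAccess": True,   # reachable inside the VPC
        "endpointPublicAccess": False,   # no outside IP access
    },
)
print(response["cluster"]["status"])  # "CREATING" while provisioning
```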

Managed Container Showdown: EKS vs. ECS

Amazon offers two main container service options in EKS and ECS. The biggest difference between the two lies in who manages the orchestration layer. With ECS, Amazon runs its own proprietary orchestrator for you, and you just decide which tasks to run and when. With EKS, AWS hosts the Kubernetes control plane, but you still do the Kubernetes-level management of your pods and workloads.

Another consideration when weighing EKS vs. ECS is networking and load balancing. Both services run EC2 servers behind the scenes, but the network attachment differs slightly: ECS attaches network interfaces to individual tasks on each EC2 instance, while EKS attaches network interfaces that serve multiple pods on each EC2 instance. Similarly, for load balancing, ECS can use Application Load Balancers to send traffic directly to a task, while EKS must use an Elastic Load Balancer to send traffic to an EC2 host (which Kubernetes can then proxy to pods). Neither is necessarily better or worse; it’s just a slight difference that may matter for your workload.

Sounds Great… How Much Does It Cost?

For each workload you run in Amazon EKS, there are two main charges. First, there’s a charge of $0.20/hr (roughly $146/month) for each EKS control plane you run in your AWS account. Second, you’re charged for the underlying EC2 resources spun up by the Kubernetes controller. This second charge is very similar to how Amazon ECS charges you, and it is highly dependent on the size and amount of resources you need.
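A quick back-of-the-envelope estimate shows how those two charges combine. The control-plane rate comes from above, while the per-node rate is an illustrative assumption (roughly an m5.large on-demand price):

```python
# Rough EKS monthly cost: control plane + worker nodes. The node rate
# is an illustrative assumption; check current EC2 pricing.
HOURS_PER_MONTH = 730
CONTROL_PLANE_PER_HOUR = 0.20  # EKS control plane rate quoted above
NODE_PER_HOUR = 0.096          # assumed per-node EC2 rate

def eks_monthly_cost(node_count):
    control_plane = CONTROL_PLANE_PER_HOUR * HOURS_PER_MONTH
    nodes = node_count * NODE_PER_HOUR * HOURS_PER_MONTH
    return control_plane + nodes

# Three worker nodes: ~$146 control plane + ~$210 compute, about $356/month.
print(f"${eks_monthly_cost(3):,.2f}")
```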

Amazon EKS Best Practices

There’s no one-size-fits-all option for Kubernetes deployments, but Amazon EKS certainly has some good things going for it. If you’re already using Kubernetes, this can be a great way to seamlessly migrate to a cloud platform without changing your working processes. Also, if you’re going to be in a hybrid-cloud or multi-cloud deployment, this can make your life a little easier. That being said, for just simple Kubernetes clusters, the price of the control plane for each cluster may be too much to pay, which makes ECS a valid alternative.

More on container management and container optimization.

Amazon ECS Overview: What You Need To Know

Amazon ECS is a great choice of container hosting platform for AWS developers, among the many options available. Jumping into an ECS deployment can be daunting: there are multiple options, varying terminology, and hard-to-predict costs. We’ll go over some of the basics of Amazon ECS, including the terminology and price considerations you’ll need to weigh.

Amazon ECS 101

Amazon ECS (which stands for Elastic Container Service) lets you run Docker containers without having to manage the orchestration of those containers. With ECS, you can deploy your containers on EC2 servers or in a serverless mode, which Amazon calls Fargate. Both deployment types handle the orchestration and underlying server management for you, so you can just schedule and deploy your containers.

Amazon ECS can work for both long-running jobs and short bursts of tasks, and includes tools for adjusting the scale of the container fleet as well as the scheduling of those containers. Task placement definitions let you choose which instances get which containers, or you can let AWS manage this by spreading across all Availability Zones.
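For a taste of what that scheduling looks like in code, here’s a hedged sketch using boto3 to run a few copies of a task spread across Availability Zones; the cluster and task definition names are placeholders:

```python
# Sketch: run three copies of a task, letting ECS spread them across
# Availability Zones. Cluster and task definition names are placeholders.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

response = ecs.run_task(
    cluster="demo-cluster",
    taskDefinition="web-app:1",  # family:revision
    count=3,
    launchType="EC2",
    placementStrategy=[
        {"type": "spread", "field": "attribute:ecs.availability-zone"},
    ],
)
print([task["taskArn"] for task in response["tasks"]])
```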

Benefits of Amazon ECS include:

  • Easy integrations into other AWS services, like Load Balancers, VPCs, and IAM
  • Highly scalable without having to manage the cluster masters
  • Multiple management methods, including the AWS console, the AWS API, or CloudFormation templates
  • Elastic Container Registry helps you manage and sort your container images

Tasks and Services and Containers (Oh My!)

Diving into the world of containers on AWS requires the use of some terminology you may not be familiar with:

  • Container – An isolated environment that contains the bare minimum of services and code needed to run just one part of your application or microservice, designed to run on any Docker-compatible OS.
  • Task Definition – A layout of the pieces required to run your application, which can include one or more containers along with networking and system requirements (see the sketch after this list).
  • Task – An instantiation of a Task Definition. Multiple Tasks can use the same Task Definition.
  • Service – A layout of the boundaries and scaling options you set for your groupings of similar Tasks, similar to the relationship between Auto Scaling Groups and EC2 virtual machines.
  • Cluster – A collection of EC2 instances running a specialized operating system where you will run your Service.
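Tying a few of those terms together, here’s a minimal sketch of registering a Task Definition with boto3; the family name, image, and resource sizes are illustrative:

```python
# Sketch: register a one-container Task Definition. Values are
# illustrative; real applications tune CPU, memory, and ports.
import boto3

ecs = boto3.client("ecs", region_name="us-east-1")

response = ecs.register_task_definition(
    family="web-app",                       # the Task Definition name
    containerDefinitions=[
        {
            "name": "web",
            "image": "nginx:1.25",          # any Docker image
            "cpu": 256,                     # CPU units (1024 = 1 vCPU)
            "memory": 512,                  # hard memory limit in MiB
            "portMappings": [{"containerPort": 80}],
            "essential": True,
        }
    ],
)
print(response["taskDefinition"]["taskDefinitionArn"])
```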

ECS Pricing: The (Hopefully Not) Million Dollar Question

Amazon ECS pricing has a few different variables, starting with your choice of deployment method. Since Fargate abstracts away the underlying infrastructure, you only pay for the seconds of vCPU and memory that your Tasks use (with a minimum of one minute per Task). This pricing structure has the “serverless architecture” benefit of paying only for what you need when you need it, but it also means that estimating charges can be quite difficult.
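As a rough illustration of that Fargate model, here’s a tiny estimator; the per-vCPU and per-GB rates are illustrative placeholders, not current list prices:

```python
# Fargate cost sketch: billed per second of vCPU and memory, with a
# one-minute minimum per Task. Rates are illustrative placeholders.
PER_VCPU_SECOND = 0.0506 / 3600  # assumed hourly vCPU rate, per second
PER_GB_SECOND = 0.0127 / 3600    # assumed hourly GB rate, per second
MINIMUM_SECONDS = 60

def fargate_task_cost(duration_s, vcpus, memory_gb):
    billed = max(duration_s, MINIMUM_SECONDS)  # one-minute minimum
    return billed * (vcpus * PER_VCPU_SECOND + memory_gb * PER_GB_SECOND)

# A 45-second Task with 0.25 vCPU and 0.5 GB is billed for 60 seconds.
print(f"${fargate_task_cost(45, 0.25, 0.5):.6f}")
```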

Standard ECS pricing does not charge per Task; instead, you’re charged for the infrastructure you deploy for your cluster. The cluster uses Auto Scaling Groups of EC2 instances, and during cluster setup you choose the instance size and the number of instances for the initial deployment. Since the cluster can scale up and down, you have flexibility if you get a spike in task usage, but you do need to keep an eye on underutilized or idle instances.

Containing the Containers

As you can tell, Amazon ECS manages a lot of the back-end work for you, but it brings a whole different set of considerations for your organization. ParkMyCloud has some news coming later this year to help you manage your ECS containers! Contact us if you’d like to be notified when that’s available.

Not yet using containers, but have other AWS infrastructure? We can help control costs.

How to Turn AWS Utilization Data into Automated Cost Control

Learn how your AWS utilization data in CloudWatch can be harnessed to optimize your cloud costs.

June 26th | 2 PM ET