How AWS Firecracker Makes Containers and Serverless More Efficient

AWS Firecracker was announced at AWS re:Invent in November 2018 as a new AWS open source virtualization technology. The technology is purpose-built for creating and managing secure, multi-tenant container and function-based services. It was described by the AWS Chief Evangelist Jeff Barr as “what a virtual machine would look like if it was designed for today’s world of containers and functions.”

What is AWS Firecracker?

Firecracker is a virtual machine monitor (VMM) designed exclusively for running transient and short-lived processes. In other words, it helps optimize the running of functions and serverless workloads. It’s also an important new component in the emerging world of serverless technologies and is used to enhance the backend implementation of Lambda and Fargate. Firecracker helps deliver the speed of containers combined with the security of VMs. If you use Lambda or Fargate, you’re already receiving the benefits of Firecracker. However, if you run or orchestrate a large volume of containers, you should take a look at this technology with optimization in mind.

How AWS Firecracker Creates Efficiencies

AWS realizes the economic benefits of Firecracker by creating what it calls “microVMs”, which allow it to spread serverless workloads across multiple servers, thus getting a greater ROI from its investment in the servers behind serverless. In terms of customer benefit, Firecracker enables these microVMs to launch in 125 milliseconds or less, compared to the seconds (or longer) it can take to launch a container or spin up a traditional virtual machine. In a world where thousands of VMs can be spun up and down to tackle a specific workload, this constitutes significant savings. And remember, these are fully fledged micro virtual machines, not just containers. The microVMs themselves are worth a closer look: each includes an in-process rate limiter to optimize shared network and storage resources. As a result, one server can support thousands of microVMs with widely varying processor and memory configurations.
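
For a sense of how small these microVMs are to configure, here is a minimal sketch of driving Firecracker’s REST API (served over a Unix socket) from Python. The socket path, kernel, and rootfs file names are placeholder assumptions, and field names may vary by Firecracker version.

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP over a Unix domain socket, Firecracker's API transport."""
    def __init__(self, socket_path):
        super().__init__("localhost")
        self.socket_path = socket_path
    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

def put(conn, path, body):
    conn.request("PUT", path, json.dumps(body),
                 {"Content-Type": "application/json"})
    resp = conn.getresponse()
    resp.read()  # drain the response so the connection can be reused
    return resp.status

conn = UnixHTTPConnection("/tmp/firecracker.socket")  # placeholder path

# Size the microVM: here, 1 vCPU and 128 MiB of memory.
put(conn, "/machine-config", {"vcpu_count": 1, "mem_size_mib": 128})

# Point at an uncompressed kernel and a root filesystem image (placeholders).
put(conn, "/boot-source", {"kernel_image_path": "vmlinux",
                           "boot_args": "console=ttyS0 reboot=k panic=1"})
put(conn, "/drives/rootfs", {"drive_id": "rootfs",
                             "path_on_host": "rootfs.ext4",
                             "is_read_only": False,
                             "is_root_device": True})

# Boot the microVM.
put(conn, "/actions", {"action_type": "InstanceStart"})
```

Because the whole machine definition is a handful of small JSON documents, spinning up thousands of differently sized microVMs on one server is mostly a matter of repeating these calls against new sockets.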

There is also the enhanced security and workload isolation that comes with Kernel-based Virtual Machine (KVM) technology – microVMs are more strongly isolated than containers. One particularly valuable security feature is that Firecracker is statically linked, which means all the libraries it needs to run are included in its executable code. This makes new Firecracker environments safer by eliminating dependencies on outside libraries. Altogether, this combination of efficiency, security and speed created quite the buzz at the AWS re:Invent launch.

Will Firecracker make a “bang”?

There are a few caveats related to the still-novel aspects of the technology. In particular, compared to alternatives such as containers or Hyper-V VMs, it is prudent to confine Firecracker to non-production workloads, as the technology is still new and needs to be more fully battle-tested before production use.

However, as confidence, adoption and experience in the use of serverless technologies grow, Firecracker certainly seems to offer a promising new method for provisioning compute resources, and it will likely help bridge the current gap between VMs and containers.

SaaS vs. PaaS vs. IaaS – Where the Market is Going

SaaS, PaaS, IaaS – these are the three essential models of cloud services to compare, otherwise known as Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Each of these has its own benefits, and it’s good to understand why providers offer these different models and what implications they have for the market. While SaaS, PaaS, and IaaS are different, they are not competitive – most software-focused companies use some form of all three. Let’s take a look at these main categories, and because I like to understand things by company name, I’ll include a few of the more common SaaS, PaaS, and IaaS providers in the market today.

SaaS: Software as a Service

Software as a Service, also known as cloud application services, represents the most commonly utilized option for businesses in the cloud market. SaaS utilizes the internet to deliver applications, which are managed by a third-party vendor, to its users. A majority of SaaS applications are run directly through the web browser, and do not require any downloads or installations on the client side.

Prominent providers: Salesforce, ServiceNow, Google Apps, Dropbox and Slack (and ParkMyCloud, of course).

PaaS: Platform as a Service

Cloud platform services, or Platform as a Service (PaaS), provide a cloud-based environment for building, testing, and deploying applications. PaaS delivers a framework that developers can build upon and use to create customized applications. All servers, storage, and networking can be managed by the enterprise or a third-party provider, while the developers maintain management of the applications.

Prominent providers and offerings: AWS Elastic Beanstalk, Red Hat OpenShift, IBM Bluemix, Windows Azure, and VMware Pivotal CF.

IaaS: Infrastructure as a Service

Cloud infrastructure services, known as Infrastructure as a Service (IaaS), are made up of highly scalable and automated compute resources. IaaS is fully self-service for accessing and monitoring compute, storage, networking, and other infrastructure-related services, and it allows businesses to purchase resources on demand and as needed instead of having to buy hardware outright.

Prominent Providers: Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform (GCP), and IBM Cloud.

SaaS vs. PaaS vs. IaaS

SaaS, PaaS and IaaS are all under the umbrella of cloud computing (building, creating, and storing data over the cloud). Think about them in terms of out-of-the-box functionality and building from the bottom up.

IaaS helps build the infrastructure of a cloud-based technology. PaaS helps developers build custom apps via an API that can be delivered over the cloud. And SaaS is cloud-based software that companies can sell and users can use.

Think of IaaS as the foundation of building a cloud-based service — whether that’s content, software, or the website to sell a physical product, PaaS as the platform on which developers can build apps without having to host them, and SaaS as the software you can buy or sell to help enterprises (or others) get stuff done.

SaaS, PaaS, IaaS Market Share Breakdown

The SaaS market is by far the largest: a Gartner study reported that enterprises spent more than $182 billion on cloud services, with SaaS making up 43% of that spend.

While SaaS is currently the largest cloud service market in terms of spend, IaaS is projected to be the fastest growing, with a CAGR of more than 20% over the next three to four years. This bodes very well for the “big three” providers: AWS, Azure and GCP.

Where the Market is Going

What’s interesting is that many pundits argue that PaaS is the future, along with FaaS, DaaS and every other X-as-a-service. However, the data shows otherwise. As evidenced by the Gartner figures above, IaaS commands a larger market share than PaaS and is growing the fastest.

First of all, this is because IaaS offers all the important benefits of using the cloud such as scalability, flexibility, location independence and potentially lower costs. In comparison with PaaS and SaaS, the biggest strength of IaaS is the flexibility and customization it offers. The leading cloud computing vendors offer a wide range of different infrastructure options, allowing customers to pick the performance characteristics that most closely match their needs.

In addition, IaaS is the least likely of the three cloud delivery models to result in vendor lock-in. With SaaS and PaaS, it can be difficult to migrate to another option or simply stop using a service once it’s baked into your operations. IaaS also charges customers only for the resources they actually use, which can result in cost reductions if used strategically. While much of the growth is from existing customers, it’s also because more organizations are using IaaS across more functions than either of the other models of cloud services.

AWS Lambda Pricing: Low, But Unpredictable

Today’s entry into our exploration of public cloud prices focuses on AWS Lambda pricing.

Low costs are often cited as a benefit of using serverless. A recent survey showed that companies saved an average of 4 developer workdays per month by adopting serverless, and 21% of companies reported cost reduction as a main benefit. But why aren’t 100% of companies reporting cost savings?

In this article, we’ll take a look at the Lambda pricing model, and some things you need to keep in mind when estimating costs for serverless infrastructure.

How AWS Lambda Pricing Works

Core Pricing

AWS Lambda pricing is based on what you use. There are two major factors that contribute to the calculation of “what you use”:

  • Requests — Lambda counts a request each time it starts executing in response to an event notification or invoke call. Each request costs $0.0000002.
  • Duration — Duration is calculated from the time your code begins executing until it returns or otherwise terminates, rounded up to the nearest 100ms. But, the price is not charged per second. Rather, it is charged per GB-second, which is the duration in seconds multiplied by the maximum memory size in GB. Every GB-second costs $0.0000166667.

Free Tier

There is a free tier available to all Lambda users — and note that this is unrelated to your regular AWS free tier usage. Every user gets 1 million requests per month and 400,000 GB-Seconds per month, for free.
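
To see how requests, duration, memory, and the free tier combine, here is a rough cost-estimation sketch using the rates above. The workload numbers are hypothetical, and it covers core pricing only (no API Gateway or data transfer charges). Note how increasing both memory and duration multiplies the GB-second charge, which is the multiplicative effect described below.

```python
import math

# Published core rates (see above); workload figures are made up.
REQUEST_PRICE = 0.0000002       # $ per request
GB_SECOND_PRICE = 0.0000166667  # $ per GB-second
FREE_REQUESTS = 1_000_000       # free tier: requests per month
FREE_GB_SECONDS = 400_000       # free tier: GB-seconds per month

def monthly_cost(requests, avg_duration_ms, memory_mb):
    # Duration is rounded up to the nearest 100 ms per invocation.
    billed_seconds = (math.ceil(avg_duration_ms / 100) * 100) / 1000
    # GB-seconds = seconds of execution x memory size in GB.
    gb_seconds = requests * billed_seconds * (memory_mb / 1024)
    billable_requests = max(0, requests - FREE_REQUESTS)
    billable_gb_seconds = max(0, gb_seconds - FREE_GB_SECONDS)
    return (billable_requests * REQUEST_PRICE
            + billable_gb_seconds * GB_SECOND_PRICE)

# 30M invocations of a 512 MB function averaging 250 ms each:
print(f"${monthly_cost(30_000_000, 250, 512):,.2f}")
```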

Additional Charges

In addition to requests and duration, you will also be charged for additional AWS services used or data transfers – regardless of whether you’re using Lambda’s free tier. For many applications, API requests and data transfers will cost significantly more than the AWS Lambda core pricing.

Why AWS Lambda Pricing is So Confusing

Ultimately, Lambda pricing is confusing and hard to predict. Here’s why:

  • Granularity — the fact that cost is calculated per function execution makes it difficult to estimate compared to server-based pricing models. Thinking in terms of iterations of a microservices script requires some mental gymnastics.
  • Multiplicative costs — the fact that the duration charges are based on a calculation makes it harder to conceptualize and more variable than other pricing models – and if both duration and memory change, the costs increase quickly.
  • Additional charges — at a cost of $3.50 per million calls, AWS API Gateway charges often make up a significant portion of the cost to run serverless – plus data transfers and other “on top” costs.
  • Wait time — if a function makes an outgoing call and sits idle waiting for the result, you’ll be charged for the wait time. Be sure to set a maximum function execution time to prevent this from driving up costs (as well as a maximum memory size).
  • Code maintenance — a murkier area when it comes to costs, but with more functions come more lines of code to maintain.

Of course, there are several AWS Lambda pricing calculators out there to help estimate costs — ranging from simpler ones that include only the number of executions, memory allocation, and average duration (examples from Dashbird and A Cloud Guru) to those incorporating language, activity patterns, and EC2 comparisons from the cheekily named servers.lol.

AWS Lambda Costs Are Just One Factor

There are plenty of benefits to serverless, from low latency to scalability to simple deployment. However, alongside vendor lock-in, applications with long or variable execution times, and control over application performance, cost is another reason why serverless may not replace traditional servers for all situations.

Google Kubernetes Engine (GKE) – The Leader in Hosted Container Orchestration

One of Google Cloud’s killer products is Google Kubernetes Engine, or GKE. Since Google was the original creator of the Kubernetes container scheduler, it’s fitting that they are considered to be at the forefront of Kubernetes management and development. In spite of the fact that Kubernetes is now managed by the Cloud Native Computing Foundation, Google is still a major contributor to the open-source Kubernetes project on GitHub. Let’s take a look at Google’s hosted version of Kubernetes and why so many cloud users prefer it to the competition.

GKE Overview

Google Kubernetes Engine is a hosted environment that can run your containerized applications. Unlike Google Compute Engine, which lets you run virtual machines with the operating system of your choice, Google Kubernetes Engine takes your application or code that is packaged into a Docker container and manages it according to your specifications. Ideally, the same containers that have gone through your testing and QA process can now be run at-scale in production, with the backing of Google’s security, availability, and management.

GKE was made publicly available in 2015, after being used behind-the-scenes for many Google services (like Gmail and YouTube) for over 10 years. After open-sourcing the Kubernetes software, Google set up a hosted version so users didn’t have to worry about running the master node themselves. This hosted master node has built-in high availability, health checks, and an easy-to-use developer dashboard.

GKE manages the virtual machines that your containers run on using Google’s own container-optimized OS. These VMs can scale up or down based on container load and application requirements, and can even utilize preemptible VMs for batch or low-priority jobs. GKE pricing is based solely on the number of seconds those compute resources exist, as there are no additional costs for the Kubernetes masters behind your clusters.
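
For illustration, here is a minimal sketch of creating a GKE cluster with the google-cloud-container Python client. The project path, machine type, and node count are placeholder assumptions, and the exact call signature varies by library version.

```python
from google.cloud import container_v1

client = container_v1.ClusterManagerClient()

# Describe the cluster; GKE provisions and manages the node VMs for you.
cluster = container_v1.Cluster(
    name="demo-cluster",
    initial_node_count=3,
    node_config=container_v1.NodeConfig(
        machine_type="e2-standard-2",  # placeholder machine type
        preemptible=True,  # cheaper short-lived VMs for batch/low-priority work
    ),
)

# "my-project" and the zone are hypothetical; returns a long-running operation.
operation = client.create_cluster(
    parent="projects/my-project/locations/us-central1-a",
    cluster=cluster,
)
print(operation.name)  # poll this operation until the cluster is up
```

Notice that nothing here mentions a master node: the control plane is created and billed (or in this case, not billed) entirely on Google’s side.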

GKE vs. The Competition (AKS, EKS, and ECS)

Google Kubernetes Engine is often seen as the leader in hosted Kubernetes environments, both because Google wrote the original software, and because a decade of experience running it on some of the largest scale websites in the world is hard to discount. Google also had a two-year head start on Microsoft’s AKS service and a three-year head start on Amazon’s EKS platform, which helped work out the kinks and build brand awareness. More: cloud container services comparison.

There are also some technical reasons why GKE is a superior choice. Google deploys the latest version of Kubernetes faster than other providers, so you’re always on the bleeding edge of development. Clusters typically spin up faster, more nodes are allowed per cluster, and new workers start quicker. SOC and ISO compliance can be a factor for large organizations. The user experience of the Kubernetes dashboard is also noticeably better than some alternatives.

You Down With GKE? (Yeah, You Know Me)

At the end of the day, the biggest question we get asked about services like Google Kubernetes Engine is, “Should I use Google Kubernetes Engine for my containers?” As always, the answer is nuanced. If you aren’t embedded in a particular cloud provider (or if you have a multi-cloud strategy), then GKE is certainly a step above other hosted Kubernetes services. Throw in the fact that you don’t pay for master nodes, and it makes financial sense as well. However, if you’re fully committed to a different cloud provider, then the native container management tools are good enough to get the job done.

Why Serverless Won’t Replace Traditional Servers

Curious why serverless is so popular – and why it won’t replace traditional servers in the cloud?

Top cloud service providers are dedicating a great deal of effort to expanding serverless architecture, a new approach to cloud solutions that focuses on applications rather than infrastructure. Today we’ll take a look at what serverless computing is good for, and what it can’t replace.

Understanding Serverless

For starters, serverless mostly refers to an application or API that depends on third-party, cloud-hosted applications and services to manage server-side logic and state, typically running code hosted on Function as a Service (FaaS) platforms.

Even though the name “serverless” suggests that there are no servers involved, there will always be servers in use. Rather, the name means that developers don’t have to deal directly with the servers – the provider handles their implementation and management. To power serverless workloads, cloud providers use automated systems that eliminate the need for server administrators, offering developers a way to manage applications and services without having to handle, tweak or scale the actual server infrastructure.

Top Serverless Providers

It is no surprise that the top cloud providers investing in a major way in serverless include AWS, Microsoft Azure, and Google Cloud. In brief, here is how they approach serverless computing.

AWS Lambda is the current leader among serverless compute implementations. Lambda handles everything: it runs your code as it’s triggered and scales your application automatically.

Microsoft Azure Functions enables you to run code-on-demand without having to explicitly provision or manage infrastructure.

Google Cloud Functions is a compute solution for creating event-driven applications and connects with GCP services by listening for and responding to events without needing to provision or manage servers.
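
To give a taste of the programming model these services share, here is a minimal AWS Lambda handler in Python. The provider invokes the function once per event and scales it automatically; the "name" input field is a hypothetical example.

```python
import json

def lambda_handler(event, context):
    # 'event' carries the trigger payload (e.g., an API Gateway request);
    # 'context' exposes runtime metadata such as remaining execution time.
    name = event.get("name", "world")  # hypothetical input field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Azure Functions and Google Cloud Functions follow the same shape: you supply a function with an event-shaped signature, and the platform handles provisioning, invocation, and scaling.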

Advantages and When to Use Serverless

Let’s look at why serverless is often a good choice. It allows organizations to reduce the operational complications associated with infrastructure, as well as the related costs, since charges are computed based on the actual usage or work the serverless platform performs.

When it comes to implementing, maintaining, debugging, and monitoring the infrastructure, and setting up your environment, with serverless the heavy lifting is done for you. It allows developers to focus on application development rather than complex infrastructure, thus promoting team efficiency, better serving customers and focusing on business goals.

Since serverless cost models are based on execution only, using serverless can reduce your operational costs and save money on cloud spend, making it well suited to short-term tasks in your environment. However, there are hidden costs to be aware of. Though we are listing advantages, this one can just as easily be a disadvantage: serverless apps rely on API calls, and heavy use of API requests can become very pricey indeed. In addition, networking costs can get very expensive when sending a lot of data, and they are generally more difficult to track in serverless cost models.

Some of the best use cases for serverless are:

  • Brand new applications that don’t already have an existing workload
  • Microservices-based architectures, with small chunks of code working together
  • Infrequently-used scripts that don’t need a server running 24/7

Disadvantages and When Not to Use Serverless

No doubt, there is increased interest in serverless, but there are limitations that come with it. Perhaps these trade-offs are the reasons why some companies, though interested in serverless, are not ready to make the jump from traditional servers just yet.

Networking on serverless must be done through private API endpoints and cannot be accessed through IPs, which can contribute to vendor lock-in. Serverless is also unsuitable for long-running tasks, for applications that have variable execution times, and for services that require information from an external source.

Serverless creates dependency on cloud providers, and because of this you are not able to easily port your applications between different providers. Cloud providers own the burden of resource provisioning, so they are solely responsible for ensuring that the application instance has the back-end infrastructure it needs to execute when summoned.

By adopting serverless, you forfeit complete control over your infrastructure – for example, scaling. Scaling is done automatically, but the absence of control makes it difficult to address and mitigate errors related to serverless instances. This lack of control also applies to application performance, a metric that developers still need to worry about in a serverless environment. After all, serverless providers depend on actual servers that need to be accessed and monitored.

Serverless is likely not a good fit for:

  • Rewriting existing apps
  • Applications with variable execution times
  • Long-term tasks
  • Monolithic applications

Why Serverless Won’t Replace Traditional Servers

Though every business has different needs when it comes to cloud infrastructure, serverless won’t supplant the current cloud infrastructure of traditional servers completely. There are too many use cases where serverless is not applicable, or not worth the trade-off in control (or perhaps the cost – stay tuned for a future post on this). But as cloud service providers continue to invest heavily in serverless, it is fair to say that serverless usage will continue to grow in the years to come.

Amazon EKS Overview: AWS’s Managed Kubernetes Service

Amazon EKS is a hosted Kubernetes solution that helps you run your container workloads in AWS without having to manage the Kubernetes control plane for your cluster. This is a great entry point for Kubernetes administrators who are looking to migrate to AWS services but want to continue using the tooling they are already familiar with. Often, users are choosing between Amazon EKS and Amazon ECS (which we recently covered, in addition to a full container services comparison), so in this article, we’ll take a look at some of the basics and features of EKS that make it a compelling option.

Amazon EKS 101

The main selling point of Amazon EKS is that the Kubernetes control plane is managed for you by AWS, so you don’t have to set up and run your own. When you set up a new cluster in EKS, you can specify whether it will be available only to the current VPC or accessible to outside IP addresses. This flexibility highlights the two main deployment options for EKS:

  1. Fully within an AWS VPC, with complete integration to other AWS services you run in your account while being completely isolated from the outside world.
  2. Open and accessible, which enables hybrid-cloud, multi-cloud, or multi-account Kubernetes deployments.

Both options allow you the flexibility to use your own Kubernetes management tools, like Dashboard and kubectl, as EKS gives you the API Server Endpoint once you provision the cluster. This control plane utilizes multiple availability zones within the region you choose for redundancy.
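
For illustration, here is roughly what provisioning an EKS control plane looks like with boto3. The role ARN and subnet IDs are placeholders, and the endpoint-access flags show how you would pick between the two deployment options above.

```python
import boto3

eks = boto3.client("eks", region_name="us-east-1")

# Create a managed control plane. AWS runs the Kubernetes masters across
# multiple availability zones; you only supply networking and IAM details.
eks.create_cluster(
    name="demo-cluster",
    roleArn="arn:aws:iam::123456789012:role/eks-service-role",  # placeholder
    resourcesVpcConfig={
        "subnetIds": ["subnet-aaaa1111", "subnet-bbbb2222"],  # placeholders
        # Option 1 (VPC-only) vs. option 2 (publicly reachable API endpoint):
        "endpointPrivateAccess": True,
        "endpointPublicAccess": False,
    },
)

# Once the cluster is ACTIVE, fetch the API server endpoint that tools
# like kubectl and Dashboard will talk to.
info = eks.describe_cluster(name="demo-cluster")
print(info["cluster"]["endpoint"])
```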

Managed Container Showdown: EKS vs. ECS

Amazon offers two main container service options in EKS and ECS. The biggest difference between the two lies in the orchestration layer: ECS uses Amazon’s own proprietary orchestrator, so you just decide which tasks to run and when, while EKS runs Kubernetes, leaving the management of your pods and deployments to you.

One consideration when comparing EKS vs. ECS is networking and load balancing. Both services run EC2 servers behind the scenes, but the actual network connection is slightly different. ECS has network interfaces connected to individual tasks on each EC2 instance, while EKS has network interfaces connecting to multiple pods on each EC2 instance. Similarly, for load balancing, ECS can utilize Application Load Balancers to send traffic to a task, while EKS must use an Elastic Load Balancer to send traffic to an EC2 host (which can be proxied via Kubernetes). Neither is necessarily better or worse, just a slight difference that may matter for your workload.

Sounds Great… How Much Does It Cost?

For each workload you run in Amazon EKS, there are two main charges that will apply. First, there’s a charge of $0.20/hr (roughly $146/month) for each EKS control plane you run in your AWS account. Second, you’re charged for the underlying EC2 resources that are spun up by the Kubernetes controller. This second charge is very similar to how Amazon ECS charges you, and is highly dependent on the size and amount of resources you need.
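
As a back-of-the-envelope sketch of that math (the node rate below is a placeholder; check current EC2 pricing for your instance type and region):

```python
# Rough EKS monthly cost: managed control plane(s) plus worker nodes.
CONTROL_PLANE_HOURLY = 0.20  # $ per hour per EKS control plane
HOURS_PER_MONTH = 730        # average hours in a month

def eks_monthly_cost(clusters, nodes, node_hourly_price):
    control_plane = clusters * CONTROL_PLANE_HOURLY * HOURS_PER_MONTH
    workers = nodes * node_hourly_price * HOURS_PER_MONTH
    return control_plane + workers

# One cluster with five worker nodes at a placeholder on-demand rate:
print(f"${eks_monthly_cost(1, 5, 0.096):,.2f}")
```

The fixed control-plane fee is why the per-cluster cost comparison with ECS (which has no such fee) matters most for small or numerous clusters.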

Amazon EKS Best Practices

There’s no one-size-fits-all option for Kubernetes deployments, but Amazon EKS certainly has some good things going for it. If you’re already using Kubernetes, this can be a great way to seamlessly migrate to a cloud platform without changing your working processes. Also, if you’re going to be in a hybrid-cloud or multi-cloud deployment, this can make your life a little easier. That being said, for just simple Kubernetes clusters, the price of the control plane for each cluster may be too much to pay, which makes ECS a valid alternative.

More on container management and container optimization.