9 Key Takeaways from our AWS Webinar on Automated Cost Control

We recently held our first AWS webinar, featuring speakers from AWS, Sysco, and our CTO Bill Supernor. If you missed “How to Turn AWS Utilization Data into Automated Cost Control,” not to worry! You can watch a replay here.

Here are 9 takeaways from this AWS webinar – and more resources to learn about them:

    • Cost Optimization is one of five key pillars in the AWS Well-Architected Framework, and we’re glad to see AWS prioritizing controlled costs so highly. If you’re not already familiar with the Well-Architected Framework, learn more on the AWS site. The other four pillars, by the way, are operational excellence, security, reliability, and performance efficiency.
    • Choose the right pricing model for your workload needs. Make sure to evaluate whether Reserved Instances are a good choice before committing, and don’t forget about Spot Instances either. 
    • Tag resources for cost allocation – AWS emphasized that tagging resources for cost allocation is important for decision making – and of course it is! You have to be able to categorize your resources to make decisions about them. Here’s more on how to improve cloud automation through tagging.
    • Use AWS CloudWatch – similarly, use your CloudWatch data to optimize your environment. AWS is collecting data about your usage whether you’re looking at it or not – so put it to work! (See the sketch after this list for one way to pull that data programmatically.)
    • Bagels work – Sysco Foods’ Kurt Brochu shared that he could motivate his team to show up for cost optimization trainings by providing bagels. Sometimes it takes a bit of prodding to get team members not directly responsible for budget to care about cost, so don’t be afraid to get creative. 
    • Use gamification as a motivator – similarly, by turning cost savings into a race or other competition, you can spark interest that might otherwise be hard to find.
    • There are plenty more AWS webinars – AWS partners frequently hold webinars in conjunction with the cloud provider. One of the best places to learn about them is the @AWS_Partners Twitter channel.
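
As a small illustration of the tagging and CloudWatch takeaways above, here’s a sketch that tags an EC2 instance for cost allocation and then pulls its average CPU utilization from CloudWatch. It uses the boto3 SDK; the instance ID, tag values, and region are placeholders rather than recommendations, so treat it as a starting point for putting your own utilization data to work.

```python
# Sketch: tag an EC2 instance for cost allocation, then check its recent CPU usage.
# The instance ID, tag values, and region are placeholders for illustration.
from datetime import datetime, timedelta

import boto3

instance_id = "i-0123456789abcdef0"  # placeholder instance

ec2 = boto3.client("ec2", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

# Apply cost allocation tags so spend can be grouped by team and environment.
ec2.create_tags(
    Resources=[instance_id],
    Tags=[{"Key": "team", "Value": "data-eng"},
          {"Key": "environment", "Value": "dev"}],
)

# Pull average CPU utilization for the last 24 hours from CloudWatch.
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
    StartTime=datetime.utcnow() - timedelta(hours=24),
    EndTime=datetime.utcnow(),
    Period=3600,
    Statistics=["Average"],
)

for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], f"{point['Average']:.1f}%")
```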

Watch the replay of our AWS webinar for the full story – and let us know in the comments below what else you’d like to learn about in future webinars!

How Big is AWS?

If you’re at all familiar with cloud computing, you know Amazon Web Services is a giant – but just how big is AWS? There are a number of ways to measure the size of a cloud business like Amazon’s – here are answers to a few of the most common questions.

How big is AWS’s staff?

While headcount for Amazon as a whole is reported in the company’s quarterly earnings reports (630,600 as of Q1 2019), the number of employees within AWS specifically is less clear.

AWS has just over 40,000 employees listed on LinkedIn, but of course, that’s not the most accurate measure. By eyeballing the operating expenses reported for the AWS segment compared to Amazon’s business as a whole, you could estimate up to 62,000, but it’s likely lower than that. As of this writing, AWS has 12,280 full-time job openings listed on their website, while Amazon as a whole has 32,454 openings.

We’ll be interested to see how this is affected by HQ2 joining ParkMyCloud’s neighborhood in Northern Virginia later this year.

How big is AWS’s infrastructure?

AWS has 66 availability zones within 21 geographic regions around the world. Each availability zone consists of one to dozens of individual data centers. To visualize these data centers, check out AWS’s exploration of them here.

How big is AWS’s list of products?

When we last counted in April, there were 170 unique services listed on AWS’s offerings page, and there could certainly be more by now. These range from core compute products like EC2 to newer releases like AWS DeepRacer for machine learning. Look for a spike in this count after AWS re:Invent in early December, as the cloud provider tends to save up announcements for its yearly user conference.

Speaking of…

How big is AWS re:Invent?

In 2018, AWS re:Invent pulled 52,000 attendees, and AWS estimates a crowd of 62,000 for 2019, each year taking up a large portion of the Las Vegas Strip.

In comparison, Microsoft Ignite expects 25,000 attendees this year, Google Cloud Next estimated 30,000 attendees, and VMworld estimates 21,000 attendees. Then again, Salesforce’s Dreamforce 2018 drew 170,000 attendees, and Consumer Electronics Show (CES) reports 175,212 attendees for 2019. So while AWS re:Invent may be large for a cloud-specific conference, it’s not quite a giant as far as tech shows go. 

How big is AWS’s market share?

When looking at public cloud, it’s clear AWS still holds the largest portion of the market. A recent report put AWS at 47% of the market, with the next-closest competitor as Azure at 22%. More about AWS vs. Azure vs. Google cloud market share.

How big is AWS’s revenue?

For Q1 2019, AWS reported sales of $7.7 billion, showing consistent growth and the largest revenue of any of the cloud service providers. For the full year of 2018, AWS reported $25.7 billion in revenue – that’s more than McDonald’s. Additionally, AWS has $16 billion or more in backlog revenue from contracts for future services. It is a growing proportion of Amazon’s business. In fact:

How big is AWS as a portion of Amazon?

In first quarter reports, AWS contributed about 50% of Amazon’s overall operating income, with an operating margin of 29%. Overall, AWS is growing as a contributor to Amazon’s income and growth. More here.

So how big is AWS? It’s up to you how you want to measure, but suffice it to say: big.

How AWS Firecracker Makes Containers and Serverless More Efficient

AWS Firecracker was announced at AWS re:Invent in November 2018 as a new AWS open source virtualization technology. The technology is purpose-built for creating and managing secure, multi-tenant container and function-based services. It was described by the AWS Chief Evangelist Jeff Barr as “what a virtual machine would look like if it was designed for today’s world of containers and functions.”

What is AWS Firecracker?

Firecracker is a Virtual Machine Monitor (VMM) designed exclusively for running transient and short-lived processes. In other words, it helps to optimize the running of functions and serverless workloads. It’s also an important new component in the emerging world of serverless technologies and is used to enhance the backend implementation of Lambda and Fargate. Firecracker helps deliver the speed of containers combined with the security of VMs. If you use Lambda or Fargate, you’re already receiving the benefits of Firecracker. However, if you run or orchestrate a large volume of containers, you should take a look at this service with optimization in mind.

How AWS Firecracker Creates Efficiencies

AWS realizes the economic benefits of Firecracker by creating what it calls “microVMs,” which allow it to spread serverless workloads across multiple servers and get a greater ROI from its investment in the servers behind serverless. In terms of customer benefit, Firecracker enables these microVMs to launch in 125 milliseconds or less, compared to the seconds (or longer) it can take to launch a container or spin up a traditional virtual machine. In a world where thousands of VMs can be spun up and down to tackle a specific workload, this constitutes a significant savings. And remember, these are fully fledged micro virtual machines, not just containers. The microVMs themselves are worth a closer look: each includes an in-process rate limiter to optimize shared network and storage resources. As a result, one server can support thousands of microVMs with widely varying processor and memory configurations.
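
For a sense of what this looks like in practice, here’s a minimal sketch that configures and boots a single microVM through Firecracker’s REST API, which is served over a Unix domain socket. It assumes a firecracker process is already running with --api-sock /tmp/firecracker.socket, that the kernel and rootfs paths exist, and that the third-party requests-unixsocket Python package is installed – the paths and sizes are placeholders, not a production recipe.

```python
# Sketch: configure and boot a Firecracker microVM over its API socket.
# Assumes firecracker is already running with --api-sock /tmp/firecracker.socket;
# the kernel and rootfs paths below are placeholders.
import requests_unixsocket  # third-party package: requests-unixsocket

session = requests_unixsocket.Session()
base = "http+unix://%2Ftmp%2Ffirecracker.socket"

# Size the microVM: 1 vCPU and 128 MiB of memory.
session.put(f"{base}/machine-config",
            json={"vcpu_count": 1, "mem_size_mib": 128})

# Point it at an uncompressed Linux kernel image.
session.put(f"{base}/boot-source",
            json={"kernel_image_path": "/images/vmlinux",
                  "boot_args": "console=ttyS0 reboot=k panic=1"})

# Attach a root filesystem.
session.put(f"{base}/drives/rootfs",
            json={"drive_id": "rootfs",
                  "path_on_host": "/images/rootfs.ext4",
                  "is_root_device": True,
                  "is_read_only": False})

# Boot it.
session.put(f"{base}/actions", json={"action_type": "InstanceStart"})
```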

There is also the enhanced security and workload isolation that comes with the Kernel-based Virtual Machine (KVM) – more secure than containers, which are less isolated. One particularly valuable security feature is that Firecracker is statically linked, which means all the libraries it needs to run are included in its executable code. This makes new Firecracker environments safer by eliminating dependencies on outside libraries. Altogether, this combination of efficiency, security, and speed created quite the buzz at the AWS re:Invent launch.

Will Firecracker make a “bang”?

There are a few caveats related to the novelty of the technology. In particular, compared to more mature alternatives such as containers or Hyper-V VMs, it is prudent to confine Firecracker to non-production workloads until the technology has been more fully battle-tested for production use.

However, as confidence, adoption, and experience with serverless technologies grow, Firecracker seems likely to offer a popular new method for provisioning compute resources and to help bridge the current gap between VMs and containers.

SaaS vs. PaaS vs. IaaS – Where the Market is Going

SaaS, PaaS, IaaS – these are the three essential models of cloud services to compare, otherwise known as Software as a Service (SaaS), Platform as a Service (PaaS), and Infrastructure as a Service (IaaS). Each of these has its own benefits, and it’s good to understand why providers offer these different models and what implications they have for the market. While SaaS, PaaS, and IaaS are different, they are not competitive – most software-focused companies use some form of all three. Let’s take a look at these main categories, and because I like to understand things by company name, I’ll include a few of the more common SaaS, PaaS, and IaaS providers in the market today.

SaaS: Software as a Service

Software as a Service, also known as cloud application services, represents the most commonly utilized option for businesses in the cloud market. SaaS utilizes the internet to deliver applications, which are managed by a third-party vendor, to its users. A majority of SaaS applications are run directly through the web browser, and do not require any downloads or installations on the client side.

Prominent providers: Salesforce, ServiceNow, Google Apps, Dropbox and Slack (and ParkMyCloud, of course).

PaaS: Platform as a Service

Cloud platform services, or Platform as a Service (PaaS), provide a cloud-based environment with the components needed to build and run applications. PaaS delivers a framework for developers that they can build upon and use to create customized applications. The underlying servers, storage, and networking can be managed by the enterprise or a third-party provider, while the developers maintain management of the applications themselves.

Prominent providers and offerings: AWS Elastic Beanstalk, Red Hat OpenShift, IBM Bluemix, Windows Azure, and VMware Pivotal CF.

IaaS: Infrastructure as a Service

Cloud infrastructure services, known as Infrastructure as a Service (IaaS), are made up of highly scalable and automated compute resources. IaaS is fully self-service for accessing and monitoring things like compute, storage, networking, and other infrastructure-related services, and it allows businesses to purchase resources on demand and as needed instead of having to buy hardware outright.
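
To make the self-service, on-demand point concrete, here’s a minimal sketch of provisioning an IaaS compute instance programmatically, using AWS’s boto3 SDK as one example. The AMI ID, key pair name, and tag values are placeholders, not real resources.

```python
# Sketch: launching an on-demand virtual machine through an IaaS API (EC2 via boto3).
# The AMI ID, key pair name, and tag values are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder AMI
    InstanceType="t3.micro",
    KeyName="my-key-pair",            # placeholder key pair
    MinCount=1,
    MaxCount=1,
    TagSpecifications=[{
        "ResourceType": "instance",
        "Tags": [{"Key": "purpose", "Value": "demo"}],
    }],
)

print("Launched:", response["Instances"][0]["InstanceId"])
```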

Prominent Providers: Amazon Web Services (AWS), Microsoft Azure (Azure), Google Cloud Platform (GCP), and IBM Cloud.

SaaS vs. PaaS vs. IaaS

SaaS, PaaS and IaaS are all under the umbrella of cloud computing (building, creating, and storing data over the cloud). Think about them in terms of out-of-the-box functionality and building from the bottom up.

IaaS helps build the infrastructure of a cloud-based technology. PaaS helps developers build custom apps via an API that can be delivered over the cloud. And SaaS is cloud-based software that companies can sell and use.

Think of IaaS as the foundation for building a cloud-based service (whether that’s content, software, or the website to sell a physical product), PaaS as the platform on which developers can build apps without having to host them, and SaaS as the software you can buy or sell to help enterprises (or others) get stuff done.

SaaS, PaaS, IaaS Market Share Breakdown

The SaaS market is by far the largest of the three, according to a Gartner study that reported enterprises spent more than $182 billion on cloud services, with SaaS making up 43% of that spend.

While SaaS is currently the largest cloud service in terms of spend, IaaS is projected to be the fastest-growing market, with a CAGR of more than 20% over the next three to four years. This bodes very well for the “big three” providers: AWS, Azure, and GCP.

Where the Market is Going

What’s interesting is that many pundits argue that PaaS is the future, along with FaaS, DaaS, and every other X-as-a-service. However, the data shows otherwise. As evidenced by the Gartner reports above, IaaS has a larger market share than PaaS and is growing the fastest.

First of all, this is because IaaS offers all the important benefits of using the cloud such as scalability, flexibility, location independence and potentially lower costs. In comparison with PaaS and SaaS, the biggest strength of IaaS is the flexibility and customization it offers. The leading cloud computing vendors offer a wide range of different infrastructure options, allowing customers to pick the performance characteristics that most closely match their needs.

In addition, IaaS is the least likely of the three cloud delivery models to result in vendor lock-in. With SaaS and PaaS, it can be difficult to migrate to another option or simply stop using a service once it’s baked into your operations. IaaS also charges customers only for the resources they actually use, which can result in cost reductions if used strategically. While much of the growth is from existing customers, it’s also because more organizations are using IaaS across more functions than either of the other models of cloud services.

AWS Lambda Pricing: Low, But Unpredictable

Today’s entry into our exploration of public cloud prices focuses on AWS Lambda pricing.

Low costs are often cited as a benefit of using serverless. A recent survey showed that companies saved an average of 4 developer workdays per month by adopting serverless, and 21% of companies reported cost reduction as a main benefit. But why aren’t 100% of companies reporting cost savings?

In this article, we’ll take a look at the Lambda pricing model, and some things you need to keep in mind when estimating costs for serverless infrastructure.

How AWS Lambda Pricing Works

Core Pricing

AWS Lambda pricing is based on what you use. There are two major factors that contribute to the calculation of “what you use”:

  • Requests — Lambda counts a request each time it starts executing in response to an event notification or invoke call. Each request costs $0.0000002.
  • Duration — Duration is calculated from the time your code begins executing until it returns or otherwise terminates, rounded up to the nearest 100ms. However, the price is not charged simply per second. Rather, it is charged per GB-second, which is the duration in seconds multiplied by the memory size (in GB) allocated to the function. Every GB-second costs $0.0000166667. (A worked example follows this list.)
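
To make the math concrete, here’s a small sketch that estimates a monthly bill from the numbers above, including the free tier described in the next section. The invocation count, memory size, and average duration are made-up example figures, and the calculation covers only the core request and duration charges – no API Gateway, data transfer, or other add-ons.

```python
# Rough monthly estimate of core AWS Lambda charges (requests + duration only).
# The workload numbers here are hypothetical placeholders.
import math

invocations = 5_000_000        # requests per month (example)
memory_gb = 512 / 1024         # 512 MB allocated, expressed in GB
avg_duration_ms = 120          # average execution time (example)

PRICE_PER_REQUEST = 0.0000002
PRICE_PER_GB_SECOND = 0.0000166667
FREE_REQUESTS = 1_000_000      # monthly free tier
FREE_GB_SECONDS = 400_000

# Duration is rounded up to the nearest 100ms before billing.
billed_seconds = math.ceil(avg_duration_ms / 100) * 100 / 1000
gb_seconds = invocations * billed_seconds * memory_gb

request_cost = max(invocations - FREE_REQUESTS, 0) * PRICE_PER_REQUEST
duration_cost = max(gb_seconds - FREE_GB_SECONDS, 0) * PRICE_PER_GB_SECOND

print(f"GB-seconds used: {gb_seconds:,.0f}")
print(f"Estimated core Lambda bill: ${request_cost + duration_cost:.2f}")
```

With these example numbers the bill comes to only a few dollars – but double both the memory allocation and the duration and the GB-second charge roughly quadruples, which is the multiplicative effect discussed later in this article.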

Free Tier

There is a free tier available to all Lambda users — and note that this is unrelated to your regular AWS free tier usage. Every user gets 1 million requests per month and 400,000 GB-Seconds per month, for free.

Additional Charges

In addition to requests and duration, you will also be charged for additional AWS services used or data transfers – regardless of whether you’re using Lambda’s free tier. For many applications, API requests and data transfers will cost significantly more than the AWS Lambda core pricing.

Why AWS Lambda Pricing is So Confusing

Ultimately, Lambda pricing is confusing and hard to predict. Here’s why:

  • Granularity — the fact that cost is per each function execution makes it difficult to estimate compared to server-based pricing models. Thinking in terms of iterations of a microservices script requires some mental gymnastics.
  • Multiplicative costs — the fact that the duration charges are based on a calculation makes it harder to conceptualize and more variable than other pricing models – and if both duration and memory change, the costs increase quickly.
  • Additional charges — at $3.50 per million calls, AWS API Gateway charges often make up a significant portion of the cost to run serverless – plus data transfers and other “on top” costs.
  • Wait time — if a function makes an outgoing call and sits idle waiting for the result, you’ll be charged for the wait time. Be sure to set a maximum function execution time to prevent this from driving up costs (as well as a maximum memory size) – the sketch after this list shows how to set both.
  • Code maintenance — it’s a murkier area when it comes to costs, but with more functions come more lines of code to maintain.
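
As a guard against runaway wait-time and memory charges, both limits can be set per function. Here’s a minimal sketch using the boto3 SDK; the function name, region, and limit values are placeholders.

```python
# Sketch: cap a Lambda function's execution time and memory with boto3.
# "my-function", the region, and the limits are placeholders for illustration.
import boto3

lambda_client = boto3.client("lambda", region_name="us-east-1")

lambda_client.update_function_configuration(
    FunctionName="my-function",
    Timeout=10,       # maximum execution time in seconds
    MemorySize=256,   # allocated memory in MB
)
```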

Of course, there are several AWS Lambda pricing calculators out there to help estimate costs — ranging from simpler calculators that include only the number of executions, memory allocation, and average duration (examples from Dashbird and A Cloud Guru) to those incorporating language, activity patterns, and EC2 comparisons from the cheekily named servers.lol.

AWS Lambda Costs Are Just One Factor

There are plenty of benefits to serverless, from low latency to scalability to simple deployment. However, alongside vendor lock-in, long or variable execution times, and reduced control over application performance, unpredictable cost is another reason why serverless may not replace traditional servers in all situations.

Google Kubernetes Engine (GKE) – The Leader in Hosted Container Orchestration

One of Google Cloud’s killer products is Google Kubernetes Engine, or GKE. Since Google was the original creator of the Kubernetes container orchestrator, it’s fitting that they are considered to be at the forefront of Kubernetes management and development. Even though Kubernetes is now managed by the Cloud Native Computing Foundation, Google is still a major contributor to the open-source Kubernetes project on GitHub. Let’s take a look at Google’s hosted version of Kubernetes and why so many cloud users prefer it to the competition.

GKE Overview

Google Kubernetes Engine is a hosted environment that can run your containerized applications. Unlike Google Compute Engine, which lets you run virtual machines with the operating system of your choice, Google Kubernetes Engine takes your application or code that is packaged into a Docker container and manages it according to your specifications. Ideally, the same containers that have gone through your testing and QA process can now be run at-scale in production, with the backing of Google’s security, availability, and management.

GKE was made publicly available in 2015, after Google had spent over 10 years running containerized services (like Gmail and YouTube) behind the scenes on its internal cluster-management systems. After open-sourcing the Kubernetes software, Google set up a hosted version so users didn’t have to worry about running the master node themselves. This hosted master node has built-in high availability, health checks, and an easy-to-use developer dashboard.

GKE manages the virtual machines your containers run on using Google’s own container-optimized OS. These VMs can scale up or down based on container load and application requirements, and can even utilize preemptible VMs for batch or low-priority jobs. GKE pricing is based solely on the number of seconds that those compute resources exist, as there are no additional costs for the Kubernetes masters that run your clusters.
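
Following the pricing model described above – you pay for the node VMs for as long as they exist, and nothing extra for the masters – here’s a rough sketch of how a monthly estimate works out. The hourly rates are hypothetical placeholders, not actual GCP list prices.

```python
# Rough monthly GKE cost sketch: you pay for worker node VMs, not for the masters.
# The hourly rates below are hypothetical placeholders, not actual GCP prices.
HOURS_PER_MONTH = 730

standard_nodes = 3
standard_rate = 0.095        # $/hour per node (placeholder)

preemptible_nodes = 5        # batch / low-priority jobs
preemptible_rate = 0.029     # $/hour per node (placeholder, heavily discounted)

master_cost = 0.0            # the hosted Kubernetes masters are not billed separately

node_cost = (standard_nodes * standard_rate +
             preemptible_nodes * preemptible_rate) * HOURS_PER_MONTH

print(f"Estimated monthly cluster cost: ${node_cost + master_cost:,.2f}")
```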

GKE vs. The Competition (AKS, EKS, and ECS)

Google Kubernetes Engine is often seen as the leader in hosted Kubernetes environments, both because Google wrote the original software, and because a decade of experience running it on some of the largest-scale websites in the world is hard to discount. Google also had a two-year head start on Microsoft’s AKS service and a three-year head start on Amazon’s EKS platform, which helped work out the kinks and build brand awareness. More: cloud container services comparison.

There are also some technical reasons why GKE is a superior choice. Google deploys the latest version of Kubernetes faster than other providers, so you’re always on the bleeding edge of development. Clusters typically spin up faster, more nodes are allowed per cluster, and new workers start more quickly. SOC and ISO compliance can be a deciding factor for large organizations. The user experience of the Kubernetes dashboard is also noticeably better than some alternatives.

You Down With GKE? (Yeah, You Know Me)

At the end of the day, the biggest question we get asked about services like Google Kubernetes Engine is, “Should I use Google Kubernetes Engine for my containers?” As always, the answer is nuanced. If you aren’t embedded in a particular cloud provider (or if you have a multi-cloud strategy), then GKE is certainly a step above other hosted Kubernetes services. Throw in the fact that you don’t pay for master nodes, and it makes financial sense as well. However, if you’re fully committed to a different cloud provider, then the native container management tools are good enough to get the job done.