How to Use Google Preemptible VMs to Get 80% Savings

Google Cloud has always had a knack for non-standard virtual machine options, and Google preemptible VMs are no different. Traditional virtual machines are long-running servers with standard operating systems that are only shut down when you decide to shut them down. Preemptible VMs, on the other hand, last no longer than 24 hours and can be stopped at a moment’s notice (and may not be available at all). So why use them?

Google Cloud Preemptible VM Overview

Preemptible VMs are designed to be a low-cost, short-duration option for batch jobs and fault-tolerant workloads. Essentially, Google is offering up spare capacity at a huge discount – with the tradeoff that if that capacity is needed for other (full-priced) resources, your instances can be terminated, or “preempted”. Of course, if you’re using them for batch processing, being preempted will slow down your job without completely stopping it.

You can create your preemptible VMs in a managed instance group in order to easily manage a collection of VMs as a single entity – and, if a VM is preempted, the VM will be automatically recreated. Alternatively, you can use Kubernetes Engine container clusters to automatically recreate preempted VMs.  
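
For the Kubernetes Engine route, a minimal sketch with the gcloud CLI might look like the following – the cluster and pool names, zone, and node count are placeholders:

  # Hypothetical example: add a preemptible node pool to an existing GKE cluster
  gcloud container node-pools create preemptible-pool \
      --cluster=my-cluster \
      --zone=us-central1-a \
      --preemptible \
      --num-nodes=3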

Preemptible VM Pricing

Pricing is fixed, not variable, and you can view the preemptible price alongside the on-demand prices in Google’s compute pricing list and/or pricing calculator. Prices are 70-80% off the on-demand rate, and upward of 50% off even compared to a 3-year committed use discount.

Google does not charge you for instances if they are preempted in the first minute after they start running.

Note: Google Cloud Free Tier credits for Compute Engine do not apply to preemptible instances. 

Use Cases for Google Preemptible VMs

As with most trade-offs, the biggest reason to use a preemptible VM is cost. Preemptible VMs can save you up to 80% compared to a normal on-demand virtual machine. (By the way – AWS users will want to use Spot Instances for the same reason, and Azure users can check out Low Priority VMs). This is a huge savings if the workload you’re trying to run consists of short-lived processes or things that are not urgent and can be done any time. This can include things like financial modeling, rendering and encoding, and even some parts of your CI/CD pipeline or code testing framework.

How to Create a Google Preemptible VM

To create a preemptible VM, you can use the Google Cloud Platform console, the ‘gcloud’ command line tool, or the Google Cloud API. The process is the same as creating a standard VM: you select your instance size, networking options, disk setup, and SSH keys, with the one minor change that you enable the ‘preemptible’ flag during setup. The other change you’ll want to make is to create a shutdown script that decides what happens to your processes and data if the instance is stopped without your knowledge. This script can even perform different actions depending on whether the instance was preempted or shut down by something you did.
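
As a rough sketch, creating one with the gcloud CLI only adds a flag or two to the usual command – the instance name, zone, machine type, and script path below are placeholders:

  # Hypothetical example: create a preemptible instance with a shutdown script attached
  gcloud compute instances create my-preemptible-vm \
      --zone=us-central1-a \
      --machine-type=n1-standard-1 \
      --preemptible \
      --metadata-from-file shutdown-script=shutdown.sh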

One nice benefit of Google preemptible VMs is the ability to attach local SSD drives and GPUs to the instances. This means you can get added extensibility and performance for the workload that you are running, while still saving money. You can also have preemptible instances in a managed instance group for high scalability when the instances are available. This can help you process more of your jobs at once when the preemptible virtual machines are able to run.

FAQs About Google Preemptible Instances

How long do GCP preemptible VMs last?

These instances can last up to 24 hours. If you stop and start an instance, the 24-hour counter resets, because the instance transitions into a terminated state. If you reset an instance, or perform other actions that keep it in a running state, the 24-hour clock does not reset.

Is pricing variable?

No, pricing for preemptible VMs is fixed, so you know in advance what you will pay.

What happens when my instance is preempted? 

When your instance is preempted, you get a 30-second graceful shutdown period. The instance receives a preemption notice in the form of an ACPI G2 Soft Off signal, and you can use a shutdown script to complete cleanup actions before the instance stops. If the instance has not stopped after 30 seconds, Compute Engine sends an ACPI G3 Mechanical Off signal to the operating system and terminates the instance. You can practice what this looks like by stopping the instance yourself.
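
As a sketch of that shutdown script, you can query the instance metadata server to tell a preemption apart from a manual stop – the cleanup step and bucket name below are purely illustrative:

  #!/bin/bash
  # Ask the metadata server whether this shutdown is a preemption (returns TRUE or FALSE)
  PREEMPTED=$(curl -s "http://metadata.google.internal/computeMetadata/v1/instance/preempted" \
      -H "Metadata-Flavor: Google")
  if [ "$PREEMPTED" = "TRUE" ]; then
      # Illustrative cleanup: checkpoint in-progress work to a (placeholder) bucket
      gsutil cp /tmp/job-state.json gs://example-checkpoint-bucket/ || true
  fi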

By using managed instance groups, you can automatically recreate your instances if capacity is available. 
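
If you take the managed instance group route, one possible shape with the gcloud CLI is a preemptible instance template plus a group built from it – the names, zone, and size here are placeholders:

  # Hypothetical example: instance template with the preemptible flag set
  gcloud compute instance-templates create preemptible-template --preemptible
  # Managed instance group that recreates preempted instances from that template
  gcloud compute instance-groups managed create preemptible-group \
      --zone=us-central1-a \
      --template=preemptible-template \
      --size=4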

How often are you actually preempted?

Google reports an average preemption rate of 5-15% per day per project, with occasional spikes depending on the time and zone. This is not a guarantee, though, and you can be preempted at any time.

How does Google choose which instances to preempt?

Google avoids preempting too many instances from a single customer, and preempts new instances over older instances whenever possible – this is to avoid losing work across your cluster. 

How to Use Google Preemptible VMs to Optimize Costs

Our customers who have the most cost-effective use of Google resources often mix Google preemptible VMs with other instance types based on the workloads. For instance, production systems that need to be up 24/7 can buy committed-use discounts for up to 57% savings on those servers. Non-production systems, like dev, test, QA, and staging, can use on-demand resources with schedules managed by ParkMyCloud to save 65%. Then, any batch workloads or non-urgent jobs can use Google preemptible instances to run whenever available for up to 80% savings. Questions about optimizing cloud costs? We’re happy to help – email us or use the chat client on this page (staffed by real people, including me!).

AWS EBS Volume Types & What to Use Them For

AWS offers several EBS volume types that you can use for your storage needs. Here’s a quick overview of what options are available and how they differ. 

What is EBS?

Amazon Elastic Block Store (EBS) is AWS’s block-level, persistent storage solution for Amazon EC2, used for workloads such as relational and NoSQL databases, data warehousing, big data processing, and backup and recovery.

Each network-attached volume is presented to the instance as a simple block device. Because volumes are distributed, EBS scales easily (hence the “elastic”), and volumes can be backed up with snapshots.
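
For example, a snapshot is a single CLI call – the volume ID and description here are placeholders:

  # Hypothetical example: snapshot an EBS volume for backup
  aws ec2 create-snapshot \
      --volume-id vol-0123456789abcdef0 \
      --description "Nightly backup of data volume"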

It is just one of many AWS storage options, which also include:

  • Amazon Elastic File System (EFS) – scalable elastic file system for Linux-based workloads for use with AWS cloud services and on-premises resources. It can scale on demand automatically as you add and remove files.
  • Amazon Simple Storage Service (S3) – general purpose object store for user-generated content, active archive, serverless, etc.
  • Amazon S3 Glacier & Amazon S3 Glacier Deep Archive – inexpensive long term storage for infrequently accessed data, and assists with compliance in highly regulated fields.

Types of EBS Volumes

Amazon EBS volume types are broken into two main categories: 

  • SSD-backed volumes are optimized for IOPS, which are best for workloads involving frequent read/write operations with small I/O size.
  • HDD-backed volumes are optimized for throughput (measured in MiB/s) for large streaming workloads. They cannot be used as boot volumes.

Within each of those groups are two options. The default type is General Purpose SSD (gp2), and there are 3 others available:

  • General Purpose SSD (gp2) – general purpose volume that balances price and performance.
    • Use cases: Most workloads, such as virtual desktops, dev and test environments, and low-latency interactive apps.
  • Provisioned IOPS SSD (io1) – highest-performance SSD volume for mission-critical, low-latency, or high-throughput workloads that require sustained IOPS performance, or more than 16,000 IOPS or 250 MiB/s of throughput per volume.
    • Use cases: Mission-critical applications and large database workloads such as MongoDB, Microsoft SQL Server, Cassandra, Oracle, MySQL, and PostgreSQL.
  • Throughput Optimized HDD (st1) – low-cost HDD volume for frequently accessed workloads with high throughput.
    • Use cases: Streaming workloads, big data, data warehouses, log processing.
  • Cold HDD (sc1) – lowest-cost HDD volume for less frequently accessed workloads.
    • Use cases: Throughput-oriented storage for large volumes of data that is infrequently accessed.

You may see references to Magnetic HDD type volumes in older articles about types of volumes in EBS – those are now considered a “previous generation”. 
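
The volume type is simply a parameter you pass at creation time. A quick sketch with the AWS CLI – the sizes, IOPS value, and Availability Zone are illustrative:

  # Hypothetical example: a General Purpose SSD volume
  aws ec2 create-volume --volume-type gp2 --size 100 --availability-zone us-east-1a
  # Hypothetical example: a Provisioned IOPS volume (io1 also requires an --iops value)
  aws ec2 create-volume --volume-type io1 --size 500 --iops 10000 --availability-zone us-east-1a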

Interested in managing costs for your EBS volumes and snapshots? Stay tuned for announcements from ParkMyCloud coming soon on new ways the platform can optimize your costs. 

How Containerization in the Cloud Reduces Vendor Lock-in

As you accelerate your organization’s containerization in the cloud, key stakeholders may worry about putting all your eggs in one cloud provider’s basket. This combination of fears – both a fear of converting your existing (or new) workloads into containers, plus a fear of being too dependent on a single cloud provider like Amazon AWS, Microsoft Azure, or Google Cloud – can lead to hasty decisions to use less-than-best-fit technologies. But what if using more of your chosen cloud provider’s features meant you were less reliant on that cloud provider?

The Core Benefit of Containers

Something that can get lost in the debate about whether containerization is good or worthwhile is the feature of portability. When Docker containers were first being discussed, one of the main use cases was the ability to run the container on any hardware in any datacenter without worrying if it would be compatible. This seemed to be a logical progression from virtual machines, which had provided the ability to run a machine image on different hardware, or even multiple machines on the same hardware. Most container advocates seem to latch on to this from the perspective of container density and maximizing hardware resources, which makes much more sense in the on-prem datacenter world.

In the cloud, however, hardware resource utilization is now someone else’s problem. You choose your VM or container size and pay just for that size, instead of having to buy a whole physical server and pay for the entirety of it up-front. Workload density still matters, but is much more flexible than on-prem datacenters and hardware. With a shift to containers as the base unit instead of Virtual Machines, your deployment options in the cloud are numerous. This is where container portability comes into play.

The Dreaded “Vendor Lock-in”

Picking a cloud provider is a daunting task, and choosing one only to migrate away from it later can cost enormous amounts of money and time. But do you need to worry about vendor lock-in? What if, in fact, you could pivot to another provider down the road with minimal disruption and no application refactoring?

Implementing containerization in the cloud means that if you ever choose to move your workloads to a different cloud provider, you’ll only need to focus on pointing your tooling to the new provider’s APIs, instead of having to test and tinker with the packaged application container. You also have the option of running the same workload on-prem, so you could choose to move out of the cloud as well. That’s not to say that there would be no effort involved, but the major challenge of “will my application work in this environment” is already solved for you. This can help your Operations team and your Finance team to worry less about the initial choice of cloud, since your containers should work anywhere. Your environment will be more agile, and you can focus on other factors (like cost) when considering your infrastructure options. 

How to Communicate Software Development Costs to Your Finance Department

If you’re in engineering or development, communicating about cloud infrastructure and other software development costs with your finance department is tricky. For one thing, those costs are almost certainly rising.

Also, you are in different roles with different priorities, which naturally creates communication barriers. You may think your development costs are perfectly reasonable while your CFO thinks there’s a problem – or you may be focused on different parts of the bill than your colleagues in finance are.

Here are some ways to break down that communication barrier and make your software development costs sound a little less scary. 

Use the CFO’s Language 

Engineering and finance use different language to talk about the same things – which means there’s going to be an element of translation involved. Before meeting with someone who lives in a different day-to-day world than you do, consider how they may talk about cost areas in a way that’s meaningful to their role. For example: 

  • Dev-speak: “Non-production workloads” – or dev, test, or stage
    • Finance-speak: R&D costs
  • Dev-speak: “Production workloads”
    • Finance-speak: Cost of goods sold (COGS)

Focus on Business Growth Impact

So your software development costs are probably going up. There will be some wasted spend that can be eliminated, but for the most part, this growth is unavoidable for a growing business. Highlight the end results that drove decisions to increase spending on software development, for example:

  • We increased our headcount and sprint velocity to speed time to market and beat our competition for offering A.
  • We are developing multiple applications in parallel.
  • Our user base is growing, which is increasing our infrastructure costs. 
  • Our open bug count is down by 50% YOY, increasing customer satisfaction and retention.

Know the Details, But Don’t Get Bogged Down in Them

Are your S3 costs surging? Did you just commit to a bunch of 3-year reserved instances upfront (wait – did you really?)? Did your average salary per developer increase due to specialized skill requirements, or by moving outsourced QA in-house?

You should know the answers to all of these questions, but there’s no need to lead with them in a conversation. Use them as supporting information to answer questions, but not the headline.

Share Your Cost Control Plans – and Automate

Everybody likes an action plan. Identify the areas where you can reduce costs.

  • Consider roles where outsourcing may be prudent – such as apps outside your core offering
  • Automate where you can – you’re not going to replace human software developers with bots (yet), but automation can reduce costs in a few areas, such as QA testing
  • Optimize your existing infrastructure automatically – turn resources off when they’re not needed and size them to match demand based on utilization metrics
  • Reduce other wasted infrastructure spend, for example by decommissioning legacy systems

Like many things in business, effective communication and collaboration can go a long way. While it’s important to optimize costs to make your software development costs go the furthest, they are going to continue to rise. And that’s okay.

The Rise of the Enterprise Cloud Manager

There is a growing job function among companies using public cloud: the Enterprise Cloud Manager. We did a study of ParkMyCloud users which showed that a growing proportion of them have “cloud” or the name of their cloud provider, such as “AWS”, in their job title – a sign of increasing specialization among the individuals who manage cloud infrastructure. And, in some companies, there is a dedicated role for cloud management – such as an Enterprise Cloud Manager.

Why would you need an Enterprise Cloud Manager?

The world of cloud management is constantly changing and becoming increasingly complex, even for the best cloud manager. The growing adoption of hybrid and multi-cloud environments, as organizations take advantage of best-of-breed solutions, makes cloud environments more confusing, more expensive, and harder to control. Someone who is not fully versed in this field may not always know how to handle problems related to governance, security, and cost control, so it is important to dedicate resources in your organization to cloud management and related cloud job roles. Gartner’s cloud management framework gives us a look at all the things involved in cloud management, so we can better understand how many parts need to come together for it to run smoothly.

Having a role in your organization that is dedicated to cloud management allows others, who are not specialized in that field, to focus on their jobs, while also centralizing responsibility.  With the help of an Enterprise Cloud Manager, responsibilities are delegated appropriately to ensure cloud environments are handled according to best practices in governance, security, and cost control.

After all, just because you adopt public cloud infrastructure does not mean you have addressed any governance or cost issues. That seems rather obvious when you consider that entire sub-industries have grown up around addressing these problems, but you’d be surprised how often eager adopters assume the technology will do the work, forgetting that cloud management is not a technological problem but a human behavior problem.

And someone has to be there to bring the motivational bagels to the “you really need to turn your instances off” meeting.

A Larger Approach: The Cloud Center of Excellence

More broadly, businesses with a presence in the cloud, regardless of their size, should also consider adopting a Cloud Center of Excellence (CCoE) – which, if the resources are available, can function like an entire department of Enterprise Cloud Managers. Essentially, a CCoE brings together cross-functional teams to manage cloud strategy, governance, and best practices, and to serve as cloud leaders for the entire organization.

The role of an Enterprise Cloud Manager or cloud center of excellence (or cloud operations center or cloud enablement team, whatever you want to call it)  is to oversee cloud operations. They know all the ins and outs of cloud management so they are able to create processes for resource provisioning and services. Their focus is on optimizing their infrastructure which will help streamline all their cloud operations, improve productivity, and optimize cloud costs. 

Moreover, the Enterprise Cloud Manager can systematize the foundation for a CCoE with key guiding principles, like the ones AWS outlines for its Cloud Center of Excellence.

With the Enterprise Cloud Manager’s leadership, the DevOps, CloudOps, Infrastructure, and Finance teams within the CCoE can ensure that the organization’s diverse business units use a common set of best practices to spearhead their cloud efforts, while maintaining the balanced working relationships, operational efficiency, and innovative thinking needed to achieve organizational goals.

A Note on Job Titles

It’s worth noting that while descriptive, the “Enterprise Cloud Manager” title isn’t necessarily something widely adopted. We’ve run across folks with titles in Cloud Manager, Cloud Operations Manager, Cloud Project Manager, Cloud Infrastructure Manager, Cloud Delivery Manager, etc.

If you’re on the job hunt, we have a few other ideas for cloud and AWS jobs for you to check out.

Automation Tools are Essential

With so much going on in this space, it isn’t reasonable to expect one person – or even a team – to manage all of this by hand; you need automation tools. The great thing is that these tools deliver tangible results that make automation a key component of successful enterprise cloud operations, and they work for companies of any size. Primary users can be people dedicated to this full time, such as an Enterprise Cloud Manager, as well as people managing cloud infrastructure on top of other responsibilities.

Why are these tools important? They provide two main things: visibility into your environment and the ability to act on the recommendations that visibility produces. (That is, unless you’re willing to let go of the steering wheel and let the platform make the decisions – but most folks aren’t, yet.) Customers that were once managing resources manually are now saving time and money by implementing automation tools. Take a look at the automation tools offered by your cloud vendor, as well as the third-party tools available for cost optimization and beyond. Setting them up will lessen the need for routine check-ins and maintenance while ensuring your infrastructure is optimized.

Do we really need this role?

To put it simply, if you have more than a handful of cloud instances: yes. If you’re small, it may be part of someone’s job description. If you’re large, it may be a center of excellence. 

But if you want your organization to be well informed and up to date, then it is important that you have the organizational roles in place to oversee your cloud operations – an Enterprise Cloud Manager, CCoE and automation tools.

EC2 Instance Types Comparison (and how to remember them)

AWS offers a range of EC2 instance types optimized for various purposes. It’s great that they provide so much variety, but of course, it means one more thing that you have to learn. It’s worth taking the time to do so, as ⅔ of IaaS spend goes toward compute – that’s a lot of EC2.

We’ve also recorded a brief video breakdown, which compares EC2 purchasing options as well.

Or, read on for a look into each of the AWS instance types. Remember that within each type, you’ll still need to choose the AWS instance size that suits your specific needs. Additionally, older generations within each instance type are available for purchase – for example, c5 is the latest “c” instance, but c4 and c3 are still available – but as the newer types tend to perform better at a cheaper price, you’ll only want to use the older types if you have an AMI or other dependency. The differences matter for some users… but you probably already know who you are.

Note: a version of this blog was originally published in July 2018. It has been rewritten and updated for 2020. New EC2 instance types since our last writeup include A1, T3, z1d, high memory, R5, G4, and F1. 

Quick EC2 Instance Info 

We’ll cover each instance family below, with a brief description and a mnemonic for each (hopefully helpful if you’re studying for an AWS certification!).

If you’ve taken a look at AWS training materials, you may have seen a couple of overall acronyms to remember all of these – perhaps Dr McGiFT Px or FIGHT Dr McPX. Whether these acronyms are useful at all is perhaps a point of discussion, but to ensure that all of the current instance types are in your list, we suggest:

  • Fight Czar MXPD 
  • Fright Camp DXZ
  • March Gift PZXD

(and don’t forget high memory and Inf!)

General Purpose

These general purpose AWS EC2 instance types are a good place to start, particularly if you’re not sure what type to use. There are three general purpose types.

t instance type

The t3 family is a burstable instance type. If you have an application that needs only basic CPU and memory, or one that gets used at some times but not others, t3 is a good choice. When the instance is idle, it generates CPU credits, which it spends when it is busy. This is useful for things that come and go a lot, such as websites or development environments, and while t3 instances are generally inexpensive, make sure you understand how the CPU credits work before deploying them. There’s a little bit of math involved, and they may not be as cheap as they look at first glance.

Make sure you also understand the difference between t3 and the older t2 – t3 are in “unlimited mode” by default, so instead of throttling down to baseline CPU when your instance runs out of credits, you pay for overages.
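
If you’d rather keep the t2-style behavior of throttling to baseline instead of paying for overages, you can launch a t3 in standard credit mode. A sketch with the AWS CLI – the AMI ID is a placeholder:

  # Hypothetical example: launch a t3 that throttles to baseline rather than billing for extra credits
  aws ec2 run-instances \
      --image-id ami-0123456789abcdef0 \
      --instance-type t3.micro \
      --credit-specification CpuCredits=standard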

For each of the EC2 types we cover here, we’ll also add a mnemonic to help you remember the purpose of each instance type.

Mnemonic: t is for tiny or turbo.

m instance type

The m5 instance type is similar, but for more consistent workloads. It has a nice balance of CPU, memory, and disk. It’s not hard to see why almost half of EC2 workloads are on “m” instances. In addition to m5, you also have the option of m6g, which are powered by Arm-based AWS Graviton2 processors, making them more cost-efficient. There’s also m5a, m5n, and m4 – most of which are safe to ignore unless you have a specific use case for one of the other processors besides m5’s Intel Xeon Platinum 8175 processors. If you aren’t sure what to choose, m5 is the most versatile of all the Amazon instance types. 

Mnemonic: m is for main choice or happy medium.

a1 instance type

The a1 instance type was announced in late 2018 and can be a less expensive option than other EC2 instances. These instances are powered by Arm processors and are suited for Arm-based, scale-out workloads such as web servers, containerized microservices, caching fleets, distributed data stores, and development environments.

Mnemonic: a is for Arm processor

Compute Optimized

c instance type

The c5 instance type has a high ratio of compute/CPU to memory. If you have a compute-intensive application – maybe scientific modelling, intensive machine learning, or multiplayer gaming – these instances are a good choice. There is also the c5d option, which is SSD-backed, and the c5n, which has up to 100 Gbps of network bandwidth and more memory than the equivalent c5 instances. The c4 family is also still available.

Mnemonic: c is for compute (at least that one’s easy!)

Memory Optimized

r instance family

The r instance family is memory-optimized, which you might use for in-memory databases, real-time processing of unstructured big data, or Hadoop/Spark clusters. You can think of it as a kind of midpoint between the m5 and the x1e. In addition to r5, there are r5a which deliver lower cost per GiB memory and r5n which have higher bandwidth for applications that need improved network throughput and packet rate performance.

Mnemonic: r is for RAM.

x1 instance family

The x1 family has a much higher ratio of memory, so this is a good choice if you have a full in-memory application or a big data processing engine like Apache Spark or Presto. X1e are optimized for high-performance databases, in-memory databases, and other memory intensive enterprise applications.

Mnemonic: x is for xtreme (as in “xtreme RAM”) seems to be the generally accepted suggestion, but we think it’s a bit weak. If you have any suggestions, comment below.

High Memory instance family

We’re not sure why these didn’t get an alphabet soup name like the rest of the AWS instances, but at least it’s easy to remember and understand. As you might guess, high memory instances run large in-memory databases, including production deployments of SAP HANA. 

Mnemonic: we’ll leave this one up to you.

z1d instance family

The z1d instances combine high compute capacity with a high memory footprint. They have a sustained core frequency of up to 4.0 GHz, the fastest of AWS’s offerings. These are best for electronic design automation (EDA) and some relational database workloads with high per-core licensing costs.

Mnemonic: z is for zippy 

Accelerated Computing

p instance type

If you need GPUs on your instances, p3 instances are a good choice. They are useful for video editing, and AWS also lists use cases of “computational fluid dynamics, computational finance, seismic analysis, speech recognition, autonomous vehicles” – so it’s fairly specialized. p2 instances are also available.

Mnemonic: p is for pictures (graphics).

Inf1 instance type

The Inf1 instances are a specialized EC2 type for machine learning inference applications, such as recommendation engines, forecasting, image and video analysis, advanced text analytics, document analysis, voice, conversational agents, translation, transcription, and fraud detection.

Mnemonic: inf is for inference

g instance type

The g instance type uses Graphics Processing Units (GPUs) to accelerate graphics-intensive workloads, and is also designed to accelerate machine learning inference. This could include adding metadata to an image, automated speech recognition, and language translation, as well as graphics workstations, video transcoding, and game streaming in the cloud.

g4 is the latest family, and g3 are available as well.

Mnemonic: g is for graphics or GPU

F1 instance type

f1 instances offer customizable hardware acceleration with field programmable gate arrays (FPGAs) – hence the “f”. Applications could include genomics research, financial analysis, and real-time video processing.

Mnemonic: f is for FPGA 

Storage Optimized

i3 instance type

The i3 instance type is similar to h1, but it is SSD backed, so if you need an NVMe drive, choose this type. Use it for NoSQL databases, in-memory databases, Elasticsearch, and more. The i3en option has higher network bandwidth with Elastic Network Adapter (ENA)-based enhanced networking. 

Mnemonic: i is for IOPS.

d2 instance type

d2 instances have an even higher ratio of disk to CPU and memory, which makes them a good fit for Massively Parallel Processing (MPP), MapReduce and Hadoop distributed computing, and similar applications.

Mnemonic: d is for dense.

h1 instance type

The h1 type is HDD backed, with a balance of compute and memory. You might use it for distributed file systems, network file systems, or data processing applications.

Mnemonic: h is for HDD.

What EC2 instance types should you use?

As AWS has continued to add options to EC2, there are now EC2 instance types for almost any application. If you have pricing comparison questions, run them through the AWS pricing calculator. And if you’re still not sure, starting with t3 or m5 is generally the way to go.
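
If you want to compare raw specs from the command line before deciding, one option is a describe call like the sketch below – the list of candidate types is just an example:

  # Compare vCPU count and memory for a few candidate instance types
  aws ec2 describe-instance-types \
      --instance-types t3.medium m5.large c5.large r5.large \
      --query "InstanceTypes[].[InstanceType,VCpuInfo.DefaultVCpus,MemoryInfo.SizeInMiB]" \
      --output table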

Looking for info on the other cloud providers?