How Microsoft Azure Deallocate VM vs. Stop VM States Differ

Do you know the difference between Azure’s “deallocate VM” and “stop VM” states? They are similar enough that I’ve noticed some confusion around the distinction in conversation.

If your VM is not running, it will have one of two states – Stopped, or Stopped (deallocated). Essentially, if something is “allocated” – you’re still paying for it. So while deallocating a virtual machine sounds like a harsh action that may be permanently deleting data, it’s the way you can save money on your infrastructure costs and eliminate wasted Azure spend with no data loss.

Azure’s Stopped State

When you are logged in to the operating system of an Azure VM, you can issue a command to shut down the server. This will kick you out of the OS and stop all processes, but will maintain the allocated hardware (including the IP addresses currently assigned). If you find the VM in the Azure portal, you’ll see the state listed as “Stopped”. The biggest thing you need to know about this state is that you are still being charged by the hour for this instance.

Azure’s Deallocated State

The other way to stop your virtual machine is through Azure itself, whether that’s through the portal, PowerShell, or the Azure CLI. When you stop a VM through Azure, rather than through the OS, it goes into a “Stopped (deallocated)” state. This means that any non-static public IPs will be released, but you’ll also stop paying for the VM’s compute costs. This is a great way to save money on your Azure costs when you don’t need those VMs running, and is the state that ParkMyCloud puts your VMs in when they are parked.
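
To make the distinction concrete, here’s a minimal sketch using the Azure CLI (the resource group and VM names are hypothetical). Note that the CLI splits the two states into separate commands – ‘az vm stop’ powers the VM off but leaves the hardware allocated and billed, while ‘az vm deallocate’ releases it:

    # Power off the VM but keep its hardware allocated -- state becomes
    # "Stopped" and compute billing continues:
    az vm stop --resource-group my-rg --name my-vm

    # Release the hardware -- state becomes "Stopped (deallocated)",
    # compute billing ends, and dynamic public IPs are released:
    az vm deallocate --resource-group my-rg --name my-vm

    # Confirm the resulting power state:
    az vm get-instance-view --resource-group my-rg --name my-vm \
      --query "instanceView.statuses[?starts_with(code,'PowerState')].displayStatus" \
      --output tsv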

Which State to Choose?

The only scenario in which you should ever choose the stopped state instead of the deallocated state for a VM in Azure is if you are only briefly stopping the server and would like to keep the dynamic IP address for your testing. If that doesn’t perfectly describe your use case, or you don’t have an opinion one way or the other, then you’ll want to deallocate instead so you aren’t being charged for the VM.

If you’re looking to automate scheduling when you deallocate VMs in Azure, ParkMyCloud can help with that. ParkMyCloud makes it easy to identify idle resources using Azure Metrics and to automatically schedule your non-production servers to turn off when they are idle, such as overnight or on weekends. Try it for free today to save money on your Azure bill!

How to Use Google Preemptible VMs to Get 80% Savings

Google Cloud has always had a knack for non-standard virtual machines, and their option of creating Google preemptible VMs is no different. Traditional virtual machines are long-running servers with standard operating systems that run until you decide to shut them down. On the other hand, preemptible VMs last no longer than 24 hours and can be stopped on a moment’s notice (and may not be available at all). So why use them?

Google Cloud Preemptible VM Overview

Preemptible VMs are designed to be a low-cost, short-duration option for batch jobs and fault-tolerant workloads. Essentially, Google is offering up extra capacity at a huge discount – with the tradeoff that if that capacity is needed for other (full-priced) resources, your instances can be terminated or “preempted”. Of course, if you’re using them for batch processing, being preempted will slow down your job without completely stopping it.

You can create your preemptible VMs in a managed instance group in order to easily manage a collection of VMs as a single entity – and, if a VM is preempted, the VM will be automatically recreated. Alternatively, you can use Kubernetes Engine container clusters to automatically recreate preempted VMs.  
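
As a rough sketch of that setup with the ‘gcloud’ tool (the template, group, zone, and machine type are hypothetical), you bake the preemptible option into an instance template and let the managed instance group keep the fleet at its target size:

    # Instance template with the preemptible option enabled:
    gcloud compute instance-templates create preemptible-worker \
      --machine-type=n1-standard-4 \
      --preemptible

    # Managed instance group that maintains 4 VMs from that template;
    # when a VM is preempted, the group recreates it automatically:
    gcloud compute instance-groups managed create batch-workers \
      --zone=us-central1-a \
      --template=preemptible-worker \
      --size=4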

Preemptible VM Pricing

Pricing is fixed, not variable, and you can view the preemptible price alongside the on-demand prices in Google’s compute pricing list and/or pricing calculator. Prices are 70-80% off on-demand, and upward of 50% off even compared to a 3-year committed use discount.

Google does not charge you for instances if they are preempted in the first minute after they start running.

Note: Google Cloud Free Tier credits for Compute Engine do not apply to preemptible instances. 

Use Cases for Google Preemptible VMs

As with most trade-offs, the biggest reason to use a preemptible VM is cost. Preemptible VMs can save you up to 80% compared to a normal on-demand virtual machine. (By the way – AWS users will want to use Spot Instances for the same reason, and Azure users can check out Low Priority VMs). This is a huge savings if the workload you’re trying to run consists of short-lived processes or things that are not urgent and can be done any time. This can include things like financial modeling, rendering and encoding, and even some parts of your CI/CD pipeline or code testing framework.

How to Create a Google Preemptible VM

To create a preemptible VM, you can use the Google Cloud Platform console, the ‘gcloud’ command line tool, or the Google Cloud API. The process is the same as creating a standard VM: you select your instance size, networking options, disk setup, and SSH keys, with the one minor change that you enable the ‘preemptible’ flag during setup. The other change you’ll want to make is to create a shutdown script that decides what happens to your processes and data if the instance is stopped without your knowledge. This script can even perform different actions if the instance was preempted, as opposed to shut down by something you did.
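
A minimal sketch of that create command (the instance name, zone, machine type, and script path are hypothetical):

    # Identical to a standard create, except for the --preemptible flag
    # and the shutdown script attached as metadata:
    gcloud compute instances create batch-worker-1 \
      --zone=us-central1-a \
      --machine-type=n1-standard-4 \
      --preemptible \
      --metadata-from-file shutdown-script=./shutdown.sh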

One nice benefit of Google preemptible VMs is the ability to attach local SSD drives and GPUs to the instances. This means you can get added extensibility and performance for the workload that you are running, while still saving money. You can also have preemptible instances in a managed instance group for high scalability when the instances are available. This can help you process more of your jobs at once when the preemptible virtual machines are able to run.

FAQs About Google Preemptible Instances

How long do GCP preemptible VMs last?

These instances can last up to 24 hours. If you stop or start an instance, the 24-hour counter resets, because the instance transitions into a terminated state. If you reset an instance, or take other actions that keep it in a running state, the 24-hour clock does not reset.

Is pricing variable?

No, pricing for preemptible VMs is fixed, so you know in advance what you will pay.

What happens when my instance is preempted? 

When your instance is preempted, you get a 30-second graceful shutdown period. The instance receives a preemption notice in the form of an ACPI G2 Soft Off signal. You can use a shutdown script to complete cleanup actions before the instance stops. If the instance has not stopped after 30 seconds, Google sends an ACPI G3 Mechanical Off signal to the operating system and terminates it. You can rehearse what this looks like by stopping the instance yourself.
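
Inside that shutdown script, you can tell a preemption apart from a stop you initiated yourself by querying the instance metadata server, which exposes a ‘preempted’ flag. A minimal sketch (the checkpoint bucket is hypothetical):

    #!/bin/bash
    # shutdown.sh -- runs during the ~30-second graceful shutdown window.

    # The metadata server reports TRUE if this stop is a preemption:
    PREEMPTED=$(curl -s "http://metadata.google.internal/computeMetadata/v1/instance/preempted" \
      -H "Metadata-Flavor: Google")

    if [ "$PREEMPTED" = "TRUE" ]; then
      # Checkpoint in-flight work before the instance is terminated:
      gsutil cp /tmp/job-state.json gs://my-checkpoint-bucket/ || true
    fi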

By using managed instance groups, you can automatically recreate your instances if capacity is available. 

How often are you actually preempted?

Google reports an average preemption rate of 5-15% per day per project, with occasional spikes depending on time and zone. This is not a guarantee, though, and you can be preempted at any time.

How does Google choose which instances to preempt?

Google avoids preempting too many instances from a single customer, and preempts new instances over older instances whenever possible – this is to avoid losing work across your cluster. 

How to Use Google Preemptible VMs to Optimize Costs

Our customers who have the most cost-effective use of Google resources often mix Google preemptible VMs with other instance types based on the workloads. For instance, production systems that need to be up 24/7 can buy committed-use discounts for up to 57% savings on those servers. Non-production systems, like dev, test, QA, and staging, can use on-demand resources with schedules managed by ParkMyCloud to save 65%. Then, any batch workloads or non-urgent jobs can use Google preemptible instances to run whenever available for up to 80% savings. Questions about optimizing cloud costs? We’re happy to help – email us or use the chat client on this page (staffed by real people, including me!).

How Containerization in the Cloud Reduces Vendor Lock-in

As you accelerate your organization’s containerization in the cloud, key stakeholders may worry about putting all your eggs in one cloud provider’s basket. This combination of fears – both a fear of converting your existing (or new) workloads into containers, plus a fear of being too dependent on a single cloud provider like Amazon AWS, Microsoft Azure, or Google Cloud – can lead to hasty decisions to use less-than-best-fit technologies. But what if using more of your chosen cloud provider’s features meant you were less reliant on that cloud provider?

The Core Benefit of Containers

Something that can get lost in the debate about whether containerization is good or worthwhile is the feature of portability. When Docker containers were first being discussed, one of the main use cases was the ability to run the container on any hardware in any datacenter without worrying if it would be compatible. This seemed to be a logical progression from virtual machines, which had provided the ability to run a machine image on different hardware, or even multiple machines on the same hardware. Most container advocates seem to latch on to this from the perspective of container density and maximizing hardware resources, which makes much more sense in the on-prem datacenter world.

In the cloud, however, hardware resource utilization is now someone else’s problem. You choose your VM or container size and pay just for that size, instead of having to buy a whole physical server and pay for the entirety of it up-front. Workload density still matters, but is much more flexible than on-prem datacenters and hardware. With a shift to containers as the base unit instead of Virtual Machines, your deployment options in the cloud are numerous. This is where container portability comes into play.
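
To illustrate the portability argument (the image name and registry paths below are hypothetical): one image, built once, can be pushed to any provider’s registry and run unchanged:

    # Build the image once from your application's Dockerfile:
    docker build -t myapp:1.0 .

    # Push the same image to Google's registry...
    docker tag myapp:1.0 gcr.io/my-project/myapp:1.0
    docker push gcr.io/my-project/myapp:1.0

    # ...or to Azure's -- the container itself never changes:
    docker tag myapp:1.0 myregistry.azurecr.io/myapp:1.0
    docker push myregistry.azurecr.io/myapp:1.0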

The Dreaded “Vendor Lock-in”

Picking a cloud provider is a daunting task, and choosing one only to migrate away from it later can cost enormous amounts of money and time. But do you need to worry about vendor lock-in? What if, in fact, you could pivot to another provider down the road with minimal disruption and no application refactoring?

Implementing containerization in the cloud means that if you ever choose to move your workloads to a different cloud provider, you’ll only need to focus on pointing your tooling to the new provider’s APIs, instead of having to test and tinker with the packaged application container. You also have the option of running the same workload on-prem, so you could choose to move out of the cloud as well. That’s not to say that there would be no effort involved, but the major challenge of “will my application work in this environment” is already solved for you. This can help your Operations team and your Finance team to worry less about the initial choice of cloud, since your containers should work anywhere. Your environment will be more agile, and you can focus on other factors (like cost) when considering your infrastructure options. 

Why AWS CPU Credits Mean T-Series Instances Aren’t as Cheap as You Think

AWS CPU credits are unique to T-series instances – and they can be a bit tricky to figure out. Whether you’re using the AWS free tier or just trying to use the smallest EC2 compute instance you can, you’ll need to keep track of these credits. These credits are both generated and used by the T2 and T3 instance families to decide how much CPU power you can actually use on those EC2 instances. This can be confusing if you aren’t expecting your virtual machine to have its CPU power throttled, or are wondering why the cost is much higher than you thought it would be.

T-series History

AWS first released a “burstable” instance type in the form of the t1.micro instance size in 2010, which was four years after the first EC2 instance size was released (m1.small in 2006, for you historians). Up until 2010, new instance sizes had always been bigger than the m1.small size, but there was demand for a VM size that could accommodate low-throughput or inconsistent workloads.

The t1.micro was the only burstable instance size for another four years, until the t2.medium was released in 2014. Soon, there was a whole range of t2 instances to cover the use case of servers that were low-powered while idle, but could have lots of compute resources available for the couple of minutes each hour when they were needed. In 2018, AWS introduced the t3 family, which uses more modern CPUs and the AWS Nitro system for virtualization.

AWS CPU Credits 101

The key reason why T-series instances have a lower list price than corresponding M-series instances (in Standard Mode, more on that later) is the CPU credits that are tracked and used on each resource. The basic premise is that an idle instance earns credits, while a busy instance spends those credits. A “credit” corresponds to 1 minute’s worth of full 100% CPU usage, but this can be broken down in different ways if the usage is less than 100%. For instance, 10% CPU usage for 10 minutes also uses 1 credit. Each T-series machine size not only has a different number of CPUs available, but also earns credits at a different rate.

Here’s where the math starts getting a little tricky. A t2.micro instance earns 6 credits per hour with 1 available CPU. If you run that instance at 10% utilization for a full hour, it’ll spend 6 credits per hour (or 1 credit every 10 minutes). This means that any time spent under 10% utilization is a net increase in CPU credits, while any time spent above 10% utilization is a net decrease in CPU credits. A t3.large instance has 2 CPUs and earns 36 credits per hour, which means the balancing point where the net credit use is zero will be at 30% utilization per CPU.
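
You can watch this earn/spend balance directly via the CloudWatch ‘CPUCreditBalance’ metric. A quick sketch with the AWS CLI (the instance ID and time range are hypothetical):

    # Hourly average credit balance for one instance over one day:
    aws cloudwatch get-metric-statistics \
      --namespace AWS/EC2 \
      --metric-name CPUCreditBalance \
      --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
      --start-time 2019-06-01T00:00:00Z \
      --end-time 2019-06-02T00:00:00Z \
      --period 3600 \
      --statistics Average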

So what happens when you run out of credits or never use your credits?

Standard Mode vs. Unlimited Mode

One of the differences between the t2 family and the t3 family is the default way each handles running out of credits. The t2 family defaults to Standard Mode, which means that once the instance has run out of credits to use, the CPU is throttled to the baseline value we calculated above (so 10% for t2.micro) and will continue maxing out at that value until credits have built back up. In practice, this means that your process or application that has burst up to use a lot more CPU than normal will soon be slow and unusable if the load remains high.

In 2017, AWS introduced Unlimited Mode as an option for t2 instances – and later, in 2018, as the default for t3 instances when they were introduced. Unlimited mode means that instead of throttling down to the baseline CPU when your instance runs out of credits, you can continue to run at a high CPU load and just pay for the overages. This price is 5¢ per CPU hour for Linux and 9.6¢ per CPU hour for Windows. In practice, this means that a t2.micro that has run out of credits and is running at 35% CPU utilization for a full 24 hours would cost an additional 30¢ that day on top of the normal 27.84¢ for 24hr usage, meaning the price is more than doubled.
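
You can check which credit mode an instance is using, or switch it, with the AWS CLI (the instance ID is hypothetical):

    # See whether an instance is in standard or unlimited mode:
    aws ec2 describe-instance-credit-specifications \
      --instance-ids i-0123456789abcdef0

    # Switch it to unlimited mode (use CpuCredits=standard to go back):
    aws ec2 modify-instance-credit-specification \
      --instance-credit-specification \
        "InstanceId=i-0123456789abcdef0,CpuCredits=unlimited"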

Using T-series Instead of M-series

These overage charges for Unlimited Mode on t2 and t3 instances mean that while the list price of the instance is much cheaper than corresponding m4 and m5 instances, you need to figure out whether the utilization pattern of your workload makes sense for a burstable instance family. For example, an m5.large in us-east-1 costs 9.6¢/hr and a t3.large with similar specs costs 8.32¢/hr with a 30% CPU baseline. If your t3.large is consistently averaging more than about 43% CPU across its two vCPUs, the surplus charges outweigh the 1.28¢/hr list-price discount and the m5.large is actually cheaper (the arithmetic is sketched below).
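
Here’s that break-even arithmetic spelled out, assuming the us-east-1 Linux prices above, the 5¢ per vCPU-hour surplus charge, and the t3.large’s 30% baseline on each of its 2 vCPUs:

    # Surplus vCPU-hours per hour that the 1.28c/hr price gap buys,
    # plus the 0.6 vCPU-hours/hour earned at baseline, over 2 vCPUs:
    echo "scale=3; (0.6 + (0.096 - 0.0832) / 0.05) / 2" | bc
    # => .428, i.e. the m5.large wins above ~43% average CPU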

When to Stop T-series and When to Let Them Run

One perk of using t2 instances in Standard Mode is that each time you start the server, you receive 30 launch credits per vCPU, which allow a high level of CPU usage when you first start the instance from a stopped state. These launch credits are tracked separately from accrued credits and are used first, so servers that only need to run short-lived processes when first starting can take advantage of this fact. The downside of stopping t2 servers is that accrued credits are lost when you stop the instance.

On the other hand, t3 servers persist earned credits for 7 days after stopping the instance, but don’t earn launch credits when they are first started. This is useful to know for longer-running processes that don’t have huge spikes, as they can build up credits but you don’t need to worry about losing the credits if you stop the server. 

At ParkMyCloud, we specialize in turning servers and databases off on a schedule, which is perfect for non-production servers. We find that lots of users run t2 and t3 instances for these dev and test workloads, but want to know what happens to credits if you park those servers overnight. As we discussed, accrued CPU credits are lost when you stop a T2 Standard instance (though you get fresh launch credits on start), while T3 instances keep their credits for 7 days. Knowing this, you can pick the right instance size for the workload you’re running and confidently save money using ParkMyCloud.

  • Best for non-production instances that have a quick burst of usage when starting = T2 instance with ParkMyCloud parking schedule
  • Best for non-production instances with unpredictable, but sporadic spikes = T3 instance with ParkMyCloud parking schedule

Try it for free to see how we can make the cost of your t2 and t3 servers even lower.

7 Favorite AWS Training Resources

When it comes to AWS training resources, there’s no shortage of information out there. Considering the wide range of videos, tutorials, blogs, and more, it’s hard to know where to look or how to begin. Finding the best resource depends on your learning style, your needs for AWS, and getting the most up-to-date information available. Whether you’re just getting started in AWS or consider yourself an expert, there’s an abundance of resources for every learning level. With this in mind, we came up with our 7 favorite AWS training resources, sure to give you the tools you need to learn AWS:

1. AWS Self-Paced Labs

What better way to learn than at your own pace? AWS self-paced labs give you hands-on learning in a live AWS environment, with real AWS cloud services and actual scenarios you would encounter in the cloud. There are two different ways to learn with these labs: you can either take an individual lab or follow a learning quest. Individual labs are intended to help users get familiar with an AWS service in as little as 15 minutes. Learning quests guide you through a series of labs so you can master any AWS scenario at your own pace. Once completed, you will earn a badge that you can boast on your resume, LinkedIn, website, etc.

Whatever your experience level may be, there are plenty of different options offered. Among the recommended labs you’ll find an Introduction to Amazon Elastic Compute Cloud (EC2), and for more advanced users, a lab on Maintaining High Availability with Auto Scaling (for Linux).

2. AWS Free Tier

Sometimes the best way to learn something is by jumping right in. With the AWS Free Tier, you can try AWS services for free. This is a great way to test out AWS for your business, or, for the developers out there, to try services like AWS CodePipeline, AWS Data Pipeline, and more. While you still get a hands-on opportunity to learn a number of AWS services, the one downside is that there are certain usage limits. You can track your usage with a billing alarm to avoid unwanted charges, or you can try ParkMyCloud and park your instances when they’re not in use so you get the most out of your free tier experience. In fact, ParkMyCloud started its journey by using AWS’s free tier!

3. AWS Documentation and Whitepapers

AWS Documentation is like a virtual encyclopedia of tools, terms, training, and everything AWS. You’ll find case studies, tutorials, cloud computing basics, and so much more. This resource is a one-stop-shop for all of your AWS documentation needs, whether you’re a beginner or advanced user. No matter where you are in your AWS training journey, AWS documentation is always a useful reference and certainly deserves a spot in your bookmarks. 

Additionally, you’ll find whitepapers that give users access to technical AWS content written by AWS and individuals from the AWS community to help further your knowledge of their cloud. These whitepapers include everything from technical guides and reference material to architecture diagrams.

4. YouTube

So far, we’ve gone straight to the source for 3 out of 7 of our favorite AWS training resources. Amazon really does a great job of providing hands-on training, tutorials, and documentation for users with a range of experience. However, YouTube opens up a whole new world of video training that includes contributions from not only Amazon, but other great resources as well. Besides the obvious Amazon Web Services channel, there are also popular and highly rated videos by Edureka, Simplilearn, Eli the Computer Guy, and more.

5. Bloggers

As cloud technology usage continues to expand and evolve, blogs are a great way to stay up to speed with AWS and the world of cloud computing. Of course, in addition to AWS labs, a free tier, extensive documentation, and their own YouTube channel, AWS also has their own blog. Since AWS actually has a number of blogs that vary by region and technology, we recommend that you start by following Jeff Barr – Chief Evangelist at Amazon Web Services, and primary contributor. Edureka, mentioned among our recommended YouTube channels, also has a blog that covers plenty of AWS topics. The CloudThat blog is an excellent resource for AWS and all things cloud, and was co-founded by Bhavesh Goswami – a former member of the AWS product development team. Additionally, AWS Insider is a great source for all things AWS. Here you’ll find blogs, webcasts, how-tos, tips, tricks, news articles, and even more hands-on guidance for working with AWS. If you prefer newsletters straight to your inbox, check out Last Week in AWS and Inside Cloud.

6. Online Learning Platforms

As public cloud computing continues to grow – and AWS continues to dominate the market – people have become increasingly interested in this CSP and what it has to offer. Over the last 8-10 years, two massive learning platforms have emerged: Coursera and Udemy. These platforms offer online AWS courses, specializations, training, and degrees. The abundance of courses these platforms provide can help you learn all things AWS and gives you a wide array of resources to train for different AWS certifications and degrees.

7. GitHub

GitHub is a developer platform where users work together to host and review code, build software, and manage projects. It offers access to a number of materials that can help further your AWS training. In fact, here’s a great list of AWS training resources that can help you prepare for an Amazon Cloud certification. The great thing about this site is the collaboration among users: the size of the community brings together people from all different backgrounds, who can share knowledge about their own specialties and experiences. With access to everything from ebooks, video courses, free lectures, and sample tests, posts like these can help you get on the right certification track.


There’s plenty of information out there when it comes to AWS training resources. We picked our 7 favorite resources for their reliability, quality, and range of information. Whether you’re new to AWS or consider yourself an expert, these resources are sure to help you find what you’re looking for.