There is a growing job function among companies using public cloud: the Enterprise Cloud Manager. A study of ParkMyCloud users showed that a growing proportion of them have “cloud” – or the name of their cloud provider, such as “AWS” – in their job title. This indicates a growing degree of specialization among the individuals who manage cloud infrastructure. And in some companies, there is a dedicated role for cloud management, such as an Enterprise Cloud Manager.
Why would you need an Enterprise Cloud Manager?
The world of cloud management is constantly changing and becoming increasingly complex, even for the best cloud manager. Recently, the increased adoption of hybrid and multi-cloud environments by organizations looking to take advantage of best-of-breed solutions has made it more confusing, more expensive, and even harder to control. Someone who is not fully versed in this field may not always know how to handle problems related to governance, security, and cost control. It is important to dedicate resources in your organization to cloud management and related cloud job roles. This chart from Gartner gives us a look at everything involved in cloud management, so we can better understand how many parts need to come together for it to run smoothly.
Having a role in your organization that is dedicated to cloud management allows others, who are not specialized in that field, to focus on their jobs, while also centralizing responsibility. With the help of an Enterprise Cloud Manager, responsibilities are delegated appropriately to ensure cloud environments are handled according to best practices in governance, security, and cost control.
After all, adopting public cloud infrastructure does not mean you have addressed governance or cost issues – which seems obvious when you consider the sub-industries that have sprung up around these problems. But you’d be surprised how often eager adopters assume the technology will do the work, forgetting that cloud management is not a technology problem but a human behavior problem.
Along the same lines, businesses with a presence in the cloud, regardless of their size, should consider adopting the functions of a Cloud Center of Excellence (CCoE) – which, if the resources are available, can be like an entire department of Enterprise Cloud Managers. Essentially, a CCoE brings together cross-functional teams to manage cloud strategy, governance, and best practices, and serves as the cloud leadership for the entire organization.
The role of an Enterprise Cloud Manager or cloud center of excellence (or cloud operations center, or cloud enablement team – whatever you want to call it) is to oversee cloud operations. They know the ins and outs of cloud management, so they are able to create processes for resource provisioning and services. Their focus is on optimizing infrastructure, which helps streamline cloud operations, improve productivity, and optimize cloud costs.
Moreover, the Enterprise Cloud Manager can systematize the foundation that creates a CCoE with some key guiding principles like the ones outlined by AWS Cloud Center of Excellence here.
With the Enterprise Cloud Manager’s leadership, the DevOps, CloudOps, Infrastructure, and Finance teams within the CCoE can ensure that the organization’s diverse business units use a common set of best practices to spearhead their cloud efforts, while maintaining the balanced working relationships, operational efficiency, and innovative thinking needed to achieve organizational goals.
A Note on Job Titles
It’s worth noting that while descriptive, the “Enterprise Cloud Manager” title isn’t widely adopted. We’ve run across folks with titles like Cloud Manager, Cloud Operations Manager, Cloud Project Manager, Cloud Infrastructure Manager, Cloud Delivery Manager, and so on.
If you’re on the job hunt, we have a few other ideas for cloud and AWS jobs for you to check out.
Automation Tools are Essential
With so much going on in this space, it isn’t realistic to expect one person or team to manage all of this by hand – you need automation tools. The great thing is that these tools deliver tangible results, which makes automation a key component of successful enterprise cloud operations at companies of any size. Primary users can be people dedicated to this full time, such as an Enterprise Cloud Manager, as well as people managing cloud infrastructure on top of other responsibilities.
Why are these tools important? They provide two main things: visibility, and the ability to act on the resulting recommendations. (That is, unless you’re willing to let go of the steering wheel and let the platform make the decisions – but most folks aren’t, yet.) Customers that were once managing resources manually are now saving time and money by implementing an automation tool. Take a look at the automation tools offered by your cloud vendor, as well as the third-party tools available for cost optimization and beyond. Setting up these tools will lessen the need for routine check-ins and maintenance while ensuring your infrastructure is optimized.
Do we really need this role?
To put it simply, if you have more than a handful of cloud instances: yes. If you’re small, it may be part of someone’s job description. If you’re large, it may be a center of excellence.
But if you want your organization to be well informed and up to date, then it is important that you have the organizational roles in place to oversee your cloud operations – an Enterprise Cloud Manager, CCoE and automation tools.
Looking for ways to manage cloud costs? If you use the cloud, the answer should always be yes. If you don’t have proper management of your cloud spend, then you could end up spending more than you actually need to. We’ve compiled a list of tips/best practices that will help guide you to track and rightsize cloud spend and align capacity and performance to actual demand so your cloud environment is optimized.
1. Start with the Organizational Problem
It’s easy to find lots of specific ways to reduce and manage public cloud costs – and we have plenty of those to share. But let’s start with the core issue. Public cloud resources are provisioned and used throughout organizations – and governance and budgeting are organizational issues. You need to start at the root of the problem: who is responsible for what cloud costs? And how do you evaluate whether those costs are acceptable – or need to be addressed for wasted spend?
Many organizations solve this problem with a dedicated enterprise cloud manager or cloud center of excellence, a person or department (depending on the size of the organization and extent of cloud deployment) dedicated entirely to the use of cloud by employees, with cost a major focus.
2. Get Familiar with the Cloud-Native Management Tools
The major public cloud providers offer native resource and cost management tools. Since you’re already enmeshed in their infrastructure offerings, it makes sense to evaluate the options within the cloud portals.
For example, on the issue of resource on/off scheduling, AWS, Azure, and Google Cloud each offer a tool. However, they have limitations – ignoring resource types that may benefit from scheduling, not providing actions, and providing data but not recommendations, to name a few. Here is a quick rundown of each of those tools and what they include.
Another example is the AWS Compute Optimizer – a tool with a big promise in its name, and certainly worth reviewing for AWS users.
3. But, Know that Cloud Providers Won’t Solve All the Problems they Create
Enter the realm of third-party software. Whether because cloud providers don’t actively want you to save money (they do want their services to be “sticky,” which is why they promote some cost optimization options) or because it’s simply not a revenue driver for them, cloud cost management is often an afterthought for cloud providers. We’re seeing a change in the winds as providers turn toward built-in savings options (for example, Google Cloud’s sustained use discounts), but cloud resource provisioning and optimization are a wild, ever-changing beast that cloud providers aren’t keeping up with.
That’s why it may be time to…
4. Find a Cost Management Tool That Fits Your Needs
As IT infrastructure changes, tools and processes dedicated to cloud cost management and cost control have become a necessity. Third-party cloud optimization tools help with cost visibility, governance, and cost optimization. Make sure you aren’t just focusing on cost visibility and recommendations – find a tool that takes the extra step and takes those actions for you.
It’s beneficial to find a tool that works with multiple clouds, multiple accounts within each cloud, and multiple regions within each account, so you can view recommendations across all your accounts in one easy-to-use interface. This added visibility and insight simplifies managing cloud costs.
By the way – automation is key. By including cost optimization software in your cloud strategy, organizations eliminate the need for developers to write scheduling scripts and deploy them to fit a specific team’s requirements. This automation reduces the potential for human error and saves organizations time and money by allowing developers to reallocate their time to more beneficial tasks.
5. Get Visibility on Your Bill
If you’re going to manage your cloud costs better, you need to understand where your spending is going. Here’s a guide to get a consolidated billing view in AWS.
Relatedly, you’re also going to need to understand what each resource is for – which means you need a robust tagging strategy.
6. Use a Resource Tagging Strategy to Better Manage Cloud Costs
Tags are labels or identifiers attached to your instances. They let you provide custom metadata alongside the existing metadata, such as instance family and size, region, VPC, and IP information. Tags help you manage your cloud costs by making it easy to sort, search, and filter through your cloud environment.
With the application of tagging best practices in place, you can automate governance, improve your workflows and make sure your costs are controlled. Additionally, there are management and provisioning tools that can automate and maintain your tagging standards.
In ParkMyCloud, our software reads the names and tags assigned to VMs and recommends which are suitable for scheduling (“parking”).
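To make tag-driven automation concrete, here’s a minimal sketch of how a scheduler could select parkable resources by tag. The tag key and value (“schedule”: “office-hours”) and the instance records are hypothetical examples, not any provider’s standard:

```python
# Sketch of tag-driven schedule selection. The tag key/value names
# ("schedule": "office-hours") and the instance records are made-up
# examples, not any provider's standard.

def find_parkable(instances, tag_key="schedule", tag_value="office-hours"):
    """Return IDs of instances whose tags mark them as safe to park."""
    return [
        inst["id"]
        for inst in instances
        if inst.get("tags", {}).get(tag_key) == tag_value
    ]

fleet = [
    {"id": "i-prod-01", "tags": {"env": "production"}},
    {"id": "i-dev-01",  "tags": {"env": "dev", "schedule": "office-hours"}},
    {"id": "i-test-02", "tags": {"env": "test", "schedule": "office-hours"}},
]

print(find_parkable(fleet))  # ['i-dev-01', 'i-test-02']
```

A consistent tagging standard is what makes this kind of filter reliable – untagged resources simply never match, which is why governance around tags matters.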
7. Identify Idle/Underutilized Resources
Okay, so that’s how you get set up to optimize costs. So what are the ways you can actually manage cloud costs and optimize spending?
The easiest way to quickly and significantly reduce cloud costs is to identify resources that are not actually being used (typically in non-production environments).
Examples of resources that are often left idle include On-Demand instances/VMs, relational databases, load balancers, and containers.
Once you’ve identified them, then you can schedule them to turn off when not needed, or as we like to say, “park” them.
By setting schedules for your instances to turn off when they are typically idle, you eliminate potential cloud waste and save money on your cloud bill. A typical schedule turns instances off between 7:00 pm and 9:00 am on weekdays, and all day on weekends. This way you don’t have to worry about manually turning instances on and off when you aren’t using them. By keeping workloads on only during business hours, you can save around 65% on your cloud bill.
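A quick back-of-the-envelope script shows where savings figures like this come from (assuming a flat hourly rate and no charges while the instance is parked):

```python
# Back-of-the-envelope parking savings. Assumes a flat hourly rate and
# that a parked (stopped) instance costs nothing; storage and other
# lingering charges are ignored for simplicity.

HOURS_PER_WEEK = 7 * 24  # 168

def parking_savings(on_hours_per_weekday, weekend_on_hours=0):
    """Fraction of the weekly compute bill saved by a parking schedule."""
    hours_on = 5 * on_hours_per_weekday + weekend_on_hours
    return 1 - hours_on / HOURS_PER_WEEK

# On from 9:00 am to 7:00 pm on weekdays, parked nights and weekends:
print(round(parking_savings(10) * 100))  # 70 (% saved)
# A 12-hour weekday schedule lands near the ~65% figure cited above:
print(round(parking_savings(12) * 100))  # 64 (% saved)
```

The exact percentage depends on how wide you define “business hours,” but anything in the 10-12 hour weekday range lands in the 64-70% savings band.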
8. Rightsize Your Instances
Another major source of cloud waste is oversized resources. When you rightsize, you match a workload to the virtual machine size that best supports it, helping you optimize costs. This is important because many virtual machines in the cloud are sized much larger than necessary for the workloads running on them – a single instance size change can save 50% or more of the cost. (Try it free to see how much you can save.)
9. Know Your Purchasing Options & Discounts Offered by Cloud Providers – Starting with Reserved Instances
Each of the ‘big three’ cloud providers offers an assortment of purchasing options to lower costs from the listed On-Demand prices.
Another sort of “purchasing option” is related to contract agreements. All three major cloud providers offer enterprise contracts. Typically, these are to encourage large companies to commit to specific levels of usage and spend in exchange for an across-the-board discount – examples of this would be AWS EDPs and Azure Enterprise Agreements.
14. Make Sure You’re Using Lambda Efficiently
It can be easy to get so caught up building Lambda-based applications that you forget to optimize and plan for the costs Lambda will incur. While it may be cheap and easy to build these applications, if you run heavy workloads without taking costs into account, you’ll run up your bill.
Continuously tracking spend, monitoring usage, and understanding its behavior is essential to keeping Lambda costs controlled and optimized.
15. Review Credit Options
Each of the cloud providers offers ways to get credits you can put toward your bill. By offering these credits, Google Cloud, Azure and AWS are trying to make it easy and in some cases free to get started using their cloud platforms.
16. Keep Your Instance Types Up to Date
Did you ever think that simply modernizing your VMs and databases to make sure they are running on the latest instance family can save you money?
Cloud providers incentivize instance modernization by pricing the newest generations the lowest. Typically, new instance families come with newer CPU types, but they can also bring networking or memory improvements.
So you get a cheaper price (often a 10-20% discount) and better performance – modernizing your instances is almost a no-brainer.
…and the list goes on. Managing cloud costs can seem like a daunting task but it doesn’t have to be! Follow these tips and start optimizing your cloud environment.
Got any tips we should add? Let us know in the comments below!
AWS offers a range of EC2 instance types optimized for various purposes. It’s great that they provide so much variety, but of course, it means one more thing that you have to learn. It’s worth taking the time to do so, as ⅔ of IaaS spend goes toward compute – that’s a lot of EC2.
Check out a brief breakdown in this video, which also compares EC2 purchasing options. Check it out here:
Or, read on for a look at each of the AWS instance types. Remember that within each type, you’ll still need to choose the AWS instance size that suits your specific needs. Additionally, older generations within each instance type are still available for purchase – for example, c5 is the latest “c” instance, but c4 and c3 are still available. Since the newer types tend to perform better at a cheaper price, you’ll only want the older types if you have an AMI or other dependency. The differences matter for some users… but you probably already know who you are.
Note: a version of this blog was originally published in July 2018. It has been rewritten and updated for 2020. New EC2 instance types since our last writeup include A1, T3, z1d, high memory, R5, G4, and F1.
Quick EC2 Instance Info
This chart shows a quick summary of what we’ll cover. We’re including a brief description and mnemonic for each (hopefully helpful if you’re studying for an AWS certification!)
If you’ve taken a look at AWS training materials, you may have seen a couple of overall acronyms to remember all of these – perhaps Dr McGiFT Px or FIGHT Dr McPX. Whether these acronyms are useful at all is perhaps a point of discussion, but to ensure that all the instance types above are in your list, we suggest:
Fight Czar MXPD
Fright Camp DXZ
March Gift PZXD
(and don’t forget high memory and Inf!)
These general purpose AWS EC2 instance types are a good place to start, particularly if you’re not sure what type to use. There are three general purpose types.
t instance type
The t3 family is a burstable instance type. If you have an application that needs only basic CPU and memory, t3 is a good choice. It also works well for applications that are used intermittently. When the resource is idle, it earns CPU credits, which it spends when busy. This is useful for workloads that come and go, such as websites or development environments. While generally inexpensive, make sure you understand how the CPU credits work before deploying these instances – there’s a little math involved, and they may not be as cheap as they look at first glance.
Make sure you also understand the difference between t3 and the older t2: t3 instances are in “unlimited mode” by default, so instead of throttling down to baseline CPU when your instance runs out of credits, you pay for the overages.
For each of the EC2 types we cover here, we’ll also add a mnemonic to help you remember the purpose of each instance type.
Mnemonic: t is for tiny or turbo.
m instance type
The m5 instance type is similar, but for more consistent workloads. It has a nice balance of CPU, memory, and disk. It’s not hard to see why almost half of EC2 workloads run on “m” instances. In addition to m5, you also have the option of m6g instances, powered by Arm-based AWS Graviton2 processors, which makes them more cost-efficient. There are also m5a, m5n, and m4 – most of which are safe to ignore unless you have a specific use case for a processor other than the m5’s Intel Xeon Platinum 8175. If you aren’t sure what to choose, m5 is the most versatile of all the Amazon instance types.
Mnemonic: m is for main choice or happy medium.
a1 instance type
The a1 instance type was announced in late 2018 and can be a less expensive option than other EC2 instances. It is suited to scale-out workloads such as web servers, containerized microservices, caching fleets, distributed data stores, and development environments. These instances are powered by Arm processors and suited for Arm-based workloads.
Mnemonic: a is for Arm processor
c instance type
The c5 instance type has a high ratio of compute/CPU versus memory. If you have a compute-intensive application – maybe scientific modelling, intensive machine learning, or multiplayer gaming – these instances are a good choice. There is also the c5d option, which is SSD-backed. See also the c5n, which has up to 100 Gbps of network bandwidth and increased memory compared to equivalent c5 instances. The c4 family is also still available.
Mnemonic: c is for compute (at least that one’s easy!)
r instance family
The r instance family is memory-optimized, which you might use for in-memory databases, real-time processing of unstructured big data, or Hadoop/Spark clusters. You can think of it as a kind of midpoint between the m5 and the x1e. In addition to r5, there are r5a which deliver lower cost per GiB memory and r5n which have higher bandwidth for applications that need improved network throughput and packet rate performance.
Mnemonic: r is for RAM.
x1 instance family
The x1 family has a much higher ratio of memory, so this is a good choice if you have a full in-memory application or a big data processing engine like Apache Spark or Presto. X1e are optimized for high-performance databases, in-memory databases, and other memory intensive enterprise applications.
Mnemonic: “x is for xtreme,” as in “xtreme RAM,” seems to be generally accepted, but we think this is a bit weak. If you have any suggestions, comment below.
High Memory instance family
We’re not sure why these didn’t get an alphabet soup name like the rest of the AWS instances, but at least it’s easy to remember and understand. As you might guess, high memory instances run large in-memory databases, including production deployments of SAP HANA.
Mnemonic: we’ll leave this one up to you.
z1d instance family
The z1d instances combine high compute capacity with a high memory footprint. They have a sustained core frequency of up to 4.0 GHz, the fastest of AWS’s offerings. These are best for electronic design automation (EDA) and some relational database workloads with high per-core licensing costs.
Mnemonic: z is for zippy
p instance type
If you need GPUs on your instances, p3 instances are a good choice. They are useful for video editing, and AWS also lists use cases of “computational fluid dynamics, computational finance, seismic analysis, speech recognition, autonomous vehicles” – so it’s fairly specialized. p2 instances are also available.
Mnemonic: p is for pictures (graphics).
Inf1 instance type
The Inf1 instances are a specialized EC2 type for machine learning inference applications, such as recommendation engines, forecasting, image and video analysis, advanced text analytics, document analysis, voice, conversational agents, translation, transcription, and fraud detection.
Mnemonic: inf is for inference
g instance type
The g instance type uses Graphics Processing Units (GPUs) to accelerate graphics-intensive workloads, and is also designed to accelerate machine learning inference. Use cases include adding metadata to images, automated speech recognition, and language translation, as well as graphics workstations, video transcoding, and game streaming in the cloud.
g4 is the latest family, and g3 are available as well.
Mnemonic: g is for graphics or GPU
F1 instance type
f1 instances offer customizable hardware acceleration with field programmable gate arrays (FPGAs) – hence the “f”. Applications could include genomics research, financial analysis, and real-time video processing.
Mnemonic: f is for FPGA
i3 instance type
The i3 instance type is similar to h1, but it is SSD backed, so if you need an NVMe drive, choose this type. Use it for NoSQL databases, in-memory databases, Elasticsearch, and more. The i3en option has higher network bandwidth with Elastic Network Adapter (ENA)-based enhanced networking.
Mnemonic: i is for IOPS.
d2 instance type
d2 instances have an even higher ratio of disk to CPU and memory, which makes them a good fit for Massively Parallel Processing (MPP), MapReduce and Hadoop distributed computing, and similar applications.
Mnemonic: d is for dense.
h1 instance type
The h1 type is HDD backed, with a balance of compute and memory. You might use it for distributed file systems, network file systems, or data processing applications.
Mnemonic: h is for HDD.
What EC2 instance types should you use?
As AWS has continued to add options to EC2, there are now EC2 instance types for almost any application. If you have comparison questions around pricing, run them through the AWS monthly calculator. And if you don’t know, then generally starting with t3 or m5 is the way to go.
Google Cloud credits are an incentive offered by Google to help you get started on Google Cloud Platform for free. Like Amazon and Microsoft, Google is trying to make it easy – and in some cases free – to get started with its platform, or with certain services it believes are “sticky.” This is beneficial if you’d like to try the services out for personal use or for a proof of concept. There is both a spend limit and a time limit on Google’s free credits. Google also offers “always free” products that do not count against the free credit and can be used indefinitely (or until Google decides to pull the plug), subject to usage limits.
1. Google Cloud Free Tier
The most basic way to use Google Cloud products is the Google Cloud Free Tier. This extended free trial gives you access to free cloud resources so you can learn about Google Cloud services by trying them on your own.
The Google Cloud Free Tier has two parts:
A 12-month free trial with a $300 credit to use with any Google Cloud services.
Always Free, which provides limited access to many common Google Cloud resources, free of charge.
12-Month Free Trial
The Google Cloud 12-month free trial and $300 credit is for new customers/trialers. Be sure to check through the full list of eligibility requirements on Google’s website. (No cryptomining – sorry!)
Before you start spinning up machines, be sure to note the following limitations:
You can’t have more than 8 cores (or virtual CPUs) running at the same time.
You can’t add GPUs to your VM instances.
You can’t request a quota increase.
You can’t create VM instances that are based on Windows Server images.
Your free trial ends when 12 months have elapsed since you signed up and/or you have spent your $300 in Google Cloud credit. When you use resources covered by Always Free during your free trial period, those resources are not charged against your free trial credit.
At the end of the free trial, you either begin paying or you lose your services and data – it’s pretty black and white. You can upgrade at any time during your free trial, with any remaining credits applied against your bill.
Google Cloud Always Free
The Always Free program is essentially the “next step” of free usage after a trial. These offerings provide limited access to many Google Cloud resources. The resources are usually provided at monthly intervals, and they are not credits – they do not accumulate or roll over from one interval to the next; it’s use it or lose it. Unlike the free trial, Always Free is a regular part of your Google Cloud account.
Not all Google Cloud services offer resources as part of the Always Free program. For a full list of the services and usage limits, see here – a few of the more popular services include Compute Engine, Cloud Storage, Cloud Functions, Google Kubernetes Engine (GKE), and BigQuery. Be sure to check the usage limits before spinning up resources, as usage above the Always Free tier is billed at standard rates.
2. Google Cloud for Startups
Google is motivated to get startups to build their infrastructure on Google Cloud while they’re still early stage, to gain long-term customers. If you work for an early-stage startup, reach out to your accelerator, incubator, or VC about Google Cloud credit. You can get up to $100,000 in credit – but it will come at the price of a large percentage of equity.
3. Education Credits
Google offers several options for students, teachers, and researchers to get up and running with Google Cloud.
GCP Credits for Learning – Faculty can apply for $100 in credits and $50 per student. This offering is intended for students who are learning GCP for career purposes.
Research credits – Research faculty can apply for $5,000 in credits for Google Cloud resources to support academic research, or $1,000 for PhD candidates. The research can be in any field. Learn more here.
There are also several offerings related to making education accessible without associated credits. See more on the Google Cloud Education page.
4. Vendor Promotions and Events
Various vendors that are Google Cloud partners run occasional promotions, typically in the form of a credit greater than the $300 Google Cloud free trial credit, though we’ve also seen straight credits offered. For example, Cloudflare offers a credit program for app developers.
Also check out events that might offer credit – for example, Techstars startup weekends offer $3,000 in Google Cloud credits for attendees. Smaller awards of a few hundred dollars can be found through meetups and other events.
Google Cloud Credits do offer people and companies a way to get started quickly, and the Always Free program is a unique way to entice users to try different services at no cost, albeit in a limited way. Be sure to check out the limitations before you get started, and have fun!
AWS CPU credits are unique to T-series instances – and they can be a bit tricky to figure out. Whether you’re using the AWS free tier or just trying to use the smallest EC2 compute instance you can, you’ll need to keep track of these credits. Credits are both earned and spent by the T2 and T3 instance families, and they determine how much CPU power you can actually use on those EC2 instances. This can be confusing if you aren’t expecting your virtual machine to have its CPU power throttled, or if you’re wondering why the cost is much higher than you expected.
AWS first released a “burstable” instance type in the form of the t1.micro instance size in 2010, which was four years after the first EC2 instance size was released (m1.small in 2006, for you historians). Up until 2010, new instance sizes had always been bigger than the m1.small size, but there was demand for a VM size that could accommodate low-throughput or inconsistent workloads.
The t1.micro was the only burstable instance size for another four years, until the t2.medium was released in 2014. Soon, there was a whole range of t2 instances to cover the use case of servers that were low-powered while idle, but could have lots of potential compute resources available for the couple minutes each hour they were needed. In 2018, AWS introduced the t3 family that uses more modern CPUs and the AWS Nitro system for virtualization.
AWS CPU Credits 101
The key reason why T-series instances have a lower list price than corresponding M-series instances (in standard mode, more on that later) is the CPU credits that are tracked and used on each resource. The basic premise is that an idle instance earns credits, while a busy instance spends those credits. A “credit” corresponds to 1 minute’s worth of full 100% CPU usage, but this can be broken down in different ways if the usage is less than 100%. For instance, 10% of CPU usage for 10 minutes also uses 1 credit. Each T-series machine size not only has a number of CPUs available, but also earns credits at different rates.
Here’s where the math starts getting a little tricky. A t2.micro instance earns 6 credits per hour with 1 available CPU. If you run that instance at 10% utilization for a full hour, it’ll spend 6 credits per hour (or 1 credit every 10 minutes). This means that any time spent under 10% utilization is a net increase in CPU credits, while any time spent above 10% utilization is a net decrease in CPU credits. A t3.large instance has 2 CPUs and earns 36 credits per hour, which means the balancing point where the net credit use is zero will be at 30% utilization per CPU.
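The arithmetic above boils down to one formula – the baseline is just the earn rate spread over the instance’s vCPU-minutes. A quick sketch, using the earn rates quoted above:

```python
# One credit = one minute of 100% usage on one vCPU, so an instance
# burns (utilization * vCPUs * 60) credits per hour. The break-even
# ("baseline") utilization is the earn rate spread over those
# vCPU-minutes. Earn rates below are the figures quoted in the text.

def baseline_utilization(credits_per_hour, vcpus):
    """Per-vCPU utilization at which credits earned equal credits spent."""
    return credits_per_hour / (vcpus * 60)

print(baseline_utilization(6, 1))   # t2.micro: 0.1 (10%)
print(baseline_utilization(36, 2))  # t3.large: 0.3 (30%)
```

Run below the baseline and your credit balance grows; run above it and the balance drains toward zero.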
So what happens when you run out of credits or never use your credits?
Standard Mode vs. Unlimited Mode
One of the differences between the t2 family and the t3 family is the default way each handles running out of credits. The t2 family defaults to Standard Mode, which means that once the instance has run out of credits to use, the CPU is throttled to the baseline value we calculated above (so 10% for t2.micro) and will continue maxing out at that value until credits have built back up. In practice, this means that your process or application that has burst up to use a lot more CPU than normal will soon be slow and unusable if the load remains high.
In 2017, AWS introduced Unlimited Mode as an option for t2 instances – and in 2018 it became the default for the newly introduced t3 instances. Unlimited Mode means that instead of throttling down to the baseline CPU when your instance runs out of credits, you continue to run at a high CPU load and simply pay for the overages. The price is 5¢ per vCPU-hour for Linux and 9.6¢ per vCPU-hour for Windows. In practice, this means that a t2.micro that has run out of credits and runs at 35% CPU utilization for a full 24 hours would cost an additional 30¢ that day on top of the normal 27.84¢ for 24 hours of usage – more than double the price.
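That overage arithmetic, written out (rates as quoted above; this is an illustration of the math, not a billing calculator):

```python
# Worked example of the Unlimited Mode overage math: a t2.micro
# (1 vCPU, 10% baseline) with no credits left, averaging 35% CPU for a
# day, at the quoted Linux rate of $0.05 per surplus vCPU-hour.

def daily_overage(avg_util, baseline, vcpus=1, rate=0.05, hours=24):
    """Extra charge for running above baseline once credits are exhausted."""
    surplus_vcpu_hours = max(0.0, avg_util - baseline) * vcpus * hours
    return surplus_vcpu_hours * rate

extra = daily_overage(avg_util=0.35, baseline=0.10)
print(f"overage: ${extra:.2f}/day")  # $0.30, on top of ~$0.28 On-Demand
```

Sustained usage like this is exactly the case where a burstable instance stops being a bargain, which leads to the comparison below.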
Using T-series Instead of M-series
These overage charges in Unlimited Mode mean that while the list price of a t2 or t3 instance is much cheaper than the corresponding m4 or m5 instance, you need to figure out whether your workload’s utilization pattern makes sense for a burstable instance family. For example, an m5.large in us-east-1 costs 9.6¢/hr, and a t3.large with similar specs costs 8.32¢/hr with a 30% CPU baseline. If your t3.large server consistently runs at higher than 55.6% CPU, the m5.large is actually cheaper.
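A rough comparison script, using the prices quoted above. Note that surplus is counted here per vCPU-hour, which is one reading of the overage billing; counting it per instance instead puts the crossover near the ~55.6% figure above, so treat the exact break-even point as an estimate:

```python
# Hourly cost comparison: t3.large (Unlimited Mode) vs. m5.large, at
# the us-east-1 prices quoted in the text. Surplus CPU above the
# baseline is billed at $0.05 per vCPU-hour (Linux).

T3_BASE, T3_BASELINE, T3_VCPUS = 0.0832, 0.30, 2
M5_PRICE, OVERAGE_RATE = 0.096, 0.05

def t3_hourly_cost(avg_util):
    """Effective hourly cost at a sustained average per-vCPU utilization."""
    surplus_vcpu_hours = max(0.0, avg_util - T3_BASELINE) * T3_VCPUS
    return T3_BASE + surplus_vcpu_hours * OVERAGE_RATE

for util in (0.30, 0.45, 0.60):
    winner = "t3.large" if t3_hourly_cost(util) < M5_PRICE else "m5.large"
    print(f"at {util:.0%} average CPU, {winner} is cheaper")
```

Either way, the takeaway is the same: for consistently busy workloads, the fixed-price M-series wins; the T-series only pays off when the instance spends most of its time near idle.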
When to Stop T-series and When to Let Them Run
One perk of using t2 instances in Standard Mode is that each time you start the server from a stopped state, you receive 30 launch credits that allow a high level of CPU usage right after startup. These launch credits are tracked separately from accrued credits and are spent first, so servers that only need to run short-lived processes when starting can take advantage of this. The downside of stopping t2 servers is that any accrued credits are lost when you stop the instance.
On the other hand, t3 servers persist earned credits for 7 days after stopping the instance, but don’t earn launch credits when they are first started. This is useful to know for longer-running processes that don’t have huge spikes, as they can build up credits but you don’t need to worry about losing the credits if you stop the server.
At ParkMyCloud, we specialize in scheduling servers and databases to turn off on a schedule, which is perfect for non-production servers. We find that lots of users run t2 and t3 instances for these dev and test workloads, but want to know what happens to credits if you park those servers overnight. As we discussed, accrued AWS CPU credits are lost when a t2 instance stops (though you get fresh launch credits on startup), while t3 credits persist for seven days. Knowing this, you can pick the right instance size for the workload you’re running and confidently save money using ParkMyCloud.
Best for non-production instances that have a quick burst of usage when starting = T2 instance with ParkMyCloud parking schedule
Best for non-production instances with unpredictable but sporadic spikes = T3 instance with ParkMyCloud parking schedule
Try it for free to see how we can make the cost of your t2 and t3 servers even lower.