Modernize Your Virtual Machines To The Latest Instance Family

Longtime readers of the ParkMyCloud blog know the pillars of cloud cost savings: buy Reserved Instances for production workloads, schedule non-production servers to turn off on nights and weekends, and resize underutilized VMs (our data shows that 95% of instances in the public cloud run at less than 50% average CPU). One of the more underrated ways to save money on your cloud bill, though, is making sure your VMs and databases are running on the latest instance family. Let's take a look at what this means, what your options are, and how much you can expect to save.

Instance Family 101

When you spin up a virtual machine in a public cloud like AWS, Microsoft Azure, or Google Cloud, you decide the specifications of the machine. In addition to disk and network options, you'll often choose CPU and memory as a "bundle" of pre-built sizes. Each size belongs to an instance family, which helps you choose based on whether the application you plan to run is CPU-intensive, memory-intensive, or requires a GPU.

For example, if you are setting up an EC2 virtual machine in AWS, one of the first screens in the console asks you to pick an instance type and size. If you pick the instance type "m5.large", then "m5" is the instance family and "large" is the size. M5 in AWS is a balanced instance family, while C5 is meant for CPU-intensive applications. Microsoft Azure has a similar concept: its D-series is a balanced family, while its F-series is optimized for CPU.
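
If you want to see that family/size split and the bundled specs programmatically, here's a minimal sketch using boto3's describe_instance_types call; the instance types and region are purely illustrative.

```python
# Minimal sketch: compare the bundled specs of two instance types with boto3.
# Assumes AWS credentials and a region are configured in the environment.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

resp = ec2.describe_instance_types(InstanceTypes=["m5.large", "c5.large"])
for itype in resp["InstanceTypes"]:
    name = itype["InstanceType"]        # e.g. "m5.large"
    family, size = name.split(".")      # family "m5", size "large"
    vcpus = itype["VCpuInfo"]["DefaultVCpus"]
    mem_gib = itype["MemoryInfo"]["SizeInMiB"] / 1024
    print(f"{family} {size}: {vcpus} vCPUs, {mem_gib:.0f} GiB memory")
```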

Google Cloud does VM sizing a bit differently, but still has the concept of an instance family. A general-purpose VM in GCP is often of the type "n2-standard". For CPU-focused workloads you can choose between "n2-highcpu" instances, which give you more vCPUs relative to memory, and "c2-standard" instances, which give you higher performance per vCPU. Additionally, GCP offers custom VM sizes, so you can individually pick your vCPU count and the amount of memory you need.
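
For illustration only, here's a tiny helper that builds the machine-type string GCP expects for a custom VM; the "n2-custom-<vCPUs>-<memoryMB>" naming format and the 256 MB memory granularity are assumptions based on GCP's documented conventions, so check the current docs before relying on them.

```python
# Illustrative only: build a GCP custom machine-type string for an N2 VM.
# The naming format and the 256 MB memory granularity are assumptions --
# verify against current GCP documentation.
def custom_machine_type(vcpus: int, memory_mb: int, family: str = "n2") -> str:
    if memory_mb % 256 != 0:
        raise ValueError("Custom memory should be a multiple of 256 MB")
    return f"{family}-custom-{vcpus}-{memory_mb}"

# e.g. 4 vCPUs with 10 GB of memory -> "n2-custom-4-10240"
print(custom_machine_type(4, 10 * 1024))
```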

Why Modernize?

Cloud providers incentivize instance modernization by pricing the newest generations the lowest. Most new instance families are introduced because of better-performing hardware, usually newer CPUs, but sometimes networking or memory improvements. That means you get a server that performs better (even at the same specs) and costs less: the same size in a more modern family is typically 10%-20% cheaper. With better performance at a better price, unless your application doesn't run well on the latest hardware, switching is a no-brainer.
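
To make the math concrete, here's a quick sketch with purely illustrative hourly prices (not actual cloud rates) showing how a generation-over-generation discount adds up over a month of always-on usage.

```python
# Purely illustrative prices -- not real cloud rates.
old_gen_hourly = 0.100   # an older-generation "large" instance
new_gen_hourly = 0.085   # the same size on the newest generation

hours_per_month = 730
pct_cheaper = (old_gen_hourly - new_gen_hourly) / old_gen_hourly * 100
monthly_savings = (old_gen_hourly - new_gen_hourly) * hours_per_month
print(f"~{pct_cheaper:.0f}% cheaper, about ${monthly_savings:.2f} saved per instance per month")
# -> ~15% cheaper, about $10.95 saved per instance per month
```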

ParkMyCloud Can Help Modernize

One of the recommendations ParkMyCloud makes, in addition to schedules for non-production resources and size recommendations based on usage data, is to modernize a VM to a newer instance family so you get better performance at a lower cost. If you accept a recommendation to move to the latest family, you can resize right away or pick a time in the future (such as during a maintenance window), and ParkMyCloud takes the action for you. Note that this involves restarting the machine, so you may want to make sure it's not in use at the time of resizing.
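
If you were doing the same change by hand, the flow with boto3 looks roughly like the sketch below (the instance ID and target type are placeholders); ParkMyCloud performs these steps for you, including the stop and start.

```python
# Rough sketch of a manual instance-family change with boto3.
# The instance ID and target type are placeholders. The instance has to be
# stopped before its type can be changed, which is why timing it around a
# maintenance window matters.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
instance_id = "i-0123456789abcdef0"  # placeholder

ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

# Move from an older family (e.g. m4.large) to the current one.
ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "m5.large"},
)

ec2.start_instances(InstanceIds=[instance_id])
```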

Remember, VM sizing and type selection have a drastic effect on cost: dropping one size within the same VM family can cut the cost by 50%, and changes between families or across more than one size can save even more. ParkMyCloud's user interface shows you how much you can save by making this modernization update, so you know you're getting the most out of your cloud spend. Try out ParkMyCloud today to get recommendations for parking, rightsizing, and modernizing your instances!

How Cloud has affected the Centralization vs Decentralization of IT

Every week, we find ourselves having conversations about cost optimization with a wide variety of enterprises. In larger companies, we often talk to folks in the business unit traditionally known as Information Technology (IT). These meetings usually turn into discussions about the centralization vs. decentralization of IT, often without the participants realizing it, since we're really discussing cloud and how it's built, run, and managed in the organization.

Enterprises have traditionally organized their IT team as a single department under the leadership of the CIO. That IT team works across organizational departments and supports the enterprise's tooling and project needs as requested by other business units or the executive team. Although this approach brings significant efficiencies, it also carries risks that can affect the entire organization, in particular one that stems from the 'need for speed' (agility). Each line of business (LOB) depends on IT to deliver services, hardware, software, and other tools, but delivery is not always quick or efficient, mostly due to internal processes.

Benefits of Centralized IT Structures

The benefits of this type of organizational structure are usually associated with increased purchasing power, improved information flow between IT team members, more efficient hiring of skilled staff, and a watchful view of the enterprise's technical infrastructure from both an operational network and a security perspective. Let's dig into these in a bit more detail.

  • Lowered expenses and increased purchasing power – a centralized environment gives the business more buying power at a lower cost by combining all of its needs into a single buying pool.
  • Improved productivity for IT staff – IT teams are like any other team: they thrive on collaboration and mutual respect for each other's skillsets. Centralization also makes installations and technical resolutions easier, since you're working with a single, central resource.
  • Enterprise-wide information dissemination – a centralized organization builds its network from the center out, so LOBs typically share the same networked resources, such as an ERP or CRM. This avoids the danger of siloed information: data that could be critical to another LOB is invisible if that LOB has no access to it.

Despite the benefits stated above, a centralized team has several limitations and challenges. The one with the greatest enterprise-wide exposure is how best to prioritize project requests from each of the LOBs – enter decentralization and cloud: IaaS, PaaS, and SaaS.

Decentralization is a type of organizational structure in which daily operations and decision-making responsibilities are delegated by top management to middle and lower-level managers and their respective business units. This frees up top management to focus more on major decisions. For a small business, growth may create the need to decentralize to continue efficient operations. Decentralization offers several advantages and is a practical approach when different departments or business units in a company have different IT needs and strategies.

Benefits of Decentralized IT Structures

  • The ability to tailor IT selection and configuration. When individual departments have IT decision-making power, they can choose and configure IT resources based on their own specific needs. For example, each department has its own servers optimized to run its required applications.
  • More fail-safes and organizational redundancy. Decentralizing makes servers and applications more resilient—and it can do the same for IT networks, too. If each department maintains its own server, one can function as a backup server in case another server fails. (Of course, this type of redundancy would need to be properly configured in advance.)
  • Respond faster to new IT trends. Since departments in decentralized organizations can make independent decisions, it’s easier for them to take advantage of new technology in the cloud.

One drawback of decentralized IT structures is that this model often leads to information silos – collections of data and information that cannot be easily shared across departments. Centralized IT structures help prevent these silos, leading to better knowledge-sharing and cooperation between departments. For example, using one centrally managed CRM system makes it possible for any employee in a company to access customer information from anywhere — think SalesForce.

The Reality is Hybrid IT

As we see above and in real life, there are many reasons an organization might be tempted to move toward or away from a centralized IT structure, but in practice many companies run a hybrid model: some IT systems, like your CRM and ChatOps tooling, are centralized, while others, like your cloud provider and orchestration tool, may be decentralized (by business unit). The top reasons for this hybrid model are technical agility and the availability of tools through SaaS, IaaS, and PaaS providers – IT no longer needs to build every solution and tool for you. Decentralized IT structures are typically best for companies that rely on technical agility to remain competitive: newer, smaller companies (e.g., startups) and organizations that need to respond quickly to new IT developments (e.g., software and hardware companies or app developers). For larger companies that want to bring that mentality and model to their business, Capital One – a bank that wants to be a technology company – is a great example.

What are your thoughts on the centralization vs decentralization of IT?

The Latest Public Cloud Market Share and Beyond

With another round of earnings reports from Amazon, Microsoft, and Google out of the way, it's always enjoyable to stand back and see what we can discern about public cloud market share.

Synergy Research Group, which closely monitors such trends, reported 37% overall year-over-year growth in public cloud. It has taken just two years for the public IaaS and PaaS markets to double in size, and Synergy's forecast shows them doubling again within the next three years. Within the overall market, there are some interesting trends among the top three providers, which we discuss below.

Amazon Web Services (AWS)

Last Thursday, Amazon reported that its cloud division's revenue increased 35% in the third quarter, down from 37% in the previous quarter and its slowest growth rate in five years. AWS finished its third quarter with $9 billion in revenue. Each of the three previous quarters also showed slowing growth.

Microsoft Azure

Microsoft followed AWS's report, with Azure posting a revenue growth rate of 59%. In a similar vein to AWS, growth was slowing: it was down from 64% in the previous quarter and 76% a year ago. While Microsoft doesn't break out specific revenue amounts for Azure (unlike AWS), it did report that its "Intelligent Cloud" business revenue increased 27% to $10.8 billion, with revenue from server products and cloud services increasing 30%.

Azure also hit the headlines around the same time as the earnings report with the announcement that it had secured the Pentagon's lucrative, high-profile, and highly contested $10B JEDI cloud contract. This was viewed as a key strategic win for the company and a game changer in the face-off with AWS.

Google Cloud Platform (GCP)

Last to report was Google's parent company, Alphabet. During the analysts' call, a few references were made to overall performance, including the Alphabet CEO praising Thomas Kurian, who leads the GCP business: "Obviously, ever since Thomas has come in, he has continued to invest across the board. He's definitely focused a lot on scaling up our sales, partner and operational teams, and it's playing out well." It was also reported that GCP had hired more sales, engineering, and product managers, and that GCP, analytics, and compute would continue to be a focus of the company's investments going forward.

GCP falls into Alphabet's "other" revenue bucket, which also includes Google Play and hardware; of those businesses, GCP had the highest revenue. Other revenue was $6.43 billion in Q3, a 39% increase over $4.64 billion a year ago. Alphabet didn't break out more specific numbers for cloud, but there is no doubt that the cloud business is the largest of the three.

Other Providers

The companies outside the Big Three – Alicloud, IBM, Oracle, and others – are all growing, but they continue to lose ground to the three dominant market leaders. To compete, hyperscale really matters, and these three bring it in spades.

Cloud in 2020 and beyond

As we enter the next decade, a number of market watchers are speculating about what the reported slowdown in growth means for public cloud market share. As has been widely observed in other markets, growth rates decline as hyperscale is achieved. Even so, these growth rates still exceed almost every other area of the broader technology market. As of Q3 2019, the overall quarterly run rate was $25 billion, implying an annual run rate of over $100 billion and still growing fast. It's unlikely that many other markets have better prospects going into next year.

Why Use One Cloud, When You Can Use Any Cloud?

No, seriously, why would we just use one cloud?

Let’s stop for a moment and think about what has happened over the course of the last few years in public cloud computing and the hypervisor wars on-premises.  VMware has largely dominated the data center, but we are seeing a strong push from Microsoft on the hypervisor front.  KVM and Xen continue to grow in popularity for certain sectors, and all across the spectrum we see lots of folks running more than one hypervisor.

The cloud is no different.  The reason that we are all seeking the “AWS killer” just like the elusive “iPhone killer” is that there is some bizarre need to locate a winner of the platform war. 

This isn’t a zero-sum game.  The real shift in our industry is the broad acceptance of multiple platforms inside every IT portfolio.  We jumped right past the cloud to the multi-cloud.

Why Run More Than One Cloud?

Technology is not the problem; it's the solution. Business challenges are being answered by technology, and that is what really matters. So why would we run more than one cloud? The reason is usually a technological one: certain features, APIs, and architectures may be supported better on one cloud than another. There are raw economics involved as well. And there are availability concerns that drive businesses to disperse their IT across multiple data centers, so why not do the same in the cloud?

The reason that AWS and OpenStack are often pitted against each other is that there are capabilities to enable AWS API access within the OpenStack platform, something Randy Bias and many in the community fought for over the last few years. It matters because AWS has seen huge adoption, and being able to take the same workloads and move them to OpenStack using the same API calls and interactions would be a massive win for OpenStack as a platform.

If we stick to strictly public cloud providers, we can start with the big three: AWS, Microsoft Azure, and Google Cloud Platform. Among those three there is a lot of parrying, with feature and pricing updates happening regularly (features more so than pricing lately). The result is an ever-growing set of services that can be easily consumed. As common orchestration and operational platforms like Mesos and Kubernetes gain in popularity, the commoditization of cloud gains even more credence. (Author's opinion: the supposed "race to zero" for cloud costs is over; the providers have all agreed that pricing isn't where they win customers anymore.)

Reducing the Complexity of Multi-Cloud

Complexity is the one thing that will slow multi-cloud adoption a bit longer. There are clearly different ways to consume resources, and to programmatically create and destroy them, across the public cloud platforms, especially once you go outside the big three. That means consumers of the public cloud will generally start with one target and build deep comfort there before embracing a multi-cloud strategy.

Once we remove or reduce complexity from the list of barriers, that opens up the door for embracing the economic value of a multi-cloud strategy.  This is where we can embrace spot pricing and on-demand growth to tackle scaling needs, while making the workload truly portable and making sure that price becomes the real win.  Networking stacks across the clouds are rather different for a reason.  If every car manufacturer used the same exact parts, they would lower the chances of you coming back to them for up-sell opportunities.  The same goes for the cloud.  Networking and security (they should always be paired) will most likely be the greatest challenge that technologists face in architecting their single multi-cloud solutions.

Next-generation applications are being built as cloud-native where possible. This opens the door to something that has been talked about for years: supposed freedom from vendor lock-in. I'm always rather skeptical when a representative from one cloud company says "come to us and avoid vendor lock-in", because every vendor, even a public cloud one, has lock-in.

What we do gain by embracing the cloud-native approach to application development and deployment is that we reduce the risk of lock-in.

The more we learn from the forward-leaning development teams, the more agility we gain in a multi-cloud architecture. As all of the public cloud pundits representing one faction or another argue over who will be the last to go all-in on the public cloud running cloud-native applications, they have forgotten one thing: they opened the door for their competition too.

How Do I Stop Wasting Money on Reserved Instances?

“How do I stop wasting money on Reserved Instances?” 

It’s a question we’ve heard before from despairing AWS users. They were told Reserved Instances (RIs) would save them money, so they purchased them. Now, halfway into a three-year contract, they realize they’re not utilizing the RIs they’re paying for. Or worse… they may not even know what RIs they have. 

Amazon offers Reserved Instances ostensibly to help get your cloud costs under control. The message is that RIs help you save money on your EC2 instances by offering discounted hourly rates in exchange for a 1- or 3-year commitment. Before we get into how you can cut your cloud spending with an AWS RI, here's a bit of background on what you need to know about AWS EC2 Reserved Instance pricing.

How do EC2 Reserved Instance Purchasing Options Work?

When it comes to Reserved Instance purchasing options, you can choose either a 1-year or a 3-year contract. The longer the commitment, the greater the cost savings compared to On-Demand. By choosing one of these contracts, customers are promised savings of up to 75%.

There are a few risks that come with the longer commitment. For starters, if AWS drops pricing, the promised savings shrink or may disappear. And when AWS introduces a new generation of an instance family, your users may be drawn to it and away from the instances your contracts cover, since those contracts are tied to the older generation. If you don't know your future needs, the 1-year contract may be more appealing than the 3-year; it offers savings of roughly 31-40% versus On-Demand.

There are three different types of EC2 Reserved Instances that customers can purchase: Standard Reserved Instances, Convertible Reserved Instances, and Scheduled Reserved Instances. Standard Reserved Instances offer the most significant savings. Convertible Reserved Instances are attractive because they add flexibility, such as the ability to change instance families, operating systems, or tenancies over the term. Scheduled RIs let you buy an RI that is only used at certain times each day on a recurring schedule.

When an RI expires, you are charged again at the normal On-Demand rate. The recently released option to queue RI purchases in advance can help maximize savings by eliminating gaps in your reservation coverage.

Additional Ways To Save

AWS also offers additional discounts if you have more than $500,000 worth of Reserved Instances in a region – the more Reserved Instances you have, the larger your discount. 

You can also buy RIs on the Reserved Instance Marketplace from third-party sellers. The great thing about this is that third parties tend to list their RIs at lower prices and with shorter remaining terms. And if you find you have too many RIs, you can sell them on the Marketplace as well.

Payment plans

There are three different payment plans offered with Reserved Instances. Payments can be made either All Upfront, Partial Upfront, or No Upfront. It is important to note that if you pay all up front, you will have greater savings because there are no other costs or additional charges during the term regardless of the usage hours. 
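
As a rough illustration of how the payment plans compare, here's a sketch with made-up numbers (not actual RI pricing) that converts an All Upfront payment into an effective hourly rate so it can be compared with a No Upfront rate.

```python
# Made-up numbers for illustration -- not actual RI pricing.
hours_per_year = 8760
on_demand_hourly = 0.10

all_upfront_total = 520.00   # hypothetical single payment for a 1-year Standard RI
no_upfront_hourly = 0.065    # hypothetical hourly rate billed across the term

all_upfront_effective = all_upfront_total / hours_per_year
for label, rate in [("All Upfront", all_upfront_effective), ("No Upfront", no_upfront_hourly)]:
    discount = (1 - rate / on_demand_hourly) * 100
    print(f"{label}: ${rate:.4f}/hr, {discount:.0f}% off On-Demand")
# -> All Upfront: $0.0594/hr, 41% off On-Demand
# -> No Upfront: $0.0650/hr, 35% off On-Demand
```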

Some may think that the need to pay upfront and be locked in undermines both “pay as you go” and the notion of being “elastic”- almost like a step backward to the old economic model. 

Comparing the savings offered by each EC2 RI option against the On-Demand price shows that a 3-year contract delivers much greater savings than a 1-year one. It also shows that Standard Instances save more than Convertible ones, and that the "All Upfront" payment plan saves the most. You still receive discounted hourly rates with Partial Upfront or No Upfront, but if you can manage it, All Upfront is your best option.

How should I use my Reserved Instances?

In non-production environments such as dev, test, QA, and training, Reserved Instances are not your best bet. Why? These environments are less predictable; you may not know how many instances you'll need or when you'll need them, so it's better not to commit spend to reservations you may not use. Instead, schedule such instances (preferably using ParkMyCloud). Scheduling instances to run only 12 hours per day on weekdays saves about 65% – better than all but the most restrictive 3-year RIs!
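
The arithmetic behind that roughly 65% figure is simple; this snippet just spells it out.

```python
# Hours an instance runs on a 12-hours-per-weekday schedule vs. always-on.
hours_on = 12 * 5        # 60 hours per week
hours_in_week = 24 * 7   # 168 hours per week

savings = 1 - hours_on / hours_in_week
print(f"{savings:.0%} saved vs. running 24x7")  # -> 64% saved, roughly 65%
```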

Reserved Instances are very much a “use it or lose it” proposition. In other words, there are no rollover minutes – if you don’t use your reserved instances one month you don’t get extra time the next month. Here’s why they are like this:

  • The EC2 options available are specific to Region, Availability Zone, Instance Type (e.g. m5.large) with some exceptions, Platform Type (e.g. Linux or Windows), and Tenancy. AWS, behind the scenes, attempts to randomly match instances you launch to the Reserved Instance contracts you have in place, based on the specific criteria. When there is a match, the cost benefit is applied. It is not uncommon for people to believe they are launching instances that match all the criteria, when in fact they are not, so the contracts are under-utilized. And you won’t know what matches were made until you get your bill at the end of the month.
  • AWS decrements the contract for every hour, whether you used it or not, so every unused hour diminishes your return on investment.
  • For every hour in your RI term, you pay the fee for hourly usage regardless of whether there has been any usage during that hour. 

Given all of the tradeoffs mentioned above, Reserved Instances make the most sense in a production environment, where instances need to always be “on.” 

How ParkMyCloud Can Help Manage Your Reserved Instances

ParkMyCloud is an easy to use platform that allows users to automatically identify and eliminate wasted cloud spend. You can use the ParkMyCloud platform to fully optimize your non-production instances without committing to an AWS EC2 RI term that will go underutilized. The platform does this by scheduling, rightsizing, and identifying idle instances. Recently, we added the ability to view all your existing Reserved Instances in the platform so you can better track what commitments you have already made, with more optimization functionality coming soon.

With ParkMyCloud, you can create parking schedules that automatically turn EC2 instances on and off according to your specifications. ParkMyCloud provides customized parking recommendations based on criteria provided by the user, which makes identifying “parkable” instances easier – and you can automatically accept these recommendations if you like. Turning this into an automated process cuts down on time and costs, thus further optimizing your cloud environments. Another perk of ParkMyCloud is that the platform tracks costs, projected 30-day savings, and actual savings for the current month – giving you better visibility. 

ParkMyCloud easily achieves EC2 savings of 50-73% with no annual commitment, upfront payment, or risk of instance termination or price cuts. In fact, we had a customer cancel a $10,000 order for AWS Reserved Instances in favor of EC2 instances that they could turn on and off after they found out just how easy and powerful this cost savings tool can be. Here are some of the advantages that come with using ParkMyCloud:

  • Better savings
  • No commitment or upfront payment
  • Price cut protection

Try out ParkMyCloud for yourself and get started parking your non-production systems and RightSizing your resources to ensure that your environments are running in the most efficient way possible.

If you use AWS RIs, you need to use the new queuing option

The AWS reserved instance (AWS RI) offerings got a recent upgrade with the release of a "queue" function. This means that you can now purchase reserved instances that, rather than going into effect immediately, are scheduled for future purchase. (Yes – despite the fact that RIs have been available for a decade, this is a new feature!)

Back up – what was released? 

If you haven’t used AWS RIs before, it’s worth a brief primer. When you purchase a reservation, you’re not buying a specific instance or even capacity: it’s a billing function. In exchange for a commitment over 1 or 3 years, you get an attractive discount. These discounts are applied on the back end of the billing process, and are allocated against specific instances on an hour-by-hour basis over the course of the month. 

There are a few variations within the AWS RI purchasing options, such as the term; how much you pay upfront vs. monthly; the option for them to be scheduled; whether the scope of the discount covers instances in a single region or in a particular availability zone; etc.

More on those options and whether you should actually be using Reserved Instances, in this post. (TL;DR: RIs are the right choice when you have 24×7 long-term production workloads; otherwise they’re usually not.) 

So, the new feature is the option to purchase these reservation discounts to begin on a future date rather than immediately. This is designed to make it easier for users to have uninterrupted reserved instance coverage. Previously, at the end of a 1- or 3-year term, many users would be unaware that their reservation expired and would have a spike in cost…which they may or may not notice. 

How does queuing work?

Now, when planned correctly, you can avoid the lapse of Reserved Instance coverage for your workloads by scheduling a new reservation purchase to go into effect as soon as the previous one expires. The furthest in advance you can schedule a purchase is three years, which is also the longest RI term available. 

Before queueing was available, customers had the option to either just go ahead and purchase a new reservation a few days/hours/weeks before the previous RI was due to expire, or set a reminder to go in and buy a new reservation after the previous one had lapsed. Either way, there was an extra cost – either a time window with too many RIs, or one with too few. So it is easy to see that RI queueing can save you money. Queueing can also save you some hassle, as you no longer have to set reminders and build your daily/weekly schedule around going in to buy a new RI. (Reminiscent of some late-night eBay sessions, waiting for the end of an auction to roll around.)

There are a few limitations. AWS RI purchases can be queued for regional Reserved Instances, but not zonal Reserved Instances. Regional RIs are the broader option as they cover any availability zone in a region, while zonal RIs are for a specific availability zone and actually reserve capacity as well. 

Cancellation is an option: since payment is processed only at the scheduled purchase time in the queue, you can cancel a purchase at any time before it is processed. 
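
For the programmatically inclined, the flow looks roughly like the sketch below, assuming the PurchaseTime parameter and the DeleteQueuedReservedInstances call that shipped with queued purchases; the offering ID, date, and reservation ID are placeholders.

```python
# Hedged sketch of queueing and cancelling an RI purchase with boto3.
# Assumes the PurchaseTime parameter and DeleteQueuedReservedInstances call
# introduced alongside queued purchases; IDs and dates are placeholders.
from datetime import datetime, timezone
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Queue a regional RI purchase to take effect when the current one expires.
ec2.purchase_reserved_instances_offering(
    ReservedInstancesOfferingId="offering-id-placeholder",
    InstanceCount=2,
    PurchaseTime=datetime(2020, 6, 1, tzinfo=timezone.utc),
)

# Payment isn't processed until PurchaseTime, so a queued purchase can be
# cancelled any time before then.
ec2.delete_queued_reserved_instances(
    ReservedInstancesIds=["queued-reservation-id-placeholder"]
)
```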

We find it interesting that these are designed as new purchases rather than "renewable" RIs – likely because users may want to queue an evolving RI type or purchase profile, instead of the same instance type, duration, and payment terms over time.

Beware the AWS RI Black Hole

Of course, the downside to queuing a purchase in advance is that you now have a new commitment to track – and one that may not meet your needs by the time the purchase goes into effect. 

It’s already difficult to shine light on your existing reservations, especially with options in place such as instance size flexibility and the broad applicability of regional RIs.

That’s why ParkMyCloud has released our first support for Reserved Instances this week. You told us that RIs are the next biggest thing that need optimization help on your cloud bills, and we listened. Now, you can see all your AWS RIs – past, present, and queued future purchases – in one place in ParkMyCloud. Next, we’ll be working on more recommendations and optimization – stay tuned!