Why Reserved Instance Pricing Needs Careful Evaluation

Once or twice a year we like to take a look at what is going on in the world of reserved instance pricing. We review both the latest offerings and options put out by cloud providers and how users are choosing to use Reserved Instances (AWS), Reserved VM Instances (Azure) and Committed Use Discounts (Google Cloud).

A good place to start when it comes to usage patterns and trends is the annual RightScale (Flexera) State of the Cloud Report. The 2019 report shows that current reservation usage stands at 47% for AWS, 23% for Azure and 10% for GCP. These figures are interesting when viewed alongside the finding that companies' number one cloud initiative for the coming year is optimizing their existing use of the cloud. All of the cloud providers place a major focus on pre-selling infrastructure via their reservation programs, as this provides them with predictable revenue (something much loved by Wall Street) and allows them to plan for and match supply with demand. In return for an upfront commitment they offer discounts of "up to 80%", but much as with your local furniture retailer's big-savings headlines, these discount levels warrant further investigation.

While working on an upcoming feature release, we began to dig a little deeper into the nature of current reserved instance pricing and discounts. From our research, it appears that real-world discount levels sit in the 30-50% range. Achieving the much higher discounts you might see the cloud providers promoting typically requires a three-year commitment, restriction to certain regions, restrictions on OS types, and generally a willingness to commit to spending a few million dollars.

Reservation discounts, while not as volatile as spot instances, do change and need to be carefully monitored and analyzed. For example, as of this writing, the popular modern m5.large instance type in the US East region costs $0.096 per hour on demand but drops to an effective $0.037 per hour reserved, a significant saving of roughly 61%. However, securing that discount requires a three-year commitment, prepaid in full up front. While the number of organizations committing to contracts of this nature is not publicly known, it is likely that only the most confident organizations with large cash reserves are positioned to make a play like this.
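The tradeoff is easy to sanity-check yourself. Here's a minimal Python sketch using the on-demand and three-year all-upfront rates quoted above (substitute your own provider's numbers):

```python
# Back-of-the-envelope RI check using the m5.large US East rates
# quoted above; substitute your own pricing as needed.
HOURS_PER_YEAR = 8760

on_demand_rate = 0.096   # $/hour, on demand
reserved_rate = 0.037    # $/hour effective, 3-year all-upfront
term_years = 3

on_demand_cost = on_demand_rate * HOURS_PER_YEAR * term_years
reserved_cost = reserved_rate * HOURS_PER_YEAR * term_years
discount = 1 - reserved_rate / on_demand_rate

print(f"3-year on demand: ${on_demand_cost:,.0f}")
print(f"3-year reserved:  ${reserved_cost:,.0f}")
print(f"Effective discount: {discount:.1%}")

# The reservation is prepaid, so it only wins if the instance
# actually runs enough of the term. Below this utilization,
# on demand would have been cheaper:
print(f"Break-even utilization: {reserved_rate / on_demand_rate:.1%}")
```

The break-even point is worth dwelling on: at these rates, an instance that runs less than about 39% of the three-year term would have been cheaper on demand.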

Depending on the precise program used to purchase the reservations, there can be options to convert specific instance families, instance types and OS types for others; to resell the instances on a secondary exchange (on AWS, for a 12% penalty fee); or to terminate the agreement (on Azure, for the same 12% fee). GCP's Committed Use program appears to be the most stringent, as there is no way to cancel the contract or resell pre-purchased capacity – although Google does not offer a pre-purchase option in the first place.

As the challenge of optimizing cloud spend has moved up the priority list to take the #1 slot, a maturation process has taken place inside organizations when it comes to undertaking economic analysis and understanding the various tradeoffs. Some organizations are using tools to support such analysis; others are hiring consultants or using in-house analytics resources. Whatever the approach, analyzing an organization's use of cloud typically means balancing the purchase of different types of reservations, spot instances, and on-demand infrastructure that is highly optimized through automation tools. The level of complexity in such analysis is certainly not decreasing, and mistakes are common. However, the potential savings are significant if you achieve the right balance, and they are clearly something you should not ignore.

The relative balance between the different options to purchase and consume cloud services in many ways reflects the overall context within which organizations operate, their specific business models, and broader macro issues such as the outlook for the overall economy. Understanding the breadth of options is key, and although reservations are likely to be a key component for most organizations, it is worth digging into just how large the relative tradeoffs might be.

3 Things to Look Forward to at Google Cloud Next 2019

Google Cloud Next 2019 will be our first Google event – and we’re looking forward to it! Google hopes to attract 30,000 attendees this year – up from 23,000 last year – to the San Francisco conference. This is the largest gathering of Google Cloud users, and features three days of networking, learning, and problem solving. Here are 3 things to look forward to at the event this year.

1. Announcements

As with any event of this scale, Google Cloud has been saving up announcements to make at its flagship event. At last year's event, Google Cloud made over 100 announcements. While some of the items listed stretch the idea of an announcement – customer case studies, for example – others were more interesting, ranging from Google Cloud Functions (serverless) to Istio for microservices management to resource-based pricing. They're sure to have some exciting developments to share for 2019.

2. Speakers & Sessions

This year, the event has more than 30 featured speakers, and attendees will get to hear from executives from throughout the Google Cloud organization as well as their top customers and partners.

There will be hundreds of breakout sessions on 18 tracks. While the sessions you choose to attend will likely focus on the track most relevant to your job role and areas where you’re looking to grow, be sure to scan the full list for other cool sessions. A few that caught my eye…

You can also get certified while at the conference. If possible, we recommend doing this on Monday so you don’t miss out on sessions, but see what your schedule looks like.

3. Fun

Don't forget to have fun while you're there. Start with a visit to the expo when you have a break during conference hours – sponsors from Salesforce to Datadog to CloudHealth will have booths where you can learn about their offerings, check out cool demos, and of course, get the latest in innovative swag and giveaways. And be sure to come see ParkMyCloud! We'll be in the group of booths right as you walk in the main entrance of the expo hall, at booth #S1151.

After hours, various vendors & sponsors are having happy hours, so check out the websites, blogs, and emails from your favorite products to see if there are any you’d like to join. Plus, enjoy the city of San Francisco!

See You At Google Cloud Next 2019

If you’ll be at the event, be sure to stop by and say hi to ParkMyCloud at booth S1151 – schedule a time to stop by and we’ll give you an extra scratch-off card for a chance to win an Amazon.com gift card. We’d love to chat and hear what you think of the event.

Psst — if you haven’t yet registered, shoot me an email and I might be able to hook you up with a discount code.

The Next Evolution of Cloud Management: Container Management

As we talk to prospects and customers alike, one of the most frequently requested features is container management. Containers – one of several growing optimization trends – help you package and run applications 'anywhere' in isolated environments to reduce configuration time when deploying to production. In theory this, like VMs, will help you increase the efficiency of your infrastructure – and we're big fans of efficiency and optimization.

Are there enough containers that need management?

As we begin to plan our container management offering for later in the year, we need to understand whether this is just hype (as it seems everything is these days) or something our customers actually want and need.

First, let's review the players in the container field. There are the core container technologies, Docker and Kubernetes, as well as the managed service offerings from cloud service providers (CSPs): AWS ECS and AWS EKS, Azure AKS and Google GKE, the latter three being based on Kubernetes (longer container services comparison here).

So let's dig into how big that market actually is. Most industry publications estimate that $1.5B was spent in 2018 on container technology, with a Compound Annual Growth Rate (CAGR) in the 30% range. One summary from 451 Research shows strong projected growth.

What kind of container management is needed?

The containers are there, so the next question is: what type of management is needed – especially for the CSP managed container services?

Container management, like broader cloud management, includes orchestration, security, monitoring and, of course, optimization.

In terms of optimization alone, we have identified five ways we think you can optimize and save on your container cloud costs (a brief sketch of the first two follows the list):

  1. Rightsize your Pods
  2. Turn off your Idle Pods
  3. Rightsize your Nodes
  4. Consider Storage Opportunities
  5. Review Purchasing Options
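To make the first two ideas concrete, here is a minimal Python sketch that flags oversized and idle pods. The pod names and usage numbers are hypothetical; in practice the figures would come from your metrics pipeline (e.g. kubectl top pods or the Kubernetes metrics API) rather than a hard-coded dict:

```python
# Hypothetical pod usage data: name -> (CPU requested, CPU used),
# both in millicores. Real numbers would come from your metrics
# pipeline (e.g. kubectl top pods / the Kubernetes metrics API).
pods = {
    "web-frontend": (1000, 80),
    "batch-worker": (2000, 1900),
    "dev-test-db": (500, 2),
}

IDLE_THRESHOLD = 0.05       # under 5% of request -> likely idle
RIGHTSIZE_THRESHOLD = 0.50  # under 50% -> request looks oversized

for name, (requested, used) in pods.items():
    utilization = used / requested
    if utilization < IDLE_THRESHOLD:
        print(f"{name}: {utilization:.1%} utilized -- likely idle, consider stopping it")
    elif utilization < RIGHTSIZE_THRESHOLD:
        print(f"{name}: {utilization:.1%} utilized -- candidate for rightsizing")
    else:
        print(f"{name}: {utilization:.1%} utilized -- OK")
```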

Do you need to focus on container management?

In short, if you plan to use any sort of containers in the cloud: yes. Containers provide opportunities for efficiency and more lightweight application development, but like any on-demand computing resource, they also leave the door open for wasted spend. Earlier this year, we estimated that at least $14.1 billion will be wasted on idle and oversized VMs alone. Unused, idle, and otherwise suboptimal container deployments will contribute billions more to that waste.

So yes: container management and optimization should be part of your cloud optimization plan.

Top Cloud Finance Questions from CFOs

Taking control of your cloud finances is now more important than ever, and there is no room for wasted spend. More organizations are shifting to cloud-based infrastructure – according to forecasting done by Gartner last year, worldwide public cloud revenue is expected to grow 17.3 percent in 2019. While this is good news for technology innovation, from the finance side of the table, elastic infrastructure poses a challenge. CFOs need to ensure that IT and development departments are optimizing spend even while encouraging innovation and growth.

The Challenge When it Comes to Cloud Finance

Finance departments continue to search for capital optimization, lowering costs while prioritizing business models that can transform and expand worldwide with flexibility. With this flexibility, though, comes complexity that is difficult to manage, deploy, and – most frustrating of all – forecast.

With rapid growth comes rapid responsibility. If an organization is not cautious, cloud spending can spiral out of control, and using the cloud might seem counterproductive. Finance and IT departments must come together and work toward key business goals, bridging the disconnect to keep a cost control strategy from languishing as a project instead of becoming an actionable, executable plan.

Smart Questions CFOs Should Be Asking

With the struggle to control cloud spend, CFOs need to address cloud finance questions and understand their impact on operations. After all, most organizations cite lowering costs as one of their primary reasons for moving to the cloud. To make sure that finance teams and IT departments are on the same page, here are three smart cloud finance questions CFOs should ask.

1. Are we thinking about the cloud cost model correctly?  

Out of habit from the on-premises mindset, many organizations moving to the cloud purchase far more capacity than they actually need. The major benefits of moving to the cloud are flexibility – using the cloud based on your real-time needs – and capacity that, in theory, matches what an on-site data center would provide. Unfortunately, the mindset does not carry over cleanly: the majority of companies overspend on cloud resources they are not using for much or all of the time.

So, when CFOs talk to their IT counterparts about cloud spending, they need to ensure that everyone is now in an OpEx mindset, rather than the on-prem model of CapEx.

2. Are we wasting cloud spend?

The answer is most likely yes. To further explain why this happens, we need to look at the factors that contribute to this waste. A huge contributing factor is idle resources. The cloud runs 24/7, but most non-production resources used for development, testing, staging, and QA are only needed during the work week. To put it in perspective: if you only need resources during a 40-hour work week, you are paying for them to stay idle the rest of the time. Assuming a twelve-hour workday window five days a week, roughly 65% of the hours you're paying for are hours the resources sit idle.
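The arithmetic behind that figure is simple enough to check in a few lines of Python:

```python
# Hours billed vs. hours actually needed for a non-production
# resource left running around the clock.
HOURS_PER_WEEK = 24 * 7    # the cloud bills all 168 hours
working_hours = 12 * 5     # 12-hour window, 5 days a week

idle_fraction = 1 - working_hours / HOURS_PER_WEEK
print(f"Needed: {working_hours} of {HOURS_PER_WEEK} hours/week")
print(f"Idle:   {idle_fraction:.0%} of the hours you pay for")
# -> about 64%, in line with the ~65% figure above
```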

Another contributing factor is oversized resources. We recently found that the average CPU usage of resources managed in our platform is only 4.9%. That points to a trend of massive underutilization, where resources can easily be sized down for 50-70% cost savings.

3. What steps are we taking to control and reduce cloud spend?

IT and development departments will be focused on growth, so it's often the role of Finance to ensure that these teams are putting cost control measures in place for public cloud. Ensure that your technical departments have an actionable – preferably automated – plan in place to combat wasted cloud spend. Ask for reports broken down by project or team over time, and research cloud optimization platforms that the technical teams can take advantage of. A cloud optimization platform with automated and analytical capabilities will help you discover cost-saving opportunities and enable more efficient workflows between departments.

The Bottom Line

Finance departments can push the cloud conversation toward optimization of resources, ensuring that IT departments are both innovative and within budget. Create a competitive cloud finance strategy that includes visibility, flexibility, and governance to create an opportunity for the business to function effectively across departments. This will improve ROI and reporting and, fundamentally, drive the implementation of better solutions to thrive in the cloud.

15 AWS Best Practices for 2019

There are a ton of great blogs covering AWS best practices and use cases. To provide a little more insight into the latest advice, we've put together 15 best practices published since the beginning of 2019, consisting of tips and quotes from a variety of experts.

1. Take Advantage of AWS Free Online Training Resources

“There’s no shortage of good information on the internet on how to use Amazon Web Services (AWS). Whether you’re looking for ways to supplement your certification study efforts or just want to know what the heck it’s all about, check out this compilation of free training and resources on all things AWS.”

2. Keep Up With Instance Updates So You Can Periodically Make Changes to Costs and Uses

“AWS expands its choices regularly, so you need to dynamically re-evaluate as your business evolves. The cloud presents many arbitrage opportunities including instance families, generations, types, and regions—but trying to do this manually is a recipe for time-consuming frustration. Don’t fall victim to Instance Inertia: even though the process of making a change is simple enough, it can be difficult to accomplish without having any conclusive evidence of either cost gains or performance improvements.”

3. Limit Access by Assigning User Permissions

“Your configuration of IAM, like any user permission system, should comply with the principle of “least privilege.” That means any user or group should only have the permissions required to perform their job, and no more.”
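As an illustration of what "least privilege" looks like in practice, here is a hypothetical boto3 sketch that creates a read-only policy scoped to a single S3 bucket (the bucket and policy names are placeholders) rather than granting s3:* on everything:

```python
import json

import boto3

# Hypothetical least-privilege policy: read-only access to one
# bucket, instead of a wildcard grant across all of S3.
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject"],
        "Resource": "arn:aws:s3:::example-reports-bucket/*",  # placeholder
    }],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="ReportsReadOnly",  # placeholder name
    PolicyDocument=json.dumps(policy),
)
```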

4. Visibility Across Multiple Accounts in One Frame Helps Make More Informed Decisions

“Use a cloud security solution that provides visibility into the volume and types of resources (virtual machines, load balancers, security groups, users, etc.) across multiple cloud accounts and regions in a single pane of glass. Having visibility and an understanding of your environment enables you to implement more granular policies and reduce risk.”

5. Tag IAM Entities to Help Manage Access Granted to Resources Based on an Attribute  

“AWS has now added the ability to tag IAM users and roles, which eases management of IAM entities by enabling the delegation of tagging rights and enforcement of tagging schemes.”

“A primary use case for the new feature is to grant IAM principals access to AWS resources dynamically based on attributes. This can now be achieved by matching AWS resource tags with principal tags in a condition.”
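For example, a policy statement along these lines (a hypothetical sketch, using "team" as the tag key) would let a principal manage only the instances whose team tag matches their own:

```python
import json

# Hypothetical ABAC-style statement: the caller may only stop or
# start EC2 instances whose "team" tag equals their own "team" tag.
statement = {
    "Effect": "Allow",
    "Action": ["ec2:StartInstances", "ec2:StopInstances"],
    "Resource": "*",
    "Condition": {
        "StringEquals": {
            "ec2:ResourceTag/team": "${aws:PrincipalTag/team}"
        }
    },
}

print(json.dumps(statement, indent=2))
```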

“As cloud deployments grow, teams deal with an increasing number of resources that are constantly moving, growing, and changing. Projects may be shared between teams or customers and can rely on different regions and platforms. This makes it easy to lose track of what’s being used until the bill comes due. For tags to be actionable at scale, most teams require visibility of exactly which resources are at play at any given time, who is using them, what they are being used for, and who is responsible for them. Essentially, the more high-quality information associated with a resource, the easier it becomes to manage.”

“Within each of these categories, you can then define your own tags that are specific to your organization for standardization.”

6. Creating a Start/Stop Schedule With an Instance Scheduler Will Help You Optimize Costs

“EC2 is a main compute service on AWS; they’re your (Windows and Linux) virtual machines. Running compute resources costs money, simple as that….”

“Paying only for the resources you actually need and use can save you a LOT of money.”
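Whether you use a purpose-built tool or roll your own, the underlying mechanic is straightforward. Here is a minimal boto3 sketch, meant to run on a schedule (cron, a scheduled Lambda, etc.), that stops running instances carrying a hypothetical schedule=office-hours tag; a mirror-image script using start_instances would bring them back in the morning:

```python
import boto3

# Stop every running instance tagged schedule=office-hours.
# (The tag key and value are a hypothetical convention.)
ec2 = boto3.client("ec2")

resp = ec2.describe_instances(Filters=[
    {"Name": "tag:schedule", "Values": ["office-hours"]},
    {"Name": "instance-state-name", "Values": ["running"]},
])

ids = [
    instance["InstanceId"]
    for reservation in resp["Reservations"]
    for instance in reservation["Instances"]
]

if ids:
    ec2.stop_instances(InstanceIds=ids)
    print(f"Stopped: {ids}")
else:
    print("Nothing to stop.")
```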

7. Decrease Errors and Streamline Your Deployments With An Automation Tool

“Whether you choose to use AWS CodeDeploy or a different tool, automating your software deployments helps you more consistently deploy an application across development, test, and production environments. The importance of automation in deployment in order to decrease errors and increase speed cannot be overstated.”

“Automate your deployment. This saves you from potentially costly and damaging human error. With the automation services available today, you have many options to customize every part of your deployment without letting automation fully take over if you prefer.”

8. Have a Reserved Instances Strategy

“Purchasing an RI is only the beginning; you should have a process in place to continuously monitor RI utilization and modify unused RIs (split/join or exchange convertible RIs) to maximize their usage. A common AWS billing model is a centralized account with consolidated billing, linked to autonomous accounts so individual accounts can purchase RIs based on their individual usage patterns.”
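On AWS, that continuous monitoring can start with the Cost Explorer API. A minimal sketch, assuming Cost Explorer is enabled on the account (the date range is arbitrary):

```python
import boto3

# Pull last month's aggregate RI utilization from Cost Explorer.
ce = boto3.client("ce")
resp = ce.get_reservation_utilization(
    TimePeriod={"Start": "2019-03-01", "End": "2019-04-01"},
)
print("RI utilization:", resp["Total"]["UtilizationPercentage"], "%")
```

A low percentage here is the cue to exchange, modify, or resell reservations before the waste compounds.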

9. Account For the Capacity You Will Need So You Have a Size That Fits Your Environment

“We know that AWS EC2 instance types are sized and priced exponentially. With millions of sizing options and pricing points, choosing the wrong instance type can mean a major pricing premium—or worse, a substantial performance penalty! We see many organizations choose an instance type based on generic guidelines that do not take their specific requirements into account.”

“AWS offers a variety of types and sizes of EC2 instances. That means that it’s perfectly possible to select an instance type that’s too large for your actual needs, which means you’ll be paying more than necessary. In fact, the data shows that this is happening most of the time.”
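Spotting those oversized instances mostly means looking at utilization data you already collect. Here is a hedged boto3 sketch that pulls two weeks of daily average CPU for one instance (the instance ID is a placeholder):

```python
from datetime import datetime, timedelta

import boto3

# Daily average CPU over the last 14 days for one instance
# (i-0123456789abcdef0 is a placeholder ID).
cw = boto3.client("cloudwatch")
resp = cw.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(days=14),
    EndTime=datetime.utcnow(),
    Period=86400,            # one datapoint per day
    Statistics=["Average"],
)

points = resp["Datapoints"]
if points:
    avg_cpu = sum(p["Average"] for p in points) / len(points)
    print(f"14-day average CPU: {avg_cpu:.1f}%")
    # A sustained single-digit average is a strong rightsizing signal.
```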

10. Save Your Team Time and Money with Serverless Management

“AWS data is housed in different regions all over the world. Its cloud-based system means you’re able to access your data in just a matter of minutes.”

“No more having to set up and maintain your own servers. That’s just more stress and money out of your pocket. Instead, you can leave it to the experts at AWS, who will ensure the infrastructure your business runs on is operating efficiently.”

“The AWS Serverless Application Repository allows developers to deploy, publish, and share common serverless components among their teams and organizations. Its public library contains community-built, open-source, serverless components that are instantly searchable and deployable with customizable parameters and predefined licensing. They are built and published using the AWS Serverless Application Model (AWS SAM), an infrastructure-as-code YAML language used for templating AWS resources.”

11. Set up a Secure Multi-Account with AWS Landing Zone

“With the large number of design choices, setting up a multi-account environment can take a significant amount of time, involve the configuration of multiple accounts and services, and require a deep understanding of AWS services.

This solution can help save time by automating the set-up of an environment for running secure and scalable workloads while implementing an initial security baseline through the creation of core accounts and resources.”

12. Ensure Consistency in your Environment with Containers

“Containers offer a lightweight way to consistently port software environments for applications. This makes them a great resource for developers looking to improve infrastructure efficiency, and they are becoming the new normal over virtual machines (VMs).”

“Containers share an operating system installed on the server and run as resource-isolated processes, ensuring quick, reliable, and consistent deployments, regardless of environment.”

13. Auto Scaling Groups

“Auto Scaling Groups can be used to control backend resources behind an ELB, provide self-replication (when an instance crashes, the Auto Scaling Group will immediately provision a new one to maintain the desired capacity), simplify deployments (regular releases, blue/green deployments, etc.), and for many other use cases…

The unnecessary spending on EC2 instances is usually caused by unused or underused compute resources that increase your monthly bill. This is an age-old problem where you provision more than you need to make sure you have enough to handle the expected, but also unexpected, traffic. An Auto Scaling Group solves this issue by handling the scalability requirements for you.”
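As a sketch of how the capacity knobs work in practice (the group name is a placeholder), adjusting an existing group's bounds is a one-call operation with boto3:

```python
import boto3

# Let the group float between 2 and 10 instances; the ASG will
# replace failed instances to maintain the desired capacity.
autoscaling = boto3.client("autoscaling")
autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="web-tier-asg",  # placeholder name
    MinSize=2,
    MaxSize=10,
    DesiredCapacity=3,
)
```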

14. Automatically Backup Tasks

“AWS Backup performs automated backup tasks across an organization’s various assets stored in the AWS cloud, as well as on-premises. It provides a centralized environment, accessible through the AWS Management Console, for organizations to manage their overall backup strategies.

AWS Backup eliminates the need for organizations to custom-create their own backup scripts for individual AWS services, the company contends.”

15. Use API Gateway to Manage APIs at Scale

“Capable of accepting and processing hundreds of thousands of concurrent API calls, API Gateway can manage such related tasks as: API version management; authorization and access control; traffic management and monitoring.”

Have any AWS best practices you’ve learned recently? Let us know in the comments below!

How MSPs Can Educate Customers on Cloud Cost Models

Part of the role of any managed service provider managing cloud services is to guide their customers through the process of creating and evaluating cloud cost models. This is important whether migrating to the cloud, re-evaluating an existing cloud environment, or simply understanding a monthly cloud bill. Many customers may be more familiar with on-prem cost models, so relating to that mindset is crucial. Here are a few important things to keep in mind when educating your customers about cloud costs.

1.  Explain CapEx vs. OpEx

One of the biggest shifts in mentality when evaluating cloud cost models is the move from predominantly Capital Expenditures (CapEx) with on-prem workloads to predominantly Operational Expenditures (OpEx) with cloud workloads.

As one of our customers explained the mindset problem:

“It’s been a challenge educating our team on the cloud model. They’re learning that there’s a direct monetary impact for every hour that an idle instance is running.”

Another contact added: “The world of physical servers was all CapEx driven, requiring big up-front costs, and ending in systems running full time. Now the model is OpEx, and getting our people to see the benefits of the new cost-per-hour model has been challenging but rewarding.”

Deploying a project in a private cloud involves lots of up-front purchases and ongoing maintenance, including servers, power, hardware, buildings, and more. On top of the actual purchase cost, you must account for amortization, depreciation, and the opportunity cost of those purchases.

Cloud workloads often work on a pay-as-you-go model, where you pay only for what services and features you use and how long you use them. This provides organizations with almost no Capital Expenditures for these resources, but results in a dramatic increase in Operational Expenditures. Neither is necessarily a bad thing, but your job as an MSP is to clearly articulate this shift so the customer can understand why the ongoing costs appear so much higher. And, of course, you’ll have to incorporate your own value into the equation.
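A simple side-by-side with illustrative numbers can make the shift click for a customer. Every figure in this sketch is hypothetical; substitute real quotes when building an actual model:

```python
# Illustrative CapEx vs. OpEx comparison -- all numbers are
# hypothetical placeholders, not real quotes.
server_capex = 12000         # on-prem purchase price, $
overhead_per_year = 2000     # power, cooling, maintenance, $
amortization_years = 3

onprem_annual = server_capex / amortization_years + overhead_per_year

cloud_rate = 0.20            # $/hour for a comparable VM
hours_needed = 12 * 5 * 52   # 12 h/day, 5 days/week, 52 weeks

cloud_annual = cloud_rate * hours_needed

print(f"On-prem (amortized CapEx): ${onprem_annual:,.0f}/year")
print(f"Cloud (pay-as-you-go OpEx): ${cloud_annual:,.0f}/year")
```

The point isn't that one column always wins; it's that the cloud column only stays small if someone actually turns the resources off outside those hours.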

2.  Make Sure Your Clients Understand Their Cloud Bill Breakdown

For on-prem services, the cost model doesn't usually require detail about what software or service is actually running on the physical machine. A database server and a web server may have different specs, but everything becomes normalized to the physical hardware that must be purchased as a one-time fee. This provides a certain level of simplicity in your calculations, but you still must account for all the additional physical factors like power, air conditioning, redundancy, cabling, racks, and maintenance.

Cloud services not only charge based on time used, but also have very different costs for each service. A database server and a web server are going to have very different cost structures, and will show up on your monthly bill as separate items. This often makes the bill look much more complex, but the flip side of that is that you have many opportunities for optimization and cost allocation.

3.  Be the Authority on IT Costs

Creating cloud cost models for your customers can require a big mental shift from other cost models, but it's an important step for current and future IT projects. Understanding what the options are, what the costs are, and what your usage will be are all key factors. Make sure to convey all of these aspects to your client's stakeholders in a clear way to avoid a surprise bill at the end of each month.

Ultimately, the market for cloud managed services is growing, which is good for managed service providers. As customers migrate to the cloud, they will need cost optimization expertise, which is a great angle for MSPs to get a foot in the door.