New: RightSizing Now Generally Available in ParkMyCloud for Data-Driven Cloud Optimization

Exciting news: RightSizing is now generally available in ParkMyCloud! You can now use this method for automated cost optimization alongside scheduling to achieve an optimized cloud bill in AWS, Azure, and Google Cloud. 

How it Works

When you RightSize an instance, you find the optimal virtual machine size and type for its workload. 

Why is this necessary? Cloud providers offer a myriad of instance type options, which can make it difficult to select the right one for the needs of each and every one of your instances. Additionally, users often select the largest size and compute power available, whether because they don’t know their workload needs in advance, don’t see cost as their problem, or simply want headroom “just in case”.

In fact, our analysis of instances being managed in ParkMyCloud showed that 95% of instances were operating at less than 50% average CPU, which means they are oversized and wasting money.

Now with ParkMyCloud’s RightSizing capability, you can quickly and easily – even automatically – resolve these sizing discrepancies to save money. ParkMyCloud uses your actual usage data to make these recommendations, and provides three recommendation options, which can include size changes, family/type changes, and modernization changes. Users can choose to accept these recommendations manually or schedule the changes to occur at a later date.
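
To make the idea concrete, here is a minimal sketch (not ParkMyCloud’s actual implementation) of how you might flag oversized instances yourself from CloudWatch CPU data using boto3; the 50% average-CPU threshold mirrors the statistic above:

```python
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

def is_oversized(instance_id: str, days: int = 14, threshold: float = 50.0) -> bool:
    """Flag an instance whose average CPU stayed below `threshold` percent."""
    end = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=end - timedelta(days=days),
        EndTime=end,
        Period=3600,  # one datapoint per hour
        Statistics=["Average"],
    )
    points = [p["Average"] for p in stats["Datapoints"]]
    # No datapoints usually means the instance was stopped the whole window.
    return bool(points) and (sum(points) / len(points)) < threshold
```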

How Much You Can Save

A single instance change can save 50% or more of the cost. In the example shown here, ParkMyCloud recommends three possible changes for this instance, which would save 40-68% of the cost. 

At scale, the savings potential can be dramatic. For example, one enterprise customer who beta-tested RightSizing found that their RightSizing recommendations added up to $82,775.60 in savings: an average of more than $90 per month, or more than $1,000 per year, for every instance in their environment. 

How to Get Started

Are you already using ParkMyCloud? If not, go ahead and register for a free trial. You’ll have full access for 14 days to try out ParkMyCloud in your own environment – including RightSizing.

If you already use ParkMyCloud, you’ll need to make sure you’re subscribed to the Pro or Enterprise tier to have access to this advanced feature. 

Now it’s time to RightSize! Watch this video to see how you can get started in just 90 seconds: 

Happy savings!

Why Azure Databricks Usage is On the Rise

Have you been hearing a lot about Azure Databricks lately? We have. One of the nice things about talking with ParkMyCloud users is that we get to see trends, often before they are more widely recognized within the industry. Whether it’s the adoption of new instances or databases, or the usage of new tools and services, it’s always interesting to watch change occur. 

What is Databricks?

One such change over the last year or so has been an enormous increase in the use of very short-lived instances, typically running for less than 60 minutes, which get spun up as part of clusters. These are, in fact, Databricks clusters being used to run data analytics workloads. I had come across Databricks in relation to its unicorn status in the startup world (as of six months ago, the company was valued at close to $4B), so I guess it was only a matter of time before we began to see the fruits of their labor become popular. 

The Databricks story is an interesting one, beginning at UC Berkeley with the development of a research project, Apache Spark, in 2009. Apache Spark is described as a unified analytics engine for large-scale data processing, providing extremely fast cluster computing. The team that developed Spark went on to found Databricks in 2013, and the company has since raised $500MM in funding. 

The Databricks platform allows enterprises to build their data pipelines across data storage systems and prepare data sets for data scientists and engineers. To do this, Databricks offers a range of tools for building, managing and monitoring data pipelines. It enables the building of machine learning (ML) models, which have grown in parallel with the growth in big data within the enterprise. 

The product also takes an interesting approach to pricing, with its own usage-based billing methodology built on DBUs. A DBU, or Databricks Unit, is a unit of processing capability per hour, billed on per-second usage. This charge excludes the cost of the underlying instance (VM). The good thing is that the model is very transparent and provides a number of pricing options and tiers: depending on the tier and type of service required, prices range from $0.07/DBU for the Standard product on the Data Engineering Light tier to $0.55/DBU for the Premium product on the Data Analytics tier. Helpfully, Databricks offers online calculators for both Azure and AWS to help estimate costs, including the underlying infrastructure. The Azure Databricks pricing example can be seen here.
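
As a quick illustration of the billing model, here is the arithmetic in a few lines of Python. The DBU rates come from the tiers above; the per-hour VM price is a hypothetical placeholder:

```python
# DBU charges are billed per-second on top of the underlying VM cost.
DBU_RATES = {
    "standard_data_engineering_light": 0.07,  # $/DBU, from the text above
    "premium_data_analytics": 0.55,           # $/DBU, from the text above
}

def estimate_hourly_cost(dbus_per_hour: float, tier: str, vm_hourly: float) -> float:
    """Total hourly cost = DBU charge + underlying instance charge."""
    return dbus_per_hour * DBU_RATES[tier] + vm_hourly

# A cluster consuming 2 DBUs/hour on the premium analytics tier, running
# on a VM that costs a hypothetical $0.40/hour:
print(estimate_hourly_cost(2.0, "premium_data_analytics", 0.40))  # 1.5
```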

Databricks + Microsoft = Azure Databricks

A major breakthrough for the company was a unique partnership with Microsoft whereby their product is not just another item in the MS Azure Marketplace but rather is fully integrated into Azure, with the ability to spin up Azure Databricks in the same way you would a virtual machine. Once running, the service can scale automatically as users’ needs change, much as cloud autoscaling groups match supply against demand. 

Databricks is also available on other public clouds, most notably AWS (via the Marketplace). However, the level of integration is not the same as on Azure, and the service looks much more like a standard AWS Marketplace offering.

Why More and More Companies are Using Azure Databricks

What is clear is that the use of ML and AI has progressed from experimentation to production workloads, and these workloads now run at massive scale. This has been accompanied by the emergence of a new subset of DevOps called AIOps, which makes a lot of sense given the amount of infrastructure and services that now need to be configured and deployed to run such workloads.

In a forthcoming blog, we will dig a little deeper into the usage patterns for such workloads and into how organizations running them are utilizing the public cloud for these non-production workloads.

AWS Resource Optimization Recommendations: Good Enough or Not Quite There?

Earlier this week, AWS announced the launch of AWS resource optimization recommendations within their cost management portal. AWS claims that this will “identify opportunities for cost efficiency and act on them by terminating idle instances and rightsizing under-used instances.” Here’s what that actually means, and what functionality AWS still does not provide that users need in order to automate cost control.

AWS Recommendations Overview

AWS Recommendations are an enhancement to the existing cost optimization functionality covered by AWS Cost Explorer and AWS Trusted Advisor. Cost Explorer allows users to examine usage patterns over time. Trusted Advisor alerts users about resources with low utilization. These new recommendations actually suggest instances that may be a better fit. 

AWS Resource Optimization provides two types of recommendations for EC2 instances:

    • Terminate idle instances
    • Rightsize underutilized instances

These recommendations are generated based on 14 days of usage data. AWS considers “idle” instances to be those with peak CPU utilization below 1%, and “underutilized” instances to be those with maximum CPU utilization between 1% and 40%. 
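
Expressed as code, the classification rule is simple; this sketch just restates the documented thresholds:

```python
def classify(max_cpu_percent: float) -> str:
    """AWS's 14-day classification, per the thresholds described above."""
    if max_cpu_percent < 1.0:
        return "idle"           # AWS recommends termination
    if max_cpu_percent <= 40.0:
        return "underutilized"  # AWS recommends rightsizing
    return "ok"
```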

While any native functionality to control costs is certainly an improvement, users often express that they wish AWS would just have less complex billing in the first place. 

AWS Resource Optimization Tool vs. ParkMyCloud

ParkMyCloud offers cloud cost optimization through RightSizing for AWS, as well as Azure and Google Cloud, in addition to our automated scheduling to shut down resources when they are idle. Note that AWS’s new functionality does not include on/off schedule recommendations.

Here’s how the new AWS resource optimization tool stacks up against ParkMyCloud.

Types of Recommendations Generated

The AWS Resource Optimization tool will provide up to three recommendations for size changes within the same instance family, with the most conservative recommendation listed as the primary recommendation. Putting it another way, the top recommendation will be one size down from the current instance, the second recommendation will be two sizes down, etc. ParkMyCloud recommends the optimal instance type and size for the workload, regardless of the existing instance’s configuration. This includes instance modernization recommendations, which AWS does not offer.

The AWS tool generates recommendations for EC2 instances only, while ParkMyCloud provides scheduling and RightSizing recommendations for both EC2 and RDS. AWS also does not support GPU-based instances in its recommendations, while ParkMyCloud does. 

AWS customers must explicitly enable generation of recommendations in the AWS Cost Management tools. In ParkMyCloud, recommendations are generated automatically (with some access limitations based on subscription tier).  

ParkMyCloud allows you to manage resources across a multi-cloud environment including AWS, Azure, Google Cloud, and Alibaba Cloud. AWS’s tool, of course, only allows you to manage AWS resources.

Recommendation Quality

When you start to dig in, you’ll notice several limitations of the recommendations provided by AWS. The recommendations are based on utilization data from the last 14 days, a range that is not configurable. ParkMyCloud’s recommendations, on the other hand, can be based on anywhere from 1 to 24 weeks of data, with the range set by the customer per team, cloud provider, and resource type. 

Another important aspect of “optimization” that AWS does not allow the user to configure is the utilization thresholds. AWS assumes that any instance at less than 1% CPU utilization is idle, and that any instance between 1-40% CPU utilization is undersized. While these are reasonable rules of thumb, users need the ability to customize such thresholds to suit their own environments and use cases. AWS also takes an “all or nothing” approach: it recommends that any instance detected as idle simply be terminated. ParkMyCloud does not assume that low utilization means the instance should be terminated; instead, it suggests sizing and/or scheduling solutions specific to the utilization patterns, and allows users to select between Conservative, Balanced, or Aggressive schedule recommendations with customizable thresholds.

AWS also evaluates only “maximum CPU utilization” to determine idle resources. For resource schedule recommendations, by contrast, ParkMyCloud uses both peak and average CPU plus network utilization for all instances, as well as memory utilization for instances with the CloudWatch agent installed. For sizing recommendations, ParkMyCloud uses the maximum of average CPU, plus memory utilization data if available. 
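
Here is a rough illustration of why multiple signals matter: an instance can be CPU-quiet but still network-busy or memory-resident. The thresholds below are hypothetical, not ParkMyCloud’s actual values:

```python
def schedule_candidate(avg_cpu: float, peak_cpu: float,
                       avg_network_mbps: float,
                       avg_memory: float | None = None) -> bool:
    """Suggest an on/off schedule only if every available signal is quiet."""
    if peak_cpu > 20.0 or avg_cpu > 5.0:
        return False
    if avg_network_mbps > 1.0:  # still serving traffic
        return False
    if avg_memory is not None and avg_memory > 50.0:
        return False            # memory-resident workload; be careful
    return True
```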

Perhaps the most dangerous aspect of the AWS recommendations is that they will suggest an instance size change based on CPU alone, even without memory metrics. Because AWS does not make cross-family recommendations, each size down typically cuts the memory in half. ParkMyCloud RightSizing recommendations do not assume this is OK: in the absence of memory metrics, we make downsizing recommendations that keep memory constant. For a concrete example, here is an AWS recommendation to downsize from m5.xlarge to m5.large, cutting both CPU and memory, for a net savings of $60 per month.

In contrast, here is the ParkMyCloud Rightsizing Recommendation for the same instance:

You can see that while the AWS recommendation saves $60 per month by downsizing from m5.xlarge to m5.large, the top ParkMyCloud recommendation saves a very similar $57.67 by moving from m5.xlarge to r5a.large, keeping memory constant. While the savings differ by $2.33, this is a far less risky transition and probably worth the difference. In both cases, of course, memory data from the CloudWatch Agent would likely result in better recommendations.
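
The underlying rule is easy to sketch: with no memory metrics, only consider candidate types with at least as much memory as the current one. The mini-catalog and hourly prices below are illustrative figures, not live pricing data:

```python
CATALOG = {
    "m5.xlarge": {"vcpu": 4, "mem_gib": 16, "hourly": 0.192},
    "m5.large":  {"vcpu": 2, "mem_gib": 8,  "hourly": 0.096},
    "r5a.large": {"vcpu": 2, "mem_gib": 16, "hourly": 0.113},
}

def downsize_keep_memory(current: str) -> str | None:
    """Cheapest type with fewer vCPUs but no less memory than `current`."""
    cur = CATALOG[current]
    candidates = [
        (spec["hourly"], name)
        for name, spec in CATALOG.items()
        if spec["vcpu"] < cur["vcpu"] and spec["mem_gib"] >= cur["mem_gib"]
    ]
    return min(candidates)[1] if candidates else None

print(downsize_keep_memory("m5.xlarge"))  # r5a.large, not m5.large
```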

As shown in the AWS recommendation above, AWS provides the “RI hours” for the preceding 14 days, giving better visibility into the impact of resizing on your reserved instance usage, and uses this data for the cost and savings calculations. ParkMyCloud does not yet provide correlation of the size to RI usage, though that is planned for a future release.  That said, the AWS documentation also states “Rightsizing recommendations don’t capture second-order effects of rightsizing, such as the resulting RI hour’s availability and how they will apply to other instances. Potential savings based on reallocation of the RI hours aren’t included in the calculation.”  So the RI visibility on the AWS side has minimal impact on the quality of their recommendations.  

If the user is viewing the AWS recommendation from within the same account as the target EC2 instance, a “Go to the Amazon EC2 Console” button appears on the recommendation details, but it leads to the EC2 console for whatever your last-used region was, with no automatic filter for the specific instance ID. This means you need to navigate to the right region yourself (perhaps also requiring a new console login if the recommendation is for a different account in the Organization), and then find the instance to see the details. ParkMyCloud provides better ease of use: you can jump directly from the recommendation into the instance details, regardless of your AWS Organization account structure. ParkMyCloud: 1 click. AWS: at least five, plus copy/paste of the instance ID, and possibly a login.

ParkMyCloud also shows utilization data directly below the recommendation text, giving excellent context. AWS again requires navigating to the right account, then to EC2 and the right region, or to CloudWatch and the right metrics using the instance ID. 

AWS Resource Optimization also ignores instances that have not run in the past three days. ParkMyCloud takes this lack of utilization into consideration rather than discarding these instances from its recommendations. 

AWS regenerates recommendations every 24 hours. ParkMyCloud regenerates recommendations based on the calculation window set by the customer. 

Automation & Ease of Use

While AWS’s new recommendations are generated automatically, they all must be applied manually. ParkMyCloud allows users to accept and apply scheduling recommendations automatically, via a Policy Engine based on resource tagging and other criteria. RightSizing changes can be “applied now”, or scheduled to occur in the future, such as during a designated maintenance window. 
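
For reference, “applying” a resize with the plain AWS API boils down to a stop/modify/start sequence, since an instance’s type can only be changed while it is stopped. This is a sketch of that sequence, not ParkMyCloud’s internal implementation:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def resize_instance(instance_id: str, new_type: str) -> None:
    # The instance must be stopped before its type can be changed.
    ec2.stop_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

    ec2.modify_instance_attribute(
        InstanceId=instance_id,
        InstanceType={"Value": new_type},
    )

    ec2.start_instances(InstanceIds=[instance_id])
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
```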

There is also the question of visibility and access across team members. In AWS, users will need access to billing, which most users will not have. In ParkMyCloud, Team Leads have access to view and execute recommendations for resources assigned to their respective teams. Additionally, recommendations can be easily exported, so business units or teams can share and review recommendations before they’re accepted if required by their management process. 

AWS’s management console and user interface are often cited as confusing and difficult to use, a trend that has unfortunately carried forward to this feature. On the other hand, ParkMyCloud makes resource management straightforward with a user-friendly UI. 

Get Started

Want to see what ParkMyCloud will recommend for your environment? Try it out with a free trial, which gives you 14-day access to our entire feature set, and you can see what cost optimization recommendations we have for you.

VMware Cloud on AWS: A Hybrid Cloud Midpoint

VMware Cloud on AWS is an integrated hybrid cloud offering jointly developed by AWS and VMware. It’s targeted at enterprises looking to migrate on-premises vSphere-based workloads to the public cloud, and it provides access to native AWS services. 

Overview of VMware Cloud on AWS 

VMware Cloud on AWS provides an integrated hybrid cloud environment, allowing you to maintain a consistent infrastructure between the vSphere environment in your on-prem data center and the vSphere Software-Defined Data Center (SDDC) on AWS. It also provides a unified view and resource management of your on-prem data center and VMware SDDC on AWS with a single console. 

Digital transformation continues to drive businesses to the cloud to stay competitive. But integrating public cloud with existing private cloud infrastructure means bridging many technical processes and skill gaps between on-prem and cloud environments before the two can work together. This combined offering makes it easier for teams familiar with VMware to move into the public cloud without having to rewrite applications or modify operating models.

One reason this offering is attractive to customers is that it provides optimized access to native AWS services including compute, database, analytics, IoT, AI/ML, security, mobile, resource deployment, and application services.

Another reason is that, with automatic scaling and load balancing, VMware Cloud on AWS can adapt to changing business needs across global regions. VMware also positions the offering as a cost-effective way to reduce upfront investment, with no application re-factoring or re-architecting needed when migrating. We’ll take a look at the pricing options for on-demand and subscription models, but first, let’s see what VMware Cloud on AWS can do for the enterprise.

Use Cases for VMware Cloud on AWS 

Accelerated and Simplified Data Center Migration

VMware Cloud on AWS claims to accelerate and simplify the migration process for businesses by reducing migration efforts and complexity between on-prem environments and the cloud. Once in the cloud, users can leverage VMware and AWS services to modernize applications and run mission-critical applications quickly with VMware availability and performance combined with the elastic scale of AWS.

Extend the Data Center to the Cloud with Your Existing Skillset

This offering lets users who are used to VMware keep a consistent and familiar environment on the cloud. Since VMware Cloud on AWS doesn’t require re-tooling or re-educating, IT teams can continue to deliver consistently on vSphere-based infrastructure and operations that are already implemented in existing on-prem data centers. 

Add a Robust Disaster Recovery Service to Your Environment

One offering available is VMware Site Recovery: on-demand disaster recovery as a service, optimized for VMware Cloud on AWS to reduce risk without the need to maintain a secondary on-prem site. You can securely replicate workloads to VMware Cloud on AWS so you can spin them up on-demand if disaster strikes. 

Flexible Dev/Test Environment

You can use VMware SDDC-consistent dev/test environments that integrate with modern CI/CD automation tools and access native AWS services seamlessly. You can spin up an entire VMware SDDC in under two hours and scale host capacity in a few minutes.

VMware Cloud on AWS Cost Compared

So, how does the pricing shake out? Hosts can be purchased on-demand or as a 1-year or 3-year subscription. With on-demand pricing, you pay for the physical host by the hour that it is active, with no upfront cost; a long-term subscription provides up to 50% savings compared to an equivalent period of on-demand service, but you pay the costs upfront. It’s a similar idea to AWS Reserved Instances, which may or may not be worth the cost.
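
The subscription math is straightforward. Here is a sketch using a hypothetical $8/hour host and the “up to 50%” figure above; note the discount only pays off if the host actually runs for most of the term:

```python
HOURS_PER_YEAR = 8760

def term_costs(on_demand_hourly: float, discount: float = 0.50,
               years: int = 3) -> tuple[float, float]:
    """Return (on-demand total, subscription total) over the term,
    assuming the host runs around the clock."""
    on_demand_total = on_demand_hourly * HOURS_PER_YEAR * years
    return on_demand_total, on_demand_total * (1 - discount)

# A hypothetical $8/hour host over a 3-year term:
print(term_costs(8.0))  # (210240.0, 105120.0)
```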

Depending on the use case, pricing is similar to standard AWS pricing. See how it compares in price with standard AWS or estimate your costs with the pricing estimator. 

Top Tips for Using VMware Cloud on AWS

VMware Cloud on AWS is a good hybrid cloud option for those who want to stay in the VMware ecosystem while dipping their toe in AWS. Here are our top tips for using this offering:

  • Estimate prices in advance: One of the main reasons to estimate your pricing before committing to a subscription is to avoid overspending. Idle and overprovisioned resources result in wasted cloud spend, so make sure you’re not oversizing or paying for cloud resources that should be turned off. 
  • Educate stakeholders on the fact that this allows you to bridge on-premises infrastructure and public cloud without disruption.
  • Consider whether jumping straight to the cloud is possible for some workloads – many companies start with dev/test. If so, you may be able to skip this intermediary step.

AWS Postgres Pricing Comparison

Maybe you’re looking to use PostgreSQL in your AWS environment – if so, you need to make sure to evaluate pricing and compare your options before you decide. A traditional “lift and shift” of your database can cause quite a headache, so your DBA team likely wants to do it right the first time (and who doesn’t?). Let’s take a look at some of your options for running PostgreSQL databases in AWS.

Option 1: Self-Managed Postgres on EC2

If you’re currently running your databases on-premises or in a private cloud, then the simplest conversion to public cloud in AWS is to stand up an EC2 virtual machine and install the Postgres software on that VM. Since PostgreSQL is open source, there’s no additional charge for the software; you’ll just be paying for the VM (along with associated costs like storage and network transfer). AWS doesn’t offer custom instance sizes, but there are enough sizes across its instance families that you can find an option to match your existing server.
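
If you go this route, here is one hedged way to stand it up with boto3: launch the VM with a user-data script that installs Postgres at boot. The AMI ID is a placeholder, and the package and initialization commands assume an Amazon Linux-style image:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

USER_DATA = """#!/bin/bash
# Package name and init command vary by distribution.
yum install -y postgresql-server
postgresql-setup initdb
systemctl enable --now postgresql
"""

ec2.run_instances(
    ImageId="ami-xxxxxxxx",   # placeholder; look up a current AMI for your region
    InstanceType="m5.large",  # 2 vCPU / 8 GB, as sized below
    MinCount=1,
    MaxCount=1,
    UserData=USER_DATA,
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",
        "Ebs": {"VolumeSize": 100, "VolumeType": "gp2"},  # 100 GB storage
    }],
)
```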

As an example, let’s say you’d like to run an EC2 instance with 2 CPUs, 8 GB of memory, and 100 GB of storage in the us-east-1 region. An m5.large would fit this workload, costing approximately $70 per month for compute plus $10 per month for storage. On the plus side, there are no additional costs for transferring existing data into the system (AWS charges only for outbound data transfer).

The biggest benefit of running your own EC2 server with Postgres installed is that you can make any configuration changes and run any external software you see fit. Tools like pgbouncer for connection pooling or pg_jobmon for logging within transactions require the self-management this EC2 setup provides. Additional performance tuning based on direct access to the Postgres configuration files is also possible with this method.

Option 2: AWS Relational Database Service for Hosted Postgres Databases

If your database doesn’t require custom configuration or community projects to run, then using the AWS RDS service may work for you. This hosted service comes with some great options that you may not take the time to implement with your own installation, including:

    • Automated backups
    • Multi-AZ options (for automatic synchronization to a standby in another availability zone)
    • Behind-the-scenes patching to the latest version of Postgres
    • Monitoring via CloudWatch
    • Built-in encryption options

These features are all fantastic, but they do come at a price. The same instance size as above, an m5.large with 2 CPUs and 8 GB of memory, is approximately $130 per month for a single AZ, or $260 per month for a multi-AZ setup.

Option 3: Postgres-Compatible AWS Aurora

One additional option when looking at AWS Postgres pricing is AWS Aurora. This AWS-created database option is fully compatible with existing Postgres workloads, but adds auto-scaling and additional performance throughput. The price is also attractive: a similar size, db.r5.large, in a multi-AZ configuration runs about $211 per month (plus storage and backup costs per GB). This is great if you’re all-in on AWS services, but might not work if you don’t like staying on the absolute latest Postgres version (or don’t want to become dependent on AWS).

AWS Postgres Pricing Comparison

Comparing the costs of these three options gives us the following (a quick side-by-side calculation follows the list): 

  • Self-managed EC2 – $80/month
  • Hosted RDS running Postgres in a single AZ – $130/month
  • Hosted RDS running Postgres in multiple AZ’s – $260/month
  • Hosted RDS running Aurora in multiple AZ’s – $211/month
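
The same comparison as a quick calculation (monthly figures from the text above):

```python
options = {
    "self-managed EC2 (m5.large + 100 GB gp2)": 70 + 10,
    "RDS Postgres, single AZ": 130,
    "RDS Postgres, multi-AZ": 260,
    "Aurora Postgres, multi-AZ (db.r5.large)": 211,
}

for name, monthly in sorted(options.items(), key=lambda kv: kv[1]):
    print(f"${monthly:>4}/month  {name}")
```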

Running an EC2 instance yourself is clearly the cheapest option from a pure cost perspective, but you’d better know how to manage and tune your Postgres settings for this to work. If you want your database to “just work” without worrying about losing data or accessibility, then the Aurora option is the best value, as the additional cost covers many features you’ll wonder how you ever lived without.
