Exciting news: RightSizing is now generally available in ParkMyCloud! You can now use this method for automated cost optimization alongside scheduling to achieve an optimized cloud bill in AWS, Azure, and Google Cloud.
How it Works
When you RightSize an instance, you find the optimal virtual machine size and type for its workload.
Why is this necessary? Cloud providers offer a myriad of instance type options, which can make it difficult to select the right option for the needs of each and every one of your instances. Additionally, users often select the largest size and compute power available, whether it’s because they don’t know their workload needs in advance, don’t see cost as their problem, or “just in case”.
In fact, our analysis of instances being managed in ParkMyCloud showed that 95% of instances were operating at less than 50% average CPU, which means they are oversized and wasting money.
Now with ParkMyCloud’s RightSizing capability, you can quickly and easily – even automatically – resolve these sizing discrepancies to save money. ParkMyCloud uses your actual usage data to make these recommendations, and provides three recommendation options, which can include size changes, family/type changes, and modernization changes. Users can choose to accept these recommendations manually or schedule the changes to occur at a later date.
How Much You Can Save
A single instance change can save 50% or more of the cost. In the example shown here, ParkMyCloud recommends three possible changes for this instance, which would save 40-68% of the cost.
At scale, the savings potential can be dramatic. For example, one enterprise customer who beta-tested RightSizing found that their RightSizing recommendations added up to $82,775.60 in savings – an average of more than $90 per month, or more than $1,000 per year, for every instance in their environment.
How to Get Started
Are you already using ParkMyCloud? If not, go ahead and register for a free trial. You’ll have full access for 14 days to try out ParkMyCloud in your own environment – including RightSizing.
If you already use ParkMyCloud, you’ll need to make sure you’re subscribed to the Pro or Enterprise tier to have access to this advanced feature.
Now it’s time to RightSize! Watch this video to see how you can get started in just 90 seconds:
Earlier this week, AWS announced the launch of AWS resource optimization recommendations within their cost management portal. AWS claims that this will “identify opportunities for cost efficiency and act on them by terminating idle instances and rightsizing under-used instances.” Here’s what that actually means, and what functionality AWS still does not provide that users need in order to automate cost control.
AWS Recommendations Overview
AWS Recommendations are an enhancement to the existing cost optimization functionality covered by AWS Cost Explorer and AWS Trusted Advisor. Cost Explorer allows users to examine usage patterns over time. Trusted Advisor alerts users about resources with low utilization. These new recommendations actually suggest instances that may be a better fit.
AWS Resource Optimization provides two types of recommendations for EC2 instances:
Terminate idle instances
Rightsize underutilized instances
These recommendations are generated based on 14 days of usage data. The tool considers “idle” instances to be those with lower than 1% peak CPU utilization, and “underutilized” instances to be those with maximum CPU utilization between 1% and 40%.
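As a rough sketch, AWS's documented classification boils down to a simple threshold check on peak CPU. The function name and return labels below are our own illustration, not AWS API names:

```python
def classify_instance(peak_cpu_percent: float) -> str:
    """Classify an EC2 instance using AWS Resource Optimization's
    documented thresholds, assuming 14 days of CloudWatch data."""
    if peak_cpu_percent < 1.0:
        return "idle"           # AWS recommendation: terminate
    elif peak_cpu_percent <= 40.0:
        return "underutilized"  # AWS recommendation: rightsize down
    else:
        return "optimized"      # no recommendation generated
```

For example, `classify_instance(0.5)` returns `"idle"`, while an instance peaking at 25% CPU comes back `"underutilized"`. Note how coarse this is: the only input is CPU.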
While any native functionality to control costs is certainly an improvement, users often express that they wish AWS would just have less complex billing in the first place.
AWS Resource Optimization Tool vs. ParkMyCloud
ParkMyCloud offers cloud cost optimization through RightSizing for AWS, as well as Azure and Google Cloud, in addition to our automated scheduling to shut down resources when they are idle. Note that AWS’s new functionality does not include on/off schedule recommendations.
Here’s how the new AWS resource optimization tool stacks up against ParkMyCloud.
Types of Recommendations Generated
The AWS Resource Optimization tool will provide up to three recommendations for size changes within the same instance family, with the most conservative recommendation listed as the primary recommendation. Putting it another way, the top recommendation will be one size down from the current instance, the second recommendation will be two sizes down, etc. ParkMyCloud recommends the optimal instance type and size for the workload, regardless of the existing instance’s configuration. This includes instance modernization recommendations, which AWS does not offer.
The AWS tool generates recommendations for EC2 instances only, while ParkMyCloud provides scheduling and RightSizing recommendations for both EC2 and RDS. AWS also does not support GPU-based instances in its recommendations, while ParkMyCloud does.
AWS customers must explicitly enable generation of recommendations in the AWS Cost Management tools. In ParkMyCloud, recommendations are generated automatically (with some access limitations based on subscription tier).
ParkMyCloud allows you to manage resources across a multi-cloud environment including AWS, Azure, Google Cloud, and Alibaba Cloud. AWS’s tool, of course, only allows you to manage AWS resources.
When you start to dig in, you’ll notice several limitations of the recommendations provided by AWS. The recommendations are based on utilization data from the last 14 days, a range that is not configurable. ParkMyCloud’s recommendations, on the other hand, can be based on anywhere from 1 to 24 weeks of data, configurable by the customer per team, cloud provider, and resource type.
Another important aspect of “optimization” that AWS does not allow the user to configure is the utilization thresholds. AWS assumes that any instance at less than 1% CPU utilization is idle, and that any instance between 1-40% CPU utilization is underutilized. While these are reasonable rules of thumb, users need the ability to customize such thresholds to best suit their own environment and use cases. AWS also takes an “all or nothing” approach – it recommends that any instance detected as idle simply be terminated. ParkMyCloud does not assume that low utilization means the instance should be terminated; instead, it suggests sizing and/or scheduling solutions specific to the utilization patterns, and allows users to select between Conservative, Balanced, or Aggressive schedule recommendations with customizable thresholds.
AWS also only evaluates “maximum CPU utilization” to determine idle resources. ParkMyCloud, by contrast, uses both peak and average CPU plus network utilization for schedule recommendations on all instances, and memory utilization for instances with the CloudWatch agent installed. For sizing recommendations, ParkMyCloud uses maximum and average CPU, plus memory utilization data where available.
Perhaps the most dangerous aspect of the AWS recommendations is that AWS will recommend an instance size change based on CPU alone, even when it has no memory metrics. Without cross-family recommendations, each size down typically cuts the memory in half. ParkMyCloud RightSizing recommendations do not assume this is OK: in the absence of memory metrics, we make downsizing recommendations that keep memory constant. For a concrete example, here is an AWS recommendation to downsize from m5.xlarge to m5.large, cutting both CPU and memory, and resulting in net savings of $60 per month.
In contrast, here is the ParkMyCloud Rightsizing Recommendation for the same instance:
You can see that while the AWS recommendation can save $60 per month by downsizing from m5.xlarge to m5.large, the top ParkMyCloud recommendation saves a very similar $57.67 by moving from m5.xlarge to r5a.large, keeping memory constant. While the savings differ by $2.33, this is a far less risky transition and probably worth the difference. In both cases, of course, memory data from the CloudWatch agent would likely result in better recommendations.
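The memory-constant logic can be sketched as follows. The catalog entries and monthly prices here are illustrative stand-ins for real us-east-1 pricing, and the function is our own simplification, not ParkMyCloud's actual algorithm:

```python
# Hypothetical instance catalog: specs and approximate monthly prices
# (illustrative only, not actual AWS list prices).
CATALOG = {
    "m5.xlarge": {"vcpu": 4, "mem_gib": 16, "usd_month": 140},
    "m5.large":  {"vcpu": 2, "mem_gib": 8,  "usd_month": 70},
    "r5a.large": {"vcpu": 2, "mem_gib": 16, "usd_month": 82},
}

def safe_downsize(current: str, have_memory_metrics: bool) -> str:
    """Pick the cheapest type with fewer vCPUs; without memory metrics,
    only consider candidates that keep memory at least constant."""
    cur = CATALOG[current]
    candidates = [
        name for name, spec in CATALOG.items()
        if spec["vcpu"] < cur["vcpu"]
        and (have_memory_metrics or spec["mem_gib"] >= cur["mem_gib"])
    ]
    return min(candidates, key=lambda n: CATALOG[n]["usd_month"])
```

With no memory data, `safe_downsize("m5.xlarge", have_memory_metrics=False)` lands on the memory-constant `r5a.large`; only when memory metrics confirm low usage would the cheaper `m5.large` be considered.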
As shown in the AWS recommendation above, AWS provides the “RI hours” for the preceding 14 days, giving better visibility into the impact of resizing on your reserved instance usage, and uses this data for the cost and savings calculations. ParkMyCloud does not yet provide correlation of the size to RI usage, though that is planned for a future release. That said, the AWS documentation also states “Rightsizing recommendations don’t capture second-order effects of rightsizing, such as the resulting RI hour’s availability and how they will apply to other instances. Potential savings based on reallocation of the RI hours aren’t included in the calculation.” So the RI visibility on the AWS side has minimal impact on the quality of their recommendations.
If the user is viewing the AWS recommendation from within the same account as the target EC2 instance, a “Go to the Amazon EC2 Console” button appears on the recommendation details, but it leads to the EC2 console for whatever your last-used region was, without an automatic filter for the specific instance ID. This means you need to navigate to the right region yourself (perhaps also requiring a new console login if the recommendation is for a different account in the Organization), and then find the instance to see the details. ParkMyCloud provides better ease-of-use in that you can jump directly from the recommendation into the instance details, regardless of your AWS Organization account structure. ParkMyCloud: one click. AWS: at least five, plus a copy/paste of the instance ID, and possibly a login.
ParkMyCloud also shows utilization data directly below the recommendation text, giving excellent context. AWS again requires navigating to the right account, then to the EC2 console and region, or to CloudWatch and the right metrics using the instance ID.
AWS Resource Optimization also ignores instances that have not been run for the past three days. ParkMyCloud takes this lack of utilization into consideration and does not discard these instances from recommendations.
AWS regenerates recommendations every 24 hours. ParkMyCloud regenerates recommendations based on the calculation window set by the customer.
Automation & Ease of Use
While AWS’s new recommendations are generated automatically, they all must be applied manually. ParkMyCloud allows users to accept and apply scheduling recommendations automatically, via a Policy Engine based on resource tagging and other criteria. RightSizing changes can be “applied now”, or scheduled to occur in the future, such as during a designated maintenance window.
There is also the question of visibility and access across team members. In AWS, users will need access to billing, which most users will not have. In ParkMyCloud, Team Leads have access to view and execute recommendations for resources assigned to their respective teams. Additionally, recommendations can be easily exported, so business units or teams can share and review recommendations before they’re accepted if required by their management process.
AWS’s management console and user interface are often cited as confusing and difficult to use, a trend that has unfortunately carried forward to this feature. On the other hand, ParkMyCloud makes resource management straightforward with a user-friendly UI.
Want to see what ParkMyCloud will recommend for your environment? Try it out with a free trial, which gives you 14-day access to our entire feature set, and you can see what cost optimization recommendations we have for you.
VMware Cloud on AWS is an integrated hybrid cloud offering jointly developed by AWS and VMware. It’s targeted at enterprises looking to migrate on-premises vSphere-based workloads to the public cloud, and provides access to native AWS services.
Overview of VMware Cloud on AWS
VMware Cloud on AWS provides an integrated hybrid cloud environment, allowing you to maintain a consistent infrastructure between the vSphere environment in your on-prem data center and the vSphere Software-Defined Data Center (SDDC) on AWS. It also provides a unified view and resource management of your on-prem data center and VMware SDDC on AWS with a single console.
Digital transformation continues to drive businesses to the cloud to stay competitive. But integrating public cloud with existing private cloud infrastructure requires bridging many technical processes and skill gaps between on-prem and cloud environments for the two to work simultaneously. This combined offering makes it easier for those familiar with VMware to move into the public cloud without having to rewrite applications or modify operating models.
One reason this offering is attractive to customers is that it provides optimized access to native AWS services including compute, database, analytics, IoT, AI/ML, security, mobile, resource deployment, and application services.
Another reason is that with automatic scaling and load balancing, VMware Cloud on AWS can adapt to changing business needs across global regions. VMware also positions it as a cost-effective solution that reduces upfront investment costs, with no application re-factoring or re-architecting needed when migrating. We’ll take a look at the pricing options for on-demand and subscription models, but first, let’s see what VMware Cloud on AWS can do for the enterprise.
Use Cases for VMware Cloud on AWS
Accelerated and Simplified Data Center Migration
VMware Cloud on AWS claims to accelerate and simplify the migration process for businesses by reducing migration efforts and complexity between on-prem environments and the cloud. Once in the cloud, users can leverage VMware and AWS services to modernize applications and run mission-critical applications quickly with VMware availability and performance combined with the elastic scale of AWS.
Extend the Data Center to the Cloud with Your Existing Skillset
This offering lets users who are used to VMware keep a consistent and familiar environment on the cloud. Since VMware Cloud on AWS doesn’t require re-tooling or re-educating, IT teams can continue to deliver consistently on vSphere-based infrastructure and operations that are already implemented in existing on-prem data centers.
Add a Robust Disaster Recovery Service to Your Environment
One offering available is VMware Site Recovery: on-demand disaster recovery as a service, optimized for VMware Cloud on AWS to reduce risk without the need to maintain a secondary on-prem site. You can securely replicate workloads to VMware Cloud on AWS so you can spin them up on-demand if disaster strikes.
Flexible Dev/Test Environment
You can use VMware SDDC-consistent dev/test environments that can integrate with modern CI/CD automation tools and access native AWS services seamlessly. You can spin up an entire VMware SDDC in under two hours and scale host capacity in a few minutes.
VMware Cloud on AWS Cost Compared
So, how does the pricing shake out? Hosts can be purchased on-demand or as a 1-year or 3-year subscription. If you choose on-demand pricing, you’ll pay for the physical host by the hour that the host is active with no upfront cost, while the long-term subscription is set to provide up to 50% cost savings over an equivalent period compared to on-demand service, but you pay the costs upfront. It’s a similar idea to AWS Reserved Instances, which may or may not be worth the cost.
Depending on the use case, pricing is similar to standard AWS pricing. See how it compares in price with standard AWS or estimate your costs with the pricing estimator.
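The on-demand vs. subscription trade-off is simple break-even arithmetic. The numbers below are illustrative placeholders, not actual VMware list prices:

```python
def breakeven_hours(on_demand_per_hour: float, subscription_total: float) -> float:
    """Hours of host uptime at which a prepaid subscription becomes
    cheaper than paying on-demand for the same period."""
    return subscription_total / on_demand_per_hour

# Illustrative example: a host at $8/hr on demand, vs. a 1-year
# subscription priced at 50% of full 24/7 on-demand usage.
hourly = 8.0
one_year_sub = 0.5 * hourly * 24 * 365   # $35,040 upfront
hours = breakeven_hours(hourly, one_year_sub)
print(hours)  # 4380.0 hours, i.e. ~50% uptime over the year
```

In other words, under these assumed prices, if you expect the host to run more than about half the time, the subscription wins; below that, on-demand is cheaper, which is the same calculus as AWS Reserved Instances.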
Top Tips for Using VMware Cloud on AWS
VMware Cloud on AWS is a good hybrid cloud option for those who want to stay in the VMware ecosystem while dipping their toe in AWS. Here are our top tips for using this offering:
Estimate prices in advance: One of the main reasons you want to estimate your pricing before committing to a subscription is to avoid overspend. Idle and overprovisioned resources you are not actually using result in wasted cloud spend, so make sure you’re not oversizing or spending money on cloud resources that should be turned off.
Educate stakeholders on the fact that this allows you to bridge on-premises infrastructure and public cloud without disruption.
Consider whether jumping straight to the cloud is possible for some workloads – many companies start with dev/test. If so, you may be able to skip this intermediary step.
Maybe you’re looking to use PostgreSQL in your AWS environment – if so, you need to make sure to evaluate pricing and compare your options before you decide. A traditional “lift and shift” of your database can cause quite a headache, so your DBA team likely wants to do it right the first time (and who doesn’t?). Let’s take a look at some of your options for running PostgreSQL databases in AWS.
Option 1: Self-Managed Postgres on EC2
If you’re currently running your databases on-premises or in a private cloud, then the simplest conversion to public cloud in AWS is to stand up an EC2 virtual machine and install the Postgres software on that VM. Since PostgreSQL is open-source, there’s no additional charge for running the software, so you’ll just be paying for the VM (along with associated costs like storage and network transfer). AWS doesn’t have custom instance sizes, but they have enough different sizes across instance families that you can find an option to match your existing server.
As an example, let’s say you’d like to run an EC2 instance with 2 CPUs and 8 GB of memory and 100GB of storage in the us-east-1 region. An m5.large system would work for this, which would cost approximately $70 per month for compute, plus $10 per month for storage. On the plus side, there will be no additional costs if you are transferring existing data into the system (there’s only outbound data transfer costs for AWS).
The biggest benefit of running your own EC2 server with Postgres installed is that you can make any configuration changes or run external software as you see fit. Tools like pgbouncer for connection pooling or pg_jobmon for logging within transactions require the self-management provided by this EC2 setup. Additional performance tuning based on direct access to the Postgres configuration files is also possible with this method.
Option 2: AWS Relational Database Service for Hosted Postgres Databases
If your database doesn’t require custom configuration or community projects to run, then using the AWS RDS service may work for you. This hosted service comes with some great options that you may not take the time to implement with your own installation, including:
Multi-AZ options (for automatic synchronization to a standby in another availability zone)
Behind-the-scenes patching to the latest version of Postgres
These features are all fantastic, but they do come at a price. The same instance size as above, an m5.large with 2 CPUs and 8 GB of memory, is approximately $130 per month for a single AZ, or $260 per month for a multi-AZ setup.
Option 3: Postgres-Compatible AWS Aurora
One additional option when looking at AWS Postgres pricing is AWS Aurora. This AWS-created database option is fully compatible with existing Postgres workloads, but enables auto-scaling and additional performance throughput. The price is also attractive, as a similarly sized db.r5.large in a multi-AZ configuration would be $211 per month (plus storage and backup costs per GB). This is great if you’re all-in on AWS services, but might not work if you don’t like staying on the absolute latest Postgres version (or don’t want to become dependent on AWS).
AWS Postgres Pricing Comparison
Comparing the costs of these three options gives us:
Self-managed EC2 – $80/month
Hosted RDS running Postgres in a single AZ – $130/month
Hosted RDS running Postgres in multiple AZ’s – $260/month
Hosted RDS running Aurora in multiple AZ’s – $211/month
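The comparison above can be reduced to a quick calculation. The figures are the approximate estimates from this post, not exact AWS pricing:

```python
# Approximate monthly costs from the comparison above (us-east-1,
# m5.large-class instances; figures are rough estimates, not quotes).
options = {
    "Self-managed EC2": 70 + 10,          # compute + 100 GB storage
    "RDS Postgres, single AZ": 130,
    "RDS Postgres, multi-AZ": 260,
    "Aurora Postgres, multi-AZ": 211,
}

cheapest = min(options, key=options.get)
print(cheapest, options[cheapest])  # Self-managed EC2 80
```

Cheapest on paper is self-managed EC2 at $80/month, though that figure excludes the operational cost of administering Postgres yourself, which is exactly the trade-off discussed below.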
Running an EC2 instance yourself is clearly the cheapest option from a pure cost perspective, but you’d better know how to manage and tune your Postgres settings for this to work. If you want your database to “just work” without worrying about losing data or accessibility, then the Aurora option is the best value, as the additional costs cover many features that you’ll wonder how you ever lived without.
AWS credits are a way to save on your Amazon Web Services (AWS) bill. Credits are applied to AWS cloud bills to help cover costs associated with eligible services, and are applied until they are exhausted or they expire. Essentially, credits are a coupon-code-like mechanism that Amazon applies to your bill. If you want to see how to redeem your AWS promotional credits, look here. So how do you get these credits? There are a number of ways – here are some that we have either used ourselves or that have been successfully used by our customers.
AWS Activate
With AWS Activate, companies can build or scale with up to $100,000 in AWS promotional credits. AWS Activate is ideal for startups because they get access to resources as quickly as possible, and AWS provides them with low-cost, easy-to-use infrastructure to help them grow.
This is a big help for startups: knowing they are getting their money’s worth with these credits lets them focus on one thing – growth. If you are looking to get started on AWS, definitely check this out.
Publish an Alexa Skill
For all you developers: for each Alexa skill that you publish, you can apply to receive a $100 AWS promotional credit. Take advantage of these credits to realize your skills’ full potential!
AWS Cloud Credits for Research
AWS Cloud Credits for Research evaluates proposals from academic researchers at accredited institutions around the world. Researchers who apply to this program aim to build a cloud-hosted service, software, or tools, and/or to migrate a research process or open data to the cloud. The credit amount awarded will vary depending on the cost model and usage requirements documented in the research proposal.
Attend AWS Webinars and Events
Attending AWS webinars, events, and conferences can get you AWS credits. In order to be awarded the credits, you’ll have to provide proof that you actually attended. Make sure to keep an eye on the AWS events page, as new events are added all the time.
AWS Educate
In an effort to educate the next generation of cloud professionals, AWS has made AWS Educate available to institutions, educators, and students. It provides institutions with the training resources, cloud-related learning, and course content that educators and students need. Students have the opportunity to receive credits by getting hands-on experience with AWS technology, training, content, and career pathways.
At member institutions, educators earn $200 in AWS credits, compared to $75 at non-member institutions. Students receive an AWS Educate Starter Account along with $50 in credits at a member institution, or $35 at a non-member institution. To make this even more appealing, AWS will award students and staff additional credits if the institution signs up as a member.
AWS Credit Program for Nonprofits
Through TechSoup Global, eligible nonprofit organizations can request one $2,000 grant of AWS credits per fiscal year.
AWS Free Tier
As always, AWS Free Tier is a great option to get access to AWS products for no cost. Customers can use the product for free up to specified limits for one year from the date the account was created.
This includes 750 hours of Amazon EC2 Linux t2.micro instance usage, 5 GB of Amazon S3 standard storage, 750 hours of Amazon RDS Single-AZ db.t2.micro Instances, one million AWS Lambda requests and you can build and host most Alexa skills for free.
AWS EdStart
AWS focuses on education technology startups’ long-term success with its AWS EdStart program. AWS aims to provide these businesses with the resources they need to get started on AWS as quickly and easily as possible, to ensure they have every opportunity to prosper. After applying and getting approved, businesses will receive their credit validation. The credit amount awarded is based on the business’s needs.
You only have access to promotional credits for a limited time, so make sure you take advantage of all these opportunities if you can! Whether you are just getting started with AWS or have been using it for a while, there are plenty of credits and resources available to make AWS an affordable option for you.
AWS CloudWatch is Amazon Web Services’ primary monitoring tool for your cloud environment. Whether you are aware of it or not, if you use AWS, your data is being collected in CloudWatch, on more metrics than you probably know what to do with. With a small amount of effort, however, you can make this data work for you to reduce costs, automatically.
What AWS CloudWatch Data Tells You
First of all, is AWS CloudWatch collecting data about your utilization? Almost certainly, the answer is yes. As an example, metrics collected by default for EC2 include CPU utilization, disk read/write operations, network in/out, and instance status checks.
Like many AWS services, AWS CloudWatch has a free tier that covers the needs of many applications. You’ll need to pay more for custom metrics; extra dashboards, alarms, and logs; and custom events.
Also worth keeping in mind is that data is retained based on its resolution. Data points with a period of 60 seconds are kept for 15 days, while higher-resolution data points are kept for as little as 3 hours, and lower-resolution data points for up to 15 months.
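As a minimal sketch of putting this data to work, the snippet below pulls two weeks of hourly CPU data for one instance and summarizes it. It assumes a configured boto3 CloudWatch client is passed in, and the instance ID would be your own:

```python
import datetime

def cpu_summary(datapoints):
    """Reduce CloudWatch datapoints to (average CPU, peak CPU)."""
    avg = sum(p["Average"] for p in datapoints) / len(datapoints)
    peak = max(p["Maximum"] for p in datapoints)
    return avg, peak

def fetch_cpu(cloudwatch, instance_id, days=14):
    """Pull `days` of hourly CPUUtilization for one EC2 instance,
    given a boto3 CloudWatch client (credentials assumed configured)."""
    now = datetime.datetime.utcnow()
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=now - datetime.timedelta(days=days),
        EndTime=now,
        Period=3600,               # one datapoint per hour
        Statistics=["Average", "Maximum"],
    )
    return cpu_summary(resp["Datapoints"])
```

An instance whose summary comes back with a low average and peak is a candidate for scheduling or rightsizing, which is exactly the kind of analysis discussed below.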
What to do with all this data? First, set up your AWS CloudWatch dashboard(s) and create alerts on the metrics that are important to you. The next step is to use this data for automated optimization of your environment.
How to Turn That Data into Automated Cost Savings
Most organizations using public cloud are wasting thousands or even hundreds of thousands of dollars on cloud resources they’re not actually using. Even if you’re aware of overspend, you may not think you have the time or bandwidth to address the issue. With automation, integrating cost control into your daily processes can be straightforward.
One ParkMyCloud customer, Kurt Brochu of Sysco Foods, once told us, “To me, the magic is that the platform empowers the end user to make decisions for the betterment of the business.” His team has achieved a lifetime ROI of 1400% using ParkMyCloud. AWS data and ParkMyCloud’s automation capabilities empower his users to identify what spend is necessary, and what can be optimized.
Learn how – by hearing from Kurt directly!
We’re joining together with him and AWS to discuss how to empower your team to use AWS CloudWatch data to optimize cloud costs. The webinar is now complete – watch a replay here!
The webinar gives you an understanding of:
What Amazon CloudWatch and AWS Trusted Advisor data is available that can help you save money
Real-world examples of cost savings
Potential data availability challenges and “gotchas” and how to address them
How ParkMyCloud’s automated cost optimization platform can use your utilization data to optimize costs