New: SmartParking for Google Database and AWS RDS Cost Optimization

Today, we’re happy to share the latest cost control functionality in ParkMyCloud: SmartParking for Google database and AWS RDS cost optimization – as well as several other improvements and updates to help you find and eliminate cloud waste.

Automatically Detect Idle Google & AWS RDS Databases

“SmartParking” is what we call automatic on/off schedule recommendations based on utilization history. ParkMyCloud analyzes your resource utilization history and creates recommended schedules for each resource to turn them off when they are typically idle. This minimizes idle time to maximize savings on cloud resources.

Like an investment portfolio, users can choose to receive SmartParking schedules that are “conservative”, “balanced”, or “aggressive” — where conservative schedules protect all historic “on” times, while aggressive schedules prioritize maximum savings.
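
ParkMyCloud hasn't published the internals of its recommendation engine, but the idea of profile-based schedule recommendations can be sketched as follows. Everything here is illustrative: the function names, the 5% idle threshold, and the per-profile coverage fractions are hypothetical, chosen only to show how "conservative" vs. "aggressive" might trade off protected on-time against savings.

```python
# Illustrative sketch only -- not ParkMyCloud's actual algorithm.
# Threshold and profile fractions below are hypothetical assumptions.

IDLE_CPU_THRESHOLD = 5.0  # percent CPU below which an hour counts as idle

# Fraction of days in the history window that must be idle at a given
# hour-of-day before that hour is recommended for parking.
PROFILES = {
    "conservative": 1.00,  # never park an hour that was ever busy
    "balanced": 0.90,
    "aggressive": 0.75,    # park hours that are usually, not always, idle
}

def recommend_off_hours(hourly_cpu_history, profile="balanced"):
    """hourly_cpu_history: list of days, each a list of 24 avg CPU% values.
    Returns the set of hours (0-23) recommended to be parked."""
    required = PROFILES[profile]
    days = len(hourly_cpu_history)
    off_hours = set()
    for hour in range(24):
        idle_days = sum(
            1 for day in hourly_cpu_history if day[hour] < IDLE_CPU_THRESHOLD
        )
        if idle_days / days >= required:
            off_hours.add(hour)
    return off_hours

# Example: a dev database idle overnight (hours 0-7 and 20-23) every day
# across two weeks of history.
history = [
    [1] * 8 + [40] * 12 + [1] * 4  # low CPU overnight, busy 08:00-19:59
    for _ in range(14)
]
print(sorted(recommend_off_hours(history, "conservative")))
# -> [0, 1, 2, 3, 4, 5, 6, 7, 20, 21, 22, 23]
```

With noisier real-world data, the conservative profile would exclude any hour that was busy even once, while the aggressive profile would still park hours that are idle most of the time.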

With this release, Google Cloud SQL Databases and AWS RDS instances have been added to the list of resources that can be optimized with SmartParking – a list that also includes AWS EC2 instances, Azure virtual machines, and Google Cloud virtual machine instances.

Why not Azure? At this time, Azure databases can’t be “turned off” in the same way that AWS and Google Cloud databases can. If Azure releases this capability in the future, we will follow with parking and SmartParking capability shortly thereafter.

What Else is New?

In this release, other updates to the ParkMyCloud platform include:

  • Configurable notifications – users can now configure shutdown warning notification times, from 15 minutes to 24 hours in advance. Notifications can be received through email, Slack, Microsoft Teams, Google Hangouts, or custom webhook.  
  • Usability updates to Single Sign-On configuration, Google Cloud Credentials add/edit screen, and filtering actions.

See full details in the release notes.

Beyond this most recent release, we’ve made plenty of other updates to make ParkMyCloud work for you.

How to Get Started  

It’s easy to get started with Google database and RDS cost optimization! If you haven’t tried out ParkMyCloud yet, get started with a 14-day free trial. During the trial, you’ll have access to the Enterprise tier, which lets you try out all the features listed above. After your trial is over, you can choose to subscribe to the tier that works for you – or keep using our free tier for as long as you like. See pricing details for more information.

If you already use ParkMyCloud, just log in and head over to the Recommendations tab. Depending on the time-window configured for your SmartParking settings, it may take several days or weeks to accumulate enough metrics data to make good recommendations. To configure the time window for recommendations, navigate to Recommendations and select the gear icon in the upper-right, and choose SmartParking Recommendation Settings. Then, sit back while we collect and analyze your data, and your databases will be SmartParking before you know it.


3 Things to Look Forward to at Google Cloud Next 2019

Google Cloud Next 2019 will be our first Google event – and we’re looking forward to it! Google hopes to attract 30,000 attendees this year – up from 23,000 last year – to the San Francisco conference. This is the largest gathering of Google Cloud users, and features three days of networking, learning, and problem solving. Here are 3 things to look forward to at the event this year.

1. Announcements

As with any event of this scale, Google Cloud has been saving up announcements to make at their flagship event. At the event last year, Google Cloud made over 100 announcements. While some of those listed seem to stretch the idea of an announcement – customer case studies, for example – others were more interesting, ranging from Google Cloud Functions (serverless) to Istio for microservices management to resource-based pricing. They’re sure to have some exciting developments to share for 2019.

2. Speakers & Sessions

This year, the event has more than 30 featured speakers, and attendees will get to hear from executives from throughout the Google Cloud organization as well as their top customers and partners.

There will be hundreds of breakout sessions on 18 tracks. While the sessions you choose to attend will likely focus on the track most relevant to your job role and areas where you’re looking to grow, be sure to scan the full list for other cool sessions. A few that caught my eye…

You can also get certified while at the conference. If possible, we recommend doing this on Monday so you don’t miss out on sessions, but see what your schedule looks like.

3. Fun

Don’t forget to have fun while you’re there. Start with a visit to the expo when you have a break during conference hours – sponsors from Salesforce to Datadog to CloudHealth will have booths where you can learn about their offerings, see cool demos, and, of course, pick up the latest in innovative swag and giveaways. And come see ParkMyCloud! We’ll be at the group of booths right inside the main entrance of the expo hall, at booth #S1151.

After hours, various vendors & sponsors are having happy hours, so check out the websites, blogs, and emails from your favorite products to see if there are any you’d like to join. Plus, enjoy the city of San Francisco!

See You At Google Cloud Next 2019

If you’ll be at the event, be sure to stop by and say hi to ParkMyCloud at booth S1151 – schedule a time to visit and we’ll give you an extra scratch-off card for a chance to win a gift card. We’d love to chat and hear what you think of the event.

Psst — if you haven’t yet registered, shoot me an email and I might be able to hook you up with a discount code.

5 Types of AWS Optimization Lyft is Already Using for that $300 Million Cloud Bill

AWS optimization might be on your mind if you saw last week’s headlines that Lyft has committed to spend $300 million with Amazon Web Services (AWS) over the next three years. This information was revealed in Lyft’s IPO prospectus, filed last Friday.

Lyft isn’t the first startup to generate attention from its massive public cloud bills – Snap and Spotify’s Google Cloud bills are just two other examples.

And this level of spend is no surprise, either. Lyft was born and scaled to “unicorn” status in the cloud, from the first three EC2 servers that powered their first ride to the massive infrastructure of microservices that now powers the ride sharing giant. The question is, how do they use those resources efficiently — with a mindset of AWS optimization?

How Lyft is Already Optimizing AWS

Several case studies from AWS as well as an AWS press release put out last week tell us how Lyft is already using cloud services – and give us insight into how they’re already well-versed in AWS optimization.

1. Commitment

The fact that Lyft has such commitments at all tells us that they’re taking advantage of AWS’s Enterprise Discount Program (EDP) – as we would expect for any company with that scale of infrastructure. An EDP is a private agreement with AWS with a minimum spend commitment in exchange for discounted pricing – a smart move, as Lyft anticipates no slowdown in its use of AWS.

2. Auto Scaling

When you learn that Lyft does eight times as many rides on a Saturday night as they do on Sunday morning, you realize the importance of auto scaling – scaling up to meet demand, and back down when the infrastructure is no longer needed.

3. Spot Instances

AWS has a published case study with Lyft about their use of Spot Instances – AWS’s offering of spare capacity at steeply discounted prices, which are interruptible and therefore only useful in certain circumstances. By using Spot Instances for testing, Lyft reduced testing costs by 75%.

4. Microservices Architecture

Lyft runs more than 150 microservices that use Amazon DynamoDB, Amazon EKS, and AWS Lambda — allowing individual workloads to scale as needed for the myriad processes involved in the on-demand ride sharing service.

5. Pre-Built Container Configuration

In addition to Amazon EKS, Lyft uses Amazon EC2 Container Registry (ECR) to store container images and deliver these images to test and deployment systems. They likely have a good start on the battle for container optimization, though in general, this market will mature greatly this year – so it’s something they’re sure to continue to optimize.

Things Lyft Needs to Do to Keep their Infrastructure Optimized

The case studies and press releases mentioned above, as well as Lyft’s own engineering blog, give some insight into their tech stack and processes. Beyond that, there are several things they may well be focusing on, that we would highly recommend as they continue to scale (and IPO):

1. Governance

Many cloud customers we talk to name governance as their top priority. Automated policies and user roles are key for ensuring that no one can spend outside their bounds. Sometimes, it’s as simple an idea as proper tagging – but one that can set automated processes in motion to assign resource access to team members, proper on/off schedules for non-production resources, and configuration management processes.

2. Resource Rightsizing

Our recent research showed that average CPU utilization for the instances in our data set (which leaned non-production) was less than 5%. Given that going one instance size down can save 50% of the cost, and two sizes can save 75%, this is a huge area for optimization that we recommend cloud users of all sizes focus on this year. At Lyft’s scale, this will require automated policies to resize underutilized resources automatically.
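
The 50% and 75% figures follow from the pricing structure of cloud instance families, where each size step typically doubles both capacity and price (e.g., xlarge costs twice what large does). A quick back-of-envelope check, using a hypothetical hourly price:

```python
# Instance prices in a family typically double at each size step,
# so each downsize halves the cost. The $0.40/hr price is hypothetical.
hourly_cost = 0.40

one_size_down = hourly_cost / 2
two_sizes_down = hourly_cost / 4

savings_one = 1 - one_size_down / hourly_cost   # 0.50 -> 50%
savings_two = 1 - two_sizes_down / hourly_cost  # 0.75 -> 75%
print(f"{savings_one:.0%} saved one size down, {savings_two:.0%} two sizes down")
# -> 50% saved one size down, 75% two sizes down
```

For an instance averaging under 5% CPU, even the two-sizes-down option leaves ample headroom, which is why rightsizing is such a large savings lever.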

3. Continuous Evaluation of Microservices

With 150 microservices, blanket policies won’t apply to all cases. Each microservice needs to be evaluated against newer AWS offerings and cost control techniques on an individual basis. Once each of the 150 has been evaluated, it’s time to go back to the beginning of the list and start again — a mindset of continuous cost control would serve them well.

Lyft has gotten this far by building and growing on AWS – and their “culture of cloud” has enabled the platform adoption that has brought them to the brink of IPO. One thing is clear: up to this point, growth at any cost has been the goal, which means the sheer amount of cloud spend has not been a huge concern. As they transition into being a public company, margins and profit will start to matter more, which will bring costs into focus. It will soon be important for Lyft to continuously optimize their infrastructure – in the cloud and across the board.

ParkMyCloud Now Supports AWS GovCloud

Automated Cloud Cost Optimization Now Available for Public Sector Cloud Users on Amazon Web Services

February 26, 2019 (Dulles, VA) –  ParkMyCloud, provider of the leading enterprise platform for continuous cost control in public cloud, announced today that it now supports AWS GovCloud (US). ParkMyCloud provides automated cost optimization through resource “rightsizing” and automated scheduling based on usage, which together can help cloud users eliminate wasted spend and reduce costs by 65%. In addition to AWS GovCloud, ParkMyCloud supports Amazon Web Services (AWS) commercial regions, Microsoft Azure, Google Cloud Platform, and Alibaba Cloud.
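
The "reduce costs by 65%" figure is consistent with simple schedule arithmetic: a non-production resource left running 24/7 bills for every hour of the week, while one parked outside working hours bills only for the hours it's actually needed. A hedged back-of-envelope sketch (the 07:00–19:00 weekday schedule is an illustrative assumption):

```python
# Back-of-envelope check of parking savings, assuming a hypothetical
# "on 07:00-19:00, Monday-Friday" schedule for non-production resources.
hours_always_on = 24 * 7        # 168 billable hours/week if never parked
hours_parked_schedule = 12 * 5  # 60 billable hours/week on the schedule

savings = 1 - hours_parked_schedule / hours_always_on
print(f"{savings:.0%}")  # -> 64%, in line with the ~65% cited
```

Rightsizing oversized instances on top of such a schedule pushes total savings higher still.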

AWS GovCloud (US) is Amazon’s cloud region for sensitive data and regulated workloads. It is used by government customers, organizations in government-regulated industries, and other entities that meet security requirements. The region is highly secure, subject to FedRAMP baselines, and operated by employees who are U.S. citizens on U.S. soil, and it requires customers to pass a screening process.

ParkMyCloud for AWS GovCloud resides in a standalone ParkMyCloud SaaS deployment within AWS GovCloud. All ParkMyCloud products meet users’ security guidelines by requiring least-privilege access to cloud resources, so only the state of the resource can be accessed or managed – never the contents. Support includes both regions of AWS GovCloud: the US-West region that was launched in 2011, and the US-East region that was announced in November 2018.

“We currently use ParkMyCloud to manage our AWS commercial resources, which saves us about 45% of the cost,” said Pratap Chilukuri, Lead Enterprise Architect at an IT service management company. “We’ve been looking forward to ParkMyCloud’s AWS GovCloud support so we can achieve the same savings on our GovCloud resources.”

“AWS GovCloud customers have not had a lot of available options for automated cloud cost control and governance,” said ParkMyCloud CEO Jay Chapel. “We’ve received a growing number of requests for this support over the past several months, and we’re happy to deliver it.”

For more information or to request access, please visit

About ParkMyCloud

ParkMyCloud provides an easy-to-use platform that helps enterprises automatically identify and eliminate wasted cloud spend. More than 800 enterprises around the world – including Sysco, Workfront, Hitachi ID Systems, Sage Software, and National Geographic – trust ParkMyCloud to cut their cloud spend by millions of dollars annually. ParkMyCloud’s SaaS offering allows enterprises to easily manage, govern, and optimize their spend across multiple public clouds. For more information, visit

Media Contact

Katy Stalcup, ParkMyCloud

The Cloud Waste Killer Manifesto: A Vow To Bring Down Cloud Computing Cost.

On this, the twelfth day of the second month in the fourteenth year of Public Cloud, I, one Cloud Waste Killer, vow to bring down my cloud computing cost.

The public cloud was founded in pursuit of elasticity, scalability, and efficiency. It is my duty to defend these principles to the best of my ability.

I will make a valiant effort to use my prowess to pursue that greatest good: optimization.

Thus, I declare:

I Will Value What Matters.

Before killing waste, I will take stock of my resources. I will thoroughly examine my environment to find what resources are used consistently and fully so that they shall not meet the wrath of my weapon. I will label them accordingly for governance and automation.

I Will Leave No Stone Unturned.

After applying virtual armor to the resources I intend to keep, I will examine what remains. I will use the tools at my disposal to discover sources of waste.

I Will Show No Mercy.

Be it dragons or oversized resources, I will face my demons and destroy them. There is no space for idlers in this domain. Upon gathering data, I will create my policies and enforce them, to turn resources off outside of necessary hours, resize them when diminishment is in order, and remove what is no longer needed.

I Will Remain Fearless in Times of Peril.

It is only natural that in this process, I shall encounter objectors, who feel tied to their resources or otherwise stand in the way of my mission to reduce cloud computing cost. These may include developers prone to the hapless deployment of enormous virtual machines, or those who carry willful ignorance of the “stop” function. I will remain a true stalwart in my efforts, and seek to educate before taking action. I will present facts about resource usage to expose the problem of cloud waste.

I Will Polish My Armor and My Sword.

A hero is only as good as his weapon. While I place my faith in the powers of Automation, I shall not neglect the tools of my trade. I will use the cloud computing cost optimization tools at hand and take advantage of their automation capabilities. I will trust them, yet make time to review their recommendations.


I Will Defend the Realm.

I vow to fight against the rising tide of cloud computing cost in my organization.

I vow to protect my environment against idle and oversized resources.

I vow to kill cloud waste.

And you?