Jay Chapel, Author at ParkMyCloud

3 Things We Learned at AWS re:Invent 2017 (An Insider’s Look from Jay)

The ParkMyCloud team has returned energized and excited from our annual trek to Las Vegas and our third AWS re:Invent conference. It’s an eclectic group of roughly 42K people, but we gleaned a ton of information by asking questions and listening to the enterprise cloud doers at our booth. These are the people actually moving, deploying, and managing cloud services at the enterprises we consume goods and services from. They are also often the ones who start the ‘cloud first’ initiatives we hear so much about — after all, they led the public cloud revolution to AWS 10+ years ago.

I’m not going to write about all the announcements AWS made at re:Invent 2017 related to the cool-kid buzzwords like Artificial Intelligence, Machine Learning, Blockchain, Quantum Computing, Serverless Architecture, etc. However, if you do want to read about those, please check out this nice recap from Ron Miller of TechCrunch.

Containers are so passé that they did not even make the cool-kid list in 2017… but Microservices Architecture did. Huh, wonder if that’s a new phrase for containers?

For ParkMyCloud it’s a great event. We love talking to everyone there – they’re all cloud focused: either using AWS exclusively (born in the cloud), AWS plus another public cloud, AWS plus private cloud, or in some cases even AWS plus another public cloud and a private cloud, making them truly ‘multi-hybrid cloud’. We had a ton of great conversations with cloud users – prospects, customers, technology partners, MSPs, and swag hunters – who want to learn how to automate their cloud cost control: our nirvana.

There were a ton of Sessions, Workshops and Chalk Talks, and long lines to get into the good ones. It’s up to you to define the good ones and reserve your spot ahead of time.

Of course, it’s not all work and no play. This year for re:Play we had DJ Snake – giddy up! And while you walked your miles through the various casinos, there were DJs scattered about spinning tunes for you. I describe re:Invent to my friends as an IT event where “millennials meet technology” – definitely not your father’s tech trade show. Having been to many of these IT trade shows around the world over 20+ years, I’d say that outside of Mobile World Congress in Barcelona, re:Invent is hands down the coolest.

Not only because of the DJs and re:Play, but because there is a real buzz there: people are part of the new world of IT and the migration of enterprise services to the world’s #1 cloud provider. And of course the Pub Crawl and the Tatonka chicken wing eating contest.

AWS is now so big that the Venetian/Palazzo can’t hold everyone anymore, so the conference has spread over to the MGM, Mirage, and Aria. AWS refers to this collection of locations as its ‘campus’ – interesting; the rest of us refer to it simply as Las Vegas :-).

BTW – bring your sneakers. It’s 1.5 miles, or a 22-minute power walk including a few bridges, from the MGM to the Venetian, assuming no stops for a cold beverage. Speaking of which, the Starbucks line is crazy.

Oh, and the swag – holy mother of pearl – people literally walk by the booth with large tote bags stuffed full of swag. If you like swag, hit up the expo hall for your fill of tee shirts, hoodies, koozies, spinners, bottle openers, pens, flashlights, memory sticks, chargers, stickers, hats, socks, glasses, mints, Toblerone chocolate, and lots more!

Well, I probably need to tie this blog / rant back to the headline, so in that vein, here are the top three things we learned at this year’s AWS re:Invent:

  1. Cost control in 2018 will be about aggregating metrics and taking automated actions based on Machine Learning
  2. AWS talks a lot about advanced cloud services and PaaS, but a majority of the customers we talk to still use and spend most of their dollars on EC2, RDS and S3
  3. DevOps / CloudOps folks are in charge of implementing cost control actions and pick the tools they want to use to optimize cloud spend

See you next year – pre-book your Uber/Lyft or bring a scooter!


Cloud Service Provider Comparison – Who Will be the Next Big Provider? Part One: Alibaba

When making a cloud service provider comparison, you would probably think of the “big three” providers: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). Thus far, AWS has led the cloud market, but the other two are gaining market share, driving us to make comparisons between Azure vs AWS and Google vs AWS. But that’s not the whole story.

In recent years, a few other “secondary” cloud providers have made their way into the market, offering more options to choose from. Are they worth looking at, and could one of them become the next big provider?

Andy Jassy, CEO of AWS, says: “There won’t be just one successful player. There won’t be 30 because scale really matters here in regards to cost structure, as well as the breadth of services, but there are going to be multiple successful players, and who those are I think is still to be written. But I would expect several of the older guard players to have businesses here as they have large installed enterprise customer bases and a large sales force and things of that sort.”

So for our next cloud service provider comparison, we are going to do an overview of what could arguably be the next biggest provider in the public cloud market (after all, we need to add a fourth cloud provider to the ParkMyCloud arsenal):

Alibaba

Alibaba is a cloud provider not widely known in the U.S., but it’s taking China by storm and giving Amazon a run for its money in Asia. It’s hard to imagine a cloud provider (or e-commerce giant) more successful than Amazon, let alone one that isn’t part of the big three, but Alibaba has its sights set on surpassing AWS to dominate the worldwide cloud computing market.

Take a look at some recent headlines:

Guess Who’s King of Cloud Revenue Growth? It’s Not Amazon or Microsoft

Alibaba Just Had Its Amazon AWS Moment

Alibaba Declares War on Amazon’s Surging Cloud Computing Business

What we know so far about Alibaba:

  • In 2016, cloud revenue was $675 million, surpassing Google Cloud’s $500 million; more recently, quarterly revenue rose from $359 million in the first quarter to $447 million in the second
  • Alibaba ranked highest among cloud providers in terms of revenue growth, with sales increasing 126.5 percent from 2015 ($298 million) to 2016
  • Gartner research places Alibaba’s cloud in fourth place among cloud providers, ahead of IBM and Oracle

Alibaba entered cloud computing just three years after Amazon launched AWS. Since then, Alibaba has grown at a faster pace than Amazon, largely due to its domination of the Chinese market, and is now the fifth largest cloud provider in the world.

Alibaba’s growth is attributed in part to the booming Chinese economy, as the Chinese government continues digitizing, bringing its agencies online and into the cloud. In addition, as the principal e-commerce system in China, Alibaba holds the status as the “Amazon of Asia.” Simon Hu, senior vice president of Alibaba Group and president of Alibaba Cloud, claims that Alibaba will surpass AWS as the top provider by 2019.

Our Take

For the time being, Amazon is still dominating the U.S. cloud market, with a market value exceeding $400 billion to Alibaba’s $250 billion. Still, Alibaba Cloud is growing at incredible speed, with triple-digit year-over-year growth over the last several quarters. As the dominant cloud provider in China, Alibaba is positioned to continue growing, and is still in the early stages of its growth in the cloud computing market. Only time will reveal what Alibaba Cloud will do, but in the meantime, we’ll definitely be keeping a lookout. After all, we have customers in 20 countries around the world, not just in the U.S.

Next Up: IBM & Oracle

Apart from the big three cloud providers, Alibaba is clearly making a name for itself with a fourth-place ranking in the world of cloud computing. While Alibaba is gaining traction, a few more providers have made their introduction in recent years. Here’s a snapshot of the next two providers in our cloud service provider comparison:

IBM

  • At the end of June 2017, IBM made waves when it outperformed Amazon in total cloud computing revenue at $15.1 billion to $14.5 billion over a year-long period
  • However, Amazon is still way ahead when it comes to the IaaS market
    • For 2016, Amazon had the highest IaaS revenue, followed by Microsoft, Alibaba, and Google, respectively. IBM did not make the top 5.
    • Alibaba had the highest IaaS growth rate, followed by Google, Microsoft, and Amazon, respectively.
  • IBM was the fourth biggest cloud provider – before Alibaba took over
  • In Q1 of 2017, Synergy rankings showed that IBM has 4 percent of the public cloud market share, just behind Alibaba’s 5 percent
    • AWS had 44 percent, Azure – 11 percent, and Google Cloud – 6 percent

Oracle

  • Oracle’s cloud business is still ramping up, particularly in terms of IaaS
  • In fiscal Q1 of 2018, growth was at 51 percent, down from a 60 percent average in the last four quarters
    • Q4 for fiscal 2017 was at 58 percent
  • Since last quarter, shares have gone down by 10 percent

When making a cloud service provider comparison, don’t limit yourself to the “big three” of AWS, Azure, and GCP. They might dominate the market now, but other providers continue to grow, innovate, and increase their following in the cloud wars – and we’ll continue to track and compare them as earnings are reported.


Complex Cloud Pricing Models Mean You Need Automated Cost Control

Cloud pricing models can be complex. In fact, it’s often difficult for public cloud users to decipher a) what they’re spending, b) whether they need to be spending that much, and c) how to save on their cloud costs. The good news is that this doesn’t need to be an ongoing battle. Once you get a handle on what you’re spending, you can automate the cost control process to ensure that you only spend what you need to.

By the way, I recently talked about this on The Cloudcast podcast – if you prefer to listen, check out the episode.

All Cloud Pricing Models Require Cost Management

The major cloud service providers – Amazon Web Services, Microsoft Azure, and Google Cloud Platform – offer several pricing models for compute services: by usage, Reserved, and Spot pricing.

The basic model is by usage – typically per-hour, although AWS and Google both recently announced per-second billing (more on this next week). This requires careful cost management, so users can determine whether they’re paying for resources that are running when they’re not actually needed. This could mean paying for non-production instances on nights and weekends when no one is using them, or paying for oversized instances that are not optimally utilized.
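To put rough numbers on that, here’s a back-of-the-envelope sketch in Python comparing an always-on instance with one parked outside weekday working hours. The $0.10/hour rate is a placeholder assumption, not a quoted price – plug in your own instance type’s rate:

```python
# Rough arithmetic: "always on" vs. a weekdays-only, 12-hours-a-day schedule.
HOURLY_RATE = 0.10               # $/hour -- placeholder, not a quoted AWS price
HOURS_PER_WEEK = 24 * 7          # 168 hours

always_on = HOURLY_RATE * HOURS_PER_WEEK
parked = HOURLY_RATE * (12 * 5)  # running 12 hours/day, Monday through Friday

print(f"always on: ${always_on:.2f}/week")
print(f"parked:    ${parked:.2f}/week")
print(f"savings:   {100 * (1 - parked / always_on):.0f}%")
```

That schedule alone trims the instance’s bill by roughly 64% – which is why parking non-production resources is usually the first lever to pull.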

Then there are Reserved Instances, which allow you to pre-pay partially or entirely. The billing calculation is done on the back end, so it still requires management effort to ensure that the instances you are running are actually eligible for the Reserved Instances you’ve paid for.

As to whether these are actually a good choice for you, see the following blog post: Are AWS Reserved Instances Better Than On-Demand? It’s about AWS Reserved Instances, although similar principles apply to Azure Reserved Instances.

Spot instances allow you to bid on and use spare compute capacity for a cheap price, but their inherent risk means that you have to build fault-tolerant applications in order to take advantage of this cost-saving option.

However You’re Paying, You Need to Automate

The bottom line: while visibility into the costs incurred by your cloud pricing model is an important first step, to actually reduce and optimize your cloud spend you need to be able to take automated actions against your infrastructure costs.

To this end, our customers told us that they would like the ability to park instances based on utilization data. So, we’re currently developing this capability, which will be released in early December. Following that, we will add the ability for ParkMyCloud to give you right sizing recommendations – so not only will you be able to automatically park your idle instances, you’ll also be able to automatically size instances to correctly fit your workloads so you’re not overpaying.

Though cloud pricing can be complicated, with governance and automated savings measures in place, you can put cost worries to the back of your mind and focus on your primary objectives.


Google Cloud Platform vs AWS: Is the answer obvious? Maybe not.

Google Cloud Platform vs AWS: what’s the deal? A few months ago, we asked the same question about Azure vs AWS. While Microsoft continues to see growth, and Amazon maintains a steady lead among cloud providers, Google is stepping in. Now that Google Cloud Platform has solidly secured its spot to round out the “big three” cloud providers, we think it’s time to take a closer look and see how the underdog matches up to the 800-pound gorilla.

Is Google Cloud catching up to AWS?

As they’ve been known to do, Amazon, Google, and Microsoft all released their recent quarterly earnings on the same day. At first glance, the headlines tell it all.

The natural conclusion is that AWS continues to dominate in the cloud war. With all major cloud providers reporting earnings at the same time, we have an ideal opportunity to examine the numbers and determine if there’s more to the story. Here’s what the quarterly earning reports tell us:

  • AWS reported $4.6 billion in revenue for the quarter and is on its way to $18 billion in revenue for the year, a 42% year-over-year increase, taking the top spot among cloud providers
  • Google’s report lumps cloud sales together with revenue from the Google Play app store, summing to a total of $3.4 billion for the last quarter
  • Although Google did not report specific revenue for Google Cloud Platform (GCP), Canalys estimates earnings at $870 million for the quarter – a 76% year-over-year growth
  • It’s also important to note that Google is just getting started. Also included in its report was an increase in new hires – a total of 2,495 in the last quarter, most of them in cloud positions

The Obvious: Google is not surpassing AWS

When it comes to Google Cloud Platform vs AWS, presently we have a clear winner. Amazon continues to have the advantage as the biggest and most successful cloud provider on the market. While AWS is growing at a smaller rate now than both Google Cloud and Azure, Amazon’s growth is still more impressive given that it has the largest market share of all three. AWS is the clear competitor to beat as the first successful cloud provider, with the widest range of services, and a strong familiarity among developers.

The Less Obvious: Google is gaining ground

While it’s easy to write off Google Cloud Platform, AWS is not untouchable. Let’s not forget that 76% year-over-year growth is nothing to scoff at. AWS has already solidified itself in the cloud market, but Google Cloud is just beginning to take off.

Where is Google actually gaining ground?

We know that AWS is at the forefront of cloud providers today. At the same time, AWS is now only one among three major cloud providers. Google Cloud Platform has more in store for its cloud business in 2018.

Google’s stock continues to rise. With 2,495 new hires added to the headcount, the vast majority of them in cloud-related jobs, it’s clear that Google is serious about expanding its role in the cloud market. Deals have been made with major retailer Kohl’s and payments processing giant PayPal. Google CEO Sundar Pichai lists the cloud platform as one of the top three priorities for the company, confirming that it will continue expanding its cloud sales headcount.

In discussing Google’s recent quarterly earnings, Pichai added his thoughts on why he believes the Google Cloud Platform is on a set path for strong growth. He credits their success to customer confidence in Google’s impressive technology and a lead in machine learning, naming the company’s open-source software TensorFlow as a prime example. Another key component to growth is strategic partnerships, such as the recent announcement of a deal with Cisco, in addition to teaming up with VMware and Pivotal.

Driving Google’s growth is also the fact that the cloud market itself is growing fast. The move to the cloud has prompted large enterprises such as Home Depot Inc. and Target Corp. to rely on a combination of cloud vendors in building their applications. Home Depot in particular uses both Azure and Google Cloud Platform, and a spokesman for the home improvement retailer explains that this was intentional: “Our philosophy here is to be cloud agnostic, as much as we can.” This philosophy goes to show that as long as there is more than one major cloud provider in the mix, enterprises will continue trying, comparing, and adopting more than one at a time, making way for Google Cloud to gain further ground.

Andy Jassy, CEO of AWS, put it best:

“There won’t be just one successful player. There won’t be 30 because scale really matters here in regards to cost structure, as well as the breadth of services, but there are going to be multiple successful players, and who those are I think is still to be written. But I would expect several of the older guard players to have businesses here as they have large installed enterprise customer bases and a large sales force and things of that sort.”

Google Cloud Platform vs. AWS: Why does it matter?

Google Cloud Platform vs AWS is only one battle to consider in the ongoing cloud war. The truth is, market performance is only one factor in choosing the best cloud provider, and as we always say, the specific needs of your business are what will drive your decision.

What we do know: the public cloud is not just growing, it’s booming.

Referring back to our Azure vs AWS comparison, the basic questions still remain the same when it comes to choosing the best cloud provider:

  • Are the public cloud offerings to new customers easily comprehensible?
  • What is the pricing structure and how much do the products cost?
  • Are there adequate customer support and growth options?
  • Are there useful surrounding management tools?
  • Will our DevOps processes translate to these offerings?
  • Can the PaaS offerings speed time-to-value and simplify things sufficiently to drive stickiness?
  • What security measures does the cloud provider have in place?

Right now AWS is certainly in the lead among major cloud providers, but for how long? We will continue to track and compare cloud providers as earnings are reported, offers are increased, and price options grow and change. To be continued in 2018…


3 Things Companies Using Cloud Computing Should Make Sure Their Employees Do

These days, there’s a huge range of companies using cloud computing, especially public cloud. While your infrastructure size and range of services used may vary, there are a few things every organization should keep in mind. Here are the top 3 we recommend for anyone in your organization who touches your cloud infrastructure.

Keep it Secure

OK, so this one is obvious, but it bears repeating every time. Keep your cloud access secure.

For one, make sure your cloud provider keys don’t end up on GitHub… it’s happened too many times.

(There are a few open source tools out there that can help search your GitHub repos for this very problem – check out AWS Labs’ git-secrets.)
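To give a flavor of what these tools look for, here’s a minimal Python sketch that scans files for strings shaped like AWS access key IDs. The regex covers only the common key-ID prefixes (an assumption on our part); real tools like git-secrets match a broader set of patterns and hook directly into git:

```python
# Minimal sketch: flag strings that look like AWS access key IDs.
# Real scanners also check secret keys, session tokens, and custom patterns.
import re
import sys
from pathlib import Path

AWS_KEY_ID = re.compile(r"\b(AKIA|ASIA)[0-9A-Z]{16}\b")  # common key-ID shapes

def scan(path: Path) -> list[str]:
    hits = []
    for line_no, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
        if AWS_KEY_ID.search(line):
            hits.append(f"{path}:{line_no}: possible AWS access key ID")
    return hits

if __name__ == "__main__":
    findings = [hit for name in sys.argv[1:] for hit in scan(Path(name))]
    print("\n".join(findings) or "no key-like strings found")
    sys.exit(1 if findings else 0)  # nonzero exit can block a pre-commit hook
```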

Organizations should also enforce user governance and use Role-Based Access Control (RBAC) to ensure that only the people who need access to specific resources can access them.
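On AWS, one way to express that kind of role-based restriction is an IAM policy scoped by resource tags. Here’s a hedged sketch – the team tag, its value, and the policy name are illustrative assumptions – that lets its holders start and stop only the instances tagged for their team:

```python
# Sketch: an IAM policy allowing start/stop only on instances tagged team=dev.
# The tag key/value and policy name are assumptions for illustration.
import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances"],
            "Resource": "*",
            "Condition": {"StringEquals": {"ec2:ResourceTag/team": "dev"}},
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="dev-team-start-stop-only",  # hypothetical name
    PolicyDocument=json.dumps(policy_document),
)
```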

Keep Costs in Check

There’s an inherent problem created when you make computing a pay-as-you-go utility, as public cloud has done: it’s easy to waste money.

First of all, the default for computing resources is that they’re “always on” unless you specifically turn them off. That means you’re always paying for them.

Additionally, over-provisioning is prevalent – 55% of all public cloud resources are not correctly sized for their workloads. Perhaps the most brutal statistic: 15% of spend goes to resources that are no longer used at all. It’s like discovering that you’re still paying for that gym membership you signed up for last year, despite the fact that you haven’t set foot inside. Completely wasted money.

In order to keep costs in check, companies using cloud computing need to ensure they have cost controls in place to eliminate and prevent cloud waste – which, by the way, is the problem we set out to solve when we created ParkMyCloud.
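The simplest of those cost controls to automate is the on/off switch. Here’s a minimal boto3 sketch of the idea – the env=dev tag, the region, and the timing are assumptions; a platform like ParkMyCloud wraps this in schedules, policies, and a UI rather than a hand-rolled script:

```python
# Sketch: stop every running EC2 instance tagged env=dev (e.g. from a 7pm cron).
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

def stop_dev_instances() -> None:
    pages = ec2.get_paginator("describe_instances").paginate(
        Filters=[
            {"Name": "tag:env", "Values": ["dev"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    ids = [
        inst["InstanceId"]
        for page in pages
        for reservation in page["Reservations"]
        for inst in reservation["Instances"]
    ]
    if ids:
        ec2.stop_instances(InstanceIds=ids)
        print(f"stopped: {ids}")

if __name__ == "__main__":
    stop_dev_instances()
```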

Keep Learning

Third, companies should ensure that their IT and development teams continue their professional development on cloud computing topics, whether by taking training courses or attending local Meetup groups to network with and learn from peers. We have a soft spot in our hearts for our local AWS DC Meetup, which we help organize, but there are great meetups in cities across the world on AWS, Azure, Google Cloud, and more.

Better yet, go to the source itself. Microsoft Azure has a huge events calendar, though AWS re:Invent is probably the biggest event of all. It’s an enormous gathering for learning, training, and announcements of new products and services (and it’s pretty fun, too).

We’re a sponsor of AWS re:Invent 2017 – let us know if you’re going and would like to book time for a conversation or demo of ParkMyCloud while you’re there, or just stop by booth #1402!


3 Enterprise Cloud Management Challenges You Should Be Thinking About

Enterprise cloud management is a top priority. As the shift towards multi-cloud environments continues, so has the need to consider the potential challenges. Whether you already use the public cloud, or are considering making the switch, you probably want to know what the risks are. Here are three you should be thinking about.

1. Multi-Cloud Environments

As the ParkMyCloud platform supports AWS, Azure, and Google, we’ve noticed that multi-cloud strategies are becoming increasingly common among enterprises. There are a number of reasons why it would be beneficial to utilize more than one cloud provider. We have discussed risk mitigation as a common reason, along with price protection and workload optimization. As multi-cloud strategies become more popular, the advantages are clear. However, every strategy comes with its challenges, and it’s important for CIOs to be aware of the associated risks.

Without the use of cloud management tools, multi-cloud management is complex and sometimes difficult to navigate. Different cloud providers have different pricing models, product features, APIs, and terminology. Compliance requirements are also a factor that must be considered when dealing with multiple providers. Meeting and maintaining requirements for one cloud provider is complicated enough, let alone several. And don’t forget you need a single pane of glass to view your multi-cloud infrastructure.

2. Cost Control

Cost control is a first priority among cloud computing trends. Enterprise Management Associates (EMA) conducted a research study and identified key reasons why there is a need for cloud cost control, among them inefficient use of cloud resources, unpredictable billing, and contractual obligation or technological dependency.

Managing your cloud environment and controlling costs requires a great deal of time and strategy, taking away from the initiatives your enterprise really needs to be focusing on. The good news is that we offer a cost control solution that can save 65% or more on your monthly cloud bills – simply by parking your idle cloud resources. ParkMyCloud was one of the top three vendors recommended by EMA as a Rapid ROI Utility. If you’re interested in seeing why, we offer a 14-day free trial.

3. Security & Governance

In discussing a multi-cloud strategy and its challenges, the bigger picture also includes security and governance. As we have mentioned, a multi-cloud environment is complex and complicated, and requires native or 3rd-party tools to maintain vigilance. Aside from legal compliance based on your company’s industry, the cloud also comes with standard security issues and, of course, the possibility of cloud breaches. In this vein, the customers we talk to often worry about too many users being granted console access to create and terminate cloud resources, which can lead to waste. A key here is limiting user access based on roles, i.e. Role-Based Access Control (RBAC). At ParkMyCloud we recognize that visibility and control are important in today’s complex cloud world. That’s why, in designing our platform, we give the sysadmin the ability to delegate access based on a user’s role and to authenticate via SSO using SAML integration. This approach brings security benefits without losing the appeal of a multi-cloud strategy.

Our Solution

Enterprise cloud management is an inevitable priority as the shift toward multi-cloud environments continues. Multiple cloud services add complexity to the challenges of IT and cloud management. Cost control is time-consuming and needs to be automated and monitored constantly. Security and governance are a must, ensuring that users and resources are properly controlled. As the need for cloud management continues to grow, cloud automation tools like ParkMyCloud provide a means to effectively manage cloud resources, minimize challenges, and save you money.


Cloud Optimization Tools = Cloud Cost Control (Part II)

A couple of weeks ago, in Part 1 of this blog topic, we discussed the need for cloud optimization tools to help enterprises with the problem of cloud cost control. Amazon Web Services (AWS) even goes as far as suggesting the following simple steps to control costs (which can also be applied to Microsoft Azure and Google Cloud Platform, with slightly different terminology):

    1. Right-size your services to meet capacity needs at the lowest cost;
    2. Save money when you reserve;
    3. Use the spot market;
    4. Monitor and track service usage;
    5. Use Cost Explorer to optimize savings; and
    6. Turn off idle instances (we added this one).

A variety of third-party tools and services have popped up in the market over the past few years to help with cloud cost optimization – why? Because upwards of $23B was spent on public cloud infrastructure in 2016, and spending continues to grow at a rate of 40% per year. Furthermore, depending on who you talk to, roughly 25% of public cloud spend is wasted or not optimized – that’s a huge market! If left unchecked, this waste problem is projected to triple to over $20B by 2020 – enter the vultures (full disclosure, we are also a vulture, but the nice kind). Most of these tools are lumped under the Cloud Management category, which includes subcategories like Cost Visibility and Governance, Cost Optimization, and Cost Control vendors – we are a cost control vendor, to be sure.

Why do you, an enterprise, care? Because there are unique and subtle differences between the tools that fit into these categories, so your use case should dictate where you go for what – and that’s what I am trying to help you with. So, why am I a credible source to write about this (and not just because ParkMyCloud is the best thing since sliced bread)?

Well, yesterday we had a demo with a FinTech company in California that was interested in Cost Control, or thought it was. It turns out that what they were actually interested in was Cost Visibility and Reporting; the folks we talked to were in Engineering Finance, so their concerns were primarily billing metrics, business unit chargeback for cloud usage, RI management, and dials and widgets to view everything related to AWS and GCP billing. Instead of trying to force a square peg into a round hole, we passed them on to a company in this space that’s better suited to solve their immediate needs. In return, the Finance folks are going to put us in touch with the FinTech Cloud Ops folks, who care about automating cloud cost control as part of their DevOps processes.

This type of situation happens more often than not. We have a lot of enterprise customers using ParkMyCloud alongside CloudHealth, CloudCheckr, Cloudability, and Cloudyn because, in general, they provide Cost Visibility and Governance, and we provide actionable, automated Cost Control.

As this is our blog, and my view from the street – we have 200+ customers now using ParkMyCloud, and we demo to 5-10 enterprises per week. Based on a couple of generic customer use cases where we have strong familiarity, here’s what you need to know to stay ahead of the game:

  • Cost Visibility and Governance: CloudHealth, CloudCheckr, Cloudability and Cloudyn (now owned by Microsoft)
  • Reserved Instance (RI) management – all of the above
  • Spot Instance management – SpotInst
  • Monitor and Track Usage: CloudHealth, CloudCheckr, Cloudability and Cloudyn
  • Turn off (park) Idle Resources – ParkMyCloud, Skeddly, GorillaStack, Botmetric
  • Automate Cost Control as part of your DevOps Process: ParkMyCloud
  • Govern User Access to Cloud Console for Start/Stop: ParkMyCloud
  • Integrate with Single Sign-On (SSO) for Federated User Access: ParkMyCloud

To summarize, cloud cost control is important, and there are many cloud optimization tools available to assist with visibility, governance, management, and control of your single- or multi-cloud environments. However, there are very few tools which allow you to set up automated actions leveraging your existing enterprise tools like Ping, Okta, Atlassian, Jenkins, and Slack. Make sure you are not only focusing on cost visibility and recommendations, but also on action-oriented platforms, to really get the best bang for your buck.


Cloud Optimization Tools = Cloud Cost Control

Over the past couple of years we have had a lot of conversations with large and small enterprises regarding cloud management and cloud optimization tools, all of whom were looking for cost control. They wanted to reduce their bills, just like any utility you might run at home — why spend more than you need to? Amazon Web Services (AWS) actively promotes optimizing cloud infrastructure, and where they lead, others follow. AWS even goes so far as to suggest the following simple steps to control AWS costs:

  1. Right-size your services to meet capacity needs at the lowest cost;
  2. Save money when you reserve;
  3. Use the spot market;
  4. Monitor and track service usage;
  5. Use Cost Explorer to optimize savings; and
  6. Turn off idle instances (we added this one).

It’s interesting to note the use of the word ‘control’ even though the section is labeled Cost Optimization.

So where is all of this headed? It’s great that AWS offers its own solutions, but what if you want automation built into your DevOps processes, multi-cloud support (or plan to be multi-cloud), real-time reporting on savings, and the ability to turn stuff off when you are not using it? Then you likely need a third-party tool to help with these tasks.

Let’s take a quick look at a description of each AWS recommendation above to get a better understanding of each offering. Following this, we will explore whether these cost optimization options can be automated as part of a continuous cost control process:

  1. Right-sizing – Both the EC2 Right Sizing solution and AWS Trusted Advisor analyze the utilization of EC2 instances running during the prior two weeks. The EC2 Right Sizing solution analyzes all instances with a max CPU utilization below 50% and determines a more cost-effective instance type for that workload, if available (see the sketch just after this list for a flavor of this check).
  2. Reserved Instances (RIs) – For certain services like Amazon EC2 and Amazon RDS, you can invest in reserved capacity. With RIs, you can save up to 75% over equivalent ‘on-demand’ capacity. RIs are available in three options: (1) all up-front, (2) partial up-front, or (3) no upfront payment.
  3. Spot – Amazon EC2 Spot instances allow you to bid on spare Amazon EC2 computing capacity. Since Spot instances are often available at a discount compared to On-Demand pricing, you can significantly reduce the cost of running your applications, grow your application’s compute capacity and throughput for the same budget, and enable new types of cloud computing applications.
  4. Monitor and Track Usage – You can use Amazon CloudWatch to collect and track metrics, monitor log files, set alarms, and automatically react to changes in your AWS resources. You can also use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health.
  5. Cost Explorer – AWS Cost Explorer gives you the ability to analyze your costs and usage. Using a set of default reports, you can quickly get started with identifying your underlying cost drivers and usage trends. From there, you can slice and dice your data along numerous dimensions to dive deeper into your costs.
  6. Turn off Idle Instances – “Park” your cloud resources by assigning them schedules of operating hours during which they will run or be temporarily stopped – i.e. parked. Most non-production resources (dev, test, staging, and QA) can be parked at nights and on weekends, when they are not being used. On the flip side, some batch processing or load testing applications can only run during non-business hours, so they can be shut down during the day.
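As promised in item 1, here’s a minimal boto3 sketch of the utilization check behind right-sizing: pull two weeks of hourly average CPU from CloudWatch and flag running instances that never broke 50%. The 50% threshold and two-week window mirror the EC2 Right Sizing solution described above; the region and the hourly granularity are assumptions:

```python
# Sketch: flag EC2 instances whose average CPU never exceeded 50% in 14 days.
from datetime import datetime, timedelta, timezone
import boto3

REGION = "us-east-1"  # assumption
ec2 = boto3.client("ec2", region_name=REGION)
cw = boto3.client("cloudwatch", region_name=REGION)

def peak_hourly_avg_cpu(instance_id: str) -> float:
    now = datetime.now(timezone.utc)
    stats = cw.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=now - timedelta(days=14),
        EndTime=now,
        Period=3600,              # hourly datapoints
        Statistics=["Average"],
    )
    return max((p["Average"] for p in stats["Datapoints"]), default=0.0)

for page in ec2.get_paginator("describe_instances").paginate(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
):
    for reservation in page["Reservations"]:
        for inst in reservation["Instances"]:
            cpu = peak_hourly_avg_cpu(inst["InstanceId"])
            if cpu < 50:
                print(f"{inst['InstanceId']}: peak hourly average CPU "
                      f"{cpu:.1f}% - right-sizing candidate")
```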

Many of these AWS solutions offer recommendations, but they require manual effort to realize the benefits. This is why third-party solutions – spanning cloud management, cloud governance and visibility, and cloud optimization tools – have seen widespread adoption. In part two of this blog we will look at some of those tools, the benefits and approach of each, and the level of automation to be gained.


Cloud Webhooks – Notification Options for System Level Alerts to Improve your Cloud Operations

Webhooks are user-defined HTTP POST callbacks. They provide a lightweight mechanism for letting remote applications receive push notifications from a service or application, without requiring polling. In today’s IT infrastructure that includes monitoring tools, cloud providers, DevOps processes, and internally-developed applications, webhooks are a crucial way to communicate between individual systems for a cohesive service delivery. Now, in ParkMyCloud, webhooks are available for even more powerful cost control.

For example, you may want to let a monitoring solution like Datadog or New Relic know that ParkMyCloud is stopping a server for some period of time, so that alerts are suppressed while the server is parked – and, vice versa, re-enable monitoring once the server is unparked (turned on). Another example would be to have ParkMyCloud post to a chatroom or dashboard when schedules have been overridden by users. We do this by sending system notifications to our cloud webhooks.
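To make that concrete, here’s a minimal sketch of a webhook receiver in Python/Flask that mutes or unmutes monitoring based on parking notifications. The JSON field names (“action”, “resource”) are hypothetical stand-ins, not ParkMyCloud’s actual payload schema – check the support portal for the real format:

```python
# Sketch: receive parking notifications and toggle monitoring accordingly.
# Field names below are assumed for illustration, not the real payload schema.
from flask import Flask, request

app = Flask(__name__)

@app.route("/parkmycloud", methods=["POST"])
def handle_notification():
    event = request.get_json(force=True)
    action = event.get("action", "")      # e.g. "stop" / "start" (assumed)
    resource = event.get("resource", "?")
    if action == "stop":
        suppress_monitoring(resource)     # mute alerts while the server is parked
    elif action == "start":
        resume_monitoring(resource)       # re-enable alerts once unparked
    return "", 204

def suppress_monitoring(resource: str) -> None:
    # Call your monitoring tool's mute API here (Datadog, New Relic, etc.).
    print(f"muting alerts for {resource}")

def resume_monitoring(resource: str) -> None:
    print(f"unmuting alerts for {resource}")

if __name__ == "__main__":
    app.run(port=8080)
```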

Previously, only two options were provided when configuring system-level and user notifications in ParkMyCloud: System Errors and Parking Actions. We have added three new notification options for both system-level and user notifications. Descriptions of all five options are provided below:

  • System Errors – These are errors occurring within the system itself such as discovery errors, parking errors, invalid credential permissions, etc.
  • System Maintenance and Updates – These are the notifications provided via the banner at the top of the dashboard.
  • User Actions – These are actions performed by users in ParkMyCloud such as manual resource state toggles, attachment or detachment of schedules, credential updates, etc.
  • Parking Actions – These are actions specifically related to parking such as automatic starting or stopping of resources based on defined parking schedules.
  • Policy Actions – These are actions specifically related to configured policies in ParkMyCloud such as automatic schedule attachments based on a set rule.

We have made the options more granular to give you better control over which events you see.

These options can be seen when adding or modifying a channel for system level notifications (Settings > System Level Notifications). In the image shown below, a channel is being added.

Note: For additional information regarding these options, click on the Info Icon to the right of Notify About.

The new notification options are also viewable by users who want to set up their own notifications (Username > My Profile). These personal notifications are sent via email to the address associated with your user. Personal notifications can be set up by any user, while webhooks must be set up by a ParkMyCloud admin.

After clicking on Notifications, you will see the above options and may use the checkboxes to select the notifications you want to receive. You can also set each webhook to handle a specific ParkMyCloud team, then set up multiple webhooks to handle different parts of your organization. This offers maximum flexibility based on each team’s tools, processes, and procedures. Once finished, click on Save Changes. Any of these notifications can then be sent to your cloud webhook – and even to Slack – to ensure ParkMyCloud is integrated into your cloud management operations.



Interview: DevOps in AWS – How to Automate Cloud Cost Savings

We chatted with Ryan Alexander, DevOps Engineer at Decision Resources Group (DRG), about his company’s use of AWS and how they automate cloud cost savings. Below is a transcript of our conversation.

Hi Ryan, thanks for speaking with us. To start out, can you please describe what your company does?

Decision Resources Group offers market information and data for the medtech industry. For example, let’s say a medical graduate student is doing a thesis on Viagra use in the Boston area. They can use our tool to see information such as age groups, ethnicities, number of hospitals, and number of people who were issued Viagra in the city of Boston.

What does your team do within the company? What is your role?

I’m a DevOps engineer on a team of two. We provide infrastructure automation to the other teams in the organization. We report to senior tech management, which makes us somewhat of an island within the organization.

Can you describe how you are using AWS?

We have an infrastructure team internally. Once a server or infrastructure is built, we take over to build clusters and environments for what’s required. We utilize pretty much every tool AWS offers — EBS, ELB, RDS, Aurora, CloudFormation, etc.

What prompted you to look for a cost control solution?

When I joined DRG in December, there was a new cost saving initiative developing within the organization. It came from our CTO, who knew we could be doing better and wanted to see where we might be leaving money on the table.

How did you hear about ParkMyCloud?

One of my colleagues actually spoke with your CTO, Dale, at AWS re:Invent, and I had also heard about ParkMyCloud at DevOpsDays Toronto 2016. We realized it could help solve some of our cloud cost control problems and decided to take a look.

What challenges were contributing to the high costs? How has ParkMyCloud helped you solve them?

We knew we had a problem where development, staging, and QA environments were only used for 8 hours a day – but they were running for 24 hours a day. We wanted to shut them down and save money on the off hours, which ParkMyCloud helps us do automatically.

We also have “worker” machines that are used a few times a month, but they need to be there. It was tedious to go in and shut them down individually. Now with ParkMyCloud, I put those in a group and shut them down with one click. It is really just that easy to automate cloud cost savings with ParkMyCloud.

We also have security measures in place, where not everyone has the ability to sign in to AWS and shut down instances. If a team in another country needed servers started on demand while I was sleeping, they had to wait until I woke up the next morning, or I had to get up at 2 AM. Now that we’ve set up Single Sign-On, I can set up the guys who use those servers and give them the rights to start up and shut down those servers. This has been more efficient for all of us – I no longer have to babysit and turn those on and off as needed.

With ParkMyCloud, we set up teams and users so they can only see their own instances, so they can’t cause a cascading failure because they can only see the servers they need.

Were there any unexpected benefits of ParkMyCloud?

When I started, I deleted 3 servers that were sitting there doing nothing for a year and costing the company lots of money. With ParkMyCloud, that kind of stuff won’t happen, because everything gets sorted into teams. We can see the costs by team and ask the right questions, like, “why is your team’s cost so expensive right now? Why are you ignoring these recommendations from ParkMyCloud to park these instances?”

We rely on tagging to do all of this. Tagging is life in DevOps.


How X-Mode Deals with Rising AWS Costs

We sat down with Josh Anton, CEO of X-Mode, a technology company whose app has been experiencing rapid growth and rising AWS costs. We asked him about his company, what cloud services he uses, and how he goes about mitigating those costs.

Can you start by telling us about X-Mode and what you guys do?

X-Mode is a location platform that currently maps out 5-10% of the U.S. population on a monthly basis and 1-2% of the U.S. population daily – about 3-6 million daily active users and 15-20 million monthly users. X-Mode collects location-based data from applications and platforms used by these consumers, and then develops consumer segments or attribution; our customers use the data to determine whether their advertising is effective and to develop target profiles. For example, based on the number and types of coffee shops a person has visited, we can assume they are a certain type of coffee drinker. Or a company like McDonald’s will determine that its advertising is effective if it sees that an ad ran in a certain area and a person visited that restaurant in the next few days. The data has many applications.

How did you get this idea, Josh?

We started off as an app called Drunk Mode, which was founded and built while I was at the University of Virginia studying Marketing and IT. After about a year and a half, our app grew to about 1.5 million users by leveraging influencer marketing via Trend Pie and a student campus rep program at 50+ universities. In September of 2016, we realized that if we developed a location-based technology platform we could monetize and capitalize on the location data we collected from the Drunk Mode app. Along with input from our advisors, we developed a strategy to help out other small apps by aggregating their data, crunching it, and packaging it up in real time to sell to ad agencies and retailers – acting almost as a data wholesaler and helping these small app players monetize their data as a secondary source of income.

Whose cloud services are you using, and how does X-Mode work?

We use Amazon Web Services (AWS) for all of our cloud infrastructure, primarily their EC2, RDS, and Elastic Beanstalk services. Our technology works by collecting and aggregating location data based on when and where people go on a daily basis. It is collected locally by iOS and Android devices and passed to AWS’s cloud using their API Gateway service. The cool thing is that we are able to pinpoint a person’s location within feet of a retail location. The location data is batched and sent to our servers every 12 hours, and we package it up and license the data out to our vendors. We are processing around 10 to 12 billion location-based records per month, and we have some proprietary algorithms which make our processing very fast with almost no burn on the phone’s battery. Our customers are sent the data daily, and we use services like Lambda, RDS, and Elastic Beanstalk to make this as efficient as possible. We are now developing the functionality to better triangulate beacons so that we can pinpoint locations even more precisely, and send location data within the hour rather than within the day.

Why did you pick AWS?

We chose AWS because when X-Mode joined Fishbowl Labs (a startup accelerator run and sponsored by AOL in Northern Virginia), we were given $15,000 in free AWS credits. The free credits have made me very loyal to Amazon’s service, and now the switching costs would be fairly high in terms of effort and dollars to move away from Amazon. So even though it’s expensive, we are here to stay, and we are adopting more of AWS’s advanced services in order to improve our platform performance and take advantage of their technology advances. Another reason we stay with AWS is that we know it is going to be there. We previously used a service called Parse.com that was acquired by Facebook and shut down a few years later; for us, performance and stability (the service still existing 10 years from now) are very important.

Are you still using AWS credits?

No, we used those up many months ago. We have gone from spending a few hundred dollars a month to spending $25,000 or more a month. While that is a cost, it’s also a blessing, in that X-Mode is rapidly growing and scaling. Outside of the cost of people, this is our biggest monthly expense. ParkMyCloud was an easy choice, given that 75% or more of our AWS spend is on EC2 and RDS services, and given ParkMyCloud’s ability to ‘park’ each service and its flexible governance model for our remote engineering team. So we are very excited about the savings ParkMyCloud will produce for us, along with some of the new design work we will be doing to make our platform even more efficient.

Are there other ways you are looking to optimize your AWS spend?  

We believe that we have to re-architect the system. We have actually done that three times given our rapid platform growth, but it is all about making sure that we are optimizing our import/export process. We are running our servers at maximum capacity to help get information to people, and are continually looking to make our operation more efficient. Along with using ParkMyCloud, we are focusing on general platform optimization to make sure we keep costs down, improve performance and innovate at a rapid pace.

What other tools do you use as part of your DevOps process?

Let’s keep in mind we are a startup, but we are getting more and more organized in terms of development cycles, and we have a solid delivery process. And yes, we use tools like Slack, Jira, Basecamp, Bitbucket, and Google Drive. Everything is SaaS-based, everything is in the cloud, and we follow an agile development process. On the sales and marketing side we are a solely millennial workforce working in the office, but our development team is basically stay-at-home dads distributed around the country, so planning and communication are keys to our success. That’s where Slack and Jira come into play. In terms of processes, we are trying to implement a better QA process so we deliver well-vetted code to our end users. We do a lot of development planning and mapping each quarter, so all of this is incredibly important to the growth of the X-Mode platform and to the success of our organization.


Trends in Cloud Computing – ParkMyCloud Turns Two, What’s New?

It’s not hard to start a company, but it’s definitely hard to grow and scale one, so two years later we thought we would discuss the trends in cloud computing that shape our growth and vision – what we see and hear as we talk to enterprises, MSPs, and industry pundits on a daily basis. First and foremost, we need to thank our customers, both free and paid, who use ParkMyCloud, save millions a year, actively engage with us in defining our roadmap, and have helped us develop the best damn cloud cost control solution in the market. Thanks also to the bloggers, analysts, and writers who share our story; given that we have customers on every continent (except Antarctica), this has been extremely beneficial to us.

Observation Number One: the public cloud is here to stay. Given the CapEx investment needed to build and operate data centers all over the world, only the cash-rich companies will succeed at scale, so you need to figure out whether you want to be a single-cloud / multi-region or multi-cloud user. We discussed this in detail recently in this blog, and it really boils down to risk mitigation. Most companies we talk to are single cloud BUT do ask if we support multi-cloud in case they diversify (we do – we support AWS, Azure, and Google).

Observation Number Two: AWS is king – duh. Well, they are, and they continue to innovate and grow at a record-setting pace. AWS just hit $4bn in quarterly revenue – that’s a $16bn run rate. It’s like the new IBM – what CIO or CTO is going to get fired for moving their infrastructure to AWS’ cloud to improve agility, attract millennial developers who want to innovate in the cloud, leverage the cloud ecosystem, and lower cost (we will address this one in a bit)? We released support for Azure and Google in 2017, and yet 75% or more of the new trials and customers we get use AWS, and their environments are almost always larger than those on Azure and Google. There is a reason Microsoft and Google do not release IaaS statistics. As for IBM and Oracle, they are the way-back IaaS time machine.

Observation Number Three: cloud cost control is a real thing. It’s something enterprises really care about, and optimizing cloud spend as bills grow is becoming more and more important to the CFO and CIO. The focus is mainly on buying capacity in advance (which kind of defeats the purpose of the pay-as-you-go model), rightsizing servers (as developers have a tendency to over-provision for their needs), turning stuff off when it’s not being used, and finding orphaned resources that are ‘lost’ in the cloud. As 65% of a bill is spent on compute (servers / instances), the focus is usually directed there first and foremost, since a reduction there has the largest impact on the bill.

Observation Number Four: DevOps and IT Ops are responsible for cloud cost control, not Finance. Now, Finance (or the CFO) might issue a directive to IT or Engineering that cloud costs must be brought under control and that they need to look at ways to optimize, but at the end of the day DevOps and IT Ops are responsible for evaluating and selecting tools to help their companies immediately reduce their cloud costs. When we talk to the technical teams during a demo, they have been told they need to reduce their cloud spend, or there is a cost control initiative in place, and they then research technologies to help them solve this problem (SEO is key here). Here’s a great example of a FinTech customer of ours and how their cost control decision went down.

Observation Number Five: it’s all about automation, DevOps, and self-service. As mentioned, the technical folks are responsible for implementing a cost control platform to optimize their cloud spend, and as such it’s all about ‘show me’, not pretty reports and graphs. What we mean is that, as an action-oriented platform, they want us to integrate easily into their continuous integration and delivery processes through a fully functional API, but also to provide a simple UI for the non-techies to ensure self-service. At the infrastructure layer it’s about what you can do with and through DevOps tools like Slack, Atlassian, and Jenkins, and at the enterprise level with SSO providers such as Ping, Okta, and Microsoft – repeating themes over and over again, regardless of the cloud provider.

Observation Number Six: looking ahead, it’s about stacks. As the idea of microservices continues to take hold, more developers are utilizing multiple instances or services to deploy a single application or environment. In years past, the bottleneck for implementing such groups of servers or databases was deployment time, but modern configuration management tools (like Chef, Puppet, and Ansible) have made this a common strategy by turning infrastructure into code. However, managing these environments can remain challenging for humans. ParkMyCloud already allows logical groupings of instances for one-click scheduling, but we’re planning on taking this a step further by integrating with the deployment solutions to really tie it all together.

Obviously the trends in cloud computing we touch on are a mix of macro and micro, and are generally viewed through a cost control lens, but they do provide insight into the day-to-day of what we see and hear from the folks who operate and use cloud, from multinational enterprises to startups. By tracking these trends over time, we can help you keep on top of cloud best practices to optimize your IT budget, and we look forward to what the next two years of cloud computing will bring.


Was the Acquisition of Cloudyn About the Need to Manage Microsoft Azure? Sort of.

Perhaps you heard that Microsoft recently acquired Cloudyn in order to manage Microsoft Azure cloud resources, along with, of course, Amazon Web Services (AWS), Google Cloud Platform (GCP), and others. Why? Well, the IT landscape is becoming more and more a multi-cloud landscape. Originally this multi-cloud (or hybrid cloud) approach was about private and public cloud, but as we recently wrote here, the strategy we hear from large enterprises is becoming more about leveraging multiple public clouds, for a variety of reasons: risk management, vendor lock-in, and workload optimization seem to be the three main ones.


That said, according to TechCrunch and quotes from Microsoft executives, the acquisition is meant to provide Microsoft with a cloud billing and management solution that gives it an advantage over competitors (particularly AWS and GCP) as companies continue to pursue, drum roll please … a multi-cloud strategy. Additional benefits for Microsoft include visibility into usage patterns, adoption rates, and other cloud-related data points that they can leverage in the ‘great cloud war’ to come … GOT reference, of course.


Why are we writing about this? A couple of reasons. One, of course, is that this is a relevant event in the cloud management platform (CMP) space – really the first big cloud visibility and governance acquisition to date. The other acquisitions, by Dell (Enstratius), Cisco (CliQr), and CSC (ServiceMesh) for example, were more orchestration and infrastructure platforms than reporting tools. Second, this points to the focus enterprises have on cost visibility, cost management, and governance as they look to optimize their spend and usage, as one does with any utility. And third, this confirms a common ‘pushback’ from enterprises on adopting Azure more widely – “I am already using AWS, I don’t want to manage through yet another screen / console” – and shows that multi-cloud visibility and governance helps solve that problem.


Now, taking this one step further: the visibility, recommendations, and reporting are all well and good, but what about the actions that must be taken based on those reports, and integration into enterprise DevOps processes for automation and continuous cost control? That’s where something like Cloudyn falls short, and where a platform like ParkMyCloud kicks in:


  • Multi-cloud Visibility and Governance – check
  • Single Sign-On (SSO) – check
  • REST API for DevOps Automation – check
  • Policy Engine for Automated Actions (parking) – check
  • Real-time Usage and Savings data – check
  • Manage Microsoft Azure (AWS + GCP) – check


The next step in cloud cost control is automation and action, not just visibility and reporting. Let technology automate these tasks for you instead of just telling you about it.


New on ParkMyCloud: Notifications via Slack and Email

New on ParkMyCloud: you can now receive notifications about your environment and ParkMyCloud account via email as well as Slack and other webhooks. We’re happy to deliver this user-requested feature, and look forward to an improved user experience.

The notifications are divided into system-level notifications and user-level notifications, as outlined below.

Administrators: Configure Notifications of Account-Level Actions via Slack/Webhooks

Administrators can now set up shared account-level notifications for parking actions and/or system errors. You can choose to receive these notifications via Slack or a custom webhook.

These notifications include information about:

  • Parking Actions
    • Resource stop/start as a result of a schedule
    • Manual resource start/stop via toggles
    • Manual schedule snoozes
    • Attach/detach of schedules to resources
    • Manual changes to schedules
  • System Errors
    • Permissions issues, such as a lack of permissions on an instance or credential that prevents parking actions
    • Errors related to your cloud service provider, for example, errors due to service outages.

For instructions on how to configure these notifications, please see this article on our support portal.
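
If you're pointing the custom webhook at your own tooling, here's a minimal sketch of what a receiver might look like. The payload field names here are assumptions for illustration only – the actual notification format is covered in the support article above – but any small HTTP handler like this can fan notifications out to your own logging or chat systems:

```python
from flask import Flask, request

app = Flask(__name__)

@app.route("/parking-notifications", methods=["POST"])
def handle_notification():
    # Field names below are illustrative assumptions, not the documented schema.
    event = request.get_json(force=True) or {}
    event_type = event.get("type", "unknown")  # e.g., parking action vs. system error
    message = event.get("message", "")
    if event_type == "system_error":
        # Route errors somewhere visible, e.g., your incident channel or logger.
        app.logger.error("Cost-control error: %s", message)
    else:
        app.logger.info("Parking action: %s", message)
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)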

All Users: Get Notified via Email

While system-level notifications must be configured by an administrator, individual ParkMyCloud users can choose to set up email notifications as well. These notifications include the same information listed above for the teams you choose.

Email notifications will be sent as a rollup every 15 minutes. If no actions occur, you will not receive an email. For instructions on how to configure these notifications, please see this article on our support portal.

Let Us Know What You Think

To our current users: we look forward to your feedback on the notifications, and welcome any suggestions you have to improve the functionality and usability of ParkMyCloud.

If you aren’t yet using ParkMyCloud, you can get started here with a free trial.


Top Cloud Computing Trends: Cloud Cost Control

Enterprise Management Associates (EMA) just released a new report on the top cloud computing trends for hybrid cloud, containers, and DevOps in 2017. With this guide, they aim to give enterprises recommendations on implementing products and processes to address the top-priority trends.

First Priority Among Cloud Computing Trends: Cost Control

Of the 260 companies interviewed in EMA’s study, 42% named “cost control” as their number one priority. Here at ParkMyCloud, we weren’t surprised to hear that. As companies mature in their use of the cloud, cost control moves to the top of the list as their number one cloud-related priority.

EMA has identified a few key problems that contribute to the need for cloud cost control:

  • Waste – inefficient use of cloud resources
  • Unpredictable bills – cloud bills are higher than expected
  • Vendor lock-in – inability to move away from a cloud provider due to contractual or technological dependencies

Related to this is another item on EMA’s list of cloud computing trends: the demand for a single pane of glass for monitoring the cloud. This goes hand-in-hand with the need for cost control, as well as concerns about governance: if you can’t see it, you don’t know there’s a problem. However, it’s important to keep in mind that a pane of glass is only one step toward reaching a solution. You need to actually take action on your cloud environment to keep costs in control.

How to Implement Changes to Control Costs

To actually implement changes in your environment and control costs, EMA has provided a starting recommendation:

Consider simple tools with large impact: Evaluate tools that are quick to implement and help harvest “low-hanging fruit.”

In fact, EMA provided a list of the top 3 vendors it recommends as a "Rapid ROI Utility" – a list that includes ParkMyCloud.


EMA recommends these top tools, particularly the "rapid ROI tools," as a good starting point for controlling cloud costs, as each of them can easily be tried out and the results verified in a brief period of time. (If you're interested in trying out ParkMyCloud in your environment, we offer a 14-day free trial, during which you get to pocket the savings and try out a variety of enterprise-grade features like SSO, a Policy Engine, and API automation for continuous cost control.)

Download the report here to check out the full results from EMA.


New: Park AWS RDS Instances with ParkMyCloud

Now You Can Park AWS RDS Instances with ParkMyCloud

We’re happy to share that you can now park AWS RDS instances with ParkMyCloud!

AWS just recently released the ability to start and stop RDS instances. Now with ParkMyCloud, you can automate RDS start/stop on a schedule, so your databases used for development, testing, and other non-production purposes are only running when you actually need them – and you only pay for the hours you use. This is the first parking feature on the market that’s fully integrated with AWS’s new RDS start/stop capability.

You can also use ParkMyCloud’s policy engine to create rules that automatically assign your RDS instances to parking schedules and to teams, so they’re only accessible to the users who need them.
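
Under the hood, a parking schedule boils down to the start/stop primitive AWS just exposed. Here's a minimal boto3 sketch of that primitive (the instance identifier is a placeholder); ParkMyCloud layers the schedules, teams, and policies on top so you don't have to run calls like these by hand:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

DB_INSTANCE = "dev-test-db"  # placeholder identifier for a non-production database

def park_rds() -> None:
    """Stop the instance so it stops accruing instance-hours.
    (Note: storage is still billed while an RDS instance is stopped.)"""
    rds.stop_db_instance(DBInstanceIdentifier=DB_INSTANCE)

def unpark_rds() -> None:
    """Start the instance again when the team needs it."""
    rds.start_db_instance(DBInstanceIdentifier=DB_INSTANCE)
```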

Why it Matters

Our customers who use AWS have long asked for the ability to park RDS instances. In fact, RDS is the biggest area of cloud spend after compute, accounting for about 15-20% of an average user's bill, so the savings users can enjoy from parking RDS will be significant. On average, ParkMyCloud users save $140 per parked instance per month on compute – and as RDS instances cost significantly more per hour, the savings will be proportionally higher.

“We’ve used ParkMyCloud for over a year to reduce our EC2 spend, enjoying a 13X return on our yearly license fee – it’s literally saved us thousands of dollars on our AWS bill. We look forward to saving even more now that ParkMyCloud has added support for RDS start/stop!” – Anthony Suda, Release Manager/Senior Network Manager, Sundog.

How to Get Started

It’s easy to get started and park AWS RDS instances with ParkMyCloud.

If you don’t yet use ParkMyCloud, you can try it now for free. We offer a 14-day free trial of all ParkMyCloud features, after which you can choose to subscribe to a premium plan or continue parking your instances using ParkMyCloud’s free tier.

If you already use ParkMyCloud, you’ll need to check your AWS permissions and ParkMyCloud policies, then turn on the RDS feature via your settings page. You can find more information on our support page.

As always, we welcome your feedback about this new addition to ParkMyCloud, and anything else you’d like to see in the future.

Happy parking!


Cloud Access Control Policy – How to Balance Security and Access

Cloud access control policy can be a tricky balance. On the one hand, cloud security is a top concern among many cloud users we talk to. On the other, the ease, flexibility, and speed of the cloud can be sacrificed when users aren’t given the access they need to the resources they use.

Cloud Access Control Policy & Cloud Management Platforms

Internal cloud access control policy is a matter that can be determined within each organization – but what about when an organization wants to use an external cloud management platform? As mentioned, we constantly hear that cloud security ranks #1 or close to it in terms of enterprise priorities, yet when we look around we see a lot of divergence in what different cloud management products require.

Some literally require the keys to the kingdom before you can use their systems’ capabilities. You might just want to run some simple analytical reports, but the vendor starts from the perspective of requiring broad-ranging policy access, way beyond what’s required to do that job.

We have begun a survey of policy requirements across cloud management platforms, and from our research so far, it seems that the “principle of least privilege” is not as widely adopted in the market as it should be.

The Principle of Least Privilege

In the world of cybersecurity there is a widely known cloud access control policy concept called “the principle of least privilege.” In essence, it means that users of any system should only be granted the privileges they need to do their job. In the world of on-demand cloud computing, where resources are spun up and access shared within seconds, this principle is often stretched beyond its limit.

When designing ParkMyCloud, this concept was top-of-mind. We understood the need to assure clients that controlling their infrastructure with our product made their environments safer, not more vulnerable. What this means in practice is minimizing the number of policy permissions any user of the system needs to have to optimize and control their public cloud.

Each public cloud provider (AWS, Azure, Google Cloud Platform) has a unique set of policy controls used to manage how people access and utilize their company’s cloud infrastructure. These range from, at the low end, just allowing people to view things (not create, change, or terminate them) to, in essence, giving users the keys to the kingdom.
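
To make that concrete on AWS, here's a sketch of what a least-privilege policy scoped to parking might look like – enough to see instances and start/stop them, and nothing more. The action list and policy name are illustrative assumptions, not ParkMyCloud's published policy (check any vendor's documentation for the exact permissions it requests):

```python
import json
import boto3

# Illustrative least-privilege policy: describe, start, and stop only -
# no create, modify, or terminate permissions.
PARKING_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",
                "ec2:StartInstances",
                "ec2:StopInstances",
            ],
            "Resource": "*",
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="parking-least-privilege",  # placeholder name
    PolicyDocument=json.dumps(PARKING_POLICY),
)
```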

When evaluating and subscribing to cloud tools, you should demand that access controls are tightly enforced. ParkMyCloud uses the bare minimum of permissions needed to save you money in the cloud, so you can be sure that your infrastructure is secure and optimized for cost control. Keep your environment secure while still giving users the limited access they need to do their jobs efficiently and cost-effectively.


Is a multi-cloud strategy really just a risk mitigation decision?


Now that ParkMyCloud supports AWS, Azure, and Google, we’re starting to see more businesses who utilize a multi-cloud strategy. The question this raises is: why is a multi-cloud strategy important from a functional standpoint, and why are enterprises deploying this strategy?

To answer this, let’s define “multi-cloud”, as it means different things to different people. I appreciated this description from TechTarget, which defines multi-cloud as:

the concomitant use of two or more cloud services to minimize the risk of widespread data loss or downtime due to a localized component failure in a cloud computing environment. … A multi-cloud strategy can also improve overall enterprise performance by avoiding “vendor lock-in” and using different infrastructures to meet the needs of diverse partners and customers.

From our conversations with some cloud gurus and our customers, a multi-cloud strategy boils down to:

  • Risk Mitigation – low priority
  • Managing vendor lock-in (price protection) – medium priority
  • Optimizing where you place your workloads – high priority

Risk Mitigation 

Looking at our own infrastructure at ParkMyCloud, we run on AWS and use services including RDS, Route 53, SNS, and SES. In a risk mitigation exercise, would we look for like services in Azure and try to go through the technical work of mapping a 1:1 fit and building a hot failover there? Or would we simply use a different AWS region – which takes fewer resources and less time?

You don’t actually need multi-cloud to do hot failovers, as you can instead use different regions within a single cloud provider – but that’s of course betting on the fact that those regions won’t go down simultaneously. In our case we would have major problems if multiple AWS regions went down simultaneously, but if that happens we certainly won’t be the only one in that boat!
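
For that single-provider, multi-region flavor of failover, the mechanics are relatively straightforward. As a hedged sketch, here's what a DNS failover pair between two regions can look like with boto3 and Route 53 – the domain, hosted zone ID, IPs, and health check ID are all placeholders:

```python
from typing import Optional

import boto3

route53 = boto3.client("route53")

def upsert_failover_record(set_id: str, role: str, ip: str,
                           health_check_id: Optional[str]) -> None:
    """Create or update one half of a PRIMARY/SECONDARY DNS failover pair."""
    record = {
        "Name": "app.example.com.",       # placeholder domain
        "Type": "A",
        "SetIdentifier": set_id,
        "Failover": role,                 # "PRIMARY" or "SECONDARY"
        "TTL": 60,
        "ResourceRecords": [{"Value": ip}],
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    route53.change_resource_record_sets(
        HostedZoneId="Z123EXAMPLE",       # placeholder hosted zone ID
        ChangeBatch={"Changes": [{"Action": "UPSERT",
                                  "ResourceRecordSet": record}]},
    )

# Route 53 serves the primary region while its health check passes,
# and fails over to the secondary region when it doesn't.
upsert_failover_record("primary-us-east-1", "PRIMARY", "203.0.113.10",
                       "placeholder-health-check-id")
upsert_failover_record("secondary-us-west-2", "SECONDARY", "203.0.113.20", None)
```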

Furthermore, a hot failover from one cloud provider to another (say, between AWS and Google) would require a degree of coordination between the cloud providers, and of infrastructure and application integration, that is not widely available today.

Ultimately, risk mitigation just isn’t the most significant driver for multi-cloud.

Vendor Lock-in

What happens when your cloud provider changes their pricing? Or when your CIO says you will never be beholden to a single IT infrastructure vendor – like Cisco on the network, or HP in the data center? If you’re locked into one provider, you lose your negotiating leverage on price and support.

On the other hand, look at Salesforce. How many enterprises use multiple CRMs?

Do you then have to design and build your applications to undertake a multi-cloud strategy from the get-go, so that transitioning everything to a different cloud provider will be a relatively simple undertaking? The complexity of moving your applications across clouds over a couple of months is nothing compared to the complexity of doing a real-time hot failover when your service is down. For enterprises this might be doable, given enough resources and time. Frankly, we don’t see much of this.

Instead, we see customers using a multi-cloud strategy to design and build applications in the clouds best suited for optimizing those applications. By the way – you can then use this leverage to help prevent vendor lock-in.

Workload Optimization

Hot failovers may come to mind first when considering why you would want to go multi-cloud, but what about normal operations, when your infrastructure is running smoothly? Having access to multiple cloud providers lets your engineers pick the one that is most appropriate for the workload they want to deploy. By avoiding the “all or nothing” approach, IT leaders gain greater control over their different cloud services. They can pick and choose the product, service, or platform that best fits their requirements in terms of time-to-market or cost effectiveness, then integrate those services. This approach may also help avoid problems that arise when a single provider runs into trouble.

A multi-cloud strategy addresses several inter-related problems. It’s not just a technical avenue for hot failover; it includes vendor relationship management and the ability to optimize your workloads based on the strengths of your teams and each CSP’s infrastructure.

By the way – when you actually deploy your multi-cloud strategy, make sure you have a management plan in place up front. Too often, we hear from companies who deploy on multiple clouds but don’t have a way to see or compare them in one place. A multi-cloud dashboard that provides visibility across cloud providers, locations, and resources gives you the governance and control you need to get the most benefit out of a multi-cloud infrastructure.


Announcing Google Cloud Platform Cost Control with ParkMyCloud

Now Supporting Google Cloud Platform Cost Control

Today, we’re excited to announce that ParkMyCloud now supports Google Cloud Platform!

Amazon Web Services (AWS) customers have been using ParkMyCloud for automated cost control since the product launch in 2015, and Azure customers have enjoyed the same capabilities since earlier this year. With ParkMyCloud, you can automate on/off scheduling to ensure your resources are only running when you actually need them. Customers such as McDonald’s, Fox, Capital One, Sage Software, and Wolters Kluwer have already saved millions.

If you use multiple public cloud providers, you can manage them together on a single dashboard.

Why it Matters

With the addition of Google Cloud Platform, ParkMyCloud now provides continuous cost control for the three largest cloud providers in the $23 billion public cloud market. This means ParkMyCloud enables enterprises to eliminate wasted cloud spend – a $6 billion problem in 2017. See more in our official press release.

How Does ParkMyCloud Work on Google Cloud Platform?

It’s simple to get started using ParkMyCloud to manage your Google compute resources:

  1. Connect – Create a ParkMyCloud account – no credit card required – and connect to your Google Cloud Platform account
  2. Manage – Discover and manage all your cloud resources in a single view
  3. Park – Just click the schedule to automatically “Park” (stop) and start resources based on your needs.

If you’re new to ParkMyCloud, please see these additional resources:

  • ParkMyCloud Single Sign-On Integrations – integrate with Active Directory, Centrify, Google G-Suite, Okta, OneLogin, or Ping Identity for single sign-on to ParkMyCloud
  • Zero-Touch Parking – how to use the ParkMyCloud policy engine to create rules for schedules to be automatically applied
  • Resource Group Parking – create “logical groups” for your resources for sequenced startup and shutdown

See it In Action

We’re happy to schedule a demo for you to see ParkMyCloud in action – if you’re interested, please contact us.

Try Now for Free

You can get started now with a free 14-day trial of ParkMyCloud, with full access to premium features.

After your trial expires, you can choose to continue using the core parking functionality for free (forever!), or upgrade to use premium features such as the API, advanced reporting and SSO. Happy parking!


Continuous Integration and Delivery Require Continuous Cost Control

Today, we propose a new concept to add to the DevOps mindset: Continuous Cost Control.

In DevOps, speed and continuity are king. Continuous Operations, Continuous Delivery, Continuous Integration. Keep everything running and get new features in the hands of users quickly.

For some organizations, this approach leads to a mindset of “speed at any cost”. Especially in the era of easily consumable public cloud, this results in a habit of wasted spend and blown budgets – which may, of course, meet the goals for delivery. But remember that a goal of Continuous Delivery is sustainability. This applies to the coding and backend of the application, but also to the business side.

With that in mind, we get to the cost of development and operations. At some point in every organization’s lifecycle comes the need to control costs. Perhaps it’s when your system or product reaches a certain level of predictability or maturity – i.e., maintenance mode – or perhaps earlier, depending on your organization.

We all know that agility has helped companies create competitive advantage; but customers and others tell us it can’t be “agility at any cost.” That’s why we believe the next challenge is cost-effective agility. That’s what Continuous Cost Control is all about.

What is Continuous Cost Control?

Think of it as the ability to see and automatically take action on development and operations resources, so that the amount spent is a controlled factor and not merely a result. This should occur with no impact to delivery.

Think of the spend your department manages. It likely includes software license costs and true-ups and perhaps various service costs. If you’re using private cloud/on-premise infrastructure, you’ve got equipment purchases and depreciations, plus everything to support that equipment, down to the fuel costs for backup generators, to consider.

However, the second biggest line item (after personnel) for many agile teams is public cloud. Within this bucket, consider the compute costs, bandwidth costs, database costs, storage, transactions… and the list goes on.

While private cloud/on-premise infrastructure requires continuous monitoring and cost control, the problem becomes acute when you change to the utility model of the public cloud. Now, more and more people in your organization have the ability to spin up virtual servers. It can be easy to forget that every hour (or minute, depending on the cloud provider) of this compute time costs money – not to mention all the surrounding costs.

Continually controlling these costs means automating your cost savings at all points in the development pipeline. Early in the process, development and test systems should only be run while actually in use. Later, during testing and staging, systems should be automatically turned on for specific tests, then shut down once the tests are complete. During maintenance and production support, make sure your metrics and logs keep you updated on what is being used – and when.
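
As a sketch of that “turn on for the test, shut down after” step, here’s what it can look like with boto3 wrapped around a test command – the instance IDs and test command are placeholders:

```python
import subprocess

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
TEST_INSTANCES = ["i-0abc123def456"]  # placeholder IDs for a test environment

def run_tests_with_parking() -> int:
    """Start the test environment, run the suite, and park it again - even on failure."""
    ec2.start_instances(InstanceIds=TEST_INSTANCES)
    ec2.get_waiter("instance_running").wait(InstanceIds=TEST_INSTANCES)
    try:
        # Placeholder test command; swap in your real suite.
        return subprocess.run(["pytest", "tests/"], check=False).returncode
    finally:
        ec2.stop_instances(InstanceIds=TEST_INSTANCES)

if __name__ == "__main__":
    raise SystemExit(run_tests_with_parking())
```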

How to get started with Continuous Cost Control

While Continuous Cost Control is an idea that you should apply to your development and operations practices throughout all project phases, there are a few things you can do to start a cultural behavior of controlled costs.

  • Create a mindset. Apply the principles of DevOps to cloud cost control.
  • Take a few “easy wins” to automate cost control on your public cloud resources:
    • Schedule your non-production resources to turn off when not needed (see the sketch after this list)
    • Build in a process to “right size” your instances, so you’re not paying for more capacity than you need
    • Use alternate services besides basic compute where applicable. In AWS, for example, this includes Auto Scaling groups, Spot Instances, and Reserved Instances
  • Integrate cost control into your continuous delivery process. The public cloud is a utility that needs to be optimized from day one – or if not then, as soon as possible.
  • Analyze your development team’s usage patterns so you can apply rational schedules to your systems and increase adoption
  • Allow deviations from the normal schedules, but make sure your systems revert to the schedule when possible
  • Be honest about what is being used, and don’t just leave things up for convenience
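
As a starting point for the first “easy win” above, the scheduling logic itself is simple – the hard part is applying it consistently. Here’s a minimal sketch of a weekday business-hours check that a scheduler could pair with the start/stop calls shown earlier (the hours and the lack of timezone handling are simplifying assumptions):

```python
from datetime import datetime

def should_be_running(now: datetime) -> bool:
    """Non-production default: run weekdays 8am-8pm, park nights and weekends."""
    is_weekday = now.weekday() < 5          # Mon=0 ... Fri=4
    in_business_hours = 8 <= now.hour < 20
    return is_weekday and in_business_hours

# A scheduler (cron, a Lambda function, etc.) can run this periodically and
# start or stop instances whenever the desired state changes.
print(should_be_running(datetime.now()))
```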

We hope this concept of Continuous Cost Control is useful to you and your organization – and we welcome your feedback.
