Cloud Operations Management: Is the cloud really making operations easier?

As cloud becomes more mature, the need for cloud operations management becomes more pervasive. In my world, it seems pretty much like IT Operations Management (ITOM) from decades ago. In the way-back machine I used to work at Micromuse, the Netcool company, which was acquired by IBM Tivoli, the Smarter Planet company, which then turned Netcool into Smarter Cloud … well you get the drift. Here we are 10+ years later, and IT = Cloud (and maybe chuck in some Watson).

Cloud operations management is the process of designing, overseeing, controlling, and subsequently redesigning cloud operational processes. It covers the management of hardware, software, and network infrastructure to promote an efficient and lean cloud.

Analytics is heavily involved in cloud operations management and used to maximize visibility of the cloud environment, which gives the organization the intelligence required to control the resources and running services confidently and cost-effectively.

Cloud operations management can:

  • Improve efficiency and minimize the risk of disruption
  • Deliver the speed and quality that users expect and demand
  • Reduce the cost of delivering cloud services and justify your investments

Since ParkMyCloud helps enterprises control cloud costs, we mostly talk to customers about the part of cloud operations concerned with running and managing resources. We are all about that third bullet – reducing the cost of delivering cloud services and justifying investments. We strive to accomplish that while also helping with the first two bullets to really maximize the value the cloud brings to an enterprise.

So what’s really cool is when we get to ask people what tools they are using to deploy, secure, govern, automate, and manage their public cloud infrastructure. Those are the tools they want us to integrate with as part of their cost optimization efforts, and they help us understand the roles operations folks now play in public cloud (CloudOps).

And no, it’s not easier to manage the cloud. In fact, I would say it’s harder. The cloud provides numerous benefits – agility, time to market, OpEx vs. CapEx, etc. – but you still have to automate, manage and optimize all those resources. The pace of change is mind-boggling – AWS advertises 150+ services now, from basic compute to AI, and everything in between.

So who are these people responsible for cloud operations management? Their titles tend to be DevOps, CloudOps, IT Ops and Infrastructure-focused, and they are tasked with operationalizing their cloud infrastructure while teams of developers, testers, stagers, and the like are constantly building apps in the cloud and leveraging a bottom-up tools approach. Ten years ago, people could not just stand up a stack in their office and have at it, but they sure as hell can now.

So what does this look like in the cloud? I think KPMG did a pretty good job with this graphic and generally hits on the functional buckets we see people stick tools into for cloud operations management.

So how should you approach your cloud operations management journey? Let’s revisit the goals from above.

  1. Efficiency – Automation is the name of the game. Narrow in on the tools that provide automation to free up your team’s development time.
  2. Deliverability – See the point above. When your team has time, they can focus on delivering the best possible product to your customers.
  3. Cost control – Think of “continuous cost control” as a companion to continuous integration and continuous delivery. This area, too, can benefit from automated tools – learn more about continuous cost control.

 


$12.9 Billion in wasted cloud spend this year.

Wake up and smell the wasted cloud spend. The cloud shift is not exactly a shift anymore; it’s a full-blown transition. It’s less of a “disruption” to the IT market and more of an expectation. And with enterprises following a visible path toward the cloud, it’s clear that their IT spend is heading in the same direction: up.

Enterprises have a unique advantage as their cloud usage continues to grow and evolve. Visibility into where IT spend is going creates a great opportunity to optimize resources, and one of the best ways to do that is by identifying and preventing cloud waste.

So, how much cloud waste is out there and how big is the problem? What difference does this make to the enterprises adopting cloud services at an ever-growing rate? Let’s take a look.

The State of the Cloud Market in 2018

The numbers don’t lie. For a real sense of how much wasted cloud spend there is, the first step is to look at how much money enterprises are spending in this space at an aggregate level.

Gartner’s latest IT spending forecast predicts that worldwide IT spending will reach $3.7 trillion in 2018, up 4.5 percent from 2017. Of that number, the portion spent in the public cloud market is expected to reach $305.8 billion in 2018, up $45.6 billion from 2017.

The last time we examined the numbers back in 2016, the global public cloud market was sitting at around $200 billion, and Gartner had predicted that the cloud shift would affect $1 trillion in IT spending by 2020. Well, with an updated forecast and more than $100 billion later, growth could very well exceed predictions.

The global cloud market and the portion attributed to public cloud spend are what give us the ‘big picture’ of the cloud shift, and it just keeps growing, and growing, and growing. You get the idea. To start understanding wasted cloud spend at an organizational level, let’s break this down further by looking at an area that Gartner says is driving a lot of this growth: infrastructure as a service (IaaS).

Wasted Cloud Spend in IaaS

As enterprises increasingly turn to cloud service providers like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) to provide compute resources for hosting components of their infrastructures, IaaS plays a significant role in both cloud spend and cloud waste.

Of the forecasted $305.8 billion public cloud market for 2018, $45.8 billion will be spent on IaaS, two-thirds of which goes directly to compute resources. This is where we get into the waste part:

  • 44% of compute resources are used for non-production purposes (i.e. development, staging, testing, QA)
  • The majority of servers used for these functions only need to run during the typical 40-hour work week (Monday through Friday, 9 to 5) and do not need to run 24/7
  • Cloud service providers are still charging you by the hour (or minute, or even by the second) for providing compute resources

The bottom line: for the other 128 hours of the week (or 7,680 minutes, or 460,800 seconds), you’re getting charged for resources you’re not even using. And that’s a large chunk of your waste!
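The arithmetic is easy to sanity-check. Here’s a quick sketch in plain Python (nothing ParkMyCloud-specific assumed):

```python
# Hours in a full week vs. the 40-hour work week described above
HOURS_PER_WEEK = 7 * 24   # 168
WORK_HOURS = 5 * 8        # 40 (Monday through Friday, 9 to 5)

idle_hours = HOURS_PER_WEEK - WORK_HOURS  # 128
idle_minutes = idle_hours * 60            # 7,680
idle_seconds = idle_minutes * 60          # 460,800

# Fraction of the week you pay for but don't use
idle_fraction = idle_hours / HOURS_PER_WEEK
print(f"{idle_fraction:.0%} of the week is idle")  # 76% of the week is idle
```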

What You Can Do to Prevent Wasted Cloud Spend

Turn off your cloud resources.

The easiest and fastest way to save money on idle cloud resources is simply not to use them. In other words, turn them off. If you think of the cloud as a utility like electricity, it’s as simple as turning off the lights every night and whenever you’re not at home. With ParkMyCloud you can automatically schedule your cloud resources to turn off when you don’t need them, like nights and weekends, and save 65% or more on your monthly bill with AWS, Azure, and Google. Wham, bam.
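As a rough sanity check on that savings figure, here’s a simple calculation (illustrative only; actual savings depend on your schedules and instance rates):

```python
def parking_savings(on_hours_per_week: float, total_hours: float = 168) -> float:
    """Fraction of an always-on bill saved by parking outside the on-hours."""
    return 1 - on_hours_per_week / total_hours

# An instance running 12 hours a day, weekdays only (e.g. 7am-7pm Mon-Fri)
weekday_12h = 12 * 5  # 60 on-hours per week
print(f"{parking_savings(weekday_12h):.0%} saved")  # 64% saved
```

A slightly tighter schedule pushes the number past the 65% mark, which is roughly where that figure lands.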

Turn on your SmartParking.

You already know that you don’t need your servers to be on during nights and weekends, so you shut them off. That’s great, but what if you could save even more with valuable insight and information about your exact usage over time?

With ParkMyCloud’s new SmartParking feature, the platform will track your utilization data, look for patterns and create recommended schedules for each instance, allowing you to turn them off when they’re typically idle.

There’s a lot of cloud waste out there, but there’s also something you can do about it: try ParkMyCloud today.


Yeah, Yeah, Yeah we Park %$#@, but what really matters to Enterprises? – Frequently Asked Questions

Here at ParkMyCloud we get to do product demos for a lot of great companies all over the world, from startups to Fortune 500s, in many different industries – Software, IT, Financial, Media, Food and Beverage, and more. As we talk to industry analysts and venture capitalists, they always ask about vertical selling and the like – we used to do this back at Micromuse, where we had Federal, Enterprise, Service Provider and SMB sales teams, for example. But here at ParkMyCloud we notice that, in general, the questions from enterprises are vertical-agnostic. Since cloud is the great IT equalizer in my book, we decided to summarize the 8 Most Frequently Asked Questions we get from prospects of all shapes and sizes.

These are the more common questions we get beyond turning cloud resources off / on:

How does ParkMyCloud handle system patching?

Answer: The most common way of dealing with patching is to use our API. The workflow: log in through the API, get a list of the resources, choose the ones you want, and “snooze” their schedules (a temporary override of the schedule, if you haven’t played with that yet) for a couple of hours, or however long the patching takes. Once the schedule is snoozed, you can toggle the instance on and do the patching. After the patching is complete, you can either cancel the snooze to go back to the original schedule or wait for the snooze to finish and time out.

If your patching is done on a weekly basis, you could also just implement the patch times into the schedules so the instances turn on, say at 3am on Sunday.
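To make the steps concrete, here’s a rough sketch of that workflow. The `Resource` class below is a stand-in for illustration, not the real ParkMyCloud API client; the actual endpoint names and calls will differ.

```python
# Sketch of the snooze -> toggle on -> patch -> cancel snooze workflow.
class Resource:
    def __init__(self, name, schedule):
        self.name = name
        self.schedule = schedule  # the normal parking schedule
        self.snoozed = False
        self.running = False

    def snooze(self, hours):
        """Temporarily override the parking schedule."""
        self.snoozed = True
        self.snooze_hours = hours

    def toggle_on(self):
        self.running = True

    def cancel_snooze(self):
        """Return to the original schedule."""
        self.snoozed = False

def patch_window(resources, hours=2):
    for r in resources:
        r.snooze(hours)    # 1. suspend the schedule for the patch window
        r.toggle_on()      # 2. make sure the instance is up
        # 3. ...run your patching tooling against r here...
        r.cancel_snooze()  # 4. back to the normal schedule

db = Resource("db-qa-1", schedule="weekdays-9-5")
patch_window([db])
print(db.running, db.snoozed)  # True False
```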

How do I start and stop instances in a sequential order?

Answer: ParkMyCloud has a feature we call ‘Logical Groups’: you group cloud resources into a group or cluster within the platform and then assign the order in which you want them to stop and start. You can also set how long to wait before resource 1 starts/stops, then resource 2, and so forth. This way, your web server can stop first and the database can stop second, so all the connections close properly. As this feature is very popular, we have had many requests to fully automate it using our policy engine and tags. That’s a work in progress, and it will be way cool.
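The ordering logic is simple to picture. A minimal sketch (names and structure are illustrative, not the platform’s internals):

```python
import time

# A 'Logical Group' sketch: members start in their assigned order and
# stop in reverse, with an optional delay between each step.
def start_group(members, delay=0):
    for name, _ in sorted(members, key=lambda m: m[1]):
        print(f"start {name}")
        time.sleep(delay)

def stop_group(members, delay=0):
    for name, _ in sorted(members, key=lambda m: m[1], reverse=True):
        print(f"stop {name}")
        time.sleep(delay)

group = [("database", 1), ("web-server", 2)]
start_group(group)  # database first, then web-server
stop_group(group)   # web-server first, then database, so connections close cleanly
```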

My developers hate UIs. How do they manage the schedules without using your UI?

Answer: Yes, this is an easy one, but it always gets asked. If you are anti-UI or just don’t want to use yet another UI, you can manage your resources in ParkMyCloud through other channels, such as our API.

Can I govern user access and permissions?

Answer: Yes, we support Single Sign-On (SSO) and have a full Role-Based Access Control (RBAC) model in the platform that allows you to import users, add them to teams, and assign them roles. The common scenario here is ‘I only want my SAP QA team to have access to the cloud resources they need for that project and nothing else, and limit their permissions’ – handled.

Can I automatically assign schedules based on tags?

Answer: Yes, and in general this is what most companies do with ParkMyCloud. We have a Policy Engine where you can create policies that fully automate your cloud resource scheduling. Basically, the policy reads the AWS, Azure, or Google Cloud metadata that is brought into the platform, and based on those tags (or even other data like resource name, size, region, etc.) and the corresponding policy, we can automatically assign schedules to cloud resources. We take that a step further: those resources can also be automatically assigned to Teams and Users based on their roles (see RBAC).
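Conceptually, tag-driven assignment looks something like the sketch below. The policy shape and schedule names are invented for illustration; the actual Policy Engine configuration differs.

```python
# First-match policy lookup: each policy pairs a tag key/value with a schedule.
POLICIES = [
    {"tag": ("env", "dev"),  "schedule": "weekdays-8am-6pm"},
    {"tag": ("env", "test"), "schedule": "weekdays-9am-5pm"},
]

def assign_schedule(instance_tags):
    """Return the schedule of the first policy whose tag matches."""
    for policy in POLICIES:
        key, value = policy["tag"]
        if instance_tags.get(key) == value:
            return policy["schedule"]
    return None  # no policy matched; leave the instance unscheduled

print(assign_schedule({"env": "dev", "team": "sap-qa"}))  # weekdays-8am-6pm
print(assign_schedule({"env": "prod"}))                   # None
```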

You can only park stuff based on tags? That’s so weak!

Answer: Not so fast, my friend … I must admit we sort of threw this one in there, but it does come up quite often, and we recently solved this problem with our release of SmartParking, which allows you to bring in metric data, trend it over a period of time, and then automatically create schedules based on those usage patterns – cool stuff.

Can we pick which instances we bring into ParkMyCloud?

Answer: Sort of. Through their APIs, the cloud providers don’t allow you to choose which cloud resources in an account you bring into the platform: if you link a cloud account to ParkMyCloud, all the cloud resources in that account will populate (assuming our API supports those resources and the cloud provider allows you to ‘park’ them). But we do let you choose which accounts you bring into ParkMyCloud, so link as many or as few accounts as you wish. By the way, AWS recommends you create accounts based on function, like Production, Dev, Test, QA, etc., and then break that down even more granularly into Dev 1, Dev 2, Dev 3, and so on – this is ideal for ParkMyCloud.

Where is ParkMyCloud located?

Answer: Northern Virginia of course, in Sterling at Terminal 68 to be precise. It’s a co-working space we share with several other startups; we would also be remiss if we did not mention that this area is one of the finalist locations for Amazon’s HQ2 – it’s a hotbed of cloud and data center activity.

We hope this was helpful and would value your feedback on the 8 Most Frequently Asked Questions we get, and if yours are the same or different, or of course our favorite … have you thought of XYZ as a feature? Let us know at info@parkmycloud.com.


Introducing SmartParking: Automatic On/Off Schedules based on AWS CloudWatch Metrics

Today, we’re excited to bring you SmartParkingTM – automatic, custom on/off schedules for individual resources based on AWS CloudWatch metrics!

ParkMyCloud customers have always appreciated parking recommendations based on keywords found in their instance names and tags – for example, ParkMyCloud recommends that an instance tagged “dev” can be parked, as it’s likely not needed outside of a Monday-Friday workday.

Now, SmartParking will look for patterns in your utilization data from AWS CloudWatch and create recommended schedules for each instance to turn it off when it is typically idle. This minimizes idle time to maximize savings on your resources.

With SmartParking, you eliminate the extra step of checking in with your colleagues to make sure the schedules you’re putting on their workloads don’t interfere with their needs. Now you can receive automatic recommendations to park resources when you know they won’t be used.

SmartParking schedules are provided as recommendations, which you can then click to apply. This release supports SmartParking for AWS resources, with plans to add Azure and Google Cloud SmartParking.

Customize Your Recommendations Like Your 401(k)

Different users will have different preferences about what they consider “parkable” times for an instance. So, like your investment portfolios, you can choose to receive SmartParking schedules that are “conservative”, “balanced”, or “aggressive”. And like an investment, a bigger risk comes with the opportunity for a bigger reward.

If you’d like to prioritize the maximum savings amount, then choose aggressive SmartParking schedules. You will park instances – and therefore save money – for the most time, with the “risk” of occasional inconvenience by having something turned off when someone needs it. Your users can always log in to ParkMyCloud and override the schedule with the “snooze button” if they need to use the instance when it’s parked.

On the other hand, if you would like to ensure that your instances are never parked when they might be needed, choose a conservative SmartParking schedule. It will only recommend parked times when the instance is never used. Choose “balanced” for a happy medium.
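To illustrate the idea (not the actual SmartParking algorithm), a threshold-based sketch might look like this, where the thresholds and usage data are invented for the example:

```python
# Pick which hours of the day look safe to park, based on historical
# average CPU utilization and how aggressive you want to be.
THRESHOLDS = {"conservative": 0.0, "balanced": 5.0, "aggressive": 15.0}

def parkable_hours(avg_util_by_hour, mode="balanced"):
    """avg_util_by_hour: {hour_of_day: average CPU %}. Hours safe to park."""
    limit = THRESHOLDS[mode]
    return [h for h, util in avg_util_by_hour.items() if util <= limit]

usage = {9: 80.0, 13: 10.0, 20: 3.0, 2: 0.0}
print(parkable_hours(usage, "conservative"))  # [2] - only hours with zero use
print(parkable_hours(usage, "aggressive"))    # [13, 20, 2] - more savings, more risk
```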

What People are Saying: Save More, Easier than Ever

Several existing ParkMyCloud customers have previewed the new functionality. “ParkMyCloud has helped my team save so much on our AWS bill already, and SmartParking will make it even easier,” said Tosin Ojediran, DevOps Engineer at a FinTech company. “The automatic schedules will save us time and make sure our instances are never running when they don’t need to be.”

Already a ParkMyCloud user? Log in to your account to try out the new SmartParking. Note that you will need to have AWS CloudWatch metrics enabled for several weeks in order for us to see your usage trends and make recommendations. If you haven’t already, you will need to update your AWS policy.

New to ParkMyCloud? Start a free trial here.


Cloud Computing 101, the Holidays, DevOps Automation and Moscow Mules – How’s that for a mix!?

I’m back to thinking about Cloud Computing 101, DevOps automation, and the other topics that keep my mind whirring at night – a sure sign that the 2017 holiday season is now officially over. I kicked mine off with an Ugly Sweater Party and wrapped it up with the college bowl games. In between, we had my parents’ 50th wedding anniversary (congrats to them), work-related holiday functions, Christmas with family and friends, New Year’s Eve with friends, and even chucked in some work and skiing. My liver needs a break but I love those Moscow Mules! Oh, and I have a Fitbit now to tell me how much I sit on my arse all day and peck away at this damn laptop – thanks kids, love you :).

What does this have to do with the cloud, cost control, DevOps and ParkMyCloud? At the different functions and events I went to, people who know me and what we do here at ParkMyCloud asked how business was going. In short, it’s great! In case you didn’t notice, the public cloud is growing, and fast. According to this recent article in Forbes, IaaS is growing 36% year on year – giddy up! Enterprises all over the world use ParkMyCloud to automate cloud cost control as part of their DevOps process. In fact we have customers in 20+ countries now. And people from companies like Sysco Foods rave about the ease of use and cost savings provided by the platform.

Now, when I talked to folks who don’t know what we do or what the cloud is, it’s a whole different discussion. For example, here’s a conversation I had at a party with Lindsey – a fictitious name to protect the innocent (or perhaps it’s USA superstar skier Lindsey Vonn… you will never know.) I like to call this conversation and ones like it “Cloud 101.”

Lindsey: “Hey Jay, how’s it going?”

Jay: “Awesome, great to see you Lindsey. Staying fit I see. How’s the family?” (of course I am holding my Mule in my copper mug – love it!)

Blah blah blah – now to the good stuff.

Lindsey: “So what do you do now?”

Jay: “Do you know what the cloud is?”

Lindsey: “You mean like iTunes?”

Jay: “Sort of. You know all those giant buildings you see when driving around here in Ashburn (VA)? Those buildings are full of servers that run the apps that you use in everyday life. Do you use the Starbucks app?”

Lindsey: “Yes – I’m addicted to Peppermint Mochas.”

Jay: “I am an Iced Venti Skim Chai Tea person myself. So the servers in those data centers are what power the cloud. Starbucks develops apps in the cloud, and servers cost money when they’re running, just like the lights in your house. And like the lights in your house, those development servers don’t need to run all the time – only when people are actually using them. So we help companies like Starbucks turn them off when they are not being used. In short, we help companies save money in the cloud.”

Side note to Starbucks — maybe if you used ParkMyCloud to save on your cloud costs with Microsoft and AWS you could stop raising the price of my Iced Venti Skim Chai Tea Latte… just a thought.

It’s thanks to all our customers and partners that I’m able to have this Cloud Computing 101 conversation and include ParkMyCloud in it – with a special thanks to the “Big 3” cloud service providers – AWS, Azure and Google Cloud. Without them, we would not exist as there would not be a cloud to optimize. Kind of like me without my parents, so glad they came together.

Looking ahead to the rest of 2018, we will have lots to write about here at ParkMyCloud — multi-cloud is trending up, automated cloud cost control is trending up, and DevOps will make this all more efficient. And ParkMyCloud will introduce SmartParking, SmartSizing, support for AliCloud and more. It’s all about action and automation baby. Game of Thrones better be back in 2018, too.


3 Things We Learned at AWS re:Invent 2017 (An Insider’s Look from Jay)

The ParkMyCloud team has returned energized and excited from our annual trek to Las Vegas and our third AWS re:Invent conference. It’s an eclectic group of roughly 42K people, but we gleaned a ton of information by asking questions and listening to the enterprise cloud doers at our booth. These are the people actually moving, deploying and managing cloud services at the enterprises we consume goods and services from. They are also often the ones who start the ‘cloud first’ initiatives we hear so much about – after all, they led the public cloud revolution to AWS 10+ years ago.

I’m not going to write about all the announcements AWS made at re:Invent 2017 related to all the cool-kid buzzwords like Artificial Intelligence, Machine Learning, Blockchain, Quantum Computing, Serverless Architecture, etc. However, if you do want to read about those, please check out this nice recap from Ron Miller of TechCrunch.

Containers are so passé, they did not even make the cool kid list in 2017… but Microservices Architecture did. Huh, wonder if that’s a new phrase for containers?

For ParkMyCloud it’s a great event. We love talking to everyone there – they’re all cloud focused. They are either using AWS exclusively (born in the cloud), AWS plus another public cloud, AWS plus private cloud, or in some cases AWS plus another public cloud and private cloud – truly ‘multi-hybrid cloud’. We had a ton of great conversations with cloud users who are prospects, customers, technology partners, MSPs or swag hunters who want to learn how to automate their cloud cost control – our nirvana.

There were a ton of Sessions, Workshops and Chalk Talks, and long lines to get into the good ones. It’s up to you to define the good ones and reserve your spot ahead of time.

Of course, it’s not all work and no play. This year for re:Play we had DJ Snake – giddy up! And while you walked your miles through the various casinos, there were DJs scattered about spinning tunes for you. I describe re:Invent to my friends as an IT event where “millennials meet technology” – definitely not your father’s tech trade show. Having been to many of these IT tech trade shows around the world for 20+ years now, outside of the Mobile World Congress in Barcelona, re:Invent is hands down the coolest.

Not only because of the DJ’s and re:Play but because there is a real buzz there, people are part of the new world of IT, and the migration of enterprise services to the world’s #1 cloud provider. And of course the Pub Crawl and Tatonka chicken wing eating contest.

AWS is now so big that the Venetian/Palazzo can’t hold everyone anymore, so they have spread over to the MGM, Mirage, and Aria. AWS refers to this collection of locations as its ‘campus’ – interesting; the rest of us refer to it simply as Las Vegas :-).

BTW – bring your sneakers. It’s 1.5 miles or a 22 minute power walk, including a few bridges, from the MGM to the Venetian assuming no stops for a cold beverage. Speaking of which, the Starbucks line is crazy.

Oh, and the swag, holy mother of pearl, people literally walk by the booth with large tote bags stuffed full of swag – if you like swag, hit up the expo hall for your fill of tee shirts, hoodies, koozies, spinners, bottle openers, pens, flashlights, memory sticks, chargers, stickers, hats, socks, glasses, mints, Toblerone chocolate, and lots more!

Well, I probably need to tie this blog / rant back to the headline, so in that vein, here are the top three things we learned at this year’s AWS re:Invent:

  1. Cost control in 2018 will be about aggregating metrics and taking automated actions based on Machine Learning
  2. AWS talks a lot about advanced cloud services and PaaS, but a majority of the customers we talk to still use and spend most of their dollars on EC2, RDS and S3
  3. DevOps / CloudOps folks are in charge of implementing cost control actions and pick the tools they want to use to optimize cloud spend

See you next year – pre-book your Uber/Lyft or bring a scooter!


Cloud Service Provider Comparison – Who Will be the Next Big Provider? Part One: Alibaba

When making a cloud service provider comparison, you would probably think of the “big three” providers: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). Thus far, AWS has led the cloud market, but the other two are gaining market share, driving us to make comparisons between Azure vs AWS and Google vs AWS. But that’s not the whole story.

In recent years, a few other “secondary” cloud providers have made their way into the market, offering more options to choose from. Are they worth looking at, and could one of them become the next big provider?

Andy Jassy, CEO of AWS, says: “There won’t be just one successful player. There won’t be 30 because scale really matters here in regards to cost structure, as well as the breadth of services, but there are going to be multiple successful players, and who those are I think is still to be written. But I would expect several of the older guard players to have businesses here as they have large installed enterprise customer bases and a large sales force and things of that sort.”

So for our next cloud service provider comparison, we are going to do an overview of what could arguably be the next biggest provider in the public cloud market (after all, we need to add a 4th cloud provider to the ParkMyCloud arsenal):

Alibaba

Alibaba is a cloud provider not widely known in the U.S., but it’s taking China by storm and giving Amazon a run for its money in Asia. It’s hard to imagine a cloud provider (or e-commerce giant) more successful than Amazon, let alone a provider that isn’t part of the big three, but Alibaba has its sights set on surpassing AWS to dominate the worldwide cloud computing market.

Take a look at some recent headlines:

Guess Who’s King of Cloud Revenue Growth? It’s Not Amazon or Microsoft

Alibaba Just Had Its Amazon AWS Moment

Alibaba Declares War on Amazon’s Surging Cloud Computing Business

What we know so far about Alibaba:

  • In 2016, cloud revenue was $675 million, surpassing Google Cloud’s $500 million. First-quarter revenue was $359 million, rising to $447 million in the second quarter.
  • Alibaba was dubbed the highest ranking cloud provider in terms of revenue growth, with sales increasing 126.5 percent from 2015 ($298 million) to 2016
  • Gartner research places Alibaba’s cloud in fourth place among cloud providers, ahead of IBM and Oracle
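Those revenue and growth figures are internally consistent; a quick check:

```python
# Growth from the 2015 revenue figure to the 2016 figure cited above
revenue_2015 = 298  # $ millions
revenue_2016 = 675
growth = (revenue_2016 - revenue_2015) / revenue_2015
print(f"{growth:.1%}")  # 126.5%
```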

Alibaba Cloud entered cloud computing just three years after Amazon launched AWS. Since then, Alibaba has grown at a faster pace than Amazon, largely due to its domination of the Chinese market, and is now the 5th largest cloud provider in the world.

Alibaba’s growth is attributed in part to the booming Chinese economy, as the Chinese government continues digitizing, bringing its agencies online and into the cloud. In addition, as the principal e-commerce system in China, Alibaba holds the status as the “Amazon of Asia.” Simon Hu, senior vice president of Alibaba Group and president of Alibaba Cloud, claims that Alibaba will surpass AWS as the top provider by 2019.

Our Take

For the time being, Amazon still dominates the U.S. cloud market, with a company valuation exceeding $400 billion to Alibaba’s $250 billion. Still, Alibaba Cloud is growing at incredible speed, with triple-digit year-over-year growth over the last several quarters. As the dominant cloud provider in China, Alibaba is positioned to continue growing, and is still in its early stages of growth in the cloud computing market. Only time will reveal what Alibaba Cloud will do, but in the meantime, we’ll definitely be keeping a lookout. After all, we have customers in 20 countries around the world, not just in the U.S.

Next Up: IBM & Oracle

Apart from the big three cloud providers, Alibaba is clearly making a name for itself with a fourth place ranking in the world of cloud computing. While this cloud provider is clearly gaining traction, a few more have made their introduction in recent years. Here’s a snapshot of the next 2 providers in our cloud service provider comparison:

IBM

  • At the end of June 2017, IBM made waves when it outperformed Amazon in total cloud computing revenue at $15.1 billion to $14.5 billion over a year-long period
  • However, Amazon is still way ahead when it comes to the IaaS market
    • For 2016, Amazon had the highest IaaS revenue, followed by Microsoft, Alibaba, and Google, respectively. IBM did not make the top 5.
    • Alibaba had the highest IaaS growth rate, followed by Google, Microsoft, and Amazon, respectively.
  • IBM was the fourth biggest cloud provider – before Alibaba took over
  • In Q1 of 2017, Synergy rankings showed that IBM has 4 percent of the public cloud market share, just behind Alibaba’s 5 percent
    • AWS had 44 percent, Azure 11 percent, and Google Cloud 6 percent

Oracle

  • Oracle’s cloud business is still ramping up, particularly in terms of IaaS
  • In fiscal Q1 of 2018, growth was at 51 percent, down from a 60 percent average in the last four quarters
    • Q4 for fiscal 2017 was at 58 percent
  • Since last quarter, shares have gone down by 10 percent

When making a cloud service provider comparison, don’t limit yourself to the “big three” of AWS, Azure, and GCP. They might dominate the market now, but as other providers grow, innovate, and increase their following in the cloud wars – we’ll continue to track and compare as earnings are reported.


Complex Cloud Pricing Models Mean You Need Automated Cost Control

Cloud pricing models can be complex. In fact, it’s often difficult for public cloud users to decipher a) what they’re spending, b) whether they need to be spending that much, and c) how to save on their cloud costs. The good news is that this doesn’t need to be an ongoing battle. Once you get a handle on what you’re spending, you can automate the cost control process to ensure that you only spend what you need to.

By the way, I recently talked about this on The Cloudcast podcast – if you prefer to listen, check out the episode.

All Cloud Pricing Models Require Cost Management

The major cloud service providers – Amazon Web Services, Microsoft Azure, and Google Cloud Platform – offer several pricing models for compute services: by usage, Reserved, and Spot pricing.

The basic model is by usage – typically this has been per-hour, although AWS and Google both recently announced per-second billing (more on this next week). This requires careful cost management, so users can determine whether they’re paying for resources that are running when they’re not actually needed. That could mean paying for non-production instances on nights and weekends when no one is using them, or paying for oversized instances that are not optimally utilized.
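Billing granularity alone can change what you pay for the same workload. A quick illustration with an invented hourly rate:

```python
# Same 61-minute workload under per-hour vs. per-second billing.
RATE_PER_HOUR = 0.10  # $/hour, hypothetical on-demand rate

def cost_hourly(seconds):
    hours_billed = -(-seconds // 3600)  # round up to whole hours
    return hours_billed * RATE_PER_HOUR

def cost_per_second(seconds):
    return seconds * RATE_PER_HOUR / 3600

run = 61 * 60  # a 61-minute job, in seconds
print(round(cost_hourly(run), 4))      # billed for 2 full hours
print(round(cost_per_second(run), 4))  # billed only for time actually used
```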

Then there are Reserved Instances, which allow you to pre-pay partially or entirely. The billing calculation is done on the back end, so it still requires management effort to ensure that the instances you are running are actually eligible for the Reserved Instances you’ve paid for.

As to whether these are actually a good choice for you, see the following blog post: Are AWS Reserved Instances Better Than On-Demand? It’s about AWS Reserved Instances, although similar principles apply to Azure Reserved Instances.

Spot instances allow you to bid on and use spare compute capacity at a steep discount, but their inherent risk of interruption means that you have to build fault-tolerant applications in order to take advantage of this cost-saving option.

However You’re Paying, You Need to Automate

The bottom line is that while visibility into the costs incurred by your cloud pricing model is an important first step, in order to actually reduce and optimize your cloud spend, you need to be able to take automated actions to reduce infrastructure costs.

To this end, our customers told us that they would like the ability to park instances based on utilization data. So, we’re currently developing this capability, which will be released in early December. Following that, we will add the ability for ParkMyCloud to give you right sizing recommendations – so not only will you be able to automatically park your idle instances, you’ll also be able to automatically size instances to correctly fit your workloads so you’re not overpaying.
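As a back-of-the-envelope illustration of what parking based on utilization data can mean, here is a sketch of the decision logic. This is not ParkMyCloud’s actual algorithm; the CPU threshold and minimum sample count are assumptions for illustration:

```python
# Sketch of a utilization-based parking decision (illustrative only):
# park an instance when its recent average CPU stays below a threshold,
# with a minimum sample count so we never act on sparse data.

from statistics import mean

def should_park(cpu_samples: list[float],
                threshold_pct: float = 5.0,
                min_samples: int = 12) -> bool:
    """Return True if average CPU over the window is below the threshold."""
    if len(cpu_samples) < min_samples:
        return False  # not enough data to decide safely
    return mean(cpu_samples) < threshold_pct

# A dev box idling overnight vs. a busy production server:
print(should_park([1.2, 0.8, 2.0] * 4))     # True  (avg ~1.3%)
print(should_park([40.0, 55.0, 60.0] * 4))  # False (avg ~51.7%)
```

A real implementation would pull these samples from the cloud provider’s monitoring metrics and combine the decision with schedules and user overrides, but the core idea is this simple threshold check.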

Though cloud pricing can be complicated, with governance and automated savings measures in place, you can put cost worries to the back of your mind and focus on your primary objectives.


Google Cloud Platform vs AWS: Is the answer obvious? Maybe not.

Google Cloud Platform vs AWS: what’s the deal? A few months ago, we asked the same question about Azure vs AWS. While Microsoft continues to see growth, and Amazon maintains a steady lead among cloud providers, Google is stepping in. Now that Google Cloud Platform has solidly secured its spot to round out the “big three” cloud providers, we think it’s time to take a closer look and see how the underdog matches up to the 800-pound gorilla.

Is Google Cloud catching up to AWS?

As they’ve been known to do, Amazon, Google, and Microsoft all released their recent quarterly earnings on the same day. At first glance, the headlines tell it all:

The natural conclusion is that AWS continues to dominate in the cloud war. With all major cloud providers reporting earnings at the same time, we have an ideal opportunity to examine the numbers and determine if there’s more to the story. Here’s what the quarterly earning reports tell us:

  • AWS reported $4.6 billion in revenue for the quarter and is on its way to $18 billion in revenue for the year, a 42% year-over-year increase, taking the top spot among cloud providers
  • Google’s revenue has cloud sales lumped together with revenue from the Google Play app store, summing up to a total of $3.4 billion for the last quarter
  • Although Google did not report specific revenue for Google Cloud Platform (GCP), Canalys estimates earnings at $870 million for the quarter – a 76% year-over-year growth
  • It’s also important to note that Google is just getting started. Also included in their report was an increase in new hires – a total of 2,495 in the last quarter – most of them in cloud positions

The Obvious: Google is not surpassing AWS

When it comes to Google Cloud Platform vs AWS, presently we have a clear winner. Amazon continues to have the advantage as the biggest and most successful cloud provider on the market. While AWS is now growing at a slower rate than both Google Cloud and Azure, Amazon’s growth is still more impressive given that it has the largest market share of the three. AWS is the clear competitor to beat as the first successful cloud provider, with the widest range of services and a strong familiarity among developers.

The Less Obvious: Google is gaining ground

While it’s easy to write off Google Cloud Platform, AWS is not untouchable. Let’s not forget that 76% year-over-year growth is nothing to scoff at. AWS has already solidified itself in the cloud market, but Google Cloud is just beginning to take off.

Where is Google actually gaining ground?

We know that AWS is at the forefront of cloud providers today. At the same time, AWS is now only one among three major cloud providers. Google Cloud Platform has more in store for its cloud business in 2018.

Google’s stock continues to rise. With 2,495 new hires added to the headcount, the vast majority of them in cloud-related jobs, it’s clear that Google is serious about expanding its role in the cloud market. Deals have been made with major retailer Kohl’s and payments-processing giant PayPal. Google CEO Sundar Pichai lists the cloud platform as one of the top three priorities for the company, confirming that they will continue expanding their cloud sales headcount.

In discussing Google’s recent quarterly earnings, Pichai added his thoughts on why he believes the Google Cloud Platform is on a set path for strong growth. He credits their success to customer confidence in Google’s impressive technology and a lead in machine learning, naming the company’s open-source software TensorFlow as a prime example. Another key component to growth is strategic partnerships, such as the recent announcement of a deal with Cisco, in addition to teaming up with VMware and Pivotal.

Driving Google’s growth is also the fact that the cloud market itself is growing fast. The move to the cloud has prompted large enterprises such as Home Depot Inc. and Target Corp. to rely on a combination of cloud vendors when building their applications. Home Depot in particular uses both Azure and Google Cloud Platform, and a spokesman for the home improvement retailer explains why that was intentional: “Our philosophy here is to be cloud agnostic, as much as we can.” This philosophy goes to show that as long as there is more than one major cloud provider in the mix, enterprises will continue trying, comparing, and adopting more than one at a time, making way for Google Cloud to gain further ground.

Andy Jassy, CEO of AWS, put it best:

“There won’t be just one successful player. There won’t be 30 because scale really matters here in regards to cost structure, as well as the breadth of services, but there are going to be multiple successful players, and who those are I think is still to be written. But I would expect several of the older guard players to have businesses here as they have large installed enterprise customer bases and a large sales force and things of that sort.”

Google Cloud Platform vs. AWS: Why does it matter?

Google Cloud Platform vs AWS is only one battle to consider in the ongoing cloud war. The truth is, market performance is only one factor in choosing the best cloud provider, and as we always say, the specific needs of your business are what will drive your decision.

What we do know: the public cloud is not just growing, it’s booming.

Referring back to our Azure vs AWS comparison, the basic questions still remain the same when it comes to choosing the best cloud provider:

  • Are the public cloud offerings to new customers easily comprehensible?
  • What is the pricing structure and how much do the products cost?
  • Are there adequate customer support and growth options?
  • Are there useful surrounding management tools?
  • Will our DevOps processes translate to these offerings?
  • Can the PaaS offerings speed time-to-value and simplify things sufficiently to drive stickiness?
  • What security measures does the cloud provider have in place?

Right now AWS is certainly in the lead among major cloud providers, but for how long? We will continue to track and compare cloud providers as earnings are reported, offers are increased, and price options grow and change. To be continued in 2018…


3 Things Companies Using Cloud Computing Should Make Sure Their Employees Do

These days, there’s a huge range of companies using cloud computing, especially public cloud. While your infrastructure size and range of services used may vary, there are a few things every organization should keep in mind. Here are the top 3 we recommend for anyone in your organization who touches your cloud infrastructure.

Keep it Secure

OK, so this one is obvious, but it bears repeating every time. Keep your cloud access secure.

For one, make sure your cloud provider keys don’t end up on GitHub… it’s happened too many times.

(There are a few open-source tools that can help scan your GitHub repositories for this very problem; check out AWSLabs’ git-secrets.)
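To give a sense of what tools like git-secrets look for, here is a minimal illustrative scanner for the well-known AWS access key ID pattern. This sketch is not a substitute for git-secrets, which also handles secret keys, custom patterns, and commit hooks:

```python
# Minimal sketch of secret scanning: AWS access key IDs have a well-known
# shape ("AKIA" followed by 16 uppercase alphanumerics). The key below is
# the documentation example key, not a real credential.

import re

AWS_ACCESS_KEY_RE = re.compile(r"\b(AKIA[0-9A-Z]{16})\b")

def find_aws_keys(text: str) -> list[str]:
    """Return any strings in `text` that look like AWS access key IDs."""
    return AWS_ACCESS_KEY_RE.findall(text)

config = 'aws_access_key_id = "AKIAIOSFODNN7EXAMPLE"\n'
print(find_aws_keys(config))  # ['AKIAIOSFODNN7EXAMPLE']
```

Running a check like this (or, better, git-secrets itself) as a pre-commit hook is what keeps keys from ever reaching a public repository in the first place.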

Organizations should also enforce user governance and use Role-Based Access Control (RBAC) to ensure that only the people who need access to specific resources can access them.

Keep Costs in Check

There’s an inherent problem created when you make computing a pay-as-you-go utility, as public cloud has done: it’s easy to waste money.

First of all, the default for computing resources is that they’re “always on” unless you specifically turn them off. That means you’re always paying for them.

Additionally, over-provisioning is prevalent – 55% of all public cloud resources are not correctly sized for their workloads. The last statistic is perhaps the most brutal: 15% of spend is on resources that are no longer used at all. It’s like discovering that you’re still paying for that gym membership you signed up for last year, despite the fact that you haven’t set foot inside. Completely wasted money.

In order to keep costs in check, companies using cloud computing need to ensure they have cost controls in place to eliminate and prevent cloud waste – which, by the way, is the problem we set out to solve when we created ParkMyCloud.

Keep Learning

Third, companies should ensure that their IT and development teams continue their professional development on cloud computing topics, whether by taking training courses or attending local Meetup groups to network with and learn from peers. We have a soft spot in our hearts for our local AWS DC Meetup, which we help organize, but there are great meetups in cities across the world on AWS, Azure, Google Cloud, and more.

Best yet, go to the source itself. Microsoft Azure has a huge events calendar, though AWS re:Invent is probably the biggest. It’s an enormous gathering for learning, training, and announcements of new products and services (and it’s pretty fun, too).

We’re a sponsor of AWS re:Invent 2017 – let us know if you’re going and would like to book time for a conversation or demo of ParkMyCloud while you’re there, or just stop by booth #1402!


3 Enterprise Cloud Management Challenges You Should Be Thinking About

Enterprise cloud management is a top priority. As the shift towards multi-cloud environments continues, so has the need to consider the potential challenges. Whether you already use the public cloud, or are considering making the switch, you probably want to know what the risks are. Here are three you should be thinking about.

1. Multi-Cloud Environments

As the ParkMyCloud platform supports AWS, Azure, and Google, we’ve noticed that multi-cloud strategies are becoming increasingly common among enterprises. There are a number of reasons why it would be beneficial to utilize more than one cloud provider. We have discussed risk mitigation as a common reason, along with price protection and workload optimization. As multi-cloud strategies become more popular, the advantages are clear. However, every strategy comes with its challenges, and it’s important for CIOs to be aware of the associated risks.

Without cloud management tools, multi-cloud management is complex and sometimes difficult to navigate. Different cloud providers have different pricing models, product features, APIs, and terminology. Compliance requirements are also a factor that must be considered when dealing with multiple providers: meeting and maintaining requirements for one cloud provider is complicated enough, let alone several. And don’t forget you need a single pane of glass to view your multi-cloud infrastructure.

2. Cost Control

Cost control is a first priority among cloud computing trends. In a research study, Enterprise Management Associates (EMA) identified key reasons why there is a need for cloud cost control; among them were inefficient use of cloud resources, unpredictable billing, and contractual obligation or technological dependency.

Managing your cloud environment and controlling costs requires a great deal of time and strategy, taking away from the initiatives your enterprise really needs to be focusing on. The good news is that we offer a solution to cost control that can save 65% or more on your monthly cloud bills, simply by parking your idle cloud resources. ParkMyCloud was one of the top three vendors recommended by EMA as a Rapid ROI Utility. If you’re interested in seeing why, we offer a 14-day free trial.

3. Security & Governance

In discussing a multi-cloud strategy and its challenges, the bigger picture also includes security and governance. As we have mentioned, a multi-cloud environment is complex and requires native or third-party tools to maintain vigilance. Aside from legal compliance based on your company’s industry, the cloud also comes with standard security issues and, of course, the possibility of breaches. In this vein, customers often tell us they worry about too many users being granted console access to create and terminate cloud resources, which can lead to waste. A key here is limiting user access based on roles, or Role-Based Access Control (RBAC). At ParkMyCloud we recognize that visibility and control are important in today’s complex cloud world. That’s why our platform gives the sysadmin the ability to delegate access based on a user’s role and to authenticate via SSO using SAML integration. This approach brings security benefits without losing the appeal of a multi-cloud strategy.

Our Solution

Enterprise cloud management is an inevitable priority as the shift towards a multi-cloud environment continues. Multiple cloud services add complexity to the challenges of IT and cloud management. Cost control is time consuming and needs to be automated and monitored constantly. Security and governance is a must and it’s necessary to ensure that users and resources are optimally governed. As the need for cloud management continues to grow, cloud automation tools like ParkMyCloud provide a means to effectively manage cloud resources, minimize challenges, and save you money.


Cloud Optimization Tools = Cloud Cost Control (Part II)

A couple of weeks ago in Part 1 of this blog topic we discussed the need for cloud optimization tools to help enterprises with the problem of cloud cost control. Amazon Web Services (AWS) even goes as far as suggesting the following simple steps to control costs (which can also be applied to Microsoft Azure and Google Cloud Platform, of course with slightly different terminology):

    1. Right-size your services to meet capacity needs at the lowest cost;
    2. Save money when you reserve;
    3. Use the spot market;
    4. Monitor and track service usage;
    5. Use Cost Explorer to optimize savings; and
    6. Turn off idle instances (we added this one).

A variety of third-party tools and services have popped up in the market over the past few years to help with cloud cost optimization – why? Because upwards of $23B was spent on public cloud infrastructure in 2016, and spending continues to grow at a rate of 40% per year. Furthermore, depending on who you talk to, roughly 25% of public cloud spend is wasted or not optimized – that’s a huge market! If left unchecked, this waste problem is projected to triple to over $20B by 2020 – enter the vultures (full disclosure, we are also a vulture, but the nice kind). Most of these tools are lumped under the Cloud Management category, which includes subcategories like Cost Visibility and Governance, Cost Optimization, and Cost Control vendors – we are a cost control vendor, to be sure.

Why do you, an enterprise, care? Because there are unique and subtle differences between the tools that fit into these categories, so your use case should dictate where you go for what – and that’s what I am trying to help you with. So, why am I a credible source on this (and not just because ParkMyCloud is the best thing since sliced bread)?

Well, yesterday we had a demo with a FinTech company in California that was interested in Cost Control, or thought they were. It turns out that what they were actually interested in was Cost Visibility and Reporting; the folks we talked to were in Engineering Finance, so their concerns were primarily billing metrics, business unit chargeback for cloud usage, RI management, and dials and widgets to view everything related to AWS and GCP billing. Instead of trying to force a square peg into a round hole, we passed them on to a company in this space that’s better suited to solve their immediate needs. In return, the Finance folks are going to put us in touch with the FinTech Cloud Ops folks who care about automating cloud cost control as part of their DevOps processes.

This type of situation happens more often than not. We have a lot of enterprise customers using ParkMyCloud along with CloudHealth, CloudCheckr, Cloudability, and Cloudyn because in general, they provide Cost Visibility and Governance, and we provide actionable, automated Cost Control.

As this is our blog, and my view from the street – we have 200+ customers now using ParkMyCloud, and we demo to 5-10 enterprises per week. Based on a couple of generic customer use cases where we have strong familiarity, here’s what you need to know to stay ahead of the game:

  • Cost Visibility and Governance: CloudHealth, CloudCheckr, Cloudability and Cloudyn (now owned by Microsoft)
  • Reserved Instance (RI) management – all of the above
  • Spot Instance management – SpotInst
  • Monitor and Track Usage: CloudHealth, CloudCheckr, Cloudability and Cloudyn
  • Turn off (park) Idle Resources – ParkMyCloud, Skeddly, Gorilla Stack, BotMetric
  • Automate Cost Control as part of your DevOps Process: ParkMyCloud
  • Govern User Access to Cloud Console for Start/Stop: ParkMyCloud
  • Integrate with Single Sign-On (SSO) for Federated User Access: ParkMyCloud

To summarize, cloud cost control is important, and there are many cloud optimization tools available to assist with visibility, governance, management, and control of your single- or multi-cloud environments. However, there are very few tools that allow you to set up automated actions leveraging your existing enterprise tools like Ping, Okta, Atlassian, Jenkins, and Slack. Make sure you are not only focusing on cost visibility and recommendations, but also on action-oriented platforms to really get the best bang for your buck.


Cloud Optimization Tools = Cloud Cost Control

Over the past couple of years we have had a lot of conversations with large and small enterprises regarding cloud management and cloud optimization tools, all of whom were looking for cost control. They wanted to reduce their bills, just like any utility you might run at home — why spend more than you need to? Amazon Web Services (AWS) actively promotes optimizing cloud infrastructure, and where they lead, others follow. AWS even goes so far as to suggest the following simple steps to control AWS costs:

  1. Right-size your services to meet capacity needs at the lowest cost;
  2. Save money when you reserve;
  3. Use the spot market;
  4. Monitor and track service usage;
  5. Use Cost Explorer to optimize savings; and
  6. Turn off idle instances (we added this one).

It’s interesting to note the use of the word ‘control’ even though the section is labeled Cost Optimization.

So where is all of this headed? It’s great that AWS offers their own solutions, but what if you want automation in your DevOps processes, multi-cloud support (or plan to be multi-cloud), real-time reporting on these savings, and the ability to turn stuff off when you are not using it? Then you likely need a third-party tool to help with these tasks.

Let’s take a quick look at a description of each AWS recommendation above, and get a better understanding of each offering. Following this we will then explore if these cost optimization options can be automated as part of a continuous cost control process:

  1. Right-sizing – Both the EC2 Right Sizing solution and AWS Trusted Advisor analyze utilization of EC2 instances running during the prior two weeks. The EC2 Right Sizing solution analyzes all instances with a max CPU utilization less than 50% and determines a more cost-effective instance type for that workload, if available.
  2. Reserved Instances (RIs) – For certain services like Amazon EC2 and Amazon RDS, you can invest in reserved capacity. With RIs, you can save up to 75% over equivalent ‘on-demand’ capacity. RIs are available in three options: (1) all up-front, (2) partial up-front, or (3) no up-front payment.
  3. Spot – Amazon EC2 Spot instances allow you to bid on spare Amazon EC2 computing capacity. Since Spot instances are often available at a discount compared to On-Demand pricing, you can significantly reduce the cost of running your applications, grow your application’s compute capacity and throughput for the same budget, and enable new types of cloud computing applications.
  4. Monitor and Track Usage – You can use Amazon CloudWatch to collect and track metrics, monitor log files, set alarms, and automatically react to changes in your AWS resources. You can also use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health.
  5. Cost Explorer – AWS Cost Explorer gives you the ability to analyze your costs and usage. Using a set of default reports, you can quickly get started with identifying your underlying cost drivers and usage trends. From there, you can slice and dice your data along numerous dimensions to dive deeper into your costs.
  6. Turn off Idle Instances – “Park” your cloud resources by assigning them schedules of operating hours during which they will run or be temporarily stopped – i.e., parked. Most non-production resources (dev, test, staging, and QA) can be parked at nights and on weekends, when they are not being used. On the flip side, some batch-processing or load-testing applications run only during non-business hours, so they can be shut down during the day.
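The parking idea in step 6 can be sketched as a simple schedule check. The schedule format below is hypothetical, not any vendor’s actual schema; it just illustrates the two cases described above (weekday non-production hours, and an inverted nightly batch window):

```python
# Sketch of evaluating a parking schedule: each schedule lists the weekday
# hours a resource should be running; at all other times it is parked.

from datetime import datetime

# 0 = Monday ... 6 = Sunday; hours are inclusive start, exclusive end.
NONPROD_SCHEDULE = {d: (8, 20) for d in range(5)}  # weekdays 8:00-20:00
BATCH_SCHEDULE = {d: (20, 24) for d in range(7)}   # every night 20:00-24:00

def should_run(schedule: dict, when: datetime) -> bool:
    """True if the resource should be running at `when`."""
    window = schedule.get(when.weekday())
    if window is None:
        return False  # day not in schedule: parked all day
    start, end = window
    return start <= when.hour < end

print(should_run(NONPROD_SCHEDULE, datetime(2017, 11, 15, 10)))  # Wed 10:00 -> True
print(should_run(NONPROD_SCHEDULE, datetime(2017, 11, 18, 10)))  # Sat 10:00 -> False
```

A scheduler running this check periodically would then call the cloud provider’s stop/start APIs whenever a resource’s actual state disagrees with the schedule.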

Many of these AWS solutions offer recommendations, but they require manual effort to gain the benefits. This is why third-party solutions have seen widespread adoption; they include cloud management, cloud governance and visibility, and cloud optimization tools. In part two of this blog we will look at some of those tools, the benefits and approach of each, and the level of automation to be gained.


Cloud Webhooks – Notification Options for System Level Alerts to Improve your Cloud Operations

Webhooks are user-defined HTTP POST callbacks. They provide a lightweight mechanism for letting remote applications receive push notifications from a service or application, without requiring polling. In today’s IT infrastructure that includes monitoring tools, cloud providers, DevOps processes, and internally-developed applications, webhooks are a crucial way to communicate between individual systems for a cohesive service delivery. Now, in ParkMyCloud, webhooks are available for even more powerful cost control.
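A minimal sketch of the receiving end of such a webhook might look like the following. The payload fields here are assumptions for illustration; consult the actual webhook documentation for the real schema:

```python
# Sketch of a receiver for parking-style webhook notifications: when a
# "parking" event says a server stopped, tell your monitoring system to
# mute alerts for that host; when it starts, unmute them. The field names
# ("type", "action", "resource") are hypothetical.

import json

def handle_notification(body: bytes) -> str:
    """Map an incoming webhook POST body to a monitoring action."""
    event = json.loads(body)
    if event.get("type") == "parking":
        if event.get("action") == "stop":
            return f"mute alerts for {event['resource']}"
        if event.get("action") == "start":
            return f"unmute alerts for {event['resource']}"
    return "no-op"

payload = json.dumps({"type": "parking", "action": "stop",
                      "resource": "i-0abc123"}).encode()
print(handle_notification(payload))  # mute alerts for i-0abc123
```

In practice this function would sit behind an HTTP endpoint registered as the webhook URL, and the returned action would translate into a call to the monitoring tool’s API (e.g., a downtime/mute endpoint).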

For example, you may want to let a monitoring solution like Datadog or New Relic know that ParkMyCloud is stopping a server for some period of time, and therefore suppress that system’s alerts for the period the server will be parked – and, conversely, re-enable monitoring once the server is unparked (turned on). Another example would be to have ParkMyCloud post to a chatroom or dashboard when schedules have been overridden by users. We do this by enabling system notifications to our cloud webhooks.

Previously, only two options were provided when configuring system-level and user notifications in ParkMyCloud: System Errors and Parking Actions. We have added three new notification options for both system-level and user notifications. Descriptions of all five options are provided below:

  • System Errors – These are errors occurring within the system itself such as discovery errors, parking errors, invalid credential permissions, etc.
  • System Maintenance and Updates – These are the notifications provided via the banner at the top of the dashboard.
  • User Actions – These are actions performed by users in ParkMyCloud such as manual resource state toggles, attachment or detachment of schedules, credential updates, etc.
  • Parking Actions – These are actions specifically related to parking such as automatic starting or stopping of resources based on defined parking schedules.
  • Policy Actions – These are actions specifically related to configured policies in ParkMyCloud such as automatic schedule attachments based on a set rule.

We have made the options more granular to give you better control over which events you do and do not see.

These options can be seen when adding or modifying a channel for system level notifications (Settings > System Level Notifications). In the image shown below, a channel is being added.

Note: For additional information regarding these options, click on the Info Icon to the right of Notify About.

The new notification options are also viewable by users who want to set up their own notifications (Username > My Profile).  These personal notifications are sent via email to the address associated with your user.  Personal notifications can be set up by any user, while Webhooks must be set up by a ParkMyCloud Admin.

After clicking on Notifications, you will see the above options and may use the checkboxes to select the notifications you want to receive. You can also set each webhook to handle a specific ParkMyCloud team, then set up multiple webhooks to handle different parts of your organization. This offers maximum flexibility based on each team’s tools, processes, and procedures. Once finished, click on Save Changes. Any of these notifications can then be sent to your cloud webhook, and even to Slack, to ensure ParkMyCloud is integrated into your cloud management operations.



Interview: DevOps in AWS – How to Automate Cloud Cost Savings


We chatted with Ryan Alexander, DevOps Engineer at Decision Resources Group (DRG), about his company’s use of AWS and how they automate cloud cost savings. Below is a transcript of our conversation.

Hi Ryan, thanks for speaking with us. To start out, can you please describe what your company does?

Decision Resources Group offers market information and data for the medtech industry. For example, let’s say a medical graduate student is doing a thesis on Viagra use in the Boston area. They can use our tool to see information such as age groups, ethnicities, number of hospitals, and number of people who were issued Viagra in the city of Boston.

What does your team do within the company? What is your role?

I’m a DevOps engineer on a team of two. We provide infrastructure automation to the other teams in the organization. We report to senior tech management, which makes us somewhat of an island within the organization.

Can you describe how you are using AWS?

We have an infrastructure team internally. Once a server or infrastructure is built, we take over to build clusters and environments for what’s required. We utilize pretty much every tool AWS offers — EBS, ELB, RDS, Aurora, CloudFormation, etc.

What prompted you to look for a cost control solution?

When I joined DRG in December, there was a new cost saving initiative developing within the organization. It came from our CTO, who knew we could be doing better and wanted to see where we might be leaving money on the table.

How did you hear about ParkMyCloud?

One of my colleagues actually spoke with your CTO, Dale, at AWS re:Invent, and I had also heard about ParkMyCloud at DevOpsDays Toronto 2016. We realized it could help solve some of our cloud cost control problems and decided to take a look.

What challenges were contributing to the high costs? How has ParkMyCloud helped you solve them?

We knew we had a problem where development, staging, and QA environments were only used for 8 hours a day – but they were running for 24 hours a day. We wanted to shut them down and save money on the off hours, which ParkMyCloud helps us do automatically.

We also have “worker” machines that are used a few times a month, but they need to be there. It was tedious to go in and shut them down individually. Now with ParkMyCloud, I put those in a group and shut them down with one click. It is really just that easy to automate cloud cost savings with ParkMyCloud.

We also have security measures in place, where not everyone has the ability to sign in to AWS and shut down instances. Previously, if a team in another country needed servers started on demand while I was sleeping, they had to wait until I woke up the next morning, or I had to get up at 2 AM. Now that we’ve set up Single Sign-On, I can give the people who use those servers the rights to start and stop them themselves. This has been more efficient for everyone: I no longer have to babysit those servers and turn them on and off as needed.

With ParkMyCloud, we set up teams and users so they can only see their own instances, so they can’t cause a cascading failure because they can only see the servers they need.

Were there any unexpected benefits of ParkMyCloud?

When I started, I deleted 3 servers that had been sitting there doing nothing for a year, costing the company lots of money. With ParkMyCloud, that kind of thing won’t happen, because everything gets sorted into teams. We can see the costs by team and ask the right questions, like, “why is your team’s spend so high right now? Why are you ignoring these recommendations from ParkMyCloud to park these instances?”


We rely on tagging to do all of this. Tagging is life in DevOps.


How X-Mode Deals with Rising AWS Costs


We sat down with Josh Anton, CEO of X-Mode, a location technology company that has been experiencing rapid growth and rising AWS costs. We asked him about his company, which cloud services he uses, and how he goes about mitigating those costs.

Can you start by telling us about X-Mode and what you guys do?

X-Mode is a location platform that currently maps out 5-10% of the U.S. population on a monthly basis and 1-2% daily – about 3-6 million daily active users and 15-20 million monthly. X-Mode collects location-based data from applications and platforms used by these consumers, then develops consumer segments or attribution; our customers use the data to determine whether their advertising is effective and to develop target profiles. For example, based on the number and types of coffee shops a person has visited, we can assume they are a certain type of coffee drinker. Or a company like McDonald’s can determine whether its advertising is effective if an ad runs in a certain area and a person visits that restaurant in the next few days. The data has many applications.

How did you get this idea, Josh?

We started off as an app called Drunk Mode, which was founded and built while I was at the University of Virginia studying Marketing and IT. After about a year and a half, the app grew to about 1.5 million users by leveraging influencer marketing via Trend Pie and a student campus rep program at 50+ universities. In September of 2016, we realized that if we developed a location-based technology platform we could monetize and capitalize on the location data we collected from the Drunk Mode app. Along with input from our advisors, we developed a strategy to help other small apps by aggregating their data, crunching it, and packaging it up in real time to sell to ad agencies and retailers – acting almost as a data wholesaler and helping these small apps monetize their data as a secondary source of income.

Whose cloud services are you using, and how does X-Mode work?

We use Amazon Web Services (AWS) for all of our cloud infrastructure, primarily their EC2, RDS, and Elastic Beanstalk services. Our technology works by collecting and aggregating location data based on when and where people go on a daily basis. It is collected locally by iOS and Android devices and passed to AWS’s cloud using the Amazon API Gateway service. The cool thing is that we are able to pinpoint a person’s location to within feet of a retail location. The location data is batched and sent to our servers every 12 hours, and we package it up and license it out to our vendors. We process around 10 to 12 billion location-based records per month, and we have proprietary algorithms that make our processing very fast with almost no drain on the phone’s battery. Our customers are sent the data daily, and we use services like Lambda, RDS, and Elastic Beanstalk to make this as efficient as possible. We are now developing the functionality to better triangulate beacons so that we can pinpoint locations even more precisely and send location data within the hour, rather than within the day.

Why did you pick AWS?

We chose AWS because when X-Mode joined Fishbowl Labs (a startup accelerator run and sponsored by AOL in Northern Virginia), we were given $15,000 in free AWS credits. The free credits made me very loyal to Amazon’s service, and now the switching costs would be fairly high in terms of effort and dollars to move away from Amazon. So even though it’s expensive, we are here to stay, and we are adopting more of AWS’s advanced services to improve our platform’s performance and take advantage of their technology advances. Another reason we stay with AWS is that we know it is going to be there. We previously used a service called Parse.com that was acquired by Facebook, and a few years later they shut it down. Performance and stability (knowing the service will still exist 10 years from now) are very important to us.

Are you still using AWS credits?

No, we used those up many months ago. We have gone from spending a few hundred dollars a month to spending $25,000 or more a month. While that is a cost, it’s also a blessing, in that X-Mode is rapidly growing and scaling. Outside of the cost of people, this is our biggest monthly expense. ParkMyCloud was an easy choice, given that 75% or more of our AWS spend is on EC2 and RDS, and given ParkMyCloud’s ability to ‘park’ both services and its flexible governance model for our remote engineering team. So we are very excited about the savings ParkMyCloud will produce for us, along with some of the new design work we will be doing to make our platform even more efficient.

Are there other ways you are looking to optimize your AWS spend?  

We believe that we have to re-architect the system. We have actually done that three times given our rapid platform growth, but it is all about making sure that we are optimizing our import/export process. We are running our servers at maximum capacity to help get information to people, and are continually looking to make our operation more efficient. Along with using ParkMyCloud, we are focusing on general platform optimization to make sure we keep costs down, improve performance and innovate at a rapid pace.

What other tools do you use as part of your DevOps process?

Let’s keep in mind we are a startup, but we are getting more and more organized in terms of development cycles and have a solid delivery process. And yes, we use tools like Slack, Jira, Basecamp, Bitbucket, and Google Drive. Everything is SaaS-based, everything is in the cloud, and we follow an agile development process. On the sales and marketing side we are a solely millennial workforce working in the office, but our development team is basically stay-at-home dads distributed around the country, so planning and communication are key to our success. That’s where Slack and Jira come into play. In terms of processes, we are trying to implement a better QA process so we deliver thoroughly vetted code to our end users. We do a lot of development planning and mapping each quarter, so all of this is incredibly important to the growth of the X-Mode platform and to the success of our organization.


Trends in Cloud Computing – ParkMyCloud Turns Two, What’s New?

It’s not hard to start a company, but it’s definitely hard to grow and scale one. So, two years later, we thought we would discuss the trends in cloud computing that shape our growth and vision: what we see and hear as we talk to enterprises, MSPs, and industry pundits on a daily basis. First and foremost, we need to thank our customers, both free and paid, who use ParkMyCloud, save millions a year, actively engage with us in defining our roadmap, and have helped us develop the best damn cloud cost control solution in the market. We also thank the bloggers, analysts, and writers who share our story; given that we have customers on every continent (except Antarctica), their coverage has been extremely beneficial to us.

Observation Number One: the public cloud is here to stay. Given the CapEx investment needed to build and operate data centers all over the world, only the cash-rich companies will succeed at scale, so you need to figure out whether you want to be a single-cloud/multi-region or multi-cloud user. We discussed that in detail recently in this blog, and it really boils down to risk mitigation. Most companies we talk to are single cloud, BUT they do ask whether we support multi-cloud in case they diversify (we do: we support AWS, Azure, and Google).

Observation Number Two: AWS is king (duh). Well, they are, and they continue to innovate and grow at a record-setting pace. AWS just hit $4bn in quarterly revenue – that’s a $16bn run rate. AWS is like the new IBM: what CIO or CTO is going to get fired for moving their infrastructure to AWS to improve agility, attract millennial developers who want to innovate in the cloud, leverage the cloud ecosystem, and lower costs (we will address that one in a bit)? We released support for Azure and Google in 2017, and yet 75% or more of the new trials and customers we get use AWS, and their environments are almost always larger than those on Azure and Google. There is a reason Microsoft and Google do not release IaaS statistics. As for IBM and Oracle, they are riding the IaaS way-back machine.

Observation Number Three: cloud cost control is a real thing. It’s something enterprises really care about, and optimizing cloud spend as the bills grow is becoming increasingly important to the CFO and CIO. The focus is mainly on buying capacity in advance (which somewhat defeats the purpose of the pay-as-you-go model), rightsizing servers, since developers have a tendency to overprovision for their needs, turning stuff off when it’s not being used, and finding orphaned resources that are ‘lost’ in the cloud. Since about 65% of a bill is spent on compute (servers/instances), the focus is usually directed there first and foremost, as a reduction there has the largest impact on the bill.
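To make the “turning stuff off” lever concrete, here is the back-of-the-envelope arithmetic for parking a non-production instance. The 12-hours-a-day, 5-days-a-week schedule is an illustrative assumption, not a prescribed setting:

```python
# Always-on vs. a parked weekday schedule for one non-production instance.
HOURS_PER_WEEK = 24 * 7   # 168 hours if left running continuously
running_hours = 12 * 5    # 60 hours on a 12x5 weekday-only schedule

savings = 1 - running_hours / HOURS_PER_WEEK
print(f"compute hours saved: {savings:.0%}")  # compute hours saved: 64%
```

Since pay-as-you-go instances bill only while running, that reduction in hours translates roughly one-to-one into compute-cost reduction for the parked resources.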

Observation Number Four: DevOps and IT Ops are responsible for cloud cost control, not Finance. Now, Finance (or the CFO) might direct IT or Engineering to bring cloud costs under control and look at ways to optimize, but at the end of the day, DevOps and IT Ops are responsible for evaluating and selecting tools to help their companies immediately reduce their cloud costs. When we talk to the technical teams during a demo, they have either been told they need to reduce their cloud spend or there is a cost control initiative in place, and they then research technologies to help them solve the problem (SEO is key here). Here’s a great example of a FinTech customer of ours and how their cost control decision went down.

Observation Number Five: it’s all about automation, DevOps, and self-service. As mentioned, the technical folks are responsible for implementing a cost control platform to optimize their cloud spend, and as such it’s all about “show me,” not pretty reports and graphs. What we mean is that, as an action-oriented platform, they want us to integrate easily into their continuous integration and delivery processes through a fully functional API, but also to provide a simple UI for the non-techies to ensure self-service. At the infrastructure layer it’s about what you can do with and through DevOps tools like Slack, Atlassian, and Jenkins, and at the enterprise level with SSO providers such as Ping, Okta, and Microsoft; these themes repeat over and over regardless of the cloud provider.
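As a sketch of what that kind of API-driven automation can look like, the snippet below builds the HTTP request a nightly CI/CD job might send to park a resource. Everything here (the base URL, endpoint path, token, and payload fields) is a hypothetical placeholder, not ParkMyCloud’s documented API:

```python
import json
import urllib.request

# Hypothetical placeholders -- not an actual ParkMyCloud endpoint or token.
API_BASE = "https://api.example.com/v1"
TOKEN = "YOUR_API_TOKEN"

def build_toggle_request(resource_id: str, state: str) -> urllib.request.Request:
    """Build a PUT request that sets a resource's desired state."""
    body = json.dumps({"state": state}).encode("utf-8")
    return urllib.request.Request(
        f"{API_BASE}/resources/{resource_id}",
        data=body,
        method="PUT",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
    )

# A scheduled CI/CD job could build and send this request to park a dev server.
req = build_toggle_request("i-0abc123", "stopped")
print(req.get_method(), req.full_url)
# PUT https://api.example.com/v1/resources/i-0abc123
```

The same request shape works from Jenkins, a cron job, or a chatbot command, which is why an API-first design matters more than dashboards for this audience.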

Observation Number Six: looking ahead, it’s about stacks. As the idea of microservices continues to take hold, more developers are utilizing multiple instances or services to deploy a single application or environment. In years past, the bottleneck for implementing such groups of servers or databases was deployment time, but modern configuration management tools (like Chef, Puppet, and Ansible) have made this a common strategy by turning infrastructure into code. However, managing these environments can remain challenging for humans. ParkMyCloud already allows logical groupings of instances for one-click scheduling, but we’re planning to take this a step further by integrating with the deployment solutions to really tie it all together.

Obviously, the trends in cloud computing we touch on are a mix of macro and micro, and are generally viewed through a cost control lens, but they do provide insight into the day-to-day of what we see and hear from the folks who operate and use the cloud, from multinational enterprises to startups. By tracking these trends over time, we can help you stay on top of cloud best practices to optimize your IT budget, and we look forward to what the next two years of cloud computing will bring.


Was the Acquisition of Cloudyn About the Need to Manage Microsoft Azure? Sort of.

Perhaps you heard that Microsoft recently acquired Cloudyn in order to manage Microsoft Azure cloud resources, along with, of course, Amazon Web Services (AWS), Google Cloud Platform (GCP), and others. Why? Well, the IT landscape is becoming more and more a multi-cloud landscape. Originally this multi-cloud (or hybrid cloud) approach was about private and public cloud, but as we recently wrote here, when we talk to large enterprises the strategy is increasingly about leveraging multiple public clouds for a variety of reasons: risk management, vendor lock-in, and workload optimization seem to be the three main ones.

That said, according to TechCrunch and quotes from Microsoft executives, the acquisition is meant to give Microsoft a cloud billing and management solution that provides an advantage over competitors (particularly AWS and GCP) as companies continue to pursue, drum roll please … a multi-cloud strategy. Additional benefits for Microsoft include visibility into usage patterns, adoption rates, and other cloud-related data points that it can leverage in the ‘great cloud war’ to come … a GOT reference, of course.

Why are we writing about this? A couple of reasons. One, of course, is that this is a relevant event in the cloud management platform (CMP) space, as it is really the first big cloud visibility and governance acquisition to date. The earlier acquisitions by Dell (Enstratius), Cisco (CliQr), and CSC (ServiceMesh), for example, were more orchestration and infrastructure platforms than reporting tools. Second, this points to the focus enterprises have on cost visibility, cost management, and governance as they look to optimize their spend and usage, as one does with any utility. And third, this confirms that a common pushback from enterprises against adopting Azure more widely has been, “I am already using AWS; I don’t want to manage through yet another screen / console,” and that multi-cloud visibility and governance helps solve that problem.

Now, taking this one step further: the visibility, recommendations, and reporting are all well and good, but what about the actions that must be taken based on those reports, and integration into enterprise DevOps processes for automation and continuous cost control? That’s where something like Cloudyn falls short, and where a platform like ParkMyCloud kicks in:

  • Multi-cloud Visibility and Governance – check
  • Single-Sign On (SSO) – check
  • REST API for DevOps Automation – check
  • Policy Engine for Automated Actions (parking) – check
  • Real-time Usage and Savings data – check
  • Manage Microsoft Azure (AWS + GCP) – check

The next step in cloud cost control is automation and action, not just visibility and reporting. Let technology automate these tasks for you instead of just telling you about them.


New on ParkMyCloud: Notifications via Slack and Email

New on ParkMyCloud: you can now receive notifications about your environment and ParkMyCloud account via email as well as Slack and other webhooks. We’re happy to deliver this user-requested feature, and look forward to an improved user experience.

The notifications are divided into system-level notifications and user-level notifications, as outlined below.

Administrators: Configure Notifications of Account-Level Actions via Slack/Webhooks

Administrators can now set up shared account-level notifications for parking actions and/or system errors. You can choose to receive these actions via Slack or a custom webhook.

These notifications include information about:

  • Parking Actions
    • Resource stop/start as a result of a schedule
    • Manual resource start/stop via toggles
    • Manual schedule snoozes
    • Attach/detach of schedules to resources
    • Manual changes to schedules
  • System Errors
    • Permissions issues, such as a lack of permissions on an instance or credential that prevents parking actions
    • Errors related to your cloud service provider, for example, errors due to service outages.

For instructions on how to configure these notifications, please see this article on our support portal.
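Mechanically, a custom webhook is just an HTTP POST with a JSON body. The sketch below formats a parking-action event as a Slack incoming-webhook message; the event fields and message format are illustrative assumptions, not ParkMyCloud’s actual notification schema:

```python
import json
import urllib.request

def build_slack_payload(event: dict) -> dict:
    """Format a parking-action event as a Slack incoming-webhook payload.
    The event keys used here are illustrative, not ParkMyCloud's schema."""
    return {"text": f"[{event['team']}] {event['action']}: {event['resource']}"}

def post_notification(webhook_url: str, payload: dict) -> None:
    """POST the JSON payload to a Slack or custom webhook URL."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

event = {"team": "Dev", "action": "Scheduled stop", "resource": "i-0abc123"}
print(build_slack_payload(event)["text"])  # [Dev] Scheduled stop: i-0abc123
```

A custom webhook receiver would simply parse the same JSON body server-side, so one notification pipeline can serve both Slack and in-house tooling.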

All Users: Get Notified via Email

While system-level notifications must be configured by an administrator, individual ParkMyCloud users can choose to set up email notifications as well. These notifications include the same information listed above for the teams you choose.

Email notifications will be sent as a rollup every 15 minutes. If no actions occur, you will not receive an email. For instructions on how to configure these notifications, please see this article on our support portal.

Let Us Know What You Think

To our current users: we look forward to your feedback on the notifications, and welcome any suggestions you have to improve the functionality and usability of ParkMyCloud.

If you aren’t yet using ParkMyCloud, you can get started here with a free trial.


Top Cloud Computing Trends: Cloud Cost Control

Enterprise Management Associates (EMA) just released a new report on the top cloud computing trends for hybrid cloud, containers, and DevOps in 2017. With this guide, they aim to provide recommendations on how enterprises can implement products and processes to address the top-priority trends.

First Priority Among Cloud Computing Trends: Cost Control

Of the 260 companies interviewed in EMA’s study, 42% named “cost control” as their number one priority. Here at ParkMyCloud, we weren’t surprised to hear that: as companies mature in their use of the cloud, cost control moves to the top of their list of cloud-related priorities.

EMA has identified a few key problems that contribute to the need for cloud cost control:

  • Waste – inefficient use of cloud resources
  • Unpredictable bills – cloud bills are higher than expected
  • Vendor lock-in – inability to move away from a cloud provider due to contractual or technological dependencies

Related to this is another item on EMA’s list of cloud computing trends: the demand for a single pane of glass for monitoring the cloud. This goes hand-in-hand with the need for cost control, as well as concerns about governance: if you can’t see it, you don’t know there’s a problem. However, it’s important to keep in mind that a pane of glass is only one step toward reaching a solution. You need to actually take action on your cloud environment to keep costs in control.

How to Implement Changes to Control Costs

To actually implement changes in your environment and control costs, EMA has provided a starting recommendation:

Consider simple tools with large impact: Evaluate tools that are quick to implement and help harvest “low-hanging fruit.”

In fact, EMA provided a list of the top 3 vendors it recommends as a Rapid ROI Utility – among which it has included ParkMyCloud.

EMA recommends these top tools, particularly the “rapid ROI” tools, as a good starting point for controlling cloud costs, as each tool can easily be tried out and the results verified in a brief period of time. (If you’re interested in trying out ParkMyCloud in your environment, we offer a 14-day free trial, during which you get to pocket the savings and try out a variety of enterprise-grade features like SSO, a policy engine, and API automation for continuous cost control.)

Download the report here to check out the full results from EMA.

Copyright © ParkMyCloud 2016-2018. All rights reserved|Privacy Policy