Why the NCAA Google Cloud Ads Matter

The NCAA and Google Cloud? What does the cloud have to do with March Madness? Actually, the public cloud is increasingly being used and promoted in sports. When you watch the tournament, Google Cloud ads will feature prominently. Plus, the NCAA has chosen to run its infrastructure on Google Cloud Platform (GCP).

(By the way, have you done your bracket yet? I just did mine – I went chalk and picked Villanova. Couldn’t see my WVU Mountaineers winning it all).

So we will see and hear a lot of Google Cloud in the coming weeks. Google recently announced a multiyear sponsorship deal with the NCAA and will run these ads throughout the upcoming NCAA basketball tournament. Google is hoping to expand its cloud business by taking complex topics such as cloud computing, machine learning and artificial intelligence and making them relatable to a wider audience.

So why does it matter that the NCAA and Google Cloud will appear so prominently together this March Madness?

First of all, Google Cloud is always matching wits with the other major cloud providers — and in this case, they’ve had their hooks in various mainstream sporting leagues and events for several years. For example, did you notice the partnership between AWS and the National Football League (NFL)? Both AWS and the NFL promote machine-learning capabilities — software that helps recognize patterns and make predictions — to quickly analyze data captured during games. The data could provide new kinds of statistics for fans and insights that could help coaches.

Second, there’s the infrastructure that supports these huge events. I can tell you as a sports fan that my mates and I will all be live streaming football, basketball, golf and soccer (yes, the English Premier League) on our phones and tablets wherever we are. We do this while watching the kids play sports, working in the office, and even while we are playing golf – hook it up to the cart (a buggy for my UK mates). Many of these content providers are using AWS, Microsoft Azure, GCP, and IBM Cloud to get this content to us in real time, and to analyze it and provide valuable insights for a better user experience.

Or take a look at the Masters golf tournament. Usually IBM and AT&T are big sponsors, although the Masters is usually very hush-hush about a lot of this. Last year there was a lot of talk of IBM Watson, the Masters, and the surreal experience they were able to deliver. This is a really good read on what went on behind the scenes and how Watson and IBM’s cloud delivered that experience. IBM used machine learning, visual recognition, speech-to-text, and cognitive computing to build a phenomenal user experience for Masters viewers and visitors.

The NCAA and Google Cloud are not just ad partners; the NCAA is also a GCP customer. The NCAA is migrating 80+ years of historical and play-by-play data from 90 championships and 24 sports to GCP. To start, the NCAA will tap into decades of historical basketball data using BigQuery, Cloud Spanner, Datalab, Cloud Machine Learning and Cloud Dataflow to power the analysis of team and player performance. So Google Cloud not only gets advertising prominence for one of the most-watched events of the year, it gets a high-profile customer and one of the coolest use cases out there.

Enjoy the tournament – let’s go Cats!


Azure Region Pricing: Costs for Compute

In this blog we are going to examine how Microsoft Azure region pricing varies and how region selection can help you reduce cloud spending.

How Organizations Select Public Cloud Regions

Many comparisons go into the pricing differences between AWS, Azure, GCP, and the rest. At the end of the day, however, most organizations select one primary cloud service provider (CSP) for most of their workloads, plus maybe another for multi-cloud redundancy of critical services. Once selected, organizations then typically put many of their workloads in the region closest to their offices, plus maybe some geographic redundancy in their production systems. In other situations, a certain region is selected because it is the first region to support some new CSP feature. As time goes by, other regions become options, because either those new features propagate through the system or whole new regions are created.

CSP regions tend to cluster around certain larger geographic regions, which I will call “areas” for the purpose of this blog. Looking at Azure in particular, we can see that Azure has three major US areas (Western, Central, and Eastern). The Western and Eastern US areas each have two Azure regions, and the Central area has four Azure regions. The UK, Europe and Australia areas each have two Azure regions. There are a number of other Azure regions as well, but they are dispersed widely enough that I would consider them to be areas with a single region.

How Does Azure Region Pricing Vary?

With this regional distribution as a starting point, let’s look next at costs for instances. Here is a somewhat random selection of Azure region pricing data, looking at a variety of instance types (cost data as of approximately March 1, 2018).

While this graphic is a bit busy, a couple of things jump out at us:

  • Within most of the areas, there are clearly more expensive regions and less expensive regions.
  • The least expensive regions, on average across these instance types, are us-west-2, us-west-central, and korea-south.
  • The most expensive regions are asia-pacific-east, japan-east, and australia-east.
  • Windows instances are about 1.5-3 times more expensive than their Linux-based counterparts.

Let’s zoom in on the Azure Standard_DS2_v2 instance type, which comprises almost 60% of the total population of Azure instances customers are managing in the ParkMyCloud platform.

We can clearly see the relative volatility in the cost of this instance type across regions. And, while the Windows instance is about 1.5-2 times the cost of the Linux instance, the volatility is fairly closely mirrored across the regions.

Of more interest, however, is how the costs can differ within a given area. From that comparison we can see that there is some real savings to be gained by careful region selection within an area:

Over the course of a year, strategic region selection of a Windows DS2 instance could save up to $578 for the asia-pacific regions, $298 for the us-east regions, and $228 for the Korean regions.  
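To make the math concrete, here is a quick sketch of how those annual figures fall out of hourly price differences. The prices below are illustrative placeholders (back-solved from the $578 figure above), not current Azure list prices:

    # Annual savings from picking the cheaper region within an "area":
    # (hourly price delta) x (hours in a year). Prices are illustrative.
    HOURS_PER_YEAR = 24 * 365  # 8,760

    region_prices = {  # hypothetical hourly prices for a Windows DS2 v2
        "asia-pacific-east": 0.252,
        "asia-pacific-southeast": 0.186,
    }

    delta = max(region_prices.values()) - min(region_prices.values())
    print(f"Annual savings from region selection: ${delta * HOURS_PER_YEAR:,.2f}")
    # -> Annual savings from region selection: $578.16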

How to Save Using Regions

By comparing regions within your desired “area” as illustrated above, the savings across a quantity of instances can be significant. Good region selection is fundamental to controlling Azure costs, and to controlling costs across the other clouds as well.


Announcing SmartParking for Microsoft Azure: Automated On/Off Schedules Based on Azure Monitor Data

Today, we’re excited to announce the release of SmartParking™ for Microsoft Azure! SmartParking allows Azure customers to automate cloud cost optimization by creating parking schedules optimized for your actual cloud usage, based on Azure Monitor data.

Here’s how it works: ParkMyCloud analyzes your Azure Monitor data to find patterns in the usage for each of your virtual machines (VMs). Based on those patterns, ParkMyCloud creates recommended on/off schedules for each VM to turn them off when they are idle. This maximizes your savings by ensuring that no VM is running when it’s not needed — while also saving you the time and frustration of trying to figure out when your colleagues need their resources running.

We released SmartParking for AWS in January, and customer feedback has been positive. SmartParking for Google Cloud Platform is coming soon.

Customize Your Recommendations Like Your 401(k)

Is it better to park aggressively, maximizing savings, or to park conservatively, ensuring that no VM is parked when a user might need it? Everyone will have a different preference, which is why we’ve created different options for SmartParking recommendations. Like an investment portfolio, you can choose to receive SmartParking schedules that are “conservative”, “balanced”, or “aggressive”. And like an investment, a bigger risk comes with the opportunity for a bigger reward.

An aggressive SmartParking schedule prioritizes maximum savings. You will park instances – and therefore save money – for the greatest amount of time, with the “risk” of occasional inconvenience from having something turned off when someone needs it. Not to worry, though — users can always “snooze” these schedules to override them if they need to use the instance while it’s parked.

On the other hand, a conservative SmartParking schedule will make it more likely that your instances are never parked when they might be needed. It will only recommend parked times when the instance is never used. Choose “balanced” for a happy medium.
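For the curious, here is a minimal sketch of the idea behind usage-based parking recommendations, assuming we already have hourly average CPU samples for a VM (for example, pulled from Azure Monitor). The function, thresholds, and mode names are illustrative, not ParkMyCloud’s actual algorithm:

    from collections import defaultdict

    # Map risk appetite to an idle threshold: aggressive parks hours under
    # 20% average CPU; conservative only parks hours that are essentially idle.
    THRESHOLDS = {"aggressive": 20.0, "balanced": 10.0, "conservative": 2.0}

    def recommend_parked_hours(samples, mode="balanced"):
        """samples: list of (hour_of_week 0-167, avg_cpu_percent) tuples."""
        by_hour = defaultdict(list)
        for hour, cpu in samples:
            by_hour[hour].append(cpu)
        # Recommend parking any hour of the week whose peak observed CPU
        # stayed under the threshold across the whole analysis window.
        limit = THRESHOLDS[mode]
        return sorted(h for h, cpus in by_hour.items() if max(cpus) < limit)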

Customer Feedback: Making Parking Better Than Ever

ParkMyCloud customer Sysco Foods has more than 500 users across 50 teams using ParkMyCloud to manage their AWS environments. “When I’m asked by a team how they should use the tool, they’re exceedingly happy that they can go in and see when systems are idle,” Kurt Brochu, Sysco Foods’ Senior Manager of the Cloud Enablement Team, said of SmartParking. “To me, the magic is that the platform empowers the end user to make decisions for the betterment of the business.”

Already a ParkMyCloud user? Log in to your account to try out SmartParking for Azure. Note that you’ll have to update the permissions that ParkMyCloud has to access your Azure data — see the user guide for instructions on that.

Not yet a ParkMyCloud user? Start a free trial here.

Google Cloud Platform user? Not to worry — Google Cloud SmartParking is coming next month. Let us know if you’re interested and we’ll notify you when it’s released.


Interview: QCentive Saves $25k/month on AWS while Enabling Cloud Computing in Healthcare

We talked with Bill Gullicksen, Director of IT at QCentive, about how his company is using ParkMyCloud to save money on their AWS costs while enabling cloud computing in healthcare.

Thanks for taking the time to speak with us today. Can you start by telling me about QCentive and how you are using the cloud?

We are a 2-year-old healthcare startup founded in Massachusetts. We build systems for the healthcare industry to help reduce costs in healthcare and provide efficiencies in contract and payment management for healthcare companies. We are actually the first vendor for our customer authorized to take private healthcare information and move it to the cloud.

What do you think made QCentive stand apart to your customer as the best option for moving their infrastructure to the cloud?

Healthcare has been very cloud-averse due to issues like security concerns. In order to prove the use case for cloud computing in healthcare, we needed to build out a prototype and go through many months of meetings with them to prove that we could move them to the cloud while being HIPAA compliant, HITECH compliant, and secure.

We’re currently in the process of building our first prototype application by taking years of patient and healthcare contract information, loading it all into AWS, and then putting our application on top of it. We’ll be able to go through all the contracts, healthcare records, emergency room visits, and more to quickly calculate how to get the best savings in those areas.

So as you’re helping healthcare companies transition to the cloud, how did you come to find ParkMyCloud as a useful tool for your mission?

We had a few architects just going to town on AWS for about the first year we were in business. They were building and building away, and then all of a sudden our monthly AWS costs soared up to $40k, then $50k, $60k, $70k – and we’re spending a lot of money on Amazon and we don’t even have a working application yet!

Last summer I was put in charge of all of our AWS operations and I immediately went into cost control mode. I asked, “What can I do to get some of these costs under control?” We started out with some rightsizing exercises and scaled some stuff back, and that got us some savings. We found areas where we had some stability and used Reserved Instances there, allowing us to get a 30-40% discount, but we didn’t want to do long-term commitments so we only did those for a year.

For the remaining instances, I realized that we pay by the minute and we really don’t need to be running instances 24/7. That’s when I started thinking about how to schedule instances to shut down. I could do that and turn them off with AWS tools, but then telling an instance to turn itself back on at 6 in the morning – I didn’t have a way to do that. And that’s when I found out about ParkMyCloud and said this looks perfect – I can schedule instances to get them running 12 hours a day, 5 days a week instead of 24/7 and I’ll cut my costs in half.

Have you discovered any other benefits while using ParkMyCloud?

ParkMyCloud was the perfect tool for what I needed at the time and it also gave us a side benefit where we could give developers, QA people, and even data analysts and business folks the ability to turn an instance off when they’re done, or turn it on without having to write a bunch of complex policies within AWS.

Before, if I only wanted certain people to be able to manipulate a handful of instances, I had to put those instance IDs in the policies. Instance IDs frequently change, so running custom policies was taking a lot of overhead and we got the benefit from ParkMyCloud of just assigning them teams. Now, whether the instance IDs change or not, there’s no extra work for me.

That’s why we chose ParkMyCloud and why we’ve been using it for 6-7 months now. For me it was great, very simple to set up, simple to use, easy for non-technical users and with very little effort from me and my technical staff, so it’s been perfect.

Great. So it seems like you were using a good mix of different cost savings efforts between the Reserved Instances, the rightsizing, and ParkMyCloud. Is there anything else you’re doing to manage cloud infrastructure costs?

Those are the bulk of it. We have a CloudCheckr subscription that I use sometimes; it’s very simple, but I just use it for looking at the daily spend, seeing if there are any unexpected spikes, things like that. I can use it for finding resources that are no longer being used. It’s nice to have for identifying orphaned volumes and gives me a simple, easy way to clean some of that up, but we get our biggest use out of ParkMyCloud.

What percent of your resources are currently on ParkMyCloud schedules?

We’ve taken some schedules off just to keep some systems up for a while, but my rule of thumb has been to put a schedule and a team on everything. Even if a schedule is running 24/7/365, I want to at least have a schedule on it and know that it’s a conscious business decision we made to keep that up versus “it just slipped through the cracks and we never looked at it.”

About how many people in your team or organization are using ParkMyCloud?

Somewhere around 15-20 users, which is probably about 75% of our company at this point.

Where do those users sit within your organization?

I’m Director of IT and we’ve got a Director of DevOps and a DevOps engineer – we are the three technical resources around infrastructure. Then we’ve got around 12 software developers that all have access so they can spin up their dev environments and spin them down when they’re not working.

We have a very flexible schedule, 3 days in the office, 2 days working from home, and we’ve got software developers that do their best coding at 3 in the morning. If they get up with an idea and they want to code, they need the ability to start up instances, do what they need to do, and then turn them off when they’re done. So they’re all in there, our QA department is currently 4-5 people and they’re all using ParkMyCloud, and then we’ve got 4-5 business analysts that do a lot of data analysis and database querying also using ParkMyCloud.

That makes sense. So, how much are you saving on your AWS bills using ParkMyCloud?

We are consistently saving between $15-25k a month.

Costs are creeping up now, because before we got close to release we had systems sized small. At first we dropped from our high bill, which got up to around $60k: after I implemented all these changes and started with ParkMyCloud, in the first month I got the spend down to $17k. Now we’ve got the full load of customer data and we’ve had to upsize a lot of the instances for performance, so costs are going up in that regard. But even with a $40k monthly spend, if we weren’t using ParkMyCloud that would be $60 or $65k monthly.

We’ve got a lot of instances that we keep normally parked now and we only turn them on when there’s a workload to run. And then we’ve got probably another 40 or 50% of our instances that only run Monday through Friday, from 7:00 AM to 7:00 PM, so we’re getting that savings there which to me is bigger savings than messing with Reserved Instances.

Things like Reserved Instances look great the day you buy them, but then the first time you have to change the size on something, all of a sudden you’ve got Reserved Instances that you’re not using anymore. With ParkMyCloud that never happens, it’s all savings.

How did you first hear about ParkMyCloud?

We were interviewing an external technology company last summer that was being brought in to jump start our CI/CD process. While they were in I asked, “hey, do you know any good methods for doing scheduling?” – and they said take a look at ParkMyCloud. G2 Technologies in Boston.

Any other feedback for us?

I was surprised how simple ParkMyCloud was to get up and running. It was a couple of hours from signing up for the trial to having most of the work done and realizing savings, which was great. The release of your mobile app has been fantastic because it’s nice if I need to turn something on for somebody that doesn’t have access on a Saturday when I’m 30 miles away from my computer. I can do it anywhere with the mobile app.

Glad to hear it! I think that wraps things up for now. Thank you Bill, I appreciate your time.

You’re welcome!


How to Decide Between a CI/CD SaaS Tool vs. Self Hosted

You may find yourself deciding whether to choose a CI/CD SaaS tool or a self-hosted option. The continuous integration/continuous delivery platform market has grown over the last several years as DevOps becomes more mainstream, and now encompasses a huge variety of tools, each with its own flavor. While it’s great to have choices, it means that choosing the right tool can be a difficult decision. There are several factors to consider when choosing the right fit, including hosting, cost, scalability, and integration support. In this post, we will look at one of the biggest points of consideration: whether to choose a SaaS CI/CD service or a self-hosted system. This will be the first entry in a series of posts about how to choose the CI/CD system that is right for your team. Like everything, there are pros and cons to all solutions, and with the vast amount of CI/CD options available today, there’s no such thing as “one size fits all”.

Considerations for Choosing a CI/CD SaaS Tool

First, let’s take a look at the up-side to choosing a CI/CD SaaS tool. Like most SaaS products, one of the biggest benefits is that there is no hardware or software infrastructure to maintain. There’s no need to worry about server maintenance or applying software updates/patches: that’s all handled for you. In addition to the reduced ongoing maintenance burden, most SaaS CI/CD systems tend to be easy to get set up, especially if you’re using a SaaS VCS (Version Control System) like GitHub or Bitbucket. 

These are great points, but there are potential down-sides that must be considered. The cost of usage for a SaaS CI/CD solution may not scale nicely with your business. For example, the price of a SaaS CI/CD service may go up as your team gets larger. If you plan on scaling your team significantly, the cost of your CI/CD system could inflate dramatically. Furthermore, not all services support all platforms, tools, and environments. If you plan on introducing any new development technologies, you should make sure that they are supported by the CI/CD provider you choose.

Considerations for a Self-Hosted CI/CD Tool

While there are many attractive points in favor of a SaaS CI/CD service, a self-hosted solution is not without its merits. One potential benefit of a self-hosted solution is extensibility. Some self-hosted services can be customized with plugins/extensions to enable functionality that is not included “out of the box”. Jenkins is a prime example of this, with over 1,000 plugins available. Even without plugins, self-hosted CI/CD tools often have more support for development platforms, languages, and testing frameworks than many SaaS solutions. If there’s not first-class support (or a plugin/extension) for a technology that you use, you can usually make things work with some shell scripts and a little bit of creativity. In addition to extensibility, self-hosted solutions typically have fewer limitations on things like build configurations and concurrent build jobs. This isn’t always the case, however. The free version of TeamCity, a CI/CD tool from JetBrains, is limited to 100 build configurations and 3 build agents. Licenses for additional configurations and build agents are available for purchase, though.

Conversely, there are some potential down-sides to a self-hosted system. Perhaps the biggest of these is that you are required to manage your own infrastructure. This includes applying software updates/patches, and may include management of hardware if you’re not hosting the service on an IaaS platform like AWS, GCP, or Azure. In contrast to a SaaS solution, self-hosted systems may require a time-intensive setup process. Between linking the system up to your VCS, issue/ticket tracking software, and notification system(s), getting your CI/CD system initialized can be a steep climb. In addition to the first-time setup, you may be required to manage authentication and authorization for users in your organization if the system you choose doesn’t support your organization’s user management system (LDAP, Google G Suite, etc.).

Final Thoughts

It is worth noting that some CI/CD SaaS tool providers offer self-hosted variants of their services. For instance, CircleCI offers an enterprise solution that can be self-hosted on your own networks, and Travis CI offers Travis CI Enterprise, which is optimized for deployment on Amazon EC2 instances. These offerings throw even more into the mix, and should be part of your consideration when determining which tool has the best fit.

As you can see, there are several factors that must be considered when choosing the CI/CD tool that is right for you. In this post, we discussed some of the trade-offs between SaaS and self-hosted systems. In future posts, we will look at other factors such as scalability, cost, and restrictions/limitations.


Don’t Let Your Server Patching Schedule Get in the Way of Cost Control

Don’t let your server patching schedule get in the way of saving money. The idea of minimizing cloud waste was a very new concept two years ago, but as cloud use has grown, so has the need to minimize wasted spend. CFOs now demand that cloud operations teams turn off idle systems in the face of rising cloud bills, but the users of these systems are the ones who have to deal with servers being off when they need them.

Users of ParkMyCloud are able to overcome some of the common objections to scheduling non-production resources. The most common objection is, “What if I need the server or database when it’s scheduled to be off?” That’s why ParkMyCloud offers the ability to “snooze” the schedule, which is a temporary override that lets you choose how long you need the system for. This snooze can be done easily from our UI, or through alternative methods like our API, mobile app, or Slackbot.

A related objection concerns how your parking schedule can work with your server patching schedule. The most common way of dealing with patching in ParkMyCloud is to use our API. The workflow is to log in through the API, get a list of the resources, then choose the resources you want and “snooze” their schedules for a couple of hours, or however long the patching takes. Once the schedule is snoozed, you can toggle the instance on, then do the patching. After the patching is complete, you can either cancel the snooze to go back to the original schedule or wait for the snooze to finish and time out. If you have an automated patching tool that can make REST calls, this can be an easy way to patch on demand with minimal work.
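As a rough illustration, that workflow might look something like the sketch below. The endpoint paths and payload fields here are hypothetical placeholders, not ParkMyCloud’s documented API – check the actual API docs for the real routes:

    import requests

    BASE = "https://api.parkmycloud.example"  # hypothetical base URL
    headers = {"Authorization": "Bearer <api-token>"}

    # 1. List resources and pick the ones in tonight's patch window.
    resources = requests.get(f"{BASE}/resources", headers=headers).json()
    to_patch = [r for r in resources if r.get("team") == "web-servers"]

    for r in to_patch:
        # 2. Snooze the parking schedule so it won't fight the patch job.
        requests.post(f"{BASE}/resources/{r['id']}/snooze",
                      json={"hours": 2}, headers=headers)
        # 3. Toggle the instance on; patch once it's running.
        requests.post(f"{BASE}/resources/{r['id']}/start", headers=headers)

    # 4. Afterwards, cancel the snooze or just let it time out.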

If you’re on a weekly server patching schedule, you could also just implement the patch times into your pre-set schedules so that the instances turn on, say, at 2:00 a.m. on Wednesdays. By plugging this into your normal schedules, you can still save money during most off-hours, but have the instances on when the patch window is open. This can be a great way to do weekly backups as well, with minimal disruption.

Using ParkMyCloud alongside external tools and processes like these is the best way to get every developer and CloudOps engineer on board with continuous cost control. By overcoming these objections, you can reduce your cloud costs and be the hero of your organization. Start up a free trial today to see these plug-ins in action!


SysAdmin vs. DevOps: 4 Ways That the Cloud is Redefining IT

SysAdmin vs. DevOps? IT Operations Management vs. Cloud Operations Management? Unless your head has been under a rock, you’re probably aware that the cloud has been rapidly reshaping and redefining IT as we know it — from the language we use to describe it to the management models and infrastructure itself.

Cloud providers like Amazon Web Services, Google Cloud Platform, and Microsoft Azure have transformed cloud computing, giving businesses access to IT resources anytime, anywhere. At the same time, this rapid migration to off-premise cloud has been reshaping the needs and roles in the IT department.

Here are 4 ways that the cloud is redefining IT roles and operations:  

Sysadmin vs. DevOps

When you compare sysadmin vs. DevOps, you’ll find that they’re similar roles, but uniquely distinct. A System Administrator, or sysadmin, is the person responsible for configuring, operating, and maintaining computer systems – servers in particular. This jack-of-all-trades IT role handles everything from installations and upgrades to security, troubleshooting, technical support and more.  

And then we have the evolution of DevOps, which could very well be the biggest gamechanger to the IT process. Under the DevOps umbrella, a team of software developers, IT operations, and product management people must combine strengths to effectively streamline and stabilize operations for rolling out new apps and updating code to support and improve the whole business.  

With the cloud taking over and without the need for physical, on-prem servers, a large portion of the sysadmin role has been lost to automation. As this change occurred, sysadmins remained effective as their role shifted towards the support of developers, combining efforts and thus giving birth to the term DevOps. So can you truly compare sysadmin vs. DevOps? Well, the roles are similar in the sense that DevOps folks can do a lot of what sysadmins do, but not the other way around, making DevOps the newer, bigger jack of all trades.

IT Operations Management vs. Cloud Operations Management

IT Operations Management is responsible for the efficiency and performance of IT processes, which can include anything from administrative processes to hardware and software support, and for both internal and external clients. IT management sets the standard policies and procedures for how service and support is carried out and how issues are resolved.

Thanks to the cloud, IT management has also given way to automation and outsourcing. Cloud operational processes are now a more efficient way of using resources, providing services, and meeting compliance requirements. In the same way that ITOM manages IT processes, Cloud Operations Management does so in a cloud environment, with resource capacity planning and cloud analytics that provide vital intelligence into how to control resources and run them cost-effectively (speaking of, check out our recent partnership aimed at making this easier for you).

IT Service Management vs. Cloud Service Management

Traditional IT service management (ITSM) dealt with strategizing in the design, delivery, management and innovation of the way an organization is using IT. This involved developing, implementing, and monitoring IT governance and management through the use of frameworks like COBIT, Microsoft Operations Framework, Six Sigma, and ITIL, for example.  

As the cloud became a better option for operational management, companies have turned to cloud computing to transform their business model via service providers like Amazon Web Services, Google Cloud Platform, and Microsoft Azure to outsource IT for more efficient, scalable cloud services.

Since cloud computing resources are hosted as off-site VMs and managed externally, ITSM has grown more complex, introducing Cloud Service Management (CSM) as an extension of ITSM that pushes into three core areas: automated service provisioning, DevOps, and asset management. And as ITSM shifts towards CSM, the concerns lie in cloud adoption strategy and the approach for designing, deploying, and running cloud services.

Finance and Operations vs. DevFinOps

In a world where IT projects are known to exceed budgets and coming up with cost estimates is no easy feat, how can businesses break down a reasonable overall estimate for projects where we develop, build and run applications on a utility? The answer is to make estimates little by little as parts of the work get completed, integrating financial planning directly into IT development operations. In other words: DevFinOps.  

IT asset management merges the financial, contractual, and inventory components of an IT project to support life cycle management and strategic decision making. The strategy involves both software and hardware inventory and the decision-making process for purchases and redistribution. DevFinOps expands and builds upon ITAM by factoring the financial cost and value of IT assets directly into IT infrastructure, updating calculations in real time and simplifying the budgeting process.

What This Means For You

Cell phones, self-driving cars, DevOps — cloud computing is yet another evolution in technology, albeit a huge one, and IT is simply going through a metamorphosis. The best way of looking at it is that cloud is not killing IT, it’s redefining IT, and enterprises are following suit as they shift towards the cloud and change or update traditional IT roles. As IT evolves, the cloud is paving the way for opportunities for those who adapt and evolve their roles with it.



Cloud Operations Management: Is the cloud really making operations easier?

As cloud becomes more mature, the need for cloud operations management becomes more pervasive. In my world, it seems pretty much like IT Operations Management (ITOM) from decades ago. In the way-back machine I used to work at Micromuse, the Netcool company, which was acquired by IBM Tivoli, the Smarter Planet company, which then turned Netcool into Smarter Cloud … well you get the drift. Here we are 10+ years later, and IT = Cloud (and maybe chuck in some Watson).

Cloud operations management is the process concerned with designing, overseeing, controlling, and subsequently redesigning cloud operational processes.  This involves management of both hardware and software as well as network infrastructures to promote an efficient and lean cloud.

Analytics is heavily involved in cloud operations management and used to maximize visibility of the cloud environment, which gives the organization the intelligence required to control the resources and running services confidently and cost-effectively.

Cloud operations management can:

  • Improve efficiency and minimize the risk of disruption
  • Deliver the speed and quality that users expect and demand
  • Reduce the cost of delivering cloud services and justify your investments

Since ParkMyCloud helps enterprises control cloud costs, we mostly talk to customers about the part of cloud operations concerned with running and managing resources. We are all about that third bullet – reducing the cost of delivering cloud services and justifying investments. We strive to accomplish that while also helping with the first two bullets to really maximize the value the cloud brings to an enterprise.

So what’s really cool is when we get to ask people what tools they are using to deploy, secure, govern, automate and manage their public cloud infrastructure. Those are the tools they want us to integrate with as part of their cost optimization efforts, and they help us understand the roles operations folks now play in public cloud (CloudOps).

And no, it’s not easier to manage the cloud. In fact, I would say it’s harder. The cloud provides numerous benefits – agility, time to market, OpEx vs. CapEx, etc. – but you still have to automate, manage and optimize all those resources. The pace of change is mind-boggling – AWS advertises 150+ services now, from basic compute to AI, and everything in between.

So who are these people responsible for cloud operations management? Their titles tend to be DevOps, CloudOps, IT Ops and Infrastructure-focused, and they are tasked with operationalizing their cloud infrastructure while teams of developers, testers, stagers, and the like are constantly building apps in the cloud and leveraging a bottom-up tools approach. Ten years ago, people could not just stand up a stack in their office and have at it, but they sure as hell can now.

So what does this look like in the cloud? I think KPMG did a pretty good job with this graphic, which generally hits on the functional buckets we see people stick tools into for cloud operations management.

So how should you approach your cloud operations management journey? Let’s revisit the goals from above.

  1. Efficiency – Automation is the name of the game. Narrow in on the tools that provide automation to free up your team’s development time.
  2. Deliverability – See the bullet above. When your team has time, they can focus on delivering the best possible product to your customers.
  3. Cost control – Think of “continuous cost control” as a companion to continuous integration and continuous delivery. This area, too, can benefit from automated tools – learn more about continuous cost control.



Microsoft’s Start/Stop VM Solution vs. ParkMyCloud

Microsoft recently released a preview of their Start/Stop VM solution in the Azure Marketplace. Users of Azure took notice and started looking into it, only to find that it was lacking some key functionality that they required for their business. Let’s take a look at what this Start/Stop tool offers and what it lacks, then compare it to ParkMyCloud’s comprehensive offering.

Azure Start/Stop VM Solution

The crux of this solution is the use of a few Azure services: specifically, Automation and Log Analytics to schedule the VMs, and SendGrid to notify you by email when a system is shut down or started. This use of native tools within Azure can be useful if you’re already baked into the Azure ecosystem, but can be prohibitive to exploring other cloud options.

This solution does cost money, but it’s not very easy to estimate the cost (does that surprise you?). The total cost is based on the underlying services (Automation, Log Analytics, and SendGrid), which means it could be very cheap or very expensive depending on what else you use and how often you’re scheduling resources. The schedules can be based on time, but only for a single start and stop time. The page claims schedules can be based on utilization, but in the initial setup there is no place to configure that. The solution also needs to have been set up for 4 hours before it can show you any log or monitoring information.

The interface for setting up schedules and automation is not very user-friendly. It requires creating automation scripts that handle either stopping or starting only, each with a single time attached. To create new schedules, you have to create new scripts, which makes the interface confusing for those who aren’t used to the Azure portal. At the end of the setup, you’ll have at least a dozen new objects in your Azure subscription, a number that only grows if you have any significant number of VMs.

How it stacks up to ParkMyCloud

So if the Start/Stop VM Solution from Microsoft can start and stop VMs, what more do you need? Well, we at ParkMyCloud have heard from customers (ranging from day-1 startups to Fortune 100 companies) that there are features necessary for a cloud cost optimization tool if it is going to get widespread adoption. Here are some of the features ParkMyCloud has that are missing from the Microsoft tool:

  • Single Pane of Glass – ParkMyCloud can work with multiple clouds, multiple accounts within each cloud, and multiple regions within each account, all in one easy-to-use interface.
  • Easy to change or override schedules – Users can change schedules or temporarily “snooze” them through the UI, our API, our Slackbot, or through our iOS app.
  • User Management – Admins can delegate access to users and assign Team Leads to manage sub-groups within the organization, providing user governance over schedules and VMs.
  • No Azure-specific knowledge needed – Users don’t need to know details about setting up Automation Scripts or Log Analytics to get their servers up and running. Many ParkMyCloud administrators provide access to users throughout their organizations via the ParkMyCloud RBAC. This is useful for users who may need to, say, start and stop a demo environment on demand, but who do not have the knowledge necessary to do this through the Azure console.
  • Enterprise features – Single sign-on, savings reports, notifications straight to your email or chat group, and full support access helps your large organization save money quickly.

As you can tell, the Start/Stop VM solution from Microsoft can be useful for very specific cases, but most customers will find it lacking the features they really need to make cloud cost savings a priority. ParkMyCloud offers these features at a low cost, so try out the free trial now to see how quickly you can cut your Azure cloud bill.


AWS Neptune Preview – Amazon’s Graph Database Service

At the AWS DC Meetup we organized last week, we got a preview of AWS Neptune, Amazon’s new managed graph database service. It was announced at AWS re:Invent 2017, is currently in preview and will launch for general availability this summer.

What is a graph database?

A graph database is a database optimized to store and process highly connected data – in short, it’s about relationships. The data structure for these databases consists of vertices and directed links called edges.

Use cases for such highly-connected data include social networking, restaurant recommendations, retail fraud detection, knowledge graphs, life sciences, and network & IT ops. For a restaurant recommendations use case, for example, you may be interested in the relationships between various users, where those users live, what types of restaurants those users like, where the restaurants are located, what sort of cuisine they serve, and more. With a graph database, you can use the relationships between these data points to provide contextual restaurant recommendations to users.

Tired of SQL?

If you’re tired of SQL, AWS Neptune may be for you. A graph database is fundamentally different from SQL. There are no tables, columns, or rows – it feels like a NoSQL database. There are only two data types: vertices and edges, both of which have properties stored as key-value pairs.

AWS Neptune is fully managed, which means that database management tasks like hardware provisioning, software patching, setup, configuration, and backups are taken care of for you.

It’s also highly available, spanning multiple availability zones. In its architecture and availability, it is very similar to Aurora, the relational database from Amazon.

Neptune supports Property Graph and W3C’s RDF. You can use these to build your own web of data sets that you care about, and build networks across the data sets in the way that makes sense for your data, not with arbitrary presets. You can do this using the graph models’ query languages: Apache TinkerPop Gremlin and SPARQL.
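As a taste of what a graph query looks like, here is a small sketch using Apache TinkerPop Gremlin from Python (the gremlinpython package) against the restaurant-recommendation example above. The endpoint, labels, and property names are hypothetical:

    from gremlin_python.process.anonymous_traversal import traversal
    from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

    # Hypothetical Neptune endpoint; see the Neptune docs for connection details.
    conn = DriverRemoteConnection("wss://your-neptune-endpoint:8182/gremlin", "g")
    g = traversal().withRemote(conn)

    # Restaurants liked by Alice's friends -- relationship hops, not table joins.
    recommendations = (
        g.V().has("user", "name", "alice")
         .out("friend")      # hop to Alice's friends
         .out("likes")       # hop to restaurants those friends like
         .dedup().values("name").toList()
    )
    print(recommendations)
    conn.close()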

There is no cost to use Neptune during the preview period. Once it’s generally available, pricing will be based on On-Demand EC2 instances – which means ParkMyCloud will be looking into ways to assist Neptune users with cost control.

If you’re interested in the new service, you can check out more about AWS Neptune and sign up for the preview.


The M Instance Type: EC2 Starts Here

If you are using AWS EC2 in production, chances are good that you’re using the AWS M instance type. The M family is a “General Purpose” instance type in AWS, most closely matching a typical off-the-shelf server one would buy from Dell, HP, etc., and it was the first instance family released by AWS, in 2006.

If you are looking for mnemonics for an AWS certification exam, you may want to think of the M instance type as the Main choice, or the happy Medium between the more specialized instances. The M instance provides a good balance of CPU, RAM, and disk size/performance. The other instance types specialize in different ways, providing above average CPU, RAM, or disk size/performance, and include a price premium. The one exception is the “T” instance type, discussed further below.

For a normal web or application server workload, the M instance type is probably the best tool for the job. Unless you KNOW you are going to be running a highly RAM/CPU/IO-intensive workload, you can usually start with an M instance, monitor its performance for a while, and then if the instance is performance-limited by one of the hardware characteristics, switch over to a more specialized instance to remove the constraint. For example:

  • “C” instances for Compute/CPU performance.
  • “R” or “X” instances for lots of memory – RAM or eXtreme RAM
  • “D”, “H”, or “I” instances optimize for storage, with different types/quantities of local storage drives (i.e., HDD or SSD that are part of the physical hardware the instance is running on) for high-Density storage (up to 48TB), High sequential throughput, or fast random IOPS, respectively. (The latter two categories are much more specialized – see here for more details)

The “T” instance family is much like the “M” family, in that it is aimed at general purpose workloads, but at a lower price point. The key difference (and perhaps the only difference) is that CPU performance is restricted to bursts of high performance (or “bursTs”) that are tracked by AWS through a system of CPU credits. Credits build up when the system is idle, and are consumed when the CPU load exceeds a certain baseline. When the CPU credit balance is used up, the CPU is Throttled to a fraction of its full speed. T instances are good for low-load web servers and non-production systems, such as those used by developers or testers, where continuous predictable high performance is not needed.
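To see how the credit mechanism plays out, here is a toy simulation. The constants mirror the t2.micro as AWS documented it at the time (6 credits earned per hour, 10% baseline, 144-credit cap); treat them as illustrative rather than current specs:

    EARN_PER_HOUR = 6    # credits accrued per hour (t2.micro)
    CREDIT_CAP = 144     # t2.micro banks at most 24 hours of credits

    def simulate(hourly_cpu_pct, balance=0.0):
        """Yield (hour, balance, throttled) for a list of hourly CPU loads.
        One credit = one minute of a full vCPU, so an hour at X% CPU
        spends X * 0.6 credits; at the 10% baseline, spend equals earn."""
        for hour, cpu in enumerate(hourly_cpu_pct):
            balance = min(balance + EARN_PER_HOUR - cpu * 0.6, CREDIT_CAP)
            throttled = balance <= 0
            balance = max(balance, 0.0)
            yield hour, round(balance, 1), throttled

    # Example: a quiet night at 5% CPU, then a sustained 50% CPU build job.
    load = [5] * 8 + [50] * 10
    for hour, bal, throttled in simulate(load):
        print(hour, bal, "THROTTLED" if throttled else "")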


Looking at some statistics, the Botmetric Public Cloud Usage Report for 2017 states that 46% of AWS EC2 usage is on the M family, and 83% of non-production workloads are on T instances. Within the ParkMyCloud environment, we see the following top instance family statistics across our customers’ environments:

  • I instances: 39%
  • M instances: 22%
  • T instances: 27%

Since many of our customers are focused on cost optimization for non-production cloud resources (i.e., a lot of developers and test environments), we are probably seeing more “T” instances than “M” instances as they are less expensive, and the “bursty” nature of T instances is not a factor in their work. For a production workload, M instances with dedicated CPU resources are more predictable. While we cannot say for sure why we are also seeing a very large number of “I” instances, it is quite possible that developers/testers are running database software in an EC2 instance, rather than in RDS, in order to have more direct control and visibility into the database system. Still, 49% of the resources are in the General Purpose M and T families.

The Nitty and/or Gritty

Assuming you have decided that an M instance is the right tool for your job, your next choice will be to decide which one. As of the date of this blog, there are twelve different instance types within the M family, covering two generations of systems.

Table 1 – The M Instance Family Specs (Pricing per hour for on-demand instances in US-East-1 Region)

The M4 generation was released in June 2015. The M4 runs 64-bit operating systems on hardware with 2.3 GHz Intel Xeon E5-2686 (Broadwell) or 2.4 GHz Intel Xeon E5-2676 v3 (Haswell) processors, potentially jumping to 3 GHz with Turbo Boost. None of the M4 instance family supports instance store disks, but all are EBS-optimized by default. These instances also support Enhanced Networking, a no-extra-cost option that allows up to 10 Gbps of network bandwidth.

The M5 generation was just released this past November at re:Invent 2017. The M5 generation is based on custom Intel Xeon Platinum 8175M processors running at 2.5 GHz. When communicating with other systems in a Cluster Placement Group (a grouping of instances in a single Availability Zone), the m5.24xlarge instance can support an amazing 25 Gbps of network bandwidth. The M5 type also supports EBS via an NVMe driver, a block storage interface designed for flash memory. Interestingly, AWS has not jacked up the EBS performance guarantee for this faster EBS interface. This may be because it is the customer’s responsibility to install the right driver to get the higher performance on older OS images, so this could also be a cheap/free performance win if you can migrate to M5.

Amazon states that the M5 generation delivers 14% better price/performance on a per-core basis than the M4 generation. In the pricing above, one can do the math and find that all of the M5 instances cost $0.048 per vCPU per hour, and that the M4 instances all cost $0.05 per vCPU per hour. So right out of the box, the M5 is priced 4% cheaper than an equivalently configured M4. Do the same math for RAM vs vCPU and you can see that AWS allocates 4GB of RAM per vCPU in both the M4 and M5 generations. This probably says a lot about how the underlying hardware is sliced/diced for virtual machines in the AWS data centers.
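You can check that per-vCPU math yourself. The prices below are the early-2018 on-demand US-East-1 rates for a few representative sizes (confirm against the current price list):

    # Per-vCPU hourly cost for a few M5 and M4 sizes (us-east-1, early 2018).
    m5 = {"m5.large": (2, 0.096), "m5.xlarge": (4, 0.192), "m5.2xlarge": (8, 0.384)}
    m4 = {"m4.large": (2, 0.10), "m4.xlarge": (4, 0.20), "m4.2xlarge": (8, 0.40)}

    for family in (m5, m4):
        for name, (vcpus, hourly) in sorted(family.items()):
            print(f"{name}: ${hourly / vcpus:.3f} per vCPU-hour")
    # Every M5 size works out to $0.048/vCPU-hour and every M4 to $0.050,
    # so the M5 is priced 4% below an equivalently configured M4.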

For more thoughts on historic M instance pricing, please see our other blog about the dropping cost of cloud services.

Parting thoughts

Some key takeaways:

  • If you are not sure how your application is going to behave under a production load, start with an M instance and migrate to something more specialized if needed.
  • If you do not need consistent and continuous high CPU performance, like for dev/test or low usage systems, consider using the similarly General Purpose T instance family.
  • If you are launching a new instance, use the M5 generation for the better value.

Overall, the M family gives the best price/performance for General Purpose production systems, making it your Main choice for Middlin’ performance of Most workloads!



7 Ways Cloud Services Pricing is Confusing

Beware the sticker shock – cloud services pricing is nothing close to simple, especially as you come to terms with the dollar amount on your monthly cloud bill. While cloud service providers like AWS, Azure, and Google were meant to provide compute resources to save enterprises money on their infrastructure, cloud services pricing is complicated, messy, and difficult to understand. Here are 7 ways that cloud providers obscure pricing on your monthly bill:  

1 – They use varying terminology

For the purpose of this post, we’ll focus on the three biggest cloud service providers: AWS, Azure, and Google. Between these three cloud providers alone, different terms are used for just about every component of the services offered.

For example, when you think of a virtual machine (VM), that’s what AWS calls an “instance,” Azure calls a “virtual machine,” and Google calls a “virtual machine instance.” If you have a group of these different machines, or instances, in Amazon and Google they’re called “auto-scaling” groups, whereas in Azure they’re called “scale sets.” There’s also different terminology for their pricing models. AWS offers on-demand instances, Azure calls it “pay as you go,” and Google refers to it as “sustained use.” You’ve also got “reserved instances” in AWS, “reserved VM instances” in Azure, and “committed use” in Google. And you have spot instances in AWS, which are the same as low-priority VMs in Azure, and preemptible instances in Google.

2 – There’s a multitude of variables

Operating systems, compute, network, memory, and disk space are all different factors that go into the pricing and sizing of these instances. Each of these virtual machine instances also has different categories: general purpose, compute optimized, memory optimized, disk optimized and other various types. Then, within each of these different instance types, there are different families. In AWS, the cheapest and smallest instances are in the “t2” family; in Azure they’re called the “A” family. On top of that, there are different generations within each of those families, so in AWS there’s t2, t3, m2, m3, m4, and within each of those processor families, different sizes (small, medium, large, and extra large). So there are lots of different options available. Oh, and lots of confusion, too.

3 – It’s hard to see what you’re spending

If you aren’t familiar with AWS, Azure, or Google Cloud’s consoles or dashboards, it can be hard to find what you’re looking for. To find specific features, you really need to dig in, and even the basics, like figuring out how much you’re currently spending and predicting how much you will be spending, can be very hard to work out. You can go with the option of building your own dashboard by pulling from their APIs, but that takes a lot of upfront effort, or you can purchase an external tool to manage overall cost and spending.

4 – It’s based on what you provision…not what you use

Cloud services can be charged on a per-hour, per-minute, or per-second basis. If you’re used to the on-prem model where you just deploy things and leave them running 24/7, then you may not be used to this kind of pricing model. But when you move to the cloud’s on-demand pricing models, everything is based on the amount of time you use it.

When you’re charged per hour, 6 cents per hour might not seem like much, but after running an instance for 730 hours in a month, it turns out to be real money. This leads to another sub-point: the bill doesn’t come until 5 days after the month ends, and it’s not until that point that you get to see what you’ve used. While you’re using instances (or VMs), you don’t really think about turning them off, or even about losing track of servers. We’ve had customers with servers in different regions, or on different accounts that don’t get checked regularly, who didn’t even realize those servers had been running all this time, charging up bill after bill.
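Worked out, the “it’s only 6 cents an hour” trap looks like this:

    hourly = 0.06
    monthly = hourly * 730            # ~$43.80/month for one always-on instance
    fleet_yearly = monthly * 12 * 50  # 50 forgotten instances for a year
    print(f"${monthly:.2f}/month each; ${fleet_yearly:,.2f}/year for 50 of them")
    # -> $43.80/month each; $26,280.00/year for 50 of them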

You might also be overprovisioning or oversizing resources — for example, provisioning multiple extra large instances thinking you might need them someday or use them down the line. If you’re used to that, and overprovisioning everything by twice as much as you need, it can really come back to bite you when you go look at the bill and you’ve been running resources without utilizing them, but are still getting charged for them – constantly.

5 – They change the pricing frequently

Cloud services pricing changes quite often. So far, prices have been trending downward, so things have been getting cheaper over time due to factors like competition and increased utilization of the providers’ data centers. However, don’t jump to the conclusion that prices will never go up.

Frequent price changes make it hard to map out usage and costs over time. Amazon alone has changed its prices more than 60 times since launch, making it hard for users to plan a long-term approach. Also, for instances that have been deployed for a long time, prices don’t display in a way that is easy to track, so you may not even realize there has been a price change if you’ve been running the same instances on a consistent basis.

6 – They offer cost savings options… but they’re difficult to understand (or implement)

In AWS, there are some cost savings measures available for shutting things down on a schedule, but in order to run them you need to be familiar with Amazon’s internal tools like Lambda and RDS. If you’re not already familiar, it may be difficult to actually implement this just for the sake of getting things to turn off on a schedule.  

One of the other things you can use in AWS is Reserved Instances, or with Azure you can pay upfront for a full year or two years. The problem: you need to plan ahead for the next 12 to 24 months and know exactly what you’re going to use over that time, which sort of goes against the nature of cloud as a dynamic environment where you can just use what you need. Not to mention, going back to point #2, the obscure terminology for spot instances, reserved instances, and what the different sizes are.

7 – Each service is billed in a different way

Cloud services pricing shifts between IaaS (infrastructure as a service), where VMs are billed one way, and PaaS (platform as a service), which is billed another way. Different mechanisms for billing can be very confusing as you start expanding into the different services that cloud providers offer.

As an example, Lambda functions in AWS are charged based on the number of requests for your functions and on duration, the time it takes for your code to execute. The Lambda free tier includes 1M free requests per month and 400,000 GB-seconds of compute time per month; beyond that, you pay $0.20 per 1M requests plus $0.00001667 for every GB-second used – simple, right? Not so much.
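Here is a sketch of working out a month’s Lambda bill from that pricing, with made-up usage numbers:

    # Hypothetical month: 5M invocations, 300 ms average at 512 MB.
    requests_m = 5_000_000
    avg_ms, mem_gb = 300, 0.5

    gb_seconds = requests_m * (avg_ms / 1000) * mem_gb          # 750,000 GB-s
    req_cost = max(requests_m - 1_000_000, 0) / 1_000_000 * 0.20
    dur_cost = max(gb_seconds - 400_000, 0) * 0.00001667
    print(f"requests ${req_cost:.2f} + duration ${dur_cost:.2f} "
          f"= ${req_cost + dur_cost:.2f}/month")
    # -> requests $0.80 + duration $5.83 = $6.63/month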

Another example comes from the databases you can run in Azure. Databases can run as a single server or can be priced by elastic pools, each with different pricing tables based on the type of database, then priced by storage, number of databases, etc.

With Google Kubernetes clusters, you’re getting charged per node in the cluster, and each node is charged based on size. Nodes are auto-scaled, so price will go up and down based on the amount that you need. Once again, there’s no easy way of knowing how much you use or how much you need, making it hard to plan ahead.

What can you do about it?

Ultimately, cloud service offerings are there to help enterprises save money on their infrastructures, and they’re great options IF you know how to use them. To optimize your cloud environment and save money on costs, we have a few suggestions:

    • Get a single view of your billing. You can write your own scripts (but that’s not the best answer) or use an external tool.  
    • Understand how each of the services you use is billed. Download the bill, look through it, and work with your team to understand how you’re being billed.
    • Make sure you’re not running anything you shouldn’t be. Shut things down when you don’t need them, like dev and test instances on nights and weekends.
    • Review regularly to plan out usage and schedules as much as you can in advance.
    • Put governance measures in place so that users can only access certain features, regions, and limits within the environment. 

Cloud services pricing is tricky, complicated, and hard to understand. Don’t let this confusion affect your monthly cloud bill. Try ParkMyCloud for an automated solution to cost control.

Read more ›

How to Use Terraform Provisioning and ParkMyCloud to Manage AWS

Recently, I’ve been on a few phone calls where I get asked about cost management of resources built in AWS using Terraform provisioning. One of the great things about working with ParkMyCloud customers is that I get a chance to talk to a lot of different technical teams from various types of businesses. I get a feel for how the modern IT landscape is shifting and trending, plus I get exposed to the variety of tools that are used in real-world use cases, like Atlassian Bamboo, Jenkins, Slack, Okta, and Hashicorp’s Terraform.

Terraform seems to be the biggest player in the “infrastructure as code” arena. If you’re not already familiar with it, it’s fairly straightforward to use and the benefits quickly become apparent. You take a text file, use it to describe your infrastructure down to the finest detail, then run “terraform apply” and it just happens. Then, if you need to change your infrastructure or revert any unwanted changes, Terraform can update or roll back to a known state. By working with AWS, Azure, VMware, Oracle, and many more, Terraform can be your one place for infrastructure deployment and provisioning.

How to Use Terraform Provisioning and ParkMyCloud with AWS Autoscaling Groups

I’ve talked to a few customers recently who utilize Terraform as their main provisioning tool, while ParkMyCloud is their ongoing cloud governance and cost control tool. Using these two systems together is great, but one main point of confusion comes up with AWS’s Auto Scaling Groups (ASGs). The question I usually get asked is how Terraform handles the changes that ParkMyCloud makes when scheduling ASGs, so let’s take a look at the interaction.

When ParkMyCloud “parks” an ASG, it sets the Min/Max/Desired to 0/0/0 by default, and saves the values you originally entered for that ASG as the “started” values. If you run “terraform apply” while the ASG is parked, Terraform will complain that the Min/Max/Desired values are 0 and change them back to the values you state. Then, the next time ParkMyCloud pulls from AWS (every 10 minutes), it will see that the ASG is started and stop it as normal.

If you change the value of the Min/Max/Desired in Terraform, this will get picked up by ParkMyCloud as the new “on” values, even if the ASG was parked when you updated it. This means you can keep using Terraform to deploy and update the ASG, while still using ParkMyCloud to park the instances when they’re idle.
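For the curious, here's roughly what that parking interaction looks like at the AWS API level – a boto3 sketch, not ParkMyCloud's actual implementation (which persists the saved values rather than stashing them in a dictionary):

```python
# Sketch of parking/unparking an AWS Auto Scaling Group with boto3.
import boto3

autoscaling = boto3.client("autoscaling")
saved_sizes = {}  # illustration only; the real platform persists these

def park_asg(name):
    group = autoscaling.describe_auto_scaling_groups(
        AutoScalingGroupNames=[name]
    )["AutoScalingGroups"][0]
    # Remember the original Min/Max/Desired so they can be restored later.
    saved_sizes[name] = (
        group["MinSize"], group["MaxSize"], group["DesiredCapacity"]
    )
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName=name, MinSize=0, MaxSize=0, DesiredCapacity=0
    )

def unpark_asg(name):
    mn, mx, desired = saved_sizes[name]
    autoscaling.update_auto_scaling_group(
        AutoScalingGroupName=name, MinSize=mn, MaxSize=mx,
        DesiredCapacity=desired
    )
```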

How to Use Terraform to Set Up ParkMyCloud

If you currently leverage Terraform provisioning for AWS resources but don’t have ParkMyCloud connected yet, you can also utilize Terraform to do the initial setup of ParkMyCloud. Use this handy Terraform script to create the necessary IAM Role and Policy in your AWS account, then paste the ARN output into your ParkMyCloud account for easy setup. Now you’ll be deploying your instances as usual using Terraform provisioning while parking them easily to save money!
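If you'd like to see what that setup amounts to, here's a hedged boto3 equivalent of the idea – the trusted account ID, external ID, and permission list below are placeholders, so use the values from ParkMyCloud's actual script rather than these:

```python
# Hypothetical sketch: create a cross-account IAM role a service like
# ParkMyCloud could assume. Account ID, external ID, and actions are
# placeholders; consult the real Terraform script for correct values.
import json
import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111111111111:root"},  # placeholder
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "EXTERNAL-ID"}},
    }],
}

role = iam.create_role(
    RoleName="ParkMyCloudRole",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)

iam.put_role_policy(
    RoleName="ParkMyCloudRole",
    PolicyName="ParkMyCloudPolicy",
    PolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            # Placeholder permissions: describe, stop, and start instances.
            "Action": ["ec2:Describe*", "ec2:StartInstances",
                       "ec2:StopInstances"],
            "Resource": "*",
        }],
    }),
)

print(role["Role"]["Arn"])  # paste this ARN into ParkMyCloud
```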

Read more ›

$12.9 Billion in wasted cloud spend this year.

Wake up and smell the wasted cloud spend. The cloud shift is not exactly a shift anymore, it’s an evident transition. It’s less of a “disruption” to the IT market and more of an expectation. And with enterprises following a visible path headed towards the cloud, it’s clear that their IT spend is going in the same direction: up.

Enterprises have a unique advantage as their cloud usage continues to grow and evolve. The ability to see where IT spend is going is a great opportunity to optimize resources and minimize wasted cloud spend, and one of the best ways to do that is by identifying and preventing cloud waste.

So, how much cloud waste is out there and how big is the problem? What difference does this make to the enterprises adopting cloud services at an ever-growing rate? Let’s take a look.

The State of the Cloud Market in 2018

The numbers don’t lie. For a real sense of how much wasted cloud spend there is, the first step is to look at how much money enterprises are spending in this space at an aggregate level.

Gartner’s latest IT spending forecast predicts that worldwide IT spending will reach $3.7 trillion in 2018, up 4.5 percent from 2017. Of that number, the portion spent in the public cloud market is expected to reach $305.8 billion in 2018, up $45.6 billion from 2017.

The last time we examined the numbers back in 2016, the global public cloud market was sitting at around $200 billion and Gartner had predicted that the cloud shift would affect $1 trillion in IT spending by 2020. Well, with an updated forecast and over $100 billion later, growth could very well exceed predictions.

The global cloud market and the portion attributed to public cloud spend are what give us the ‘big picture’ of the cloud shift, and it just keeps growing, and growing, and growing. You get the idea. To start understanding wasted cloud spend at an organizational level, let’s break this down further by looking at an area that Gartner says is driving a lot of this growth: infrastructure as a service (IaaS).

Wasted Cloud Spend in IaaS

As enterprises increasingly turn to cloud service providers like Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) to provide compute resources for hosting components of their infrastructures, IaaS plays a significant role in both cloud spend and cloud waste.

Of the forecasted $305.8 billion public cloud market for 2018, $45.8 billion will be spent on IaaS, two-thirds of which goes directly to compute resources. This is where we get into the waste part:

  • 44% of compute resources are used for non-production purposes (i.e. development, staging, testing, QA)
  • The majority of servers used for these functions only need to run during the typical 40-hour work week (Monday through Friday, 9 to 5) and do not need to run 24/7
  • Cloud service providers are still charging you by the hour (or minute, or even by the second) for providing compute resources

The bottom line: for the other 128 hours of the week (or 7,680 minutes, or 460,800 seconds), you’re getting charged for resources you’re not even using. And that’s a large percentage of your waste!
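Here's that arithmetic in Python, in case you want to plug in your own work week:

```python
# How much of the week a 9-to-5, Mon-Fri server sits idle.
HOURS_PER_WEEK = 24 * 7          # 168
WORK_HOURS = 5 * 8               # 40-hour work week

idle_hours = HOURS_PER_WEEK - WORK_HOURS        # 128
idle_fraction = idle_hours / HOURS_PER_WEEK     # ~0.76

print(f"{idle_hours} idle hours/week "
      f"({idle_hours * 60} minutes, {idle_hours * 3600} seconds)")
print(f"Potential savings on hourly-billed resources: {idle_fraction:.0%}")
```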

What You Can Do to Prevent Wasted Cloud Spend

Turn off your cloud resources.

The easiest and fastest way to save money on your idle cloud resources is simply to not use them. In other words, turn them off. When you think of the cloud as a utility like electricity, it’s as simple as turning off the lights every night and when you’re not at home. With ParkMyCloud you can automatically schedule your cloud resources to turn off when you don’t need them, like nights and weekends, and cut 65% or more off your monthly bill with AWS, Azure, and Google. Wham, bam.

Turn on your SmartParking.

You already know that you don’t need your servers to be on during nights and weekends, so you shut them off. That’s great, but what if you could save even more with valuable insight and information about your exact usage over time?

With ParkMyCloud’s new SmartParking feature, the platform will track your utilization data, look for patterns and create recommended schedules for each instance, allowing you to turn them off when they’re typically idle.

There’s a lot of cloud waste out there, but there’s also something you can do about it: try ParkMyCloud today.

Read more ›

Yeah, Yeah, Yeah we Park %$#@, but what really matters to Enterprises? – Frequently Asked Questions

Here at ParkMyCloud we get to do product demos for a lot of great companies all over the world, from startups to Fortune 500s, in many different industries: software, IT, financial, media, food and beverage, and many more. As we talk to industry analysts and venture capitalists, they always ask about vertical selling and the like (we used to do this back at Micromuse, where we had Federal, Enterprise, Service Provider, and SMB sales teams, for example). But here at ParkMyCloud we notice that, in general, the questions from enterprises are vertical-agnostic, and since cloud is the great IT equalizer in my book, we decided to summarize the 8 Most Frequently Asked Questions we get from prospects of all shapes and sizes.

These are the more common questions we get beyond turning cloud resources off / on:

How does ParkMyCloud handle system patching?

Answer: The most common way of dealing with patching is to use our API. The workflow is to log in through the API, get a list of resources, choose the ones you want, and “snooze” their schedules (a temporary override of the schedule, if you haven’t played with that yet) for a couple of hours, or however long the patching takes. Once the schedule is snoozed, you can toggle the instance on and do the patching. After the patching is complete, you can either cancel the snooze to go back to the original schedule or wait for the snooze to time out.
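As an illustration of that workflow (and only an illustration – the endpoint paths and JSON fields below are invented, so check the actual ParkMyCloud API documentation for the real ones):

```python
# Illustrative only: URL paths and fields here are hypothetical, not
# ParkMyCloud's documented API. The workflow matches the one described:
# log in, list resources, snooze the schedule, toggle on, then patch.
import requests

BASE = "https://console.parkmycloud.com/api"  # hypothetical base URL
session = requests.Session()

# 1. Log in and list resources.
session.post(f"{BASE}/login", json={"apiKey": "YOUR-KEY"})
resources = session.get(f"{BASE}/resources").json()

for res in resources:
    if res["name"].startswith("patch-group-1"):   # pick your targets
        # 2. Snooze the schedule for 2 hours (a temporary override).
        session.post(f"{BASE}/resources/{res['id']}/snooze",
                     json={"hours": 2})
        # 3. Toggle the instance on so the patch job can reach it.
        session.post(f"{BASE}/resources/{res['id']}/start")

# 4. ...run patching... then cancel the snooze or let it time out.
```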

If your patching is done on a weekly basis, you could also just implement the patch times into the schedules so the instances turn on, say at 3am on Sunday.

How do I start and stop instances in a sequential order?

Answer: ParkMyCloud has a feature we call ‘Logical Groups’: you group cloud resources into a group or cluster within the platform and then assign the order in which you want them to stop and start. You can also set how long to wait between resource 1 starting/stopping and resource 2 starting/stopping, and so forth. This way, your web server can stop first and the database second, so all the connections close properly. As this feature is very popular, we have had many requests to fully automate it using our policy engine and tags – a work in progress that will be way cool.
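For a feel of what Logical Groups saves you from, this is what an ordered shutdown looks like if you script it by hand – a sketch with hypothetical instance IDs:

```python
# Hand-rolled ordered shutdown: stop the web tier first, wait for
# connections to drain, then stop the database. Logical Groups automates
# this ordering (and the reverse for startup).
import time
import boto3

ec2 = boto3.client("ec2")

SHUTDOWN_ORDER = [
    ("i-0webserver1234567", 120),  # hypothetical ID; wait 120s after stop
    ("i-0database7654321", 0),
]

for instance_id, delay_seconds in SHUTDOWN_ORDER:
    ec2.stop_instances(InstanceIds=[instance_id])
    time.sleep(delay_seconds)   # give the next tier time to settle
```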

My developers hate UIs – how do they manage schedules without using your UI?

Answer: Yes, this is an easy one but always gets asked. If you are anti-UI, or just don’t want to use yet another UI, you can manage your resources in ParkMyCloud through other channels, such as our API or our SlackBot.

Can I govern user access and permissions?

Answer: Yes, we have support for Single Sign-On (SSO) and a full Role-Based Access Control (RBAC) model in the platform that allows you to import users, add them to teams, and assign them roles. The common scenario here is ‘I only want my SAP QA team to have access to the cloud resources they need for that project and nothing else, with limited permissions’ – handled.

Can I automatically assign schedules based on tags?

Answer: Yes, and in general this is what most companies do with ParkMyCloud. We have a Policy Engine where you can create policies that fully automate your cloud resource scheduling. The policy reads the AWS, Azure, or Google Cloud metadata brought into the platform, and based on those tags (or even other data like resource name, size, region, etc.) we can automatically assign schedules to cloud resources. We take that a step further: those resources can also be automatically assigned to teams and users based on their roles (see RBAC above).

You can only park stuff based on tags? That’s so weak!

Answer: Not so fast my friend… I must admit we sort of threw this one in there, but it does come up quite often, and we recently solved this problem with our release of SmartParking, which allows you to bring in metric data, trend it over a period of time, and then automatically create schedules based on those usage patterns – cool stuff.

Can we pick which instances we bring into ParkMyCloud?

Answer: Sort of. The cloud providers’ APIs don’t allow you to choose which cloud resources within an account you bring into the platform: if you link a cloud account to ParkMyCloud, all the cloud resources in that account will populate (assuming our API supports those resources and the cloud provider allows you to ‘park’ them). But we do let you choose which accounts you bring into ParkMyCloud, so link as many or as few accounts as you wish. By the way, AWS recommends you create accounts based on function, like Production, Dev, Test, QA, etc., and then break those down even more granularly into Dev 1, Dev 2, Dev 3, etc. – this is ideal for ParkMyCloud.

Where is ParkMyCloud located?

Answer: Northern Virginia of course – in Sterling, at Terminal 68 to be precise. It’s a co-working space we share with several other startups. We would also be remiss if we did not mention this area is one of the finalist locations for Amazon’s HQ2 – it’s a hotbed of cloud and data center activity.

We hope this was helpful, and we would value your feedback on these 8 Most Frequently Asked Questions: are yours the same or different? Or, of course, our favorite… have you thought of XYZ as a feature? Let us know at info@parkmycloud.com.

Read more ›

The Cost of Cloud Computing Is, in Fact, Dropping Dramatically

You might read the headline statement that the cost of cloud computing is dropping and say “Well, duh!”. Or maybe you’re on the other side of the fence. A coworker recently referred me to a very interesting blog on the Kapwing site that states cloud costs aren’t actually dropping dramatically. The author defines “dramatically” based on the targets set by Moore’s Law or the more recently proposed Bezos’ Law, which states that “a unit of [cloud] computing power price is reduced by 50 percent approximately every three years.” The blog focused on the cost of the Google Cloud Platform (GCP) n1-standard-8 machine type, and illustrated historical data for the Iowa region:

Date            n1-standard-8 Cost per Hour
January 2016    $0.40
January 2017    $0.40
January 2018    $0.38

The Kapwing blog also illustrates that the GCP storage and network egress costs have not changed at all in three years. These figures certainly add up to a conclusion that Bezos’ Law is not working…at least not for GCP.

Whose law is it anyway?

If we turn this around and try to apply Bezos’ Law to, well, Bezos’ Cloud we see a somewhat different story.

The approach to measuring AWS pricing changes needs to be a bit more systematic than for GCP, as the AWS instance types have evolved quite a bit over their history. This evolution is shown by the digit that follows the first character in the instance type, indicating the version or generation number of the given instance type: for example, m1.large vs. m5.large. These are similar virtual machines in terms of specifications, with 2 vCPUs and about 8GB RAM, but the m1.large was released in October 2007 and the m5.large in November 2017. While the “1” in the GCP n1-standard-8 could also be a version number, it is still the only version I can see going back to at least 2013. For AWS, these generation numbers change more frequently, and likely reflect new generations of the underlying hardware on which the instance can run.

Show me the data!

In any event, when we use the Internet Archive to look at pricing changes of the specific instance type, as well as the instance type “family” as it evolves, we see the following (all prices are USD cost per hour for Linux on-demand in the us-east-1 region, from the earliest available archived month of data for the quoted year):

Year    m1.large   m3.large   m4.large   m5.large   Reduction from previous year/generation   3-year reduction
2008    $0.40
2009    $0.40                                         0%
2010    $0.34                                        -18%
2011    $0.34                                         0%                                       -18%
2012    $0.32                                        -6%                                       -25%
2013    $0.26                                        -23%                                      -31%
2014    $0.24      $0.23                             -13%                                      -46%
2015    $0.175     $0.14                             -64%                                      -103%
2016    $0.175     $0.133     $0.120                 -17%                                      -80%
2017    $0.175     $0.133     $0.108                 -11%                                      -113%
2018*   $0.175     $0.133     $0.100     $0.096      -13%                                      -46%

*Latest Internet Archive data from Dec 2017 but confirmed to match current Jan 2018 AWS pricing.

FWIW: The second-generation m2.large instance type was skipped, though in October 2012 AWS released “Second Generation Standard” instances in Extra Large and Double Extra Large sizes – along with about an 18% price reduction for the first generation.

To confirm that we can safely compare these prices, we need to look at how the mX.large family has evolved over the years:

Instance type                                        Specifications
m1.large (originally the “Standard Large” type)      2 vCPU w/ECU of 4, 7.5GB RAM
m3.large                                             2 vCPU w/ECU of 6.5, 7.5GB RAM
m4.large                                             2 vCPU w/ECU of 6.5, 8GB RAM
m5.large                                             2 vCPU w/ECU of 10, 8GB RAM

A couple of notes on this:

  • ECU is “Elastic Compute Unit” – a standardized measure AWS uses to support comparison between CPUs on different instance types. At one point, 1 ECU was defined as the compute power of a 1GHz CPU circa 2007.
  • I realize that the AWS mX.large family is not equivalent to the GCP n1-standard-8 machine type mentioned earlier, but I was looking for an AWS machine type family with a long history and a fairly consistent configuration (and this is not intended to be a GCP vs. AWS cost comparison).

The drop in the cost of cloud computing looks kinda dramatic to me…

The net average of the 3-year reduction figures is -58%, so Bezos’ Law is looking pretty good. (And there is probably an interesting grad-student dissertation somewhere about how serverless technologies fit into Bezos’ Law…) When you factor in the m1.large ECU of 4 versus the m5.large ECU of 10, more than doubling the net computing power, one could easily argue that Bezos’ Law significantly understates the situation. Overall, the trend here is not just significantly declining prices but also greatly increased capability (higher ECU and more RAM), certainly reflecting increased value to the customer.
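For the curious, that -58% figure is just the mean of the 3-year reduction column in the table above:

```python
# Mean of the 3-year reduction column (2011 through 2018).
three_year_reductions = [-18, -25, -31, -46, -103, -80, -113, -46]
print(sum(three_year_reductions) / len(three_year_reductions))  # -57.75
```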

So, why has the pricing of the older m1 and m3 generations gone flat but is still so much more expensive? On the one hand, one could imagine that the older generations of underlying hardware consume more rack space and power, and thus cost Amazon more to operate. On the other hand, they have LONG since amortized this hardware cost, so maybe they could drop the prices. The reality is probably somewhere in between, where they are trying to motivate customers to migrate to newer hardware, allowing them to eventually retire the old hardware and reuse the rack space.

Intergenerational Rightsizing

There is definite motivation here to do a lateral intergenerational “rightsizing” move. We most commonly think of rightsizing as moving an over-powered/under-utilized virtual machine from one instance size to another, like m5.xlarge to m5.large, but intergenerational rightsizing can add up to some serious savings very quickly. For example, an older m3.large instance could be moved to an m5.large instance in about a minute or less (I just did it in 55 seconds: Stop Instance, Change Instance Type, Start Instance), immediately saving 39%. This can frequently be done without any impact to the underlying OS; I essentially just pulled out my old CPU and RAM chips and dropped in new ones. Note that it is not necessarily this easy for all instance types – some older AMIs can break the transition to a newer instance type because of network or other drivers – but it is worth a shot, and the AWS Console should let you know if the transition is not supported (of course, as always: make a snapshot first!)
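The stop/change/start sequence is short enough to script. Here's a sketch with boto3 – the instance ID is a placeholder, and per the warning above, snapshot first:

```python
# Sketch of an intergenerational rightsizing move: m3.large -> m5.large.
# The instance ID is a placeholder. Snapshot before trying this; older
# AMIs may lack drivers (e.g. ENA networking) that newer generations need.
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # placeholder

ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "m5.large"},
)

ec2.start_instances(InstanceIds=[instance_id])
```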


For the full view of cloud compute cost trends, we need to look at both the cost of specific instance types, and the continually evolving generations of that instance type. When we do this, we can see that the cost of cloud computing is, in fact, dropping dramatically…at least on AWS.

Read more ›

Why Serverless Computing Will Be Bigger Than Containers

One of the more popular trends in public cloud adoption is the use of serverless computing in AWS, Microsoft Azure, and Google Cloud. All of the major public cloud vendors offer serverless computing options, including databases, functions/scripts, load balancers, and more. When designing new or updated applications, many developers are looking at serverless components as an option. This new craze is coming at a time when the last big thing, containers, is still around and a topic of conversation. So, when users are starting up new projects or streamlining applications, will they stick with traditional virtual machines or go with a new paradigm? And out of all these buzzy trends, will anything come out on top and endure?

Virtual Machines: The Status Quo

The “traditional” approach to deployment of an application is to use a fleet of virtual machines running software on your favorite operating system. This approach is what most deployments have been like for 20 years, which means that there are countless resources available for installation, management, and upkeep. However, that also means you and your team have to spend the time and energy to install, manage, and keep that fleet going. You also have to plan for things like high availability, load balancing, and upgrades, as well as decide if these VMs are going to be on-prem or in the cloud. I don’t see the use of virtual machines declining anytime soon, but there are better options for some use cases.

Containers: The New Hotness, But Too Complex to be Useful

Containerization involves isolating an application by making it think it’s the only application on a server, with only the hardware available that you allow. Containers can divide up a virtual machine in a similar way that virtual machines can divide up a physical server. This idea has been around since the early 1980s, but has really started to pick up steam due to the release of Docker in 2013. The main benefits of containerization are the ability to maximize the utilization of physical hardware while deploying pieces of a microservices architecture that can easily run on any OS.

This sounds great in theory, but there are a couple of downsides to this approach. The primary problem is the additional operational complexity: you still have to manage the physical hardware and the virtual machines, plus the container orchestration on top, without much of a performance boost. The added complexity, without removing any existing orchestration, means you now have to think about more, not less. You also need to build in redundancy, train your users and developers, and ensure communication between pieces on top of your existing physical and virtual infrastructure.

Speaking of container orchestration, the other main downside is the multitude of options surrounding containers and their management, as there’s no one clear choice of what to use (and it’s hard to tell whether any of the existing ones will just go away one day and leave you with a mess). Kubernetes seems to be the front runner in this area, but Apache Mesos and Docker Swarm are big players as well. Which do you choose, and do you force all users and teams to use the same one? What if the company that manages those applications makes a change you didn’t plan for? There are a lot of questions and unknowns, along with just having to make a choice that could have ramifications for years to come.

Serverless Computing: Less Setup, More Functionality

When users or developers are working on a project that involves a database and some Python scripts, they just want the database and the scripts, not a server running database software and a server running scripts. The main idea behind serverless architecture is to eliminate all the overhead that comes along with these requests for specific software. This is a big benefit to those who just want to get something up and running without installing operating systems, tweaking configuration files, and worrying about redundancy and uptime.

This isn’t all sunshine and rainbows, however. One of the big downsides to serverless comes hand-in-hand with that reduced complexity, in that you also typically have reduced customization. Running an older database version or having a long-running python function might not be possible using serverless services. Another possible downside is that you are typically locked in to a vendor once you start developing your applications around serverless architecture, as the APIs are often going to be vendor-specific.

That being said, it appears that the reduced complexity is a big deal for users who want things to “just work”. Dealing with fewer headaches and less management so they can get creative and deploy some cool applications is one of the main goals of folks who are trying to push the boundaries of what’s possible. If Amazon, Microsoft, or Google want to handle database patching and Python versioning so you don’t have to, then let them deal with it and move on to the fun stuff!

Here at ParkMyCloud, we’re doing a mix of serverless and traditional virtual machines to maximize the benefits and minimize the overhead for what we do. By using serverless where it makes sense, without forcing a square peg into a round hole, we can run virtual machines to handle the code we’ve already written while using serverless architecture for things like databases, load balancing, and email messages. We’re starting to see more customers taking this approach as well, who then use ParkMyCloud to keep the costs of their virtual machines low when they aren’t in use. (If you’d like to do the same, check out a trial of ParkMyCloud to get your hybrid infrastructure optimized.)

When it comes to development and operations, there are numerous decisions to make, all with pros and cons. Serverless architecture is the latest deployment option, and it clearly helps reduce complexity and takes care of things that may otherwise give you headaches. The portability that serverless gives up is something containers handle really well, but they involve more complexity in deployment and ongoing management. Software installed on virtual machines is a tried-and-true method, but it does mean doing a lot of the work yourself. It’s the fact that serverless computing is so simple to implement that makes it more than a trend: this is a paradigm that will endure, where containers won’t.

Read more ›

ParkMyCloud Reviews – Customer Video Testimonials

A few weeks ago at the 2017 AWS re:Invent conference in Las Vegas, we had the opportunity to meet some of our customers at the booth, get their product feedback, and a few shared their ParkMyCloud reviews as video testimonials. As part of our ongoing efforts to save money on cloud costs with a fully automated, simple-to-use SaaS platform, we rely on our customers to give us insight into how ParkMyCloud has helped them. Here’s what they had to say:

TJ McAteer, ProSight Specialty Insurance

“It’s all very well documented. We got it set up within an afternoon with our trial, and then it was very easy to differentiate and show that value – and that’s really the most attractive piece of it.”

As the person responsible for running the cloud engineering infrastructure at ProSight Specialty Insurance, TJ found that ParkMyCloud had everything he was looking for. Not only that, but it was easy to use, well managed, and demonstrated its value right away.

James LaRocque, Decision Resources Group

“What’s nice about it is the ability to track financials of what you’re actually saving, and open it up to different team members to be able to suspend it from the parked schedules and turn it back on when needed.”

As a Senior DevOps engineer at Decision Resources Group, James LaRocque discovered ParkMyCloud at the 2016 AWS re:Invent and has been a customer ever since. He noted that while he could have gone with scripting, ParkMyCloud offered the increased benefits of financial tracking and user capabilities.

“The return on investment is huge.”

Kurt Brochu, Sysco Foods

“We had instant gratification as soon as we enabled it.”

Kurt Brochu, Senior Manager of the Cloud Enablement Team at Sysco Foods, was immediately pleased to see ParkMyCloud saving money on cloud costs as soon as they put it into action. Once he was able to see how much they could save on their monthly cloud bill, the next step was simple.   

“We were able to save over $500 in monthly spend by just using it against one team. We are rolling out to 14 other teams over the course of the next 2 weeks.”

Mark Graff, Dolby Labs

“The main reason why we went for it was that it was easy to give our users the ability to start and stop instances without having to give them access to the console.”

Mark Graff, the Senior Infrastructure Manager at Dolby Labs, became a ParkMyCloud customer thanks to one of his engineers in Europe.

“We just give them credentials, they can hop into ParkMyCloud and go to start and stop instances. You don’t have to have any user permissions in AWS – that was a big win for us.”

We continue to innovate and improve our platform’s cloud cost management capabilities with the addition of SmartParking recommendations, SmartSizing, Alicloud support, and more. Customer feedback is essential to making sure we’re not only saving our customers time and money, but also gaining valuable insight into what makes ParkMyCloud a great tool.

If you use our platform, we’d love to get a ParkMyCloud review from you and hear how ParkMyCloud has helped your business – there’s a hoodie in it for you! Feel free to participate in the comments below or send a direct email to info@parkmycloud.com.


Read more ›

Why ParkMyCloud is the leader in Automated Cloud Cost Control – It’s About the Platform: 2017 Year in Review

2017 was a big year for ParkMyCloud and automated cloud cost control. Working closely with our customers and following industry trends, we strengthened and grew our cloud cost control platform, continuously innovating and adding new features to make ParkMyCloud easier to use and more automated while continuing to do what we do best: saving you money on your cloud costs. Here are the highlights of what improved in ParkMyCloud during 2017:


Auto-Scheduling for Microsoft Azure

You asked, we answered. After a year of growth and success with optimizing cloud resources for users of Amazon Web Services (AWS), ParkMyCloud broadened its appeal by optimizing and reducing cloud spend for Microsoft Azure. CEO Jay Chapel weighed in, “Support for Azure was the top requested feature, so today’s launch will help us drive even bigger growth during 2017 as we become a go-to resource for DevOps and IT users on all the major cloud service providers.”


Single Sign-On

In February, signing into ParkMyCloud became easier than ever with support for single sign-on using SAML. Signing in is simple: use your preferred identity provider for a more streamlined experience, reduce the number of passwords you need to remember and type in, and improve security by keeping a single point of authentication.


Free Tier for ParkMyCloud

This release gave users the option for free cloud optimization using ParkMyCloud – forever. The free tier option was created to support developers who were resorting to writing their own scheduling scripts in order to turn off non-production resources when not in use, saving not only money, but lots of time.

Support for OneLogin for Single Sign-On

ParkMyCloud integrated with OneLogin’s App Catalog marketplace, further simplifying Single Sign-On configuration using SAML 2.0. Benefits included reducing the number of passwords needed to track and allowing administrators to control user access from one place.


More support for Single Sign-On

In May, ParkMyCloud added more SSO integrations to make signing in easy and simple. You can connect with Okta through the Okta App Network (OAN), Centrify, and Microsoft Active Directory Federation Services (ADFS). The updates rounded out six major SSO providers that can be used to connect to ParkMyCloud: ADFS, Azure Active Directory, Google G-Suite, Okta, OneLogin, and Ping Identity.


Support for Google Cloud Platform

In addition to AWS and Azure, ParkMyCloud added support for Google Cloud Platform, making automated cost savings available for all of the ‘big three’ cloud service providers. With the new addition, ParkMyCloud’s continuous cost control platform covered a majority of the $23 billion public cloud market, enabling enterprises to eliminate wasted cloud spend – an estimated $6 billion problem for 2017, projected to become a $17 billion problem by 2020.

Stop/Start for AWS RDS Instances

In June, ParkMyCloud announced that it would now be offering “parking” for AWS RDS instances, allowing users to automatically put database resources on on/off schedules, so they only pay for what they’re actually using. This was the first parking feature on the market to be fully integrated with AWS’s RDS start/stop capability.


Notifications via Slack and Email

You asked, we answered (again). This user-requested feature improved the user experience by providing notifications about your environment and ParkMyCloud account via email, Slack, and other webhooks. Notifications include information about parking actions, system errors, and more. Additionally, ParkMyCloud’s SlackBot allows users to manage resources and schedules through their Slack channel.


Cloud Savings Dashboard

After turning two, ParkMyCloud continued shaping and growing its vision with a new reporting dashboard. This feature made it easy to access reports, providing greater insight into cloud costs, team rosters, and more.


Mobile App for Cloud Cost Optimization

In the last two months of 2017, ParkMyCloud was not about to slow down. Cloud cost optimization reached a new level with the addition of the new ParkMyCloud mobile app. Users are now able to park idle instances directly from their mobile devices. Reduce cloud waste and cut monthly spend by 65% or more, now with even more capability and ease of use.

AWS Utilization Metric Tracking

With this release, ParkMyCloud integrated with AWS CloudWatch to give AWS users resource utilization data for EC2 instances, viewable through customizable heatmaps. The update shows how resources are being used, providing the information needed to gear up for ParkMyCloud’s next releases – SmartParking and SmartSizing.


Utilization Heatmaps

Building on the November release of static heatmaps displaying AWS EC2 utilization metrics, ParkMyCloud used the utilization data to create animated heatmaps. This new feature helps users better identify usage patterns over time and create automated parking schedules. Data is displayed and mapped over a sequence of time, in the form of an animated “video.”

Coming in 2018…

2017 is over, but there’s no end in sight for ParkMyCloud and automated cloud cost control. In addition to all the features we added last year to make cloud cost automation easy, simple, and more available, we have even more in store for our users in 2018. Coming soon, ParkMyCloud will introduce SmartParking, SmartSizing, PaaS ‘parking’, support for AliCloud and more. Stay tuned for another year of updates, new releases, and saving money on cloud costs with ParkMyCloud.

Read more ›

Cloud Computing 101, the Holidays, DevOps Automation and Moscow Mules – How’s that for a mix!?

I’m back to thinking about Cloud Computing 101, DevOps automation, and the other topics that keep my mind whirring at night – a sure sign that the 2017 holiday season is now officially over. I kicked mine off with an Ugly Sweater Party and wrapped it up with the College BCS games. In between, we had my parents’ 50th wedding anniversary (congrats to them), work-related holiday functions, Christmas with family and friends, New Years Eve with friends, and even chucked in some work and skiing. My liver needs a break but I love those Moscow Mules! Oh, and I have a Fitbit now to tell me how much I sit on my arse all day and peck away at this damn laptop – thanks kids, love you :).

What does this have to do with the cloud, cost control, DevOps and ParkMyCloud? At the different functions and events I went to, people who know me and what we do here at ParkMyCloud asked how business was going. In short, it’s great! In case you didn’t notice, the public cloud is growing, and fast. According to this recent article in Forbes, IaaS is growing 36% year on year – giddy up! Enterprises all over the world use ParkMyCloud to automate cloud cost control as part of their DevOps process. In fact we have customers in 20+ countries now. And people from companies like Sysco Foods rave about the ease of use and cost savings provided by the platform.

Now, when I talked to folks who don’t know what we do or what the cloud is, it’s a whole different discussion. For example, here’s a conversation I had at a party with Lindsey – a fictitious name to protect the innocent (or perhaps it’s USA superstar skier Lindsey Vonn… you will never know.) I like to call this conversation and ones like it “Cloud 101.”

Lindsey: “Hey Jay, how’s it going?”

Jay: “Awesome, great to see you Lindsey. Staying fit I see. How’s the family?” (of course I am holding my Mule in my copper mug – love it!)

Blah blah blah – now to the good stuff.

Lindsey: “So what do you do now?”

Jay: “Do you know what the cloud is?”

Lindsey: “You mean like iTunes?”

Jay: “Sort of. You know all those giant buildings you see when driving around here in Ashburn (VA)? Those buildings are full of servers that run the apps that you use in everyday life. Do you use the Starbucks app?”

Lindsey: “Yes – I’m addicted to Peppermint Mochas.”

Jay: “I am an Iced Venti Skim Chai Tea person myself. So the servers in those data centers are what power the cloud, and Starbucks develops its apps in the cloud. Servers cost money when they’re running, just like the lights in your house. And like the lights in your house, those development servers don’t need to run all the time – only when people are actually using them. So we help companies like Starbucks turn them off when they’re not being used. In short, we help companies save money in the cloud.”

Side note to Starbucks — maybe if you used ParkMyCloud to save on your cloud costs with Microsoft and AWS you could stop raising the price of my Iced Venti Skim Chai Tea Latte… just a thought.

It’s thanks to all our customers and partners that I’m able to have this Cloud Computing 101 conversation and include ParkMyCloud in it – with a special thanks to the “Big 3” cloud service providers – AWS, Azure and Google Cloud. Without them, we would not exist as there would not be a cloud to optimize. Kind of like me without my parents, so glad they came together.

Looking ahead to the rest of 2018, we will have lots to write about here at ParkMyCloud — multi-cloud is trending up, automated cloud cost control is trending up, and DevOps will make this all more efficient. And ParkMyCloud will introduce SmartParking, SmartSizing, support for AliCloud and more. It’s all about action and automation baby. Game of Thrones better be back in 2018, too.

Read more ›