Blog - ParkMyCloud

Continuous Integration and Delivery Require Continuous Cost Control

Today, we propose a new concept to add to the DevOps mindset: Continuous Cost Control.

In DevOps, speed and continuity are king. Continuous Operations, Continuous Delivery, Continuous Integration. Keep everything running and get new features in the hands of users quickly.

For some organizations, this approach leads to a mindset of “speed at any cost”. Especially in the era of easily consumable public cloud, this results in a habit of wasted spend and blown budgets – an approach which may, of course, still meet the goals for delivery. But remember that a goal of Continuous Delivery is sustainability. This applies to the coding and backend of the application, but also to the business side.

With that in mind, we get to the cost of development and operations. At some point in every organization’s lifecycle comes the need to control costs. Perhaps it’s when your system or product reaches a certain level of predictability or maturity – i.e. maintenance mode – or perhaps earlier, depending on your organization.

We all know that agility has helped companies create competitive advantage; but customers and others tell us it can’t be “agility at any cost.” That’s why we believe the next challenge is cost-effective agility. That’s what Continuous Cost Control is all about.

What is Continuous Cost Control?

Think of it as the ability to see and automatically take action on development and operations resources, so that the amount spent is a controlled factor and not merely a result. This should occur with no impact to delivery.

Think of the spend your department manages. It likely includes software license costs and true-ups and perhaps various service costs. If you’re using private cloud/on-premise infrastructure, you’ve got equipment purchases and depreciations, plus everything to support that equipment, down to the fuel costs for backup generators, to consider.

However, the second biggest line item (after personnel) for many agile teams is public cloud. Within this bucket, consider the compute costs, bandwidth costs, database costs, storage, transactions… and the list goes on.

While private cloud/on-premise infrastructure requires continuous monitoring and cost control, the problem becomes acute when you change to the utility model of the public cloud. Now, more and more people in your organization have the ability to spin up virtual servers. It can be easy to forget that every hour (or minute, depending on the cloud provider) of this compute time costs money – not to mention all the surrounding costs.

Continually controlling these costs means automating your cost savings at all points in the development pipeline.  Early in the process, development and test systems should only be run while actually in use.  Later, during testing and staging, systems should be automatically turned on for specific tests, then shut down once the tests are complete.  During maintenance and production support, make sure your metrics and logs keep you updated on what is being used – and when.
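
To make that concrete, here is one minimal way to automate the first of those steps: shutting down tagged development and test instances outside working hours. This is only an illustrative sketch, not ParkMyCloud's implementation; the tag key and values, the 8:00-to-19:00 window, and the region are assumptions.

    from datetime import datetime
    import boto3

    WORK_HOURS = range(8, 19)  # assumed "in use" window, 8:00-18:59 local time

    def stop_idle_dev_instances(region="us-east-1"):
        """Stop running EC2 instances tagged as dev/test outside working hours."""
        now = datetime.now()
        if now.weekday() < 5 and now.hour in WORK_HOURS:
            return  # weekday working hours: leave everything running
        ec2 = boto3.client("ec2", region_name=region)
        reservations = ec2.describe_instances(
            Filters=[
                {"Name": "tag:env", "Values": ["dev", "test"]},          # assumed tag convention
                {"Name": "instance-state-name", "Values": ["running"]},
            ]
        )["Reservations"]
        ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
        if ids:
            ec2.stop_instances(InstanceIds=ids)  # parked until someone needs them again

Run from a scheduler (cron, a CI job, or a timer-triggered function), something like this enforces the "turn it off when you're not using it" rule automatically rather than relying on memory.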

How to get started with Continuous Cost Control

While Continuous Cost Control is an idea that you should apply to your development and operations practices throughout all project phases, there are a few things you can do to start a cultural behavior of controlled costs.

  • Create a mindset. Apply principles of DevOps to cloud cost control.
  • Take a few “easy wins” to automate cost control on your public cloud resources.
    • Schedule your non-production resources to turn off when not needed
    • Build in a process to “right size” your instances, so you’re not paying for more capacity than you need
    • Use alternate services besides the basic compute services where applicable. In AWS, for example, this includes Auto Scaling groups, Spot Instances, and Reserved Instances
  • Integrate cost control into your continuous delivery process. The public cloud is a utility which needs to be optimized from day one – or if not then, as soon as possible.
  • Analyze usage patterns of your development team to apply rational schedules to your systems to increase adoption rates
  • Allow deviations from the normal schedules, but make sure your systems revert back to the schedule when possible
  • Be honest about what is being used, and don’t just leave it up for convenience

We hope this concept of Continuous Cost Control is useful to you and your organization – and we welcome your feedback.


Top 3 Ways to Save Money on Azure

Perhaps your CFO or CTO came to you and gave a directive to save money on Azure. Perhaps you received the bill on your own, and realized that it needs to be reduced. Or maybe you’re just migrating to the cloud and want to make sure you’re set up for cost control in advance (if so, props to you for being proactive!).

Whatever the reason you want to reduce your bill, there are a lot of little tips and tricks out there. But to get started, here are the top 3 ways to save money on Azure.

1. Set a spending limit on your Azure account

Our first recommendation to save money on Azure is to set a spending limit on your Azure account. We especially recommend this if you are using your Azure account for non-production. This is because once your limit is reached, your VMs will be stopped and deallocated. You will get an email alert and an alert in the Azure portal, and you do have the ability to turn these back on, but this is of course not ideal for any production systems.

Additionally, keep in mind that there are still services you will be charged for even if your spending limit has been reached, including Visual Studio licenses, Azure Active Directory Premium, and support plans.

Here are full instructions on how to use the Azure spending limit on the Azure website.

2. Right size your VMs

One easy way to spend too much on your Azure compute resources is to use VMs that are not properly sized for the workload you are running on them. Use Azure’s Advisor to ensure that you’re not overpaying for processor cores, memory, disk storage, disk I/O, or network bandwidth. More on right-sizing from TechTarget.

While you’re at it, check to see if there’s a less-expensive region you could choose for the VM for additional cost savings.

3. Turn non-production VMs off when they’re not being used

Our third recommendation to save money on Azure is to turn non-production VMs off when they’re not being used – otherwise, you’re paying for time you don’t need. It’s a quick fix, and one that can save 65% of the cost of the VM – if, for example, it was running 24×7 but is only needed 12 hours per day, Monday through Friday.
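
For the curious, that 65% figure falls out of simple arithmetic on the schedule just described, assuming the VM would otherwise run around the clock:

    running_hours = 12 * 5      # 12 hours per day, Monday through Friday
    always_on_hours = 24 * 7    # what you pay for if the VM never stops
    savings = 1 - running_hours / always_on_hours
    print(f"{savings:.0%}")     # ~64%, i.e. roughly the 65% quoted above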

One basic approach is to ask developers and testers to turn their VMs off when they are done using them — if you do this, ensure that your users are using the Azure portal to put these VMs in the “stopped deallocated” state. If you stop from within a VM, it will be put in a “stopped” state and you will continue to be charged.
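
If you do automate this yourself, the same distinction shows up in code. Here is a rough sketch using the azure-mgmt-compute Python SDK; the subscription ID, resource group, and VM name are placeholders, and method names may differ slightly between SDK versions.

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.compute import ComputeManagementClient

    compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # "Stopped": the OS is shut down but the hardware stays allocated -- still billed.
    # compute.virtual_machines.begin_power_off("dev-rg", "dev-vm-01").wait()

    # "Stopped (deallocated)": compute billing stops; only disks continue to accrue charges.
    compute.virtual_machines.begin_deallocate("dev-rg", "dev-vm-01").wait()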

However, relying on human memory is not a dependable approach, so you’ll want to put your non-production VMs on an automated shutdown schedule. You could attempt to script this, but doing so is counterproductive and wastes valuable development resources on writing and maintaining the scripts.

Instead, it’s best to use software like ParkMyCloud’s to automate on/off schedules – including automating schedule and team assignment for access control – and keep your Azure non-production costs in check.

 

 

These three methods should get you started on your goal to reduce costs. Have any other preferred methods to save money on Azure? Leave a comment below to let us know.


DevOps Cloud Cost Control: How DevOps Can Solve the Problem of Cloud Waste

DevOps cloud cost control: an oxymoron? If you’re in DevOps, you may not think that cloud cost is your concern. When asked what your primary concern is, you might say speed of delivery, or integrations, or automation. However, if you’re using public cloud, cost should be on your list of problems to control.

The Cloud Waste Problem

If DevOps is the biggest change in IT process in decades, then renting infrastructure on demand is the most disruptive change in IT operations. With the switch from traditional datacenters to public cloud, infrastructure is now used like a utility. Like any utility, there is waste. (Think: leaving the lights on or your air conditioner running when you’re not home.)  

How big is the problem? In 2016, enterprises spent $23B on public cloud IaaS services. We estimate that about $6B of that was wasted on unneeded resources. The excess expense known as “cloud waste” comprises several interrelated problems: services running when they don’t need to be, improperly sized infrastructure, orphaned resources, and shadow IT.

Everyone who uses AWS, Azure, and Google Cloud Platform is either already feeling the pressure — or soon will be — to reel in this waste. As DevOps teams are primary cloud users in many companies, DevOps cloud cost control processes become a priority.

4 Principles of DevOps Cloud Cost Control

Let’s put this idea of cloud waste in the framework of some of the core principles of DevOps. Here are four key DevOps principles, applied to cloud cost control:

1. Holistic Thinking

In DevOps, you cannot simply focus on your own favorite corner of the world, or any one piece of a project in a vacuum. You must think about your environment as a whole.

For one thing, this means that, as mentioned above, cost does become your concern. Businesses have budgets. Technology teams have budgets. And, whether you care or not, that means DevOps has a budget it needs to stay within. Whether it’s a concern upfront or doesn’t become one until you’re approached by your CTO or CFO, at some point, infrastructure cost is going to be under scrutiny – and if you go too far out of budget, under direct mandates for reduction.

Solving problems not only speedily and elegantly, but also cost-efficiently, becomes a necessity. You can’t just be concerned about Dev and Ops, you need to think about BizDevOps.

Holistic thinking also means that you need to think about ways to solve problems outside of code… more on this below.

2. No Silos

The principle of “no silos” means not only no communication silos, but also, no silos of access. This applies to the problem of cloud cost control when it comes to issues like leaving compute instances running when they’re not needed. If only one person in your organization has the ability to turn instances on and off, then all responsibility to turn those instances off falls on his or her shoulders.

It also means that if you want to use an instance that is scheduled to be turned off… well, too bad. You either call the person with the keys to log in and turn your instance on, or you wait until it’s scheduled to come on.  Or if you really need a test environment now, you spin up new instances – completely defeating the purpose of turning the original instances off.

The solution is eliminating the control silo by allowing users to access their own instances to turn them on when they need them and off when they don’t — of course, using governance via user roles and policies to ensure that cost control tactics remain uninhibited.

(In this case, we’re thinking of providing access to outside management tools like the one we provide, but this can apply to your public cloud accounts and other development infrastructure management portals as well.)

3. Rapid, Useful Feedback

In the case of eliminating cloud waste, the feedback you need is where, in fact, waste is occurring. Are your instances sized properly? Are they running when they don’t need to be? Are there orphaned resources chugging away, eating at your budget?

Useful feedback can also come in the form of total cost savings, percentages of time your instances were shut down over the past month, and overall coverage of your cost optimization efforts.  Reporting on what is working for your environment helps you decide how to continually address the problem that you are working on next.

You need monitoring tools in place in order to discover the answers to these questions. Preferably, you should be able to see all of your resources in a single dashboard, to ensure that none of these budget-eaters slip through the cracks. Multi-cloud and multi-region environments make this even more important.

4. Automation

The principle of Automation means that you should not waste time creating solutions when you don’t have to. This relates back to the idea of solving problems outside of code, mentioned above.

Also, when “whipping up a quick script”, always remember the time cost to maintain such a solution. More about why scripting isn’t always the answer.

So when automating, keep your eyes open and do your research. If there’s already an existing tool that does what you’re trying to code, it could be a potential time-saver and process-simplifier.

Take Action

So take a look at your DevOps processes today, and see how you can incorporate a DevOps cloud cost control – or perhaps, “continuous cost control”  – mindset to help with your continuous integration and continuous delivery pipelines. Automate cost control to reduce your cloud expenses and make your life easier.


New: ParkMyCloud Supports Centrify for Single Sign-On

Announcing: ParkMyCloud now integrates with Centrify for Single Sign-On (SSO). What, did you think we were finished with SSO integrations?

That brings the list of SSO providers you can use with your ParkMyCloud account to:

  • Active Directory Federation Services (ADFS) – Microsoft
  • Azure Active Directory – Microsoft
  • Centrify
  • Google G-Suite
  • Okta (in Okta App Network)
  • OneLogin (in App Catalog)
  • Ping Identity (in App Catalog)

Stay tuned: ParkMyCloud will be listed in the Centrify marketplace shortly.

We have integrated with Centrify for Single Sign-On, as well as the other SSO providers, to make it simpler:

  1. For account administrators, who can use just-in-time provisioning to automatically add their organization members to ParkMyCloud as they are authenticated in Centrify – all you need to do as an administrator is share your organization’s unique ParkMyCloud login link with your users. This can be found in the ParkMyCloud management console.
  2. For users, who will not need a separate username and password for ParkMyCloud.

For a step-by-step guide for setting up Centrify as a SAML IdP server for ParkMyCloud, please see this article on our support site. Note that you will already need to have your ParkMyCloud account created – though there’s no need to add additional users until you’ve connected with Centrify, at which point you can add them directly from the SSO provider.

If we still don’t support your SSO provider of choice, please leave a comment below or contact us – we’re all about meeting user needs, here!


Cutting through the AWS and Azure Cloud Pricing Confusion (Caveat Emptor)

Before I try to break down the AWS and Azure cloud pricing jargon, let me give you some context. I am a crusty, old CTO who has been working in advanced technology since the 1980s. (That’s more than 18 Moore’s Law cycles for processor and chipset fans, and I have lost count of how many technology hype cycles that has been.)

I have grown accustomed to the “deal of a lifetime” on the “technology of the decade” coming around about once every week. So, you can believe me when I tell you I have a very low BS threshold for dishonest sales folks and bogus technology claims. Yes, I am jaded.

My latest venture is a platform, ParkMyCloud, that brings together multiple public cloud providers. And I can tell you first hand that it is not for the faint-of-heart. It’s like being dropped off in the middle of the jungle in Papua New Guinea. Each cloud provider has its own culture, its own philosophy, its own language and customers, its own maturity level and, worst of all — its own pricing strategy — which makes it tough for buyers to manage costs. I am convinced that the lowest circles of hell are reserved for people who develop cloud service pricing models. AWS and Azure cloud pricing gurus, beware. And reader, to you: caveat emptor.

AWS and Azure Terminology Differences

Case in point: You have probably read the comparisons of various services across the top cloud providers, as people try to wrap their minds around all the varying jargon used to describe pretty much the same thing. For example, let’s just look at one service: Cloud Computing.

In AWS, servers are called Elastic Compute Cloud (EC2) “Instances”. In Azure they are called “Virtual Machines” or “VMs”. Flocks of these spun up from a snapshot according to scaling rules are called “auto scaling groups” in AWS. The same things are called “scale sets” in Azure.

Of course cloud providers had to start somewhere, then they learned from their mistakes and improved. When AWS started with EC2, they had not yet released virtual private clouds (VPCs), so their instances ran outside of VPCs. Now all the latest stuff runs inside of VPCs. The older ones are called “classic” and have a number of limitations.

The same thing is true of Azure. When they first released, their VMs were not set up to use what is now their Resource Manager or be managed in Resource Groups (the moral equivalent of CloudFormation Stacks in AWS). Now, all of their latest VMs are compatible with Resource Manager. The older ones are called, you guessed it … “classic”.

(What genius came up with the idea to call the older versions of these, the ones you’re probably stranded with and no longer want, “classic”?)

Both AWS and Azure have a dizzying array of instances/VMs to choose from, and doing an apples-to-apples comparison between them can be quite daunting. They have different categories: General purpose, compute optimized, storage optimized, disk optimized, etc.

Then within each one of those, there are types or sizes. For example, in AWS the tiny, cheap ones are currently the “t2” family. In Azure, they are the “A” series. On top of that there are different generations of processors. In AWS, they use an integer after the family type, like t2, m3, m4 and there are sizes, t2.small, m3.medium, m4.large, r16.ginormus (OK, I made that one up).  

In Azure, they use a number after the family letter to connote size, like A0, A1, A2, D1, etc. and “v1”, “v2” after that to tell what generation it is, like D1v1, D2v2.

The bottom line: this is very confusing for folks moving their workloads to public cloud from on-premise data centers (yet another Wonderland of jargon and confusion in its own right). How does one decide which cloud provider to use? How does one even begin to compare prices with all of this mess? Cheer up … it gets worse!

AWS and Azure Cloud Pricing – Examining Differences in Charging

To add to that confusion, they charge you differently for the compute time you use. What do I mean?  AWS prices their compute time by the hour. And by hour, they mean any fraction of an hour: If you start an instance and run it for 61 minutes then shut it down, you get charged for 2 hours of compute time.

Microsoft Azure cloud pricing is listed by the hour for each VM, but they charge you by the minute. So, if you run for 61 minutes, you get charged for 61 minutes. On the surface, this sounds very appealing (and makes me want to wag my finger at AWS and say, “shame on you, AWS”).

However, you really have to pay attention to the use case and the comparable instance prices. Let me give you a concrete example. I mentioned my latest venture, ParkMyCloud, earlier. We park (schedule on/off times for) cloud computing resources in non-production environments (without scripting, by the way). So, here is a graph of six months’ worth of data from an m4.large instance somewhere in Asia Pac. The m4 processor family is based on the Xeon Broadwell or Haswell processor and it is one of the most commonly used instance types.

This instance is on a ParkMyCloud parking schedule, where it is RUNNING from 8:00 a.m. to 7:00 p.m. on weekdays and PARKED evenings and weekends. This instance, assuming Linux pricing, costs $0.125 per hour in AWS. From November 6, 2016 until May 9, 2017, this instance ran for 111,690 minutes. This is actually about 1,862 hours, but AWS charged for 1,922 hours and it cost $240.25 in compute time.

[Graph: example of instance uptime in minutes per day]

Why the difference? ParkMyCloud has a very fast and accurate orchestration engine, but when you start and stop instances, the cloud provider and network response can vary from hour-to-hour and day-to-day, depending on their load, so occasionally things will run that extra minute. And, even though this instance is on a parking schedule, when you look at the graph, you can see that the user took manual control a few times. Stuff happens!

What would the cost have been if AWS charged the same way as Azure?  It would have only cost $232.69. Well, that’s not too bad over the course of six months, unless you have 1,000 of these. Then it becomes material.
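
To see mechanically where that gap comes from, here is a small, purely illustrative comparison of the two billing models. The $0.125/hour rate matches the example above, but the session lengths are invented for illustration.

    import math

    RATE = 0.125  # $/hour, the m4.large on-demand Linux rate used above

    # Hypothetical daily run sessions, in minutes: an ~11-hour parking schedule
    # where a couple of sessions run one extra minute.
    sessions = [660, 661, 660, 661, 660]

    hourly = sum(math.ceil(m / 60) for m in sessions) * RATE   # AWS-style (2017): every started hour billed in full
    per_minute = sum(sessions) / 60 * RATE                     # Azure-style: pay only for minutes actually run
    print(hourly, per_minute)  # each stray minute costs a full extra hour under hourly billing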

However, I wouldn’t rush to judgment on AWS. If you look at the comparable Azure VM, the standard pricing DS2 V2, also running Linux, costs $0.152/hour. So, this same instance running in Azure would have cost $290.39. Yikes!

Therefore, in my particular use case, unless the Azure cloud pricing drops to make their CPU pricing more competitive, their per minute pricing really doesn’t save money.

Conclusion

The ironic thing about all of this is that once you get past all the confusing jargon and the ridiculous approaches to pricing and charging for usage, the actual cloud services themselves are much easier to use than legacy on-premise services. The public cloud services do provide much better flexibility and faster time-to-value. The cloud providers simply need to get out of their own way. Pricing is but one example where AWS and Azure need to make things a lot simpler, so that newcomers can make informed decisions.

From a pricing standpoint, AWS on-demand pricing is still more competitive than Azure cloud pricing for comparable compute engines, despite Azure’s more enlightened approach to charging for CPU/Hr time. That said, AWS really needs to get in line with both Azure and Google, who charge by the minute. Nobody likes being charged extra for something they don’t use.

In the meantime, ParkMyCloud will continue to help you turn off non-production cloud resources when you don’t need them, and help save you a lot of money on your monthly cloud bills. If we make anything sound more complex than it needs to be, call us out. No hiding behind jargon here.


New: ParkMyCloud Supports ADFS for Single Sign-On – and Is Now in Ping Identity App Catalog

We are happy to share that ParkMyCloud now supports Active Directory Federation Services (ADFS) for Single Sign-On (SSO).

Additionally, ParkMyCloud is now integrated into the Ping Identity App catalog, making it easier to configure your SSO options and add users from Ping accounts.

With these updates, you can now connect to ParkMyCloud through six major SSO providers:

  • Active Directory Federation Services (ADFS) – Microsoft
  • Azure Active Directory – Microsoft
  • Google G-Suite
  • Okta (in Okta App Network)
  • OneLogin (in App Catalog)
  • Ping Identity (in App Catalog)

All of these SSO providers are among the top of those ranked in Gartner’s 2016 Magic Quadrant for Identity and Access Management as a Service.

Using SSO simplifies processes for both users and administrators. Users need to track and remember fewer passwords, and administrators can control user access in the single location of their SSO provider dashboard, to simplify processes and tighten access control.

Through these SSO providers, ParkMyCloud supports just-in-time provisioning of new users. This means that users are automatically created in ParkMyCloud as they are authenticated from the SSO provider.  All the administrator needs to do is email users the organization’s unique ParkMyCloud login link, which can be found in the ParkMyCloud management console.

For more information about configuring SSO for your ParkMyCloud account, please see this article in our support portal – there are instructions for each SSO provider. (You’ll need to have an active ParkMyCloud account in place before you can start adding users from your SSO provider – here’s the signup link if you need to create one first.)


Azure vs. AWS 2017: Is Azure really surpassing AWS?

Azure vs. AWS 2017: what’s the deal? There’s been a lot of speculation lately that Microsoft Azure may be outpacing Amazon Web Services (AWS). But before jumping to conclusions, it’s worth taking a look at these claims. After all, AWS has been dominating the public cloud market for so long, maybe the media is just bored of that story, and ready for the underdog to jump ahead. So let’s take a look.

Is Azure catching up to AWS?

You may have seen some of the recent reports on Microsoft’s and Amazon’s quarterly earnings. There have certainly been some provocative headlines.

Here’s what the quarterly earnings reports actually showed:

  • AWS revenue grew 43% in the quarter, with quarterly earnings of $3.66 billion, annualized to $14.6 billion.
  • Microsoft reported that its Intelligent Cloud division grew 11% to $6.8 billion, and the Commercial Cloud division has an annualized run rate of $15.2 billion.
  • But, the Commercial Cloud includes Office 365, not just Azure.
  • Microsoft stated that Azure’s growth rate was 93%, without providing an actual revenue number.

So is Azure bigger than AWS?

Well, no. There’s no evidence of that.

But is it growing quickly?

Yes – that it is.

Where is Azure actually gaining ground?

Now let’s take a look at what is driving Azure’s 93% growth, and where Azure is actually gaining ground.

First of all, as companies grow beyond dipping their toes in the water of public cloud, they become more interested in secondary options for diversity and different business cases. Just from our own conversations, we’re finding that more and more AWS users are using Azure as a secondary option.

Second, enterprises have been enmeshed in a variety of Microsoft products for years — Windows and beyond. Microsoft already has the foothold, relationships, and enterprise agreements with these organizations, so they’re pushing Azure as a cross-sell – which is something Azure is counting on as it targets the cloud migration market. AWS, on the other hand, lacks these pathways.

Azure is also doing well in Europe, where more users report using Azure rather than AWS as their primary provider.

How does the Azure vs. AWS 2017 debate matter to the customer?

How does the Azure vs. AWS 2017 debate matter to the customer, when choosing a new or secondary cloud provider? Well… in terms of market performance, it probably doesn’t. As always, the specific needs of your business are going to be what’s important.

Let’s not forget that Google and IBM both have growing public cloud offerings too (and Google is looking to take on the enterprise market this year). All of this competition drives innovation, and therefore better IaaS and PaaS offerings – and perhaps better pricing.

For the customer, the basic questions remain the same when evaluating public cloud providers:

  • How understandable are the public cloud offerings to new customers?
  • How much do the products cost?
  • Are there adequate customer support and growth options?
  • Are there useful surrounding management tools?
  • Will our DevOps processes translate to these offerings?
  • Can the PaaS offerings speed time-to-value and simplify things sufficiently, to drive stickiness?

We’ll continue to track the AWS vs. Azure comparison as the companies’ offerings and pricing options grow and change – we’ll be interested to see how this evaluation changes in 2018.


New: ParkMyCloud Supports Okta for Single Sign-On through Okta App Network

As of today, you can connect to Okta for Single Sign-On (SSO) through the Okta App Network (OAN). This simplifies SSO configuration using SAML 2.0.

Using Okta for Single Sign-On allows administrators to easily add and govern their existing internal users in ParkMyCloud. It also reduces the number of passwords that users need to remember and use.

If you are an Okta customer, it is straightforward to connect to your ParkMyCloud account. First, run your account in admin mode and search for ParkMyCloud on the OAN. All you need from ParkMyCloud is an identifier string, provided in your account settings. Once configured, your users will automatically be added to ParkMyCloud, to the team you specify, after they have been authenticated through Okta.

This makes it extremely simple to get your enterprise users started with parking and saving in ParkMyCloud.

For more details about connecting Okta to your ParkMyCloud account, please see our knowledge base article on the subject.

We also recently added support for OneLogin for SSO, which joined Ping, Google Apps, and Azure Active Directory as SSO options for your ParkMyCloud account.


Your Instance Management Tool Checklist

When you start looking for an instance management tool to help manage your cloud infrastructure costs, you’ll realize there are a lot of options. While evaluating such tools, keep a list of requirements on hand to make sure the software fits your needs and will help you reduce cloud waste. Here are a few items you might want to have on your checklist:

1. High visibility

One factor that contributes to cloud waste is the inability to track cloud instances.  In today’s world, cross-cloud and cross-region are must-haves in order to provide high availability and true redundancy.  Any modern instance management tool must be able to see all of your instances in one place, or you’re sure to have some fall through the cracks.

2. Reporting

You might hate making reports, but solid reporting can be the difference between a well-informed organization and a proverbial dumpster fire. With the help of a good tool, you can generate reports that show the data you need for decision-making, without wasting time.

3. Takes Action

Sure, reports and pretty graphs are nice, but that data needs to actually be acted upon in order to make any real difference to your monthly AWS or Azure bill!  A lot of tools will gather up that data for you, but you really need something that can actually turn off the lights, so to speak — not just tell you which lights haven’t been turned off.

4. Simple to use UI

The user experience of an application can sometimes go unnoticed, but it’s often the difference between a useful tool and shelfware.  One of the main difficulties in determining how easy an interface is to use is that you need to understand who the actual end user will be.   The IT administrator who is evaluating products may be able to figure out the interface, but if other team members will need to use it, then their needs must be taken into account.

5. APIs and Automation

With the rise of DevOps practices and automated infrastructures, API access is a must.  By enabling inbound actions and outbound notifications, new tools can work seamlessly with existing operations to eliminate wasted resources.  Automation should also take into account your naming conventions and tagging standards for optimal integration.

6. Schedule Overrides

Once you’ve started working on solving your cloud waste problem by scheduling resources to turn off when not needed, you need to be able to adapt to the changing needs of the user and the organization.  Anyone with proper access to a system should be able to override a given schedule if necessary, since any tool you use should be helping your users get work done.

7. Team Governance

A huge concern when letting users run wild with any new tool is how you can make sure they aren’t going to break anything.  Giving someone the minimum required access is a security best practice, but sometimes those access controls can be confusing.  In addition to a simple UI, the role-based access controls should also be simple to set up, modify, and understand.

8. Single Sign-On

Some might consider this a nice-to-have, but most enterprises today have started requiring this for all products they use.  Users find it easy to sign in without remembering a million credentials, and admins find it more secure and faster to deploy.  If SSO is being used within your organization, then you should start picking tools that integrate with it easily.

 

This is a starting point, but of course when evaluating an instance management tool, make sure to incorporate any unique needs of your organization. What else would you include on your checklist?


Riding Wave of Rapid Growth, ParkMyCloud to Pitch at Collision Conference 2017

April 26, 2017 (Dulles, VA) — ParkMyCloud, the platform that helps enterprises optimize public cloud spend, will be pitching at Collision Conference in New Orleans, Louisiana next week in the midst of a period of rapid growth for the company.

ParkMyCloud’s customer base has grown by 50% already in 2017, following a successful first full year in 2016. Customers cross verticals and range in size from startups through Fortune 500 companies, and include McDonald’s, Fox, Capital One, Sage, Wolters Kluwer, Neustar, Philips Respironics, and Raytheon.

Customer use of ParkMyCloud’s platform has also been a success. Customers routinely save 65% or more using ParkMyCloud, and the amount of cumulative savings accelerates every day as new users come on board.

While customers benefit from ParkMyCloud’s automated savings, the team has been busy enhancing the platform. Most notably, in January, ParkMyCloud announced support for Microsoft Azure, in addition to Amazon Web Services, which has been supported since September 2015.

Additionally, customers can now log in to ParkMyCloud and add new users in their organizations using Single Sign-On (SSO) providers, including: Google Apps, Ping, Okta, Azure Active Directory, and OneLogin. A recently launched free tier allows users to use ParkMyCloud’s core scheduling functionality for free, forever – enabling developers to free up the time they may have otherwise spent scripting in-house solutions to schedule off time for their instances.

The success has not gone unnoticed outside of the company. ParkMyCloud was voted through to the Final 4 of the DC Inno Tech Madness competition in March. The company was also selected as a finalist in the 2017 Greater Washington Innovation Awards for Emerging Tech Innovator of the Year. ParkMyCloud is routinely mentioned by industry pundits 451 Research, EMA, CRN and Cloud Computing Magazine as a rising star in cloud cost optimization.

Through the rest of 2017, ParkMyCloud looks to continue advancing the platform with more cost optimization functionality, including support for Google Cloud Platform and optimization options for databases and storage. This poises ParkMyCloud for exponential growth over the next several years.

About ParkMyCloud

ParkMyCloud is a SaaS platform that helps enterprises optimize their public cloud spend by automatically reducing resource waste — think “Nest for the cloud”. ParkMyCloud has helped customers such as McDonald’s, Capital One, Unilever, Fox, Sage Software, and Infor dramatically cut their cloud bills by up to 65%, delivering millions of dollars in savings. For more information, visit http://www.parkmycloud.com.


New: ParkMyCloud Supports OneLogin for Single Sign-On

As of today, you can connect ParkMyCloud to OneLogin for Single Sign-On. ParkMyCloud is now integrated with OneLogin’s App Catalog marketplace to simplify Single Sign-On configuration using SAML 2.0.

Using OneLogin for Single Sign-On (SSO) simplifies processes for users by reducing the number of passwords they need to track. It also simplifies the burden for administrators, by allowing them to control user access in one place (for example, through OneLogin) so they don’t have to manage access separately for each user for all applications.

Through OneLogin, ParkMyCloud supports just-in-time provisioning of new users, which means that as soon as a user is authenticated in OneLogin, he or she is automatically created in ParkMyCloud. All the administrator needs to do is email users the organization’s unique ParkMyCloud login link, which can be found in the ParkMyCloud management console.

Once you have a ParkMyCloud account that you would like to integrate with OneLogin, go to the App Catalog and search for “ParkMyCloud”. Once on the ParkMyCloud page, configure the SSO options and accessibility to users, then connect the metadata to ParkMyCloud. That’s it! Now  you can invite your OneLogin users to ParkMyCloud using your unique link, and they will be able to log in to ParkMyCloud in one step with their OneLogin info.

For more information about configuring OneLogin SSO for your ParkMyCloud account please refer to this article.

By the way, ParkMyCloud also supports SSO via Ping, Okta, Google Apps, and Azure Active Directory – see more about our support for these providers.

 


Why Your CFO is About to Tell You to Control Azure Costs

As more and more companies adopt Microsoft Azure as their public cloud, the need to control Azure costs becomes ever more important. As IT, Development and Operations grow their usage of Azure cloud assets, Finance is catching up. Your CFO has seen the bill, and says, “I thought cloud was supposed to be cheaper. So why is this so high?”

Azure Spend Growing

It’s no secret that overall Azure spend is rising rapidly. Azure is the fastest-growing cloud provider, both from adoption by new customers, and growth within accounts of existing customers. Many users of other clouds, such as AWS, are also adopting Azure as a secondary option for diversity.

Here’s the thing: as this spend grows, so too does wasted spend. And customers know this. But as one ParkMyCloud user told us, “As we started to dive into it, we found that a large part of our spend is simply on waste. We didn’t have visibility and policies in place. Our developers aren’t properly cleaning up after themselves, and resources aren’t being tracked, so it’s easy for them to be left running. It’s something we want to change, but it takes time and energy to do that.”

So it’s no wonder that IT, Development, and Operations teams are being tapped by CFOs left and right to reduce costs, as the Azure bill becomes a growing line item in the budget.

Control Azure Costs Before Your CFO Makes You

There are a few things you can do to be proactive and control Azure costs before your CFO comes bursting through your office door. Here are some starting points:

  • Control your view –  the first step toward change is awareness, so use an Azure dashboard to view all of your resources in one, consolidated place. We’ve heard from ParkMyCloud users, upon getting a single view of all of their resources in the ParkMyCloud dashboard, that they found VMs they didn’t even know were running.
  • Control your processes – talk with your team and set clear guidelines around provisioning appropriately sized VMs, stopping non-production VMs when they are not needed, and governing existing VMs (for example, whose responsibility is it to make sure each team is only running the resources they actually need?)
  • Control Azure costs – there are a few simple actions you can take to get your actual Azure costs in control. Here are some starting points:
    • “Right size” your VMs – make sure you aren’t choosing larger capacity/memory/CPU than you need
    • Set automatic schedules so your non-production VMs don’t run when you don’t need them (free with ParkMyCloud’s core version – try it out)
    • Set a spending limit on your Azure account. You can do a hard cutoff that will turn off your VMs once you hit the limit, or simply sign up to receive email alerts when you approach or hit the spending limit.

So, automate your operations today and make your CFO happy. Bring your Azure spend down before it becomes a problem!


New ParkMyCloud Plan Brings Free Cloud Cost Optimization to All

April 18, 2017 (Sterling, VA) – ParkMyCloud, the SaaS platform that helps enterprises optimize public cloud spend, has announced a new pricing model, including the core cost savings functionality available in a free plan. The free cloud cost optimization plan was created to enable developers and DevOps practitioners to save money on non-production resources that are not being used nights and weekends. Currently, these practitioners are using homegrown scheduling scripts, which although “notionally free” are actually a suboptimal approach and require valuable in-house resources to develop and maintain.

ParkMyCloud is taking this burden off the shoulders of developers by offering its core automated instance scheduling functionality to all users, with or without a paid subscription. The account setup process typically takes customers less than 15 minutes. Then users can quickly create automated scheduling policies, so that their non-production resources, such as development, staging, test, and QA, are turned off when they are not needed, such as nights and weekends. A schedule that runs these instances during an 8-hour workday, 5 days a week, and “parks” them the rest of the week, saves 65% of the cost of the instance – all with minimal setup and maintenance time required.

In the new pricing structure, advanced features such as SSO, multi-cloud support, API, audit logs, and reporting will require subscription to a paid tier. The premium features have been added to support companies who are scaling in the public cloud or who need higher levels of governance or security.

The new model gives all new users access to a 14-day trial that allows them to experience all of the premium features. At the end of those 14 days, users can choose to subscribe to the plan that meets their needs.

Customer feedback has been positive regarding the ease of use and savings achieved. “ParkMyCloud has been great for us to reduce AWS costs,” said DevOps Engineer Tosin Ojediran. “We’re better staying within budget now. ParkMyCloud actually really exceeded my expectations. We sent the savings numbers to our CTO, and he said, ‘wow, this is awesome.’ It’s easy to use, it does what it’s supposed to do.”

About ParkMyCloud

ParkMyCloud is a SaaS platform that helps enterprises optimize their public cloud spend by automatically reducing resource waste — think “Nest for the cloud”. ParkMyCloud has helped customers such as McDonald’s, Capital One, Unilever, Fox, Sage Software, and Infor dramatically cut their cloud bills by up to 65%, delivering millions of dollars in savings. For more information, visit http://www.parkmycloud.com.


“Is that old cloud instance running?” How visibility saves money in the cloud

“Is that old cloud instance running?”

Perhaps you’ve heard this around the office. It shouldn’t be too surprising: anyone who’s ever tried to load the Amazon EC2 console has quickly found how difficult it is to keep a handle on everything that is running.  Only one region gets displayed at a time, which makes it common for admins to be surprised when the bill comes at the end of the month.  In today’s distributed world, it not only makes sense for different instances to be running in different geographical regions, but it’s encouraged from an availability perspective.

On top of this multi-region setup, many organizations are moving to a multi-cloud strategy as well.  Many executives are stressing to their operations teams that it’s important to run systems in both Azure and AWS.  This provides extreme levels of reliability, but also complicates the day-to-day management of cloud instances.

So is that old cloud instance running?

You may get a chuckle out of the idea that IT administrators can lose servers, but it happens more frequently than we like to admit.  If you only ever log in to us-east-1, then you might forget that your dev team that lives in San Francisco was using us-west-2 as their main development environment. Or perhaps you set up a second cloud environment to make sure your apps all work properly, but forgot to shut it down prior to going back to your main cloud.
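
A quick way to answer that question by hand is to sweep every region in the account. This is only a sketch of the idea (it assumes boto3 and configured AWS credentials), and it is exactly the kind of legwork a single-view dashboard does for you.

    import boto3

    def count_running_instances_everywhere():
        """Print how many EC2 instances are running in each region of the account."""
        regions = [r["RegionName"] for r in boto3.client("ec2").describe_regions()["Regions"]]
        for region in regions:
            ec2 = boto3.client("ec2", region_name=region)
            reservations = ec2.describe_instances(
                Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
            )["Reservations"]
            count = sum(len(r["Instances"]) for r in reservations)
            if count:
                print(f"{region}: {count} running instance(s)")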

That’s where a single-view dashboard (like the view you get with ParkMyCloud) can provide administrators with unprecedented visibility into their cloud accounts.  This is a huge benefit that leads to cost savings right off the bat, as the cloud servers running that you forgot about or thought you turned off can be seen in a single pane of glass. Knowledge is power: now that you know it exists, you can turn it off. You also get an easy view into how your environment changes over time, so you’ll be aware if instances get spun up in various regions.

This level of visibility also has a freeing effect, as it can lead you to utilizing more regions without fear of losing instances.  Many folks know they should be distributed geographically, but don’t want to deal with the headache of keeping track of the sprawl.  By tracking all of your regions and accounts in one easy-to-use view, you can start to fully benefit from cloud computing without wasting money on unused resources.

Now with ParkMyCloud’s core functionality available for free, it’s easy to get this single view of your AWS and Azure environments.  We think you’ll get a new perspective on your existing cloud infrastructure – and maybe you’ll find a few lost servers! Get started with the free version of ParkMyCloud.


New Free Tier of ParkMyCloud – Free Cloud Optimization for All, & No More Scripting

We’re announcing a new, free tier of ParkMyCloud! That’s right – you now have the option for free cloud optimization using ParkMyCloud – forever.

Why Script When ParkMyCloud is free?

We’ve created the free tier option to support our developer friends who, to save money on non-production resources that are not being used nights and weekends, are currently using the best option available: home grown scheduling scripts. Friends: we want to save you the trouble of scripting (it’s not the answer!). Now you can use ParkMyCloud to apply on/off schedules to your instances — for free, which means no approvals required. Plus with ParkMyCloud’s policy engine, you can have instances automatically set on schedules and assigned to teams. The full automation means no work for you to keep your instances’ up/down time optimized.

Premium Features

Of course, more advanced features, such as SSO, multi-cloud, API, audit logs, and reporting do require subscription to a paid tier. Many of our customers start with small public cloud environments and grow from there. The premium features have been added to support companies who are scaling in the public cloud or who need higher levels of governance or security. We aim to make these as cost-effective for users as possible – customers usually recoup their yearly fee in 6 weeks or less.

14-Day Trial of All Features

The new model gives all new users access to a free 14-day trial where they can experience all of the premium features. At the end of those 14 days, you can choose to subscribe to the plan that meets your needs. If you decide you don’t need the premium features, or you want to spend longer testing the tool, you have the option to move to the free tier. Just remember that if you do move to free you will lose those premium features.

Get Started with Free Cloud Optimization

Get started with your free 14-day trial here. (And if you like what you see, stay on for free as long as you like!)


Is your AWS bill too high this month?

Amazon Web Services (AWS) monthly bills start arriving in inboxes around the world about this time every month. When they do, there are two questions we like to ask AWS users.

One, did you look at your AWS bill?

For some readers, the idea that you might not is ridiculous. You may be surprised how many companies we’ve talked to where even key decision makers are unsure how much they are spending on cloud services. (Mature cloud users are more likely to worry about spend, as found by RightScale’s 2017 State of the Cloud Report, but that doesn’t mean that even those users have their eye on the bill each month.)

Okay, so let’s assume that you have looked at your AWS bill. Time for the second question. Was your AWS bill more than you expected this month?

For more and more cloud users, the answer is yes. Only 46% of enterprises monitor and rightsize cloud resources – which means 54% do nothing. Between resources left running when they’re not needed, incorrectly sized resources, and orphaned volumes, it’s easy for bills to climb out of control.
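
Orphaned volumes in particular are easy to surface: an EBS volume that isn’t attached to anything sits in the “available” state. A minimal sketch with boto3 (the region is a placeholder):

    import boto3

    def find_orphaned_volumes(region="us-east-1"):
        """List EBS volumes not attached to any instance (status 'available')."""
        ec2 = boto3.client("ec2", region_name=region)
        volumes = ec2.describe_volumes(
            Filters=[{"Name": "status", "Values": ["available"]}]
        )["Volumes"]
        for v in volumes:
            # Each of these is still billed per GB-month even though nothing uses it.
            print(v["VolumeId"], f'{v["Size"]} GiB', v["CreateTime"])
        return volumes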

We’ve written extensively about how to reduce cloud waste, whether you should build cost-reduction tools yourself, and how to control AWS spend. If that’s overwhelming, there’s one simple thing you can do to get started, and combat that sticker shock in time for your next AWS bill.

Your first step toward getting AWS bills in control is to schedule on/off times for your non-production resources, so you’re not wasting a single dollar on compute time you don’t need.

It’s easy – get started with a free trial of ParkMyCloud.


The Cloud Waste Problem That’s Killing Your Business (and What To Do About It)

Waste not, want not. That was one of the well-worn quips of one of the United States’ Founding Fathers, Benjamin Franklin. It couldn’t be more timely advice in today’s cloud computing world – the world of cloud waste. (When he was experimenting with static electricity and lightning, I wonder if he saw the future of Cloud? :^) )

Organizations are moving to the Cloud in droves. And why not? The shift from CapEx to monthly OpEx, the elasticity, the reduced deployment times and faster time-to-market: what’s not to love?

The good news: the public cloud providers have made it easy to deploy their services. The bad news: the public cloud providers have made it easy to deploy their services…really easy.  

And, experience over the past decade has shown that leads to cloud waste. What is “cloud waste” and where does it come from? What are the consequences? What can you do to reduce it?

What is Cloud Waste?

“Cloud waste” occurs when you consume more cloud resources than you actually need to run your business.

It takes several forms:

  • Resources left running 24×7 in development, test, demo, and training environments where they don’t need to be running 24×7.  (Thoughts of parents yelling at children to “turn the lights out” if they are the last one in a room.) I believe this is a bad habit that was reinforced by the previous era of on-premise data centers. The thinking: It’s a sunk cost anyway, why bother turning it off?  Of course, it’s not a sunk cost anymore.

This manifests itself in various ways:

    • Instances or VMs which are left running, chewing up $/CPU-Hr costs and network charges
    • Orphaned volumes (volumes not attached to any servers), which are not being used and incurring monthly $/GB charges
    • Old snapshots of those or other volumes
    • Old, out-of-date machine images

However, cloud consumers are not the only ones to blame. The public cloud providers are also responsible when it comes to their PaaS (platform as a service) offerings for which there is no OFF switch (e.g., AWS’ RDS, Redshift, DynamoDB and others). If you deliver a PaaS offering, make sure it has an OFF switch.

  • Resources that are larger than needed to do the job. Many developers don’t know what size instance to spin up to do their development work, so they will often spin up larger ones. (Hey, if 1 core and 4 GB of RAM is good, then 16 cores and 64 GB of RAM must be even better, right?) I think this habit also arose in the previous era of on-premise data centers: “We already paid for all this capacity anyway, so why not use it?” (Wrong again.)

This, too, rears its ugly head in several ways:

    • Instances or VMs which are much larger than they need to be
    • Block volumes which are larger than they need to be
    • Databases which are way over-provisioned compared to their actual IOPS or sequential throughput requirements.

Who is Affected by Cloud Waste?

The consequences of cloud waste are quite apparent. It is killing everyone’s business bottom line. For consumers, it erodes their return on assets, return on equity and net revenue.  All of these ultimately impact earnings per share for their investors as well.

Believe it or not, it also hurts the public cloud providers and their bottom line.  Public cloud providers are most profitable when they can oversubscribe their data centers. Cloud waste forces them to build more very expensive data centers than they need to, killing their oversubscription rates and hurting their profitability as well. This is why you see cloud providers offering certain types of cost cutting solutions. For example, AWS offers Reserved Instances, where you can pay up front for a break on on-demand pricing. They also offer Spot Instances, Auto-Scaling Groups and Lambda.  Azure also offers price breaks to their ELA customers, as well as Scale Sets (the equivalent of ASGs).

How to Prevent Cloud Waste

So, what can you do to address this? Ultimately, the solution to this problem exists between your ears. Most of it is common sense: It requires rethinking… rewiring your brain to look at cloud computing in a different way. We all need to become honorary Scotsmen (short arms and deep pockets… with apologies to my Scottish friends).

  • When you turn on resources in non-production environments, turn on the minimum size needed to get the job done and only grudgingly move up to the next size.
  • Turn stuff off in non-production environments, when you are not using it. And for Pete’s sake, when it comes to compute time, don’t waste your time and money writing your own scripts…that just exacerbates the waste. Those DevOps people should spend that time on your bread and butter applications. Use ParkMyCloud instead! (Okay, yes, that was a shameless plug, but it is true.)
  • Clean up old volumes, snapshots and machine images.
  • Buy Reserved Instances for your production environments, but make sure you manage them closely, so that they actually match what your users are provisioning, otherwise you could be double paying.
  • Investigate Spot fleets for your production batch workloads that run at night. It could save you a bundle.

These good habits, over time, can benefit everyone economically: Cloud consumers and cloud producers alike.  


Amazon uses robots for mundane tasks. Do the same and automate tasks in AWS

I’m a fan of automation – as a CEO, I think you should do everything possible to simplify your day-to-day, whether that means you overhaul your calendar system or automate tasks in AWS.

Amazon itself is great at this. I’m sure you are well aware of Amazon’s quest for automation in their warehouses (robots) and distribution (drones) to reduce costs and deliver packages faster. (That’s a great goal, by the way – I am an Amazon Prime customer – love it.) If you’re interested in Amazon’s robots, check out this article from Business Insider. And as the use of drones to deliver product becomes reality, Amazon has created a service – check out Amazon Prime Air if you haven’t already.

So let’s look at another branch of Amazon – Amazon Web Services (AWS). AWS is a large provider of cloud, which in and of itself is actually a utility that can be used on demand to provide compute, database, and storage services to small and large companies alike. It’s just like the way utility companies provide electricity, water and heat to homes and businesses. Over time, features and services have been built and sold to optimize these traditional utilities in order to simplify mundane tasks via automation and achieve ROI by saving more money than they cost. Here are a few examples:

  • Nest to detect, learn, and automate programmable thermostats to save on heating and cooling
  • In-office motion sensors to detect movement to turn lights off/on
  • Motion sensors on faucets and hand dryers to eliminate water and electric waste
  • Gadgets on showers, toilets, etc. to reduce water consumption and waste

The point is, all of these utilities need to be optimized with 3rd party technologies to automate and reduce waste, and to optimize spend. At home, you’re the CFO (or maybe your spouse is 🙂) but you don’t want to spend more than you need to, and you will buy technology to automate mundane tasks and save yourself money if there is a tangible ROI.

Amazon is doing this with robots and drones – neither of which they built. So why not use 3rd party technology to automate for AWS, Azure, and Google Compute Engine? Remember that the public cloud is a utility, and utilities have waste. In public cloud, what can we look at that’s mundane, that can be automated, and where you can save dollars on cloud waste?

I have a simple one – automate tasks in AWS by turning servers off and on. Did you know that on average, 66% of what you spend on the public cloud is on compute (servers), and 45% of that is on non-production systems like development, test, and QA – servers that don’t need to run 24×7? That’s $6B in waste per year.

Even better than Nest, which you can install and set up in 30 minutes, ParkMyCloud can be set up and configured in 15 minutes or less. The next day, we will tell you how much you saved in the previous 24 hours by simply automating the mundane task of turning idle servers on and off.

There’s a reason Amazon is so successful – they automate mundane tasks in a simple, efficient way. Follow their lead – automate today!


An Open Letter to Snap: We can save you $80 million on your cloud bills

Dear Evan Spiegel, Bobby Murphy, and whoever manages Snap’s cloud infrastructure,

We have a proposition for you. We can save you $80 million on your cloud bills.

See, when you filed for IPO a few weeks back, one snippet of information that caught our eye was your use of public cloud – specifically your $400-million-annual four-year public cloud deal with Google Cloud Platform. We never doubted that your cloud infrastructure would be huge. After all, we saw Netflix’s cloud spend rise to some $800 million a year in 2016 after they completed a near-total migration to AWS. These huge infrastructures are the most important to optimize – particularly as you grow.

As cloud waste reduction engineers here at ParkMyCloud, we are passionate about automating that optimization – and doing so quickly and simply. Obviously, in your case, you will continue to scale your infrastructure to deal with the exponential customer growth and daily peaks in usage (and by the way, my kids love Snapchat and I had to get an unlimited data plan – thanks!).  Based on our analysis, we know that the largest item on public cloud customers’ monthly bills is compute instances/VMs (typically about 70% of a cloud bill). Research has shown that industry-wide, the share of non-production resources in this infrastructure is about 44%. These non-production instances are the number one place to start hunting for optimization opportunities.

Your rate of innovation is certainly impressive and we see that you spent some $185MM last year on R&D. We would be willing to bet that although your teams are incredible, in the haste to deliver a better and better product, your cloud waste is likely enormous. Based on what we have seen in similar set-ups, we think we can save close to half this spend by simply automating the turning off of your cloud instances when not being used.

Here’s what we propose: you need to put parking schedules in place on your non-production instances.  Snap, you need to ensure that your public cloud resources are only used when needed and turned off when not. Based on this alone, we typically see our customers saving upwards of 65% off their compute spend. If you add optimization approaches that address industry rates of over-provisioning of compute instances (55%) and large-scale inventory waste (15%) – that is, spend on resources that are no longer required – you will save even more on non-production (dev, test, QA, etc.) workloads.

So when we see huge monthly spend numbers like yours, what gets us excited is thinking about just how big your savings could be. And the truly wonderful thing with these types of savings is that everyone’s a winner – the DevOps team wins as they help the enterprise deliver more for less, and the shareholders benefit from reduced OpEx and increased profits. (You probably don’t care, but your cloud providers will also benefit as they can better utilize their own datacenters.)

So Evan Spiegel, Bobby Murphy, and the rest of the team – shoot us a note.  We are happy to talk whenever you are.

Cheers,

Jay Chapel & the ParkMyCloud Team


Does your enterprise need to hire a Cloud Financial Administrator?

A recent report from 451 Research analyst William Fellows caught our eye – Now hiring: Cloud financial administrators. The report – which you can download here on our website – discusses a trending new role in enterprises as they seek to keep cloud costs in check.

Why would you need a cloud financial administrator?

The complexity of the cloud infrastructure space, plus the increasing costs in public cloud as enterprises grow, leaves many enterprises unprepared to manage the financial aspects of their cloud usage. IT, Operations, and Development managers and directors already have too much on their plates to add the entire responsibility category of managing cloud finances – and therefore, some are turning to creating a new role solely for this purpose.

Here at ParkMyCloud, we’ve had similar observations. Just take a look at our case studies to see stories of DevOps Engineers, Operations Engineers, and Directors of Infrastructure Services who were unprepared to take on the additional burden of cost optimization single-handedly. Or so they thought.

Is it worth the expense?

As mentioned in the report, the salary of any cost-savings hire should, of course, be lower than the amount that person saves the organization. If your enterprise’s cloud infrastructure is extensive enough to cover a CFA’s salary, it may seem like a cost-effective decision.

However, this can be done in a simpler and more cost-effective manner by introducing cost-optimization tools. The key is to ensure that they are simple to use, and won’t take much time for users to implement and maintain.

Our customers have found ParkMyCloud to fit this need. It takes just minutes to set up and provides savings in the hundreds of thousands and more.

So do you need a cloud financial administrator? Well, it’s up to you. But we don’t think so.
