Top Cloud Finance Questions from CFOs

Taking control of your cloud finances is more important than ever: there is no room for wasted spend. More organizations are shifting to cloud-based infrastructure – according to Gartner’s forecast last year, worldwide public cloud revenue is expected to grow 17.3 percent in 2019. While this is good news for technology innovation, from the finance side of the table, elastic infrastructure poses a challenge. CFOs need to ensure that IT and development departments are optimizing spend even while encouraging innovation and growth.

The Challenge When it Comes to Cloud Finance

Finance departments continue to search for capital optimization: lowering costs while prioritizing business models that can transform and expand worldwide with flexibility. With that flexibility, though, comes complexity that is difficult to manage, deploy, and – most frustrating of all – forecast.

With rapid growth comes rapid responsibility. If an organization is not cautious, cloud spending can spiral out of control, and using the cloud might start to seem counterproductive. Finance and IT departments must come together and work toward shared business goals, bridging the disconnect so that cost control remains an actionable, executable plan rather than a one-off project.

Smart Questions CFOs Should Be Asking

With the struggle to control cloud spend, CFOs need to address cloud finance questions and understand their impact on operations. After all, most organizations cite lowering costs as one of their primary reasons for moving to the cloud. To make sure that finance teams and IT departments are on the same page, here are three smart cloud finance questions CFOs should ask.

1. Are we thinking about the cloud cost model correctly?  

Out of habit from the on-premises mindset, many organizations moving to the cloud purchase far more capacity than they actually need. The major benefits of moving to the cloud are flexibility – the ability to consume resources based on your real-time needs – and capacity that, in theory, matches what a physical on-site data center would provide. In practice, though, the majority of companies overspend on cloud resources that sit unused much, or even all, of the time.

So, when CFOs talk to their IT counterparts about cloud spending, they need to ensure that everyone is now in an OpEx mindset, rather than the on-prem model of CapEx.

2. Are we wasting cloud spend?

The answer is most likely yes. To explain why, we need to look at the factors that contribute to this waste. A huge contributing factor is idle resources. The cloud runs 24/7, but most non-production resources used for development, testing, staging, and QA are only needed during the work week. If your teams only need those resources during working hours, you are paying for them to sit idle the rest of the time. Assuming a twelve-hour workday window five days a week, that means roughly 65% of the hours you’re paying for, the resources sit idle.
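
To make that math concrete, here is a quick back-of-the-envelope sketch in Python (the 12-hour, 5-day window is the same example assumption as above):

```python
# Back-of-the-envelope idle-time calculation.
# Assumption: non-production resources are only needed during
# a 12-hour workday window, 5 days a week.
HOURS_PER_WEEK = 24 * 7      # 168 hours billed by the cloud provider
WORKING_HOURS = 12 * 5       # 60 hours the resources are actually needed

idle_hours = HOURS_PER_WEEK - WORKING_HOURS
idle_pct = idle_hours / HOURS_PER_WEEK * 100

print(f"Idle: {idle_hours} of {HOURS_PER_WEEK} hours ({idle_pct:.1f}%)")
# -> Idle: 108 of 168 hours (64.3%), roughly the 65% figure above
```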

Another contributing factor is oversized resources. We recently found that the average CPU usage of resources managed in our platform is only 4.9%. That points to a trend of massive underutilization when resources can easily be sized down for 50-70% cost savings.

3. What steps are we taking to control and reduce cloud spend?

IT and development departments will be focused on growth, so it’s often the role of Finance to ensure that these teams are putting cost control measures in place for public cloud. Ensure that your technical departments have an actionable – preferably automated – plan in place to combat wasted cloud spend. Ask for reports broken down by project or team over time, and research cloud optimization platforms that the technical teams can take advantage of. Using a cloud optimization platform with automation and analytics will help you discover cost-savings opportunities and enable more efficient workflows between departments.
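
For example, the kind of per-team report described above can be pulled directly from AWS Cost Explorer. Below is a minimal sketch using Python and boto3; the “team” cost-allocation tag and the date range are assumptions for illustration, not a prescribed setup:

```python
import boto3

# Assumes spend is tagged with a "team" cost allocation tag (example name).
ce = boto3.client("ce")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2019-01-01", "End": "2019-02-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "TAG", "Key": "team"}],
)

for period in response["ResultsByTime"]:
    for group in period["Groups"]:
        tag_value = group["Keys"][0]                      # e.g. "team$payments"
        cost = group["Metrics"]["UnblendedCost"]["Amount"]
        print(f"{tag_value}: ${float(cost):,.2f}")
```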

The Bottom Line

Finance departments can push the cloud conversation toward resource optimization, ensuring that IT departments are both innovative and within budget. Create a competitive cloud finance strategy that includes visibility, flexibility, and governance so the business can function effectively across departments. This will improve ROI and reporting and, ultimately, drive the implementation of better solutions to thrive in the cloud.

15 AWS Best Practices for 2019

There are a ton of great blogs that cover AWS best practices and use cases. To provide a little more insight into the latest guidance, we put together 15 best practices published since the beginning of 2019, drawing on tips and quotes from different experts.

1. Take Advantage of AWS Free Online Training Resources

“There’s no shortage of good information on the internet on how to use Amazon Web Services (AWS). Whether you’re looking for ways to supplement your certification study efforts or just want to know what the heck it’s all about, check out this compilation of free training and resources on all things AWS.”

2. Keep Up With Instance Updates So You Can Periodically Make Changes to Costs and Uses

“AWS expands its choices regularly, so you need to dynamically re-evaluate as your business evolves. The cloud presents many arbitrage opportunities including instance families, generations, types, and regions—but trying to do this manually is a recipe for time-consuming frustration. Don’t fall victim to Instance Inertia: even though the process of making a change is simple enough, it can be difficult to accomplish without having any conclusive evidence of either cost gains or performance improvements.”

3. Limit Access by Assigning User Permissions

“Your configuration of IAM, like any user permission system, should comply with the principle of “least privilege.” That means any user or group should only have the permissions required to perform their job, and no more.”
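
As a hedged illustration of what least privilege can look like in practice (this example is not from the quoted source; the bucket and policy names are placeholders), a narrowly scoped policy could be created with boto3 like so:

```python
import json
import boto3

iam = boto3.client("iam")

# Grant read-only access to a single bucket -- nothing more.
# Bucket and policy names are hypothetical examples.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::example-reports-bucket",
                "arn:aws:s3:::example-reports-bucket/*",
            ],
        }
    ],
}

iam.create_policy(
    PolicyName="reports-read-only",
    PolicyDocument=json.dumps(policy_document),
)
```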

4. Visibility Across Multiple Accounts in One Frame Helps Make More Informed Decisions

“Use a cloud security solution that provides visibility into the volume and types of resources (virtual machines, load balancers, security groups, users, etc.) across multiple cloud accounts and regions in a single pane of glass. Having visibility and an understanding of your environment enables you to implement more granular policies and reduce risk.”

5. Tag IAM Entities to Help Manage Access Granted to Resources Based on an Attribute  

“AWS has now added the ability to tag IAM users and roles, which eases management of IAM entities by enabling the delegation of tagging rights and enforcement of tagging schemes.”

“A primary use case for the new feature is to grant IAM principals access to AWS resources dynamically based on attributes. This can now be achieved by matching AWS resource tags with principal tags in a condition”

“As cloud deployments grow, teams deal with an increasing amount of resources that are constantly moving, growing, and changing. Projects may be shared between teams or customers and can rely on different regions and platforms. This makes it easy to lose track of what’s being used until the bill comes due. For tags to be actionable at scale, most teams require visibility of exactly which resources are at play at any given time, who is using them, and what they are being used for, and who is responsible for them. Essentially, the more high-quality information associated with a resource, the easier it becomes to manage.”

“Within each of these categories, you can then define your own tags that are specific to your organization for standardization”
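
To illustrate the attribute-based access pattern these quotes describe, here is a minimal policy sketch in which access is allowed only when the caller’s tag matches the resource’s tag; the `project` tag key is an assumed example:

```python
# Example policy document: allow starting/stopping EC2 instances only when
# the caller's "project" principal tag matches the instance's "project" tag.
# The tag key is a hypothetical example.
abac_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["ec2:StartInstances", "ec2:StopInstances"],
            "Resource": "arn:aws:ec2:*:*:instance/*",
            "Condition": {
                "StringEquals": {
                    "ec2:ResourceTag/project": "${aws:PrincipalTag/project}"
                }
            },
        }
    ],
}
```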

6. Creating a Start/Stop Schedule With an Instance Scheduler Will Help You Optimize Costs

“EC2 is a main compute service on AWS, they’re your (Windows and Linux) virtual machines. Running compute resources costs money, simple as that….”

“Paying only for the resources you actually need and use can save you a LOT of money.”
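
As a rough sketch of what a start/stop schedule does under the hood (a real scheduler also handles time zones, exceptions, and restarting instances in the morning), the snippet below stops running instances that carry an example `schedule: office-hours` tag; the tag name and value are assumptions for illustration:

```python
import boto3

ec2 = boto3.client("ec2")

# Find running instances tagged for an office-hours schedule
# (the "schedule" tag and its value are example assumptions).
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:schedule", "Values": ["office-hours"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [
    inst["InstanceId"]
    for res in reservations
    for inst in res["Instances"]
]

if instance_ids:
    # Stop them at the end of the workday; a matching job would
    # call start_instances() again in the morning.
    ec2.stop_instances(InstanceIds=instance_ids)
    print(f"Stopped {len(instance_ids)} instances: {instance_ids}")
```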

7. Decrease Errors and Streamline Your Deployments With An Automation Tool

“Whether you choose to use AWS CodeDeploy or a different tool, automating your software deployments helps you more consistently deploy an application across development, test, and production environments. The importance of automation in deployment in order to decrease errors and increase speed cannot be overstated.”

“Automate your deployment. This saves you from potentially costly and damaging human error. With the automation services available today, you have many options to customize every part of your deployment without letting automation fully take over if you prefer.”

8. Have a Reserved Instances Strategy

“Purchasing an RI is only the beginning; you should have a process in place to continuously monitor RI utilization and modify unused RIs (split/join or exchange convertible RIs) to maximize their usage. A common AWS billing model is a centralized account with consolidated billing, linked to autonomous accounts so individual accounts can purchase RIs based on their individual usage patterns.”
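
As one way to do the continuous monitoring the quote recommends, Cost Explorer exposes RI utilization programmatically. The sketch below (Python/boto3, with example dates) pulls the overall utilization percentage for a month:

```python
import boto3

ce = boto3.client("ce")

# Pull Reserved Instance utilization for January 2019 (example dates).
response = ce.get_reservation_utilization(
    TimePeriod={"Start": "2019-01-01", "End": "2019-02-01"},
    Granularity="MONTHLY",
)

total = response["Total"]
print(f"RI utilization: {total['UtilizationPercentage']}%")
print(f"Unused hours:   {total['UnusedHours']}")
```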

9. Account For the Capacity You Will Need So You Have a Size That Fits Your Environment

“We know that AWS EC2 instance types are sized and priced exponentially. With millions of sizing options and pricing points, choosing the wrong instance type can mean a major pricing premium—or worse, a substantial performance penalty! We see many organizations choose an instance type based on generic guidelines that do not take their specific requirements into account.”

“AWS offers a variety of types and sizes of EC2 instances. That means that it’s perfectly possible to select an instance type that’s too large for your actual needs, which means you’ll be paying more than necessary. In fact, the data shows that this is happening most of the time.”
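
Gathering the evidence for a rightsizing decision can be as simple as checking average CPU utilization in CloudWatch. Here is a minimal sketch; the instance ID, 14-day lookback, and 10% threshold are example assumptions, not fixed rules:

```python
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

instance_id = "i-0123456789abcdef0"   # hypothetical example instance
end = datetime.utcnow()
start = end - timedelta(days=14)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
    StartTime=start,
    EndTime=end,
    Period=3600,                      # hourly datapoints
    Statistics=["Average"],
)

datapoints = stats["Datapoints"]
if datapoints:
    avg_cpu = sum(dp["Average"] for dp in datapoints) / len(datapoints)
    print(f"Average CPU over 14 days: {avg_cpu:.1f}%")
    if avg_cpu < 10:
        print("Candidate for downsizing to a smaller instance type.")
```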

10. Save Your Team Time and Money with Serverless Management

“AWS data is housed in different regions all over the world. Its cloud-based system means you’re able to access your data in just a matter of minutes.”

“No more having to set up and maintain your own servers. That’s just more stress and money out of your pocket. Instead, you can leave it to the experts at AWS who will ensure the infrastructure your business is running efficiently.”

“The AWS Serverless Application Repository allows developers to deploy, publish, and share common serverless components among their teams and organizations. Its public library contains community-built, open-source, serverless components that are instantly searchable and deployable with customizable parameters and predefined licensing. They are built and published using the AWS Serverless Application Model (AWS SAM), the infrastructure as code, YAML language, used for templating AWS resources.”

11. Set up a Secure Multi-Account with AWS Landing Zone

“With the large number of design choices, setting up a multi-account environment can take a significant amount of time, involve the configuration of multiple accounts and services, and require a deep understanding of AWS services.

This solution can help save time by automating the set-up of an environment for running secure and scalable workloads while implementing an initial security baseline through the creation of core accounts and resources.”

12. Ensure Consistency in your Environment with Containers

“Containers offer a lightweight way to consistently port software environments for applications. This makes them a great resource for developers looking to improve infrastructure efficiency, becoming the new normal over virtual machines (VMs).”

“Containers share an operating system installed on the server and run as resource-isolated processes, ensuring quick, reliable, and consistent deployments, regardless of environment.”

13. Auto Scaling Groups

“Auto Scaling Groups can be used to control backend resources behind an ELB, provide self-replication (when the instance crashes, Auto Scaling Group will immediately provision a new one to maintain the desired capacity), simplify deployments (regular releases, blue/green deployments, etc.), and for many other use cases…..

The unnecessary spending on EC2 instances is usually caused by unused, or underused, compute resources, that increase your monthly bill. This is an age-old problem where you provision more than you need, to make sure you have enough to handle the expected, but also unexpected traffic. An Auto Scaling Group solves this issue by handling the scalability requirements for you.”
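
For reference, creating an Auto Scaling Group with a simple target-tracking policy might look like the sketch below; the launch template name, subnets, and capacity numbers are placeholder assumptions:

```python
import boto3

autoscaling = boto3.client("autoscaling")

# Names, subnets, and sizes below are hypothetical examples.
autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="web-asg",
    LaunchTemplate={"LaunchTemplateName": "web-template", "Version": "$Latest"},
    MinSize=2,
    MaxSize=6,
    DesiredCapacity=2,
    VPCZoneIdentifier="subnet-aaaa1111,subnet-bbbb2222",
)

# Scale on average CPU so capacity follows actual traffic.
autoscaling.put_scaling_policy(
    AutoScalingGroupName="web-asg",
    PolicyName="target-50-percent-cpu",
    PolicyType="TargetTrackingScaling",
    TargetTrackingConfiguration={
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "ASGAverageCPUUtilization"
        },
        "TargetValue": 50.0,
    },
)
```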

14. Automatically Backup Tasks

“AWS Backup performs automated backup tasks across an organization’s various assets stored in the AWS cloud, as well as on-premises. It provides a centralized environment, accessible through the AWS Management Console, for organizations to manage their overall backup strategies.

AWS Backup eliminates the need for organizations to custom-create their own backup scripts for individual AWS services, the company contends.”
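
To show what that centralized approach looks like programmatically, here is a minimal sketch of creating a backup plan with boto3; the plan name, vault, schedule, and retention period are example assumptions:

```python
import boto3

backup = boto3.client("backup")

# Plan name, vault, schedule, and retention are hypothetical examples.
plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "daily-backups",
        "Rules": [
            {
                "RuleName": "daily-5am-utc",
                "TargetBackupVaultName": "Default",
                "ScheduleExpression": "cron(0 5 * * ? *)",
                "Lifecycle": {"DeleteAfterDays": 35},
            }
        ],
    }
)

print(f"Created backup plan {plan['BackupPlanId']}")
```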

15. Use API Gateway to Manage APIs at Scale

“Capable of accepting and processing hundreds of thousands of concurrent API calls, API Gateway can manage such related tasks as: API version management; authorization and access control; traffic management and monitoring.”

Have any AWS best practices you’ve learned recently? Let us know in the comments below!

5 Priorities for the Cloud Center of Excellence

One of the terms we have been hearing more often when talking to prospects and customers alike is Cloud Center of Excellence (CCoE). DevOps, CloudOps, Infrastructure, and Finance teams are joining together to create a cloud center to improve cloud operations in the enterprise. The concept also goes by Cloud Command Center, Cloud Operations Center, Cloud Knowledge Center, or Cloud Enablement Team.

Essentially, a CCoE brings together a cross-functional team to manage cloud strategy, governance, and best practices, and serve as cloud leaders for the entire organization.

Who Needs a Cloud Center of Excellence?

When we talk to prospects and customers that have adopted a CCoE, there seem to be a couple of common themes:

  1.  Cloud-centric organizations where the DevOps, Security and Finance teams want to ensure that the organization’s diverse set of business units are using a common set of best practices, as no one wants the wild west for cloud management
  2.  Large organizations that are now multi-cloud and need to standardize on a set of tools and processes that work across the CSPs for security, governance, operations, and cost control
  3.  MSPs who are developing cloud centers focused on creating best practices for their customers, for both single and multi-cloud; for example, you would have an Azure Cloud Center of Excellence (ACCoE) or a Google Cloud Center of Excellence (GCCoE)

For more, see this presentation from Zendesk and CloudHealth from AWS re:Invent 2018 to understand how a large, cloud-centric organization leverages the CCoE concept to improve governance and operational efficiency.

What Should the Cloud Center of Excellence Prioritize?

No matter why you have established a cloud center within your organization, there are a few important priorities in order to make your effort a success:

  1. Interdepartmental Communication — the CCoE serves as a bridge between departments that use, measure, or fund cloud operations. All of these departments and stakeholders need to be on the same page about goals, timelines, and budgets for cloud operations, which is the entire idea of establishing a CCoE.
  2. Technology Expertise — as a resource and driver of innovation throughout the organization, it is imperative that the CCoE are the experts on the cloud technology used in the organization. Given the rate of innovation by the cloud providers, this requires dedicated time and effort.
  3. Governance — there are two major elements important for governance: authority and standardization. In order for the CCoE to be effective, it needs to be granted authority to set policies and standards for cloud security, compliance, and cost control — with the expectation that people throughout the organization will follow these policies. Once that authority is held, the CCoE needs to set, communicate, and enforce the policy standards as an initial priority.
  4. Repeatability and Automation — once policies are established, it’s time to make deployment processes repeatable with reference architectures, and to get tools and platforms in place for governance and cost control.
  5. End-User Buy In – we all know that if a developer doesn’t want to do something, it’s pretty likely they won’t do it. Developing a sense of engagement, if not outright excitement, is important for your new structure to succeed. Several of our customers with cloud centers regularly host tech talks, brown bag lunches, and other learning experiences to promote buy-in and adoption of tools and processes.

Call it What You Want: A Dedicated Effort is Key

Maybe Cloud Center of Excellence is too cheesy a phrase for your taste. What matters is cross-departmental collaboration and standardizing a plan for cloud migration, growth, and management.

Is your organization using a Cloud Center of Excellence model? How’s it going? We’d love to hear in the comments below!

Will Robotic Process Automation Save Your Company Time and Money in the Cloud?

We’ve been hearing buzz about a new concept in AI, robotic process automation. The promise of the technology is that it can automate processes that employees are doing manually, saving your employees’ time and potentially reducing operational costs. It fits right in with the current trends in cloud computing toward optimization. We’re all about saving time and money – so let’s take a look at this trend to see if it can help you do either of these things.

What is Robotic Process Automation?


Robotic process automation (RPA) is a way to automate business processes by creating software robots (“bots”) that perform manual, mundane work tasks. Users configure bots within an application to handle a variety of repetitive tasks by processing, generating, and communicating information automatically. For example, you might program RPA bots to handle first-level customer support by searching for answers, to copy and paste data from one system to another for invoicing or expense management, or to issue refunds. This video from IBM shows an example in action.

Furthermore, RPA tools can be trained to make judgments about future outputs. Many users appreciate its non-intrusive nature and the ability to integrate within infrastructures without causing disruption to systems already in place.

How can you use Robotic Process Automation?


Companies like Walmart, AT&T, and Walgreens are adopting the use of RPA. Clay Johnson, the CIO of Walmart, says they use RPA bots to automate pretty much anything from answering employee questions to retrieving useful information from audit documents. The CIO of American Express Global Business Travel, David Thompson, says they implement the use of RPA to automate the process for canceling an airline ticket and issuing refunds. In addition, Thompson is looking to use RPA to facilitate automatic rebooking recommendations, and to automate certain expense management tasks in the company.

More specific to cloud computing and IT, one great application for RPA is in automated software testing. If testing involves multiple applications and monotonous work, RPA can replace workers’ time spent testing. Additionally, RPA can be used to automate processes in monolithic legacy systems that are not worth developers’ time to update, to bring automation while work on newer microservices systems is in progress.

Is Robotic Process Automation the Best Way to Automate Cost Control?


A recent study found that not all automation is achievable with RPA: it concluded that only three percent of organizations have managed to scale RPA to a high level. Additionally, Gartner placed RPA tools at the “Peak of Inflated Expectations” in its Hype Cycle for artificial intelligence last year – another vote for more buzz than potential.

So can it save you time and money? If employees at your company are spending a large percentage of their time on repetitive tasks that require little to no decision making, then yes, it probably can. It’s also important to free up developer time that is spent on automatable tasks, like scripting, so they can focus on creating value for your business.

For complex and long-term automation, though, purpose-built software is a better solution. If there is already a solution to your automation needs on the market, it will probably serve you better than RPA: there is no upfront period needed to program bots, you won’t need to make the frequent process changes that many RPA bots require, and it will hold up better over the long run.

The Rise of the Enterprise Cloud Manager

There’s a growing job function among companies using public cloud: the Enterprise Cloud Manager. We recently did a study of ParkMyCloud users which showed that a growing proportion of them have “cloud” or the name of their cloud provider, such as “AWS”, in their job titles. This indicates a growing degree of specialization among the people who manage cloud infrastructure. In some companies, there is a dedicated role for cloud management – such as an Enterprise Cloud Manager.

Why would you need an Enterprise Cloud Manager?

The world of cloud management is constantly changing and becoming increasingly complex, which can make it confusing, expensive, and hard to control. If someone is not fully versed in this field, they may not always know how to handle problems related to governance, security, and cost control. It is important to dedicate resources in your organization to cloud management and related cloud job roles. This chart from Gartner gives us a look at all the things that are involved in cloud management so we can better understand how many parts need to come together for it to run smoothly.



Having a role in your organization that is dedicated to cloud management allows others, who are not specialized in that field, to focus on their jobs, while also centralizing responsibility.  With the help of an Enterprise Cloud Manager, responsibilities are delegated appropriately to ensure cloud environments are handled according to best practices in governance, security, and cost control.

The role of an Enterprise Cloud Manager is to oversee cloud operations. They know the ins and outs of cloud management, so they are able to create processes for provisioning resources and services. Their focus is on optimizing the infrastructure, which helps streamline cloud operations, improve productivity, and optimize cloud costs.

Automation Tools are Essential

With so much going on in this space, it isn’t possible to expect just one person or a team to manage all of this – you need automation tools. The great thing is that these tools work for companies of any size. Primary users can be people dedicated to this full time, such as an Enterprise Cloud Manager, as well as people managing cloud infrastructure on top of other responsibilities.

Why are these tools important? They provide two main things: visibility, and the ability to act on the recommendations that visibility produces. Customers that were once managing resources manually are now saving time and money by implementing automation tools. Take a look at the automation tools offered by your cloud vendor, as well as the third-party tools that are available. Setting up these tools will lessen the need for routine check-ins and maintenance while ensuring your infrastructure stays optimized.

Do I really need one?

If you want your organization to be well informed and up to date, then it is important that you have someone or something in place to oversee your cloud operations – an Enterprise Cloud Manager and automation tools.