Who Should Manage App Development Costs?

We speak with enterprises large and small about cloud cost optimization, and one of the dominant themes we have been hearing lately is: who should manage app development costs? Cloud operations teams (ITOps, DevOps, FinOps, Cloud Center of Excellence, etc.) are responsible for the management, governance, and optimization of an enterprise’s cloud resources, but they need the application owners or lines of business to take ownership of cost as well. It can’t only be the centralized cloud team that cares about cost. The people using cloud services every day for engineering, development, QA, and testing need to take the actions that optimize cloud costs, manage user governance, and keep security operations in check.

I liken this a bit to the response to the COVID-19 pandemic, the event that has defined 2020. The federal government can collect data from across the country, provide resources, and publish guidelines, but ultimately the state governments have to take the actions to close schools and non-essential businesses. Certain counties or jurisdictions within those states can even decide whether to adhere to the state guidelines, and there may be good reasons not to, based on local data or essential businesses. We see the same underlying process in enterprises when it comes to cloud cost optimization and management.

Let’s play this out.

Cloud spend has become the largest single IT cost outside of labor, and it is growing 10-15% month over month. So the CloudOps team is given a directive from Finance and/or IT management to find tools or solutions that identify cloud waste and control cloud spend, primarily in AWS, Azure, and Google Cloud.

Then, the CloudOps team researches both third-party and native cloud provider tools and finds a few important things:

  1. If the enterprise is multi-cloud, the native CSP tools are a non-starter
  2. Tools must be data-driven, so the recommendations to reduce the app development cost are believable and actually useful
  3. The tools must be self-service, i.e., the application owners or the lines of business need to be able to take the actions. Otherwise, they will deem CloudOps as being draconian (and push back because they know their app better … sounds like the States).

Next, CloudOps brings in a tool for a pilot. It starts small with a sandbox account, but as data and trust build, the pilot expands to include many of the AWS, Azure, and/or GCP accounts used by the application owners. CloudOps then identifies a “friendly” line of business whose app development cost owner is keen to identify waste, reduce costs, and increase cloud efficiency.

CloudOps and the cloud optimization vendor give the app owners a demo using their own data, showing them where they have waste: idle resources, over-provisioned resources, orphaned resources, resources that could leverage reservations, and so forth. The app owners are intrigued and keen to understand whether they are masters of their own domain.
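For a concrete sense of what “data-driven” waste detection looks like under the hood, here is a minimal sketch (not the vendor’s actual implementation) that flags running EC2 instances whose average CPU has stayed low over the past two weeks. The region and the 5% threshold are illustrative assumptions.

```python
# Minimal sketch: flag running EC2 instances with consistently low average CPU.
# Region and the 5% "idle" threshold are illustrative, not prescriptive.
from datetime import datetime, timedelta, timezone

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

two_weeks_ago = datetime.now(timezone.utc) - timedelta(days=14)

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance["InstanceId"]}],
            StartTime=two_weeks_ago,
            EndTime=datetime.now(timezone.utc),
            Period=86400,            # one datapoint per day
            Statistics=["Average"],
        )
        datapoints = stats["Datapoints"]
        if datapoints:
            avg_cpu = sum(p["Average"] for p in datapoints) / len(datapoints)
            if avg_cpu < 5.0:        # arbitrary "idle" threshold
                print(f"{instance['InstanceId']} looks idle (avg CPU {avg_cpu:.1f}%)")
```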

Common questions:

  • Where is this data coming from? Is it reliable?
  • Can we take our own actions? Is this self-service?
  • What about user governance? My QA team does not need to manage resources that belong to dev or staging.
  • Can we reject a recommendation because the app we are running requires that configuration?
  • Can we group resources into application stacks and manage them as a single entity?
  • Can we override an action?

To manage app development costs effectively, CloudOps needs to involve the owners and users of those applications and give them the data and tools to make decisions and take action. The cloud is self-service, so the tools you use to optimize and govern it need to be self-service as well, and they need to adapt to the needs of each business unit within your organization.

How to Use the New AWS Cost Categories for Better Cost Allocation

Last week, AWS announced the general release of AWS Cost Categories. Anyone involved in managing AWS costs within your organization should ensure they understand this new feature and start using it for better cost allocation and billing management. 

What are AWS cost categories?

AWS Cost Categories are now visible in the console on the billing dashboard.

AWS Cost Categories are a new way to create custom groupings of costs that go beyond tags, allowing you to manage costs according to your organizational structure, for example by application group, team, or location.

Cost categories allow you to write rules and create custom groups of billing line items, which you can then use in AWS Cost Explorer, AWS Budgets, the AWS Cost and Usage Report, etc. You can group by account, tag, service, and charge types. 

How are cost categories different from tags?

At first, this “category” structure may seem similar to the tagging structure you already use in Cost Explorer, but there are a few key differences. 

First, you can create logic that categorizes costs from specific services as belonging to the same team. For example, you might assign RDS resources to a category for the DBA team, Redshift to a FinOps category, or CodePipeline to a DevOps category. Categories also allow you to include costs that cannot be tagged, such as AWS support charges and some Reserved Instance charges.
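As a rough illustration of what such rules look like outside the console, here is a hedged sketch using the Cost Explorer API via boto3. The category name “OwningTeam” is made up, and the service name strings are the ones used in billing data, which may differ slightly in your account.

```python
# Sketch: define a cost category whose rules map billing services to teams.
# Category name and service strings are illustrative assumptions.
import boto3

ce = boto3.client("ce")  # Cost Explorer / Cost Categories API

response = ce.create_cost_category_definition(
    Name="OwningTeam",
    RuleVersion="CostCategoryExpression.v1",
    Rules=[
        {
            "Value": "DBA",
            "Rule": {"Dimensions": {"Key": "SERVICE",
                                    "Values": ["Amazon Relational Database Service"]}},
        },
        {
            "Value": "FinOps",
            "Rule": {"Dimensions": {"Key": "SERVICE",
                                    "Values": ["Amazon Redshift"]}},
        },
        {
            "Value": "DevOps",
            "Rule": {"Dimensions": {"Key": "SERVICE",
                                    "Values": ["AWS CodePipeline"]}},
        },
    ],
)
print(response["CostCategoryArn"])
```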

Why should you use AWS cost categories?

The ability to create your own categorization rules is what makes this new option powerful. You can do this through the Cost Categories rule builder, JSON editor, or API. The rule builder is straightforward and has some built-in logic options such as “is, is not, contains, starts with” and “ends with”. 

For organizations with many AWS accounts, grouping accounts into categories such as business units, products, applications, or production vs. non-production is helpful for allocating and evaluating costs.
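Once a category exists, you can report against it the same way you would a tag. The sketch below assumes a hypothetical cost category named “BusinessUnit” and pulls one month of unblended cost grouped by its values.

```python
# Sketch: query Cost Explorer grouped by a cost category.
# "BusinessUnit" is a hypothetical category name; dates are illustrative.
import boto3

ce = boto3.client("ce")

result = ce.get_cost_and_usage(
    TimePeriod={"Start": "2020-04-01", "End": "2020-05-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "COST_CATEGORY", "Key": "BusinessUnit"}],
)

for group in result["ResultsByTime"][0]["Groups"]:
    name = group["Keys"][0]
    amount = group["Metrics"]["UnblendedCost"]["Amount"]
    print(f"{name}: ${float(amount):,.2f}")
```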

Ensure costs are under control

Of course, whenever you take a closer look at your current costs, you’ll find areas that can be better optimized. Make sure you are only paying for the AWS resources you actually need, and schedule idle resources to turn off using ParkMyCloud – now supporting EKS and soon to support Redshift. 

22 Google Cloud Regions and 67 Zones Equals Endless Possibilities

Google Cloud serves users in 200+ countries and territories worldwide, and it’s up to you to pick which Google Cloud regions and zones your applications will live in. Google Cloud resources and services can be zonal, regional, or managed by Google across multiple regions. Here’s what you need to know about these geographic locations, along with some tips to help you pick the right one for your workloads.

What are Google Cloud Regions and How Many are There?

In Google Cloud, regions are independent geographic areas that are made up of one or more zones where users can host their resources. There are currently 22 regions around the world, scattered across North America, South America, Europe, Asia, and Australia. 

Since regions are independent geographic areas, spreading your resources and applications across different regions and zones provides isolation from hardware, software, and infrastructure failures. This failure independence means that the failure of one resource will not affect resources in other regions and zones.

Within a region, you will find regional resources. Regional resources are resources that are redundantly deployed across all the zones within a region, giving them higher availability. 

Here’s a look at the different region names and their region descriptions:

Region Name | Region Description
asia-east1 | Taiwan
asia-east2 | Hong Kong
asia-northeast1 | Tokyo
asia-northeast2 | Osaka
asia-northeast3 | Seoul
asia-south1 | Mumbai
asia-southeast1 | Singapore
australia-southeast1 | Sydney
europe-north1 | Finland
europe-west1 | Belgium
europe-west2 | London
europe-west3 | Frankfurt
europe-west4 | Netherlands
europe-west6 | Zürich
northamerica-northeast1 | Montréal
southamerica-east1 | São Paulo
us-central1 | Iowa
us-east1 | South Carolina
us-east4 | Northern Virginia
us-west1 | Oregon
us-west2 | Los Angeles
us-west3 | Salt Lake City

What are Google Cloud Zones and How Many are There?

Zones are isolated deployment areas for your resources within a region, and each zone should be treated as a single failure domain. To deploy fault-tolerant applications with high availability and protect against unexpected failures, deploy your applications across multiple zones in a region. Around the world, there are currently 67 zones.

Zones have high-bandwidth, low-latency network connections to other zones in the same region. As a best practice, Google recommends deploying applications across multiple zones, and across multiple regions where possible, to build highly available, fault-tolerant applications that can withstand unexpected component failures. Zonal resources operate within a single zone; if that zone becomes unavailable, all of its zonal resources are unavailable until service is restored.
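If you want to check programmatically which zones a region offers before designing a multi-zone deployment, a minimal sketch using the google-cloud-compute client library might look like the following; “my-project” is a placeholder project ID.

```python
# Sketch: list the zones visible to a project, grouped by region,
# to confirm a region has enough zones for a multi-zone deployment.
# "my-project" is a placeholder project ID.
from collections import defaultdict

from google.cloud import compute_v1

zones_client = compute_v1.ZonesClient()
zones_by_region = defaultdict(list)

for zone in zones_client.list(project="my-project"):
    # zone.region is a full resource URL; keep only the trailing region name
    region_name = zone.region.rsplit("/", 1)[-1]
    zones_by_region[region_name].append(zone.name)

for region_name, zone_names in sorted(zones_by_region.items()):
    print(f"{region_name}: {', '.join(sorted(zone_names))}")
```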

Here’s a closer look at the available zones, broken down by region.

Region Name | Region Description | Zones
asia-east1 | Taiwan | asia-east1-a, asia-east1-b, asia-east1-c
asia-east2 | Hong Kong | asia-east2-a, asia-east2-b, asia-east2-c
asia-northeast1 | Tokyo | asia-northeast1-a, asia-northeast1-b, asia-northeast1-c
asia-northeast2 | Osaka | asia-northeast2-a, asia-northeast2-b, asia-northeast2-c
asia-northeast3 | Seoul | asia-northeast3-a, asia-northeast3-b, asia-northeast3-c
asia-south1 | Mumbai | asia-south1-a, asia-south1-b, asia-south1-c
asia-southeast1 | Singapore | asia-southeast1-a, asia-southeast1-b, asia-southeast1-c
australia-southeast1 | Sydney | australia-southeast1-a, australia-southeast1-b, australia-southeast1-c
europe-north1 | Finland | europe-north1-a, europe-north1-b, europe-north1-c
europe-west1 | Belgium | europe-west1-b, europe-west1-c, europe-west1-d
europe-west2 | London | europe-west2-a, europe-west2-b, europe-west2-c
europe-west3 | Frankfurt | europe-west3-a, europe-west3-b, europe-west3-c
europe-west4 | Netherlands | europe-west4-a, europe-west4-b, europe-west4-c
europe-west6 | Zürich | europe-west6-a, europe-west6-b, europe-west6-c
northamerica-northeast1 | Montréal | northamerica-northeast1-a, northamerica-northeast1-b, northamerica-northeast1-c
southamerica-east1 | Osasco (São Paulo) | southamerica-east1-a, southamerica-east1-b, southamerica-east1-c
us-central1 | Iowa | us-central1-a, us-central1-b, us-central1-c, us-central1-f
us-east1 | South Carolina | us-east1-b, us-east1-c, us-east1-d
us-east4 | Northern Virginia | us-east4-a, us-east4-b, us-east4-c
us-west1 | Oregon | us-west1-a, us-west1-b, us-west1-c
us-west2 | Los Angeles | us-west2-a, us-west2-b, us-west2-c
us-west3 | Salt Lake City | us-west3-a, us-west3-b, us-west3-c

Here are Some Things to Keep in Mind When Choosing a Region or Zone

Now that we know what regions and zones are, here are some things to be aware of when you are selecting which region or zone would be the best fit for your infrastructure. 

  • Distance – Choose zones based on the location of your customers and where your data is required to live. It makes sense to store your resources in zones that are closer to your point of service in order to keep network latency low.
  • Communication – Be mindful that communication across and within regions incurs different costs and happens at different speeds. Typically, communication within a region is cheaper than communication across different regions.
  • Redundant Systems – As mentioned above, Google recommends deploying fault-tolerant systems with high availability in case there are unexpected failures. Design any important systems with redundancy across multiple zones and, ideally, multiple regions to mitigate the impact if your instances experience an unexpected failure.
  • Resource Distribution – Zones are designed to be independent of one another, so if one zone fails or becomes unavailable, you can transfer traffic to another zone in the same region to keep your services running.
  • Cost – Always check the pricing to compare costs between regions.

What Sorts of Features are Defined by Region and Zone?

Each zone supports a combination of CPU platforms such as Sandy Bridge, Ivy Bridge, Haswell, Broadwell, Skylake, and Cascade Lake. When you create an instance in a zone, it uses the default processor supported in that zone unless you specify a different CPU platform.

For example, compare the features offered in the europe-west6 region and the us-east4-a zone to see the similarities and differences.

These are the features available in the zones of the europe-west6 region:

  • Available CPU Platforms
    • Intel Xeon (Skylake) (default)
  • N1 machine types with up to 96 vCPUs when using the Skylake platform
  • E2 machine types with up to 16 vCPUs and 128 GB of memory
  • Local SSDs 
  • Sole-tenant nodes

And in the us-east4-a Zone the features include:

  • Available CPU Platforms
    • Intel Xeon E5 v4 (Broadwell) (default)
    • Intel Xeon (Skylake)
  • N1 machine types with up to 96 vCPUs when using the Skylake platform
  • N2 machine types with up to 80 vCPUs and 640 GB of memory
  • E2 machine types with up to 16 vCPUs and 128 GB of memory
  • C2 machine types with up to 60 vCPUs and 240 GB of memory
  • M1 ultramem memory-optimized machine types with 160 vCPUs and 3.75 TB of memory
  • M2 ultramem memory-optimized machine types with 416 vCPUs and 11.5 TB of memory
  • Local SSDs
  • GPUs
  • Sole-tenant nodes

As you can see, the europe-west6 region doesn’t offer quite as many features as us-east4-a.
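Because feature availability varies by zone, it can be worth verifying what a zone offers before committing to it. Here is a small, hedged example using the google-cloud-compute library to list the machine types available in us-east4-a; “my-project” is again a placeholder project ID.

```python
# Sketch: list the machine types a specific zone actually offers.
# "my-project" is a placeholder project ID.
from google.cloud import compute_v1

machine_types_client = compute_v1.MachineTypesClient()

for machine_type in machine_types_client.list(project="my-project", zone="us-east4-a"):
    print(f"{machine_type.name}: {machine_type.guest_cpus} vCPUs, "
          f"{machine_type.memory_mb / 1024:.0f} GB")
```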

Multiregional Products

A handful of Google Cloud services are managed by Google to be redundant and distributed across and within regions. These services optimize performance, resource efficiency, and availability. However, they require a trade-off between the consistency and latency models.

Note: these trade-offs are documented on a product-specific basis.

A key feature of multiregional resources is that their data isn’t tied to a specific region and can therefore be moved between regions. There are seven multiregional products:

  • BigQuery
  • Cloud Storage
  • Cloud Spanner
  • Cloud Firestore
  • Container Registry
  • Cloud Key Management Service
  • Cloud EKM

Google Cloud’s expansion shows no sign of slowing down: the company continues to announce new regions and services to serve its customers worldwide and to advance its position in the public cloud market.

Further Reading:

5 Things You Need to Know About AWS Regions and Availability Zones

NEW in ParkMyCloud: Now Offering Azure AKS Cost Optimization!

We’re excited to share the latest in cost optimization for container services: ParkMyCloud now enables enterprises to optimize their Azure AKS (managed Azure Kubernetes Service) cloud costs. This is the second managed container service supported in the platform, following the announcement of scheduling support for Amazon EKS (managed Elastic Kubernetes Service) last month.

Why is Container Cost Optimization Essential?

As we continue to expand our container management offering, it’s essential to understand that container management, like cloud management more broadly, includes orchestration, security, monitoring, and, of course, optimization.

Containers provide opportunities for efficiency and more lightweight application development, but like any on-demand computing resource, they also leave the door open for wasted spend. If not managed properly, unused, idle, and otherwise suboptimal container resources will contribute billions more to the estimated $17.6 billion in wasted cloud spend expected this year alone.

AKS Scheduling in ParkMyCloud

The opportunities to save money through container optimization are in essence no different than for your non-containerized resources. ParkMyCloud analyzes resource utilization history and creates recommended schedules for compute, database and container resources, and programmatically schedules and resizes them, saving enterprises around the world tens of millions of dollars.

You can reduce your AKS costs by setting schedules for AKS nodes based on working hours and usage, and automatically assign those schedules using the platform’s policy engine and tags. Or, use ParkMyCloud’s schedule recommendations for your resources based on your utilization data. 
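For readers who want to see the underlying idea rather than the product itself, the sketch below is not how ParkMyCloud implements scheduling; it simply shows how an off-hours schedule could scale an AKS user node pool down and back up using the azure-mgmt-containerservice SDK (a recent version is assumed). The subscription ID, resource group, cluster, and node pool names are placeholders.

```python
# Sketch only (not ParkMyCloud's implementation): scale an AKS user node pool
# down for off-hours and back up for the workday. All names are placeholders.
# Note: user node pools can scale to zero; system node pools cannot.
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient

SUBSCRIPTION_ID = "00000000-0000-0000-0000-000000000000"  # placeholder
client = ContainerServiceClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

def scale_node_pool(resource_group: str, cluster: str, pool: str, count: int) -> None:
    """Set the node count on an AKS agent pool and wait for the operation to finish."""
    agent_pool = client.agent_pools.get(resource_group, cluster, pool)
    agent_pool.count = count
    client.agent_pools.begin_create_or_update(
        resource_group, cluster, pool, agent_pool
    ).result()

# Called from whatever scheduler you use (cron, an Azure Functions timer, etc.):
scale_node_pool("rg-dev", "aks-dev-cluster", "userpool", count=0)  # evenings/weekends
scale_node_pool("rg-dev", "aks-dev-cluster", "userpool", count=3)  # working hours
```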

Already a ParkMyCloud user? Log in to your account to optimize your AKS costs. Please note that you’ll have to update your Azure permissions; details are available in the release notes.

Not yet a ParkMyCloud user? Start a free trial to get started.

What’s Next for Container Optimization?

This is the second release for container optimization in ParkMyCloud, which already supports Amazon EKS (managed Elastic Kubernetes Service). Scheduling support for Amazon ECS, AWS Fargate, and Google Kubernetes Engine (GKE) will follow in the next few months, so stay tuned.

Questions? Feature requests? We’d love to hear them. Comment below or contact us directly.


6 Benefits To Adopting An AWS Multi-Account Strategy

Taking your organization into a full multi-cloud deployment can be a daunting task, but focusing on adopting just an AWS multi-account strategy can provide many benefits without a lot of extra effort. AWS makes it quite easy to create new accounts on a whim, and can simplify things with consolidated billing. Let’s take a look at why you might want to split your monolithic AWS account into micro accounts.

1. Logical Separation of Resources

There are a few options for separating your resources within a single AWS account, including tagging, isolated VPCs, or using different regions for different groups. However, these practices can still lead to extensive lists of resources within your account, making it hard to find what you need. By creating a new account for each project, business unit, or development stage, you can enforce a much better logical separation of your resources. You can still use separate VPCs or regions within an account, but you aren’t forced to do so.

2. Security and Governance

In addition to separation for logical purposes, multiple accounts can also help from a security perspective. For example, having a “production” account separate from a “development” account lets you give broader access to your developers and operations teams based on which account they need access to. AWS provides the IAM Access Analyzer tool, which can help you ensure proper security and roles for your users. And if you have ever had a developer hard-code account access information, separate accounts can help bring that to light (we have not had this happen at ParkMyCloud, but we have definitely seen it a couple of times over the years…).

3. Cost Allocation

In addition to tagging your systems for cost reporting, separation into different accounts can help with the chargeback and showback to your business units. Knowing which accounts are spending too much money can help you tweak your processes and find cloud waste. The AWS Cost and Usage Reports show exactly which account is associated with each expense.

4. Cost Savings Automation

You can apply cost savings automation at a granular level – but it’s easier if you don’t have to. For example, you should enforce schedules that automatically turn off resources outside of business hours. Some of our customers are eager to add their development-focused accounts to ParkMyCloud for scheduling automation, but are a bit leery of adding production accounts where someone might turn something off by accident. Automated scripts and platforms such as ParkMyCloud can be fully adopted on dev and sandbox accounts to streamline your continuous cost control, while automation around your production environment can be used to make sure everything is up and running. AWS IAM also lets you set different policies on different accounts, for example allowing scheduling and rightsizing automation in dev/test accounts but only manual rightsizing in production.
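As a hedged illustration of that last point, the sketch below attaches an inline IAM policy to a hypothetical automation role in a dev/test account, permitting the stop, start, and resize actions a scheduler or rightsizing tool would need; a production account would get a narrower variant. The role and policy names are illustrative.

```python
# Sketch: grant a dev/test automation role permission to stop, start, and
# resize instances. Role and policy names are hypothetical; production
# accounts would receive a narrower, read-mostly policy.
import json

import boto3

iam = boto3.client("iam")

dev_scheduling_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowSchedulingAndRightsizing",
            "Effect": "Allow",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances",
                "ec2:ModifyInstanceAttribute",   # used for instance type changes
                "ec2:DescribeInstances",
            ],
            "Resource": "*",
        }
    ],
}

iam.put_role_policy(
    RoleName="cost-automation-role",
    PolicyName="dev-scheduling-and-rightsizing",
    PolicyDocument=json.dumps(dev_scheduling_policy),
)
```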

5. Reserved Instances and Savings Plans

In an AWS environment where multiple accounts all roll up to an Organization account, Reserved Instances and Savings Plans can be shared across all the associated accounts. Say you buy an RI or Savings Plan in one account but end up not fully using it there: AWS will automatically apply that RI to any other account in the Organization that is running the right kind of system at the right time. A couple of our larger customers with mature cloud management practices take this a step further and manage all RI purchases from a dedicated “cloud management” account within the Organization. This lets them maintain a portfolio of RIs and Savings Plans (kind of like a stock market portfolio) designed to optimize spend across the entire company, and it limits commitments to RIs that might not be needed because another group on another account already has idle RIs. It also allows them to smooth out the purchase of expensive multi-year and all-upfront RIs and Savings Plans over time.
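To keep an eye on whether shared RIs are actually being consumed, you can query Cost Explorer from the management (payer) account. The following is a minimal sketch; the dates are illustrative.

```python
# Sketch: check organization-wide Reserved Instance utilization from the
# management (payer) account. Dates are illustrative.
import boto3

ce = boto3.client("ce")

response = ce.get_reservation_utilization(
    TimePeriod={"Start": "2020-04-01", "End": "2020-05-01"},
    Granularity="MONTHLY",
)

for period in response["UtilizationsByTime"]:
    total = period["Total"]
    print(
        f"{period['TimePeriod']['Start']}: "
        f"{total['UtilizationPercentage']}% utilized, "
        f"{total['UnusedHours']} unused hours"
    )
```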

6. Keeping Your Options Open

Even if you aren’t multi-cloud at the moment, you never know how your cloud strategy might evolve over the next few years. Separating into multiple AWS accounts keeps the option open for individual groups or applications to move to different cloud providers without disrupting other departments. This flexibility can also help your management feel at ease with choosing AWS, as they won’t feel as locked in as they otherwise might.

Get Started With An AWS Multi-Account Strategy

If you haven’t already started using multiple AWS accounts, Amazon provides a few different resources to help. One recent announcement was AWS Control Tower, which helps you deploy new accounts in an automated and repeatable fashion. This is a step beyond the AWS Landing Zone solution, which Amazon provided as an infrastructure-as-code deployment. Once you have more than one account, you’ll want to look into AWS Organizations to help with managing and grouping accounts and sharing reservations.
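For example, a minimal sketch of creating a member account programmatically with AWS Organizations might look like this; the email address and account name are placeholders, and Control Tower’s Account Factory wraps a similar workflow with added guardrails.

```python
# Sketch: add a member account under AWS Organizations.
# Email and account name are placeholders.
import boto3

orgs = boto3.client("organizations")

response = orgs.create_account(
    Email="dev-team-sandbox@example.com",
    AccountName="dev-team-sandbox",
)

# Account creation is asynchronous; poll the request status until it completes.
status_id = response["CreateAccountStatus"]["Id"]
status = orgs.describe_create_account_status(CreateAccountRequestId=status_id)
print(status["CreateAccountStatus"]["State"])  # IN_PROGRESS | SUCCEEDED | FAILED
```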

For maximum cost savings and cloud waste reduction, use ParkMyCloud to find and eliminate cloud waste – it’s fully multi-account aware, allowing you to see all of your accounts in a single pane of glass. Give it a try today and get recommended parking schedules across all of your AWS accounts.