How to Use the New AWS Cost Categories for Better Cost Allocation

Last week, AWS announced the general availability of AWS Cost Categories. Anyone involved in managing AWS costs within your organization should understand this new feature and start using it for better cost allocation and billing management.

What are AWS cost categories?

AWS Cost Categories are now visible in the console on the billing dashboard.

AWS Cost Categories are a new way to create custom groups that transcend tags and allow you to better manage costs according to your organizational structure – for example, by application group, team, or location.

Cost categories allow you to write rules and create custom groups of billing line items, which you can then use in AWS Cost Explorer, AWS Budgets, the AWS Cost and Usage Report, etc. You can group by account, tag, service, and charge types. 

How are cost categories different from tags?

At first, this “category” structure may seem similar to the tagging structure you already use in Cost Explorer, but there are a few key differences. 

First, you can create logic to categorize costs from specific services so that they all belong to the same team. For example, you might have RDS resources belong to a category for the DBA team, Redshift belong to a FinOps category, or CodePipeline belong to a DevOps category. Categories also allow you to include costs that can’t be tagged, such as AWS support costs and some Reserved Instance charges.

Why should you use AWS cost categories?

The ability to create your own categorization rules is what makes this new option powerful. You can do this through the Cost Categories rule builder, JSON editor, or API. The rule builder is straightforward and has some built-in logic options such as “is, is not, contains, starts with” and “ends with”. 
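
For illustration, here’s a minimal sketch of defining the same kind of rules through the API with boto3’s Cost Explorer client. The category name, group values, and service strings below are placeholder assumptions, not a prescription:

```python
# A minimal sketch of creating a Cost Category via the API using boto3.
# The category name and service values are illustrative placeholders.
import boto3

ce = boto3.client("ce")  # the Cost Explorer client hosts the Cost Categories API

response = ce.create_cost_category_definition(
    Name="Team",
    RuleVersion="CostCategoryExpression.v1",
    Rules=[
        {
            "Value": "DBA Team",  # line items matching this rule land in this group
            "Rule": {
                "Dimensions": {
                    "Key": "SERVICE",
                    "Values": ["Amazon Relational Database Service"],
                    "MatchOptions": ["EQUALS"],  # the rule builder's "is"
                }
            },
        },
        {
            "Value": "DevOps",
            "Rule": {
                "Dimensions": {
                    "Key": "SERVICE",
                    "Values": ["CodePipeline"],
                    "MatchOptions": ["CONTAINS"],  # the rule builder's "contains"
                }
            },
        },
    ],
)
print(response["CostCategoryArn"])
```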

For organizations with many AWS accounts, grouping accounts into categories such as business units, products, applications, or production vs. non-production is helpful for allocating and evaluating costs.

Ensure costs are under control

Of course, whenever you take a closer look at your current costs, you’ll find areas that can be better optimized. Make sure you are only paying for the AWS resources you actually need, and schedule idle resources to turn off using ParkMyCloud – now supporting EKS and soon to support Redshift. 

NEW in ParkMyCloud: Now Offering Azure AKS Cost Optimization!

We’re excited to share the latest in cost optimization for container services: ParkMyCloud now enables enterprises to optimize their Azure AKS (managed Azure Kubernetes Service) cloud costs. This is the second managed container service supported in the platform, following the Amazon EKS (managed Elastic Kubernetes Service) scheduling support we announced last month.

Why is Container Cost Optimization Essential?

As we continue to expand our container management offering, it’s essential to understand that container management, like broader cloud management, includes orchestration, security, monitoring, and, of course, optimization.

Containers provide opportunities for efficiency and more lightweight application development, but like any on-demand computing resource, they also leave the door open for wasted spend. If not managed properly, unused, idle, and otherwise suboptimal containers will contribute billions more to the estimated $17.6 billion in wasted cloud spend expected this year alone.

AKS Scheduling in ParkMyCloud

The opportunities to save money through container optimization are in essence no different from those for your non-containerized resources. ParkMyCloud analyzes resource utilization history and creates recommended schedules for compute, database, and container resources, then programmatically schedules and resizes them – saving enterprises around the world tens of millions of dollars.

You can reduce your AKS costs by setting schedules for AKS nodes based on working hours and usage, and automatically assign those schedules using the platform’s policy engine and tags. Or, use ParkMyCloud’s schedule recommendations for your resources based on your utilization data. 
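
ParkMyCloud handles this for you, but for a sense of the kind of action involved, here’s a rough sketch that scales an AKS user node pool to zero outside working hours using the Azure Python SDK. The resource names are placeholders, and exact method names vary across SDK versions:

```python
# A rough sketch of "parking" AKS nodes: scale a user node pool to zero.
# Resource names are placeholders; method names vary by SDK version.
from azure.identity import DefaultAzureCredential
from azure.mgmt.containerservice import ContainerServiceClient

client = ContainerServiceClient(DefaultAzureCredential(), "<subscription-id>")

# Fetch the node pool, drop its count to zero, and push the update.
pool = client.agent_pools.get("my-rg", "my-aks-cluster", "userpool")
pool.count = 0  # stop paying for these nodes until the schedule restores them
client.agent_pools.begin_create_or_update(
    "my-rg", "my-aks-cluster", "userpool", pool
).result()
```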

Already a ParkMyCloud user? Log in to your account to optimize your AKS costs. Please note that you’ll have to update your Azure permissions – details are available in the release notes.

Not yet a ParkMyCloud user? Start a free trial to get started.

What’s Next for Container Optimization?

This is the second release for container optimization in ParkMyCloud. The platform already offers support for Amazon EKS (managed Elastic Kubernetes Service). Scheduling support for Amazon ECS, AWS Fargate, and Google Kubernetes Engine (GKE) will arrive in the next few months, so stay tuned.

Questions? Feature requests? We’d love to hear them. Comment below or contact us directly.

If You Just Do One Thing Today, Run the AWS IAM Access Analyzer

When it was announced in December last year, AWS called the AWS IAM Access Analyzer “the sort of thing that will improve security for just about everyone that builds on AWS.” Last week, it was expanded to the AWS Organizations level. If you use AWS, use this tool to ensure your access is granted as intended across your accounts. 

“IAM” Having Problems

AWS provides robust security and user/role management, but that doesn’t mean you’re protected from the issues that can arise from improperly configured IAM access. Here are a few we’ve seen the most often.

Creating a user when it should have been a role. IAM roles and IAM users can both be assigned policies, but they are intended to be used differently. IAM users should correspond to specific human users, who can be assigned long-term credentials and directly interact with AWS services. IAM roles are sets of capabilities that can be assumed by other entities – for example, third-party software that interacts with your AWS account (hi! 👋). Check out this post for more about roles vs. users.

Assigning a pre-built policy vs. creating a custom policy. There are plenty of pre-built policies – here are a few dozen examples – but you can also create custom policies. The problems arise when, in a hurry to grant access to users, you grant more than necessary, leaving holes. For example, we’ve seen people get frustrated when their users don’t have access to a VM but little insight into why – while it could be that the VM has been terminated or moved to a region the user can’t view, an “easy fix” is to broaden that user’s access.

Leaving regions or resource types open. If an IAM role needs permission to spin EC2 instances up and down, you might grant full EC2 privileges. But if the users with that role only ever use us-east-1 and don’t look around the other regions (why would they?) or keep a close eye on their bill, they may have no idea that some bad actor is bitcoin mining in your account over in us-west-2.
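
One way to close that particular door is to restrict a role’s EC2 permissions to the regions it actually uses. Here’s a hedged sketch using the aws:RequestedRegion condition key, created via boto3 – the policy name and action list are illustrative, not a complete policy recommendation:

```python
# A minimal sketch of scoping EC2 permissions to a single region using the
# aws:RequestedRegion condition key. Policy name and actions are illustrative.
import json
import boto3

policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:StartInstances",
                "ec2:StopInstances",
                "ec2:DescribeInstances",
            ],
            "Resource": "*",
            # Deny-by-omission: requests outside us-east-1 won't match this Allow.
            "Condition": {"StringEquals": {"aws:RequestedRegion": "us-east-1"}},
        }
    ],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="ec2-start-stop-us-east-1-only",
    PolicyDocument=json.dumps(policy),
)
```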

Potential attackers need only an opportunity to get access to your account, and the impact could range from exposed customer data to ransomware to total resource deletion. So it’s important to know which IAM paths are open and whether they’re in use.

Enter the AWS IAM Access Analyzer

The IAM Access Analyzer uses “automated reasoning”, which is a type of mathematical logic, to review your IAM roles, S3 buckets, KMS keys, AWS Lambda functions, and Amazon SQS queues. It’s free to use and straightforward to set up.

Once you set up an analyzer, you will see a list of findings with items for you to review and either address or dismiss. With the expansion to the organizational level, you can establish your entire organization as a “zone of trust”, so that the issues identified are limited to resources accessible from outside the organization.

The Access Analyzer continuously monitors for new and updated policies, and you can manually re-run the analysis as well.
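
As a sketch, here’s roughly what creating an analyzer and pulling its findings looks like with boto3. The analyzer name is a placeholder, and you’d pass type="ORGANIZATION" instead for the organization level:

```python
# A short sketch of standing up an IAM Access Analyzer and listing its
# findings with boto3. The analyzer name is an arbitrary placeholder.
import boto3

aa = boto3.client("accessanalyzer")

# Use type="ORGANIZATION" to analyze across your whole AWS Organization.
analyzer = aa.create_analyzer(analyzerName="my-account-analyzer", type="ACCOUNT")

# Findings flag resources reachable from outside your zone of trust.
findings = aa.list_findings(analyzerArn=analyzer["arn"])
for finding in findings["findings"]:
    print(finding.get("resource"), finding["status"])
```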

3 Things to Go Do Now

If you had time to read this, you probably have time to go set up an analyzer:

  1. Create an AWS IAM Access Analyzer for your account or organization.
  2. Review your findings and address any potential issues.
  3. Check the access you’re granting to any third-party service. For example, ParkMyCloud requests only the minimum permissions needed to do its job. Are you assigning anyone the AWS-managed “ReadOnlyAccess” policy? If so, you are sharing far more than is likely needed.

Managed Kubernetes Pricing Comparison: EKS vs. AKS vs. GKE

As container adoption continues to grow, we thought it’d be interesting to take a look at the hosted Kubernetes pricing options from each of the big three cloud providers. The Kubernetes services across the cloud providers are Amazon Elastic Kubernetes Service (EKS), Azure Kubernetes Service (AKS), and Google Kubernetes Engine (GKE). In this blog, we’ll take a closer look at each offering, their similarities and differences, and their pricing. Note: all pricing data points are as of this writing in April 2020.

AWS Cloud Container Services (EKS)

Amazon Elastic Kubernetes Service (Amazon EKS) is AWS’s service for managing and deploying containers via the Kubernetes container orchestration system. Pricing is $0.10 per hour for each EKS cluster you create – you can use a single cluster to run multiple applications using Kubernetes namespaces and IAM security policies. You can run EKS on AWS using EC2 or Fargate, and you can also run it on-premises with AWS Outposts.

If you use EC2, you pay for the resources you create to run your Kubernetes worker nodes. This is on demand: you only pay for what you use, as you use it – per cluster and per underlying resource. If you choose to run your EKS clusters on Fargate, you remove the need to provision and manage servers. With Fargate, you specify and pay for resources per application – pricing is based on the vCPU and memory resources used from the time you start downloading your container image until the Amazon EKS pod terminates (minimum 1-minute charge).

EKS worker nodes are standard Amazon EC2 instances – you are billed for them based on normal EC2 prices.

At the $0.10 price per hour, you’d be spending $72 per month for a cluster running a full 30-day month. It’s important to note that this is just the cost to operate the cluster itself – you still have to pay for the compute costs on top of it (e.g., EC2 instance hours or Fargate compute resources).
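
As a back-of-the-envelope sketch, here’s how you might estimate the monthly management fee for the EKS clusters in one region with boto3 (compute costs, as noted, come on top):

```python
# Back-of-the-envelope sketch: estimate the monthly EKS management fee for
# the clusters in one region. Compute (EC2/Fargate) costs come on top.
import boto3

CLUSTER_FEE_PER_HOUR = 0.10  # per-cluster EKS management fee
HOURS_PER_MONTH = 720        # 30-day month, matching the $72 figure above

clusters = boto3.client("eks").list_clusters()["clusters"]
monthly_fee = len(clusters) * CLUSTER_FEE_PER_HOUR * HOURS_PER_MONTH
print(f"{len(clusters)} clusters -> ${monthly_fee:.2f}/month in management fees")
```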

Azure Cloud Container Services (AKS)

Azure Kubernetes Service (AKS) is Azure’s free, fully managed solution for managing and deploying containers via the Kubernetes container orchestration system. You pay only for the VM instances, storage, and networking resources used by the Kubernetes cluster, billed per second, with no additional charge.

How can you save on VM nodes using AKS? Use Reserved VM Instances. (Check out the 10 things you should know before purchasing an Azure Reserved VM Instance.) You can pay up front for either a 1- or 3-year term. For example, for a D4 v3 node instance, the pay-as-you-go price is $0.192 per hour, the 1-year commitment price is $0.1145, and the 3-year commitment price is $0.0738. So with a 1-year commitment you would see savings of about 40%, and with a 3-year commitment you’d see savings of about 62%.
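
Here’s a quick worked check of those savings figures:

```python
# Worked check of the savings figures above for a D4 v3 node.
pay_as_you_go = 0.192  # $/hour
one_year = 0.1145      # $/hour with a 1-year reservation
three_year = 0.0738    # $/hour with a 3-year reservation

for label, rate in [("1-year", one_year), ("3-year", three_year)]:
    savings = (1 - rate / pay_as_you_go) * 100
    print(f"{label} reservation: {savings:.0f}% savings")  # ~40% and ~62%
```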

Unlike AWS and GCP, Azure charges nothing for cluster management. Users pay only for the nodes that the containers run on.

Google Cloud Container Services (GKE)

Google Kubernetes Engine (GKE) is Google Cloud’s fully managed solution for managing and deploying containers via the Kubernetes container orchestration system. A GKE environment is made up of multiple machines grouped together to form a cluster – the foundation of GKE. A cluster consists of at least one cluster master and multiple worker machines called nodes.

Starting in June 2020, GKE will charge $0.10 per cluster per hour as a cluster management fee. This fee applies to clusters of any size (one zonal cluster per billing account is free). The fee will not apply to Anthos GKE clusters. 

How will this fee affect your bill? Even with the smallest instance size (1 vCPU, 3.75 GB RAM), you will be paying about $73 per cluster per month in management fees alone.

Google will also be introducing a service level agreement for GKE that guarantees 99.95% availability for regional clusters and 99.5% for zonal clusters.

Cloud Pricing Comparison Chart

For the example below, assume you need 80 vCPUs and 320 GB of RAM for one year to run your cluster. You’d need 20 comparable instances from each provider, giving you 175,200 compute hours per year. Here’s what that would look like across the cloud providers.
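
As a sketch of the arithmetic behind the comparison, combined with each provider’s management fee from the sections above:

```python
# Sketch of the sizing arithmetic behind the comparison: 80 vCPU / 320 GB RAM
# covered by twenty 4-vCPU / 16-GB nodes, plus each provider's management fee.
HOURS_PER_YEAR = 8760
NODES = 80 // 4  # 20 nodes of roughly 4 vCPU / 16 GB each

compute_hours = NODES * HOURS_PER_YEAR
print(f"{compute_hours:,} compute hours per year")  # 175,200, as above

# Cluster management fees: $0.10/hour for EKS and (from June 2020) GKE;
# AKS charges none. Node compute costs are separate.
for provider, fee in [("EKS", 0.10), ("AKS", 0.0), ("GKE", 0.10)]:
    print(provider, f"${fee * HOURS_PER_YEAR:.0f}/year per cluster")
```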

So, EKS is the most expensive, but only by about 5% per year over the least expensive option, GKE.

Overall, AWS is the most popular cloud to run containers and Kubernetes. According to a survey from Forbes, 29% of respondents use AWS EKS, 28% use Google GKE, and 25% use Azure AKS.

Don’t forget about your free options! In Azure and Google Cloud, you can start a free account on the platform, which gives you access to AKS or GKE, respectively, for 12 months for free.

Using containers? You can reduce your costs by running them only when needed. Learn how ParkMyCloud can help optimize container costs. 

NEW in ParkMyCloud: Containers! Now Offering AWS EKS Cost Optimization

We are happy to announce that ParkMyCloud can now optimize costs for container services, beginning today with Amazon EKS (managed Elastic Kubernetes Service) cost optimization through scheduling. As container adoption grows faster than ever before, ParkMyCloud is here to ensure your costs are controlled and optimized.

Container Cost Optimization Is Needed – Now

Container adoption is growing. Gartner predicts that by 2023, more than 70% of global organizations will be running more than two containerized applications in production – compared to less than 20% last year. That growth will, of course, come with greater spending on these technologies, with 451 Research predicting a $2.7 billion market this year, growing by 60% to $4.3 billion by 2022. In a customer survey we conducted earlier this year, 50% of EKS users said they planned either to keep their usage the same as this year (14%) or to increase it (36%).

As is the case with compute and database resources, inefficient use of containers can cause significant wasted cloud spend. Sources of waste include: nonproduction pods that are idle outside of working hours, oversized pods, oversized nodes, and overprovisioned persistent storage. 

EKS Scheduling in ParkMyCloud

Since 2015, ParkMyCloud users have reduced cloud costs by identifying idle and over-provisioned compute and database resources, and programmatically scheduling and resizing them, saving enterprises around the world tens of millions of dollars.

Now, that same scheduling is available to reduce EKS costs. You can set schedules based on working hours and automatically assign those schedules with the platform’s policy engine. Better yet, see ParkMyCloud schedule recommendations for your resources based on your utilization data. 
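
ParkMyCloud manages this scheduling for you, but for a sense of the underlying “parking” action, here’s a minimal sketch that scales an EKS managed node group to zero with boto3. The cluster and node group names are placeholders, and it assumes your node group allows a zero minimum size:

```python
# A minimal sketch of the underlying "parking" action for EKS: scaling a
# managed node group to zero outside working hours. Names are placeholders,
# and the node group is assumed to support a zero minimum size.
import boto3

eks = boto3.client("eks")

eks.update_nodegroup_config(
    clusterName="my-cluster",
    nodegroupName="non-prod-workers",
    scalingConfig={"minSize": 0, "maxSize": 5, "desiredSize": 0},
)
```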

Try it now:

  • If you are new to ParkMyCloud – start a free trial to get started.
  • If you have an existing ParkMyCloud account, please note you’ll have to update your IAM role permissions. Details are available in the release notes.
  • If you use another cloud or container service…we’ll catch up with you soon! See below.

More Container Optimization Coming Soon

This release is just the beginning of container optimization in ParkMyCloud. In the next few months, the platform will also support scheduling for Amazon ECS, AWS Fargate, Azure Kubernetes Services (AKS), and Google Kubernetes Engine (GKE).

In addition to container on/off scheduling, we also plan to provide Kubernetes cluster rightsizing recommendations and to help you account for your containers when evaluating reserved instance and savings plan purchases.

Questions? Feature requests? We’d love to hear them. Comment below or contact us directly.

How Microsoft Azure Deallocate VM vs. Stop VM States Differ

Do you know the difference between Azure’s “deallocate VM” and “stop VM” states? They are similar enough that, in conversation, I’ve noticed some confusion around the distinction.

If your VM is not running, it will have one of two states: Stopped, or Stopped (deallocated). Essentially, if something is “allocated”, you’re still paying for it. So while deallocating a virtual machine sounds like a harsh action that might permanently delete data, it’s actually the way to save money on your infrastructure costs and eliminate wasted Azure spend – with no data loss.

Azure’s Stopped State

When you are logged in to the operating system of an Azure VM, you can issue a command to shut down the server. This will kick you out of the OS and stop all processes, but it will maintain the allocated hardware (including the IP addresses currently assigned). If you find the VM in the Azure portal, you’ll see the state listed as “Stopped”. The most important thing to know about this state is that you are still being charged by the hour for the instance.

Azure’s Deallocated State

The other way to stop your virtual machine is through Azure itself, whether via the portal, PowerShell, or the Azure CLI. When you stop a VM through Azure, rather than through the OS, it goes into a “Stopped (deallocated)” state. This means that any non-static public IPs will be released, but you’ll also stop paying for the VM’s compute costs. This is a great way to save money on your Azure costs when you don’t need those VMs running, and it’s the state ParkMyCloud puts your VMs in when they are parked.
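
Here’s a short sketch of the two stop paths via the Azure Python SDK, for comparison (the resource group and VM names are placeholders):

```python
# A short sketch of the two stop paths via the Azure Python SDK: power_off
# leaves the hardware allocated (still billed); deallocate releases it.
# The resource group and VM names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

compute = ComputeManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Stopped: OS halted, hardware and dynamic IPs kept, compute charges continue.
compute.virtual_machines.begin_power_off("my-rg", "my-vm").result()

# Stopped (deallocated): hardware released, compute charges stop, data intact.
compute.virtual_machines.begin_deallocate("my-rg", "my-vm").result()
```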

Which State to Choose?

The only scenario in which you should ever choose the stopped state instead of the deallocated state for a VM in Azure is if you are only briefly stopping the server and would like to keep the dynamic IP address for your testing. If that doesn’t perfectly describe your use case, or you don’t have an opinion one way or the other, then you’ll want to deallocate instead so you aren’t being charged for the VM.

If you’re looking to automate scheduling when you deallocate VMs in Azure, ParkMyCloud can help with that. ParkMyCloud makes it easy to identify idle resources using Azure Metrics and to automatically schedule your non-production servers to turn off when they are idle, such as overnight or on weekends. Try it for free today to save money on your Azure bill!
