How to Use the New AWS Cost Categories for Better Cost Allocation

Last week, AWS announced the general availability of AWS Cost Categories. Anyone involved in managing AWS costs in your organization should understand this new feature and start using it for better cost allocation and billing management.

What are AWS Cost Categories?

AWS Cost Categories are now visible in the console on the billing dashboard.

AWS Cost Categories are a new way to create custom groups that transcend tags and allow you to better manage costs according to your organizational structure, for example, by application group, team, or location.

Cost categories allow you to write rules and create custom groups of billing line items, which you can then use in AWS Cost Explorer, AWS Budgets, the AWS Cost and Usage Report, etc. You can group by account, tag, service, and charge types. 

How are cost categories different from tags?

At first, this “category” structure may seem similar to the tagging structure you already use in Cost Explorer, but there are a few key differences. 

First, you can create logic that categorizes costs from specific services as belonging to the same team. For example, you might have RDS resources belong to a category for the DBA team, Redshift to a FinOps category, or CodePipeline to a DevOps category. Categories also allow you to include costs that are not taggable, such as AWS support costs and some Reserved Instance charges.

Why should you use AWS cost categories?

The ability to create your own categorization rules is what makes this new option powerful. You can do this through the Cost Categories rule builder, JSON editor, or API. The rule builder is straightforward and has some built-in logic options such as “is, is not, contains, starts with” and “ends with”. 
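
If you prefer to automate this, a minimal sketch of the API route using boto3 follows. The category name, team values, and service-name strings are illustrative assumptions – match the SERVICE values to the exact strings that appear in your own billing data.

```python
# A minimal sketch of creating a cost category via the Cost Explorer API
# (boto3). Names and values below are placeholders for illustration.
import boto3

ce = boto3.client("ce")  # Cost Explorer hosts the Cost Categories API

response = ce.create_cost_category_definition(
    Name="Team",
    RuleVersion="CostCategoryExpression.v1",
    Rules=[
        {   # Route all RDS charges to the DBA team
            "Value": "DBA",
            "Rule": {"Dimensions": {
                "Key": "SERVICE",
                "Values": ["Amazon Relational Database Service"],
            }},
        },
        {   # Route all Redshift charges to the FinOps team
            "Value": "FinOps",
            "Rule": {"Dimensions": {
                "Key": "SERVICE",
                "Values": ["Amazon Redshift"],
            }},
        },
    ],
)
print(response["CostCategoryArn"])
```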

For organizations with many AWS accounts, categorizing accounts into business units, products, applications, or production vs. non-production is helpful for allocating and evaluating costs.

Ensure costs are in control

Of course, whenever you take a closer look at your current costs, you’ll find areas that can be better optimized. Make sure you are only paying for the AWS resources you actually need, and schedule idle resources to turn off using ParkMyCloud – now supporting EKS and soon to support Redshift. 

8 Things You Should Know About AWS Redshift Pricing

If you use AWS, it’s likely you’ll use or at least run across Amazon Redshift – so make sure you know these eight things about how AWS Redshift pricing works.

Amazon Redshift Overview

AWS calls Redshift the “most popular and fastest” cloud data warehouse. It is fully managed and scalable to petabytes of data for storage and analysis. You can use Redshift to analyze your data using SQL and business intelligence tools. It features:

  • Integration with data lakes and other AWS services, allowing you to query data and write data back to your data lake in open formats
  • High performance – fast query performance from columnar storage, data compression, and zone maps
  • Petabyte scale – “virtually unlimited,” according to AWS, with a scalable number and type of nodes and limitless concurrency

There are three node types, two current and one older:

  • RA3 nodes with managed storage – scale and pay for compute and managed storage independently. You choose the number of nodes based on performance requirements and pay only for the storage you use. Managed storage uses SSDs in each RA3 node for local storage, and Amazon S3 for longer-term storage. If the data in your node exceeds the local SSDs, it is automatically offloaded to S3.
  • DC2 nodes – these compute-intensive data warehouses include local SSD storage. You choose the number of nodes based on data size and performance requirements. Data is stored locally, and as it grows, you can add more compute nodes. AWS recommends this node type for datasets under 1TB uncompressed; otherwise, use RA3 for the S3 offloading capability to keep costs down.
  • DS2 nodes – these use HDDs, and AWS recommends you use RA3 instead.

Where did the name come from? In astronomy, “redshift” refers to the lengthening of electromagnetic radiation toward longer wavelengths as an object moves away from the observer – the light equivalent of the change in an ambulance siren pitch as it passes you, collectively known as the Doppler Effect. Or, if you’re into gossip, it’s a thumb to the nose at “big red” Oracle.

AWS Redshift Pricing Structure

So, how much does Amazon Redshift cost? Like EC2 and other services, the core cost is on-demand by the hour, based on the type and number of nodes in your cluster. 

Core On-Demand Pricing

  • RA3 – as of this writing, prices range from $3.26 per hour for an ra3.4xlarge in US East (N. Virginia) to $5.195 for the same type in Sao Paulo. Price increases linearly with size, with the ra3.16xlarge costing 4 times the ra3.4xlarge.
    • Data stored in managed storage is billed separately based on actual data stored in the RA3 node types, at the same rate whether your data is in SSDs or S3.
  • DC2 – the dc2.large currently costs $0.25 per hour in US East (N. Virginia) up to $0.40 in Sao Paulo.

Note: We’ve omitted DS2 as those are no longer recommended.
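
To make the structure concrete, here is a back-of-the-envelope sketch using the US East (N. Virginia) rate quoted above. Rates vary by region and change over time, so check the current pricing page before relying on these numbers.

```python
# Rough on-demand compute math for a Redshift cluster, using the
# us-east-1 ra3.4xlarge rate quoted above (subject to change).
RA3_4XLARGE_HOURLY = 3.26   # $/node-hour, US East (N. Virginia)
HOURS_PER_MONTH = 730       # average hours in a month

def monthly_on_demand_cost(nodes: int, hourly_rate: float,
                           hours: float = HOURS_PER_MONTH) -> float:
    """On-demand compute cost for a cluster running the full month."""
    return nodes * hourly_rate * hours

# A 4-node ra3.4xlarge cluster running 24x7:
print(f"${monthly_on_demand_cost(4, RA3_4XLARGE_HOURLY):,.2f}")  # ~$9,519.20
```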

Pricing for Additional Capabilities

Amazon Redshift Spectrum Pricing – Redshift Spectrum allows you to run SQL queries directly against Amazon S3 data. You are charged for the number of bytes scanned by Spectrum, rounded up to the next megabyte, with a 10MB minimum per query. Compressed and columnar data will keep costs down. Current pricing is $5 per terabyte of data scanned.
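
As a worked example of those rules – rounded up to the next megabyte, a 10MB minimum per query, $5 per terabyte – here is a small sketch:

```python
# Worked example of the Spectrum billing rules described above.
import math

RATE_PER_TB = 5.0
MB_PER_TB = 1024 * 1024

def spectrum_query_cost(bytes_scanned: int) -> float:
    mb = math.ceil(bytes_scanned / (1024 * 1024))  # round up per MB
    mb = max(mb, 10)                               # 10 MB minimum per query
    return mb / MB_PER_TB * RATE_PER_TB

print(spectrum_query_cost(2_300_000_000))  # a ~2.3 GB scan: ~$0.0105
```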

Concurrency Scaling Pricing – you accumulate one hour of concurrency scaling cluster credits for every 24 hours your main cluster is running. Past that, you will be charged the per-second on-demand rate.

Redshift Managed Storage Pricing – managed storage for RA3 node types is billed at a fixed GB-month rate for the region you are using, calculated hourly based on the total data present in managed storage.

8 Things to Keep in Mind

Of course, there’s always more to know.

  • Free trial – for your first Redshift cluster, you can get a two-month free trial of a dc2.large node. It’s actually a trial of 750 node-hours per month, so if you use more than 160GB of compressed SSD storage or run multiple nodes, you will exhaust it in less than one month, at which point you’ll be charged the on-demand rate unless you spin down the cluster. This is separate from the AWS Free Tier. (See the quick math after this list.)
  • Reserved Instances are available – by committing to a one- or three-year term, you can save 3% to 63% compared to on-demand, depending on the payment option you choose. But should you use them?
  • Billed per second – for partial hours, you will only be billed at a per-second rate. Surprisingly, this was only released in February of this year.
  • You can pause – you can pause and resume to suspend billing, but you will still pay for backup storage while a cluster is paused.
  • Redshift Spectrum pricing does not include costs for requests made against your S3 buckets – see S3 pricing for those rates. 
  • Redshift managed storage pricing does not include backup storage due to snapshots, and once the cluster is terminated, you will still be charged for your backups. Don’t let these get orphaned!
  • Data transfer – there is no charge for data transferred between Amazon Redshift and Amazon S3 within the same region for backup, restore, load, and unload operations – but for all other data transfers in and out, you will be billed at standard data transfer rates. 
  • RA3 is not available in all regions.
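
On the free trial point above, the quick math below shows why multi-node clusters exhaust the monthly allowance early:

```python
# The trial grants 750 node-hours per month, so multi-node clusters
# burn through it well before the month is out.
TRIAL_NODE_HOURS = 750

def trial_days(nodes: int) -> float:
    """Days until the monthly trial allowance runs out at 24x7 usage."""
    return TRIAL_NODE_HOURS / (nodes * 24)

print(trial_days(1))  # ~31.2 days -- a single node lasts the month
print(trial_days(2))  # ~15.6 days -- two nodes exhaust it mid-month
```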

Keep Your AWS Redshift Costs In Control

There are a few things you can do to optimize your AWS Redshift costs:

  • Use Reserved Instances where you have predictable needs and the savings over on-demand are high enough
  • Delete orphaned snapshots – like all backups, ensure that you are managing your snapshots and deleting them when their clusters are deleted
  • Schedule on/off times – for Redshift clusters used for development, testing, staging, and other purposes not needed 24×7, make sure you schedule them to turn off when not needed – possible since last month’s announcement that Redshift clusters can now be paused (see the sketch below). Automated scheduling is coming soon in ParkMyCloud!
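
For illustration, here is a minimal sketch of off-hours scheduling built on the pause/resume API mentioned above, e.g. run on a recurring schedule. The cluster identifier and working hours are placeholder assumptions, and the calls will raise an error if the cluster is already in the requested state.

```python
# A minimal off-hours scheduler sketch using the Redshift
# pause/resume API (boto3). Placeholder names throughout.
from datetime import datetime, timezone
import boto3

redshift = boto3.client("redshift")
CLUSTER_ID = "dev-cluster"        # hypothetical cluster identifier
WORK_START, WORK_END = 8, 18      # working hours, UTC

def handler(event=None, context=None):
    hour = datetime.now(timezone.utc).hour
    if WORK_START <= hour < WORK_END:
        redshift.resume_cluster(ClusterIdentifier=CLUSTER_ID)
    else:
        redshift.pause_cluster(ClusterIdentifier=CLUSTER_ID)
```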

If You Just Do One Thing Today, Run the AWS IAM Access Analyzer

When it was announced in December last year, AWS called the AWS IAM Access Analyzer “the sort of thing that will improve security for just about everyone that builds on AWS.” Last week, it was expanded to the AWS Organizations level. If you use AWS, use this tool to ensure your access is granted as intended across your accounts. 

“IAM” Having Problems

AWS provides robust security and user/role management, but that doesn’t mean you’re protected from the issues that can arise from improperly configured IAM access. Here are a few we’ve seen most often.

Creating a user when it should have been a role. IAM roles and IAM users can both be assigned policies, but they are intended to be used differently. IAM users should correspond to specific human users, who can be assigned long-term credentials and directly interact with AWS services. IAM roles are sets of capabilities that can be assumed by other entities – for example, third-party software that interacts with your AWS account (hi! 👋). Check out this post for more about roles vs. users.
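
To make the distinction concrete, here is a hedged sketch of the role pattern: a role that a third-party tool’s AWS account can assume (with an external ID), instead of a long-lived IAM user. The account ID, external ID, and role name are hypothetical.

```python
# Sketch: a cross-account role for a third-party tool, rather than an
# IAM user with long-term credentials. All identifiers are placeholders.
import json
import boto3

iam = boto3.client("iam")

trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},  # vendor account (hypothetical)
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": "example-external-id"}},
    }],
}

iam.create_role(RoleName="ThirdPartyAccessRole",
                AssumeRolePolicyDocument=json.dumps(trust_policy))
```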

Assigning a pre-built policy vs. creating a custom policy. There are plenty of pre-built policies – here are a few dozen examples – but you can also create custom policies. The problems arise when, in a hurry to grant access to users, you grant more than necessary, leaving holes. For example, we’ve seen people get frustrated when their users don’t have access to a VM and have little insight into why – while it could be that the VM has been terminated or moved to a region the user can’t view, an “easy fix” is to broaden that user’s access.

Leaving regions or resource types open. If an IAM role needs permission to spin EC2 instances up and down, you might grant full EC2 privileges. But if the users with that role only ever use us-east-1 and don’t look around the other regions (why would they?) or keep a close eye on their bill, they may have no idea that some bad actor is bitcoin mining in your account over in us-west-2.
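
One way to close that particular hole is a region condition on the policy. A minimal sketch, assuming a hypothetical policy name and a pared-down action list:

```python
# Sketch: scope EC2 permissions to a single region using the
# aws:RequestedRegion global condition key. Policy name is hypothetical;
# review the action list against your actual needs.
import json
import boto3

iam = boto3.client("iam")

policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:StartInstances", "ec2:StopInstances",
                   "ec2:DescribeInstances"],
        "Resource": "*",
        "Condition": {"StringEquals": {"aws:RequestedRegion": "us-east-1"}},
    }],
}

iam.create_policy(PolicyName="Ec2UsEast1Only",
                  PolicyDocument=json.dumps(policy))
```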

Potential attackers need only an opportunity to get access to your account, and the impact could range from exposed customer data to ransomware to total resource deletion. So it’s important to know which IAM paths are open and whether they’re in use.

Enter the AWS IAM Access Analyzer

The IAM Access Analyzer uses “automated reasoning”, which is a type of mathematical logic, to review your IAM roles, S3 buckets, KMS keys, AWS Lambda functions, and Amazon SQS queues. It’s free to use and straightforward to set up.

Once you set up an analyzer, you will see a list of findings that shows items for you to review and address or dismiss. With the expansion to the organizational level, you can establish your entire organization as a “zone of trust”, so that issues identified are for resources accessible from outside the organization. 

The Access Analyzer continuously monitors for new and updated policies, and you can manually re-analyze as well.
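
If you prefer to script the setup, a minimal sketch with boto3 follows. The analyzer name is a placeholder; use type="ORGANIZATION" instead to set the zone of trust at the organization level.

```python
# Sketch: create an analyzer and pull its findings via the IAM Access
# Analyzer API (boto3). The analyzer name is a placeholder.
import boto3

aa = boto3.client("accessanalyzer")

analyzer = aa.create_analyzer(analyzerName="account-analyzer",
                              type="ACCOUNT")
arn = analyzer["arn"]

# Each finding is a resource reachable from outside the zone of trust
# and worth reviewing, addressing, or dismissing.
for finding in aa.list_findings(analyzerArn=arn)["findings"]:
    print(finding.get("resource"), finding["status"])
```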

3 Things to Go Do Now

If you had time to read this, you probably have time to go set up an analyzer:

  1. Create an AWS IAM Access Analyzer for your account or organization.
  2. Review your findings and address any potential issues.
  3. Check the access you’re granting to any third-party service. For example, ParkMyCloud requests only the minimum permissions needed to do its job. Are you assigning anyone the AWS-provided “ReadOnlyAccess” role?  If so, you are sharing far more than is likely needed.

How to Use Azure Resource Groups for Better VM Management

When you create a virtual machine in Microsoft Azure, you are required to assign it to an Azure Resource Group. This grouping structure may seem like just another bit of administrivia, but savvy users will utilize this structure for better governance and cost management for their infrastructure.

What are Azure Resource Groups?

Azure Resource Groups are logical collections of virtual machines, storage accounts, virtual networks, web apps, databases, and/or database servers. Typically, users will group related resources for an application, divided into groups for production and non-production — but you can subdivide further as needed.

They are part of Azure’s resource management model, which provides four levels, or “scopes,” of management to help you organize your resources.

  • Management groups: These groups are containers that help you manage access, policy, and compliance for multiple subscriptions. All subscriptions in a management group automatically inherit the conditions applied to the management group.
  • Subscriptions: A subscription associates user accounts and the resources that were created by those user accounts. Each subscription has limits or quotas on the amount of resources you can create and use. Organizations can use subscriptions to manage costs and the resources that are created by users, teams, or projects.
  • Resource groups: A resource group is a logical container into which Azure resources like web apps, databases, and storage accounts are deployed and managed.
  • Resources: Resources are instances of services that you create, like virtual machines, storage, or SQL databases.

One important factor to keep in mind when managing these scopes is the difference between an Azure subscription and a management group: a management group cannot include an Azure resource – it can only include other management groups or subscriptions. Azure Management Groups provide a level of organization above Azure Subscriptions.

You will manage resource groups through the “Azure Resource Manager”. Benefits of the Azure Resource Manager include the ability to manage your infrastructure in a visual UI rather than through scripts; tagging management; deployment templates; and simplified role-based access control.

You can organize your resource groups for securing, managing, and tracking the costs related to your workflows. 
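
For example, creating a resource group (with tags for cost tracking) takes a few lines with the Azure SDK for Python. The subscription ID, group name, location, and tags below are placeholder assumptions.

```python
# Sketch: create a tagged resource group with azure-identity and
# azure-mgmt-resource. All names and IDs are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

credential = DefaultAzureCredential()
client = ResourceManagementClient(credential, "<subscription-id>")

group = client.resource_groups.create_or_update(
    "myapp-nonprod-rg",
    {
        "location": "eastus",
        "tags": {"project": "myapp", "environment": "nonprod"},
    },
)
print(group.id)
```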

Group structures like Azure’s exist at the other big public clouds — AWS, for example, offers optional Resource Groups, and Google Cloud “projects” define a level of grouping that falls someplace between Azure subscriptions and Azure Resource Groups.

Tips for Using Resource Groups

When organizing your resource groups, it is essential that all the resources in a group share the same lifecycle. For instance, if an application requires resources that need to be updated together – say, a SQL database plus a web app or mobile app – it makes sense to group those resources in the same resource group. However, dev/test, staging, and production resources should go in different resource groups, as the resources in those environments have different lifecycles.

Other things to consider when building your Azure list of resource groups: 

  • Resources can be added to or deleted from an Azure Resource Group. However, each of your resources should belong to a resource group, so if you remove a resource from one resource group, you should add it to another.
  • Keep in mind, not all resources can be moved to different resource groups.
  • Azure resource group regions: the resources you include in a resource group can be located in different Azure regions.
  • Grant access with resource groups: you should use resource groups to control access to your resources – more on this below.

How to Use Azure Resource Groups Effectively for Governance

Azure resource groups are a handy tool for role-based access control (RBAC). Typically, you will want to grant user access at the resource group level – groups make this simpler to manage and provide greater visibility.

Azure resource group permissions help you follow the principle of least privilege. Users, processes, applications, and devices can be provided with the minimum permissions needed at the resource group level, rather than at the management group or subscription levels. For example, a policy relating to encryption key management can be applied at the management group level, while a start/stop scheduling policy might be applied at the resource group level.

Effective use of tagging allows you to identify resources for technical, automation, billing, and security purposes. Tags can extend beyond resource groups, which allows you to use tags to associate groups and resources that belong to the same project, application, or service. Be sure to apply tagging best practices, such as requiring a standard set of tags to be applied before a resource is deployed, to ensure you’re optimizing your resources.
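
As a small sketch of that cross-group tagging in practice, you can list every resource carrying a given project tag, regardless of which resource group it lives in. Tag names and values are placeholders; client setup is as in the earlier example.

```python
# Sketch: find all resources with a given project tag across resource
# groups, using a tag filter on the resources list call.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(),
                                  "<subscription-id>")

for res in client.resources.list(
        filter="tagName eq 'project' and tagValue eq 'myapp'"):
    print(res.type, res.name)
```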

Azure Resource Groups Simplify Cost Management

Azure Resource Groups also provide a ready-made structure for cost allocation — resource groups make it simpler to identify costs at a project level than relying on Azure subscriptions alone. Additionally, you can use groups to manage resource scheduling and, when they’re no longer needed, termination.

You can do this manually, or through your cost optimization platform such as ParkMyCloud. Continuous cost control comes from actual action – which is what ParkMyCloud provides you through a simple UI (with full RBAC), smart recommendations with one-click remediation, and an automatic policy engine that can schedule your resources by default based on your tagging or naming conventions. For almost all Azure users, this means automatic assignment to teams, so you can provide governed user access to ParkMyCloud. It also means you can set on/off schedules at the group level, to turn your non-production groups off when they’re not needed to help you reduce cloud waste and maximize the value of your cloud. Start a trial today to see the automation in action.

NEW in ParkMyCloud: Containers! Now Offering AWS EKS Cost Optimization

We are happy to announce that ParkMyCloud can now optimize costs for container services, beginning today with Amazon EKS (managed Elastic Kubernetes Service) cost optimization through scheduling. As container adoption grows faster than ever before, ParkMyCloud is here to ensure your costs are controlled and optimized.

Container Cost Optimization Is Needed – Now

Container adoption is growing. Gartner predicts that by 2023, more than 70% of global organizations will be running more than two containerized applications in production – compared to less than 20% last year. That growth will, of course, come with greater spending on these technologies, with 451 Research predicting a $2.7 billion market this year, growing 60% to $4.3 billion by 2022. In a customer survey we conducted earlier this year, 50% of EKS users said they planned either to keep their usage the same next year (14%) or to increase it (36%).

As is the case with compute and database resources, inefficient use of containers can cause significant wasted cloud spend. Sources of waste include: nonproduction pods that are idle outside of working hours, oversized pods, oversized nodes, and overprovisioned persistent storage. 

EKS Scheduling in ParkMyCloud

Since 2015, ParkMyCloud users have reduced cloud costs by identifying idle and over-provisioned compute and database resources, and programmatically scheduling and resizing them, saving enterprises around the world tens of millions of dollars.

Now, that same scheduling is available to reduce EKS costs. You can set schedules based on working hours and automatically assign those schedules with the platform’s policy engine. Better yet, see ParkMyCloud schedule recommendations for your resources based on your utilization data. 
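
ParkMyCloud’s own scheduling mechanics aren’t shown here, but as a rough illustration of the underlying idea, one way to “park” EKS worker capacity is to scale a managed nodegroup down outside working hours. A minimal sketch, assuming your nodegroup permits a zero minimum and using placeholder names:

```python
# Sketch: "park" an EKS managed nodegroup by scaling workers to zero
# (boto3). Not ParkMyCloud's implementation; names are placeholders.
import boto3

eks = boto3.client("eks")

def park_nodegroup(cluster: str, nodegroup: str) -> None:
    """Scale worker nodes to zero; pods stay pending until resumed."""
    eks.update_nodegroup_config(
        clusterName=cluster,
        nodegroupName=nodegroup,
        scalingConfig={"minSize": 0, "maxSize": 3, "desiredSize": 0},
    )

park_nodegroup("dev-cluster", "default-workers")
```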

Try it now:

  • If you are new to ParkMyCloud – start a free trial to get started.
  • If you have an existing ParkMyCloud account, please note you’ll have to update your IAM role permissions. Details are available in the release notes.
  • If you use another cloud or container service…we’ll catch up with you soon! See below.

More Container Optimization Coming Soon

This release is just the beginning of container optimization in ParkMyCloud. In the next few months, the platform will also support scheduling for Amazon ECS, AWS Fargate, Azure Kubernetes Services (AKS), and Google Kubernetes Engine (GKE).

In addition to container on/off scheduling, we also plan to provide Kubernetes cluster rightsizing recommendations and help you account for your containers when evaluating Reserved Instance and Savings Plan purchases.

Questions? Feature requests? We’d love to hear them. Comment below or contact us directly.
