Why the Principle of Least Privilege is Important for SaaS-based Cloud Management

The principle of least privilege is important to understand and follow as you adopt SaaS technologies. The market for SaaS-based tools is growing rapidly, and such tools can typically be activated much more quickly and cheaply than creating a special-purpose virtual machine within your cloud environment. In this blog, I am focusing specifically on SaaS cloud management tools, a category that includes services like cloud diagramming tools, configuration management tools, storage management and backup tools, and cost optimization tools like ParkMyCloud.

Why the Principle of Least Privilege is Important

Before you start using such tools and services, you should carefully consider how much access you are granting into your cloud. The principle of least privilege is a fundamental tenet of any identity and access control policy, and basically means a service or user should have no more permissions than absolutely required in order to do a job.

Cloud account privileges are typically granted via roles and permissions. All of the major cloud providers offer numerous predefined roles, which consist of pre-packaged sets of permissions. Before granting any requested predefined role to a 3rd party, you should carefully investigate the permissions or security policy embedded in that role. In many, if not most, cases, you are likely to find that the predefined roles give away a lot more information or capability than you really want to share.

SaaS Onboarding – Where Least Privilege Can Get Lost

For onboarding of new SaaS customers, the initial permissions setup is often the most complicated step, and some SaaS cloud management platforms try to simplify the process by asking for one of these predefined roles – for example, the AWS ReadOnlyAccess policy, the Azure Reader role, or the GCP roles/viewer role.  While this certainly makes SaaS onboarding easier, it also exposes you to a massive data leakage problem.  For example, with the AWS ReadOnlyAccess policy a cloud diagramming tool can certainly get a good enough view of your cloud to create a map…but you are also granting read access to all of your IAM Users, CloudTrail events and history, any S3 objects you have not locked down with a distinct bucket policy, and…lots of other stuff you probably do not even know you have.  It is kind of like saying – “Here, please come on in and look at all of our confidential file cabinets – and it is OK for you to make copies of anything interesting, just please do not change any of our secrets to something else…”  No problem, right?

Obviously, least privilege becomes especially critical when giving permissions to a SaaS provider, given the risk of trusting your cloud environment to some unknown party.

Custom Policies for SaaS

Because of the broad nature of many of their predefined roles, all of the major cloud providers give you the ability to assign specific permissions to both internal and external users through Policies.  For example, the following policy snippets show the minimum permissions ParkMyCloud requests to list, start, and stop virtual machines on AWS, Google, and Azure.
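As an illustrative sketch (not necessarily the exact policy text any particular provider supplies), a minimal AWS IAM policy limited to describing, starting, and stopping EC2 instances might look like this:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "MinimalListStartStop",
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:DescribeRegions",
        "ec2:StartInstances",
        "ec2:StopInstances"
      ],
      "Resource": "*"
    }
  ]
}

The Google and Azure equivalents are built the same way, from a short list of read, start, and stop permissions (sketches of each appear later in this post).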

Creating and assigning these permissions makes SaaS onboarding a bit more complicated, but it is worth the effort in terms of reducing your exposure.

Other Policy Restrictions

What if you want to give a SaaS provider permissions, but lock it down to only certain resources or certain regions?  AWS and Azure allow you to specify in the policy which resources the policy can be applied to. Google Cloud….not so much.  AWS takes this the farthest, allowing for very robust policies down to specific services, and the addition of tag-based conditions on the policy permissions, for example:
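Here is a sketch of such a policy (the tag name matches the example described below; account-level details in the ARN are left as wildcards):

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "StartStopTaggedInstancesInUsEast1",
      "Effect": "Allow",
      "Action": [
        "ec2:StartInstances",
        "ec2:StopInstances"
      ],
      "Resource": "arn:aws:ec2:us-east-1:*:instance/*",
      "Condition": {
        "StringEquals": {
          "ec2:ResourceTag/parkmycloud": "yes"
        }
      }
    },
    {
      "Sid": "DescribeInstances",
      "Effect": "Allow",
      "Action": "ec2:Describe*",
      "Resource": "*"
    }
  ]
}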

This policy locks down the Start and Stop permissions to only those instances that have the tag name/value parkmycloud: yes and are located in the us-east-1 region.  Similar Conditions can be used to restrict access by region, instance type, and many other attributes. (This recent announcement shows another way to handle the region restriction.)

Azure has somewhat similar features, though with a slightly different JSON layout, as described here.  It does not appear that you can use resource tags this way in Azure, nor does Azure provide an easy way to limit the geographic scope of permissions.  You can approximate the location and grouping of resources by using Azure Management Groups, but that is not quite as flexible as an arbitrary tag-based system, and Management Groups are really intended to aggregate resources across subscriptions rather than to be more specific within a subscription.  That said, the Azure permissions defined here are a bit more granular than AWS’s.  This allows for a bit more specificity in permissions if it is needed, but can no doubt grow tedious to list and manage.
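As a sketch of what that granularity looks like in practice, a custom Azure role limited to reading, starting, and deallocating VMs might be defined roughly as follows (the role name and the subscription ID are placeholders):

{
  "Name": "Limited VM Start/Stop",
  "IsCustom": true,
  "Description": "Read, start, and deallocate virtual machines only.",
  "Actions": [
    "Microsoft.Compute/virtualMachines/read",
    "Microsoft.Compute/virtualMachines/start/action",
    "Microsoft.Compute/virtualMachines/deallocate/action"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/<subscription-id>"
  ]
}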

Google Cloud provides a long list of predefined roles here, with an excellent listing of the permissions each contains.  There is also an interesting page describing the taxonomy of the permissions here, but Google Cloud appears to make it a bit difficult to enumerate and understand the permissions individually, outside of the predefined roles.  Google does not provide any tag- or resource-based restrictions, apart from assignment at the Project level. More on user management and roles by cloud provider in this blog.
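Google Cloud does, however, support custom roles built from individual permissions, which is the closest equivalent to the policies above. A rough sketch of such a role definition (the title and exact permission list are illustrative):

{
  "title": "Limited VM Start/Stop",
  "description": "List, start, and stop Compute Engine instances only.",
  "stage": "GA",
  "includedPermissions": [
    "compute.instances.list",
    "compute.instances.start",
    "compute.instances.stop"
  ]
}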

Gotchas

You may note that the ec2:Describe permission in our last example does not have the tag-based restriction.  This is because the tag-based restriction can only be used with certain permissions, as shown in the AWS documentation.  Note also that some APIs can perform several different operations, some of which you may be OK with sharing, and others not.  For example, the AWS ec2:ModifyInstanceAttribute permission allows the API user to change the instance type.  But…this one API (and its associated permission) also allows the API user to modify security group assignments, shutdown behavior, and other attributes – things you may not want to share with an untrusted 3rd party.

Key takeaway here?  Look out for permissions that may have unexpected consequences.

Summary

Beware of SaaS cloud management providers who ask for simple predefined roles from your cloud provider.  Either they are offering a LOT more functionality than you are likely to want from a single provider, or they are asking for a lot more permissions than they need.  Ask for a “limited access policy” that gives the SaaS provider ONLY what they need, and look for a document that defines these permissions and how they tie back to what the SaaS provider is doing for you.

These limited access policies serve to limit your exposure to accidents or compromises at the SaaS provider.

Cloud Storage Cost Comparison: AWS vs. Azure vs. Google

Today, we’ll take a brief look at a cloud storage cost comparison across the three major cloud service providers. When it comes to finding a solution for your cloud computing needs, it is fair to say that every business’s solution is determined on a case-by-case basis – and given the breadth of cloud storage options available, that is certainly true here. A few things we’ll briefly touch on are pricing models, discounts, and steps you can take to avoid wasted cloud spend.

The leading cloud service providers each have strengths and weaknesses that ultimately determine which is the best fit to support your development infrastructure, operations, and applications. Cloud service providers offer many different pricing points depending on your compute, storage, database, analytics, application, and deployment requirements. Additionally, you’ll want to consider the services and networks each provides to see the full scope of their resource capabilities and governance.

Prices also vary with the type of hosting option you choose. One example is Amazon Relational Database Service (RDS): RDS pricing changes according to which database engine you use, and there are many more services like this to choose from.

More detail, beyond just storage, is available in our full cloud pricing comparison.

AWS and Google Stand Out

Although it is not always the case, AWS is presumed to be the least expensive option available, and it remains the leader in the cloud computing market. But Microsoft Azure and Google Cloud Platform (GCP) are not far behind, and in recent years they have driven innovation and price reductions that have closed the gap with AWS. That being said, being first to market gives AWS a great advantage over the competition: it serves businesses at a large scale and is able to offer lower prices than its competitors. AWS is well known for attracting more businesses, and in turn, it reinvests that money back into the cloud by adding more servers to its data centers. Google is closing the gap on AWS, having been the first to cut prices in its pricing model to match AWS’s.

Storage Services Overview

Let’s take a look at some of the more popular storage options offered by each of the three major providers.

Amazon S3

Amazon Simple Storage Service (S3) is a highly durable, performant, and secure cloud storage service. It provides management features at every level, scales on demand, and offers insights with built-in analytics.

Amazon EBS

Amazon Elastic Block Store (EBS) provides block-level storage volumes for use with EC2 instances. EBS delivers low-latency, consistent performance scaled to the needs of your application.

Amazon Glacier

Amazon Glacier provides data archiving and long-term backup at low cost. It allows you to query data in place and retrieve only the subset of data you need from within an archive.

More about AWS options: https://aws.amazon.com/products/storage/

Google Cloud Storage

Google Cloud Storage offers a single API for all storage classes, simplifying development integration and reducing code complexity. It’s highly scalable and performant, with unlimited object storage.

Cloud Filestore

Google Cloud Filestore is high-performance file storage for applications that require a filesystem interface and a shared filesystem for data.

Persistent Disk

Google Persistent Disk is reliable, high-performance block storage for virtual machine instances.

Explore Google storage options: https://cloud.google.com/products/storage/

Azure Archive Storage

Azure Archive Storage offers low-cost, durable, highly available, and secure cloud storage for rarely accessed data with flexible latency requirements.

Azure Blob Storage

Azure Blob Storage is massively scalable object storage for unstructured data.

Azure Files

Azure Files offers simple, secure, and fully managed cloud file shares.

Also check out this overview of Azure options: https://docs.microsoft.com/en-us/azure/architecture/aws-professional/services

Sample Pricing Comparison

[Chart: cloud storage cost comparison]

Eliminate Cloud Overspend and Save Money

Comparing cloud storage costs and choosing the right solution for your storage use case is important, but don’t forget that once you deploy, you need to keep optimizing your solution and its cost. It’s important that your organization fully understands how much can be wasted on cloud spend. Over-provisioned, underutilized, and idle cloud resources run up your cloud bill and create waste. Always ensure that you are optimizing costs and governing usage by eliminating wasted cloud spend – get started today.

Amazon ECS Overview: What You Need To Know

Amazon ECS is a great choice of container hosting platform for AWS developers, among the many available options. Jumping into an ECS deployment can be daunting, as there are multiple options and varying terminology, with hard-to-predict costs. We’ll go over some of the basics of Amazon ECS, including some terminology and pricing considerations you’ll need to keep in mind.

Amazon ECS 101

Amazon ECS (which stands for Elastic Container Service) lets you run Docker containers without having to manage the orchestration of those containers. With ECS, you can deploy your containers on EC2 servers or in a serverless mode, which Amazon calls Fargate. Both launch types handle container orchestration for you, and Fargate also manages the underlying servers, so you can just schedule and deploy your containers.

Amazon ECS can work for both long-running jobs and short bursts of tasks, and includes tools for adjusting the scale of the container fleet as well as the scheduling of those containers. Task placement strategies and constraints let you choose which instances get which tasks, or you can let AWS manage this by spreading tasks across Availability Zones.

Benefits of Amazon ECS include:

  • Easy integrations into other AWS services, like Load Balancers, VPCs, and IAM
  • Highly scalable without having to manage the cluster masters
  • Multiple management methods, including the AWS console, the AWS API, or CloudFormation templates
  • Elastic Container Registry helps you manage and sort your container images

Tasks and Services and Containers (Oh My!)

Diving into the world of containers on AWS requires the use of some terminology you may not be familiar with:

  • Container – An isolated environment that contains the bare minimum of services and code needed to run just a particular part of your application or microservice, designed to be run on any Docker-compatible OS.
  • Task Definition – A layout of the pieces required to run your application, which can include one or more containers along with networking and system requirements (see the sketch after this list).
  • Task – An instantiation of a Task Definition.  Multiple tasks can use the same task definition.
  • Service – A layout of the boundaries and scaling options you set for your groupings of similar Tasks; this relationship is similar to that between Auto Scaling Groups and EC2 virtual machines.
  • Cluster – A collection of EC2 instances running a specialized operating system where you will run your Service.
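To make some of these terms concrete, here is a heavily trimmed sketch of a Task Definition for a single-container Task (the family name, image, and sizing values are hypothetical):

{
  "family": "sample-web-app",
  "requiresCompatibilities": ["FARGATE"],
  "networkMode": "awsvpc",
  "cpu": "256",
  "memory": "512",
  "containerDefinitions": [
    {
      "name": "web",
      "image": "nginx:latest",
      "essential": true,
      "portMappings": [
        {
          "containerPort": 80,
          "protocol": "tcp"
        }
      ]
    }
  ]
}

A Service would then keep a desired number of Tasks based on this definition running, scaling them up or down within the Cluster.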

ECS Pricing: The (Hopefully Not) Million Dollar Question

Amazon ECS pricing has a few different variables, starting with your choice of deployment methods.  Since Fargate abstracts away the underlying infrastructure, you only pay for the seconds of vCPU and Memory that your Tasks are using (with a minimum of 1 minute for each Task). This pricing structure has the “serverless architecture” benefit of only paying for what you need when you need it, but also means that estimating these charges can be quite difficult.

Standard ECS pricing (using the EC2 launch type) does not charge per Task, but rather charges for the infrastructure you have deployed for your cluster. The cluster uses Auto Scaling Groups of EC2 instances, and during setup of the cluster you can choose the instance size you want and the number of instances for the initial cluster deployment.  Since the cluster can scale up and down, you have flexibility if you get a spike in task usage, but you do need to keep an eye on underutilized or idle instances.

Containing the Containers

As you can tell, Amazon ECS manages a lot of the back-end work for you, but it brings a whole different set of considerations for your organization.  ParkMyCloud has some news coming later this year to help you manage your ECS containers! Contact us if you’d like to be notified when that’s available.

Not yet using containers, but have other AWS infrastructure? We can help control costs.

Why Reserved Instance Pricing Needs Careful Evaluation

Once or twice a year we like to take a look at what is going on in the world of reserved instance pricing. We review both the latest offerings and options put out by the cloud providers and how users are choosing to use Reserved Instances (AWS), Reserved VM Instances (Azure), and Committed Use Discounts (Google Cloud).

A good place to start when it comes to usage patterns and trends is the annual RightScale (Flexera) State of the Cloud Report. The 2019 report shows that current reservation usage stands at 47% for AWS, 23% for Azure, and 10% for GCP. This data is interesting when viewed alongside companies reporting that their number one cloud initiative for the coming year is optimizing their existing use of the cloud. All of these cloud providers have a major focus on pre-selling infrastructure via their reservation programs, as this provides them with predictable revenue (something much loved by Wall St) and also allows them to plan for and match supply with demand. In return for an upfront commitment they offer discounts of “up to 80%” – although, much like your local furniture retailer’s big-savings headlines, these discount levels still warrant further investigation.

While working on an upcoming feature release, we began to dig a little deeper into the nature of current reserved instance pricing and discounts. From our research, it appears that real-world discount levels are in the 30%-50% range. Achieving some of the much higher discounts you might see the cloud providers promoting typically requires a three-year commitment, restriction to certain regions, restrictions on OS types, and generally a willingness to commit to spending a few million dollars.

Reservation discounts, while not as volatile as spot instance prices, do change and need to be carefully monitored and analyzed. For example, as of this writing, the popular m5.large instance type in the US East region costs $0.096 per hour when purchased on demand, but drops to $0.037 per hour when reserved – a significant 62% saving. However, securing that discount requires a three-year commitment and prepayment in full, up front. While the number of organizations committing to contracts of this nature is not publicly known, it is likely that only the most confident organizations with large cash reserves are positioned to make a play like this.

Depending on the precise program used to purchase the reservations, there can be options to convert specific instance families, instance types, and OSs to other types, to resell the instances on a secondary exchange for a penalty fee of 12% (on AWS, for example), or to terminate the agreement for the same 12% fee (on Azure). GCP’s Committed Use program seems to be the most stringent, as there is no way to cancel the contract or resell pre-purchased capacity, although Google does not offer a pre-purchase option in the first place.

As the challenge of optimizing cloud spend has moved up the priority list to take the #1 slot, a maturation process has taken place inside organizations when it comes to undertaking economic analysis and understanding the various tradeoffs. Some organizations are using tools to support such analysis; others are hiring consultants or using in-house analytics resources. Whatever the approach, analyzing an organization’s use of cloud typically requires balancing the purchase of different types of reservations, spot instances, and on-demand infrastructure that is highly optimized through automation tools. The level of complexity in such analysis is certainly not decreasing, and mistakes are common. However, the potential savings are significant if you achieve the right balance, and this is clearly something you should not ignore.

The relative balance between the different options to purchase and consume cloud services in many ways reflects the overall context within which organizations operate, their specific business models, and broader macro issues such as the outlook for the overall economy. Understanding the breadth of options is key, and although reservations are likely to be a key component for most organizations, it is worth digging into just how large the relative tradeoffs might be.

New: SmartParking for Google Database and AWS RDS Cost Optimization

Today, we’re happy to share the latest cost control functionality in ParkMyCloud: SmartParking for Google database and AWS RDS cost optimization – as well as several other improvements and updates to help you find and eliminate cloud waste.

Automatically Detect Idle Google & AWS RDS Databases

“SmartParking” is what we call automatic on/off schedule recommendations based on utilization history. ParkMyCloud analyzes your resource utilization history and creates recommended schedules for each resource to turn them off when they are typically idle. This minimizes idle time to maximize savings on cloud resources.

As with an investment portfolio, users can choose to receive SmartParking schedules that are “conservative”, “balanced”, or “aggressive” — where conservative schedules protect all historical “on” times, while aggressive schedules prioritize maximum savings.

With this release, Google Cloud SQL Databases and AWS RDS instances have been added to the list of resources that can be optimized with SmartParking – a list that also includes AWS EC2 instances, Azure virtual machines, and Google Cloud virtual machine instances.

Why not Azure? At this time, Azure databases can’t be “turned off” in the same way that AWS and Google Cloud databases can. If Azure releases this capability in the future, we will follow with parking and SmartParking capability shortly thereafter.

What Else is New?

In this release, other updates to the ParkMyCloud platform include:

  • Configurable notifications – users now have the option to configure shutdown warning notification times, from 0.25 hours to 24 hours in advance. Notifications can be received through email, Slack, Microsoft Teams, Google Hangouts, or custom webhook.
  • Usability updates to Single Sign-On configuration, Google Cloud Credentials add/edit screen, and filtering actions.

See details in the release notes here.

Beyond this most recent release, we’ve made plenty of updates to make ParkMyCloud work for you. These include:

How to Get Started  

It’s easy to get started with Google database and RDS cost optimization! If you haven’t tried out ParkMyCloud yet, get started with a 14-day free trial. During the trial, you’ll have access to the Enterprise tier, which lets you try out all the features listed above. After your trial is over, you can choose to subscribe to the tier that works for you – or keep using our free tier for as long as you like. See pricing details for more information.

If you already use ParkMyCloud, just log in and head over to the Recommendations tab. Depending on the time-window configured for your SmartParking settings, it may take several days or weeks to accumulate enough metrics data to make good recommendations. To configure the time window for recommendations, navigate to Recommendations and select the gear icon in the upper-right, and choose SmartParking Recommendation Settings. Then, sit back while we collect and analyze your data, and your databases will be SmartParking before you know it.

Cheers!

The Next Evolution of Cloud Management: Container Management

As we talk to prospects and customers alike, one of the most frequently requested features is container management. Containers – one of several growing optimization trends – help you package and run applications ‘anywhere’ in isolated environments, reducing configuration time when deploying to production. In theory this, like VMs, will help you increase the efficiency of your infrastructure – and we’re big fans of efficiency and optimization.

Are there enough containers that need management?

As we begin to plan our container management offering for later in the year, we need to understand whether this is just hype (as it seems everything is these days) or whether it is something our customers actually want and need.

First, let’s review the players in the container field. There are the core container technologies, Docker and Kubernetes, as well as the cloud service providers’ (CSPs’) managed offerings such as AWS ECS and AWS EKS, Azure AKS, and Google GKE – the latter three based on Kubernetes (longer container services comparison here).

So let’s dig into how big that market actually is. Most industry publications estimated that about $1.5B was spent on container technology in 2018, with a Compound Annual Growth Rate (CAGR) in the 30% range. Here is one summary from 451 Research that shows strong projected growth:

What kind of container management is needed?

The containers are there, so the next question is: what type of management is needed – especially for the CSP managed container services?

Container management, like cloud management more broadly, includes orchestration, security, monitoring, and of course, optimization.

In terms of optimization alone, we have identified 5 ways we think you can optimize and save on your container cloud costs:

  1. Rightsize your Pods (see the sketch after this list)
  2. Turn off your Idle Pods
  3. Rightsize your Nodes
  4. Consider Storage Opportunities
  5. Review Purchasing Options
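To illustrate the first item: rightsizing a Pod largely comes down to tuning its CPU and memory requests and limits so they match what the workload actually uses. A minimal Kubernetes sketch (the names and values are hypothetical):

{
  "apiVersion": "v1",
  "kind": "Pod",
  "metadata": {
    "name": "sample-app"
  },
  "spec": {
    "containers": [
      {
        "name": "app",
        "image": "nginx:latest",
        "resources": {
          "requests": { "cpu": "250m", "memory": "256Mi" },
          "limits": { "cpu": "500m", "memory": "512Mi" }
        }
      }
    ]
  }
}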

Do you need to focus on container management?

In short, if you plan to use any sort of containers in the cloud: yes. Containers provide opportunities for efficiency and more lightweight application development, but like any on-demand computing resource, they also leave the door open for wasted spend. Earlier this year, we estimated that at least $14.1 billion will be wasted on idle and oversized VMs alone. Unused, idle, and otherwise suboptimal container deployments will contribute billions more to that waste.

So yes: container management and optimization should be part of your cloud optimization plan.

How to Turn AWS Utilization Data into Automated Cost Control

Learn how your AWS utilization data in CloudWatch can be harnessed to optimize your cloud costs.

Register now for a chance to win a $100 Amazon.com gift card!

June 26th | 2 PM ET