Microsoft Azure IAM, also known as Access Control (IAM), is the product Azure provides for role-based access control (RBAC) and governance of users and roles. Identity management is a crucial part of cloud operations because misapplied permissions create security risks. Whenever you add a new identity (a user, group, or service principal) or a new resource (such as a virtual machine, database, or storage blob), you should grant access with as narrow a scope as possible. Here are some of the questions to ask yourself to maintain maximum security:
1. Who needs access?
Granting access to an identity covers both human users and programmatic access from applications and scripts. If you are using Azure Active Directory, you will likely want to use those directory identities for role assignments. Consider using an existing group of users, or creating a new group, to apply similar permissions across a set of users; you can then remove a user from that group later to revoke those permissions.
Programmatic access is typically granted through Azure service principals. Since it’s not a user logging in, the application or script will use the App Registration credentials to connect and run any commands. As an example, ParkMyCloud uses a service principal to get a list of managed resources, start them, stop them, and resize them.
2. What role do they need?
Azure IAM uses roles to give specific permissions to identities. Azure has a number of built-in roles based on a few common functions:
- Owner – Full management access, including granting access to others
- Contributor – Management access to perform all actions except granting access to others
- User Access Administrator – Specific access to grant access to others
- Reader – View-only access
These built-in roles can be more specific, such as “Virtual Machine Contributor” or “Log Analytics Reader”. However, even with these more specific pre-defined roles, the principle of least privilege suggests you are almost always granting more access than is truly needed.
For even more granular permissions, you can create Azure custom roles and list the specific commands that can be run. As an example, ParkMyCloud recommends creating a custom role that lists only the specific commands its features require. This ensures that you start with too few permissions and slowly build up based on the needs of the user or service account. Not only can this prevent data leaks or data theft, but it can also protect against threats like malware, former-employee revenge, and rogue bitcoin mining.
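As a sketch of how granular this can get, here is a hypothetical custom role definition that allows only reading, starting, and deallocating virtual machines. The role name, description, and subscription ID below are placeholders, not values from any real environment:

```python
import json

# A hypothetical least-privilege custom role: read, start, and deallocate
# VMs only. The subscription ID in AssignableScopes is a placeholder.
custom_role = {
    "Name": "VM Scheduler (custom)",
    "IsCustom": True,
    "Description": "Least-privilege role: read, start, and deallocate VMs only.",
    "Actions": [
        "Microsoft.Compute/virtualMachines/read",
        "Microsoft.Compute/virtualMachines/start/action",
        "Microsoft.Compute/virtualMachines/deallocate/action",
    ],
    "NotActions": [],
    "AssignableScopes": ["/subscriptions/00000000-0000-0000-0000-000000000000"],
}

print(json.dumps(custom_role, indent=2))
```

Saved to a file, a definition like this could be registered with `az role definition create --role-definition @vm-scheduler.json`.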
3. Where do they need access?
The final piece of an Azure IAM permission set is deciding the specific resource that the identity should be able to access. This should be at the most granular level possible to maintain maximum security. For example, a Cloud Operations Manager may need access at the management group or subscription level, while a SQL Server utility may just need access to specific database resources. When creating or assigning the role, this is typically referred to as the “scope” in Azure.
Our suggestion is to always think twice before using a subscription or management group as the scope of a role. The scale of your subscriptions comes into consideration: organizations with many small, narrowly-purposed subscriptions may be able to use subscription-level scope more often. On the flip side, some companies have broader subscriptions and then use resource groups or tags to limit access, which means the scope is often smaller than a whole subscription.
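To make "scope" concrete, here is a small sketch of how Azure scope strings nest, from a subscription down to a single VM. All IDs and names are placeholders:

```python
# Azure scope strings nest: subscription > resource group > resource.
def subscription_scope(sub_id: str) -> str:
    return f"/subscriptions/{sub_id}"

def resource_group_scope(sub_id: str, rg: str) -> str:
    return f"{subscription_scope(sub_id)}/resourceGroups/{rg}"

def vm_scope(sub_id: str, rg: str, vm: str) -> str:
    return (f"{resource_group_scope(sub_id, rg)}"
            f"/providers/Microsoft.Compute/virtualMachines/{vm}")

# Prefer the narrowest scope that still covers the identity's job:
scope = vm_scope("00000000-0000-0000-0000-000000000000", "rg-dev", "vm-build01")
print(scope)
```

A scope string like this is what you supply when assigning a role, e.g. via `az role assignment create --scope ...`.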
More Secure, Less Worry
By revisiting these questions for each new resource or new identity that is created, you can quickly develop habits to maintain a high level of security using Azure IAM. For a real-world look at how we suggest setting up a service principal with a custom role to manage the power scheduling and rightsizing of your VMs, scale sets, and AKS clusters, check out the documentation for ParkMyCloud Azure access, and sign up for a free trial today to get it connected securely to your environment.
There is an abundance of great resources that cover Google Cloud best practices. To give a little more insight into the most recent guidance offered for Google Cloud, here is a list of 17 best practices drawn from recent articles, with tips and tricks to help you fully utilize and optimize your Google Cloud environment.
1. Ensure You Have Total Visibility of Data
- “Without a holistic view of data and its sources, it can be difficult to know what data you have, where data originated from, and what data is in the public domain that shouldn’t be.”
2. Design Data Loss Prevention Policies in G Suite
- “Data Loss Prevention in G Suite is a set of policies, processes, and tools that are put in place to ensure your sensitive information won’t be lost during a fire, natural disaster or break in. You never know when tragedy will strike, that’s why you should invest in prevention policies before it’s too late.”
3. Have a Logging Policy in Place
- “It is important to create a comprehensive logging policy within your cloud platform to help with auditing and compliance. Access logging should be enabled on storage buckets so that you have an easily accessible log of object access. Administrator audit logs are created by default, but you should enable Data Access logs for Data Writes in all services.”
4. Use Display Names in your Dataflow Pipelines
- “Always use the name field to assign a useful, at-a-glance name to the transform. This field value is reflected in the Cloud Dataflow monitoring UI and can be incredibly useful to anyone looking at the pipeline. It is often possible to identify performance issues without having to look at the code using only the monitoring UI and well-named transforms.”
5. Automate Cost Optimizations
- “One of the best practices for cost optimization is to automate tasks and reduce manual intervention. Automation is simplified using a label – which is a key-value pair applied to various Google Cloud services. You can attach a label to each resource (such as Compute instances), then filter the resources based on their labels.”
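As a sketch of the idea in that quote, filtering an inventory by label is straightforward once resources are tagged consistently. The records below are hypothetical stand-ins for what a Compute API listing might return:

```python
# Hypothetical inventory records, labeled with key-value pairs.
instances = [
    {"name": "web-prod-1", "labels": {"env": "prod", "team": "web"}},
    {"name": "web-dev-1",  "labels": {"env": "dev",  "team": "web"}},
    {"name": "etl-dev-1",  "labels": {"env": "dev",  "team": "data"}},
]

def by_label(resources, key, value):
    """Filter resources whose labels contain the given key/value pair."""
    return [r for r in resources if r["labels"].get(key) == value]

# Non-production instances are prime candidates for automated stop schedules.
dev_instances = by_label(instances, "env", "dev")
print([r["name"] for r in dev_instances])  # ['web-dev-1', 'etl-dev-1']
```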
6. Take Advantage of Committed & Sustained Use Discounts
- “At a commitment of up to 3 years and no upfront payment, customers can save money up to 57% of the normal price with this purchase. Availing these discounts can be one among GCP best practices as these discounts can be utilized for standard, highcpu, highmem and custom machine types and node groups which are sole-tenant.”
- “GCP has a plan called “Sustained Use Discounts” which you can avail when you consume certain resources for a better part of a billing month. As these discounts are applicable to a lot of resource like sole-tenant nodes, GPU devices, custom machine, etc. opting for these discounts would be another best practice on GCP.”
7. Use Preemptible VMs
- “As with most trade-offs, the biggest reason to use a preemptible VM is cost. Preemptible VMs can save you up to 80% compared to a normal on-demand virtual machine. This is a huge savings if the workload you’re trying to run consists of short-lived processes or things that are not urgent and can be done any time.”
8. Purchase Commitments
- “The sustained usage discounts are a major differentiator for GCP. They apply automatically once your instance is online for more than 25% of the monthly billing cycle and can net you a discount of up to 30% depending on instance (“machine”) type. You can combine sustained and committed use discounts but not at the same time. Committed use can get you a discount of up to 57% for most instance types and up to 70% for memory-optimized types.”
9. Apply Compute Engine Rightsizing Recommendations
- “Compute Engine provides machine type rightsizing recommendations to help you optimize the resource utilization of virtual machine (VM) instances. These recommendations are generated automatically based on system metrics gathered by the Stackdriver Monitoring service over the previous eight days. Use these recommendations to resize your compute instance’s machine type to more efficiently use the instance’s resources.”
10. Utilize Cost Management Tools That Take Action
- “Using third-party tools for cloud optimization help with cost visibility and governance and cost optimization. Make sure you aren’t just focusing on cost visibility and recommendations, but find a tool that takes that extra step and takes those actions for you…This automation reduces the potential for human error and saves organizations time and money by allowing developers to reallocate their time to more beneficial tasks. ”
11. Ensure You’re Only Paying for the Compute Resources You Need
- When adopting or optimizing your public cloud use, it’s important to eliminate wasted spend from idle resources – which is why you need to include an instance scheduler in your plan. An instance scheduler ensures that non-production resources – those used for development, staging, testing, and QA – are stopped when they’re not being used, so you aren’t charged for compute time you’re not actually using.
12. Optimize Performance and Storage Costs
- “In the cloud, where storage is billed as a separate line item, paying attention to storage utilization and configuration can result in substantial cost savings. And storage needs, like compute, are always changing. It’s possible that the storage class you picked when you first set up your environment may no longer be appropriate for a given workload.”
13. Optimize Persistent Disk Performance
- “When you launch a virtual machine compute engine in GCP, a disk is attached to perform as the local storage for the application. When you terminate this compute engine, the unattached disk can still be running. Google continues to charge for the full price of the disk, even though the disks are not active. This can significantly increase your cloud costs. Make sure that you don’t have any unattached disks that are still running.”
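The check described above can be sketched in a few lines. These disk records are hypothetical, modeled on the `users` field Compute Engine reports for each disk, which is empty when a disk is unattached:

```python
# Hypothetical disk records; "users" lists the instances a disk is
# attached to, so an empty list means the disk is unattached but billed.
disks = [
    {"name": "boot-web-1", "sizeGb": 50,  "users": ["instances/web-1"]},
    {"name": "old-data",   "sizeGb": 500, "users": []},
    {"name": "scratch",    "sizeGb": 200, "users": []},
]

def unattached(disk_list):
    """Return disks that no instance is currently using."""
    return [d for d in disk_list if not d.get("users")]

orphans = unattached(disks)
wasted_gb = sum(d["sizeGb"] for d in orphans)
print(f"{len(orphans)} unattached disks, {wasted_gb} GB billed but unused")
```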
14. Apply Least Privilege Access Controls / Identity and Access Management
- “The principle of least privilege is a critical foundational element in GCP security and security more broadly. The principle is the concept of only providing employees with access to applications and resources they need to properly do their jobs.”
15. Manage Unrestricted Traffic and Firewalls
- “Limit the IP ranges that you assign to each firewall to only the networks that need access to those resources. GCP’s advanced VPC features allow you to get very granular with traffic by assigning targets by tag and Service Accounts. This allows you to express traffic flows logically in a way that you can identify later, such as allowing a front-end service to communicate to VMs in a back-end service’s Service Account.”
16. Ensure Your Bucket Names are Unique Across the Whole Platform
- “It is recommended to append random characters to the bucket name and not include the company name in it. An example is “prod-logs-b7b12b36511ac3462d12e62164dfff4e”. This will make it harder for an attacker to locate buckets in a targeted attack.”
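A small sketch of that naming scheme, using Python's `secrets` module to generate the random suffix:

```python
import secrets

def obscure_bucket_name(prefix: str) -> str:
    """Append a random hex suffix so the bucket name is hard to guess and
    globally unique; deliberately omit the company name from the prefix."""
    return f"{prefix}-{secrets.token_hex(16)}"

name = obscure_bucket_name("prod-logs")
print(name)  # e.g. prod-logs-b7b12b36511ac3462d12e62164dfff4e
```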
17. Set Up a Google Cloud Organizational Structure
- “When you first log into your Google Admin console, everything will be grouped into a single organizational unit. Any settings you apply to this group will apply to all the users and devices in the organization. Planning out how you want to organize your units and hierarchy before diving in will help you save time and create a more structured security strategy.”
You can use the best practices listed above as a quick reference of things to keep in mind when using Google Cloud. Have any Google Cloud best practices you’ve learned recently? Let us know in the comments below!
When it was announced in December last year, AWS called the AWS IAM Access Analyzer “the sort of thing that will improve security for just about everyone that builds on AWS.” Last week, it was expanded to the AWS Organizations level. If you use AWS, use this tool to ensure your access is granted as intended across your accounts.
“IAM” Having Problems
AWS provides robust security and user/role management, but that doesn’t mean you’re protected from the issues that can arise from improperly configured IAM access. Here are a few we’ve seen the most often.
Creating a user when it should have been a role. IAM roles and IAM users can both be assigned policies, but they are intended to be used differently. IAM users should correspond to specific human users, who can be assigned long-term credentials and directly interact with AWS services. IAM roles are sets of capabilities that can be assumed by other entities – for example, third-party software that interacts with your AWS account (hi! 👋). Check out this post for more about roles vs. users.
Assigning a pre-built policy vs. creating a custom policy. There are plenty of pre-built policies – here are a few dozen examples – but you can also create custom policies. The problems arise when, in a hurry to grant access to users, you grant more than necessary, leaving holes. For example, we’ve seen people get frustrated when their users don’t have access to a VM but little insight into why – while it could be that the VM has been terminated or moved to a region the user can’t view, an “easy fix” is to broaden that user’s access.
Leaving regions or resource types open. If an IAM role needs permission to spin EC2 instances up and down, you might grant full EC2 privileges. But if the users with that role only ever use us-east-1 and don’t look around the other regions (why would they?) or keep a close eye on their bill, they may have no idea that some bad actor is bitcoin mining in your account over in us-west-2.
Potential attacks need only an opportunity to get access to your account, and the impact could range from exposing customer data to ransomware to total resource deletion. So it’s important to know what IAM paths are open and whether they’re in use.
Enter the AWS IAM Access Analyzer
The IAM Access Analyzer uses “automated reasoning”, which is a type of mathematical logic, to review your IAM roles, S3 buckets, KMS keys, AWS Lambda functions, and Amazon SQS queues. It’s free to use and straightforward to set up.
Once you set up an analyzer, you will see a list of findings that shows items for you to review and address or dismiss. With the expansion to the organizational level, you can establish your entire organization as a “zone of trust”, so that issues identified are for resources accessible from outside the organization.
The Access Analyzer continuously monitors for new & updated policies, and you can manually re-analyze as well.
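For illustration, setting up an analyzer with boto3 might look like the sketch below. The analyzer name is a placeholder, and the actual API call is left commented out since it requires AWS credentials:

```python
# Build the parameters for an IAM Access Analyzer, scoped either to a
# single account or to a whole organization ("zone of trust").
def analyzer_params(name: str, organization_wide: bool) -> dict:
    return {
        "analyzerName": name,  # placeholder name
        "type": "ORGANIZATION" if organization_wide else "ACCOUNT",
    }

params = analyzer_params("trust-zone-analyzer", organization_wide=True)
# With credentials configured, this dict could be passed to the real API:
#   boto3.client("accessanalyzer").create_analyzer(**params)
print(params)
```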
3 Things to Go Do Now
If you had time to read this, you probably have time to go set up an analyzer:
- Create an AWS IAM access analyzer for your account or organization
- Review your findings and address any potential issues.
- Check the access you’re granting to any third-party service. For example, ParkMyCloud requests only the minimum permissions needed to do its job. Are you assigning anyone the AWS-provided “ReadOnlyAccess” role? If so, you are sharing far more than is likely needed.
When you create a virtual machine in Microsoft Azure, you are required to assign it to an Azure Resource Group. This grouping structure may seem like just another bit of administrivia, but savvy users will utilize this structure for better governance and cost management for their infrastructure.
What are Azure Resource Groups?
Azure Resource Groups are logical collections of virtual machines, storage accounts, virtual networks, web apps, databases, and/or database servers. Typically, users group related resources for an application, divided into groups for production and non-production; you can subdivide further as needed.
They are part of the Azure resource management model, which provides four levels, or “scopes”, of management to help you organize your resources.
- Management groups: These groups are containers that help you manage access, policy, and compliance for multiple subscriptions. All subscriptions in a management group automatically inherit the conditions applied to the management group.
- Subscriptions: A subscription associates user accounts and the resources that were created by those user accounts. Each subscription has limits or quotas on the amount of resources you can create and use. Organizations can use subscriptions to manage costs and the resources that are created by users, teams, or projects.
- Resource groups: A resource group is a logical container into which Azure resources like web apps, databases, and storage accounts are deployed and managed.
- Resources: Resources are instances of services that you create, like virtual machines, storage, or SQL databases.
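One way to see how these scopes nest: every full resource ID embeds the resource group and subscription that contain it (management groups sit above subscriptions and do not appear in the resource ID itself). A small sketch, with placeholder IDs:

```python
def enclosing_scopes(resource_id: str):
    """Given a full Azure resource ID, return the resource-group and
    subscription scopes that contain it."""
    parts = resource_id.strip("/").split("/")
    # Layout: subscriptions/<sub>/resourceGroups/<rg>/providers/...
    sub_scope = "/" + "/".join(parts[:2])
    rg_scope = "/" + "/".join(parts[:4])
    return rg_scope, sub_scope

rid = ("/subscriptions/00000000-0000-0000-0000-000000000000"
       "/resourceGroups/rg-app/providers/Microsoft.Sql/servers/db01")
rg, sub = enclosing_scopes(rid)
print(rg)
print(sub)
```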
One important factor to keep in mind when managing these scopes is the difference between an Azure subscription and a management group. A management group cannot contain Azure resources directly; it can only contain other management groups or subscriptions. Azure management groups provide a level of organization above Azure subscriptions.
You will manage resource groups through the “Azure Resource Manager”. Benefits of the Azure Resource Manager include the ability to manage your infrastructure in a visual UI rather than through scripts; tagging management; deployment templates; and simplified role-based access control.
You can organize your resource groups for securing, managing, and tracking the costs related to your workflows.
Group structures like Azure’s exist at the other big public clouds — AWS, for example, offers optional Resource Groups, and Google Cloud “projects” define a level of grouping that falls someplace between Azure subscriptions and Azure Resource Groups.
Tips for Using Resource Groups
When organizing your resource groups, it is essential to understand that all the resources in a group should share the same lifecycle. For instance, if an application requires several resources that need to be updated together, such as a SQL database, a web app, and a mobile app, then it makes sense to group these resources in the same resource group. However, dev/test, staging, and production environments should use different resource groups, as the resources in those environments have different lifecycles.
Other things to consider when building your Azure list of resource groups:
- Resources can be added to or removed from an Azure Resource Group. However, every resource must belong to a resource group, so if you remove a resource from one group, you should add it to another.
- Keep in mind, not all resources can be moved to different resource groups.
- Azure resource group regions: the resources you include in a resource group can be located in different Azure regions.
- Grant access with resource groups: you should use resource groups to control access to your resources – more on this below.
How to Use Azure Resource Groups Effectively for Governance
Azure resource groups are a handy tool for role-based access control (RBAC). Typically, you will want to grant user access at the resource group level – groups make this simpler to manage and provide greater visibility.
Azure resource group permissions help you follow the principle of least privilege. Users, processes, applications, and devices can be provided with the minimum permissions needed at the resource group level, rather than at the management group or subscription levels. For example, a policy relating to encryption key management can be applied at the management group level, while a start/stop scheduling policy might be applied at the resource group level.
Effective use of tagging allows you to identify resources for technical, automation, billing, and security purposes. Tags can extend beyond resource groups, which allows you to use tags to associate groups and resources that belong to the same project, application, or service. Be sure to apply tagging best practices, such as requiring a standard set of tags to be applied before a resource is deployed, to ensure you’re optimizing your resources.
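A minimal sketch of a pre-deployment tag check; the required tag set here is an assumption for illustration, not an Azure-defined standard:

```python
# An assumed standard tag set; adjust to your organization's policy.
REQUIRED_TAGS = {"owner", "env", "project"}

def missing_tags(resource_tags: dict) -> set:
    """Return the required tag keys absent from a resource's tags."""
    return REQUIRED_TAGS - resource_tags.keys()

# Block (or flag) a deployment that lacks the standard tags:
tags = {"owner": "data-team", "env": "staging"}
gaps = missing_tags(tags)
if gaps:
    print(f"refusing to deploy: missing tags {sorted(gaps)}")
```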
Azure Resource Groups Simplify Cost Management
Azure Resource Groups also provide a ready-made structure for cost allocation — resource groups make it simpler to identify costs at a project level than just relying on Azure subscriptions. Additionally, you can use groups to manage resource scheduling and, when they’re no longer needed, termination.
You can do this manually, or through your cost optimization platform such as ParkMyCloud. Continuous cost control comes from actual action – which is what ParkMyCloud provides you through a simple UI (with full RBAC), smart recommendations with one-click remediation, and an automatic policy engine that can schedule your resources by default based on your tagging or naming conventions. For almost all Azure users, this means automatic assignment to teams, so you can provide governed user access to ParkMyCloud. It also means you can set on/off schedules at the group level, to turn your non-production groups off when they’re not needed to help you reduce cloud waste and maximize the value of your cloud. Start a trial today to see the automation in action.
While going through our recent Cloud Cost Optimization Competency review with AWS, one of the things they asked us to do was remove the ability for customers to sign up for our service using AWS IAM User credentials. They loved the fact that we already supported AWS IAM Role credentials, but their concern was that AWS IAM User credentials could conceivably be stolen and used from outside AWS by anyone. (I say inconceivable, but hey, it is AWS.) This was a bit of a bitter pill to swallow, as some customers find IAM Users easier to understand and manage than IAM Roles. The #1 challenge of any SaaS cloud management platform like ours is customer onboarding, where every step in the process is one more hurdle to overcome.
While we could debate how difficult it would be to steal a customer cloud credential from our system, the key (pun intended) question here is: why is an IAM Role preferred over an IAM User?
Before answering that question, I think it is important to understand that an IAM Role is not a “role” in perhaps the traditional sense of Active Directory or LDAP. An AWS IAM Role is not something that is assigned to a “User” as a set of permissions – it is a set of capabilities that can be assumed by some other entity. Like putting on a hat, you only need it at certain times, and it is not like it is part of who you are. As AWS defines the difference in their FAQ:
An IAM user has permanent long-term credentials and is used to directly interact with AWS services. An IAM role does not have any credentials and cannot make direct requests to AWS services. IAM roles are meant to be assumed by authorized entities, such as IAM users, applications, or an AWS service such as EC2.
(The first line of that explanation alone has its own issues, but we will come back to that…)
The short answer for SaaS is that a customer IAM Role credential can only be used by servers running from within the SaaS provider’s AWS Account…and IAM User credentials can be used by anyone from anywhere. By constraining the potential origin of AWS API calls, a HUGE amount of risk is removed, and the ability to isolate and mitigate any issues is improved.
What is SaaS?
Software as a Service (SaaS) means different things to different vendors. Some vendors claim to be “SaaS” for their pre-built virtual machine images that you can run in your cloud – maybe an intrusion detection system or a piece of a cloud management system. In my (truly humble) opinion this is not SaaS – this is just another flavor of “on-prem” (on-premises), where you are running someone’s software in your environment. Call it “in-cloud” if you do not want to call it “on-prem”, but it is not really SaaS, and it does not face the challenge a “true” SaaS product does: coming in from the outside. A core component of SaaS is that it is centrally hosted – outside your cloud. For an internal service, you might relax permissions and access mechanisms somewhat, since you have total control over data ingress/egress; a service running IN your network is not the same thing as external access, which is the epitome of SaaS. Anyway: </soapbox>. (Or maybe </rant> depending on the tone you picked up along the way…)
The kind of SaaS I am focusing on for this blog is SaaS for cloud management, which can include cloud diagramming tools, configuration management tools, storage management and backup tools, or cost optimization tools like ParkMyCloud.
AWS has enabled SaaS for secure cloud management more than any other cloud provider. A bold statement, but let’s break it down a bit. We at ParkMyCloud help our customers optimize their expenses at all of the major cloud providers, so obviously all the providers allow for access from “outside”. Whether it is an Azure subscription, a GCP project, or an Alibaba account, these CSPs are chiefly focused on customers’ internal cross-domain access – i.e., the ability of the “parent” account to see and manage the “child” accounts. Management within an organization. But AWS truly acknowledges and embraces SaaS.
You could attribute my bold statement to an aficionado/fanboi notion of AWS having a bigger ecosystem vision, or more specifically that they simply have a better notion of how the Real World works, and how that has evolved in The Cloud. The fact is that companies buy IT products from other companies…and in the cloud that enables this thing called Software as a Service, or SaaS. All of the cloud providers have enabled SaaS for cloud access, but AWS has enabled SaaS for more secure cloud access.
AWS IAM Cross-account Roles
So…where was I? Oh…right…Secure SaaS access.
OK, so AWS enables cross-account access. You can see this in the IAM Create Role screen in the AWS Console:
If your organization owns multiple AWS accounts (inside or outside of an AWS “organization”), cross-account access allows you to use a parent account to manage multiple child accounts. For SaaS, cross-account access allows a 3rd-party SaaS provider to see/manage/do stuff with/for your accounts.
Looking a little deeper into this screen, we see that cross-account access requires you to specify the target account for the access:
The cross-account role allows you to explicitly state which other AWS account can use this role. More specifically: which other AWS account can assume this role.
But there is an additional option here talking about requiring an “external ID”…what is that about?
Within multiple accounts in a single organization, this may allow you to differentiate between multiple roles between accounts….maybe granting certain permissions to your DevOps folks…other permissions to Accounting…and still other permissions to IT/network management.
If you are a security person, AWS has some very interesting discussions about the “confused deputy” problem mentioned on this screen. It discusses how a hostile 3rd party might guess the ARN used to leverage this IAM Role, and states that “AWS does not treat the external ID as a secret” – which is all totally true from the AWS side. But summing it up: cross-account IAM Roles’ external IDs do not protect you from insider attacks. For an outsider, the External ID is as secret as the SaaS provider makes it.
Looking at it from the external SaaS side, we get a bit of a different perspective. For SaaS, the External ID allows for multiple entry points and/or a pre-shared secret. At ParkMyCloud (and probably most other SaaS providers) we only need one entry point, so we lean toward the pre-shared-secret side of things. When we, and other security-conscious SaaS providers, ask for access, we request an account credential, explicitly giving our AWS account ID and an External ID that is unique for the customer. For example, in our UI, you will see our account ID and a customer-unique External ID.
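The trust policy behind such a cross-account role can be sketched as follows; the account ID and External ID below are placeholders:

```python
import json

# Placeholders: the SaaS provider's AWS account and the per-customer
# External ID it hands out (a pre-shared secret from the customer's view).
SAAS_ACCOUNT_ID = "111111111111"
EXTERNAL_ID = "customer-unique-external-id"

# Trust policy for the customer-side role: only the SaaS provider's
# account may assume it, and only when it presents the External ID.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": f"arn:aws:iam::{SAAS_ACCOUNT_ID}:root"},
        "Action": "sts:AssumeRole",
        "Condition": {"StringEquals": {"sts:ExternalId": EXTERNAL_ID}},
    }],
}
print(json.dumps(trust_policy, indent=2))
```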
Assume Role…and hacking SaaS
If we look back at the definition of the AWS IAM Role, we see that IAM roles are meant to be assumed by authorized entities. For an entity to assume a role, that party has to be an AWS entity that has the sts:AssumeRole permission for the account in which it lives. Breaking that down a bit, the sts component of this permission tells us it comes from the AWS Security Token Service, which can handle whole chains of delegation of permissions. For ParkMyCloud, we grant our servers in AWS an IAM Role that has the sts:AssumeRole permission for our account. In turn, this allows our servers to use the customer account ID and External ID to request permission to “assume” our limited-access role to manage a customer’s virtual machines.
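A sketch of what that AssumeRole request looks like; the role ARN, session name, and External ID are illustrative placeholders:

```python
def assume_role_request(role_arn: str, external_id: str, session: str) -> dict:
    """Parameters for an STS AssumeRole call (all values illustrative)."""
    return {
        "RoleArn": role_arn,
        "RoleSessionName": session,
        "ExternalId": external_id,
        "DurationSeconds": 3600,  # short-lived, unlike IAM User keys
    }

req = assume_role_request(
    "arn:aws:iam::222222222222:role/ParkMyCloudLimitedAccess",
    "customer-unique-external-id",
    "saas-worker-1",
)
# With boto3 and credentials configured, the call would look like:
#   creds = boto3.client("sts").assume_role(**req)["Credentials"]
print(req)
```

The temporary credentials returned by STS expire on their own, which is exactly the property that makes this safer than a permanent IAM User key.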
From the security perspective, this means that if a hostile party wanted to leverage SaaS to get access to a SaaS customer’s cloud account via a cross-account IAM Role, they would need to:
- Learn an account ID for a target organization
- Find a SaaS provider leveraged by that target organization
- Hack the SaaS enough to learn the External ID component of the target customer account credentials
- Completely compromise one of the SaaS servers within AWS, allowing for execution of commands/APIs to the customer account (also within AWS), using the account ID, External ID, and Assume Role privileges of that server to gain access to the customer account.
- Have fun with the SaaS customer’s cloud, but ONLY from that SaaS server.
So….kind-of a short recipe of what is needed to hack a SaaS customer. (Yikes!) But this is where your access privileges come in. The access privileges granted via your IAM role determine the size of the “window” through which the SaaS provider (or the bad guys) can access your cloud account. A reputable SaaS provider (ahem) will keep this window as small as possible, commensurate with the Least Privilege needed to accomplish their mission.
Also – SaaS services are updated often enough that the service might have to be penetrated multiple times to maintain access to a customer environment.
So why are AWS IAM Users bad?
Going back to the beginning, our quote from AWS stated that “An IAM user has permanent long-term credentials and is used to directly interact with AWS services”. There are a couple of frightening things here.
“Permanent long-term credentials” means that unless you have done something pretty cool with your AWS environment, that IAM User credential does not expire. An IAM User credential consists of a Key ID and Secret Access Key (an AWS-generated pre-shared secret) that are good until you delete them.
“…directly interact with AWS services” means that they do not have to be used from within your AWS account. Or from any other AWS account. Or from your continent, planet, galaxy, dimension, etc. That Key ID and Secret can be used by anyone and anywhere.
From the security perspective, this means that if a hostile party wanted to leverage SaaS to get access to a SaaS customer’s cloud account via an IAM User, they would need to:
- Learn an account ID for a target organization
- Find a SaaS provider leveraged by that target organization
- Hack the SaaS enough to get the IAM User credentials.
- Have fun…from anywhere.
So this list may seem only a little shorter, but the barriers to compromise are somewhat lower, and the window for long-term compromise is MUCH longer. Any new protections or updates for the SaaS servers have no impact on an existing compromise. The horse has bolted, so shutting the barn door will not help at all.
What if the SaaS provider is not in AWS? Or…what if *I* am not in AWS?
The other cloud providers provide some variation of an access identifier and a pre-shared secret. Unlike AWS, both Azure and Google Cloud credentials can be created with expiration dates, somewhat limiting the window of exposure. Google does a great job of describing their process for Service Accounts here. In the Azure console, service accounts are found under Azure AD > App registrations > All apps > App details > Settings > Keys, and passwords can be set to expire in 1 year, 2 years, or never. I strongly recommend you set reminders someplace for these expiration dates, as it can be tricky to debug an expired service account password for SaaS.
For all providers you can also limit your exposure by setting a very limited access role for your SaaS accounts, as we describe in our other blog here.
Azure does give SaaS providers the ability to create secure “multi-tenant” apps that can be shared across multiple customers. However, the APIs for SaaS cloud management typically flow in the other direction, reaching from the SaaS provider into the customer environment, rather than the other way around.
IAM Role – the Clear Winner
Fortunately, when AWS “strongly recommended” that we discontinue support for AWS IAM User-based permissions, we already supported an upgrade path, allowing our customers to migrate from IAM User to IAM Role without losing any account configuration (phew!). We have found some scenarios where IAM Role cannot be used – such as between the AWS partitions of AWS global, AWS China, and AWS GovCloud (US). For GovCloud, we support ParkMyCloud SaaS by running another “instance” of ParkMyCloud from within GovCloud, where cross-account IAM Role is supported.
With the additional security protections provided for cross-account access, AWS IAM Role access is the clear winner for SaaS access, both within AWS and across all the various cloud providers.
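Those cross-account protections live in the role's trust policy. A minimal sketch (the account ID and ExternalId values below are placeholders I chose for illustration, not ParkMyCloud's actual values): only the named SaaS provider account, presenting the agreed-upon ExternalId, is allowed to assume the role, and the temporary credentials it receives expire automatically.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": "sts:AssumeRole",
      "Condition": {
        "StringEquals": { "sts:ExternalId": "example-external-id" }
      }
    }
  ]
}
```

Contrast this with an IAM User key, which carries no such restrictions and never expires on its own.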
The principle of least privilege is important to understand and follow as you adopt SaaS technologies. The market for SaaS-based tools is growing rapidly, as such tools can typically be activated much more quickly and cheaply than a special-purpose virtual machine within your cloud environment. In this blog, I am focusing specifically on SaaS cloud management tools, which include services like cloud diagramming tools, configuration management tools, storage management and backup tools, and cost optimization tools like ParkMyCloud.
Why the Principle of Least Privilege is Important
Before you start using such tools and services, you should carefully consider how much access you are granting into your cloud. The principle of least privilege is a fundamental tenet of any identity and access control policy, and basically means a service or user should have no more permissions than absolutely required to do its job.
Cloud account privileges are typically granted via roles, which consist of pre-packaged sets of permissions. All of the cloud providers offer numerous predefined roles. Before granting any requested predefined role to a 3rd party, you should really investigate the permissions or security policy embedded in that role. In many (most?) cases, you are likely to find that the predefined roles give away a lot more information or capability than you really want.
SaaS Onboarding – Where Least Privilege Can Get Lost
For on-boarding of new SaaS customers, the initial permissions setup is often the most complicated step, and some SaaS cloud management platforms try to simplify the process by asking for one of these predefined roles – for example, the Amazon ReadOnlyAccess role, the Azure Reader role, or the GCP roles/viewer role. While this certainly makes onboarding easier, it also exposes you to a massive data leakage problem. With the Amazon ReadOnlyAccess role, a cloud diagramming tool can certainly get a good enough view of your cloud to create a map…but you are also granting read access to all of your IAM Users, CloudTrail events and history, any S3 objects you have not locked down with a distinct bucket policy, and…lots of other stuff you probably do not even know you have. It is kinda like saying: “Here, please come on in and look at all of our confidential file cabinets – and it is OK for you to make copies of anything interesting, just please do not change any of our secrets to something else…” No problem, right?
Obviously, least privilege becomes especially critical when giving permissions to a SaaS provider, given the risk of trusting your cloud environment to some unknown party.
Custom Policies for SaaS
Because of the broad nature of many of their predefined roles, all of the major cloud providers give you the ability to assign specific permissions to both internal and external users through Policies. For example, the following policy snippets show the minimum permissions ParkMyCloud requests to list, start, and stop virtual machines on AWS, Google, and Azure.
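As a rough illustration of the shape such a custom policy takes, here is an AWS-flavored sketch of a list/start/stop policy (this is my illustration of the pattern, not ParkMyCloud's published policy – consult the provider's own documentation for the exact permissions they request):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "ec2:DescribeInstances",
        "ec2:StartInstances",
        "ec2:StopInstances"
      ],
      "Resource": "*"
    }
  ]
}
```

Three actions, nothing more – compare that to the hundreds of read permissions bundled into ReadOnlyAccess.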
Creating and assigning these permissions makes SaaS onboarding a bit more complicated, but it is worth the effort in terms of reducing your exposure.
Other Policy Restrictions
What if you want to give a SaaS provider permissions, but lock it down to only certain resources or certain regions? AWS and Azure allow you to specify in the policy which resources the policy applies to. Google Cloud…not so much. AWS takes this the farthest, allowing for very robust policies down to specific services, plus tag-based conditions on the policy permissions, for example:
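A sketch of such a policy (reconstructed to match the tag name, value, and region discussed here; treat it as illustrative rather than an exact production policy):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["ec2:StartInstances", "ec2:StopInstances"],
      "Resource": "arn:aws:ec2:us-east-1:*:instance/*",
      "Condition": {
        "StringEquals": { "ec2:ResourceTag/parkmycloud": "yes" }
      }
    },
    {
      "Effect": "Allow",
      "Action": "ec2:DescribeInstances",
      "Resource": "*"
    }
  ]
}
```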
This policy locks down the Start and Stop permissions to only those instances that have the tag name/value parkmycloud: yes and are located in the us-east-1 region. Similar Conditions can be used to restrict by instance type and many other attributes. (This recent announcement shows another way to handle the region restriction.)
Azure has somewhat similar features, though with a slightly different JSON layout, as described here. It does not appear you can use resource tags this way in Azure, nor does Azure provide an easy way to limit the geographic scope of permissions. You can approximate restriction by location or grouping of resources using Azure Management Groups, but that is not quite as flexible as an arbitrary tag-based system, and is really intended to aggregate resources across subscriptions rather than to narrow scope within a subscription. That said, the Azure permissions defined here are a bit more granular than those of AWS, which allows for more specificity when it is needed, but can no doubt grow tedious to list and manage.
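To show that different JSON layout, here is a hedged sketch of an Azure custom role definition granting only read/start/stop on virtual machines (the role name and subscription ID are placeholders I chose; the action strings are real Azure resource provider operations):

```json
{
  "Name": "VM Operator (custom)",
  "IsCustom": true,
  "Description": "Can read, start, and deallocate virtual machines only.",
  "Actions": [
    "Microsoft.Compute/virtualMachines/read",
    "Microsoft.Compute/virtualMachines/start/action",
    "Microsoft.Compute/virtualMachines/deallocate/action"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/subscriptions/00000000-0000-0000-0000-000000000000"
  ]
}
```

Note that scope is expressed via AssignableScopes (subscription, resource group, or resource) rather than via tags or regions.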
Google Cloud provides a long list of predefined roles here, with an excellent listing of the contained permissions. There is also an interesting page describing the taxonomy of the permissions here, but Google Cloud makes it a bit difficult to enumerate and understand the permissions individually, outside of the predefined roles. Google does not provide any tag- or resource-based restrictions, apart from assignment at the Project level. More on user management and roles by cloud provider in this blog.
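For comparison, a GCP custom role is just a flat list of permissions, typically supplied to `gcloud iam roles create` as YAML. A sketch (the permission names are real Compute Engine permissions; the title and the particular selection are my illustration):

```yaml
# vm-operator-role.yaml — create with something like:
#   gcloud iam roles create vmOperator --project=PROJECT_ID --file=vm-operator-role.yaml
title: VM Operator
description: List, start, and stop Compute Engine instances only.
stage: GA
includedPermissions:
- compute.instances.list
- compute.instances.get
- compute.instances.start
- compute.instances.stop
```

The role can then only be bound at the project (or folder/organization) level – there is no per-resource or per-tag condition comparable to the AWS example above.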
You may note that the ec2:Describe permission in our last example does not have the tag-based restriction. This is because tag-based restrictions can only be used with certain permissions, as shown in the AWS documentation. Note also that some APIs can perform several different operations, some of which you may be OK with sharing, and others not. For example, the AWS ec2:ModifyInstanceAttribute permission allows the API user to change the instance type. But this one API (and its associated permission) also allows the API user to modify security group assignments, shutdown behavior, and other attributes – things you may not want to share with an untrusted 3rd party.
Key takeaway here? Look out for permissions that may have unexpected consequences.
Beware of SaaS cloud management providers who ask for simple predefined roles from your cloud provider. Either they are delivering a LOT more functionality than you are likely to want from a single provider, or they are asking for far more permissions than they need. Ask for a “limited access policy” that gives the SaaS provider ONLY what it needs, and look for documentation that defines these permissions and ties them back to what the SaaS provider is doing for you.
These limited access policies serve to limit your exposure to accidents or compromises at the SaaS provider.