What are AWS IAM Roles?
Within the AWS Identity and Access Management (IAM) system, there are a number of identity mechanisms that can be configured to secure your AWS environment, such as Users, Groups, and IAM Roles. Users are clearly the humans in the picture, and Groups are collections of Users, but Roles can be a bit more obscure. A Role is defined as a set of permissions that grants access to actions and resources in AWS. Unlike a User, which is tied to a specific identity in a specific AWS account, an IAM Role can be assumed by IAM User accounts or by services within AWS, and can even give access to Users from another account altogether.
To better understand Roles, I like the metaphor of a hat. When we say a Role is assumed by a user – it is like saying someone can assume certain rights or privileges because of what hat they are wearing. In any company (especially startups), we sometimes say someone “wears a lot of hats” – meaning that person temporarily takes on a number of different Roles, depending on what is needed. Mail delivery person, phone operator, IT support, code developer, appliance repairman…all in the space of a couple hours.
IAM Roles are similar to wearing different hats in that they temporarily let an IAM User or a service get permissions to do things they would not normally be allowed to do. These permissions are attached to the Role itself, and are conveyed to anyone or anything that assumes the Role. Like Users, Roles have credentials that can be used to authenticate the Role's identity.
Here are a couple ways in which you can use IAM Roles to improve your security:
All too often, we see software products that rely on credentials (username/password) for services or accounts that are either hard-coded into an application or written into some file on disk. Frequently the developer had no choice, as the system had to be able to automatically restart and reconnect when the machine rebooted, without anyone available to manually type in credentials. If the code is examined or the file system is compromised, the credentials are exposed and can potentially be used to compromise other systems and services. In addition, such embedded credentials make it really difficult to periodically change the password. Even in AWS we sometimes see developers hard-code API key IDs and secret keys into apps in order to get access to some AWS service. This is a security accident waiting to happen, and can be avoided through the use of IAM Roles.
With AWS, we can assign a single IAM Role to an EC2 instance. This assignment is usually made when the instance is launched, but can also be done at runtime if needed. Applications running on the server retrieve the Role’s security credentials by pulling them out of the instance metadata through a simple web command. These credentials have an additional advantage over potentially long-lived, hard-coded credentials, in that they are changed or rotated frequently, so even if somehow compromised, they can only be used for a brief period.
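The "simple web command" above refers to the EC2 instance metadata service, which serves the Role's temporary credentials at a documented path. Here is a minimal Python sketch; the credentials path is the documented IMDS endpoint, while the helper function and the field names it extracts into are just one way to handle the response:

```python
import json

# Base URL of the EC2 instance metadata service (IMDSv1 shown for brevity;
# production code should prefer IMDSv2's session-token flow).
METADATA_BASE = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

def parse_role_credentials(raw_json: str) -> dict:
    """Extract the short-lived credentials from the metadata response."""
    doc = json.loads(raw_json)
    return {
        "access_key": doc["AccessKeyId"],
        "secret_key": doc["SecretAccessKey"],
        "token": doc["Token"],        # session token; required with temporary creds
        "expires": doc["Expiration"], # AWS rotates the credentials before this time
    }

# On an EC2 instance you would fetch the document like this:
#   from urllib.request import urlopen
#   role_name = urlopen(METADATA_BASE).read().decode()
#   creds = parse_role_credentials(urlopen(METADATA_BASE + role_name).read().decode())
```

Because the credentials expire and are rotated automatically, nothing long-lived ever needs to be written to disk.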
Another key security advantage of Roles is that they can be limited to just the access privileges needed to get a specific job done. Amazon's documentation for Roles gives the example of an application that only needs to be able to read files out of S3. In this case, one can assign a Role that contains read-only permissions for a specific S3 bucket, and the Role's configuration can specify that the Role can only be used by EC2 instances. This is an example of the security principle of "least privilege," where the minimum privileges necessary are assigned, limiting the risk of damage if the credential is compromised. In the same sense that you would not give all of your users "Administrator" privileges, you should not create a single "Allow Everything" Role that you assign everywhere. Instead, create a different Role specific to the needs of each system or group of systems.
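Concretely, such a least-privilege Role combines a trust policy (who may assume the Role) with a permissions policy (what the Role may do). Here is a minimal sketch in Python; the bucket name "my-app-data" and role name are placeholders, not names from this article:

```python
import json

# Trust policy: only the EC2 service may assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy: read-only access to a single, specific bucket.
permissions_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::my-app-data",    # the bucket itself (for ListBucket)
            "arn:aws:s3:::my-app-data/*",  # the objects in it (for GetObject)
        ],
    }],
}

# With boto3, the role could then be created like so:
#   iam = boto3.client("iam")
#   iam.create_role(RoleName="S3ReadOnlyForEC2",
#                   AssumeRolePolicyDocument=json.dumps(trust_policy))
```

Nothing in either policy grants write or delete access, so a compromised instance could at worst read this one bucket.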
Sometimes one company needs to give another company access to its resources. Before IAM Roles (and before AWS), the common ways to do that were to share account logins (with the same issues identified earlier for hard-coded credentials) or to use complicated PKI/certificate-based systems. If both companies are using AWS, sharing access is much easier with Role-based delegation. There are several ways to configure IAM Roles for delegation, but for now we will just focus on delegation between accounts from two different organizations.
At ParkMyCloud, our customers use Delegation to let us read the state of their EC2, RDS, and scaling group instances, and then start and stop them per the schedules they configure in our management console.
To configure Role delegation, a customer first creates an account with the service provider, and is given the provider's AWS Account ID and an External ID. The External ID is a unique identifier for each customer, generated by the service provider.
The administrator of the customer environment creates an IAM Policy with a constrained set of access (the principle of "least privilege" again), and then assigns that policy to a new Role (such as "ParkMyCloudAccess"), specifically tied to the provider's Account ID and External ID. When done, the resulting IAM Role is given an Amazon Resource Name (ARN), a unique string that identifies the Role. The customer then enters that ARN in the service provider's management console, which is then able to assume the Role. As in the EC2 example, when the ParkMyCloud service needs to start a customer EC2 instance, it calls the AssumeRole API, which verifies that our service is properly authenticated and returns the temporary security credentials needed to manage the customer environment.
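On the provider side, the AssumeRole call might be sketched like this in Python with boto3. The role ARN and External ID below are placeholders; the customer supplies the real values when they complete the setup:

```python
# Build the parameters for the STS AssumeRole call. Separating this out
# makes the External ID requirement explicit and easy to test.
def build_assume_role_params(role_arn: str, external_id: str) -> dict:
    return {
        "RoleArn": role_arn,
        "RoleSessionName": "provider-session",
        "ExternalId": external_id,   # must match the ID in the role's trust policy
        "DurationSeconds": 900,      # short-lived credentials (15 minutes)
    }

# In a real provider service:
#   import boto3
#   sts = boto3.client("sts")
#   resp = sts.assume_role(**build_assume_role_params(
#       "arn:aws:iam::123456789012:role/ParkMyCloudAccess",  # placeholder ARN
#       "example-external-id"))
#   creds = resp["Credentials"]  # AccessKeyId / SecretAccessKey / SessionToken
```

If the External ID does not match the one baked into the Role's trust policy, STS refuses the call, which is what prevents one customer's ARN from being usable by anyone else.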
AWS IAM Roles make some tasks a lot simpler by flexibly assigning roles to instances and other accounts. IAM Roles can help make your environment more secure by:
- Using the principle of least privilege in IAM policies, limiting systems and services to only the access needed to do a specific job.
- Preventing hard-coding of credentials in code or files, minimizing the danger of exposure and removing the risk of long-unchanged passwords.
- Minimizing shared accounts and passwords by allowing controlled cross-account access.
These days, there’s a huge range of companies using cloud computing, especially public cloud. While your infrastructure size and range of services used may vary, there are a few things every organization should keep in mind. Here are the top 3 we recommend for anyone in your organization who touches your cloud infrastructure.
Keep it Secure
OK, so this one is obvious, but it bears repeating every time. Keep your cloud access secure.
For one, make sure your cloud provider keys don’t end up on GitHub… it’s happened too many times.
(There are a few open source tools out there that can help scan your GitHub repositories for this very problem; check out AWS Labs' git-secrets.)
Organizations should also enforce user governance and use Role-Based Access Control (RBAC) to ensure that only the people who need access to specific resources can access them.
Keep Costs in Check
There’s an inherent problem created when you make computing a pay-as-you-go utility, as public cloud has done: it’s easy to waste money.
First of all, the default for computing resources is that they're "always on" unless you specifically turn them off. That means you're always paying for them, whether or not they're doing useful work.
Additionally, over-provisioning is prevalent: 55% of all public cloud resources are not correctly sized for their workloads. The last problem is perhaps the most brutal: 15% of spend goes to resources that are no longer used at all. It's like discovering that you're still paying for that gym membership you signed up for last year, even though you haven't set foot inside. Completely wasted money.
In order to keep costs in check, companies using cloud computing need to ensure they have cost controls in place to eliminate and prevent cloud waste – which, by the way, is the problem we set out to solve when we created ParkMyCloud.
Keep Learning
Third, companies should ensure that their IT and development teams continue their professional development on cloud computing topics, whether by taking training courses or attending local Meetup groups to network with and learn from peers. We have a soft spot in our hearts for our local AWS DC Meetup, which we help organize, but there are great meetups in cities across the world on AWS, Azure, Google Cloud, and more.
Better yet, go to the source itself. Microsoft Azure has a huge events calendar, though AWS re:Invent is probably the biggest event of all. It's an enormous gathering for learning, training, and announcements of new products and services (and it's pretty fun, too).
We’re a sponsor of AWS re:Invent 2017 – let us know if you’re going and would like to book time for a conversation or demo of ParkMyCloud while you’re there, or just stop by booth #1402!
Among the variety of AWS services and functionality, AWS Lambda seems to be taking off with hackers and tinkerers. The idea of “serverless” architecture is quite a shift in the way we think about applications, tools, and services, but it’s a shift that is opening up some new ideas and approaches to problem solving.
If you haven’t had a chance to check out Lambda, it’s a “function-as-a-service” platform that allows you to run scripts or code on demand, without having to set up servers with the proper packages and environments installed. Your Lambda function can be triggered by a variety of sources and events, such as HTTP requests, API calls, S3 bucket changes, and more. The function can scale up automatically, so more compute resources are used when necessary without any human intervention. The code can be written in Node.js, Python, Java, and C#.
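As a minimal illustration, here is what a Python handler for an S3 "object created" notification looks like. The event shape follows AWS's documented S3 notification format; the bucket and object names come from whatever event actually fires:

```python
# A minimal Lambda handler for S3 "object created" events.
# Lambda invokes this function with the notification event; print output
# lands in CloudWatch Logs.
def lambda_handler(event, context):
    processed = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]
        print(f"New object: s3://{bucket}/{key}")
        processed.append((bucket, key))
    return {"processed": len(processed)}
```

There is no server to configure here: AWS provisions the runtime, invokes the handler per event, and scales out automatically when events arrive in parallel.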
Some pretty cool ideas already exist for Lambda functions to automate processes. One example from AWS is responding to a GitHub event to trigger an action, such as the next step in a build process. There’s also a guide on how to use React and Lambda to make an interactive website that has no server.
For those of you who are already using ParkMyCloud to schedule resources, you may be looking to plug in to your CI/CD pipeline to achieve Continuous Cost Control. I’ve come up with a few ideas of how to use Lambda along with ParkMyCloud to supercharge your AWS cloud savings. Let’s take a look at a few options:
Make ParkMyCloud API calls from Lambda
With ParkMyCloud’s API available to control your schedules programmatically, you could make calls to ParkMyCloud from Lambda based on events that occur. The API allows you to do things like list resources and schedules, assign schedules to resources, snooze schedules to temporarily override them, or cancel a snooze or schedule.
For instance, if a user logs in remotely to the VPN, it could trigger a Lambda call to snooze the schedules for that user’s instances. Alternatively, a Lambda function could change the schedules of your Auto Scaling Group based on average requests to your website. If you store data in S3 for batch processing, a trigger from an S3 bucket can tell Lambda to notify ParkMyCloud that the batch is ready and the processing servers need to come online.
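A sketch of the VPN-login idea in Python follows. Note that the endpoint path, header name, and payload shape below are illustrative assumptions, not the documented ParkMyCloud API; consult the actual API reference for the real routes and fields:

```python
import json
from urllib import request

# Placeholder base URL; the real API host would come from ParkMyCloud's docs.
API_BASE = "https://api.example-parkmycloud.test"

def build_snooze_request(api_key: str, resource_id: str, hours: int) -> request.Request:
    """Construct (but do not send) a hypothetical snooze request for one resource."""
    body = json.dumps({"snooze": {"hours": hours}}).encode()
    return request.Request(
        url=f"{API_BASE}/resources/{resource_id}/snooze",  # hypothetical route
        data=body,
        headers={"X-Api-Key": api_key, "Content-Type": "application/json"},
        method="POST",
    )

def lambda_handler(event, context):
    # e.g. triggered by a VPN-login event carrying the user's instance IDs
    reqs = [build_snooze_request("API_KEY", rid, 2)
            for rid in event.get("resource_ids", [])]
    # for r in reqs: request.urlopen(r)   # actually send in a real deployment
    return {"queued": len(reqs)}
```

The same pattern (event in, API call out) covers the Auto Scaling Group and S3 batch examples; only the trigger and the endpoint change.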
Send notifications from ParkMyCloud to Lambda
With ParkMyCloud’s notification system, you can send events that occur in the ParkMyCloud system to a webhook or email. The events can be actions taken by schedules that are applied to resources, user actions that are done in the UI, team and schedule assignments from policies, or errors that occur during parking.
By sending schedule events, you could use a Lambda function to tell your monitoring tool when servers are being shut down from schedules. This could also be a method for letting your build server know that the build environment has fully started before the rest of your CI/CD tools take over. You could also send user events to Lambda to feed into a log tool like Splunk or Logstash. Policy events can be sent to Lambda to trigger an update to your CMDB with information on the team and schedule that’s applied to a new server.
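On the receiving end, a webhook target could be a Lambda function behind an API Gateway endpoint. The payload field names below ("eventType", "resource", "message") are assumptions for illustration only; check the actual ParkMyCloud webhook format before relying on them:

```python
import json

# Sketch of a Lambda that receives ParkMyCloud webhook notifications via
# API Gateway's proxy integration (payload arrives as a JSON string in "body").
def lambda_handler(event, context):
    payload = json.loads(event.get("body", "{}"))
    event_type = payload.get("eventType", "unknown")   # assumed field name
    line = f"[pmc:{event_type}] {payload.get('resource', '?')}: {payload.get('message', '')}"
    print(line)  # lands in CloudWatch Logs; a real handler might forward to Splunk/Logstash
    return {"statusCode": 200, "body": json.dumps({"logged": event_type})}
```

From here, forwarding the line to a log aggregator or a CMDB update is just another API call inside the handler.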
Think outside the box!
Are you already using AWS Lambda to kick off functions and run scripts in your environment? Try combining Lambda with ParkMyCloud and let us know what cool tricks you come up with for supercharging your automation and saving on your cloud bill! Stop by Booth 1402 at AWS re:Invent this year and tell us.
Cloud Cost Optimization Platform Vendor Gears Up for Rapid Expansion with New Hire
October 16, 2017 (Dulles, VA) – ParkMyCloud, the leading enterprise platform for continuous cost control in public cloud, announced today that Bill Supernor has joined the team as Chief Technology Officer (CTO). His more than 20 years of leadership experience in engineering and management have included scaling teams and managing enterprise-grade software products, including KoolSpan’s TrustCall secure call and messaging system.
At ParkMyCloud, Supernor will be responsible for product development and software engineering as ParkMyCloud expands its platform, which currently helps enterprises like McDonald’s, Unilever, and Fox control costs on Amazon Web Services (AWS), Microsoft Azure, and Google Cloud, to more clouds and continues to add more services and integrations.
“Bill’s experience in the software industry will be a boon to us as we scale and grow the business,” said ParkMyCloud CEO Jay Chapel. “His years in the software and IT space will be a huge advantage as we grow our engineering team and continue to innovate upon the cost control platform that cloud users need.”
“This is a fast-moving company in a really hot space,” said Supernor. “I’m excited to be working with great people who have passion about what they do.”
Prior to joining ParkMyCloud, Supernor was the CTO of KoolSpan, where he led the development of a globally deployed secure voice communication system for smartphones. He has also served in engineering leadership positions at Trust Digital, Cognio, Symantec, and McAfee/Network Associates, and as an officer in the United States Navy.
ParkMyCloud is a SaaS platform that helps enterprises optimize their public cloud spend by automatically reducing resource waste — think “Nest for the cloud”. ParkMyCloud has helped customers such as McDonald’s, Capital One, Unilever, Fox, and Sage Software dramatically cut their cloud bills by up to 65%, delivering millions of dollars in savings on Amazon Web Services, Microsoft Azure, and Google Cloud Platform. For more information, visit http://www.parkmycloud.com.
Enterprise cloud management is a top priority. As the shift towards multi-cloud environments continues, so has the need to consider the potential challenges. Whether you already use the public cloud, or are considering making the switch, you probably want to know what the risks are. Here are three you should be thinking about.
1. Multi-Cloud Environments
As the ParkMyCloud platform supports AWS, Azure, and Google, we’ve noticed that multi-cloud strategies are becoming increasingly common among enterprises. There are a number of reasons why it would be beneficial to utilize more than one cloud provider. We have discussed risk mitigation as a common reason, along with price protection and workload optimization. As multi-cloud strategies become more popular, the advantages are clear. However, every strategy comes with its challenges, and it’s important for CIOs to be aware of the associated risks.
Without the use of cloud management tools, multi-cloud management is complex and sometimes difficult to navigate. Different cloud providers have different pricing models, product features, APIs, and terminology. Compliance requirements are also a factor that must be considered when dealing with multiple providers: meeting and maintaining requirements for one cloud provider is complicated enough, let alone several. And don’t forget you need a single pane of glass to view your multi-cloud infrastructure.
2. Cost Control
Cost control is a top priority among cloud computing trends. Enterprise Management Associates (EMA) conducted a research study and identified key reasons why there is a need for cloud cost control; among them were inefficient use of cloud resources, unpredictable billing, and contractual obligation or technological dependency.
Managing your cloud environment and controlling costs requires a great deal of time and strategy, taking away from the initiatives your enterprise really needs to be focusing on. The good news is that we offer a solution to cost control that will save 65% or more on your monthly cloud bills – just by simply parking your idle cloud resources. ParkMyCloud was one of the top three vendors recommended by EMA as a Rapid ROI Utility. If you’re interested in seeing why, we offer a 14-day free trial.
3. Security & Governance
In discussing a multi-cloud strategy and its challenges, the bigger picture also includes security and governance. As we have mentioned, a multi-cloud environment is complex, complicated, and requires native or third-party tools to maintain vigilance. Aside from legal compliance requirements for your company’s industry, the cloud also comes with standard security issues and, of course, the possibility of cloud breaches. In this vein, the customers we talk to often worry about too many users being granted console access to create and terminate cloud resources, which can lead to waste. A key here is limiting user access based on roles, i.e., Role-Based Access Control (RBAC). At ParkMyCloud we recognize that visibility and control are important in today’s complex cloud world. That’s why, in designing our platform, we give the sysadmin the ability to delegate access based on a user’s role and to authenticate through SSO using SAML integration. This approach brings security benefits without losing the appeal of a multi-cloud strategy.
Enterprise cloud management is an inevitable priority as the shift towards a multi-cloud environment continues. Multiple cloud services add complexity to the challenges of IT and cloud management. Cost control is time consuming and needs to be automated and monitored constantly. Security and governance is a must and it’s necessary to ensure that users and resources are optimally governed. As the need for cloud management continues to grow, cloud automation tools like ParkMyCloud provide a means to effectively manage cloud resources, minimize challenges, and save you money.