AWS IAM Roles and Ways to Use them to Improve Security

What are AWS IAM Roles?

Within the AWS Identity and Access Management (IAM) system, there are a number of identity mechanisms that can be configured to secure your AWS environment, such as Users, Groups, and IAM Roles. Users are clearly the humans in the picture, and Groups are collections of Users, but Roles can be a bit more obscure. A Role is defined as a set of permissions that grant access to actions and resources in AWS. Unlike Users, which are tied to a specific identity and a specific AWS account, an IAM Role can be assumed by IAM Users or by services within AWS, and can even give access to Users from another account altogether.

To better understand Roles, I like the metaphor of a hat. When we say a Role is assumed by a user, it is like saying someone can assume certain rights or privileges based on the hat they are wearing. In any company (especially startups), we sometimes say someone “wears a lot of hats” – meaning that person temporarily takes on a number of different roles, depending on what is needed: mail delivery person, phone operator, IT support, code developer, appliance repairman…all in the space of a couple of hours.

IAM Roles are similar to wearing different hats in that they temporarily let an IAM User or a service acquire permissions to do things they would not normally get to do. These permissions are attached to the Role itself, and are conveyed to anyone or anything that assumes the role. Like Users, Roles have credentials that can be used to authenticate the Role identity.

Here are a couple ways in which you can use IAM Roles to improve your security:

EC2 Instances

All too often, we see software products that rely on credentials (username/password) for services or accounts that are either hard-coded into an application or written to a file on disk. Frequently the developer had no choice, as the system had to be able to automatically restart and reconnect when the machine rebooted, without anyone available to manually type in credentials. If the code is examined or the file system is compromised, the credentials are exposed, and can potentially be used to compromise other systems and services. In addition, such embedded credentials make it really difficult to periodically change the password. Even in AWS we sometimes see developers hard-code API Key IDs and secret keys into apps in order to get access to some AWS service. This is a security accident waiting to happen, and can be avoided through the use of IAM Roles.

With AWS, we can assign a single IAM Role to an EC2 instance. This assignment is usually made when the instance is launched, but can also be done at runtime if needed. Applications running on the server retrieve the Role’s security credentials by pulling them out of the instance metadata through a simple web command. These credentials have an additional advantage over potentially long-lived, hard-coded credentials, in that they are changed or rotated frequently, so even if somehow compromised, they can only be used for a brief period.
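To make this concrete, here is a minimal sketch of that web command using Python’s standard library. The parsing is illustrative – in practice the AWS SDKs and CLI fetch and refresh these credentials for you automatically, and newer deployments should prefer IMDSv2 with session tokens:

```python
import json
import urllib.request

# EC2 instance metadata endpoint for IAM role credentials.
BASE = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

# The first request returns the name of the role attached to this instance.
role_name = urllib.request.urlopen(BASE, timeout=2).read().decode()

# The second request returns temporary credentials for that role.
creds = json.loads(urllib.request.urlopen(BASE + role_name, timeout=2).read())

# AWS rotates these automatically; note the expiration timestamp.
print(creds["AccessKeyId"], creds["Expiration"])
```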

Another key security advantage of Roles is that they can be limited to just the access privileges needed to get a specific job done. Amazon’s documentation for roles gives the example of an application that only needs to be able to read files out of S3. In this case, one can assign a Role that contains read-only permissions for a specific S3 bucket, and the Role’s trust policy can specify that the role can only be used by EC2 instances. This is an example of the security principle of “least privilege,” where the minimum privileges necessary are assigned, limiting the risk of damage if the credential is compromised. In the same sense that you would not give all of your users “Administrator” privileges, you should not create a single “Allow Everything” Role that you assign everywhere. Instead, create a different Role specific to the needs of each system or group of systems.
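As a sketch of what such a least-privilege Role might look like, the following boto3 snippet creates a role that only EC2 instances can assume, with read-only access to a single bucket. The role and bucket names are made up for illustration, and actually attaching the role to an instance would additionally require an instance profile, omitted here for brevity:

```python
import json
import boto3

# Trust policy: only the EC2 service may assume this role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

# Permissions policy: read-only access to one specific S3 bucket.
s3_read_only = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-bucket",
            "arn:aws:s3:::example-bucket/*",
        ],
    }],
}

iam = boto3.client("iam")
iam.create_role(
    RoleName="S3ReadOnlyExample",
    AssumeRolePolicyDocument=json.dumps(trust_policy),
)
iam.put_role_policy(
    RoleName="S3ReadOnlyExample",
    PolicyName="S3ReadOnly",
    PolicyDocument=json.dumps(s3_read_only),
)
```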

Delegation

Sometimes one company needs to give another company access to its resources. Before IAM Roles (and before AWS), the common ways to do that were to share account logins (with the same issues identified earlier for hard-coded credentials) or to use complicated PKI/certificate-based systems. If both companies are using AWS, sharing access is much easier with Role-based Delegation. There are several ways to configure IAM Roles for delegation, but for now we will just focus on delegation between accounts from two different organizations.

At ParkMyCloud, our customers use Delegation to let us read the state of their EC2, RDS, and scaling group instances, and then start and stop them per the schedules they configure in our management console.

To configure Role Delegation, a customer first creates an account with the service provider, and is given the provider’s AWS Account ID and an External ID. The External ID is a unique number for each customer generated by the service provider.

The administrator of the customer environment creates an IAM Policy with a constrained set of access (the principle of “least privilege” again), and then assigns that policy to a new Role (like “ParkMyCloudAccess”), specifically tied to the provider’s Account ID and External ID. When done, the resulting IAM Role is given an Amazon Resource Name (ARN), a unique string that identifies the role. The customer then enters that ARN in the service provider’s management console, and the provider is then able to assume the role. As in the EC2 example, when the ParkMyCloud service needs to start a customer EC2 instance, it calls the AssumeRole API, which verifies our service is properly authenticated and returns the temporary security credentials needed to manage the customer environment.
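On the provider’s side, the AssumeRole call looks roughly like this sketch (the ARN, External ID, and instance ID are placeholders, not ParkMyCloud’s actual code):

```python
import boto3

# Placeholders: the real ARN comes from the customer, and the External ID
# is the one the provider generated for that customer.
ROLE_ARN = "arn:aws:iam::123456789012:role/ParkMyCloudAccess"
EXTERNAL_ID = "customer-external-id"

sts = boto3.client("sts")
resp = sts.assume_role(
    RoleArn=ROLE_ARN,
    RoleSessionName="provider-session",
    ExternalId=EXTERNAL_ID,
)
creds = resp["Credentials"]  # temporary key, secret, and session token

# Use the temporary credentials to act in the customer's account.
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
ec2.start_instances(InstanceIds=["i-0123456789abcdef0"])
```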

Conclusions

AWS IAM Roles make some tasks a lot simpler by letting you flexibly assign roles to instances and other accounts. IAM Roles can help make your environment more secure by:

  • Applying the principle of least privilege in IAM policies, restricting access to only the systems and services needed to do a specific job.
  • Preventing the hard-coding of credentials in code or files, minimizing the danger of exposure and removing the risk of long-unchanged passwords.
  • Minimizing shared accounts and passwords by allowing controlled cross-account access.

AWS Lambda + ParkMyCloud = Supercharged Automation

Among the variety of AWS services and functionality, AWS Lambda seems to be taking off with hackers and tinkerers. The idea of “serverless” architecture is quite a shift in the way we think about applications, tools, and services, but it’s a shift that is opening up some new ideas and approaches to problem solving.  

If you haven’t had a chance to check out Lambda, it’s a “function-as-a-service” platform that lets you run scripts or code on demand, without having to set up servers with the proper packages and environments installed. Your Lambda function can be triggered by a variety of sources and events, such as HTTP requests, API calls, S3 bucket changes, and more. The function scales up automatically, so more compute resources are used when necessary without any human intervention. The code can be written in Node.js, Python, Java, or C#.
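If you’ve never written one, a Lambda function is just a handler that AWS invokes on each event; here’s a minimal Python sketch:

```python
# Minimal Lambda handler: AWS runs this on every invocation, so there is
# no server to provision, patch, or scale yourself.
def lambda_handler(event, context):
    # 'event' carries the trigger's payload (HTTP request, S3 change, etc.)
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}
```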

Some pretty cool ideas already exist for using Lambda functions to automate processes. One example from AWS is to respond to a GitHub event by triggering an action, such as the next step in a build process. There’s also a guide on how to use React and Lambda to make an interactive website that has no server.

For those of you who are already using ParkMyCloud to schedule resources, you may be looking to plug in to your CI/CD pipeline to achieve Continuous Cost Control.  I’ve come up with a few ideas of how to use Lambda along with ParkMyCloud to supercharge your AWS cloud savings.  Let’s take a look at a few options:

Make ParkMyCloud API calls from Lambda

With ParkMyCloud’s API available to control your schedules programmatically, you could make calls to ParkMyCloud from Lambda based on events that occur.  The API allows you to do things like list resources and schedules, assign schedules to resources, snooze schedules to temporarily override them, or cancel a snooze or schedule.

For instance, if a user logs in remotely to the VPN, it could trigger a Lambda call to snooze the schedules for that user’s instances.  Alternatively, a Lambda function could change the schedules of your Auto Scaling Group based on average requests to your website.  If you store data in S3 for batch processing, a trigger from an S3 bucket can tell Lambda to notify ParkMyCloud that the batch is ready and the processing servers need to come online.
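As a rough sketch of that first idea, here’s a Lambda function that snoozes a schedule when triggered. The endpoint URL, payload, and auth header are hypothetical stand-ins – check the ParkMyCloud API documentation for the real paths and parameters:

```python
import json
import urllib.request

# Hypothetical endpoint and token, for illustration only.
API_BASE = "https://api.parkmycloud.example/v1"
API_TOKEN = "your-api-token"

def lambda_handler(event, context):
    # e.g. a VPN login event identifies the resource whose schedule to snooze
    resource_id = event["resource_id"]
    req = urllib.request.Request(
        f"{API_BASE}/resources/{resource_id}/snooze",
        data=json.dumps({"hours": 4}).encode(),
        headers={
            "Authorization": f"Bearer {API_TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return {"statusCode": resp.status}
```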

Send notifications from ParkMyCloud to Lambda

With ParkMyCloud’s notification system, you can send events that occur in the ParkMyCloud system to a webhook or email.  The events can be actions taken by schedules that are applied to resources, user actions that are done in the UI, team and schedule assignments from policies, or errors that occur during parking.

By sending schedule events, you could use a Lambda function to tell your monitoring tool when servers are being shut down from schedules.  This could also be a method for letting your build server know that the build environment has fully started before the rest of your CI/CD tools take over.  You could also send user events to Lambda to feed into a log tool like Splunk or Logstash.  Policy events can be sent to Lambda to trigger an update to your CMDB with information on the team and schedule that’s applied to a new server.
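On the receiving end, a webhook-triggered Lambda might look something like this sketch; the event types and field names here are illustrative, not the actual ParkMyCloud notification format:

```python
import json

def lambda_handler(event, context):
    # When fronted by API Gateway, the webhook body arrives as a JSON string.
    payload = json.loads(event.get("body", "{}"))
    event_type = payload.get("event_type")

    if event_type == "schedule_stop":
        # e.g. tell your monitoring tool this shutdown is expected
        print(f"Scheduled stop: {payload.get('resource_name')}")
    elif event_type == "parking_error":
        # e.g. forward errors to a log tool like Splunk or Logstash
        print(f"Parking error: {payload.get('message')}")

    return {"statusCode": 200}
```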

Think outside the box!

Are you already using AWS Lambda to kick off functions and run scripts in your environment?  Try combining Lambda with ParkMyCloud and let us know what cool tricks you come up with for supercharging your automation and saving on your cloud bill! Stop by Booth 1402 at AWS re:Invent this year and tell us.

3 Enterprise Cloud Management Challenges You Should Be Thinking About

Enterprise cloud management is a top priority. As the shift towards multi-cloud environments continues, so does the need to consider the potential challenges. Whether you already use the public cloud or are considering making the switch, you probably want to know what the risks are. Here are three you should be thinking about.

1. Multi-Cloud Environments

As the ParkMyCloud platform supports AWS, Azure, and Google, we’ve noticed that multi-cloud strategies are becoming increasingly common among enterprises. There are a number of reasons why it would be beneficial to utilize more than one cloud provider. We have discussed risk mitigation as a common reason, along with price protection and workload optimization. As multi-cloud strategies become more popular, the advantages are clear. However, every strategy comes with its challenges, and it’s important for CIOs to be aware of the associated risks.

Without the use of cloud management tools, multi-cloud management is complex and sometimes difficult to navigate. Different cloud providers have different pricing models, product features, APIs, and terminology. Compliance requirements must also be considered when dealing with multiple providers. Meeting and maintaining requirements for one cloud provider is complicated enough, let alone several. And don’t forget you need a single pane of glass to view your multi-cloud infrastructure.

2. Cost Control

Cost control is a top priority among cloud computing trends. Enterprise Management Associates (EMA) conducted a research study and identified key reasons why there is a need for cloud cost control; among them were inefficient use of cloud resources, unpredictable billing, and contractual obligation or technological dependency.

Managing your cloud environment and controlling costs requires a great deal of time and strategy, taking away from the initiatives your enterprise really needs to focus on. The good news is that we offer a cost control solution that can save 65% or more on your monthly cloud bills – simply by parking your idle cloud resources. ParkMyCloud was one of the top three vendors recommended by EMA as a Rapid ROI Utility. If you’re interested in seeing why, we offer a 14-day free trial.

3. Security & Governance

In discussing a multi-cloud strategy and its challenges, the bigger picture also includes security and governance. As we have mentioned, a multi-cloud environment is complex, and it requires native or third-party tools to maintain vigilance. Aside from legal compliance requirements based on your company’s industry, the cloud also comes with standard security issues and, of course, the possibility of breaches. In this vein, customers we talk to often worry about too many users being granted console access to create and terminate cloud resources, which can lead to waste. A key here is limiting user access based on roles, or Role-Based Access Control (RBAC). At ParkMyCloud we recognize that visibility and control are important in today’s complex cloud world. That’s why, in designing our platform, we give the sysadmin the ability to delegate access based on a user’s role and to authenticate users via SSO using SAML integration. This approach brings security benefits without losing the appeal of a multi-cloud strategy.

Our Solution

Enterprise cloud management is an inevitable priority as the shift towards multi-cloud environments continues. Multiple cloud services add complexity to the challenges of IT and cloud management. Cost control is time consuming and needs to be automated and monitored constantly. Security and governance are a must, and it’s necessary to ensure that users and resources are optimally governed. As the need for cloud management continues to grow, cloud automation tools like ParkMyCloud provide a means to effectively manage cloud resources, minimize challenges, and save you money.

Reduce RDS Costs with ParkMyCloud

Thanks to the ability to stop and start instances, users of Amazon’s database service can finally save time and reduce RDS costs. Until June 2017, the only way to accomplish this was by copying and deleting instances, running the risk of losing transaction logs and automatic backups. While Amazon’s start/stop capability is useful and provides a level of cost savings, it also comes with issues of its own.

For one, the start/stop capability is not foolproof. Stopping and starting non-production RDS instances is a manual process, relying on the user to create and consistently manage the schedule. Being able to switch instances off when they are not in use and restart them when access is needed again is helpful, but it also leaves room for human error. Complicating things further, RDS instances that have been shut down are automatically restarted after seven days, again relying on the user to switch those instances back off if they’re not needed at the time.

Why Scripting is not the Best Answer

One way of minimizing the potential for error is to write your own scripts to automate the stop/start schedule. While that could work, you would need to account for the number of non-production instances deployed on AWS RDS, and plan a schedule that lets developers have access when needed, which could very well be at varying times throughout the day. All factors considered, writing and maintaining scheduling scripts takes extra time and costs money as well. Ultimately, setting up and maintaining your own schedule could increase your cloud spend more than it reduces RDS costs.
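To illustrate the point, here is a bare-bones sketch of such a script, stopping every RDS instance tagged Environment=dev (the tag is made up for illustration; a real version would still need scheduling, restart logic, error handling, and a workaround for the seven-day auto-restart):

```python
import boto3

rds = boto3.client("rds")

# Walk every RDS instance and stop the ones tagged as dev.
for db in rds.describe_db_instances()["DBInstances"]:
    tags = rds.list_tags_for_resource(
        ResourceName=db["DBInstanceArn"]
    )["TagList"]
    is_dev = {"Key": "Environment", "Value": "dev"} in tags

    if is_dev and db["DBInstanceStatus"] == "available":
        rds.stop_db_instance(
            DBInstanceIdentifier=db["DBInstanceIdentifier"]
        )
        print(f"Stopped {db['DBInstanceIdentifier']}")
```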

When you start thinking about the cost of paying developers, the number of scripts that would have to be written, and the ongoing maintenance required, buying into an automated scheduling process is a no-brainer.

How ParkMyCloud Reduces RDS Costs

Automated Scheduling

ParkMyCloud saves you time and money by automating the scheduling process of stopping and starting AWS RDS instances (in addition to Microsoft Azure VMs and Google Cloud Compute instances, but that’s another post). At the same time, you get total visibility and full autonomy over your account.

The process is simple. With you as the account manager, ParkMyCloud conducts a discovery of all the company accounts, and determines which instances are most suitable for parking. From there, you have the option of implementing company-wide schedules for non-production instances, or giving each development team the ability to create schedules of their own.

Flexible Parking

ParkMyCloud takes saving on RDS costs to a whole new level with parking schedules. Different schedules can be applied to different instances, or they can be parked permanently and put on “snooze” when access is needed. Amazon’s seven-day automatic restart of switched off instances is a non-issue with our platform, and snoozed instances can be re-parked when access is no longer needed, so there’s no more relying on the user to do it manually.

For the most part, we find that companies want to park their non-production instances outside normal working hours, say Monday to Friday, 8:00am to 8:00pm. Keeping to that schedule means an instance runs for only 60 of the 168 hours in a week, which is how ParkMyCloud can reduce your cloud spend by 65% – even more if you combine a parking schedule with the snooze option.

Valuable Insight

Because you have total visibility over the account, you can reduce RDS costs even further with a bird’s-eye view of your company’s cloud use. You’ll be able to tell which of your instances are underused, terminate them, and (really soon) benefit further from selecting a cheaper plan. You’ll be able to see all RDS instances across all regions and AWS accounts in one simple view. You can also view the parking schedules for each instance and see how much each schedule is saving, potentially reducing costs even further. This visibility into your account and access to information provides a great resource for budgeting and planning.

Conclusion

The AWS start/stop capability is useful, but the process is manual. Writing your own scripts sounds helpful, but it’s actually time consuming and not fully cost-effective. ParkMyCloud automates the process while still putting you in control, reducing RDS costs and saving you time and money.

See the benefits of ParkMyCloud for yourself by taking advantage of our two-week free trial. Test our cloud cost control platform in your own environment, without any need for a credit card or signed contract, and see why our simple, cost-effective tool is the key to reducing RDS costs. We offer a variety of competitive pricing plans to choose from, or a limited-function version that you can continue to use for free after the trial ends.

To start your free trial today, sign up here.

Cloud Optimization Tools = Cloud Cost Control (Part II)

A couple of weeks ago in Part 1 of this blog topic we discussed the need for cloud optimization tools to help enterprises with the problem of cloud cost control. Amazon Web Services (AWS) even goes as far as suggesting the following simple steps to control your costs (which can also be applied to Microsoft Azure and Google Cloud Platform, with slightly different terminology):

    1. Right-size your services to meet capacity needs at the lowest cost;
    2. Save money when you reserve;
    3. Use the spot market;
    4. Monitor and track service usage;
    5. Use Cost Explorer to optimize savings; and
    6. Turn off idle instances (we added this one).

A variety of third-party tools and services have popped up in the market over the past few years to help with cloud cost optimization – why? Because upwards of $23B was spent on public cloud infrastructure in 2016, and spending continues to grow at a rate of 40% per year. Furthermore, depending on who you talk to, roughly 25% of public cloud spend is wasted or not optimized — that’s a huge market! If left unchecked, this waste problem is projected to triple to over $20B by 2020 – enter the vultures (full disclosure, we are also a vulture, but the nice kind). Most of these tools are lumped under the Cloud Management category, which includes subcategories like Cost Visibility and Governance, Cost Optimization, and Cost Control vendors – we are a cost control vendor, to be sure.

Why do you, an enterprise, care? Because there are unique and subtle differences between the tools that fit into these categories, so your use case should dictate where you go for what – and that’s what I am trying to help you with. So, why am I a credible source on this (and not just because ParkMyCloud is the best thing since sliced bread)?

Well, yesterday we had a demo with a FinTech company in California that was interested in Cost Control, or thought they were. It turns out that what they were actually interested in was Cost Visibility and Reporting; the folks we talked to were in Engineering Finance, so their concerns were primarily billing metrics, business unit chargeback for cloud usage, RI management, and dials and widgets to view everything related to AWS and GCP billing. Instead of trying to force a square peg into a round hole, we passed them on to a company in this space that’s better suited to solve their immediate needs. In return, the Finance folks are going to put us in touch with the FinTech Cloud Ops folks, who care about automating their cloud cost control as part of their DevOps processes.

This type of situation happens more often than not. We have a lot of enterprise customers using ParkMyCloud along with CloudHealth, CloudCheckr, Cloudability, and Cloudyn because, in general, they provide Cost Visibility and Governance, and we provide actionable, automated Cost Control.

As this is our blog, and my view from the street – we have 200+ customers now using ParkMyCloud, and we demo to 5-10 enterprises per week. Based on a couple of generic customer use cases where we have strong familiarity, here’s what you need to know to stay ahead of the game:

  • Cost Visibility and Governance: CloudHealth, CloudCheckr, Cloudability, and Cloudyn (now owned by Microsoft)
  • Reserved Instance (RI) management: all of the above
  • Spot Instance management: SpotInst
  • Monitor and Track Usage: CloudHealth, CloudCheckr, Cloudability, and Cloudyn
  • Turn off (park) Idle Resources: ParkMyCloud, Skeddly, GorillaStack, Botmetric
  • Automate Cost Control as part of your DevOps Process: ParkMyCloud
  • Govern User Access to Cloud Console for Start/Stop: ParkMyCloud
  • Integrate with Single Sign-On (SSO) for Federated User Access: ParkMyCloud

To summarize, cloud cost control is important, and there are many cloud optimization tools available to assist with visibility, governance, management, and control of your single- or multi-cloud environments. However, there are very few tools that let you set up automated actions leveraging your existing enterprise tools like Ping, Okta, Atlassian, Jenkins, and Slack. Make sure you are not only focusing on cost visibility and recommendations, but also on action-oriented platforms, to really get the best bang for your buck.

How to Optimize Cloud Spend with ParkMyCloud

The focus on how to optimize cloud spend is now as relentless as the initial surge to migrate workloads from ‘on-prem’ to public cloud. A lot of this focus, and the resulting discussions, has been on options related to the use of Reserved Instances (RIs), Spot Instances, or other pre-pay options. The pay-up-front discount plan makes sense when you have some degree of visibility into future needs, and when there is no ‘turn-it-off’ option, which we here at ParkMyCloud call “parking”.

When it comes to the ability to ‘park’ instances, we like to divide the world into two halves. There are Production Systems, which typically need to be running 24/7/365, and then there are Non-Production Systems, which at least in theory have the potential to be parked when not in use. The former are typically your end-customer or enterprise-facing systems, which need to be online and available at all times; in this case, RIs typically make sense. When it comes to those non-production systems, that’s where a tool such as ParkMyCloud comes into play. Here you have an opportunity to review the usage patterns and needs of your organization and optimize cloud spend accordingly. For example, you may well discover that your QA team never works on weekends, so you can turn their EC2 instances off on a Friday night and turn them back on first thing Monday morning. Elsewhere, you might find other workloads that can be turned off in the small hours, or even workloads that can be left off for extended periods.

Our customers typically like to view both their production and non-production systems in our simple dashboard. Here they can see all their public cloud infrastructure and simply lock those production systems that cannot be touched. Once within the dashboard, the different non-production workloads can be reviewed and either centrally managed by an admin or have their management delegated to individual business units or teams.

Based on the customer usage we track, these non-production systems typically account for about 50% of what companies spend on compute (i.e., instances/VMs). We then see those who aggressively manage these non-production instances save up to 65% of that cost, which makes a large dent in their overall cloud bill.

So, when you are thinking about how to optimize cloud spend, there are a lot more opportunities than just committing to purchase in advance, especially for your non-production workloads.