Amazon’s EC2 Scheduler – How Does it Compare with ParkMyCloud?

Amazon recently announced updates to their EC2 scheduler, responding to the already-answered question: “How do I automatically start and stop my Amazon EC2 instances?”

Been there, done that. ParkMyCloud has been scheduling instances and saving our customers 65% or more on their monthly cloud bills from Amazon, Azure, and Google since 2015. It looks like Amazon is stepping up to the plate with their EC2 scheduler, but are they?

The premise is simple: pay for what you use. It’s what we’ve been saying all along – cloud services are like any other utility (electricity, water, gas) – you should use them only when needed to avoid paying more than necessary. You wouldn’t leave your lights on all night, so why leave your instances running when you’re not using them?

Until now, Amazon had basic scripting suggestions for starting and stopping your instances. With the EC2 scheduler, you’re getting instructions for how to configure a custom start and stop scheduler for your EC2 instances. Implementing the solution will require some work on your part, but will inevitably reduce costs. Welcome to the club, Amazon? Sort of, not really.

EC2 Scheduler vs ParkMyCloud

While the EC2 scheduler sounds good, we think ParkMyCloud is better, and not just because we’re biased. We took a look at the deployment guide for the EC2 scheduler and noticed a few things we offer that Amazon still doesn’t:

  • This solution requires knowledge and operation of DynamoDB, Lambda, CloudWatch custom metrics, and CloudFormation templates, including Python scripting and CloudFormation coding.
    • None of that is required with our simple, easy-to-use platform. You don’t need a developer background to use ParkMyCloud; in fact, you can use it from your mobile phone or tablet.
  • There’s no UI, so it’s not obvious which instances are on which schedules.
    • ParkMyCloud has a simple UI with an icon-driven operational dashboard and reporting, so you can easily see and manage not only your AWS resources in a single pane, but your Azure and Google resources as well.
  • Modifications require code changes and CloudFormation deployments, even for simple schedule overrides.
    • Again, ParkMyCloud is easy to use: no coding or custom scripting required. Users can temporarily override schedules if they need an instance on short notice, but they will only have access to the resources you grant. And you can use our API and Policy Engine to automate scheduling as part of your DevOps process.
  • No SSO, reporting, or notifications.
    • Check, check, check. Did we mention that ParkMyCloud added some new features recently? You can now see resource utilization data for EC2 instances, viewable through animated heatmaps.
  • Doesn’t have SmartParking – automated parking recommendations based on usage data.
    • We do.
  • You cannot “snooze” (temporarily override) schedules on parked instances. You would have to do that manually through the AWS interface.
    • You can snooze schedules in ParkMyCloud with a button click.
  • Doesn’t work with Azure or Google.
    • We do.
  • Doesn’t park Auto Scaling Groups (ASGs).
    • We do.
  • No Slack integration.
    • We do.

Conclusion

Amazon – nice try.

If you’re looking for an alternative to writing your own scripts (which we’ve long known is not the best answer), you’re using only AWS EC2 instances, and you’re comfortable with all the AWS services mentioned above, then you might be okay with the EC2 scheduler. The solution works, although it comes with many of the same drawbacks custom scripting has when compared to ParkMyCloud.

If you’re using more than just EC2 instances or even working with multiple providers, if you’re looking for a solution where you don’t need to be scripting, and if you’d prefer an automated tool that will cut your cloud costs with ease of use, reporting, and parking recommendations, then it’s a no-brainer. Give ParkMyCloud a try.

New in ParkMyCloud: Visualize AWS Usage Data Trends

We are excited to share the latest ParkMyCloud release: animated heatmap displays. This builds on our previous release of static heatmaps displaying AWS EC2 instance utilization metrics from CloudWatch. Now, this utilization data is animated to help you better identify usage patterns over time and create automated parking schedules.

The heatmaps will display data from a sequence of weeks, in the form of an animated “video”, letting you see patterns of usage over a period of time. You can take advantage of this feature to better plan ParkMyCloud parking schedules based on your actual instance utilization.

Here is an example of an animated heatmap, which allows you to visualize when instances are used over a period of eight weeks:

The latest ParkMyCloud update also includes:

  • CloudWatch data collection improvements to reduce the number of API calls required to pull instance utilization metrics data
  • Various user interface improvements to a number of screens in the ParkMyCloud console

As noted in our last release, utilization data also provides the information that will allow ParkMyCloud to make optimal parking and rightsizing recommendations (SmartParking) when that feature is released next month, as part of our ongoing efforts to do what we do best – save you money, automatically.

AWS users who sign up now can take advantage of the latest release as we ramp up for automated SmartParking. To get the best cost control over your cloud bill, start your ParkMyCloud trial today to collect several weeks’ worth of CloudWatch data, track your usage patterns, and get recommendations as soon as the SmartParking feature becomes available in a few weeks.

If you are an existing customer, be sure to update your AWS policies to enable ParkMyCloud to access your AWS CloudWatch data. Detailed instructions can be found in our support portal.

Feedback? Anything else you’d like to see ParkMyCloud do? Let us know!

New in ParkMyCloud: AWS Utilization Metric Tracking

We are happy to share the latest release in ParkMyCloud: you can now see resource utilization data for your AWS EC2 instances! This data is viewable through customizable heatmaps.

This update gives you information about how your resources are being used – and it also provides the necessary information that will allow ParkMyCloud to make optimal parking and rightsizing recommendations when this feature is released next month. This is part of our ongoing efforts to do what we do best – save you money, automatically.

Utilization metrics that ParkMyCloud will now report on include (see the CloudWatch sketch after this list):

  • Average CPU utilization
  • Peak CPU utilization
  • Total instance store read operations
  • Total instance store write operations
  • Average network data in
  • Average network data out
  • Average network packets in
  • Average network packets out
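
All of these correspond to standard CloudWatch metrics in the AWS/EC2 namespace. For the curious, here is a minimal boto3 sketch (not ParkMyCloud’s internal code; the instance ID is a placeholder) of how the average and peak CPU figures can be pulled:

```python
# Minimal sketch of pulling EC2 utilization metrics from CloudWatch with
# boto3. The instance ID is a placeholder; CPUUtilization is a standard
# AWS/EC2 metric, and "Average"/"Maximum" give the average and peak values.
from datetime import datetime, timedelta, timezone

import boto3

cloudwatch = boto3.client("cloudwatch")

def instance_metric(instance_id, metric_name, statistic):
    """Return hourly datapoints for one instance over the past week."""
    response = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName=metric_name,
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=datetime.now(timezone.utc) - timedelta(days=7),
        EndTime=datetime.now(timezone.utc),
        Period=3600,  # one datapoint per hour
        Statistics=[statistic],
    )
    return response["Datapoints"]

avg_cpu = instance_metric("i-0123456789abcdef0", "CPUUtilization", "Average")
peak_cpu = instance_metric("i-0123456789abcdef0", "CPUUtilization", "Maximum")
```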

Here is an example of an instance utilization heatmap, which allows you to see when your instances are used most often:

In a few weeks, we will release the ability for ParkMyCloud to recommend parking schedules for your instances based on these metrics. To take advantage of this, you will need several weeks’ worth of CloudWatch data already logged, so that we can make recommendations based on your typical usage. Start your ParkMyCloud trial today to begin tracking your usage patterns and get usage-based parking recommendations.

If you are an existing customer, you will need to update your AWS policies to enable ParkMyCloud to access your AWS CloudWatch data. Detailed instructions can be found in our support portal.

Feedback? Anything else you’d like to see ParkMyCloud do? Let us know!

Cloud Per-Second Billing – How Much Does It Really Save?

It has been a little over a month since Amazon and Google switched some of their cloud services to per-second billing, so the first invoices with the revised billing are hitting your inboxes right about now. If you are not seeing the cost savings you hoped for, it may be a good time to look again at which services were slated for the pricing change, and how you are using them.

Google Cloud Platform

Starting with the easiest one, Google Cloud Platform (GCP), you may not be seeing a significant change, as most of their services were already billing at the per-minute level, and some were already at the per-second level. The services moved to per-second billing (with a one-minute minimum) included Compute Engine, Container Engine, Cloud Dataproc, and App Engine VMs.  Moving from per-minute billing to per-second billing is not likely to change a GCP service bill by more than a fraction of a percent.

Let’s consider the example of an organization that has ten GCP n1-standard-8 Compute Engine machines in Oregon at a base cost of $0.3800 per hour as of the date of this blog. Under per-minute billing, the worst-case scenario would be to shut a system down one second into the next minute, for a cost difference of about $0.0063. Even if each of the ten systems were assigned to the QA or development organization, and they were shut down at the end of every work day, say 22 days out of the month, your worst-case scenario would be an extra charge of 22 days x 10 systems x $0.0063 = $1.386. Under per-second billing, the worst case is to shut down at the beginning of a second, with the highest possible cost for these same machines (sparing you the math – see the sketch below) being about $0.02. So, the best this example organization can hope to save over a month on these machines with per-second billing is $1.39.
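
For those who want to check our work, here is that arithmetic spelled out, including the per-second worst case we spared you (a quick sketch using the rate quoted above):

```python
# Worst-case "lost time" per month for ten n1-standard-8 machines shut
# down once per workday, at the rate quoted above.
rate_per_hour = 0.38
systems, workdays = 10, 22

# Per-minute billing: each shutdown can waste up to ~1 minute
per_minute_waste = (rate_per_hour / 60) * systems * workdays    # ≈ $1.39

# Per-second billing: each shutdown can waste under 1 second
per_second_waste = (rate_per_hour / 3600) * systems * workdays  # ≈ $0.02

print(f"${per_minute_waste:.2f} vs ${per_second_waste:.2f}")
```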

Amazon Web Services

On the Amazon Web Services (AWS) side of the fence, the change is both bigger and smaller.  It is bigger in that they took the leap from per-hour to per-second billing for On-Demand, Reserved, and Spot EC2 instances and provisioned EBS, but smaller in that it is only for Linux-based instances; Windows instances are still at per-hour.

Still, if you are running a lot of Linux instances, this change can be significant enough to notice. Looking at the same example as before, let’s run the same calculation with the roughly equivalent t2.2xlarge instance type, charged at $0.3712 per hour. Under per-hour billing, the worst-case scenario is to shut a system down just one second into the next hour, which is then billed in full. In this example, the cost would be an extra charge of 22 days x 10 systems x $0.3712 = $81.664. Under per-second billing, the worst case is the same $0.02 as with GCP (with fractions of cents lost in the noise). So, under AWS, one can hope to see significantly different numbers in the bill.
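
The same sketch for the AWS example, using the t2.2xlarge rate quoted above:

```python
# Worst-case comparison for ten t2.2xlarge Linux instances shut down
# once per workday.
rate_per_hour = 0.3712
systems, workdays = 10, 22

# Per-hour billing: each shutdown can waste up to ~1 full hour
per_hour_waste = rate_per_hour * systems * workdays             # ≈ $81.66

# Per-second billing: each shutdown can waste under 1 second
per_second_waste = (rate_per_hour / 3600) * systems * workdays  # ≈ $0.02

print(f"${per_hour_waste:.2f} vs ${per_second_waste:.2f}")
```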

The scenario above is equally relevant to other situations where instances get turned on and off on a frequent basis, driving those fractions of an hour or a minute of “lost” time. Another common example would be auto-scaling groups that dynamically resize based on load, and see enough change over time to bring instances in and out of the group. (Auto-scale groups are frequently used as a high-availability mechanism, so their elastic growth capabilities are not always used, and so savings will not always be seen.) Finally, Spot instances are built on the premise of bringing them up and down frequently, and they will also enjoy the shift to per-second billing.

However, as you look at your cloud service bill, do keep in mind some of the nuances that still apply:

  • Windows: GCP applies per-second billing to Windows; AWS is still on one-hour billing for Windows.
  • Marketplace Linux: Some Linux instances in the AWS Marketplace that have a separate hourly charge are also still on hourly billing (perhaps due to contracts or licensing arrangements with the vendors?), so you may want to reconsider which flavor of Linux you want to use.
  • Reserved instances: AWS does strive to “use up” all of the pre-purchased time for reserved instances, spreading it across multiple machines with fractions of usage time, and per-second billing can really stretch the value of these instances.
  • Minimum of one-minute charge: Both GCP and AWS will charge for at least a minute from instance start before per-second billing comes into play.

Overall, per-second billing is a great improvement for consumers of cloud resources…and will probably drive us all more than ever to make each second count.

AWS IAM Roles and Ways to Use them to Improve Security

What are AWS IAM Roles?

Within the AWS Identity and Access Management (IAM) system, there are a number of identity mechanisms that can be configured to secure your AWS environment, such as Users, Groups, and IAM Roles. Users are clearly the humans in the picture, and Groups are collections of Users, but Roles can be a bit more obscure. Roles are defined as a set of permissions that grant access to actions and resources in AWS. Unlike Users, which are tied to a specific identity and a specific AWS account, an IAM Role can be assumed by IAM User accounts or by services within AWS, and can give access to Users from another account altogether.

To better understand Roles, I like the metaphor of a hat.  When we say a Role is assumed by a user – it is like saying someone can assume certain rights or privileges because of what hat they are wearing.  In any company (especially startups), we sometimes say someone “wears a lot of hats” – meaning that person temporarily takes on a number of different Roles, depending on what is needed. Mail delivery person, phone operator, IT support, code developer, appliance repairman…all in the space of a couple hours.

IAM Roles are similar in that they temporarily let an IAM User or a service get permissions to do things they would not normally get to do. These permissions are attached to the Role itself, and are conveyed to anyone or anything that assumes the Role. Like Users, Roles have credentials that can be used to authenticate the Role identity.

Here are a couple of ways in which you can use IAM Roles to improve your security:

EC2 Instances

All too often, we see software products that rely on credentials (username/password) for services or accounts that are either hard-coded into an application or written to a file on disk. Frequently the developer had no choice, as the system had to be able to restart and reconnect automatically when the machine rebooted, without anyone available to manually type in credentials. If the code is examined or the file system is compromised, then the credentials are exposed and can potentially be used to compromise other systems and services. In addition, such credentials make it really difficult to change the password periodically. Even in AWS, we sometimes see developers hard-code API Key IDs and Keys into apps in order to get access to some AWS service. This is a security accident waiting to happen, and it can be avoided through the use of IAM Roles.

With AWS, we can assign a single IAM Role to an EC2 instance. This assignment is usually made when the instance is launched, but can also be done at runtime if needed. Applications running on the server retrieve the Role’s security credentials by pulling them out of the instance metadata through a simple web command. These credentials have an additional advantage over potentially long-lived, hard-coded credentials, in that they are rotated frequently, so even if they are somehow compromised, they can only be used for a brief period.
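
To make that “simple web command” concrete, here is a sketch of reading those credentials from the instance metadata service (the AWS SDKs normally do this for you behind the scenes):

```python
# Sketch: fetching an EC2 instance's Role credentials from the instance
# metadata service. AWS SDKs do this automatically; this just shows the
# mechanism. Only works from inside an EC2 instance with a Role attached.
import json
from urllib.request import urlopen

BASE = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

role_name = urlopen(BASE).read().decode()             # name of the attached Role
creds = json.loads(urlopen(BASE + role_name).read())  # temporary credentials

# Short-lived keys, rotated by AWS well before they expire
print(creds["AccessKeyId"], creds["Expiration"])
```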

Another key security advantage of Roles is that they can be limited to just the access privileges needed to get a specific job done. Amazon’s documentation for Roles gives the example of an application that only needs to be able to read files out of S3. In this case, one can assign a Role that contains read-only permissions for a specific S3 bucket, and the Role’s configuration can specify that the Role can only be used by EC2 instances. This is an example of the security principle of “least privilege,” where the minimum privileges necessary are assigned, limiting the risk of damage if the credential is compromised. In the same sense that you would not give all of your users “Administrator” privileges, you should not create a single “Allow Everything” Role that you assign everywhere. Instead, create a different Role specific to the needs of each system or group of systems.
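
As a sketch of what that S3 read-only example can look like in practice, here is one way to create such a Role with boto3 (the Role, policy, and bucket names are placeholders):

```python
# Sketch: a least-privilege Role that only EC2 may assume, with read
# access limited to one placeholder bucket. Names are illustrative.
import json

import boto3

iam = boto3.client("iam")

# Trust policy: only the EC2 service may assume this Role
trust = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": "ec2.amazonaws.com"},
        "Action": "sts:AssumeRole",
    }],
}

iam.create_role(RoleName="s3-reader", AssumeRolePolicyDocument=json.dumps(trust))

# Permissions: read-only, and only for one specific bucket
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-bucket",
            "arn:aws:s3:::example-bucket/*",
        ],
    }],
}

iam.put_role_policy(
    RoleName="s3-reader",
    PolicyName="read-example-bucket",
    PolicyDocument=json.dumps(policy),
)
```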

Delegation

Sometimes one company needs to give another company access to its resources. Before IAM Roles (and before AWS), the common ways to do that were to share account logins (with the same issues identified earlier for hard-coded credentials) or to use complicated PKI/certificate-based systems. If both companies are using AWS, sharing access is much easier with Role-based Delegation. There are several ways to configure IAM Roles for delegation, but for now we will focus on delegation between accounts from two different organizations.

At ParkMyCloud, our customers use Delegation to let us read the state of their EC2, RDS, and scaling group instances, and then start and stop them per the schedules they configure in our management console.

To configure Role Delegation, a customer first creates an account with the service provider, and is given the provider’s AWS Account ID and an External ID. The External ID is a unique number for each customer generated by the service provider.

The administrator of the customer environment creates an IAM Policy with a constrained set of access (principle of “least privilege” again), and then assigns that policy to a new Role (like “ParkMyCloudAccess”), specifically assigned to the provider’s Account ID and External ID.  When done, the resulting IAM Role is given a specific Amazon Resource Name (ARN), which is a unique string that identifies the role.  The customer then enters that role in the service provider’s management console, which is then able to assume the role.  Like the EC2 example, when the ParkMyCloud service needs to start a customer EC2 instance, it calls the AssumeRole API, which verifies our service is properly authenticated, and returns temporary security credentials needed to manage the customer environment.
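
The provider’s side of that handshake boils down to a single STS call. A sketch (not ParkMyCloud’s actual code; the ARN and External ID are placeholders for the values exchanged during setup):

```python
# Sketch: assuming a customer's delegated Role with the shared External
# ID, then using the returned temporary credentials to manage resources.
import boto3

sts = boto3.client("sts")

resp = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/ParkMyCloudAccess",  # placeholder
    RoleSessionName="parkmycloud",
    ExternalId="example-external-id",  # placeholder
)

# Temporary credentials, valid only for the duration of the session
creds = resp["Credentials"]
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
```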

Conclusions

AWS IAM Roles make some tasks a lot simpler by flexibly assigning roles to instances and other accounts. IAM Roles can help make your environment more secure by:

  • Using the principle of least privilege in IAM policies to limit access to only the systems and services needed to do a specific job.
  • Preventing hard-coding of credentials in code or files, minimizing the danger of exposure and removing the risk of long-unchanged passwords.
  • Minimizing shared accounts and passwords by allowing controlled cross-account access.

AWS Lambda + ParkMyCloud = Supercharged Automation

Among the variety of AWS services and functionality, AWS Lambda seems to be taking off with hackers and tinkerers. The idea of “serverless” architecture is quite a shift in the way we think about applications, tools, and services, but it’s a shift that is opening up some new ideas and approaches to problem solving.  

If you haven’t had a chance to check out Lambda, it’s a “function-as-a-service” platform that allows you to run scripts or code on demand, without having to set up servers with the proper packages and environments installed. Your Lambda function can trigger from a variety of sources and events, such as HTTP requests, API calls, S3 bucket changes, and more. The function can scale up automatically, so more compute resources are used when necessary without any human intervention. The code can be written in Node.js, Python, Java, or C#.

Some pretty cool ideas already exist for Lambda functions to automate processes. One example from AWS is to respond to a GitHub event by triggering an action, such as the next step in a build process. There’s also a guide on how to use React and Lambda to make an interactive website that has no server.

For those of you who are already using ParkMyCloud to schedule resources, you may be looking to plug into your CI/CD pipeline to achieve Continuous Cost Control. I’ve come up with a few ideas for how to use Lambda along with ParkMyCloud to supercharge your AWS cloud savings. Let’s take a look at a few options:

Make ParkMyCloud API calls from Lambda

With ParkMyCloud’s API available to control your schedules programmatically, you could make calls to ParkMyCloud from Lambda based on events that occur.  The API allows you to do things like list resources and schedules, assign schedules to resources, snooze schedules to temporarily override them, or cancel a snooze or schedule.

For instance, if a user logs in remotely to the VPN, it could trigger a Lambda call to snooze the schedules for that user’s instances (sketched below). Alternatively, a Lambda function could change the schedules of your Auto Scaling Group based on average requests to your website. If you store data in S3 for batch processing, a trigger from an S3 bucket can tell Lambda to notify ParkMyCloud that the batch is ready and the processing servers need to come online.
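
As a sketch of that VPN idea, here is what such a Lambda function might look like. The ParkMyCloud endpoint path, payload fields, and event shape shown are illustrative placeholders; consult the ParkMyCloud API documentation for the real ones:

```python
# Hypothetical Lambda handler: a VPN-login event triggers a snooze of
# the schedules on that user's instances via the ParkMyCloud API.
# Endpoint path, payload, and event fields are illustrative placeholders.
import json
import os
from urllib.request import Request, urlopen

def handler(event, context):
    snoozed = []
    for resource_id in event["resource_ids"]:  # assumed to be in the event
        req = Request(
            # placeholder URL; see the ParkMyCloud API docs for real paths
            f"https://api.parkmycloud.com/resources/{resource_id}/snooze",
            data=json.dumps({"hours": 4}).encode(),
            headers={
                "Authorization": f"Bearer {os.environ['PMC_API_TOKEN']}",
                "Content-Type": "application/json",
            },
            method="PUT",
        )
        urlopen(req)
        snoozed.append(resource_id)
    return {"snoozed": snoozed}
```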

Send notifications from ParkMyCloud to Lambda

With ParkMyCloud’s notification system, you can send events that occur in the ParkMyCloud system to a webhook or email.  The events can be actions taken by schedules that are applied to resources, user actions that are done in the UI, team and schedule assignments from policies, or errors that occur during parking.

By sending schedule events, you could use a Lambda function to tell your monitoring tool when servers are being shut down from schedules.  This could also be a method for letting your build server know that the build environment has fully started before the rest of your CI/CD tools take over.  You could also send user events to Lambda to feed into a log tool like Splunk or Logstash.  Policy events can be sent to Lambda to trigger an update to your CMDB with information on the team and schedule that’s applied to a new server.
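
On the receiving side, a webhook-fed Lambda (for example, behind API Gateway) can route each event type to the right tool. A sketch with illustrative payload fields; match them to the notification format in the ParkMyCloud documentation:

```python
# Hypothetical webhook receiver: routes ParkMyCloud notification events
# to downstream tooling. Field names are illustrative placeholders.
import json

def handler(event, context):
    notification = json.loads(event["body"])  # webhook POST body via API Gateway

    kind = notification.get("type")
    if kind == "schedule":
        # e.g. tell the monitoring tool a server is going down on schedule
        print("schedule event:", notification)
    elif kind == "user":
        # e.g. forward UI actions to Splunk or Logstash
        print("user event:", notification)
    elif kind == "policy":
        # e.g. update the CMDB with the team/schedule of a new server
        print("policy event:", notification)

    return {"statusCode": 200}
```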

Think outside the box!

Are you already using AWS Lambda to kick off functions and run scripts in your environment?  Try combining Lambda with ParkMyCloud and let us know what cool tricks you come up with for supercharging your automation and saving on your cloud bill! Stop by Booth 1402 at AWS re:Invent this year and tell us.