September 2017 - ParkMyCloud

7 AWS Security Best Practices with ParkMyCloud

Besides cost control, one of the biggest concerns for IT administrators is applying AWS security best practices to keep their infrastructure safe.  While there are some great tools that specialize in cloud and information security, there are several security benefits of ParkMyCloud that are often overlooked when hardening a cloud infrastructure.

1. Keep Instances Off When Not In Use

Scheduling your instances to be turned off on nights and weekends when you aren’t using them saves you a ton of money on your cloud bill, but it also provides security and protection.  Leaving servers and databases on 24/7 invites attempts to break in and connect to systems within your infrastructure, especially during off-hours when you don’t have as many IT staff keeping an eye on things.  By aggressively scheduling your resources to be off as much as possible, you minimize the window for outside attacks on those servers.

2. User Governance

Your users are trustworthy and need access to lots of servers to do their jobs, but why give them more access than necessary?  Limiting what servers, databases, and auto scaling groups each person can see to only what they need prevents accidents and limits the impact of mistakes.  ParkMyCloud lets you separate users into teams, with designated Team Leads to manage the individual Team Members, and limits each member’s control to start/stop actions only.

3. Single Sign On

In addition to governing user access to resources, ParkMyCloud integrates with all major SSO providers to offer SAML authentication for your users.  This includes Okta, Ping Identity, OneLogin, Centrify, Azure AD, ADFS, and Google Apps.  By using one of these providers, you can keep identity management centralized and offer multi-factor authentication through those SAML connections.

4. Audit Logs and Notifications

Every user action in ParkMyCloud is tracked in an Audit Log that is available to super admins.  These audit logs can also be downloaded as a CSV if you want to import them into something like Splunk or Logstash for log management.  Audit logs can help you see when schedules are snoozed or changed, policies are updated, or teams are created or changed.

In addition, those audit log entries can be sent as notifications to Slack channels, email addresses, or through webhooks to other tools.  This lets you keep an eye on either specific teams or the entire organization within ParkMyCloud.

5. Minimal Connection Permissions

ParkMyCloud connects to AWS through an IAM Role (preferred) or an IAM User.  The required AWS policy uses the bare minimum of actions, which boils down to Describe, Start, and Stop for each resource type (EC2, ASG, and RDS). This means you don’t have to worry about ParkMyCloud doing something to your AWS account that you don’t intend.  For Azure connections, ParkMyCloud requires a similarly restricted Limited Access Role, and the connection to Google Cloud requires a limited Service Account.
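
For illustration, a least-privilege policy of that shape looks roughly like the sketch below. The exact action list ParkMyCloud documents may differ slightly (for example, additional describe or tagging permissions), and the policy name here is made up, so treat this as an approximation rather than the official policy document.

```python
import json

import boto3

# A sketch of a least-privilege policy limited to describe/start/stop actions
# for EC2, Auto Scaling, and RDS. The exact action list ParkMyCloud requires
# may differ slightly (e.g. additional describe or tagging permissions).
minimal_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "ec2:DescribeInstances", "ec2:StartInstances", "ec2:StopInstances",
            "autoscaling:DescribeAutoScalingGroups",
            "autoscaling:UpdateAutoScalingGroup",
            "rds:DescribeDBInstances", "rds:StartDBInstance", "rds:StopDBInstance",
        ],
        "Resource": "*",
    }],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="LimitedParkingPolicy",          # hypothetical policy name
    PolicyDocument=json.dumps(minimal_policy),
)
```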

6. Restrict Scheduling Based on Names or Tags

The ParkMyCloud policy engine is a powerful way to automate your resource scheduling and team management, but it can also be used to prevent schedules from being applied to certain systems. For instance, if you have a prod database that needs to stay up 24/7, you can use a policy that never lets any user apply a schedule to it (even if they wanted to).  These policies can be applied based on tags, naming conventions, AWS regions, or account names, as the sketch below illustrates.
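
ParkMyCloud expresses these rules declaratively in its policy engine; purely to illustrate the kind of rule involved, here is a minimal Python sketch in which the tag and naming conventions are assumptions for the example.

```python
# Hypothetical "never park" check: skip any resource whose tags or name mark
# it as production. ParkMyCloud expresses this declaratively in its policy
# engine; this only illustrates the kind of rule involved.
NEVER_PARK_TAGS = {"environment": "prod"}   # assumed tag convention
NEVER_PARK_PREFIXES = ("prod-",)            # assumed naming convention

def may_schedule(name: str, tags: dict) -> bool:
    if name.startswith(NEVER_PARK_PREFIXES):
        return False
    return not any(tags.get(k) == v for k, v in NEVER_PARK_TAGS.items())

print(may_schedule("prod-orders-db", {}))                 # False
print(may_schedule("qa-api-1", {"environment": "dev"}))   # True
```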

7. Full Cloud Visibility

One great benefit of ParkMyCloud is the ability to see across all of your cloud providers (AWS, Microsoft Azure, and Google Cloud), cloud accounts, and regions within a cloud. This visibility not only provides management benefits, but helps with security by keeping all resources in one list. It makes it easier to catch rogue instances running in regions you don’t normally look at, and helps you identify resources that don’t need to be running, or that could be stopped or terminated altogether.

Conclusion

As you continue to strive to follow AWS security best practices, consider adding ParkMyCloud to your security toolkit.  While you’re saving money for your team, you can also get these 7 benefits to help secure your infrastructure and sleep better at night.  Start a free trial of ParkMyCloud today to start reaping the benefits!

Reduce RDS Costs with ParkMyCloud

Thanks to the ability to shut down instances with a start/stop scheduler, users of Amazon’s database service can finally save time and reduce RDS costs. Until June 2017, the only way to accomplish this feat was by copying and deleting instances, running the risk of losing transaction logs and automatic backups. While Amazon’s development of the start/stop scheduler is useful and provides a level of cost savings, it also comes with issues of its own.

For one, the start/stop scheduler is not foolproof. The process for stopping and starting non-production RDS instances is manual, relying on the user to create and consistently manage the schedule. Being able to switch instances off when they are not in use, and restart them when access is needed again, is helpful, but doing it by hand leaves room for human error. Complicating things further, RDS instances that have been shut down are automatically restarted after seven days, again relying on the user to switch those instances back off if they’re not needed at the time.

Why Scripting is not the Best Answer

One way of minimizing the potential for error is to automate the stop/start schedule yourself by writing your own scripts. While that could work, you would need to consider the number of non-production instances deployed on AWS RDS, and plan for a schedule that would allow developers to have access when needed, which could very well be at varying times throughout the day. All factors considered, writing and maintaining scheduling scripts takes extra time and costs money as well. Ultimately, setting up and maintaining your own schedule could increase your cloud spend more than it reduces RDS costs.

When you start thinking about the cost of paying developers, the number of scripts that would have to be written, and the ongoing maintenance required, buying into an automated scheduling process is a no-brainer.
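
To make that maintenance burden concrete, here is a rough sketch of the kind of script you would end up writing and running on a timer (cron, a scheduled Lambda, a CI job). The tag convention and the schedule itself are assumptions for illustration, not a recommended implementation.

```python
import boto3

rds = boto3.client("rds")

# Assumed convention: non-production databases carry the tag environment=dev.
# You still need something (cron, a scheduled Lambda, a CI job) to run this at
# the right times, plus logging, error handling, and a matching "start" script
# for Monday mornings.
def stop_tagged_dev_databases():
    for db in rds.describe_db_instances()["DBInstances"]:
        tags = {t["Key"]: t["Value"] for t in
                rds.list_tags_for_resource(ResourceName=db["DBInstanceArn"])["TagList"]}
        if tags.get("environment") == "dev" and db["DBInstanceStatus"] == "available":
            rds.stop_db_instance(DBInstanceIdentifier=db["DBInstanceIdentifier"])
            # AWS restarts a stopped RDS instance automatically after seven
            # days, so this script alone does not keep it off indefinitely.

if __name__ == "__main__":
    stop_tagged_dev_databases()
```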

How ParkMyCloud Reduces RDS Costs

Automated Scheduling

ParkMyCloud saves you time and money by automating the scheduling process of stopping and starting AWS RDS instances (in addition to Microsoft Azure VMs and Google Cloud Compute instances, but that’s another post). At the same time, you get total visibility and full autonomy over your account.

The process is simple. With you as the account manager, ParkMyCloud conducts a discovery of all the company accounts, and determines which instances are most suitable for parking. From there, you have the option of implementing company-wide schedules for non-production instances, or giving each development team the ability to create schedules of their own.

Flexible Parking

ParkMyCloud takes saving on RDS costs to a whole new level with parking schedules. Different schedules can be applied to different instances, or instances can be parked permanently and put on “snooze” when access is needed. Amazon’s seven-day automatic restart of stopped instances is a non-issue with our platform, and snoozed instances return to their parking schedule once the snooze ends, so there’s no more relying on the user to do it manually.

For the most part, we find that companies want their non-production instances running only during normal working hours, say Monday to Friday from 8:00am to 8:00pm, and parked the rest of the time. By parking your instances outside of those days and hours, ParkMyCloud can reduce your cloud spend on them by roughly 65% – even more if you apply a more aggressive parking schedule and use the snooze option for ad-hoc access.
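
That 65% figure falls straight out of the hours involved; a quick back-of-the-envelope check:

```python
hours_per_week = 24 * 7        # 168 hours in a week
running_hours = 12 * 5         # on from 8:00am to 8:00pm, Monday through Friday
savings = 1 - running_hours / hours_per_week
print(f"{savings:.0%}")        # ~64%, i.e. roughly the 65% figure above
```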

Valuable Insight

Because you have total visibility over the account, you can reduce RDS costs even further by having a bird’s eye view of your company’s cloud use. You’ll be able to tell which of your instances are underused, terminate them, and possibly benefit further from selecting a cheaper plan (coming really soon). You’ll be able to see all RDS instances across all regions and AWS accounts in one simple view. You can also view the parking schedules for each instance and see how much each schedule is saving, potentially reducing costs even further. This visibility into your account and access to information provides a great resource for budgeting and planning.

Conclusion

The AWS start/stop scheduler is useful, but the scheduling still has to be managed manually. Writing your own scripts sounds helpful, but it’s actually time-consuming and not fully cost-effective. ParkMyCloud automates the process while still putting you in control, reducing RDS costs and saving you time and money.

See the benefits of ParkMyCloud for yourself by taking advantage of our two-week free trial. Test our cloud cost control platform in your own environment, without any need for a credit card or signed contract, and see why our simple, cost-effective tool is the key to reducing RDS costs. We offer a variety of competitive pricing plans to choose from, or a limited-function version that you can continue to use for free after the trial ends.

To start your free trial today, sign up here.

Cloud Optimization Tools = Cloud Cost Control (Part II)

A couple of weeks ago in Part 1 of this blog topic we discussed the need for cloud optimization tools to help enterprises with the problem of cloud cost control. Amazon Web Services (AWS) even goes as far as suggesting the following simple steps to control their costs (which can also be applied  to Microsoft Azure and Google Cloud Platform, but of course with slightly different terminology):

    1. Right-size your services to meet capacity needs at the lowest cost;
    2. Save money when you reserve;
    3. Use the spot market;
    4. Monitor and track service usage;
    5. Use Cost Explorer to optimize savings; and
    6. Turn off idle instances (we added this one).

A variety of third-party tools and services have popped up in the market over the past few years to help with cloud cost optimization – why? Because upwards of $23B was spent on public cloud infrastructure in 2016, and spending continues to grow at a rate of 40% per year. Furthermore, depending on who you talk to, roughly 25% of public cloud spend is wasted or not optimized — that’s a huge market! If left unchecked, this waste problem is projected to triple to over $20B by 2020 – enter the vultures (full disclosure, we are also a vulture, but the nice kind). Most of these tools are lumped under the Cloud Management category, which includes subcategories like Cost Visibility and Governance, Cost Optimization, and Cost Control vendors – we are a cost control vendor, to be sure.

Why do you, an enterprise, care? Because there are real but subtle differences between the tools that fit into these categories, so your use case should dictate where you go for what – and that’s what I am trying to help you with. So, why am I a credible source to write about this (and not just because ParkMyCloud is the best thing since sliced bread)?

Well, yesterday we had a demo with a FinTech company in California that was interested in Cost Control, or thought they were. It turns out that what they were actually interested in was Cost Visibility and Reporting; the folks we talked to were in Engineering Finance, so their concerns were primarily with billing metrics, business unit chargeback for cloud usage, RI management, and dials and widgets to view all things AWS and GCP billing related. Instead of trying to force a square peg into a round hole, we passed them on to a company in this space that is better suited to solve their immediate needs. In response, the Finance folks are going to put us in touch with the FinTech Cloud Ops folks who care about automating their cloud cost control as part of their DevOps processes.

This type of situation happens more often than not. We have a lot of enterprise customers using ParkMyCloud along with CloudHealth, CloudCheckr, Cloudability, and Cloudyn because, in general, they provide Cost Visibility and Governance, and we provide actionable, automated Cost Control.

As this is our blog, and my view from the street – we have 200+ customers now using ParkMyCloud, and we demo to 5-10 enterprises per week. Based on a couple of generic customer use cases where we have strong familiarity, here’s what you need to know to stay ahead of the game:

  • Cost Visibility and Governance: CloudHealth, CloudCheckr, Cloudability, and Cloudyn (now owned by Microsoft)
  • Reserved Instance (RI) Management: all of the above
  • Spot Instance Management: SpotInst
  • Monitor and Track Usage: CloudHealth, CloudCheckr, Cloudability, and Cloudyn
  • Turn Off (Park) Idle Resources: ParkMyCloud, Skeddly, Gorilla Stack, BotMetric
  • Automate Cost Control as Part of Your DevOps Process: ParkMyCloud
  • Govern User Access to Cloud Console for Start/Stop: ParkMyCloud
  • Integrate with Single Sign-On (SSO) for Federated User Access: ParkMyCloud

To summarize, cloud cost control is important, and there are many cloud optimization tools available to assist with visibility, governance, management, and control of your single or multi-cloud environments. However, there are very few tools which allow you to set up automated actions leveraging your existing enterprise tools like Ping, Okta, Atlassian, Jenkins, and Slack.  Make sure you are not only focusing on cost visibility and recommendations, but also on action-oriented platforms to really get the best bang for your buck.

How to Optimize Cloud Spend with ParkMyCloud

The focus on how to optimize cloud spend is now as relentless as the initial surge to migrate workloads from ‘on-prem’ to public cloud. A lot of this focus, and the resultant discussions, has been about options related to the use of Reserved Instances (RIs), Spot Instances, or other pre-pay options. The pay-up-front discount plan makes sense when you have some degree of visibility into future needs, and when there is no ‘turn-it-off’ option, which we here at ParkMyCloud call “parking”.

When it comes to the ability to ‘park instances’ we like to divide the world into two halves. There are Production Systems, which typically need to be running 24/7/365, and then there are Non-Production Systems, which at least in theory have the potential to be parked when not in use. The former are typically your end-customer or enterprise-facing systems, which need to be online and available at all times. In this case, RIs typically make sense. When it comes to those non-production systems, that’s where a tool such as ParkMyCloud comes into play. Here you have an opportunity to review the usage patterns and needs of your organization and optimize cloud spend accordingly. For example, you may well discover that your QA team never works on weekends, so you can turn their EC2 instances off on a Friday night and turn them back on first thing Monday morning. Elsewhere, you might find other workloads that can be turned off in the small hours, or even workloads that can be left off for extended periods.
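
To make the weekend example concrete, doing this by script looks roughly like the sketch below, assuming QA machines carry a team=qa tag (an illustrative convention); a parking schedule replaces this kind of plumbing, plus the cron entries and error handling around it.

```python
import boto3

ec2 = boto3.client("ec2")

# Assumed convention: QA machines carry the tag team=qa. Run this on Friday
# night (with a matching start_instances call on Monday morning) from cron or
# a scheduled Lambda.
def stop_qa_instances():
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:team", "Values": ["qa"]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)

if __name__ == "__main__":
    stop_qa_instances()
```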

Our customers typically like to view both their production and non-production systems in our simple dashboard. Here they can view all their public cloud infrastructure and simply lock those production systems that cannot be touched. Once within the dashboard, the different non-production workloads can then be reviewed and either centrally managed by an admin or have their management delegated to individual business units or teams.

Based on the customer usage we track, these non-production systems typically account for about 50% of what companies spend on compute (i.e. instances / VMs). We then see those who aggressively manage these non-production instances saving up to 65% of their cost, which makes a large dent in their overall cloud bill.
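
Taking those two figures at face value, the overall effect on the compute bill works out to roughly a third:

```python
non_prod_share = 0.50     # share of compute spend that is non-production
parking_savings = 0.65    # savings achievable on that share with aggressive parking
overall_reduction = non_prod_share * parking_savings
print(f"{overall_reduction:.0%} of total compute spend")   # roughly a third
```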

So, when you are thinking about how to optimize cloud spend, there are a lot more opportunities than just committing to purchase in advance, especially for your non-production workloads.

Shutting Down RDS Instances in AWS – Introducing the Start/Stop Scheduler

Users of Amazon’s database service have been clamoring for a way to shut down RDS instances on an automatic schedule ever since 2009, when the service was first released.  Once Amazon announced the ability to power RDS instances off and on earlier this year, AWS users started planning out ways to schedule these instances using scripts or home-grown tools.  However, users of ParkMyCloud were happy to find that support for RDS scheduling was immediately available in the platform.  If you were planning on writing your own scripts for RDS parking, let’s take a look at some of the additional features that ParkMyCloud can provide for you.

Schedule EC2 and ASG in addition to RDS

Very few AWS users are utilizing RDS databases without simultaneously running EC2 instances as compute resources.  This means that writing your own scheduling scripts for shutting down RDS instances would involve scheduling EC2 instances as well.

ParkMyCloud has support for parking EC2 resources, RDS databases, and Auto Scaling Groups all from the same interface, so it’s easy to apply on/off schedules to all of your cloud resources.

Logical Groups to tie instances together

Let’s say you have a QA environment with a couple of RDS databases and multiple EC2 instances running a specific version of your software. With custom scripts, you have to implement logic that shuts down and starts up all of those instances together, and potentially in a specific order.  ParkMyCloud allows users to create Logical Groups, which show up as a single entity in the interface but schedule multiple instances behind the scenes.  You can also set start or stop delays within the Logical Group to customize the order, so if databases need to be started first and stopped last, you can set that level of granularity.
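
For comparison, the ordering logic a homegrown script has to encode looks something like the sketch below; the instance identifiers and the warm-up delay are made up for the example, and real code would poll instance status rather than sleep.

```python
import time

import boto3

ec2 = boto3.client("ec2")
rds = boto3.client("rds")

# Hypothetical QA environment: databases must come up before the app servers
# and go down after them. A Logical Group handles this ordering for you; a
# DIY script has to encode it explicitly.
DB_IDENTIFIERS = ["qa-db-1", "qa-db-2"]        # made-up database identifiers
APP_INSTANCE_IDS = ["i-0123456789abcdef0"]     # made-up EC2 instance ID
DB_WARMUP_SECONDS = 300                        # assumed delay before app start

def start_environment():
    for db in DB_IDENTIFIERS:
        rds.start_db_instance(DBInstanceIdentifier=db)
    time.sleep(DB_WARMUP_SECONDS)   # crude wait; real code should poll status
    ec2.start_instances(InstanceIds=APP_INSTANCE_IDS)

def stop_environment():
    ec2.stop_instances(InstanceIds=APP_INSTANCE_IDS)
    for db in DB_IDENTIFIERS:
        rds.stop_db_instance(DBInstanceIdentifier=db)
```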

Govern user access to databases

If your AWS account includes RDS databases that relate to dev, QA, staging, production, test, and UAT, then you’ll want to allow different users to access different databases based on their role or current project.  Implementing user governance in your own scripts can be a huge hassle, but ParkMyCloud makes it easy to split your user base into teams.  Users can be part of multiple teams if necessary, but by default they will only see the RDS databases that are in the teams they have access to.

High visibility into all AWS accounts and regions

Scripting your own schedules can be a challenge with a single region or account, but once you’re using RDS databases from around the world or across multiple AWS accounts, you’re in for a much bigger one.  ParkMyCloud pulls all resources from all accounts and all AWS regions into one pane of glass, so it’s easy to apply schedules and keep an eye on all your RDS databases.

RDS DevOps automation

Integrating your own custom scripts with your DevOps processes is yet another hurdle.  With ParkMyCloud, you have multiple options for automation.  With the Policy Engine, RDS instances can have schedules applied automatically based on tags, names, or locations.  Also, the ParkMyCloud API makes it easy to override schedules and toggle instances from your Slack channels, CI/CD tools, load-testing apps, and any other automated processes that might need a database instance powered on for a brief time.
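
As a rough idea of what such an API-driven toggle looks like from, say, a CI job: the endpoint, resource ID, and payload below are placeholders rather than the actual ParkMyCloud API, so check the API documentation for the real paths and parameters.

```python
import os

import requests

# Hypothetical call from a CI job: ask the scheduling service to snooze a
# parked database for two hours so an automated load test can run against it.
# The base URL, resource ID, and payload are placeholders, not the real
# ParkMyCloud API.
API_BASE = "https://api.example-parking-service.com/v1"
HEADERS = {"Authorization": f"Bearer {os.environ['PARKING_API_TOKEN']}"}

response = requests.post(
    f"{API_BASE}/resources/12345/snooze",   # placeholder resource ID and path
    json={"hours": 2},
    headers=HEADERS,
    timeout=30,
)
response.raise_for_status()
```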

Conclusion

Shutting down RDS instances is a huge money-saver.  Anyone looking to implement their own enterprise-grade AWS RDS start/stop scheduler is going to run into many challenges along the way.  Luckily, ParkMyCloud is on top of things and has implemented RDS parking alongside the robust feature set you already use for cost savings.  Sign up for a free trial today to supercharge your RDS database scheduling!

Interview: Hybrid Events Group + ParkMyCloud to Automate EC2 Instance Scheduling and Optimize AWS Infrastructure

We talked with Jedidiah Hurt, DevOps and technical lead at Hybrid Events Group, about how his company is using ParkMyCloud to automate EC2 instance scheduling, saving hours of development work. Below is a transcript of our conversation.

Appreciate you taking the time to speak with us today. Can you start off by giving us some background on your role, what Hybrid Events Group does, and why you got into doing what you do?

I do freelance work for Hybrid Events Group and am now moving into the role of technical lead. We had a big client we were working with this spring and we needed to fire up several EC2 instances. We were doing live broadcasting events across the U.S., which is what the company specializes in – events A/V services. So we do live webcasting, and we can do CapturePro, another service we offer where we basically just show up to any event that someone would want to record, which usually is workshops and keynotes at tech conferences, and we record on video and also capture the presenter’s presentation in video in real time.

ParkMyCloud, what we used it for, was just to automate EC2 instances for doing live broadcasts.

Was there any reason you chose AWS over others like Azure or Google Cloud, out of curiosity?

I just had the most experience with AWS; I was using AWS before Azure and Google Cloud existed. So I haven’t, or I can’t say that I’ve actually really given much of a trial to Azure or Google Cloud. I might have to give them a look here sometime in the future.

Do you use any PaaS services in AWS, or do you focus on compute, databases, and storage?

Yeah, not a whole lot right now. Just your basic S3, EC2, and I think we are probably going to move into elastic load balancing and auto scaling groups within the next few months or so as we build out our platform.

Do you use Agile development process to build out your platform and provide continuous delivery?

So, I am an agile practitioner, but we are just kind of brown fielding the platform. We are in the architecture stage right now, so we will be doing all of that, as far as continuous deployment, and hopefully continuous integration where we actually have some automated testing.

As far as tools, I’m the only developer on the team right now, so we won’t really have a full Agile or be fully into Agile. We haven’t got boards and sprints and planning, weekly meetings, and all those things, because it’s just me. But we integrate portions of it, as far as having stakeholders kind of figuring out what our minimum viable product is.

What drove you to look for something like ParkMyCloud, and how did you come across it?

ParkMyCloud enabled us to automate a process that we were going to do manually, or that I was going to have to write scripts for and maintain. I think initially I was looking into just using the AWS CLI, and some other kind of task scheduler, to bring up the instances and then turn them off after our daily broadcast session was over. I did a little bit of googling to see if there were any time-based solutions available and found ParkMyCloud, and this platform does exactly what’s needed and more.

And you are using the free tier ParkMyCloud, correct?

Yes. I don’t remember what the higher tiers offered, but this was all we really needed. We just had three or four large EC2 instances that we wanted to bring up for four to five hours a day, Monday through Friday, so it had all the core features that we currently need.

Anything that stood out for you in terms of using the product?

I’d say on the plus side I was a little bit concerned at the beginning as far as the reliability of the tool, because we would have been in big trouble with our client if ParkMyCloud failed to bring up an instance at a scheduled start time. We used it, or I guess I would say we relied on it, every day for 2 months solid, and never saw any issues as far as instances not coming up when they were supposed to, or shutting down when they were not supposed to. I was really pleased with, what I would say, the reliability of the tool – that definitely stuck out to me.

From an ROI standpoint, are you satisfied with savings and the way the information is presented to you?

Yeah, absolutely. And I think for us, the ROI wasn’t so much the big difference between having the instances running all the time, or having the instances on a schedule. The ROI was more from the fact that I didn’t have to build the utility to accomplish that because you guys already did that. So in that sense, it probably saved me many hours of development work.

Also, that kind of uneasy feeling you get when you hack up a little script and put it into production versus having a well-tested, fully-automated platform. I’m really happy that we found ParkMyCloud; it has definitely become an important part of our infrastructure management over the last few months.

As our final question, how much overhead or time did you have to spend in getting ParkMyCloud set up to manage your environment, and did you have to do anything on a daily or weekly basis to maintain it?

So, as I said, our particular use case was very basic, so it ended up being three instances that we needed to bring up for three or four hours a day and then shut them down. I’d say it took me ten to fifteen minutes to get rolling with ParkMyCloud and automate EC2 instance scheduling. And now we save thousands of dollars per month on our AWS bill.

Cloud Optimization Tools = Cloud Cost Control

Over the past couple of years we have had a lot of conversations with large and small enterprises regarding cloud management and cloud optimization tools, all of whom were looking for cost control. They wanted to reduce their bills, just like any utility you might run at home — why spend more than you need to? Amazon Web Services (AWS) actively promotes optimizing cloud infrastructure, and where they lead, others follow. AWS even goes so far as to suggest the following simple steps to control AWS costs:

  1. Right-size your services to meet capacity needs at the lowest cost;
  2. Save money when you reserve;
  3. Use the spot market;
  4. Monitor and track service usage;
  5. Use Cost Explorer to optimize savings; and
  6. Turn off idle instances (we added this one).

It’s interesting to note AWS’s use of the word ‘control’ even though the section is labeled Cost Optimization.

So where is all of this headed? It’s great that AWS offers their own solutions, but what if you want automation built into your DevOps processes, multi-cloud support (or plan to be multi-cloud), real-time reporting on these savings, and the ability to turn stuff off when you are not using it? Well, then you likely need a third-party tool to help with these tasks.

Let’s take a quick look at a description of each AWS recommendation above, and get a better understanding of each offering. Following this we will then explore if these cost optimization options can be automated as part of a continuous cost control process:

  1. Right-sizing – Both the EC2 Right Sizing solution and AWS Trusted Advisor analyze utilization of EC2 instances running during the prior two weeks. The EC2 Right Sizing solution analyzes all instances with a max CPU utilization less than 50% and determines a more cost-effective instance type for that workload, if available (see the sketch after this list for a rough approximation of that check).
  2. Reserved Instances (RI) – For certain services like Amazon EC2 and Amazon RDS, you can invest in reserved capacity. With RIs, you can save up to 75% over equivalent ‘on-demand’ capacity. RIs are available in three options: (1) all upfront, (2) partial upfront, or (3) no upfront payment.
  3. Spot – Amazon EC2 Spot instances allow you to bid on spare Amazon EC2 computing capacity. Since Spot instances are often available at a discount compared to On-Demand pricing, you can significantly reduce the cost of running your applications, grow your application’s compute capacity and throughput for the same budget, and enable new types of cloud computing applications.
  4. Monitor and Track Usage – You can use Amazon CloudWatch to collect and track metrics, monitor log files, set alarms, and automatically react to changes in your AWS resources. You can also use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health.
  5. Cost Explorer – AWS Cost Explorer gives you the ability to analyze your costs and usage. Using a set of default reports, you can quickly get started with identifying your underlying cost drivers and usage trends. From there, you can slice and dice your data along numerous dimensions to dive deeper into your costs.
  6. Turn off Idle Instances – “Park” your cloud resources by assigning them schedules of operating hours during which they run and outside of which they are temporarily stopped – i.e. parked. Most non-production resources (dev, test, staging, and QA) can be parked at nights and on weekends, when they are not being used. On the flip side, some batch-processing or load-testing applications can only run during non-business hours, so they can be shut down during the day.
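
Referring back to the right-sizing item above, a rough approximation of that utilization check can be put together from CloudWatch metrics. This sketch uses hourly averages over the past two weeks as a stand-in for the solution’s exact methodology, and it only flags candidates, leaving the choice of a cheaper instance type aside.

```python
from datetime import datetime, timedelta

import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")

def underutilized_instances(threshold=50.0, days=14):
    """Flag running instances whose hourly average CPU never reached the threshold."""
    candidates = []
    reservations = ec2.describe_instances(
        Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
    )["Reservations"]
    for reservation in reservations:
        for instance in reservation["Instances"]:
            datapoints = cloudwatch.get_metric_statistics(
                Namespace="AWS/EC2",
                MetricName="CPUUtilization",
                Dimensions=[{"Name": "InstanceId", "Value": instance["InstanceId"]}],
                StartTime=datetime.utcnow() - timedelta(days=days),
                EndTime=datetime.utcnow(),
                Period=3600,                 # hourly buckets
                Statistics=["Average"],
            )["Datapoints"]
            if datapoints and max(p["Average"] for p in datapoints) < threshold:
                candidates.append(instance["InstanceId"])
    return candidates

if __name__ == "__main__":
    print(underutilized_instances())
```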

Many of these AWS solutions offer recommendations, but they require manual effort to gain the benefits. This is why third-party solutions have seen widespread adoption; they include cloud management, cloud governance and visibility, and cloud optimization tools. In part two of this blog we will have a look at some of those tools, the benefits and approach of each, and the level of automation to be gained.
