Blog - ParkMyCloud

Cloud Optimization Tools = Cloud Cost Control (Part II)

A couple of weeks ago in Part 1 of this blog topic we discussed the need for cloud optimization tools to help enterprises with the problem of cloud cost control. Amazon Web Services (AWS) even goes as far as suggesting the following simple steps to control your costs (which can also be applied to Microsoft Azure and Google Cloud Platform, but of course with slightly different terminology):

    1. Right-size your services to meet capacity needs at the lowest cost;
    2. Save money when you reserve;
    3. Use the spot market;
    4. Monitor and track service usage;
    5. Use Cost Explorer to optimize savings; and
    6. Turn off idle instances (we added this one).

A variety of third-party tools and services have popped up in the market over the past few years to help with cloud cost optimization – why? Because upwards of $23B was spent on public cloud infrastructure in 2016, and spending continues to grow at a rate of 40% per year. Furthermore, depending on who you talk to, roughly 25% of public cloud spend is wasted or not optimized — that’s a huge market! If left unchecked, this waste problem is projected to triple to over $20B by 2020 – enter the vultures (full disclosure, we are also a vulture, but the nice kind). Most of these tools are lumped under the Cloud Management category, which includes subcategories like Cost Visibility and Governance, Cost Optimization, and Cost Control vendors – we are a cost control vendor, to be sure.

Why do you, an enterprise, care? Because there are subtle but important differences between the tools that fit into these categories, so your use case should dictate where you go for what – and that’s what I am trying to help you with. So, why am I a credible source to write about this (and not just because ParkMyCloud is the best thing since sliced bread)?

Well, yesterday we had a demo with a FinTech company in California that was interested in Cost Control, or thought they were. It turns out that what they were actually interested in was Cost Visibility and Reporting; the folks we talked to were in Engineering Finance, so their concerns were primarily with billing metrics, business unit chargeback for cloud usage, RI management, and dials and widgets to view all stuff AWS and GCP billing related. Instead of trying to force a square peg into a round hole, we passed them on to a company in this space that’s better suited to solve their immediate needs. In return, the Finance folks are going to put us in touch with the FinTech Cloud Ops folks who care about automating their cloud cost control as part of their DevOps processes.

This type of situation happens more often than not. We have a lot of enterprise customers using ParkMyCloud along with CloudHealth, CloudCheckr, Cloudability, and Cloudyn because in general, they provide Cost Visibility and Governance, and we provide actionable, automated Cost Control.

As this is our blog, and my view from the street – we have 200+ customers now using ParkMyCloud, and we demo to 5-10 enterprises per week. Based on a couple of generic customer use cases where we have strong familiarity, here’s what you need to know to stay ahead of the game:

  • Cost Visibility and Governance: CloudHealth, CloudCheckr, Cloudability and Cloudyn (now owned by Microsoft)
  • Reserved Instance (RI) management – all of the above
  • Spot Instance management – SpotInst
  • Monitor and Track Usage: CloudHealth, CloudCheckr, Cloudability and Cloudyn
  • Turn off (park) Idle Resources – ParkMyCloud, Skeddly, Gorilla Stack, BotMetric
  • Automate Cost Control as part of your DevOps Process: ParkMyCloud
  • Govern User Access to Cloud Console for Start/Stop: ParkMyCloud
  • Integrate with Single Sign-On (SSO) for Federated User Access: ParkMyCloud

To summarize, cloud cost control is important, and there are many cloud optimization tools available to assist with visibility, governance, management, and control of your single or multi-cloud environments. However, there are very few tools that allow you to set up automated actions leveraging your existing enterprise tools like Ping, Okta, Atlassian, Jenkins, and Slack. Make sure you are not only focusing on cost visibility and recommendations, but also on action-oriented platforms to really get the best bang for your buck.

Read more ›

How to Optimize Cloud Spend with ParkMyCloud

The focus on how to optimize cloud spend is now as relentless as the initial surge to migrate workloads from ‘on-prem’ to public cloud. A lot of this focus, and the resulting discussions, has been about options related to the use of Reserved Instances (RIs), Spot Instances, or other pre-pay options. The pay-up-front discount plan makes sense when you have some degree of visibility into future needs, and when there is no ‘turn-it-off’ option, which we here at ParkMyCloud call “parking”.

When it comes to the ability to ‘park instances’ we like to divide the world into two halves. There are Production Systems, which typically need to be running 24/7/365, and then there are Non-Production Systems, which at least in theory have the potential to be parked when not in use. The former are typically your end-customer or enterprise-facing systems, which need to be online and available at all times. In this case, RIs typically make sense. When it comes to those non-production systems, that’s where a tool such as ParkMyCloud comes into play. Here you have an opportunity to review the usage patterns and needs of your organization and optimize cloud spend accordingly. For example, you may well discover that your QA team never works on weekends, so you can turn their EC2 instances off on a Friday night and turn them back on first thing on Monday morning. Elsewhere, you might find other workloads that can be turned off in the small hours, or even workloads that can be left off for extended periods.

Our customers typically like to view both their production and non-production systems in our simple dashboard. Here they can view all their public cloud infrastructure and simply lock those production systems which cannot be touched. Once within the dashboard, the different non-production workloads can then be reviewed and either centrally managed by an admin or have their management delegated to individual business units or teams.

Based on the customer usage we track, we see these non-production systems typically accounting for about 50% of what companies spend on compute (i.e. instances / VMs). We then see those who aggressively manage these non-production instances saving up to 65% of their cost, which makes a large dent in their overall cloud bill.

So, when you are thinking about how to optimize cloud spend, there are many more opportunities than just committing to purchase in advance, especially for your non-production workloads.

Read more ›

Shutting Down RDS Instances in AWS – Introducing the Start/Stop Scheduler

Users of Amazon’s database service have been clamoring for a way to shut down RDS instances on an automatic schedule ever since 2009, when the PaaS service was first released.  Once Amazon announced the ability to power RDS instances off and on earlier this year, AWS users started planning out ways to schedule these instances using scripts or home-grown tools.  However, users of ParkMyCloud were happy to find out that support for RDS scheduling was immediately available in the platform.  If you were planning on writing your own scripts for RDS parking, let’s take a look at some of the additional features that ParkMyCloud could provide for you.
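
For comparison, a minimal home-grown parking script might look something like the boto3 sketch below; the instance identifier and region are illustrative, not from any particular environment.

```python
# Minimal sketch of a home-grown RDS parking script (boto3). The
# instance identifier and region are illustrative only.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

def park_rds(instance_id, action):
    """Stop or start a single RDS instance."""
    if action == "stop":
        rds.stop_db_instance(DBInstanceIdentifier=instance_id)
    elif action == "start":
        rds.start_db_instance(DBInstanceIdentifier=instance_id)

# Typically wired to a scheduler: stop at night, start in the morning.
park_rds("qa-database", "stop")
```

Even this toy version hints at the work to come: error handling, multiple accounts and regions, and coordinating databases with the instances that depend on them.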

Schedule EC2 and ASG in addition to RDS

Very few AWS users are utilizing RDS databases without simultaneously running EC2 instances as compute resources.  This means that writing your own scheduling scripts for shutting down RDS instances would involve scheduling EC2 instances as well.

ParkMyCloud has support for parking EC2 resources, RDS databases, and Auto Scaling Groups all from the same interface, so it’s easy to apply on/off schedules to all of your cloud resources.

Logical Groups to tie instances together

Let’s say you have a QA environment with a couple of RDS databases and multiple EC2 instances running a specific version of your software. With custom scripts, you have to implement logic that will shut down and start up all of those instances together, and potentially in a specific order.  ParkMyCloud allows users to create Logical Groups, each of which shows up as one logical entity in the interface but schedules multiple instances behind it.  You can also set start or stop delays within the Logical Group to customize the order, so if databases need to be started first and stopped last, you can set that level of granularity.
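
To illustrate what the custom-script route entails, here is a rough sketch of that ordering logic; the resource identifiers and delay durations are hypothetical, and ParkMyCloud’s Logical Groups handle this sequencing for you.

```python
# Rough sketch of the start/stop ordering a custom script would need.
# Resource identifiers and delay durations are hypothetical.
import time
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
rds = boto3.client("rds", region_name="us-east-1")

DATABASES = ["qa-db-1", "qa-db-2"]        # start first, stop last
APP_SERVERS = ["i-0123456789abcdef0"]     # start last, stop first

def start_environment():
    for db in DATABASES:
        rds.start_db_instance(DBInstanceIdentifier=db)
    time.sleep(300)   # crude wait for the databases to become available
    ec2.start_instances(InstanceIds=APP_SERVERS)

def stop_environment():
    ec2.stop_instances(InstanceIds=APP_SERVERS)
    time.sleep(120)   # let the app servers shut down cleanly
    for db in DATABASES:
        rds.stop_db_instance(DBInstanceIdentifier=db)
```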

Govern user access to databases

If your AWS account includes RDS databases that relate to dev, QA, staging, production, test, and UAT, then you’ll want to allow different users to access different databases based on their role or current project.  Implementing user governance in your own scripts can be a huge hassle, but ParkMyCloud makes it easy to split your user base into teams.  Users can be part of multiple teams if necessary, but by default they will only see the RDS databases that are in the teams they have access to.

High visibility into all AWS accounts and regions

Scripting your own schedules can be a challenge even with a single region or account, but once you’re using RDS databases from around the world or across AWS accounts, the difficulty multiplies.  ParkMyCloud pulls all resources from all accounts and all AWS regions into one pane of glass, so it’s easy to apply schedules and keep an eye on all your RDS databases.

RDS DevOps automation

It can be a challenge to integrate your own custom scripts with your DevOps processes.  With ParkMyCloud, you have multiple options for automation.  With the Policy Engine, RDS instances can have schedules applied automatically based on tags, names, or locations.  Also, the ParkMyCloud API makes it easy to override schedules and toggle instances from your Slack channels, CI/CD tools, load-testing apps, and any other automated processes that might need a database instance powered on for a brief time.
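
For a sense of what the Policy Engine automates, the tag-driven half of that logic, written by hand, might look like the sketch below; the tag key and value are purely illustrative.

```python
# Hand-rolled sketch of tag-driven parking, the kind of rule the Policy
# Engine applies automatically. The tag key and value are illustrative.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

def stop_tagged_databases(tag_key="parking-schedule",
                          tag_value="nights-and-weekends"):
    for db in rds.describe_db_instances()["DBInstances"]:
        tags = rds.list_tags_for_resource(
            ResourceName=db["DBInstanceArn"])["TagList"]
        if {"Key": tag_key, "Value": tag_value} in tags:
            if db["DBInstanceStatus"] == "available":
                rds.stop_db_instance(
                    DBInstanceIdentifier=db["DBInstanceIdentifier"])

stop_tagged_databases()
```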

Conclusion

Shutting down RDS instances is a huge money-saver.  Anyone who is looking to implement their own enterprise-grade AWS RDS start/stop scheduler is going to run into many challenges along the way.  Luckily, ParkMyCloud is on top of things and has implemented RDS parking alongside the robust feature set you already use for cost savings.  Sign up for a free trial today to supercharge your RDS database scheduling!

Read more ›

Interview: Hybrid Events Group + ParkMyCloud to Automate EC2 Instance Scheduling and Optimize AWS Infrastructure

We talked with Jedidiah Hurt, DevOps and technical lead at Hybrid Events Group, about how his company is using ParkMyCloud to automate EC2 instance scheduling, saving hours of development work. Below is a transcript of our conversation.

Appreciate you taking the time to speak with us today. Can you start off by giving us some background on your role, what Hybrid Events Group does, and why you got into doing what you do?

I do freelance work for Hybrid Events Group and am now moving into the role of technical lead. We had a big client we were working with this spring and we needed to fire up several EC2 instances. We were doing live broadcasting events across the U.S., which is what the company specializes in – event A/V services. So we do live webcasting, and we can do CapturePro, another service we offer where we basically just show up to any event that someone would want to record, which usually is workshops and keynotes at tech conferences, and we record on video and also capture the presenter’s presentation in video in real time.

ParkMyCloud, what we used it for, was just to automate EC2 instances for doing live broadcasts.

Was there any reason you chose AWS over others like Azure or Google Cloud, out of curiosity?

I just had the most experience with AWS; I was using AWS before Azure and Google Cloud existed. So I haven’t, or I can’t say that I’ve actually really given much of a trial to Azure or Google Cloud. I might have to give them a look here sometime in the future.

Do you use any PaaS services in AWS, or do you focus on compute databases and storage?

Yeah, not a whole lot right now. Just your basic S3, EC2, and I think we are probably going to move into elastic load balancing and auto scaling groups within the next few months or so as we build out our platform.

Do you use Agile development process to build out your platform and provide continuous delivery?

So, I am an agile practitioner, but we are just kind of brown fielding the platform. We are in the architecture stage right now, so we will be doing all of that, as far as continuous deployment, and hopefully continuous integration where we actually have some automated testing.

As far as tools, I’m the only developer on the team right now, so we won’t really have a full Agile or be fully into Agile. We haven’t got boards and sprints and planning, weekly meetings, and all those things, because it’s just me. But we integrate portions of it, as far as having stakeholders kind of figuring out what our minimum viable product is.

What drove you to look for something like ParkMyCloud, and how did you come across it?

ParkMyCloud enabled us to automate a process that we were going to do manually, or that I was going to have to write scripts for and maintain. I think initially I was looking into just using the AWS CLI, and some other kind of task scheduler, to bring up the instances and then turn them off after our daily broadcast session was over. I did a little bit of googling to see if there were any time-based solutions available and found ParkMyCloud, and this platform does exactly what’s needed and more.

And you are using the free tier of ParkMyCloud, correct?

Yes. I don’t remember what the higher tiers offered, but this was all we really needed. We just had three or four large EC2 instances that we wanted to bring up for four to five hours a day, Monday through Friday, so it had all the core features that we currently need.

Anything that stood out for you in terms of using the product?

I’d say on the plus side I was a little bit concerned at the beginning as far as the reliability of the tool, because we would have been in big trouble with our client if ParkMyCloud failed to bring up an instance at a scheduled start time. We used it, or I guess I would say we relied on it, every day for 2 months solid, and never saw any issues as far as instances not coming up when they were supposed to, or shutting down when they were not supposed to. I was really pleased with, what I would say, the reliability of the tool – that definitely stuck out to me.

From an ROI standpoint, are you satisfied with savings and the way the information is presented to you?

Yeah, absolutely. And I think for us, the ROI wasn’t so much the big difference between having the instances running all the time, or having the instances on a schedule. The ROI was more from the fact that I didn’t have to build the utility to accomplish that because you guys already did that. So in that sense, it probably saved me many hours of development work.

Also, that kind of uneasy feeling you get when you hack up a little script and put it into production versus having a well-tested, fully-automated platform. I’m really happy that we found ParkMyCloud; it has definitely become an important part of our infrastructure management over the last few months.

As our final question, how much overhead or time did you have to spend in getting ParkMyCloud set up to manage your environment, and did you have to do anything on a daily or weekly basis to maintain it?

So, as I said, our particular use case was very basic, so it ended up being three instances that we needed to bring up for three or four hours a day and then shut them down. I’d say it took me ten to fifteen minutes to get rolling with ParkMyCloud and automate EC2 instance scheduling. And now we save thousands of dollars per month on our AWS bill.

Read more ›

Cloud Optimization Tools = Cloud Cost Control

Over the past couple of years we have had a lot of conversations with large and small enterprises regarding cloud management and cloud optimization tools, all of whom were looking for cost control. They wanted to reduce their bills, just like any utility you might run at home — why spend more than you need to? Amazon Web Services (AWS) actively promotes optimizing cloud infrastructure, and where they lead, others follow. AWS even goes so far as to suggest the following simple steps to control AWS costs:

  1. Right-size your services to meet capacity needs at the lowest cost;
  2. Save money when you reserve;
  3. Use the spot market;
  4. Monitor and track service usage;
  5. Use Cost Explorer to optimize savings; and
  6. Turn off idle instances (we added this one).

It’s interesting to note the use of the word ‘control’ even though the section is labeled Cost Optimization.

So where is all of this headed? It’s great that AWS offers their own solutions, but what if you want automation built into your DevOps processes, multi-cloud support (or plan to be multi-cloud), real-time reporting on these savings, and to turn stuff off when you are not using it? Well, then you likely need a third-party tool to help with these tasks.

Let’s take a quick look at a description of each AWS recommendation above, and get a better understanding of each offering. Following this, we will explore whether these cost optimization options can be automated as part of a continuous cost control process:

  1. Right-sizing – Both the EC2 Right Sizing solution and AWS Trusted Advisor analyze utilization of EC2 instances running during the prior two weeks. The EC2 Right Sizing solution analyzes all instances with a max CPU utilization less than 50% and determines a more cost-effective instance type for that workload, if available.
  2. Reserved Instances (RI) – For certain services like Amazon EC2 and Amazon RDS, you can invest in reserved capacity. With RIs, you can save up to 75% over equivalent ‘on-demand’ capacity. RIs are available in three payment options – (1) all upfront, (2) partial upfront, or (3) no upfront.
  3. Spot – Amazon EC2 Spot instances allow you to bid on spare Amazon EC2 computing capacity. Since Spot instances are often available at a discount compared to On-Demand pricing, you can significantly reduce the cost of running your applications, grow your application’s compute capacity and throughput for the same budget, and enable new types of cloud computing applications.
  4. Monitor and Track Usage – You can use Amazon CloudWatch to collect and track metrics, monitor log files, set alarms, and automatically react to changes in your AWS resources. You can also use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health (see the sketch after this list).
  5. Cost Explorer – AWS Cost Explorer gives you the ability to analyze your costs and usage. Using a set of default reports, you can quickly get started with identifying your underlying cost drivers and usage trends. From there, you can slice and dice your data along numerous dimensions to dive deeper into your costs.
  6. Turn off Idle Instances – “Park” your cloud resources by assigning them schedules of operating hours during which they run, and outside of which they are temporarily stopped – i.e. parked. Most non-production resources (dev, test, staging, and QA) can be parked at nights and on weekends, when they are not being used. On the flip side, some batch processing or load testing applications can only run during non-business hours, so they can be shut down during the day.
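
To make the monitoring step concrete, here is a minimal sketch of pulling the two-week CPU utilization signal from CloudWatch – the same signal the right-sizing analysis in item 1 uses. The instance ID is hypothetical.

```python
# Minimal sketch: fetch two weeks of average CPU utilization for one EC2
# instance from CloudWatch. The instance ID is hypothetical.
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(days=14),
    EndTime=datetime.utcnow(),
    Period=3600,                # one datapoint per hour
    Statistics=["Average"],
)

points = stats["Datapoints"]
avg_cpu = sum(p["Average"] for p in points) / max(len(points), 1)
print(f"14-day average CPU: {avg_cpu:.1f}%")  # low values suggest right-sizing
```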

Many of these AWS solutions offer recommendations, but do require manual effort to gain the benefits. This is why third-party solutions have seen widespread adoption; they include cloud management, cloud governance and visibility, and cloud optimization tools. In part two of this blog we will have a look at some of those tools, the benefits and approach of each, and the level of automation to be gained.

Read more ›

Cloud Cost Management Tool Comparison

Not only has it become apparent that public cloud is here to stay, it’s also growing faster as time goes on (by 2020, it is estimated that more than 40% of enterprise workloads will be in the cloud). IT infrastructure has changed permanently, and enterprise organizations are coming to terms with some of the side effects of this shift.  One of those side effects is the need for tools and processes (and even teams in larger organizations) dedicated to cloud cost management and cost control.  Executives from all teams within an organization want to see costs, projections, usage, savings, and quantifiable efforts to save the company money while maximizing IT throughput as enterprises shift resources to the cloud.

There’s a variety of tools to solve some of these problems, so let’s take a look at a few of the major ones.  All of the tools mentioned below support Amazon AWS, Microsoft Azure, and Google Cloud Platform.

CloudHealth

CloudHealth provides detailed analytics and reporting on your overall cloud spend, with the ability to slice-and-dice that data in a variety of ways.  Recommendations about your instances are made based on a score driven by instance utilization and cloud provider best practices. This data is collected from agents that are installed on the instances, along with cloud-level information.  Analysis and business intelligence tools for cloud spend and infrastructure utilization are featured prominently in the dashboard, with governance provided through policies driven by teams for alerts and thresholds.  Some actions can be scripted, such as deleting elastic IPs/snapshots and managing EC2 instances, but reporting and dashboards are the main focus.

Overall, the platform seems to be a popular choice for large enterprises wanting cost and governance visibility across their cloud infrastructure.  Pricing is based on a percentage of your monthly cloud spend.

CloudCheckr

CloudCheckr provides visibility into governance, security, compliance, and cost problems by running analytics and checks against logic built into the platform. It relies on non-native tools and integrations, such as Spotinst, Ansible, or Chef, to take action on the recommendations.  CloudCheckr’s reports cover a wide range of topics, including inventory, utilization, security, costs, and overall best practices. The UI is simple and is likely equally well regarded by technical and non-technical users.

The platform seems to be a popular choice with small and medium-sized enterprises looking for greater overall visibility and recommendations to help optimize their use of cloud.  Given their SMB focus, customers are often provided this service through MSPs. Pricing is based on your cloud spend, but a free tier is also available.

Cloudyn

Cloudyn (recently acquired by Microsoft) is focused on providing advice and recommendations along with chargeback and showback capabilities for enterprise organizations. Cloud resources and costs can be managed through their hierarchical team structure.  Visibility, alerting, and recommendations are made in real time to assist in right-sizing instances and identifying outlying resources.  Like CloudCheckr, it relies on external tools or people to act upon recommendations and lacks automation.

Their platform options include supporting MSPs in the management of their end customers’ cloud environments, as well as an interesting cloud benchmarking service called Cloudyndex.  Pricing for Cloudyn is also based on your monthly cloud spend.  Much of the focus seems to be on current Microsoft Azure customers and users.

ParkMyCloud

Unlike the other tools mentioned, ParkMyCloud focuses on actions and automated scheduling of resources to provide optimization and immediate ROI.  Reports and dashboards are available to show the cost savings provided by these schedules and recommendations on which instances to park.  The schedules can be manually attached to instances, or automatically assigned based on tags or naming schemes through its Policy Engine.  It pairs well with the other previously mentioned recommendation-based tools in this space to provide total cost control through both actions and reporting.

ParkMyCloud is widely used by DevOps and IT Ops in organizations from small startups to global multinationals, all of whom are keen to automate cost control by leveraging ParkMyCloud’s native API and pre-built integrations with tools like Slack, Atlassian, and Jenkins.  Pricing is based on a cost per instance, with a free tier available.

Conclusion

Cloud cost management isn’t just a “should think about” item, it’s a “must have in place” item, regardless of the size of a company’s cloud bill.  Specialized tools can help you view, manage, and project your cloud costs no matter which provider you choose.  The right toolkit can supercharge your IT infrastructure, so consider a combination of some of the tools above to really get the most out of your AWS, Azure, or Google environment.

Read more ›

Cloud Webhooks – Notification Options for System Level Alerts to Improve your Cloud Operations

Webhooks are user-defined HTTP POST callbacks. They provide a lightweight mechanism for letting remote applications receive push notifications from a service or application, without requiring polling. In today’s IT infrastructure that includes monitoring tools, cloud providers, DevOps processes, and internally-developed applications, webhooks are a crucial way to communicate between individual systems for a cohesive service delivery. Now, in ParkMyCloud, webhooks are available for even more powerful cost control.

For example, you may want to let a monitoring solution like DataDog or New Relic know that ParkMyCloud is stopping a server for some period of time, and therefore suppress alerts to that monitoring system for the period the server will be parked – and, vice versa, re-enable monitoring once the server is unparked (turned on). Another example would be to have ParkMyCloud post to a chatroom or dashboard when schedules have been overridden by users. We do this by enabling system notifications to our cloud webhooks.
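
As a rough illustration, the receiving end of such a webhook could be as small as the Flask sketch below; the payload fields it checks are assumptions for the example, not ParkMyCloud’s documented schema.

```python
# Illustrative webhook receiver (Flask). The payload fields checked here
# are assumptions for the sketch, not ParkMyCloud's documented schema.
from flask import Flask, request

app = Flask(__name__)

@app.route("/parkmycloud-hook", methods=["POST"])
def handle_notification():
    event = request.get_json(force=True)
    # e.g., suppress monitoring alerts while a resource is parked
    if event.get("type") == "parking_action" and event.get("action") == "stop":
        mute_monitoring(event.get("resource"))
    return "", 204

def mute_monitoring(resource):
    # Placeholder: call your monitoring system's API here.
    print(f"Would mute alerts for {resource}")

if __name__ == "__main__":
    app.run(port=8080)
```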

Previously, only two options were provided when configuring system level and user notifications in ParkMyCloud: System Errors and Parking Actions. We have added three new notification options for both system level and user notifications. Descriptions for all five options are provided below:

  • System Errors – These are errors occurring within the system itself such as discovery errors, parking errors, invalid credential permissions, etc.
  • System Maintenance and Updates – These are the notifications provided via the banner at the top of the dashboard.
  • User Actions – These are actions performed by users in ParkMyCloud such as manual resource state toggles, attachment or detachment of schedules, credential updates, etc.
  • Parking Actions – These are actions specifically related to parking such as automatic starting or stopping of resources based on defined parking schedules.
  • Policy Actions – These are actions specifically related to configured policies in ParkMyCloud such as automatic schedule attachments based on a set rule.

We have made the options more granular to give you better control over which events you do and do not see.

These options can be seen when adding or modifying a channel for system level notifications (Settings > System Level Notifications). In the image shown below, a channel is being added.

Note: For additional information regarding these options, click on the Info Icon to the right of Notify About.

The new notification options are also viewable by users who want to set up their own notifications (Username > My Profile).  These personal notifications are sent via email to the address associated with your user.  Personal notifications can be set up by any user, while Webhooks must be set up by a ParkMyCloud Admin.

After clicking on Notifications, you will see the above options and may use the checkboxes to select the notifications you want to receive. You can also set each webhook to handle a specific ParkMyCloud team, then set up multiple webhooks to handle different parts of your organization.  This offers maximum flexibility based on each team’s tools, processes, and procedures. Once finished, click on Save Changes. Any of these notifications can then be sent to your cloud webhook, and even to Slack, to ensure ParkMyCloud is integrated into your cloud management operations.


Read more ›

Saving Money on Batch Workloads in Public Cloud


Large companies have traditionally had an impressive list of batch workloads, which run at night, when people have gone home for the day. These include such things as application and database backup jobs; extract, transform, and load (ETL) jobs; disaster recovery (DR) environment checks and updates; online analytical processing (OLAP) jobs; and monthly/quarterly billing updates or financial “close”, to name a few.

Traditionally, with on-premise data centers, these workloads have run at night to allow the same hardware infrastructure that supports daytime interactive workloads to be repurposed, if you will, to run these batch workloads at night. This served a couple of purposes:

  • It avoided network contention between the two workloads (as both are important), allowing the interactive workloads to remain responsive.
  • It avoided data center sprawl by using the same infrastructure to run both, rather than having dedicated infrastructure for interactive and batch.

Things Are Different with Public Cloud

As companies move to the public cloud, they are no longer constrained by having to repurpose the same infrastructure. In fact, they can spin up and spin down new resources on demand in AWS, Azure or Google Cloud Platform (GCP), running both interactive and batch workloads whenever they want.

Network contention is also less of a concern, since the public cloud providers typically have plenty of bandwidth. The exception, of course, is where batch workloads use the same application interfaces or APIs to read/write data.

So, moving to public cloud offers a spectrum of possibilities, and you can use one or any combination of them:

  • You can run batch nightly using similar processes as you do in your on-premise data centers, but on separately provisioned instances/virtual machines. This probably results in the least effort in moving batch to the public cloud, the least change to your DevOps processes, and perhaps saves you some money by having instances sized specifically for the workloads and being able to leverage cloud cost savings options (e.g., reserved instances);
  • You can run batch on separately provisioned instances/virtual machines, but concurrently with existing interactive workloads. This will likely result in some additional work to change your DevOps processes, but offers more freedom and similar benefits mentioned above. You will still need to pay attention to application interfaces/APIs the workloads may have in common; or
  • At the extreme end of the cloud adoption spectrum, you could use cloud provider platform as a service (PaaS) offerings, such as AWS Batch, Microsoft Azure Batch or GCP Cloud Dataflow, where batch is essentially treated as a “black box”. A detailed comparison of these services is beyond the scope of this blog. However, in summary, these are fully managed services, where you queue up input data in an S3 bucket, object blob or volume along with a job definition, appropriate environment variables and a schedule, and you’re off to the races. These services employ containers and autoscaling/resource groups/instance groups where appropriate, with options to use less expensive compute in some cases. (For example, with AWS Batch, you have the option of using spot instances.)

The advantage of this approach is potentially faster time to implement and (maybe) less expensive monthly cloud costs, because the compute services run only at the times you specify. The disadvantages of this approach may be the degree of operational/configuration control you have; the fact that these services may be totally foreign to your existing DevOps folks/processes (i.e., there is a steep learning curve); and that it may tie you to that specific cloud provider.
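
For a flavor of the PaaS route, submitting work to AWS Batch looks roughly like the sketch below; the queue and job definition names are hypothetical.

```python
# Rough sketch of queuing work on AWS Batch (boto3). The queue and job
# definition names are hypothetical.
import boto3

batch = boto3.client("batch", region_name="us-east-1")

response = batch.submit_job(
    jobName="nightly-etl",
    jobQueue="batch-processing-queue",
    jobDefinition="etl-job:1",
    containerOverrides={
        "environment": [{"name": "RUN_DATE", "value": "2017-11-01"}],
    },
)
print("Submitted job:", response["jobId"])
```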

A Simple Alternative

If you are looking to minimize impact to your DevOps processes (that is, the first two approaches mentioned above), but still save money, then ParkMyCloud can help.

Normally, with the first two options, there are cron jobs scheduled to kick off batch jobs at the appropriate times throughout the day, but the underlying instances must be running for cron to do its thing. You could use ParkMyCloud to put parking schedules on these resources, such that they are turned OFF for most of the day, but are turned ON just-in-time to still allow the cron jobs to execute.
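
For example, with a hypothetical crontab entry like the one below kicking off backups at midnight UTC, the parking schedule only has to guarantee the instance is ON around that window.

```
# Hypothetical crontab entry on the batch server: run backups at 00:00 UTC.
0 0 * * * /opt/scripts/run-backups.sh
```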

We have been successfully using this approach in our own infrastructure for some time now, to control a batch server used to do database backups. This would, in fact, provide more savings than AWS reserved instances.

Let’s look at a specific example in AWS. Suppose you have an m4.large server you use to run batch jobs. Assuming Linux pricing in us-east-1, this server costs $0.10 per hour, or about $73 per month. Suppose you have configured cron to start batch jobs at midnight UTC and that they normally complete 1 to 1-½ hours later.

You could purchase a Reserved Instance for that server, where you either pay nothing upfront or all upfront and your savings would be 38%-42%.

Or, you could apply a ParkMyCloud schedule so that the instance is only ON from 11 pm-1 am UTC, allowing enough time for the cron jobs to start and run. The savings in that case would be 87.6% (including the cost of ParkMyCloud) without the need for a one-year commitment. Depending on how many batch servers you run in your environment and their sizes, that could be some hefty savings.
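
The arithmetic behind that number is simple enough to check; the per-instance ParkMyCloud fee below is an approximation for illustration.

```python
# Back-of-the-envelope check on the 87.6% figure. The ParkMyCloud
# per-instance fee is an approximation for illustration.
HOURS_PER_MONTH = 730
RATE = 0.10                                  # m4.large, Linux, us-east-1

always_on = RATE * HOURS_PER_MONTH           # ~$73/month
parked = RATE * (2 / 24) * HOURS_PER_MONTH   # ON only 11 pm-1 am UTC (~$6.08)
pmc_fee = 2.95                               # approximate per-instance cost

savings = (always_on - parked - pmc_fee) / always_on
print(f"Monthly savings: {savings:.1%}")     # ~87.6%
```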

Conclusion

Public cloud will offer you a lot of freedom and some potentially attractive cost savings as you move batch workloads from on-premise. You are no longer constrained by having the same infrastructure serve two vastly different types of workloads — interactive and batch. The savings you can achieve by moving to public cloud can vary, depending on the approach you take and the provider/service you use.

The approach you take depends on the amount of process change you’re willing to absorb in your DevOps processes. If you are willing to throw caution to the wind, the cloud provider PaaS offerings for batch can be quite compelling.

If you wish to take a more cautious approach, then we engineered ParkMyCloud to park servers without the need for scripting, or the need for you to be a DevOps expert. This approach allows you to achieve decent savings, with minimal change to your DevOps batch processes and without the need for Reserved Instances.

Read more ›

New: Cloud Savings Dashboard Now Available in ParkMyCloud

We’re happy to introduce ParkMyCloud’s new reporting dashboard! There are now easy-to-access reports that provide greater insight into information regarding cloud costs, team rosters, and more. Details on this update can be found in our support portal.


Dashboard Details

Now, when you click Reports in the left navigational panel, instead of getting the option to download a full savings report, you’ll see your ParkMyCloud reporting dashboard. This provides a quick view of cloud provider, team and resource costs, and information regarding your ParkMyCloud savings. At the top of the reporting dashboard, two drop-down menus are provided for selecting the report type and the time period. The default selections are Dashboard and Trailing 30 Days, which is what you see after clicking on reporting in the left navigational menu. Click on a drop-down menu to choose other available options.

Underneath the Report Type drop-down menu, you will see several options that are broken down into additional sections (Financial, Resource, Administrative, etc.). Click on an option in the menu to view that specific report within the dashboard. These reports can also be shown using a variety of time periods. Reports may be exported as a CSV or Excel file by clicking on the desired option to the right of the Report and Time Period drop-down menus.

Click on Legacy if you would prefer to still use the previous reporting functionality rather than the new reporting dashboard in ParkMyCloud. A pop-up window will appear for selecting the start and end date along with the type of legacy report. As part of this change, we have also moved Audit Logs underneath reporting. To access this option, you will need to select Reports in the left navigational panel and then Audit Log.

Check It Out

If you don’t yet use ParkMyCloud, you can try it now for free. We offer a 14-day free trial of all ParkMyCloud features, after which you can choose to subscribe to a premium plan or continue parking your instances using ParkMyCloud’s free tier forever.

If you already use ParkMyCloud, you’ll instantly see a visual representation of your cloud savings just by logging in to the platform. We challenge you to use this as a scoreboard, and try to drive your monthly savings as high as you can!

Read more ›

Exploring AWS RDS Pricing and Features


Traditional systems administration of servers, applications, and databases used to be a little simpler when it came to choices and costs.  For a long time, there was no other choice than to hook up a physical server, put on your desired OS, and install the database or application software that you needed.  Eventually, you could choose to install your OS on a physical server or on a virtual machine running on a hypervisor.  Then, large companies started running their own hypervisors and allowed you to rent your VM for as long as you needed it on their servers.  In 2009, Amazon started offering the ability to rent databases directly, without having to worry about the underlying OS, in a platform as a service (PaaS) offering called Relational Database Service (RDS).  This added another layer of complexity to your choices when managing your infrastructure.  Let’s explore AWS RDS pricing a little bit, and examine some of the features that come with it.

RDS Basics

AWS RDS offers the ability to directly run and manage a relational database without managing the infrastructure that the database is running on, or having to worry about patching of the database software itself.  Amazon currently offers RDS in the form of MySQL, Aurora (MySQL on steroids), Oracle, Microsoft SQL Server, PostgreSQL, and MariaDB.  The database sizes are grouped into 3 categories: Standard (m4), Memory Optimized (r3), and Micro (t2).  Each family has multiple sizes with varying numbers of vCPUs, GiBs of memory, and levels of network performance, and can be input/output optimized.

Each RDS instance can be set up to be “multi-AZ”, leveraging replicas of the database in different availability zones within AWS.  This is often used for production databases. If a problem arises in one availability zone, failover to one of the replica databases happens automatically behind the scenes; you don’t have to manage it.  Along with multi-AZ deployments, Amazon offers “Aurora”, which has more fault tolerance and self-healing beyond multi-AZ, as well as additional performance features.

RDS Pricing

RDS is essentially a service running on top of EC2 instances, but you don’t have access to the underlying instances. Therefore, Amazon has set the pricing for RDS instances in a very similar way to EC2 instances, which will be familiar once you’ve gotten a grasp on the structure that is already in place for compute.  There are multiple components to the price of an instance, including the underlying instance size, storage of data, multi-AZ capability, and sending data out (sending data in is free).  To add another layer of complexity, each database type (MySQL, Oracle, etc.) has different prices for each of the factors.  Aurora also charges for I/O on top of the other costs.
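
As a rough mental model, the bill decomposes into those components; every rate in the sketch below is a placeholder, not a current AWS price.

```python
# Rough model of an RDS monthly bill. All rates are placeholders for
# illustration, not current AWS prices.
def rds_monthly_cost(instance_rate, storage_gb, storage_rate,
                     transfer_out_gb, transfer_rate,
                     hours=730, multi_az=False):
    compute = instance_rate * hours * (2 if multi_az else 1)
    storage = storage_gb * storage_rate
    transfer = transfer_out_gb * transfer_rate   # inbound transfer is free
    return compute + storage + transfer

# Example: hypothetical MySQL instance at $0.15/hr with 100 GB of storage
# and 50 GB of monthly egress, deployed multi-AZ.
print(f"${rds_monthly_cost(0.15, 100, 0.115, 50, 0.09, multi_az=True):.2f}")
```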

When you add all this up, the cost of an RDS instance can go through the roof for a high-volume database.  It also can be hard to predict the usage, storage, and transfer needs of your database, especially for new applications.  Also, the raw performance might be a lot less than what you might expect running on your own hardware or even on your own instances. What makes the price worth it?

RDS vs. Installing a Database on EC2

Frequently, the choice comes down to using RDS for your database backend, or installing your own database on an EC2 instance the “traditional” way.  From a purely financial perspective, installing your own database is almost guaranteed to be cheaper if you focus on AWS direct costs alone.  However, there’s more to the decision than just the cost of the services.

What often gets lost in the use of a service is the time-to-value savings (which includes your time and potentially the opportunity cost/benefit of bringing services online faster).  For example, by using RDS instead of your own database, you avoid the need to install and manage the OS and database software, as well as the ongoing patching of both.  You also get automatic backups and recovery through the AWS console or AWS API.  You avoid having to configure storage LUNs and worrying about optimizing striping for better I/O. Resizing instances is much simpler with RDS, whether going smaller or bigger.  High availability (either cold or warm) is available at the click of a button.  All of this means less management for you and faster deployment times, though at a higher price point. If your company competes in a highly competitive market, these faster deployment times can make all the difference in the world to your bottom line.

One downside of just about every PaaS offering (and RDS was no exception) is that there typically is no “OFF” switch. This means that in non-production environments you are paying for the service whether your DevOps folks are using it or not.  For RDS, that changed recently: AWS now allows RDS instances in dev/test environments to be stopped.

ParkMyCloud has made “parking” public cloud compute resources as simple as possible, and we natively support parking RDS instances as well, helping you save money on non-production databases.

By using our Logical Groups feature, you can create a simple “stack” containing both compute instances and RDS databases to represent a particular application. The start/stop times can be sequenced within the group, and a single schedule can be used on the group for simplified management.

Conclusion

AWS RDS pricing can get a bit tricky, and really requires you to know the details of your database in order to accurately predict the bill.  However, there are a ton of benefits to using the service, and it can really help streamline your systems administration by handling the management and deployment of your backend database.  For companies that are moving to the cloud (or born in the cloud), RDS might be your choice when compared to running on a separate compute instance or on your own hypervisor, as it allows you to focus on your business and application, not on being a database administrator. For larger, established companies with a large team of DBAs and well-established automation, or for IO-intensive applications, RDS might not be the right fit for your business. By knowing the features, benefits, drawbacks, and factors in the cost, you can make the most informed decision for your database needs.

Read more ›

Interview: DevOps in AWS – How to Automate Cloud Cost Savings


We chatted with Ryan Alexander, DevOps Engineer at Decision Resources Group (DRG), about his company’s use of AWS and how they automate cloud cost savings. Below is a transcript of our conversation.

Hi Ryan, thanks for speaking with us. To start out, can you please describe what your company does?

Decision Resources Group offers market information and data for the medtech industry. For example, let’s say a medical graduate student is doing a thesis on Viagra use in the Boston area. They can use our tool to see information such as age groups, ethnicities, number of hospitals, and number of people who were issued Viagra in the city of Boston.

What does your team do within the company? What is your role?

I’m a DevOps engineer on a team of two. We provide infrastructure automation to the other teams in the organization. We report to senior tech management, which makes us somewhat of an island within the organization.

Can you describe how you are using AWS?

We have an infrastructure team internally. Once a server or infrastructure is built, we take over to build clusters and environments for what’s required. We utilize pretty much every tool AWS offers — EBS, ELB, RDS, Aurora, CloudFormation, etc.

What prompted you to look for a cost control solution?

When I joined DRG in December, there was a new cost saving initiative developing within the organization. It came from our CTO, who knew we could be doing better and wanted to see where we might be leaving money on the table.

How did you hear about ParkMyCloud?

One of my colleagues actually spoke with your CTO, Dale, at AWS re:Invent, and I had also heard about ParkMyCloud at DevOpsDays Toronto 2016. We realized it could help solve some of our cloud cost control problems and decided to take a look.

What challenges were contributing to the high costs? How has ParkMyCloud helped you solve them?

We knew we had a problem where development, staging, and QA environments were only used for 8 hours a day – but they were running for 24 hours a day. We wanted to shut them down and save money on the off hours, which ParkMyCloud helps us do automatically.

We also have “worker” machines that are used a few times a month, but they need to be there. It was tedious to go in and shut them down individually. Now with ParkMyCloud, I put those in a group and shut them down with one click. It is really just that easy to automate cloud cost savings with ParkMyCloud.

We also have security measures in place, where not everyone has the ability to sign in to AWS and shut down instances. If there was a team that needed them started on demand, but they’re in another country and I’m sleeping, they have to wait until I wake up the next morning, or I get up at 2 AM. Now that we set up Single Sign-On, I can set up the guys who use those servers, and give them the rights to startup and shutdown those servers. This has been more efficient for all of us. I no longer have to babysit and turn those on/off as needed, which saves time for all of us.

With ParkMyCloud, we set up teams and users so they can only see their own instances, so they can’t cause a cascading failure because they can only see the servers they need.

Were there any unexpected benefits of ParkMyCloud?

When I started, I deleted 3 servers that were sitting there doing nothing for a year and costing the company lots of money. With ParkMyCloud, that kind of stuff won’t happen, because everything gets sorted into teams. We can see the costs by team and ask the right questions, like, “why is your team’s cost so expensive right now? Why are you ignoring these recommendations from ParkMyCloud to park these instances?”


We rely on tagging to do all of this. Tagging is life in DevOps.

Read more ›

Interview: Atlassian Bamboo Automation + ParkMyCloud for AWS Cost Savings


We talked with Travis Rehl, Director of Application and Engineering at Siteworx, about how his team is using ParkMyCloud in conjunction with Atlassian Bamboo automation in order to improve governance and optimize their AWS cloud infrastructure. Below is a transcript of our conversation.

Can you start by telling us about Siteworx and what you guys do?

Sure, so Siteworx is a company that does digital transformations for clients, and my particular piece of it is Managed Services Hosting. We host ecommerce and content management systems for clients, generally Fortune 500 Companies or larger. We host specific products in AWS, and we’re moving into Azure as well.

What is your role in the company?

I am the Director of Application and Engineering here at Siteworx. I run the Siteworx services group which includes our hosting department as well as our application development team which supports our “run” phase of an engagement with a client.

Who in your organization is using ParkMyCloud?

We are currently using it for our Siteworx internal infrastructure, both EC2 and RDS, but I have some ideas to add it as a part of our managed services offering.

In the app we have maybe 5 or 6 users. They are team leads or engineering managers who have identified the scheduling that is appropriate for those particular instances and AWS accounts. This gives them the ability to group different servers together by environment levels for different clients.  One person from our finance team has access to it for billing and reporting.

My team in particular that is using ParkMyCloud is our engineering and operations group. There are two different teams who are the main ParkMyCloud users: our Operations team is 24/7, our Engineering team is generally 9-5 Eastern. They use ParkMyCloud to reduce costs, and have implemented it in such a way that will give the ability for our Development teams to turn servers back on as needed. If they have a project or demo that is occurring at an off hour, they are able to hit a button through our automation system — we’re using Atlassian Bamboo automation — to turn on the servers and utilize them.

Can you tell us more about that Atlassian Bamboo automation system?

If a team member wants to deploy code to a server during off hours, they will have a button within Bamboo to press to turn the server on via the ParkMyCloud API. Then they can hit a second set of buttons to send their code changes out to it. We utilize the calendar “snooze” function that PMC offers.

What were you looking for when you found ParkMyCloud?

I was looking for a technology that would allow us to optimize and automate our AWS cloud management. Internally, we have an agenda of trying to branch out to as many cloud platforms as necessary. So I was looking into many different services that manage your cloud-based servers and are compatible with different providers. That is when ParkMyCloud was suggested to me by a friend. We started a free trial, and got in touch with you all.

I am all in on ParkMyCloud, and I think we have a lot of use for it; down the road we plan to work with our clients to incorporate it into our service offering.

Do you have any other cost control measures in place for AWS?

We evaluate server performance using Trusted Advisor in AWS or other services that say that you could scale down. The issue with those other services is that they are sometimes inaccurate because they use average CPU usage that does not take into account server down time. We try to evaluate and scale down as necessary based on the CPU usage when it is active.

How did the evaluation with ParkMyCloud go?

After we did some initial research on ParkMyCloud and other tools, we got in touch with PMC, started a free trial, did a demo, and got a few questions clarified – the entire process took just a couple weeks. The platform is entirely self-service, and the ROI is immediate and verifiable.

Read more ›

How X-Mode Deals with Rising AWS Costs


We sat down with Josh Anton, CEO of X-Mode, a technology app that has been experiencing rapid growth and rising AWS costs. We asked him about his company, what cloud services he uses, and how he goes about mitigating those costs.

Can you start by telling us about X-Mode and what you guys do?

X-Mode is a location platform that currently maps out 5-10% of the U.S. population on a monthly basis and 1-2% of the U.S. population daily, which is about 3-6 million daily active users and 15-20 million monthly users. X-Mode collects location-based data from applications and platforms used by these consumers, and then develops consumer segments or attribution, where our customers basically use the data to determine if their advertising is effective and to develop target profiles. For example, based on the number and types of coffee shops a person has visited, we can assume they are a certain type of coffee drinker. Or a company like McDonald’s will determine if their advertising is effective if they see that an ad is run in a certain area, and a person visits that restaurant in the next few days. The data has many applications.

How did you get this idea, Josh?

We started off as an app called Drunk Mode, which was founded and built while I was at the University of Virginia studying Marketing and IT. After about a year and a half, our app grew to about 1.5 million users by leveraging influencer marketing via Trend Pie and a student campus rep program at 50+ universities. In September of 2016, we realized that if we developed a location-based technology platform we could monetize and capitalize on the location data we collected from the Drunk Mode app. Along with input from our advisors, we developed a strategy to help out other small apps by aggregating their data, crunching it, and packaging it up in real time to sell to ad agencies and retailers, acting almost as a data wholesaler and helping these small app players monetize their data as a secondary source of income.

Whose cloud services are you using and how does X-Mode work?

We use Amazon Web Services (AWS) for all of our cloud infrastructure and primarily use their EC2, RDS, and Elastic Beanstalk services. Our technology works by collecting and aggregating location data based on when and where people go on a daily basis. It is collected locally by iOS and Android devices, and passed to AWS’s cloud using their API Gateway service. The cool thing is that we are able to pinpoint a person’s location within feet of a retail location. The location data is batched and sent to our servers every 12 hours, and we package it up and license the data out to our vendors. We are processing around 10 to 12 billion location-based records per month, and have some proprietary algorithms which make our processing very fast with almost no burn on the phone’s battery. Our customers are sent the data daily, and we use services like Lambda, RDS, and Elastic Beanstalk to make this as efficient as possible. We are now developing the functionality to better triangulate beacons so that we can pinpoint locations even more precisely, and send location data within the hour, rather than within the day.

Why did you pick AWS?

We chose AWS because when X-Mode joined Fishbowl Labs (a startup accelerator run and sponsored by AOL in Northern Virginia), we were given $15,000 in free AWS credits. The free credits have made me very loyal to Amazon’s service, and now the switching costs would be fairly high in terms of effort and dollars to move away from Amazon. So even though it’s expensive, we are here to stay, and we are adopting more of AWS’s advanced services in order to improve our platform performance and take advantage of their technology advances. Another reason we stay with AWS is that we know it is going to be there. We previously used a service called Parse.com that was acquired by Facebook, and a few years later they shut down the service. For us, performance and stability (the service still existing 10 years from now) are very important.

Are you still using AWS credits?

No, we used those up many months ago. We have gone from spending a few hundred dollars a month to spending $25,000 or more a month. While that is a cost, it’s also a blessing in that X-Mode is rapidly growing and scaling. Outside of the cost of people, this is our biggest monthly expense. ParkMyCloud was an easy choice, given that 75% or more of our AWS spend is on EC2 and RDS services, ParkMyCloud’s ability to ‘park’ both services, and its flexible governance model for our remote engineering team. So we are very excited about the savings ParkMyCloud will produce for us, along with some of the new design work we will be doing to make our platform even more efficient.

Are there other ways you are looking to optimize your AWS spend?  

We believe that we have to re-architect the system. We have actually done that three times given our rapid platform growth, but it is all about making sure that we are optimizing our import/export process. We are running our servers at maximum capacity to help get information to people, and are continually looking to make our operation more efficient. Along with using ParkMyCloud, we are focusing on general platform optimization to make sure we keep costs down, improve performance and innovate at a rapid pace.

What other tools do you use as part of your DevOps process?

Let's keep in mind we are a startup, but we are getting more and more organized in terms of development cycles and have a solid delivery process. And yes, we use tools like Slack, Jira, Basecamp, Bitbucket, and Google Drive. Everything is SaaS-based, everything is in the cloud, and we follow an agile development process. On the Sales and Marketing side we are solely a millennial workforce working in the office, but our development team is basically stay-at-home dads distributed around the country, so planning and communication are key to our success. That's where Slack and Jira come into play. In terms of processes, we are trying to implement a better QA process so we deliver well-vetted code to our end users. We do a lot of development planning and mapping each quarter, so all of this is incredibly important to the growth of the X-Mode platform and to the success of our organization.


Trends in Cloud Computing – ParkMyCloud Turns Two, What’s New?


It's not hard to start a company, but it's definitely hard to grow and scale one. So, two years later, we thought we would discuss the trends in cloud computing that shape our growth and vision: what we see and hear as we talk to enterprises, MSPs, and industry pundits on a daily basis. First and foremost, we need to thank our customers, both free and paid, who use ParkMyCloud, save millions a year, actively engage with us in defining our roadmap, and have helped us develop the best damn cloud cost control solution in the market. We also thank the bloggers, analysts, and writers who share our story; given that we have customers on every continent (except Antarctica), this has been extremely beneficial to us.

Observation Number One: the public cloud is here to stay. Given the CapEx investment needed to build and operate data centers all over the world, only cash-rich companies will succeed at scale, so you need to figure out whether you want to be a single-cloud/multi-region or multi-cloud user. We discussed this in detail recently in this blog, and it really boils down to risk mitigation. Most companies we talk to are single-cloud BUT do ask if we support multi-cloud in case they diversify (we do: we support AWS, Azure, and Google).

Observation Number Two: AWS is king, duh – well, they are, and they continue to innovate and grow at a record-setting pace. AWS just hit $4bn in quarterly revenue – that's a $16bn run rate. It's the new IBM: what CIO or CTO is going to get fired for moving their infrastructure to AWS' cloud to improve agility, attract millennial developers who want to innovate in the cloud, leverage the cloud ecosystem, and lower costs (we will address this one in a bit)? We released support for Azure and Google in 2017, and yet 75% or more of the new trials and customers we get use AWS, and their environments are almost always larger than those on Azure and Google. There is a reason Microsoft and Google do not release IaaS statistics. As for IBM and Oracle, they are the wayback IaaS time machine.

Observation Number Three: Cloud Cost Control is a real thing. It's something enterprises really care about, and optimizing their cloud spend as their bills grow is becoming increasingly important to the CFO and CIO. The focus is mainly on buying capacity in advance (which kind of defeats the purpose of the pay-as-you-go model), rightsizing servers (developers have a tendency to over-provision for their needs), turning stuff off when it's not being used, and finding orphaned resources that are 'lost' in the cloud. As roughly 65% of a bill is spent on compute (servers/instances), the focus is usually directed there first and foremost, since a reduction there has the largest impact on the bill. One concrete flavor of the orphaned-resource hunt is sketched below.
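As an example of what that hunt can look like in practice, here is a minimal boto3 sketch (the region is an assumption) that finds one common kind of orphaned resource: EBS volumes that are still billing but attached to nothing.

    # Minimal sketch: find unattached EBS volumes, one common kind of
    # "orphaned" resource that keeps billing after its instance is gone.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

    unattached = ec2.describe_volumes(
        Filters=[{"Name": "status", "Values": ["available"]}]  # not attached
    )["Volumes"]

    for vol in unattached:
        print(f"{vol['VolumeId']}: {vol['Size']} GiB, still billing, nothing attached")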

Observation Number Four: DevOps and IT Ops are responsible for cloud cost control, not Finance. Now, Finance (or the CFO) might issue a directive to IT or Engineering that cloud costs must be brought under control and that they need to look at ways to optimize, but at the end of the day DevOps and IT Ops are responsible for evaluating and selecting the tools that help their companies immediately reduce their cloud costs. When we talk to technical teams during a demo, they have been told they need to reduce their cloud spend, or there is a cost control initiative in place, and they then research technologies to help them solve this problem (SEO is key here). Here's a great example of a FinTech customer of ours and how their cost control decision went down.

Observation Number Five: It's all about automation, DevOps, and self-service. As mentioned, the technical folks are responsible for implementing a cost control platform to optimize their cloud spend, and as such it's all about "show me", not pretty reports and graphs. What we mean is that, as an action-oriented platform, they want us to integrate easily into their continuous integration and delivery processes through a fully functional API, but also to provide a simple UI for the non-techies to ensure self-service. At the infrastructure layer, it's about what you can do with and through DevOps tools like Slack, Atlassian, and Jenkins; at the enterprise level, it's about SSO providers such as Ping, Okta, and Microsoft. These themes repeat over and over, regardless of the cloud provider.

Observation Number Six: Looking ahead, it's about Stacks. As the idea of microservices continues to take hold, more developers are utilizing multiple instances or services to deploy a single application or environment. In years past, the bottleneck for implementing such groups of servers or databases was the deployment time, but modern configuration management tools (like Chef, Puppet, and Ansible) make this a common strategy by turning the infrastructure into code. However, managing these environments can remain challenging for humans. ParkMyCloud already allows logical groupings of instances for one-click scheduling (see the sketch below), but we're planning to take this a step further by integrating with the deployment solutions to really tie it all together.
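Here is a rough sketch of that grouping idea, using a tag to define the stack. The tag key, region, and boto3 approach are our illustration of the concept, not ParkMyCloud's implementation.

    # Illustrative sketch: treat all instances sharing a "Stack" tag as one
    # logical group and park them together. Tag key and region are assumptions.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    def stack_instance_ids(stack_name):
        reservations = ec2.describe_instances(
            Filters=[{"Name": "tag:Stack", "Values": [stack_name]}]
        )["Reservations"]
        return [i["InstanceId"] for r in reservations for i in r["Instances"]]

    def park_stack(stack_name):
        ids = stack_instance_ids(stack_name)
        if ids:
            ec2.stop_instances(InstanceIds=ids)  # the whole stack, one action

    park_stack("dev-environment")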

Obviously, the trends in cloud computing we touch on mix macro and micro, and are generally viewed through a cost control lens, but they do provide insight into the day-to-day of what we see and hear from the folks who operate and use cloud, from multinational enterprises to startups. By tracking these trends over time, we can help you stay on top of cloud best practices to optimize your IT budget, and we look forward to what the next two years of cloud computing will bring.


Cloud Nine, Ten or Eleven: What Do All Those Cloud Computing Growth Statistics Really Mean?


Growth in the various cloud platforms has become a dinner-party conversation staple for those in the tech industry, in much the same way that house price appreciation was in the mid-2000s. It's interesting: everyone has an opinion about cloud computing growth statistics, and it's not entirely clear how it ends.

So let's start with some industry projections. According to Gartner, the global infrastructure-as-a-service (IaaS) market will grow by 39% in 2017 to reach $35 billion by the end of the year. IaaS growth shows no sign of slowing down, and the market is expected to reach $72 billion by 2021, a CAGR of roughly 20%. This revenue is principally split among the big four players in public cloud: Amazon Web Services (AWS), Microsoft Azure (Azure), Google Cloud Platform (GCP), and IBM.

The approximate market share of these four public cloud platforms at the end of the first quarter of 2017 can be seen in the Canalys chart below. The reason these numbers are only approximate is that each of these vendors includes (or excludes) different facets of its cloud business, and each seeks to keep its growth opaque to the investor community.

However, Amazon reported their earnings in April 2017, showing revenue growing 43 percent in the quarter to $3.66 billion, an annualized run rate of some $14.6 billion. Meanwhile, Microsoft reported its cloud earnings in July 2017, saying its annualized revenue run rate was just under $19 billion. However, this includes a lot more than just IaaS; once non-IaaS is removed, analysts suggest the revenue is likely at a $6 billion run rate. Google's cloud business is even harder to separate, but its cloud revenue was estimated at some $1 billion at the end of 2015, and although they seem to have hit their stride in the last year or so, they clearly have a lot of ground to make up. Current estimates are for approximately $2.5 billion in 2017. Lastly, IBM is estimated to be of a similar size to Google but appears to have a lot less momentum than the others; certainly, based on the requests we hear from our customer base, IBM is not very often, if ever, referenced.

OK, so other than guessing at the winners and losers, why does this matter? In our humble opinion, it matters because this scenario creates increased competition, and competition is good for consumers. It's also relevant because companies have a choice, and many are looking at more than one cloud platform, even if they have not yet done anything about it. But what is really interesting, and what keeps us awake at night, is how much of this consumption is being wasted. We think and talk about this waste in terms of three buckets:

1) Always on means always paying – 44% of workloads are classified as non-production (i.e., test, development, etc.) and don't need to run 24×7

2) Over provisioning – 55% of all public cloud resources are not correctly sized for their workloads

3) Inventory Waste – 15% of spend goes to resources that are no longer used at all.

Combine these three buckets and, by our reckoning, you are looking at an estimated $6 billion in wasted cloud spend in 2016, growing to $20 billion by 2020. Now that is something to really care about. A rough version of the arithmetic is sketched below.
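For the curious, here is that back-of-the-envelope arithmetic in code. The total spend figure, compute share, and schedule assumptions are our own illustrative inputs, and the buckets overlap, so treat the result as an upper bound rather than a reproduction of the $6 billion estimate.

    # Back-of-the-envelope waste estimate from the three buckets above.
    # All inputs are illustrative assumptions; the buckets overlap, so the
    # sum is an upper bound rather than a precise figure.
    spend = 23e9          # approximate 2016 public cloud infrastructure spend
    compute_share = 0.65  # rough share of a typical bill that is compute

    # 1) Always on means always paying: 44% of workloads are non-production,
    #    and a weekday 12-hour schedule needs only ~60 of 168 weekly hours.
    always_on = spend * compute_share * 0.44 * (1 - 60 / 168)

    # 2) Over-provisioning: 55% of resources oversized; dropping one instance
    #    size roughly halves their cost.
    oversized = spend * compute_share * 0.55 * 0.5

    # 3) Inventory waste: 15% of spend on resources no longer used.
    inventory = spend * 0.15

    total = always_on + oversized + inventory
    print(f"~${total / 1e9:.1f}B/year, as an upper bound")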

Few tools exist to actively monitor and manage this waste; there is not yet a cloud waste management industry per se, and tech analysts currently tend to lump everything under 'cloud management'. We think that will change in the near future as cloud cost control becomes top-of-mind and the industry is able to leverage cloud computing growth statistics to calculate the scale of this problem. If you are in the cloud, this is definitely something you should know about. Maybe you should consider optimizing your cloud spend now (before your CTO, CIO or CFO asks you to do so).


Real Time Cases – How AWS Free Credits Are Helping This Startup


We sat down with Brian Park from Real Time Cases to talk to him about his company, how he uses AWS, and AWS free credits. We found out that the AWS startup package is a crucial part of making his business run.

Can you start by telling us about Real Time Cases and what you guys do?

So Real Time Cases is an education tech startup building a new-generation experiential learning platform. The new form of learning for today's student is "learning by doing", not just learning by reading antiquated textbooks. So Real Time Cases, through our partners, approaches high-level executives and asks: "If you could hire 70-80 students to solve any problem in your department, what would it be?" This forms the foundation for a "Real Time Case". We film and document the issue, and professors can use that to drive the concepts, theories, and frameworks they are trying to teach in the classroom, using current, real-life examples. Our cases are ongoing and happen "in real time", so they are like mini projects. This also opens the door for students to pitch some of these ideas to local business executives, which is exciting.

What is your role in the company?

I am the Director of Product. We have a platform that hosts the cases; videos are the primary content, since most students would prefer to watch rather than read – think YouTube and Netflix. I am responsible for overseeing the technical team, both developers and designers. Amazon Web Services (AWS) is our cloud provider of choice, and our entire infrastructure is hosted there.

Why AWS over others?

We chose AWS because of the startup package: we get $10,000 of AWS free credits to use as we wish – compute, databases, and storage, all for free! As with any startup, we have to bootstrap operations by keeping costs as low as possible, and in addition AWS services are easy to use and access. If we had launched this company 10 years ago, we couldn't have operated at this cost point. So the credits and service offerings were very important in getting us successfully off the ground and to market quickly and cost-effectively. We have both domestic and international customers, and we can host and publish content for any university in the cloud at negligible cost, which translates into affordable price points for students; at our current cloud burn we can sustain our operations for many months to come.

What technologies do you use in AWS?

We don't have an official DevOps team, but we use GitHub for our code repository, Jira for agile processes, and Slack for communication. These low-cost SaaS tools plus AWS have been very productive for us. We are able to push code out in either 1- or 2-week cycles depending on the size of our stories. Our output used to be a 2-week sprint but is now a 1-week sprint due to improved tools and processes. We follow agile development practices, participate in scrums, and try to utilize the latest DevOps tools. As we have a distributed development and QA team, it's best to use a tool like Jira to coordinate across time zones and accomplish the harder logistical tasks. We don't have an overly complex architecture in AWS and use EC2, RDS, and S3. S3 is used to store and host the video content we create for the professors and students.


Do you have any cost control measures in place for AWS?

Right now, no. When our AWS free credits expire, we don't expect our costs to be very high, but as a startup, being able to leverage cost control tools like ParkMyCloud to save 20-30% will be important – every dollar counts in a startup. We have been using AWS since our inception and haven't had to move into the paid tier yet – Bezos has created a truly disruptive business model that enables the startup community to rapidly prototype and test their theses by getting to market quickly and inexpensively.


Implementing a Cloud Cost Management Tool


What do people say when they evaluate and implement a cloud cost management tool? Are they concerned with automation? Projected savings? Or are they interested in the ease of use of the product? An experience I had when I started with ParkMyCloud shed some light on these questions for me.

One of the first tasks that I was assigned as an intern this summer at ParkMyCloud was to go through our Capterra reviews and pull out some compelling customer quotes that helped answer those questions. What I found interesting in reading the quotes is that what’s important to you depends on the size and type of company, your role in that company, and the outcome you’re looking for. We went back and talked to several of the people who left reviews:

One customer was excited to see how easy it was to start saving with ParkMyCloud:

“Try it out as soon as you can if you’re running on AWS and watch the savings add up!”

-John K. Manager of Solutions Analytics

He even followed up with the long-term savings he was able to get:

“ParkMyCloud is an excellent service that allows us to easily manage our AWS instances so that we’re only paying for our AWS instances when we’re actively using them. We were able to save almost 50% off of our monthly bill after about only 20 minutes of setup!”

-John K. Manager of Solutions Analytics

Other customers were excited about the usability of ParkMyCloud. They viewed it as incredibly important that just about anyone can use the product for cloud cost management – you don't have to be an IT pro. In fact, it usually takes only around 15 minutes to get going with ParkMyCloud!

“As a tool that you can give to ANYONE in your organization, and have them be responsible for their own AWS costs, it is certainly unmatched. I’ve given it to execs who had no technical ability at all, and told them “here you go – you can only control your specific servers, design a power schedule that works for you”, and they’ve done it with zero assistance.”

-Reed S.

Some of our reviewers called out our role-based access controls, which allow multiple members of a team, or different teams, to dictate their own schedules:

“The ability to distribute rights to groups has made the ability for our teams to take advantage of individual application sleep schedules.”

-Edward P. Software Engineer

So what do people say when they are implementing a cloud cost management tool? Every CFO says it needs to happen today, because those cloud bills aren't getting any smaller. Every manager says the tool needs to make it easy to implement governance on a per-team basis. Every developer says they need something that works right out of the box without getting in their way. Whatever your role might be, ParkMyCloud will have you saying "It's about time!" Try it out for free today!


AWS vs Google Cloud Pricing – A Comprehensive Look


Back in May 2017 I wrote a very popular blog about Cutting through the AWS and Azure Cloud Pricing Confusion.

Since ParkMyCloud also provides cost control for Google Cloud Platform (GCP) resources, I thought it might be useful to compare AWS vs Google Cloud pricing. In addition, I will take a look at the terminology and billing differences. NOTE: there are other "services" involved in your overall bill, such as networking, storage, and load balancing; I am going to focus mainly on compute charges in this article.

AWS and GCP Terminology Differences

As mentioned before, in AWS the compute service is called "Elastic Compute Cloud" (EC2), and the virtual servers are called "instances".

In GCP, the service is referred to as "Google Compute Engine" (GCE), and the servers are also called "instances". However, in GCP there are "preemptible" and non-preemptible instances. Non-preemptible instances are the equivalent of AWS "on-demand" instances.

Preemptible instances are similar to AWS “spot” instances, in that they are a lot less expensive, but can be preempted with little or no notice. The difference is that GCP preemptible instances can actually be stopped without being terminated. That is not true for AWS spot instances.

Flocks of these instances, spun up from a machine image according to scaling rules, are called "auto scaling groups" in AWS.

A similar concept can be created within GCP using "instance groups". However, instance groups are really more of a "stack", created from an "instance template". As such, they are more closely related to AWS CloudFormation stacks.


AWS and GCP Compute Sizing

Both AWS and GCP have a dizzying array of instance sizes to choose from, and doing an apples-to-apples comparison between them can be quite challenging. These predefined instance sizes are based on the number of virtual cores and the amounts of virtual memory and virtual disk.

They have different categories.

AWS offers:

  • Free tier – inexpensive, burstable performance (t2 family)
  • General purpose (m3/m4 family)
  • Compute optimized (c4 family)
  • GPU instances (p2 family)
  • FPGA instances (f1 family)
  • Memory optimized (x1, r3/r4 family)
  • Storage optimized (i3, d2 family)


GCP offers the following predefined types:

  • Free tier, shared core – inexpensive, burstable performance (f1/g1 family)
  • Standard (n1-standard family)
  • High memory (n1-highmem family)
  • High CPU (n1-highcpu family)


However, GCP also allows you to create your own custom machine types if none of the predefined ones fit your workload. You pay uplifts per vCPU-hour and per GiB of memory per hour, and you can also add GPUs and premium processors as uplifts.

Both providers take marketing liberties with things like memory and disk sizes. For example, AWS lists its memory size in GiB (base 2) and disk size in GB (base 10).
GCP reports its memory size and disk size as GB. However, to make things really confusing, this is what they say on their pricing page: "Disk size, machine type memory, and network usage are calculated in gigabytes (GB), where 1 GB is 2^30 bytes. This unit of measurement is also known as a gibibyte (GiB)."

This, of course, is pure nonsense. A gigabyte (GB) is 10^9 bytes. A gibibyte (GiB) is 2^30 bytes. The two are definitely NOT equal. It was probably just a typo.
If you look at what is actually delivered, neither provider seems to match what is shown on its pricing page. For example, an AWS t2.micro is advertised as having 1 GiB of memory. In reality, it is 0.969 GiB (using "top").

For GCP, the f1-micro is advertised as "0.6 GB". Assuming they simply have their units mixed up and "GB" should really be "GiB", they actually deliver 0.580 GiB. So both round up, as marketing/sales people are apt to do. The quick conversion below shows how big the GB/GiB gap really is.
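A two-line conversion makes the size of the discrepancy clear:

    # GB (base 10) vs GiB (base 2): the gap behind the marketing numbers.
    GB, GiB = 10**9, 2**30

    print(f"1 GiB = {GiB / GB:.3f} GB")          # 1 GiB = 1.074 GB, a ~7.4% gap
    print(f"0.6 GiB = {0.6 * GiB / GB:.3f} GB")  # the f1-micro's "0.6 GB", really GiB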

With respect to pricing, this is how the two seem to compare, looking at some of the most common "workhorses" and focusing on CPU, memory, and cost (one would have to run actual benchmarks for a more accurate comparison):


[Table: AWS vs. Google Cloud on-demand instance pricing, including operating system uplifts]


The bottom line:

In general, for most workloads, AWS is less expensive on a CPU/hr basis. For compute-intensive workloads, GCP instances are less expensive.

Also, as you can see from the table, both providers charge uplifts for different operating systems, and those uplifts can be substantial! You really need to pay attention to the fine print. For example, GCP charges a 4-core minimum for all their SQL Server uplifts (yikes!). And in the case of Red Hat Enterprise Linux (RHEL) on GCP, they charge a 1-hour minimum for the uplift, and in 1-hour increments after that. (We'll talk more about how the providers charge you in the next section.)

AWS vs. Google Cloud Pricing – Examining the Differences

Cost per hour is only one aspect of the equation, though. To better understand your monthly bill, you must also understand how the cloud providers actually charge you. AWS prices compute time by the hour and requires a 1-hour minimum. If you start an instance, run it for 61 minutes, and then shut it down, you get charged for 2 hours of compute time.

Google Compute Engine pricing is also listed by the hour for each instance, but they charge you by the minute, rounded up to the nearest minute, with a 10-minute minimum charge. So if you run for 1 minute, you get charged for 10 minutes; if you run for 61 minutes, you get charged for 61 minutes. On the surface this sounds very appealing (and makes me want to wag my finger at AWS and say, "shame on you, AWS"). The sketch below makes the two billing models concrete.
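Here is a small sketch of the two billing models as just described (the rules in effect at the time of writing):

    import math

    def aws_billable_hours(runtime_minutes):
        # AWS: billed per hour, partial hours rounded up, 1-hour minimum.
        return max(1, math.ceil(runtime_minutes / 60))

    def gcp_billable_minutes(runtime_minutes):
        # GCP: billed per minute, rounded up, 10-minute minimum.
        return max(10, math.ceil(runtime_minutes))

    for minutes in (1, 61):
        print(f"{minutes} min -> AWS bills {aws_billable_hours(minutes)} hr, "
              f"GCP bills {gcp_billable_minutes(minutes)} min")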

You also really need to pay attention to the use case and the comparable instance prices. Let me give you a concrete example. Here is a graph of six months' worth of data from an m4.large instance. Remember that our goal at ParkMyCloud is to help you automatically "park" non-production instances when they are not being used, to save you money.

This instance is on a ParkMyCloud parking schedule: RUNNING from 8:00 a.m. to 7:00 p.m. on weekdays and PARKED evenings and weekends. Assuming Linux pricing, this instance costs $0.10 per hour in AWS. From November 6, 2016 until May 9, 2017, it ran for 111,690 minutes. That is about 1,862 hours, but AWS charged for 1,922 hours, which cost $192.20 in compute time.


[Graph: six months of run time for the m4.large instance on its parking schedule]


Why the difference? ParkMyCloud has a very fast and accurate orchestration engine, but when you start and stop instances, the cloud provider and network response can vary from hour to hour and day to day depending on load, so occasionally things run that extra minute. And even though this instance is on a parking schedule, the graph shows that the user took manual control a few times, perhaps to do maintenance. Stuff happens!

What would it have cost to run a similar instance in GCP? The comparable GCP instance, the n1-standard-2, costs $0.1070/hour, so this workload running in GCP would have cost $199.18 (not including Sustained Use Discounts). Since the instance really only ran 42.6% of the time (111,690 out of 262,140 minutes), it would qualify for a partial Sustained Use Discount, bringing the actual cost down to about $182.72. That is roughly $10 cheaper than AWS, even though the per-hour cost in AWS was lower. It may not seem like much, but if you have hundreds or thousands of instances, it adds up. A sketch of the discount math follows.
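For reference, here is a minimal sketch of the (2017-era) Sustained Use Discount tiers as we understand them: each successive quarter of the month's hours is billed at a progressively lower multiple of the base rate. The hours-in-month constant is an approximation.

    # Sketch of GCP Sustained Use Discount tiers as we understand them:
    # the first 25% of the month's hours bill at 100% of base rate, the
    # next 25% at 80%, then 60%, then 40%.
    def monthly_cost(hours_used, hourly_rate, hours_in_month=730):
        tiers = [(0.25, 1.00), (0.50, 0.80), (0.75, 0.60), (1.00, 0.40)]
        cost, floor = 0.0, 0.0
        for cap, multiplier in tiers:
            tier_hours = min(hours_used, cap * hours_in_month) - floor
            if tier_hours <= 0:
                break
            cost += tier_hours * hourly_rate * multiplier
            floor = cap * hours_in_month
        return cost

    # The n1-standard-2 above: ~311 hours/month at $0.107/hr for six months.
    print(f"${6 * monthly_cost(311, 0.107):.2f}")  # ≈ $183, close to the $182.72 above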

AWS Reserved Instances vs GCP Committed Use

Both providers offer deeper discounts off their normal pricing for "predictable" workloads that need to run for sustained periods, if you are willing to commit to capacity consumption upfront. AWS offers Reserved Instances; Google offers Committed Use Discounts (currently in beta). An in-depth comparison of these is beyond the intent of this blog (and you have already been very patient if you made it this far), so I'll reserve that discussion for a future post.

Conclusion

If you are new to the public cloud: once you get past all the confusing jargon, the creative approaches to pricing, and the different ways providers charge for usage, the actual cloud services themselves are much easier to use than legacy on-premises services.

The public cloud services do provide much better flexibility and faster time-to-value; the cloud providers simply need to get out of their own way. Pricing is but one example where AWS and GCP could stand to make things a lot simpler, so that newcomers can make informed decisions.

When comparing AWS vs. Google Cloud pricing, AWS EC2 on-demand pricing may on the surface appear more competitive than GCP pricing for comparable Compute Engine instances. However, when you examine specific workloads and factor in Google's more enlightened approach to charging for CPU time and their use of Sustained Use Discounts, GCP may actually be less expensive. AWS really needs to get in line with both Azure and Google, which charge by the minute and have much smaller minimums. Nobody likes being charged extra for something they don't use.

In the meantime, ParkMyCloud will continue to help you turn off non-production cloud resources when you don't need them, and help save you a lot of money on your monthly cloud bills, regardless of which public cloud provider you use.


Was the Acquisition of Cloudyn About the Need to Manage Microsoft Azure? Sort of.


Perhaps you heard that Microsoft recently acquired Cloudyn in order to manage Microsoft Azure cloud resources, along with, of course, Amazon Web Services (AWS), Google Cloud Platform (GCP), and others. Why? Well, the IT landscape is becoming more and more a multi-cloud landscape. Originally this multi-cloud (or hybrid cloud) approach was about private and public cloud, but as we recently wrote here, the strategy we hear from large enterprises is becoming more about leveraging multiple public clouds, for a variety of reasons – risk management, vendor lock-in, and workload optimization seem to be the three main ones.


That said, according to TechCrunch and quotes from Microsoft executives, the acquisition is meant to provide Microsoft with a cloud billing and management solution that gives it an advantage over competitors (particularly AWS and GCP) as companies continue to pursue, drum roll please… a multi-cloud strategy. Additional benefits for Microsoft include visibility into usage patterns, adoption rates, and other cloud-related data points that they can leverage in the 'great cloud war' to come… GOT reference, of course.


Why are we writing about this? A couple of reasons. One, of course, is that this is a relevant event in the cloud management platform (CMP) space, as it is really the first big cloud visibility and governance acquisition to date. The other acquisitions (Dell buying Enstratius, Cisco buying CliQr, and CSC buying ServiceMesh, for example) were more orchestration and infrastructure platforms than reporting tools. Second, this points to the focus enterprises have on cost visibility, cost management, and governance as they look to optimize their spend and usage, as one does with any utility. And third, this confirms that a common pushback from enterprises on adopting Azure more widely has been "I am already using AWS; I don't want to manage through yet another screen/console", and that multi-cloud visibility and governance helps solve that problem.


Now, taking this one step further: the visibility, recommendations, and reporting are all well and good, but what about the actions that must be taken based on those reports, and integration into enterprise DevOps processes for automation and continuous cost control? That's where something like Cloudyn falls short, and where a platform like ParkMyCloud kicks in:


  • Multi-cloud Visibility and Governance – check
  • Single-Sign On (SSO) – check
  • REST API for DevOps Automation – check
  • Policy Engine for Automated Actions (parking) – check
  • Real-time Usage and Savings data – check
  • Manage Microsoft Azure (plus AWS and GCP) – check


The next step in cloud cost control is automation and action, not just visibility and reporting. Let technology automate these tasks for you instead of just telling you about them.


AWS Slack Integration for Interactive Cost Control


Today we're happy to announce a new chatbot for AWS Slack integration that allows you to fully interact with ParkMyCloud without having to access the GUI. Combined with the recent addition of notifications in ParkMyCloud, you can manage your continuous cost control from the Slack channels you live in every day!


Developers and operations engineers are increasingly utilizing ChatOps to manipulate their environments and help users self-manage the servers and databases they require for their work. There are a few different chat systems and bot platforms available, but the most commonly used today is Slack. By setting up the Slackbot to interact with your ParkMyCloud account, you can allow users to assign schedules, temporarily override parked instances, or toggle instances off or on as needed.


Combine this with notifications from ParkMyCloud, and you have full visibility into your cost control initiatives right from your standard Slack channels. Notifications allow ParkMyCloud to post messages for things like schedule changes or instances being turned off automatically. Now, with the new ParkMyCloud Slackbot, you can reply to those notifications to snooze the schedule, turn a system back on temporarily, or assign a new schedule.


The chatbot is open source, so feel free to modify the bot as necessary to fit your environment or use cases. It's written in Python using the slackclient library, but even if you're not a Python expert, you'll find it easy to modify to suit your needs. We'd love to have you send your ideas and modifications back to us for rapid improvement. A minimal sketch of the pattern appears below.
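To give a flavor of what a slackclient-based bot loop looks like, here is a minimal sketch. The "parkbot park" command syntax and the park_instance() helper are hypothetical placeholders for illustration, not the actual bot's commands.

    # Minimal sketch of a slackclient-based bot loop. The command syntax and
    # park_instance() helper are hypothetical, not the actual ParkMyCloud bot.
    import time
    from slackclient import SlackClient

    sc = SlackClient("xoxb-your-bot-token")  # bot token from your Slack team

    def park_instance(instance_id):
        # Placeholder: a real bot would call your cost-control API here.
        return f"Parking {instance_id}..."

    if sc.rtm_connect():
        while True:
            for event in sc.rtm_read():
                text = event.get("text", "")
                if event.get("type") == "message" and text.startswith("parkbot park"):
                    sc.api_call(
                        "chat.postMessage",
                        channel=event["channel"],
                        text=park_instance(text.split()[-1]),
                    )
            time.sleep(1)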


If you haven't already signed up for ParkMyCloud, start a free trial and get the Slackbot hooked up for easy AWS Slack integration. You'll find that ParkMyCloud makes continuous cost control easy and helps reduce your cloud spend, all while integrating with your favorite DevOps tools!
