How to Save on AWS Archives - ParkMyCloud

How to save on AWS costs is a question many project managers have likely asked themselves since Amazon launched its Elastic Compute Cloud (EC2) in 2006. Indeed, as the number of organizations using the EC2 platform has grown – and the amount each organization spends on EC2 has ballooned – the question has probably been asked more frequently than you might imagine.

There are a number of answers to this question. Organizations can take advantage of Amazon’s flexible pricing plans to assign instances to the most cost-effective option or, if they have the organizational foresight to plan three years ahead, use Amazon’s Reserved Instances pricing model to save on AWS costs.

Where organizations use a high proportion of non-production instances, one way to save on AWS costs is to reassign internal resources to developing scheduling scripts that switch those instances off when they are not needed. Although the development effort can eat into the savings, scripted scheduling is a lot more reliable than asking teams to switch off their non-production instances before they go home at night.

Another way to save on AWS costs is to implement an AWS management solution from ParkMyCloud. ParkMyCloud is a lightweight Software-as-a-Service app that automates the scheduling process in a simple, user-friendly process called “parking”. The app even suggests which non-production instances are most suitable for parking – saving internal resources as well as AWS costs.

ParkMyCloud is also an exceptionally versatile cloud management solution. It enables administrators to set permission levels for development teams, provides a single-view dashboard across all accounts for easy governance, and allows for the schedule to be snoozed when developers want to access their development, testing and staging instances while they are parked.

To find out more about how to save on AWS costs with ParkMyCloud, contact us today.

Cloud Per-Second Billing – How Much Does It Really Save?

It has been a little over a month since Amazon and Google switched some of their cloud services to per-second billing and so the first invoices with the revised billing are hitting your inboxes right about now. If you are not seeing the cost savings you hoped for, it may be a good time to look again at what services were slated for the pricing change, and how you are using them.

Google Cloud Platform

Starting with the easiest one, Google Cloud Platform (GCP), you may not be seeing a significant change, as most of their services were already billing at the per-minute level, and some were already at the per-second level. The services moved to per-second billing (with a one-minute minimum) included Compute Engine, Container Engine, Cloud Dataproc, and App Engine VMs.  Moving from per-minute billing to per-second billing is not likely to change a GCP service bill by more than a fraction of a percent.

Let’s consider the example of an organization that has ten GCP n1-standard-8 Compute Engine machines in Oregon at a base cost of $0.3800 per hour as of the date of this blog. Under per-minute billing, the worst-case scenario would be to shut a system down one second into the next minute, for a cost difference of about $0.0063. Even if each of the ten systems were assigned to the QA or development organization, and they were shut down at the end of every work day, say 22 days out of the month, your worst-case scenario would be an extra charge of 22 days x 10 systems x $0.0063 = $1.3860. Under per-second billing, the worst case is to shut down at the beginning of a second, with the highest possible cost for these same machines (sparing you the math) being about $0.02. So, the most this example organization can hope to save over a month on these machines with per-second billing is about $1.36.

Amazon Web Services

On the Amazon Web Services (AWS) side of the fence, the change is both bigger and smaller.  It is bigger in that they took the leap from per-hour to per-second billing for On-Demand, Reserved, and Spot EC2 instances and provisioned EBS, but smaller in that it is only for Linux-based instances; Windows instances are still at per-hour.

Still, if you are running a lot of Linux instances, this change can be significant enough to notice.  Looking at the same example as before, let’s run the same calculation with the roughly equivalent t2.2xlarge instance type, charged at $0.3712 per hour. Under per-hour billing, the worst-case scenario is to shut a system down just a second into the next hour. In this example, the cost would be an extra charge of 22 days x 10 systems x $0.3712 = $81.664. Under per-second billing, the worst case is the same $0.02 as with GCP (with fractions of cents difference lost in the noise). So, under AWS, one can hope to see significantly different numbers in the bill.
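
If you want to check these figures yourself, here is a quick back-of-the-envelope sketch in Python, using the prices quoted above and assuming ten machines each shut down once per working day, 22 days a month:

  # Worst-case extra charge caused by billing granularity, per month.
  MACHINES = 10
  SHUTDOWNS_PER_MONTH = 22  # one shutdown per machine per working day

  def worst_case(hourly_rate, granularity_seconds):
      # At worst, each shutdown wastes one full billing unit.
      per_shutdown = hourly_rate * granularity_seconds / 3600.0
      return per_shutdown * MACHINES * SHUTDOWNS_PER_MONTH

  gcp_rate = 0.3800  # n1-standard-8, Oregon, per hour
  aws_rate = 0.3712  # t2.2xlarge, per hour

  print(round(worst_case(gcp_rate, 60), 2))    # per-minute billing:  ~$1.39
  print(round(worst_case(gcp_rate, 1), 2))     # per-second billing:  ~$0.02
  print(round(worst_case(aws_rate, 3600), 2))  # per-hour billing:    ~$81.66
  print(round(worst_case(aws_rate, 1), 2))     # per-second billing:  ~$0.02

Swap in your own rates and shutdown counts to estimate the ceiling on what the granularity change alone can save you.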

The scenario above is equally relevant to other situations where instances get turned on and off on a frequent basis, driving those fractions of an hour or a minute of “lost” time. Another common example would be auto-scaling groups that dynamically resize based on load, and see enough change over time to bring instances in and out of the group. (Auto-scale groups are frequently used as a high-availability mechanism, so their elastic growth capabilities are not always used, and so savings will not always be seen.) Finally, Spot instances are built on the premise of bringing them up and down frequently, and they will also enjoy the shift to per-second billing.

However, as you look at your cloud service bill, do keep in mind some of the nuances that still apply:

  • Windows: GCP applies per-second billing to Windows; AWS is still on one-hour billing for Windows.
  • Marketplace Linux: Some Linux instances in the AWS Marketplace that have a separate hourly charge are also still on hourly billing (perhaps due to contracts or licensing arrangements with the vendors?), so you may want to reconsider which flavor of Linux you want to use.
  • Reserved instances: AWS does strive to “use up” all of the pre-purchased time for reserved instances, spreading it across multiple machines with fractions of usage time, and per-second billing can really stretch the value of these instances.
  • Minimum of one-minute charge: Both GCP and AWS will charge for at least a minute from instance start before per-second billing comes into play.

Overall, per-second billing is a great improvement for consumers of cloud resources…and will probably drive us all more than ever to make each second count.


AWS IAM Roles and Ways to Use them to Improve Security

What are AWS IAM Roles?

Within the AWS Identity and Access Management (IAM) system there are a number of different identity mechanisms that can be configured to secure your AWS environment, such as Users, Groups, and AWS IAM Roles. Users are clearly the humans in the picture, and Groups are collections of Users, but Roles can be a bit more obscure. Roles are defined as a set of permissions that grant access to actions and resources in AWS. Unlike Users, which are tied to a specific identity and a specific AWS account, an IAM Role can be used by or assumed by IAM User accounts or by services within AWS, and can give access to Users from another account altogether.

To better understand Roles, I like the metaphor of a hat.  When we say a Role is assumed by a user – it is like saying someone can assume certain rights or privileges because of what hat they are wearing.  In any company (especially startups), we sometimes say someone “wears a lot of hats” – meaning that person temporarily takes on a number of different Roles, depending on what is needed. Mail delivery person, phone operator, IT support, code developer, appliance repairman…all in the space of a couple hours.

IAM Roles are similar to wearing different hats in that they temporarily let an IAM User or a service get permissions to do things they would not normally get to do.  These permissions are attached to the Role itself, and are conveyed to anyone or anything that assumes the role.  Like Users, Roles have credentials that can be used to authenticate the Role identity.

Here are a couple ways in which you can use IAM Roles to improve your security:

EC2 Instances

All too often, we see software products that rely on credentials (username/password) for services or accounts that are either hard-coded into an application or written into some file on disk. Frequently the developer had no choice, as the system had to be able to automatically restart and reconnect when the machine rebooted, without anyone available to manually type in credentials. If the code is examined or the file system is compromised, the credentials are exposed and can potentially be used to compromise other systems and services. In addition, such credentials make it really difficult to periodically change the password. Even in AWS we sometimes see developers hard-code API Key IDs and Keys into apps in order to get access to some AWS service. This is a security accident waiting to happen, and can be avoided through the use of IAM Roles.

With AWS, we can assign a single IAM Role to an EC2 instance. This assignment is usually made when the instance is launched, but can also be done at runtime if needed. Applications running on the server retrieve the Role’s security credentials by pulling them out of the instance metadata through a simple web command. These credentials have an additional advantage over potentially long-lived, hard-coded credentials, in that they are changed or rotated frequently, so even if somehow compromised, they can only be used for a brief period.
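
To make that concrete, the “simple web command” is just an HTTP GET against the instance metadata service. Here is a minimal sketch, assuming the original token-less (v1) metadata service and run on the instance itself:

  # Fetch the temporary credentials for the IAM Role attached to this EC2 instance.
  # 169.254.169.254 is the instance metadata service, reachable only from the instance.
  import json
  from urllib.request import urlopen

  BASE = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

  role_name = urlopen(BASE).read().decode()             # name of the attached Role
  creds = json.loads(urlopen(BASE + role_name).read())  # temporary credentials

  # AWS rotates these automatically; note the expiration time.
  print(creds["AccessKeyId"], creds["Expiration"])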

Another key security advantage of Roles is that they can be limited to just the access/rights privileges needed to get a specific job done. Amazon’s documentation for roles gives the example of an application that only needs to be able to read files out of S3. In this case, one can assign a Role that contains read-only permissions for a specific S3 bucket, and the Role’s configuration can say that the role can only be used by EC2 instances. This is an example of the security principle of “least privilege,” where the minimum privileges necessary are assigned, limiting the risk of damage if the credential is compromised. In the same sense that you would not give all of your users “Administrator” privileges, you should not create a single “Allow Everything” Role that you assign everywhere. Instead, create a different Role specific to the needs of each system or group of systems.

Delegation

Sometimes one company needs to give access to its resources to another company. Before IAM Roles (and before AWS), the common ways to do that were to share account logins (with the same issues identified earlier with hard-coded credentials) or to use complicated PKI/certificate-based systems. If both companies are using AWS, sharing access is much easier with Role-based Delegation. There are several ways to configure IAM Roles for delegation, but for now we will just focus on delegation between accounts from two different organizations.

At ParkMyCloud, our customers use Delegation to let us read the state of their EC2, RDS, and scaling group instances, and then start and stop them per the schedules they configure in our management console.

To configure Role Delegation, a customer first creates an account with the service provider, and is given the provider’s AWS Account ID and an External ID. The External ID is a unique number for each customer generated by the service provider.

The administrator of the customer environment creates an IAM Policy with a constrained set of access (principle of “least privilege” again), and then assigns that policy to a new Role (like “ParkMyCloudAccess”), specifically assigned to the provider’s Account ID and External ID.  When done, the resulting IAM Role is given a specific Amazon Resource Name (ARN), which is a unique string that identifies the role.  The customer then enters that role ARN in the service provider’s management console, which is then able to assume the role.  Like the EC2 example, when the ParkMyCloud service needs to start a customer EC2 instance, it calls the AssumeRole API, which verifies our service is properly authenticated, and returns the temporary security credentials needed to manage the customer environment.
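
For the curious, the provider-side call looks roughly like this boto3 sketch (the role ARN, external ID, and instance ID are illustrative placeholders):

  # Service-provider side of Role Delegation: assume the customer's Role.
  import boto3

  sts = boto3.client("sts")
  resp = sts.assume_role(
      RoleArn="arn:aws:iam::123456789012:role/ParkMyCloudAccess",  # placeholder
      RoleSessionName="parkmycloud",
      ExternalId="example-external-id",  # placeholder; unique per customer
  )
  creds = resp["Credentials"]  # temporary keys that expire automatically

  # Use the temporary credentials to act in the customer's account.
  ec2 = boto3.client(
      "ec2",
      aws_access_key_id=creds["AccessKeyId"],
      aws_secret_access_key=creds["SecretAccessKey"],
      aws_session_token=creds["SessionToken"],
  )
  ec2.start_instances(InstanceIds=["i-0123456789abcdef0"])  # placeholder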

Conclusions

AWS IAM Roles make some tasks a lot simpler by flexibly assigning roles to instances and other accounts. IAM Roles can help make your environment more secure by:

  • Using the principle of least privilege in IAM policies to limit systems and services to only the access needed to do a specific job.
  • Preventing hard-coding of credentials in code or files, minimizing the danger of exposure and removing the risk of long-unchanged passwords.
  • Minimizing shared accounts and passwords by allowing controlled cross-account access.

AWS Lambda + ParkMyCloud = Supercharged Automation

Among the variety of AWS services and functionality, AWS Lambda seems to be taking off with hackers and tinkerers. The idea of “serverless” architecture is quite a shift in the way we think about applications, tools, and services, but it’s a shift that is opening up some new ideas and approaches to problem solving.  

If you haven’t had a chance to check out Lambda, it’s a “function-as-a-service” platform that allows you to run scripts or code on demand, without having to set up servers with the proper packages and environments installed. Your Lambda function can trigger from a variety of sources and events, such as HTTP requests, API calls, S3 bucket changes, and more. The function can scale up automatically, so more compute resources will be used if necessary without any human intervention. The code can be written in Node.js, Python, Java, or C#.
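
If you haven’t written one yet, a Lambda function is just a handler that AWS invokes with the triggering event. A minimal Python sketch:

  # Minimal AWS Lambda handler (Python). The shape of "event" depends on
  # the trigger: an S3 notification, an API call, a scheduled rule, etc.
  def lambda_handler(event, context):
      print("received event:", event)  # written to CloudWatch Logs
      return {"statusCode": 200, "body": "ok"}

Wiring a trigger to the function is then configuration, not code.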

Some pretty cool ideas already exist for Lambda functions to automate processes.  One example from AWS is to respond to a GitHub event to trigger an action, such as the next step in a build process.  There’s also a guide on how to use React and Lambda to make an interactive website that has no server.

For those of you who are already using ParkMyCloud to schedule resources, you may be looking to plug in to your CI/CD pipeline to achieve Continuous Cost Control.  I’ve come up with a few ideas of how to use Lambda along with ParkMyCloud to supercharge your AWS cloud savings.  Let’s take a look at a few options:

Make ParkMyCloud API calls from Lambda

With ParkMyCloud’s API available to control your schedules programmatically, you could make calls to ParkMyCloud from Lambda based on events that occur.  The API allows you to do things like list resources and schedules, assign schedules to resources, snooze schedules to temporarily override them, or cancel a snooze or schedule.

For instance, if a user logs in remotely to the VPN, it could trigger a Lambda call to snooze the schedules for that user’s instances.  Alternatively, a Lambda function could change the schedules of your Auto Scaling Group based on average requests to your website.  If you store data in S3 for batch processing, a trigger from an S3 bucket can tell Lambda to notify ParkMyCloud that the batch is ready and the processing servers need to come online.
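
Here is a rough sketch of that first idea. Note that the API base URL, route, payload, and event fields below are illustrative placeholders rather than the documented ParkMyCloud API; check the API docs for the real calls:

  # Hypothetical sketch: a Lambda triggered by a VPN login snoozes the
  # schedule on that user's instance via the ParkMyCloud API.
  # The URL, route, headers, and payload are placeholders, not the real spec.
  import json
  from urllib.request import Request, urlopen

  API_BASE = "https://console.parkmycloud.com/api"  # placeholder base URL
  API_TOKEN = "YOUR_API_TOKEN"                      # placeholder credential

  def lambda_handler(event, context):
      resource_id = event["resource_id"]  # assumed field in the trigger event
      req = Request(
          f"{API_BASE}/resources/{resource_id}/snooze",  # placeholder route
          data=json.dumps({"hours": 4}).encode(),
          headers={"Authorization": f"Bearer {API_TOKEN}",
                   "Content-Type": "application/json"},
          method="POST",
      )
      with urlopen(req) as resp:
          return {"statusCode": resp.status}

The same pattern applies to the Auto Scaling Group and S3 batch examples; only the trigger and the API route change.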

Send notifications from ParkMyCloud to Lambda

With ParkMyCloud’s notification system, you can send events that occur in the ParkMyCloud system to a webhook or email.  The events can be actions taken by schedules that are applied to resources, user actions that are done in the UI, team and schedule assignments from policies, or errors that occur during parking.

By sending schedule events, you could use a Lambda function to tell your monitoring tool when servers are being shut down from schedules.  This could also be a method for letting your build server know that the build environment has fully started before the rest of your CI/CD tools take over.  You could also send user events to Lambda to feed into a log tool like Splunk or Logstash.  Policy events can be sent to Lambda to trigger an update to your CMDB with information on the team and schedule that’s applied to a new server.

Think outside the box!

Are you already using AWS Lambda to kick off functions and run scripts in your environment?  Try combining Lambda with ParkMyCloud and let us know what cool tricks you come up with for supercharging your automation and saving on your cloud bill! Stop by Booth 1402 at AWS re:Invent this year and tell us.


5 Favorite AWS Training Resources

When it comes to AWS training resources, there’s no shortage of information out there. Considering the wide range of videos, tutorials, blogs, and more, it’s hard to know where to look or how to begin. Finding the best resource depends on your learning style, your needs for AWS, and on getting the most up-to-date information available. With this in mind, we came up with our 5 favorite AWS training resources, sure to give you the tools you need to learn AWS:

1. AWS Self-Paced Labs

What better way to learn than at your own pace? AWS self-paced labs give you hands-on learning in a live AWS environment, with AWS services, and actual scenarios you would encounter in the cloud. Among the recommended labs you’ll find an Introduction to Amazon Elastic Compute Cloud (EC2), and for more advanced users, a lab on Creating Amazon EC2 Instances with Microsoft Windows. If you’re up for an adventure, enroll in a learning quest and immerse yourself in a collection of labs that will help you master any AWS scenario at your own pace. Once completed, you will earn a badge that you can show off on your resume, LinkedIn, website, etc.

2. AWS Free Tier

Sometimes the best way to learn something is by jumping right in. With the AWS Free Tier, you can try AWS services for free. This is a great way to test out AWS for your business or, for the developers out there, to try services like AWS CodePipeline, AWS Data Pipeline, and more. While you still get a hands-on opportunity to learn a number of AWS services, the only downside is that there are certain usage limits. You can track your usage with a billing alarm to avoid unwanted charges, or you can try ParkMyCloud and park your instances when they’re not in use to get the most out of your free tier experience. In fact, ParkMyCloud started its journey by using AWS’ free tier – we eat our own dog food!
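
Setting up that billing alarm takes only a few lines of boto3. Here is a minimal sketch (the $10 threshold and the SNS topic ARN are placeholders; billing metrics must be enabled in your billing preferences and live in us-east-1):

  # Alarm when AWS estimated monthly charges exceed $10.
  import boto3

  cw = boto3.client("cloudwatch", region_name="us-east-1")  # billing metrics live here
  cw.put_metric_alarm(
      AlarmName="free-tier-guard",
      Namespace="AWS/Billing",
      MetricName="EstimatedCharges",
      Dimensions=[{"Name": "Currency", "Value": "USD"}],
      Statistic="Maximum",
      Period=21600,  # evaluate every 6 hours
      EvaluationPeriods=1,
      Threshold=10.0,  # placeholder dollar threshold
      ComparisonOperator="GreaterThanThreshold",
      AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder
  )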

3. AWS Documentation

AWS Documentation is like a virtual encyclopedia of tools, terms, training, and everything AWS. You’ll find white papers, case studies, tutorials, cloud computing basics, and so much more. This resource is a one-stop-shop for all of your AWS documentation needs, whether you’re a beginner or advanced user. No matter where you are in your AWS training journey, AWS documentation is always a useful reference and certainly deserves a spot in your bookmarks.

4. YouTube

So far, we’ve gone straight to the source for 3 out of 5 of our favorite AWS training resources. Amazon really does a great job of providing hands-on training, tutorials, and documentation for users with a range of experience. However, YouTube opens up a whole new world of video training that includes contributions from not only Amazon, but other great resources as well. Besides the obvious Amazon Web Services channel, there are also popular and highly rated videos by Edureka, Simplilearn, Eli the Computer Guy, and more.

5. Bloggers

As cloud technology usage continues to expand and evolve, blogs are a great way to stay up to speed with AWS and the world of cloud computing. Of course, in addition to labs, a free tier, extensive documentation, and their own YouTube channel, AWS also has their own blog. Since AWS actually has a number of blogs that vary by region and technology, we recommend that you start by following Jeff Barr – Chief Evangelist at Amazon Web Services, and primary contributor. Edureka, mentioned among our recommended YouTube channels, also has a blog that covers plenty of AWS topics. In addition, the blog of CloudThat – a company co-founded by Bhavesh Goswami, a former member of the AWS product development team – is an excellent resource for AWS and all things cloud.

There’s plenty of information out there when it comes to AWS training resources. We picked our 5 favorite resources for their reliability, quality, and range of information. Whether you’re new to AWS or consider yourself an expert, these resources are sure to help you find what you’re looking for.


3 Enterprise Cloud Management Challenges You Should Be Thinking About

Enterprise cloud management is a top priority. As the shift towards multi-cloud environments continues, so does the need to consider the potential challenges. Whether you already use the public cloud, or are considering making the switch, you probably want to know what the risks are. Here are three you should be thinking about.

1. Multi-Cloud Environments

As the ParkMyCloud platform supports AWS, Azure, and Google, we’ve noticed that multi-cloud strategies are becoming increasingly common among enterprises. There are a number of reasons why it would be beneficial to utilize more than one cloud provider. We have discussed risk mitigation as a common reason, along with price protection and workload optimization. As multi-cloud strategies become more popular, the advantages are clear. However, every strategy comes with its challenges, and it’s important for CIOs to be aware of the associated risks.

Without the use of cloud management tools, multi-cloud management is complex and sometimes difficult to navigate. Different cloud providers have different price models, product features, APIs, and terminology. Compliance requirements are also a factor that must be considered when dealing with multiple providers. Meeting and maintaining requirements for one cloud provider is complicated enough, let alone multiple. And don’t forget you need a single pane to view your multi-cloud infrastructure.

2. Cost Control

Cost control is a top priority among cloud computing trends. Enterprise Management Associates (EMA) conducted a research study and identified key reasons why there is a need for cloud cost control; among them were inefficient use of cloud resources, unpredictable billing, and contractual obligation or technological dependency.

Managing your cloud environment and controlling costs requires a great deal of time and strategy, taking away from the initiatives your enterprise really needs to be focusing on. The good news is that we offer a solution to cost control that will save 65% or more on your monthly cloud bills – simply by parking your idle cloud resources. ParkMyCloud was one of the top three vendors recommended by EMA as a Rapid ROI Utility. If you’re interested in seeing why, we offer a 14-day free trial.

3. Security & Governance

In discussing a multi-cloud strategy and its challenges, the bigger picture also includes security and governance. As we have mentioned, a multi-cloud environment is complex, complicated, and requires native or 3rd-party tools to maintain vigilance. Aside from legal compliance based on the industry your company is in, the cloud also comes with standard security issues and, of course, the possibility of cloud breaches. In this vein, as we talk to customers they often worry about too many users being granted console access to create and terminate cloud resources, which can lead to waste. A key here is limiting user access based on roles, i.e. Role-Based Access Control (RBAC). At ParkMyCloud we recognize that visibility and control are important in today’s complex cloud world. That’s why, in designing our platform, we provide the sysadmin the ability to delegate access based on a user’s role, and the ability to authenticate leveraging SSO using SAML integration. This approach brings security benefits without losing the appeal of a multi-cloud strategy.

Our Solution

Enterprise cloud management is an inevitable priority as the shift towards multi-cloud environments continues. Multiple cloud services add complexity to the challenges of IT and cloud management. Cost control is time-consuming and needs to be automated and monitored constantly. Security and governance are a must, and it’s necessary to ensure that users and resources are optimally governed. As the need for cloud management continues to grow, cloud automation tools like ParkMyCloud provide a means to effectively manage cloud resources, minimize challenges, and save you money.


How to Get the Cheapest Cloud Computing

Are you looking for the cheapest cloud computing available? Depending on your current situation, there are a few ways you might find the least expensive cloud offering that fits your needs.

If you don’t currently use the public cloud, or if you’re willing to have infrastructure in multiple clouds, you’re probably looking for the cheapest cloud provider. If you have existing infrastructure, there are a few approaches you can take to minimize costs and ensure they don’t spiral out of control.

Find the Cloud Provider that Offers the Cheapest Cloud Computing

There are a variety of small cloud providers that attempt to compete by dropping their prices. If you work for a small business and prefer a no-frills experience, perhaps one of these is right for you.

However, there’s a reason that the “big three” cloud providers – Amazon Web Services (AWS), Microsoft Azure, and Google Cloud – dominate the market. They offer a wide range of product lines, and are continually innovating. They have a low frequency of outages, and their scale means a straightforward onboarding process and plenty of documentation.

Whatever provider you decide on, ensure that you’ll have access to all the services you need – are there compute products, storage, and databases? How good is the customer support?

For more information about the three major providers’ pricing, please see this whitepaper on AWS vs. Google Cloud Pricing and this article comparing AWS vs. Azure pricing.

Locked In? How to Get the Cheapest Cloud Computing from Your Current Provider

Of course, if your organization is already locked into a cloud computing provider, comparing providers won’t do you much good. Here’s a short checklist of things you should do to ensure you’re getting the cheapest cloud computing possible from your current provider:

  • Use Reserved Instances for production – Reserved instances can save money – as long as you use them the right way. More here. (This article is about AWS RIs, but similar principles apply to Azure’s RIs and Google’s Committed Use discounts.)
  • Only pay for what you actually need – there are a few common ways that users inadvertently waste money, such as using larger instances than they need, and running development/testing instances 24/7 rather than only when they’re needed. (Here at ParkMyCloud, we’re all about reducing this waste – try it out.)
  • Ask – it never hurts to contact your provider and ask if there’s anything you could be doing to get a cheaper price. If you use Microsoft Azure, you may want to sign up for an Enterprise License Agreement. Or maybe you qualify for AWS startup credits.

Get Credit for Your Efforts

While finding the cheapest cloud computing is, of course, beneficial to your organization’s common good, there’s no need to let your work in spending reduction go unnoticed. Make sure that you track your organization’s spending and show your team where you are reducing spend.

We’ve recently made this task easier than ever for ParkMyCloud users. Now, you can not only create and customize reports of your cloud spending and savings, but you can also schedule these reports to be emailed out. Users are already putting this to work by having savings reports automatically emailed to their bosses and department heads, to ensure that leadership is aware of the cost savings gained… and so users can get credit for their efforts.


Reduce RDS Costs with ParkMyCloud

Thanks to the new ability to stop and start instances, users of Amazon’s database service can finally save time and reduce RDS costs. Until June 2017, the only way to accomplish this feat was by copying and deleting instances, running the risk of losing transaction logs and automatic backups. While Amazon’s start/stop capability is useful and provides a level of cost savings, it also comes with issues of its own.

For one, the built-in start/stop capability is not foolproof. The process for stopping and starting non-production RDS instances is manual, relying on the user to create and consistently manage the schedule. Having to manually switch instances off when they are not in use, and then restart them when access is needed again, saves money but leaves room for human error. Complicating things further, RDS instances that have been shut down will automatically be restarted after seven days, again relying on the user to switch those instances back off if they’re not needed at the time.

Why Scripting is not the Best Answer

One way of minimizing the potential for error is to automate the stop/start schedule yourself by writing your own scripts. While that could work, you would need to consider the number of non-production instances deployed on AWS RDS, and plan for a schedule that would allow developers to have access when needed, which could very well be at varying times throughout the day. All factors considered, writing and maintaining scheduling scripts takes extra time and costs money as well. Ultimately, setting up and maintaining your own schedule could increase your cloud spend more than it reduces RDS costs.
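
To see why, here is a minimal sketch of the DIY approach – stopping every available RDS instance tagged as non-production (the tag name and value are assumptions). Everything beyond this – calendars, time zones, the seven-day auto-restart, retries, and per-team schedules – is left as an exercise:

  # Bare-bones DIY scheduler: stop tagged non-production RDS instances.
  import boto3

  rds = boto3.client("rds")

  for db in rds.describe_db_instances()["DBInstances"]:
      tags = {t["Key"]: t["Value"] for t in
              rds.list_tags_for_resource(ResourceName=db["DBInstanceArn"])["TagList"]}
      if tags.get("environment") == "non-production" and db["DBInstanceStatus"] == "available":
          rds.stop_db_instance(DBInstanceIdentifier=db["DBInstanceIdentifier"])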

When you start thinking about the cost of paying developers, the number of scripts that would have to be written, and the ongoing maintenance required, buying into an automated scheduling process is a no-brainer.

How ParkMyCloud Reduces RDS Costs

Automated Scheduling

ParkMyCloud saves you time and money by automating the scheduling process of stopping and starting AWS RDS instances (in addition to Microsoft Azure VMs and Google Cloud Compute instances, but that’s another post). At the same time, you get total visibility and full autonomy over your account.

The process is simple. With you as the account manager, ParkMyCloud conducts a discovery of all the company accounts, and determines which instances are most suitable for parking. From there, you have the option of implementing company-wide schedules for non-production instances, or giving each development team the ability to create schedules of their own.

Flexible Parking

ParkMyCloud takes saving on RDS costs to a whole new level with parking schedules. Different schedules can be applied to different instances, or they can be parked permanently and put on “snooze” when access is needed. Amazon’s seven-day automatic restart of switched off instances is a non-issue with our platform, and snoozed instances can be re-parked when access is no longer needed, so there’s no more relying on the user to do it manually.

For the most part, we find that companies will want to park their non-production instances outside the normal working hours of Monday to Friday, let’s say from 8:00am to 8:00pm. By parking your instances outside of those days and hours, ParkMyCloud can reduce your cloud spend by 65% – even more if you implement a parking schedule and use the snooze option.

Valuable Insight

Because you have total visibility over the account, you can reduce RDS costs even further by having a bird’s-eye view of your company’s cloud use. You’ll be able to tell which of your instances are underused, terminate them, and possibly benefit further from selecting a cheaper plan (really soon). You’ll be able to see all RDS instances across all regions and AWS accounts in one simple view. You can also view the parking schedules for each instance and see how much each schedule is saving, potentially reducing costs even further. This visibility into your account and its information provides a great resource for budgeting and planning.

Conclusion

Amazon’s built-in start/stop capability is useful, but has to be managed manually. Writing your own scripts sounds helpful, but it’s actually time-consuming, and not fully cost-effective. ParkMyCloud automates the process while still putting you in control, reducing RDS costs and saving you time and money.

See the benefits of ParkMyCloud for yourself by taking advantage of our two-week free trial. Test our cloud cost control platform in your own environment, without any need for a credit card or signed contract, and see why our simple, cost-effective tool is the key to reducing RDS costs. We offer a variety of competitive pricing plans to choose from, or a limited-function version that you can continue to use for free after the trial ends.

To start your free trial today, sign up here.


Cloud Optimization Tools = Cloud Cost Control (Part II)

A couple of weeks ago in Part 1 of this blog topic we discussed the need for cloud optimization tools to help enterprises with the problem of cloud cost control. Amazon Web Services (AWS) even goes as far as suggesting the following simple steps to control their costs (which can also be applied  to Microsoft Azure and Google Cloud Platform, but of course with slightly different terminology):

    1. Right-size your services to meet capacity needs at the lowest cost;
    2. Save money when you reserve;
    3. Use the spot market;
    4. Monitor and track service usage;
    5. Use Cost Explorer to optimize savings; and
    6. Turn off idle instances (we added this one).

A variety of third-party tools and services have popped up in the market over the past few years to help with cloud cost optimization – why? Because upwards of $23B was spent on public cloud infrastructure in 2016, and spending continues to grow at a rate of 40% per year. Furthermore, depending on who you talk to, roughly 25% of public cloud spend is wasted or not optimized — that’s a huge market! If left unchecked, this waste problem is projected to triple to over $20B by 2020 – enter the vultures (full disclosure, we are also a vulture, but the nice kind). Most of these tools are lumped under the Cloud Management category, which includes subcategories like Cost Visibility and Governance, Cost Optimization, and Cost Control vendors – we are a cost control vendor to be sure.

Why do you, an enterprise, care? Because there are subtle but important differences between the tools that fit into these categories, so your use case should dictate where you go for what – and that’s what I am trying to help you with. So, why am I a credible source to write about this (and not just because ParkMyCloud is the best thing since sliced bread)?

Well, yesterday we had a demo with a FinTech company in California that was interested in Cost Control, or thought they were. It turns out that what they were actually interested in was Cost Visibility and Reporting; the folks we talked to were in Engineering Finance, so their concerns were primarily with billing metrics, business unit chargeback for cloud usage, RI management, and dials and widgets to view everything related to AWS and GCP billing. Instead of trying to force a square peg into a round hole, we passed them on to a company in this space that is better suited to solve their immediate needs. In return, the Finance folks are going to put us in touch with the FinTech Cloud Ops folks, who care about automating their cloud cost control as part of their DevOps processes.

This type of situation happens more often than not. We have a lot of enterprise customers using ParkMyCloud along with CloudHealth, CloudCheckr, Cloudability, and Cloudyn because, in general, they provide Cost Visibility and Governance, and we provide actionable, automated Cost Control.

As this is our blog, here’s my view from the street – we have 200+ customers now using ParkMyCloud, and we demo to 5-10 enterprises per week. Based on a couple of generic customer use cases where we have strong familiarity, here’s what you need to know to stay ahead of the game:

  • Cost Visibility and Governance: CloudHealth, CloudCheckr, Cloudability, and Cloudyn (now owned by Microsoft)
  • Reserved Instance (RI) Management: all of the above
  • Spot Instance Management: SpotInst
  • Monitor and Track Usage: CloudHealth, CloudCheckr, Cloudability, and Cloudyn
  • Turn off (park) Idle Resources: ParkMyCloud, Skeddly, GorillaStack, BotMetric
  • Automate Cost Control as part of your DevOps Process: ParkMyCloud
  • Govern User Access to Cloud Console for Start/Stop: ParkMyCloud
  • Integrate with Single Sign-On (SSO) for Federated User Access: ParkMyCloud

To summarize, cloud cost control is important, and there are many cloud optimization tools available to assist with visibility, governance, management, and control of your single or multi-cloud environments. However, there are very few tools which allow you to set up automated actions leveraging your existing enterprise tools like Ping, Okta, Atlassian, Jenkins, and Slack.  Make sure you are not only focusing on cost visibility and recommendations, but also on action-oriented platforms to really get the best bang for your buck.


How to Optimize Cloud Spend with ParkMyCloud

The focus on how to optimize cloud spend is now as relentless as the initial surge to migrate workloads from ‘on-prem’ to public cloud. A lot of this focus, and the resulting discussions, has been about options related to the use of Reserved Instances (RIs), Spot Instances, or other pre-pay options. The pay-up-front discount plan makes sense when you have some degree of visibility into future needs, and when there is no ‘turn-it-off’ option, which we here at ParkMyCloud call “parking”.

When it comes to the ability to ‘park’ instances, we like to divide the world into two halves. There are Production Systems, which typically need to be running 24/7/365, and then there are Non-Production Systems, which at least in theory have the potential to be parked when not in use. The former are typically your end-customer or enterprise-facing systems, which need to be online and available at all times. In this case, RIs typically make sense. When it comes to those non-production systems, that’s where a tool such as ParkMyCloud comes into play. Here you have an opportunity to review the usage patterns and needs of your organization and decide how to optimize cloud spend accordingly. For example, you may well discover that your QA team never works on weekends, so you can turn their EC2 instances off on a Friday night and turn them back on first thing on Monday morning. Elsewhere, you might find other workloads that can be turned off in the small hours, or even workloads which can be left off for extended periods.

Our customers typically like to view both their production and non-production systems in our simple dashboard. Here they can view all their public cloud infrastructure and simply lock those production systems which cannot be touched. Once within the dashboard, the different non-production workloads can then be reviewed and either centrally managed by an admin or have their management delegated to individual business units or teams.

Based on the customer usage we track, we see these non-production systems typically accounting for about 50% of what companies spend on compute (i.e. instances/VMs). We then see those who aggressively manage these non-production instances saving up to 65% of their cost, which makes a large dent in their overall cloud bill.

So, when you are thinking about how to optimize cloud spend, there’s a lot more opportunities than just committing to purchase in advance, especially for your non-production workloads.


Shutting Down RDS Instances in AWS – Introducing the Start/Stop Scheduler

Users of Amazon’s database service have been clamoring for a solution to shutting down RDS instances with an automatic schedule ever since 2009, when the PaaS service was first released.  Once Amazon announced the ability to power off and on RDS instances earlier this year, AWS users started planning out ways to schedule these instances using scripts or home-grown tools.  However, users of ParkMyCloud were happy to find out that support for RDS scheduling was immediately available in the platform.  If you were planning on writing your own scripts for RDS parking, let’s take a look at some of the additional features that ParkMyCloud could provide for you.

Schedule EC2 and ASG in addition to RDS

Very few AWS users are utilizing RDS databases without simultaneously running EC2 instances as compute resources.  This means that writing your own scheduling scripts for shutting down RDS instances would involve scheduling EC2 instances as well.

ParkMyCloud has support for parking EC2 resources, RDS databases, and Auto Scaling Groups all from the same interface, so it’s easy to apply on/off schedules to all of your cloud resources.

Logical Groups to tie instances together

Let’s say you have a QA environment with a couple of RDS databases and multiple EC2 instances running a specific version of your software. With custom scripts, you have to implement logic that will shut down and start up all of those instances together, and potentially in a specific order.  ParkMyCloud allows users to create Logical Groups, each of which shows up as one logical entity in the interface but schedules multiple instances behind it.  You can also set start or stop delays within the Logical Group to customize the order, so if databases need to be started first and stopped last, you can set that level of granularity.
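
In script form, that ordering logic looks something like this sketch (the instance identifiers and the delay are placeholders):

  # Sketch: start databases first, then app servers, with a delay between tiers.
  import time
  import boto3

  rds = boto3.client("rds")
  ec2 = boto3.client("ec2")

  for db_id in ["qa-db-1", "qa-db-2"]:  # placeholder identifiers
      rds.start_db_instance(DBInstanceIdentifier=db_id)

  time.sleep(120)  # assumed delay while the databases come up

  ec2.start_instances(InstanceIds=["i-0aaa1111", "i-0bbb2222"])  # placeholders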

Govern user access to databases

If your AWS account includes RDS databases that relate to dev, QA, staging, production, test, and UAT, then you’ll want to allow different users to access different databases based on their role or current project.  Implementing user governance in your own scripts can be a huge hassle, but ParkMyCloud makes it easy to split your user base into teams.  Users can be part of multiple teams if necessary, but by default they will only see the RDS databases that are in the teams they have access to.

High visibility into all AWS accounts and regions

Scripting your own schedules can be a challenge even within a single region or account, but once you’re using RDS databases from around the world or across AWS accounts, it becomes a real headache.  ParkMyCloud pulls all resources from all accounts and all AWS regions into one pane of glass, so it’s easy to apply schedules and keep an eye on all your RDS databases.

RDS DevOps automation

It can be a challenge to integrate your own custom scripts with your DevOps processes.  With ParkMyCloud, you have multiple options for automation.  With the Policy Engine, RDS instances can have schedules applied automatically based on tags, names, or locations.  Also, the ParkMyCloud API makes it easy to override schedules and toggle instances from your Slack channels, CI/CD tools, load-testing apps, and any other automated processes that might need a database instance powered on for a brief time.

Conclusion

Shutting down RDS instances is a huge money-saver.  Anyone who is looking to implement their own enterprise-grade AWS RDS start/stop scheduler is going to run into many challenges along the way.  Luckily, ParkMyCloud is on top of things and has implemented RDS parking alongside the other robust feature set that you already use for cost savings.  Sign up for a free trial today to supercharge your RDS database scheduling!


Interview: Hybrid Events Group + ParkMyCloud to Automate EC2 Instance Scheduling and Optimize AWS Infrastructure

We talked with Jedidiah Hurt, DevOps and technical lead at Hybrid Events Group, about how his company is using ParkMyCloud to automate EC2 instance scheduling, saving hours of development work. Below is a transcript of our conversation.

Appreciate you taking the time to speak with us today. Can you start off by giving us some background on your role, what Hybrid Events Group does, and why you got into doing what you do?

I do freelance work for Hybrid Events Group and am now moving into the role of technical lead. We had a big client we were working with this spring and we needed to fire up several EC2 instances. We were doing live broadcasting events across the U.S., which is what the company specializes in – event A/V services. So we do live webcasting, and we can do CapturePro, another service we offer where we basically just show up to any event that someone would want to record – usually workshops and keynotes at tech conferences – and we record on video and also capture the presenter’s presentation in real time.

ParkMyCloud, what we used it for, was just to automate EC2 instances for doing live broadcasts.

Was there any reason you chose AWS over others like Azure or Google Cloud, out of curiosity?

I just had the most experience with AWS; I was using AWS before Azure and Google Cloud existed. So I haven’t, or I can’t say that I’ve actually really given much of a trial to Azure or Google Cloud. I might have to give them a look here sometime in the future.

Do you use any PaaS services in AWS, or do you focus on compute, databases, and storage?

Yeah, not a whole lot right now. Just your basic S3, EC2, and I think we are probably going to move into elastic load balancing and auto scaling groups within the next few months or so as we build out our platform.

Do you use an Agile development process to build out your platform and provide continuous delivery?

So, I am an agile practitioner, but we are just kind of brownfielding the platform. We are in the architecture stage right now, so we will be doing all of that, as far as continuous deployment, and hopefully continuous integration where we actually have some automated testing.

As far as tools, I’m the only developer on the team right now, so we won’t really have a full Agile process or be fully into Agile. We haven’t got boards and sprints and planning, weekly meetings, and all those things, because it’s just me. But we integrate portions of it, as far as having stakeholders kind of figuring out what our minimum viable product is.

What drove you to look for something like ParkMyCloud, and how did you come across it?

ParkMyCloud enabled us to automate a process that we were going to do manually, or that I was going to have to write scripts for and maintain. I think initially I was looking into just using the AWS CLI and some kind of task scheduler to bring up the instances and then turn them off after our daily broadcast session was over. I did a little bit of googling to see if there were any time-based solutions available and found ParkMyCloud, and this platform does exactly what’s needed and more.

And you are using the free tier of ParkMyCloud, correct?

Yes. I don’t remember what the higher tiers offered, but this was all we really needed. We just had three or four large EC2 instances that we wanted to bring up for four to five hours a day, Monday through Friday, so it had all the core features that we currently need.

Anything that stood out for you in terms of using the product?

I’d say on the plus side I was a little bit concerned at the beginning as far as the reliability of the tool, because we would have been in big trouble with our client if ParkMyCloud failed to bring up an instance at a scheduled start time. We used it, or I guess I would say we relied on it, every day for 2 months solid, and never saw any issues as far as instances not coming up when they were supposed to, or shutting down when they were not supposed to. I was really pleased with, what I would say, the reliability of the tool – that definitely stuck out to me.

From an ROI standpoint, are you satisfied with savings and the way the information is presented to you?

Yeah, absolutely. And I think for us, the ROI wasn’t so much the big difference between having the instances running all the time, or having the instances on a schedule. The ROI was more from the fact that I didn’t have to build the utility to accomplish that because you guys already did that. So in that sense, it probably saved me many hours of development work.

Also, there’s that kind of uneasy feeling you get when you hack up a little script and put it into production, versus having a well-tested, fully-automated platform. I’m really happy that we found ParkMyCloud; it has definitely become an important part of our infrastructure management over the last few months.

As our final question, how much overhead or time did you have to spend in getting ParkMyCloud set up to manage your environment, and did you have to do anything on a daily or weekly basis to maintain it?

So, as I said, our particular use case was very basic, so it ended up being three instances that we needed to bring up for three or four hours a day and then shut them down. I’d say it took me ten to fifteen minutes to get rolling with ParkMyCloud and automate EC2 instance scheduling. And now we save thousands of dollars per month on our AWS bill.


Cloud Optimization Tools = Cloud Cost Control

Over the past couple of years we have had a lot of conversations with large and small enterprises regarding cloud management and cloud optimization tools, all of whom were looking for cost control. They wanted to reduce their bills, just like any utility you might run at home — why spend more than you need to? Amazon Web Services (AWS) actively promotes optimizing cloud infrastructure, and where they lead, others follow. AWS even goes so far as to suggest the following simple steps to control AWS costs:

  1. Right-size your services to meet capacity needs at the lowest cost;
  2. Save money when you reserve;
  3. Use the spot market;
  4. Monitor and track service usage;
  5. Use Cost Explorer to optimize savings; and
  6. Turn off idle instances (we added this one).

It’s interesting to note the use of the word ‘control’ even though the section is labeled Cost Optimization.

So where is all of this headed? It’s great that AWS offers their own solutions, but what if you want automation built into your DevOps processes, multi-cloud support (or you plan to be multi-cloud), real-time reporting on these savings, and the ability to turn stuff off when you are not using it? Well, then you likely need a third-party tool to help with these tasks.

Let’s take a quick look at a description of each AWS recommendation above, and get a better understanding of each offering. Following this we will then explore if these cost optimization options can be automated as part of a continuous cost control process:

  1. Right-sizing – Both the EC2 Right Sizing solution and AWS Trusted Advisor analyze utilization of EC2 instances running during the prior two weeks. The EC2 Right Sizing solution analyzes all instances with a max CPU utilization less than 50% and determines a more cost-effective instance type for that workload, if available.
  2. Reserved Instances (RI) – For certain services like Amazon EC2 and Amazon RDS, you can invest in reserved capacity. With RI’s, you can save up to 75% over equivalent ‘on-demand’ capacity. RI’s are available in three options – (1) All up-front, (2) Partial up-front or (3) No upfront payments.
  3. Spot – Amazon EC2 Spot instances allow you to bid on spare Amazon EC2 computing capacity. Since Spot instances are often available at a discount compared to On-Demand pricing, you can significantly reduce the cost of running your applications, grow your application’s compute capacity and throughput for the same budget, and enable new types of cloud computing applications.
  4. Monitor and Track Usage – You can use Amazon CloudWatch to collect and track metrics, monitor log files, set alarms, and automatically react to changes in your AWS resources. You can also use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health.
  5. Cost Explorer – AWS Cost Explorer gives you the ability to analyze your costs and usage. Using a set of default reports, you can quickly get started with identifying your underlying cost drivers and usage trends. From there, you can slice and dice your data along numerous dimensions to dive deeper into your costs.
  6. Turn off Idle Instances – “Park” your cloud resources by assigning them schedules of operating hours during which they will run or be temporarily stopped – i.e. parked. Most non-production resources (dev, test, staging, and QA) can be parked at nights and on weekends, when they are not being used. On the flip side, some batch processing or load-testing applications can only run during non-business hours, so they can be shut down during the day. A minimal sketch of the idea follows this list.
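
As a minimal illustration of item 6, a parking script boils down to something like the following, run on a timer (the tag name and the 8:00am-8:00pm weekday window are assumptions):

  # Sketch: stop running EC2 instances tagged schedule=office-hours
  # whenever the current time falls outside 8am-8pm on weekdays.
  from datetime import datetime
  import boto3

  now = datetime.now()
  off_hours = now.weekday() >= 5 or not (8 <= now.hour < 20)

  if off_hours:
      ec2 = boto3.client("ec2")
      pages = ec2.get_paginator("describe_instances").paginate(
          Filters=[{"Name": "tag:schedule", "Values": ["office-hours"]},
                   {"Name": "instance-state-name", "Values": ["running"]}])
      ids = [i["InstanceId"] for page in pages
             for r in page["Reservations"] for i in r["Instances"]]
      if ids:
          ec2.stop_instances(InstanceIds=ids)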

Many of these AWS solutions offer recommendations, but do require manual effort to gain the benefits. This is why third-party solutions have seen widespread adoption; they include cloud management, cloud governance and visibility, and cloud optimization tools. In part two of this blog we will have a look at some of those tools, the benefits and approach of each, and the level of automation to be gained.


Cloud Cost Management Tool Comparison

Not only has it become apparent that public cloud is here to stay, it’s also growing faster as time goes on (by 2020, it is estimated that more than 40% of enterprise workloads will be in the cloud). IT infrastructure has changed permanently, and enterprise organizations are coming to terms with some of the side effects of this shift.  One of those side effects is the need for tools and processes (and even teams in larger organizations) dedicated to cloud cost management and cost control.  Executives from all teams within an organization want to see costs, projections, usage, savings, and quantifiable efforts to save the company money while maximizing IT throughput as enterprises shift resources to the cloud.

There’s a variety of tools to solve some of these problems, so let’s take a look at a few of the major ones.  All of the tools mentioned below support Amazon AWS, Microsoft Azure, and Google Cloud Platform.

CloudHealth

CloudHealth provides detailed analytics and reporting on your overall cloud spend, with the ability to slice-and-dice that data in a variety of ways.  Recommendations about your instances are made based on a score driven by instance utilization and cloud provider best practices. This data is collected from agents that are installed on the instances, along with cloud-level information.  Analysis and business intelligence tools for cloud spend and infrastructure utilization are featured prominently in the dashboard, with governance provided through policies driven by teams for alerts and thresholds.  Some actions can be scripted, such as deleting elastic IPs/snapshots and managing EC2 instances, but reporting and dashboards are the main focus.

Overall, the platform seems to be a popular choice for large enterprises wanting cost and governance visibility across their cloud infrastructure.  Pricing is based on a percentage of your monthly cloud spend.

CloudCheckr

CloudCheckr provides visibility into governance, security, compliance, and cost problems by running analytics and checks against logic built into the platform. It relies on non-native tools and integrations, such as Spotinst, Ansible, or Chef, to take action on the recommendations.  CloudCheckr’s reports cover a wide range of topics, including inventory, utilization, security, costs, and overall best practices. The UI is simple and is likely equally well regarded by technical and non-technical users.

The platform seems to be a popular choice with small and medium-sized enterprises looking for greater overall visibility and recommendations to help optimize their use of cloud.  Given their SMB focus, customers are often provided this service through MSPs. Pricing is based on your cloud spend, but a free tier is also available.

Cloudyn

Cloudyn (recently acquired by Microsoft) is focused on providing advice and recommendations along with chargeback and showback capabilities for enterprise organizations. Cloud resources and costs can be managed through their hierarchical team structure.  Visibility, alerting, and recommendations are made in real time to assist in right-sizing instances and identifying outlying resources.  Like CloudCheckr, it relies on external tools or people to act upon recommendations, and lacks automation.

Their platform options include supporting MSPs in the management of their end customer’s cloud environments as well as an interesting cloud benchmarking service called Cloudyndex.  Pricing for Cloudyn is also based on your monthly cloud spend.  Much of the focus seems to be on current Microsoft Azure customers and users.

ParkMyCloud

Unlike the other tools mentioned, ParkMyCloud focuses on actions and automated scheduling of resources to provide optimization and immediate ROI.  Reports and dashboards are available to show the cost savings provided by these schedules and recommendations on which instances to park.  The schedules can be manually attached to instances, or automatically assigned based on tags or naming schemes through its Policy Engine.  It pairs well with the other previously mentioned recommendation-based tools in this space to provide total cost control through both actions and reporting.
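To make the tag-based idea concrete, here is a hypothetical sketch of that pattern in boto3. This is not ParkMyCloud’s actual Policy Engine or API – the tag key, tag values, and schedule names are all made up for illustration:

    import boto3

    # Hypothetical tag-to-schedule mapping; names are illustrative only.
    SCHEDULES = {"dev": "weekdays-8am-7pm", "test": "weekdays-8am-7pm"}

    ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption
    reservations = ec2.describe_instances(
        Filters=[{"Name": "tag-key", "Values": ["Env"]}]
    )["Reservations"]

    for reservation in reservations:
        for instance in reservation["Instances"]:
            tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
            schedule = SCHEDULES.get(tags.get("Env", "").lower())
            if schedule:
                print(instance["InstanceId"] + " -> schedule " + schedule)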

ParkMyCloud is widely used by DevOps and IT Ops teams in organizations from small startups to global multinationals, all of whom are keen to automate cost control by leveraging ParkMyCloud’s native API and pre-built integrations with tools like Slack, Atlassian, and Jenkins. Pricing is based on a cost per instance, with a free tier available.

Conclusion

Cloud cost management isn’t just a “should think about” item, it’s a “must have in place” item, regardless of the size of a company’s cloud bill.  Specialized tools can help you view, manage, and project your cloud costs no matter which provider you choose.  The right toolkit can supercharge your IT infrastructure, so consider a combination of some of the tools above to really get the most out of your AWS, Azure, or Google environment.


New: Park AWS RDS Instances with ParkMyCloud

Now You Can Park AWS RDS Instances with ParkMyCloud

We’re happy to share that you can now park AWS RDS instances with ParkMyCloud!

AWS just recently released the ability to start and stop RDS instances. Now with ParkMyCloud, you can automate RDS start/stop on a schedule, so your databases used for development, testing, and other non-production purposes are only running when you actually need them – and you only pay for the hours you use. This is the first parking feature on the market that’s fully integrated with AWS’s new RDS start/stop capability.

You can also use ParkMyCloud’s policy engine to create rules that automatically assign your RDS instances to parking schedules and to teams, so they’re only accessible to the users who need them.

Why it Matters

Our customers who use AWS have long asked for the ability to park RDS instances. In fact, RDS is the biggest area of cloud spend after compute, accounting for about 15-20% of an average user’s bill. The savings users can enjoy from parking RDS will be significant. On average, ParkMyCloud users save $140 per parked instance per month on compute – and as RDS instances cost significantly more per hour, the savings will be proportionally higher.

“We’ve used ParkMyCloud for over a year to reduce our EC2 spend, enjoying a 13X return on our yearly license fee – it’s literally saved us thousands of dollars on our AWS bill. We look forward to saving even more now that ParkMyCloud has added support for RDS start/stop!” – Anthony Suda, Release Manager/Senior Network Manager, Sundog.

How to Get Started

It’s easy to get started and park AWS RDS instances with ParkMyCloud.

If you don’t yet use ParkMyCloud, you can try it now for free. We offer a 14-day free trial of all ParkMyCloud features, after which you can choose to subscribe to a premium plan or continue parking your instances using ParkMyCloud’s free tier.

If you already use ParkMyCloud, you’ll need to review your AWS permissions and ParkMyCloud policies, and then turn on the RDS feature via your settings page. Please see our support page for more information.

As always, we welcome your feedback about this new addition to ParkMyCloud, and anything else you’d like to see in the future.

Happy parking!


Start and Stop RDS Instances on AWS – and Schedule with ParkMyCloud


Amazon Web Services shared today that users can now start and stop RDS instances – check out the full announcement on their blog.

This is good news for cost-conscious engineering teams. Until now, databases were generally left running 24×7, even if they were only used during working hours for testing and staging purposes. Now, they can be turned off, so you’re not charged for time you’re not using. Nice!

Keep in mind that stopping the RDS instances will not bring the cost to zero – you will still be charged for provisioned storage, manual snapshots and automated backup storage.
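For reference, stopping and starting an RDS instance programmatically is a one-liner each with boto3. A minimal sketch – the instance identifier and region here are hypothetical:

    import boto3

    rds = boto3.client("rds", region_name="us-east-1")  # region is an assumption

    # Stop a non-production database at the end of the day...
    rds.stop_db_instance(DBInstanceIdentifier="dev-db")  # hypothetical identifier

    # ...and start it again in the morning:
    rds.start_db_instance(DBInstanceIdentifier="dev-db")

Note that AWS will automatically restart a stopped RDS instance after seven days, which is one more reason a recurring schedule beats a one-off script.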

Now, what if you want to start and stop RDS instances on an automated schedule to ensure they’re not left running when they’re not needed? Coming soon, you’ll be able to with ParkMyCloud!

Start and Stop RDS Instances on a Schedule with ParkMyCloud

Since ParkMyCloud was first released, customers have been asking us for the ability to park their RDS instances in the same way that they can park EC2 instances and auto scaling groups.

The logic to start/stop RDS instances using schedules is already in the production code for ParkMyCloud. We have been patiently waiting for AWS to officially announce this capability, so that we could turn the feature ON and release it to the public. That day is finally here!

Our development team has some final end-to-end testing to complete, just to make sure everything works as expected. Expect RDS parking to be released within a couple of weeks! Let us know if you’d like to be notified when this is released, or if you’re interested in beta testing the new functionality.

 

We’re excited about this opportunity to give ParkMyCloud users what they’re asking for. What else would you like to see for optimal cost control? Comment below to let us know.


Cutting through the AWS and Azure Cloud Pricing Confusion (Caveat Emptor)

Before I try to break down the AWS and Azure cloud pricing jargon, let me give you some context. I am a crusty, old CTO who has been working in advanced technology since the 1980s. (That’s more than 18 Moore’s Law cycles for processor and chipset fans, and I have lost count of how many technology hype cycles that has been.)

I have grown accustomed to the “deal of a lifetime” on the “technology of the decade” coming around about once every week. So you can believe me when I tell you I have a very low BS threshold for dishonest sales folks and bogus technology claims. Yes, I am jaded.

My latest venture is a platform, ParkMyCloud, that brings together multiple public cloud providers. And I can tell you first hand that it is not for the faint-of-heart. It’s like being dropped off in the middle of the jungle in Papua New Guinea. Each cloud provider has its own culture, its own philosophy, its own language and customers, its own maturity level and, worst of all — its own pricing strategy — which makes it tough for buyers to manage costs. I am convinced that the lowest circles of hell are reserved for people who develop cloud service pricing models. AWS and Azure cloud pricing gurus, beware. And reader, to you: caveat emptor.

AWS and Azure Terminology Differences

Case in point: You have probably read the comparisons of various services across the top cloud providers, as people try to wrap their minds around all the varying jargon used to describe pretty much the same thing. For example, let’s just look at one service: Cloud Computing.

In AWS, servers are called Elastic Compute Cloud (EC2) “Instances”. In Azure, they are called “Virtual Machines” or “VMs”. Flocks of these, spun up from a snapshot according to scaling rules, are called “auto scaling groups” in AWS. The same things are called “scale sets” in Azure.

Of course, cloud providers had to start somewhere; they learned from their mistakes and improved. When AWS started with EC2, they had not yet released virtual private clouds (VPCs), so their instances ran outside of VPCs. Now all the latest stuff runs inside of VPCs. The older ones are called “classic” and have a number of limitations.
The same thing is true of Azure. When they first released, their VMs were not set up to use what is now their Resource Manager or be managed in Resource Groups (the moral equivalent of CloudFormation Stacks in AWS). Now, all of their latest VMs are compatible with Resource Manager. The older ones are called, you guessed it … “classic”.

(What genius came up with the idea to call the older versions of these, the ones you’re probably stranded with and no longer want, “classic”?)

Both AWS and Azure have a dizzying array of instances/VMs to choose from, and doing an apples-to-apples comparison between them can be quite daunting. They have different categories: General purpose, compute optimized, storage optimized, disk optimized, etc.

Then within each one of those, there are types or sizes. For example, in AWS the tiny, cheap ones are currently the “t2” family. In Azure, they are the “A” series. On top of that, there are different generations of processors. In AWS, they use an integer after the family type, like t2, m3, m4, and there are sizes: t2.small, m3.medium, m4.large, r16.ginormus (OK, I made that one up).

In Azure, they use a number after the family letter to connote size, like A0, A1, A2, D1, etc. and “v1”, “v2” after that to tell what generation it is, like D1v1, D2v2.

The bottom line: this is very confusing for folks moving their workloads to public cloud from on-premise data centers (yet another Wonderland of jargon and confusion in its own right). How does one decide which cloud provider to use? How does one even begin to compare prices with all of this mess? Cheer up … it gets worse!

AWS and Azure Cloud Pricing – Examining Differences in Charging

To add to that confusion, they charge you differently for the compute time you use. What do I mean?  AWS prices their compute time by the hour. And by hour, they mean any fraction of an hour: If you start an instance and run it for 61 minutes then shut it down, you get charged for 2 hours of compute time.

Microsoft Azure cloud pricing is listed by the hour for each VM, but they charge you by the minute. So, if you run for 61 minutes, you get charged for 61 minutes. On the surface, this sounds very appealing (and makes me want to wag my finger at AWS and say, “shame on you, AWS”).

However, you really have to pay attention to the use case and the comparable instance prices. Let me give you a concrete example. I mentioned my latest venture, ParkMyCloud, earlier. We park (schedule on/off times for) cloud computing resources in non-production environments (without scripting, by the way). So, here is a graph of 6 months’ worth of data from an m4.large instance somewhere in Asia Pac. The m4 family is based on the Xeon Broadwell or Haswell processor, and it is one of the most commonly used instance types.

This instance is on a ParkMyCloud parking schedule, where it is RUNNING from 8:00 a.m. to 7:00 p.m. on weekdays and PARKED evenings and weekends. This instance, assuming Linux pricing, costs $0.125 per hour in AWS. From November 6, 2016 until May 9, 2017, this instance ran for 111,690 minutes. This is actually about 1,862 hours, but AWS charged for 1,922 hours and it cost $240.25 in compute time.

[Graph: instance uptime in minutes per day]

Why the difference? ParkMyCloud has a very fast and accurate orchestration engine, but when you start and stop instances, the cloud provider and network response can vary from hour to hour and day to day, depending on their load, so occasionally things will run that extra minute. And, even though this instance is on a parking schedule, when you look at the graph, you can see that the user took manual control a few times. Stuff happens!

What would the cost have been if AWS charged the same way as Azure?  It would have only cost $232.69. Well, that’s not too bad over the course of six months, unless you have 1,000 of these. Then it becomes material.
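For the curious, the arithmetic behind those two figures fits in a few lines of Python (the billed-hours count comes from the usage data above; the rate is the on-demand price quoted earlier):

    # Hourly vs. per-minute billing for the m4.large example above.
    minutes_run = 111690       # actual runtime over six months
    rate_per_hour = 0.125      # m4.large Linux on-demand rate

    billed_hours = 1922        # partial hours rounded up, as AWS bills them
    print(billed_hours * rate_per_hour)       # hourly billing:     240.25
    print(minutes_run / 60 * rate_per_hour)   # per-minute billing: ~232.69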

However, I wouldn’t rush to judgment on AWS. If you look at the comparable Azure VM, a standard-pricing DS2 v2 also running Linux, it costs $0.152/hour. So, this same instance running in Azure would have cost $290.39. Yikes!

Therefore, in my particular use case, unless Azure cloud pricing drops to make their CPU pricing more competitive, their per-minute pricing really doesn’t save money.

Conclusion

The ironic thing about all of this is that once you get past all the confusing jargon and the ridiculous approaches to pricing and charging for usage, the actual cloud services themselves are much easier to use than legacy on-premise services. The public cloud services do provide much better flexibility and faster time-to-value. The cloud providers simply need to get out of their own way. Pricing is but one example where AWS and Azure need to make things a lot simpler, so that newcomers can make informed decisions.

From a pricing standpoint, AWS on-demand pricing is still more competitive than Azure cloud pricing for comparable compute engines, despite Azure’s more enlightened approach to charging for CPU/hr time. That said, AWS really needs to get in line with both Azure and Google, who charge by the minute. Nobody likes being charged extra for something they don’t use.

In the meantime, ParkMyCloud will continue to help you turn off non-production cloud resources when you don’t need them, and help save you a lot of money on your monthly cloud bills. If we make anything sound more complex than it needs to be, call us out. No hiding behind jargon here.


Azure vs. AWS 2017: Is Azure really surpassing AWS?

Azure vs. AWS 2017: what’s the deal? There’s been a lot of speculation lately that Microsoft Azure may be outpacing Amazon Web Services (AWS). We think that’s interesting, so it’s worth taking a look at these claims. After all, AWS has dominated the public cloud market for so long that maybe the media is just bored of that story and ready for an underdog to jump ahead. So let’s take a look.

Is Azure catching up to AWS?

You may have seen some of the recent reports on both Microsoft and Amazon’s recent quarterly earnings. There have certainly been some provocative headlines.

With Amazon and Microsoft reporting their quarterly earnings at the same time, this is a good time to analyze the numbers and see where they stand in relation to one another. Upon closer inspection, here’s what the recent quarterly earnings reports showed:

  • AWS revenue grew 43% in the quarter, with quarterly revenue of $3.66 billion, annualized to $14.6 billion. Sales and earnings exceeded analysts’ estimates. In the immediate wake of Amazon’s report, the stock went up.
  • Microsoft reported that its Intelligent Cloud division grew 11% to $6.8 billion, and that the Commercial Cloud division has an annualized run rate of $15.2 billion. These reported earnings only met analyst expectations, and therefore the stock fell by nearly 2 percent within hours.
  • We think it’s important to note that when it comes to Microsoft’s reported earnings, the Commercial Cloud business includes Office 365, not just Azure. We have never fully understood why the Office 365 business has been bundled in with Commercial Cloud, given that it’s a very different business than the IaaS services of Amazon and Google to which it is often compared.
  • Microsoft stated that Azure’s growth rate was 93%, without providing an actual revenue number. Once again, we find this lack of earnings clarity somewhat problematic.

So is Azure bigger than AWS?

Well, currently, no. There is little evidence of Azure surpassing AWS, aside from one small research study that pales in comparison to the clear majority of data stating otherwise.

But is Azure growing quickly?

Yes. In this regard, it’s important to consider what factors are at play in Azure’s growth, and whether they hold any weight as far as Azure outpacing AWS in the future.

Where is Azure actually gaining ground?

Now let’s take a look at what is driving Azure’s growth, and where Azure is gaining ground.

First of all, as companies grow beyond dipping their toes in the water of public cloud, they become more interested in secondary options for diversity and different business cases. Just from our own conversations, we’re finding that more and more AWS users are using Azure as a secondary option. While users might be interested to see what Azure can offer them in comparison, this doesn’t necessarily indicate that it will ultimately surpass AWS.

Take, for example, the results of a research survey released by data analytics provider Sumo Logic and conducted by UBM Research. According to the survey of 230 IT professionals at companies with 500+ employees, Azure actually beat AWS as the preferred primary cloud provider, taking the lead by an 11-point margin: 66 percent of participants preferred Azure, as opposed to the 55 percent who relied on AWS.

This research is significant because it’s the first time that survey data on customer preferences has reported Azure taking a lead over AWS. However, the data also revealed that a significant number of enterprises are using more than one cloud provider. While Azure and AWS both take the lead, there is certainly an overlap in participants who use both, in addition to other up-and-coming providers.

Second, enterprises have been committed to a variety of Microsoft products for years. According to the UBM Research survey data, over 50 percent of participants who preferred Azure as their primary cloud provider were coming from large enterprises with 10,000+ employees. This makes sense considering that Microsoft has a foothold in terms of relationships and enterprise agreements with these larger organizations, and is able to cross-sell Azure.

Third, Azure has a strong base in Europe, where more users report using Azure rather than AWS as their primary provider. In a 451 Research survey of 700 participants considered to be “IT decision makers,” AWS topped the list as the preferred provider among 39 percent of all participants. While Azure saw an increase in users, it still landed in second place overall at 35 percent. Among the European participants only, however, Azure took the top spot, with 43.7 percent naming Azure as their provider and 32 percent sticking with AWS.

Why does the Azure vs. AWS debate matter?

Why does the Azure vs. AWS 2017 debate matter when choosing a new or secondary cloud provider? Well… in terms of market performance, it probably doesn’t. As always, the specific needs of your business are going to be what’s important.

One thing is for certain: the public cloud is growing and it’s here to stay. Let’s not forget that Google and IBM both have growing public cloud offerings too (and Google is looking to expand its enterprise market this year). All of this competition drives innovation – and therefore better IaaS and PaaS offerings, and perhaps better pricing.

For the customer, the basic questions remain the same when evaluating public cloud providers:

  • How understandable are the public cloud offerings to new customers?
  • How much do the products cost?
  • Are there adequate customer support and growth options?
  • Are there useful surrounding management tools?
  • Will our DevOps processes translate to these offerings?
  • Can the PaaS offerings speed time-to-value and simplify things sufficiently to drive stickiness?
  • What security measures does the cloud provider have in place?

Based upon the evidence, we think it’s pretty clear that AWS is still the leader among public cloud providers.

We’ll continue to track the AWS vs. Azure comparison, and as the companies’ offerings and pricing options grow and change – we’ll be interested to see how this evaluation changes in 2018.


Is your AWS bill too high this month?

Amazon Web Services (AWS) monthly bills start arriving in inboxes the world round about this time every month. When they do, there are two questions we like to ask AWS users.

One, did you look at your AWS bill?

For some readers, the idea that you might not is ridiculous. You may be surprised how many companies we’ve talked to where even key decision makers are unsure how much they are spending on cloud services. (Mature cloud users are more likely to worry about spend, as found by RightScale’s 2017 State of the Cloud Report, but that doesn’t mean that even those users have their eye on the bill each month.)

Okay, so let’s assume that you have looked at your AWS bill. Time for the second question. Was your AWS bill more than you expected this month?

For more and more cloud users, the answer is yes. Only 46% of enterprises monitor and rightsize cloud resources – which means 54% do nothing. Between resources left running when they’re not needed, incorrectly sized resources, and orphaned volumes, it’s easy for bills to climb out of control.

We’ve written extensively about how to reduce cloud waste, whether you should build cost-reduction tools yourself, and how to control AWS spend. If that’s overwhelming, there’s one simple thing you can do to get started, and combat that sticker shock in time for your next AWS bill.

Your first step toward getting AWS bills in control is to schedule on/off times for your non-production resources, so you’re not wasting a single dollar on compute time you don’t need.

It’s easy – get started with a free trial of ParkMyCloud.


Amazon uses robots for mundane tasks. Do the same and automate tasks in AWS

I’m a fan of automation – as a CEO, I think you should do everything possible to simplify your day-to-day, whether that means you overhaul your calendar system or automate tasks in AWS.

Amazon itself is great at this. I’m sure you are well aware of Amazon’s quest for automation in its warehouses (robots) and distribution (drones) to reduce costs and deliver packages faster. (That’s a great goal, by the way – I am an Amazon Prime customer – love it.) If you’re interested in Amazon’s robots, check out this article from Business Insider. And as the use of drones to deliver product becomes reality, Amazon has created a service – check out Amazon Prime Air if you haven’t already.

So let’s look at another branch of Amazon – Amazon Web Services (AWS). AWS is a large provider of cloud services, which are in effect a utility: compute, database, and storage delivered on demand to small and large companies alike, just like the way utility companies provide electricity, water, and heat to homes and businesses. Over time, features and services have been built and sold to optimize these traditional utilities in order to simplify mundane tasks via automation and achieve ROI by saving more money than they cost. Here are a few examples:

  • Nest to detect, learn, and automate programmable thermostats to save on heating and cooling
  • In-office motion sensors to detect movement to turn lights off/on
  • Motion sensors on faucets and hand dryers to eliminate water and electric waste
  • Gadgets on showers, toilets, etc. to reduce water consumption and waste

The point is, all of these utilities need to be optimized with 3rd party technologies to automate away waste and optimize spend. At home, you’re the CFO (or maybe your spouse is 🙂), but you don’t want to spend more than you need to, and you will buy technology to automate mundane tasks and save yourself money if there is a tangible ROI.

Amazon is doing this with robots and drones – neither of which they built. So why not use 3rd party technology to automate for AWS, Azure, and Google Compute? Remember that the public cloud is a utility, and utilities have waste. In public cloud, what can we look at that’s mundane, that can be automated, and where you can save dollars on cloud waste?

I have a simple one – automate tasks in AWS by turning servers off and on. Did you know that on average, 66% of what you spend on the public cloud is on compute (servers), and 45% of that is on non-production systems like development, test, and QA – servers that don’t need to run 24×7? That’s $6B in waste per year.

Even better than Nest, which you can install and set up in 30 minutes, ParkMyCloud can be set up and configured in 15 minutes or less. The next day, we will tell you how much you saved in the previous 24 hours by simply automating the mundane task of turning idle servers on/off with schedules.

There’s a reason Amazon is so successful – they automate mundane tasks in a simple, efficient way. Follow their lead – automate today!


Don’t Write a Script to Save Money on AWS

Today we have a piece of advice: don’t write a script to save money on AWS. Here at ParkMyCloud, we spend a lot of time chatting with DevOps and infrastructure teams, listening to how they manage their cloud operations.

You can Take the DIY Approach (scripting)

Although there is a lot of ‘tooling’ out there to make their lives easier, many infrastructure guys still choose to drop to the command line to take control of their environments. They might do this by using automation tools like Chef or Puppet, continuous delivery tools like Jenkins, or by simply reverting to Bash or PowerShell. They use these tools because of the granular control they provide, and because they can seemingly quickly ‘knock out a script’ to either solve a problem or provide a quick automation of a common task.
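For context, here is roughly what such a ‘knocked out’ parking script looks like – a minimal boto3 sketch that stops every running instance with a given tag, run nightly from cron or Jenkins (the tag key/value and region are assumptions):

    import boto3

    # DIY parking: stop all running instances tagged Env=dev.
    ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:Env", "Values": ["dev"]},  # hypothetical tag
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]

    instance_ids = [i["InstanceId"] for r in reservations for i in r["Instances"]]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)

Simple enough to write – but every schedule change, exception, and new team then flows through whoever owns that cron job, which is exactly the burden described below.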

We’ve talked with a number of larger AWS customers who are optimizing thousands or even tens of thousands of instances using scripting technologies. Given the needs of their businesses, their internal customers are typically geographically distributed with hundreds of teams and team members utilizing their cloud infrastructure.

But do you really want to?

Although we love scripts as much as the next guy, when it comes to cloud we see a few problems with this approach. Firstly, cloud is the great democratizer of infrastructure. No longer do the business folk in finance, marketing, sales, etc. need to call IT to bring on new resources. They know they can spin it up themselves – even if they don’t know they can turn it off just as easily. But if you want cloud users to take on responsibility and ensure governance, they need tools, not scripts. Secondly, supporting hundreds of teams and user-managed infrastructures without embedding DevOps resources in every team quickly becomes burdensome and inefficient. The way we see it, just because you can do it, doesn’t mean you should do it.

There is a reason that simple, single-purpose web apps are sweeping across the enterprise: users like simple UIs with little to no learning curve. Companies have realized that if you want to empower internal stakeholders with tools that optimize their workflow and resource use, it’s much easier to sustain if the end users can ‘do it themselves’ – be it the crack dev team that begins to self-manage its non-production environment, or the team of data scientists who make the CFO wince when they run their clusters.

Leave it to the folks down under to turn things on their heads

As we listen and learn how our customers optimize their cloud environments, we are always excited when we see an entirely new way of doing something. Recently, we were chatting with Foster Moore, one of our Antipodean customers in New Zealand. They too were active users of automation scripts for cloud optimization.

Once they found ParkMyCloud, however, they realized they could free up their DevOps team to work on higher-value activities. With their new tool in hand, they decided to turn the way their teams thought about cloud computing upside down. They created a simple ‘always-off’ schedule, which takes newly launched instances and turns them off by default unless a user needs them turned on. By holding instances in a stopped state until exactly when they are needed, they avoid all unnecessary running costs.

The lesson is, you don’t need to write a script to save money on AWS. While our customer’s overall approach would have been technically possible using scripts, enabling all of their teams to easily lift this ‘always-off’ schedule would have required custom application development. Instead, they used ParkMyCloud’s ‘snooze’ functionality, which allows parking schedules to be temporarily overridden for user-defined periods of time – an 8-hour workday, or however long the resources need to be available. By reversing the ‘always-on’ nature of cloud compute to ‘always-off’, they empowered their teams to cut costs by 40% – without a single script in sight.
