Cloud Services Archives - ParkMyCloud

How to Optimize Costs When Using Blue-Green Deployments

Blue-green deployments are a great way to minimize downtime and risk — however, users should remember to keep cost in mind as well when optimizing deployments.

Why You Should Use Blue-Green Deployments

One approach to continuous deployment of applications that has really taken off in popularity recently is the use of blue-green deployments.

The main idea behind this system is to maintain two full production environments running the last two versions of your code, with only the latest version actively in use. For instance, if the current version of your software is running in your “blue” environment, your next deployment would take place in the “green” environment. When you’re ready to flip the switch, you start pointing users at green instead of blue.

This deployment method has a few great benefits. First, it minimizes downtime when cutting over to newly-deployed code: instead of upgrading your current system in place and making users wait until the upgrade is complete, you simply redirect traffic to the already-running new environment. Second, along the same lines, you get a fresh deployment each time instead of upgrading an existing system repeatedly. Third, you have a system that was already working for you that you can roll back to if necessary.

How to Optimize Costs With Two Production Deployments

Of course, running two production environments means that you are paying twice the cost for your infrastructure. ParkMyCloud users have asked how they can optimize costs while using the blue-green deployment strategy.  We use AWS internally for our blue-green deployments, so we’ll discuss some options in terms of AWS terminology, but you can use other clouds like Azure and Google as well.

One approach is to use AWS Auto-Scaling Groups as your deployment mechanism. With ASGs, you decide how many instances you want as a minimum, a maximum, and a desired amount for your environment. When setting up ASGs in ParkMyCloud, you can have two different settings for min/max/desired for when the ASG is “on” and “off”.  This way, you can have an ASG for blue and one for green, then use ParkMyCloud to set the min/max/desired as needed, so each of these environments is only running when necessary, and not wasting money.
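As a sketch of what that parking toggle could look like if you scripted it yourself against AWS, the snippet below uses boto3 to flip an Auto Scaling Group between “off” and “on” sizes. The group name and the specific min/max/desired values are assumptions for illustration, not ParkMyCloud’s internals.

```python
def asg_limits(parked: bool) -> dict:
    """Min/max/desired sizes for the parked vs. running state (values illustrative)."""
    if parked:
        return {"MinSize": 0, "MaxSize": 0, "DesiredCapacity": 0}
    return {"MinSize": 2, "MaxSize": 6, "DesiredCapacity": 2}

def park_environment(asg_name: str, parked: bool) -> None:
    """Apply the chosen limits to an Auto Scaling Group (requires AWS credentials)."""
    import boto3  # deferred so asg_limits() is usable without AWS configured

    boto3.client("autoscaling").update_auto_scaling_group(
        AutoScalingGroupName=asg_name,
        **asg_limits(parked),
    )
```

After cutting over to green, something like `park_environment("blue-web-asg", parked=True)` would scale the idle blue group to zero so it stops incurring compute cost.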

Another option is to use Logical Groups in ParkMyCloud. This allows you to group together instances into one entity, so you could have a database and a web server start and stop together.  If you go this route, you can put all of your blue instances together in a group, then start the whole group up when you are ready to switch over. When going between blue and green, you can just update the logical group to have the newest instances as you deploy. Again, this allows you to park the inactive environment, saving its cost.

If your continuous deployment is fully automated, a third option is to utilize the ParkMyCloud API to change schedules and toggle servers as deployments are completed. Typically, you’ll want your current active deployment on an “always on” schedule, so ParkMyCloud will turn things on even if someone tries to turn them off, and the standby deployment on an “always off” schedule so you are saving money.
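A minimal sketch of what such pipeline automation could look like, assuming a REST-style interface: the base URL, endpoint path, and payload fields below are illustrative guesses, not the documented ParkMyCloud API.

```python
import json
import urllib.request

API_BASE = "https://console.parkmycloud.com/api"  # assumed base URL

def schedule_payload(resource_id: str, schedule: str) -> bytes:
    """JSON body assigning a named schedule to a resource (field names assumed)."""
    return json.dumps({"resource": resource_id, "schedule": schedule}).encode()

def set_schedule(resource_id: str, schedule: str, token: str) -> None:
    """PUT the schedule assignment to the (assumed) schedules endpoint."""
    req = urllib.request.Request(
        f"{API_BASE}/schedules",  # assumed path
        data=schedule_payload(resource_id, schedule),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
        method="PUT",
    )
    urllib.request.urlopen(req)
```

A deployment script could then call `set_schedule("green-env", "always-on", token)` and `set_schedule("blue-env", "always-off", token)` as the final step of a cutover.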

This idea of using ParkMyCloud with blue-green deployments is one way to start implementing Continuous Cost Control in your pipeline. This way, you can save money while delivering software quickly and automatically. Try it out with ParkMyCloud today and get the most out of your cloud!

Read more ›

New 451 Research Report on ParkMyCloud’s Multi-Cloud Scheduling Software

Analyst firm 451 Research has released a new report on ParkMyCloud, highlighting that “ParkMyCloud continues to build out its multi-cloud scheduling software, maintaining the clean interface but adding functionality with a reporting dashboard, single sign-on and notifications, including a Slackbot for automated parking.”

It’s true! We’ve been steadily adding features to ParkMyCloud as our customers ask for them. Recent examples include:

  • Mobile app – easy access to your ParkMyCloud account for cost management on the go
  • RDS parking – park AWS RDS instances, just like EC2
  • Slack integration – get notifications and manage your continuous cost control via Slack

Here’s the full “451 take” on ParkMyCloud:

“ParkMyCloud is one of a handful of products that automate cloud resource scheduling via a lightweight SaaS application. With support for Azure and Google Cloud Platform as well as AWS, it offers a bird’s-eye view of provisioned public cloud resources and a slick interface for ‘parking’ idle capacity, either according to a schedule or ad hoc. With a clear ROI story and plans to improve the user experience with a mobile app and a more robust policy engine, the company benefits from a focus on doing one thing and doing it well.”

That “clear ROI story” that 451 Research noted is clear to our customers, too. In fact, most customers see a payback period of less than two months, as the savings quickly cover the cost of premium features.

They also noted that the number of instances managed in the platform has tripled, just from Q2 to Q3 this year. More and more AWS, Azure, and GCP users are relying on ParkMyCloud for continuous cost control.

So if you are evaluating cloud cost control tools such as ParkMyCloud, we encourage you to check out the full 451 Research analysis. Download and read the report here: ParkMyCloud automates scheduling of AWS, Azure, and GCP resources.

Ready to join the ParkMyCloud following and start controlling your cloud spend? Start a free trial of ParkMyCloud today.

Read more ›

Cloud Per-Second Billing – How Much Does It Really Save?

It has been a little over a month since Amazon and Google switched some of their cloud services to per-second billing and so the first invoices with the revised billing are hitting your inboxes right about now. If you are not seeing the cost savings you hoped for, it may be a good time to look again at what services were slated for the pricing change, and how you are using them.

Google Cloud Platform

Starting with the easiest one, Google Cloud Platform (GCP), you may not be seeing a significant change, as most of their services were already billing at the per-minute level, and some were already at the per-second level. The services moved to per-second billing (with a one-minute minimum) included Compute Engine, Container Engine, Cloud Dataproc, and App Engine VMs.  Moving from per-minute billing to per-second billing is not likely to change a GCP service bill by more than a fraction of a percent.

Let’s consider the example of an organization that has ten GCP n1-standard-8 Compute Engine machines in Oregon at a base cost of $0.3800 per hour as of the date of this blog. Under per-minute billing, the worst-case scenario would be to shut a system down one second into the next minute, for a cost difference of about $0.0063. Even if each of the ten systems were assigned to the QA or development organization, and they were shut down at the end of every work day, say 22 days out of the month, your worst-case scenario would be an extra charge of 22 days x 10 systems x $0.0063 = $1.3860. Under per-second billing, the worst case is to shut down at the beginning of a second, with the highest possible cost for these same machines (sparing you the math) being about $0.02. So, the best this example organization can hope to save over a month with per-second billing is about $1.39.

Amazon Web Services

On the Amazon Web Services (AWS) side of the fence, the change is both bigger and smaller.  It is bigger in that they took the leap from per-hour to per-second billing for On-Demand, Reserved, and Spot EC2 instances and provisioned EBS, but smaller in that it is only for Linux-based instances; Windows instances are still at per-hour.

Still, if you are running a lot of Linux instances, this change can be significant enough to notice. Looking at the same example as before, let’s run the same calculation with the roughly equivalent t2.2xlarge instance type, charged at $0.3712 per hour. Under per-hour billing, the worst-case scenario is to shut a system down a second into the next hour. In this example, the cost would be an extra charge of 22 days x 10 systems x $0.3712 = $81.664. Under per-second billing, the worst case is the same $0.02 as with GCP (with fractions of cents difference lost in the noise). So, under AWS, one can hope to see significantly different numbers in the bill.
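The worst-case arithmetic from both examples above can be reproduced in a few lines:

```python
def worst_case_monthly(rate_per_hour: float, boundary_seconds: int,
                       instances: int = 10, stops_per_month: int = 22) -> float:
    """Extra monthly charge if every stop lands just past a billing boundary."""
    per_stop = rate_per_hour * boundary_seconds / 3600  # cost of one wasted boundary
    return per_stop * instances * stops_per_month

gcp_per_minute = worst_case_monthly(0.3800, 60)    # GCP n1-standard-8: ~$1.39/month
aws_per_hour = worst_case_monthly(0.3712, 3600)    # AWS t2.2xlarge: ~$81.66/month
```

The nearly 60x difference between the two totals comes entirely from the size of the billing boundary being rounded up to (one minute vs. one hour).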

The scenario above is equally relevant to other situations where instances get turned on and off on a frequent basis, driving those fractions of an hour or a minute of “lost” time. Another common example would be auto-scaling groups that dynamically resize based on load, and see enough change over time to bring instances in and out of the group. (Auto-scale groups are frequently used as a high-availability mechanism, so their elastic growth capabilities are not always used, and so savings will not always be seen.) Finally, Spot instances are built on the premise of bringing them up and down frequently, and they will also enjoy the shift to per-second billing.

However, as you look at your cloud service bill, do keep in mind some of the nuances that still apply:

  • Windows: GCP applies per-second billing to Windows; AWS is still on one-hour billing for Windows.
  • Marketplace Linux: Some Linux instances in the AWS Marketplace that have a separate hourly charge are also still on hourly billing (perhaps due to contracts or licensing arrangements with the vendors?), so you may want to reconsider which flavor of Linux you want to use.
  • Reserved instances: AWS does strive to “use up” all of the pre-purchased time for reserved instances, spreading it across multiple machines with fractions of usage time, and per-second billing can really stretch the value of these instances.
  • Minimum of one-minute charge: Both GCP and AWS will charge for at least a minute from instance start before per-second billing comes into play.

Overall, per-second billing is a great improvement for consumers of cloud resources…and will probably drive us all more than ever to make each second count.

Read more ›

ParkMyCloud Launches App to Meet Demand for Mobile Cloud Cost Optimization

Cloud Cost Optimization Just Got Easier With the New ParkMyCloud Mobile App

November 8, 2017 (Dulles, VA) – ParkMyCloud, the leading enterprise platform for continuous cost control in public cloud, announced today the release of a new iOS app that allows users to park idle instances directly from their mobile devices. The app makes it easy for ParkMyCloud customers to reduce cloud waste and cut monthly cloud spend by 65% or more, now with even more capability and ease of use.

Before release of the app, current users were invited to participate in a beta test and offer feedback. Keith Nichols, CTO of FurstPerson, said, “Overall love it. I was out to dinner last Friday and got an emergency call to restart an instance that was parked – and I had my phone with me and was able to use the app without needing to drive home to login to my laptop.”

ParkMyCloud CTO Bill Supernor adds that “In addition to reducing cloud costs, ParkMyCloud stands for simplicity and ease of use. Our customers are thrilled to have control over cloud resources with a mobile app, making reducing cloud spend that much easier, even when they are on the go.”

ParkMyCloud is a recognized leader in cloud cost optimization. The new mobile app is another example of how the platform provider is making the experience of managing cloud costs easier and more accessible for enterprise customers. An Android version of the app is currently in development. ParkMyCloud also plans to release utilization-based parking later this year, to further automate instance off times and maximize savings.

About ParkMyCloud

ParkMyCloud is a SaaS platform that helps enterprises optimize their public cloud spend by automatically reducing resource waste — think “Nest for the cloud”. ParkMyCloud has helped customers such as McDonald’s, Capital One, Unilever, Foster Moore, and Sage Software dramatically cut their cloud bills by up to 65%, delivering millions of dollars in savings for customers using Amazon Web Services, Microsoft Azure, and Google Cloud Platform. For more information, visit http://www.parkmycloud.com.

Contact

Katy Stalcup

kstalcup@parkmycloud.com

(571) 334-3291

Read more ›

Google Cloud Platform vs AWS: Is the answer obvious? Maybe not.

Google Cloud Platform vs AWS: what’s the deal? A few months ago, we asked the same question about Azure vs AWS. While Microsoft continues to see growth, and Amazon maintains a steady lead among cloud providers, Google is stepping in. Now that Google Cloud Platform has solidly secured its spot to round out the “big three” cloud providers, we think it’s time to take a closer look and see how the underdog matches up to the 800-pound gorilla.

Is Google Cloud catching up to AWS?

As they’ve been known to do, Amazon, Google, and Microsoft all released their recent quarterly earnings on the same day. At first glance, the headlines tell it all:

The natural conclusion is that AWS continues to dominate in the cloud war. With all major cloud providers reporting earnings at the same time, we have an ideal opportunity to examine the numbers and determine if there’s more to the story. Here’s what the quarterly earning reports tell us:

  • AWS reported $4.6 billion in revenue for the quarter and is on its way to $18 billion in revenue for the year, a 42% year-over-year increase, taking the top spot among cloud providers
  • Google’s cloud sales are lumped together with revenue from the Google Play app store, for a combined total of $3.4 billion in the last quarter
  • Although Google did not report specific revenue for Google Cloud Platform (GCP), Canalys estimates earnings at $870 million for the quarter – a 76% year-over-year growth
  • It’s also important to note that Google is just getting started. Also included in their report was an increase in new hires, a total of 2,495 in the last quarter, most of them in positions in their cloud sector

The Obvious: Google is not surpassing AWS

When it comes to Google Cloud Platform vs AWS, we presently have a clear winner. Amazon continues to have the advantage as the biggest and most successful cloud provider on the market. While AWS is now growing at a slower rate than both Google Cloud and Azure, Amazon’s growth is still more impressive given that it has the largest market share of the three. AWS is the clear competitor to beat as the first successful cloud provider, with the widest range of services and strong familiarity among developers.

The Less Obvious: Google is gaining ground

While it’s easy to write off Google Cloud Platform, AWS is not untouchable. Let’s not forget that 76% year-over-year growth is nothing to scoff at. AWS has already solidified itself in the cloud market, but Google Cloud is just beginning to take off.

Where is Google actually gaining ground?

We know that AWS is at the forefront of cloud providers today. At the same time, AWS is now only one among three major cloud providers. Google Cloud Platform has more in store for its cloud business in 2018.

Google’s stock continues to rise. With 2,495 new hires added to the headcount, the vast majority of them in cloud-related jobs, it’s clear that Google is serious about expanding its role in the cloud market. Deals have been made with major retailer Kohl’s and payments processing giant PayPal. Google CEO Sundar Pichai lists the cloud platform as one of the top three priorities for the company, confirming that they will continue expanding their cloud sales headcount.

In discussing Google’s recent quarterly earnings, Pichai added his thoughts on why he believes the Google Cloud Platform is on a set path for strong growth. He credits their success to customer confidence in Google’s impressive technology and a lead in machine learning, naming the company’s open-source software TensorFlow as a prime example. Another key component to growth is strategic partnerships, such as the recent announcement of a deal with Cisco, in addition to teaming up with VMware and Pivotal.

Also driving Google’s growth is the fact that the cloud market itself is growing fast. The move to the cloud has prompted large enterprises such as Home Depot Inc. and Target Corp. to rely on a combination of cloud vendors when building their applications. Home Depot in particular uses both Azure and Google Cloud Platform, and a spokesman for the home improvement retailer explained why that was intentional: “Our philosophy here is to be cloud agnostic, as much as we can.” This goes to show that as long as there is more than one major cloud provider in the mix, enterprises will continue trying, comparing, and adopting more than one at a time, making way for Google Cloud to gain further ground.

Andy Jassy, CEO of AWS, put it best:

“There won’t be just one successful player. There won’t be 30 because scale really matters here in regards to cost structure, as well as the breadth of services, but there are going to be multiple successful players, and who those are I think is still to be written. But I would expect several of the older guard players to have businesses here as they have large installed enterprise customer bases and a large sales force and things of that sort.”

Google Cloud Platform vs. AWS: Why does it matter?

Google Cloud Platform vs AWS is only one battle to consider in the ongoing cloud war. The truth is, market performance is only one factor in choosing the best cloud provider, and as we always say, the specific needs of your business are what will drive your decision.

What we do know: the public cloud is not just growing, it’s booming.

Referring back to our Azure vs AWS comparison, the basic questions still remain the same when it comes to choosing the best cloud provider:

  • Are the public cloud offerings to new customers easily comprehensible?
  • What is the pricing structure and how much do the products cost?
  • Are there adequate customer support and growth options?
  • Are there useful surrounding management tools?
  • Will our DevOps processes translate to these offerings?
  • Can the PaaS offerings speed time-to-value and simplify things sufficiently, to drive stickiness?
  • What security measures does the cloud provider have in place?

Right now AWS is certainly in the lead among major cloud providers, but for how long? We will continue to track and compare cloud providers as earnings are reported, offers are increased, and price options grow and change. To be continued in 2018…

Read more ›

AWS IAM Roles and Ways to Use them to Improve Security

What are AWS IAM Roles?

Within the AWS Identity and Access Management (IAM) system there are a number of different identity mechanisms that can be configured to secure your AWS environment, such as Users, Groups, and IAM Roles. Users are clearly the humans in the picture, and Groups are collections of Users, but Roles can be a bit more obscure. Roles are defined as a set of permissions that grant access to actions and resources in AWS. Unlike Users, which are tied to a specific identity and a specific AWS account, an IAM Role can be assumed by IAM User accounts or by services within AWS, and can give access to Users from another account altogether.

To better understand Roles, I like the metaphor of a hat.  When we say a Role is assumed by a user – it is like saying someone can assume certain rights or privileges because of what hat they are wearing.  In any company (especially startups), we sometimes say someone “wears a lot of hats” – meaning that person temporarily takes on a number of different Roles, depending on what is needed. Mail delivery person, phone operator, IT support, code developer, appliance repairman…all in the space of a couple hours.

IAM Roles are similar to wearing different hats in that they temporarily let an IAM User or a service get permissions to do things they would not normally get to do. These permissions are attached to the Role itself, and are conveyed to anyone or anything that assumes the role. Like Users, Roles have credentials that can be used to authenticate the Role identity.

Here are a couple ways in which you can use IAM Roles to improve your security:

EC2 Instances

All too often, we see software products that rely on credentials (username/password) for services or accounts that are either hard-coded into an application or written into some file on disk. Frequently the developer had no choice, as the system had to be able to automatically restart and reconnect when the machine rebooted, without anyone available to manually type in credentials. If the code is examined or the file system is compromised, the credentials are exposed and can potentially be used to compromise other systems and services. In addition, such credentials make it really difficult to periodically change the password. Even in AWS we sometimes see developers hard-code API Key IDs and Keys into apps in order to get access to some AWS service. This is a security accident waiting to happen, and can be avoided through the use of IAM Roles.

With AWS, we can assign a single IAM Role to an EC2 instance. This assignment is usually made when the instance is launched, but can also be done at runtime if needed. Applications running on the server retrieve the Role’s security credentials by pulling them out of the instance metadata through a simple web command. These credentials have an additional advantage over potentially long-lived, hard-coded credentials, in that they are changed or rotated frequently, so even if somehow compromised, they can only be used for a brief period.
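For illustration, here is roughly how code running on an instance can retrieve those rotating credentials from the EC2 instance metadata service. The metadata endpoint is real, but it is only reachable from within the instance itself.

```python
import json
import urllib.request

METADATA = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

def credentials_url(role_name: str) -> str:
    """Metadata URL that returns the temporary credentials for a given role."""
    return METADATA + role_name

def fetch_role_credentials() -> dict:
    """Read the attached role's temporary keys (works only on an EC2 instance)."""
    with urllib.request.urlopen(METADATA, timeout=2) as resp:
        role = resp.read().decode()          # name of the role attached to this instance
    with urllib.request.urlopen(credentials_url(role), timeout=2) as resp:
        return json.load(resp)               # AccessKeyId, SecretAccessKey, Token, Expiration
```

The returned document includes an `Expiration` timestamp, which is what makes these credentials short-lived compared to a hard-coded key.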

Another key security advantage of Roles is that they can be limited to just the access privileges needed to get a specific job done. Amazon’s documentation for roles gives the example of an application that only needs to be able to read files out of S3. In this case, one can assign a Role that contains read-only permissions for a specific S3 bucket, and the Role’s configuration can say that the Role can only be used by EC2 instances. This is an example of the security principle of “least privilege,” where the minimum privileges necessary are assigned, limiting the risk of damage if the credential is compromised. In the same sense that you would not give all of your users “Administrator” privileges, you should not create a single “Allow Everything” Role that you assign everywhere. Instead, create a different Role specific to the needs of each system or group of systems.
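A hedged sketch of such a least-privilege policy, along the lines of Amazon’s S3 read-only example: the role may only read from a single bucket. The bucket and role names are placeholders.

```python
import json

# Grants read access to one bucket only; "example-bucket" is a placeholder.
READ_ONLY_S3_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-bucket",
            "arn:aws:s3:::example-bucket/*",
        ],
    }],
}

def attach_policy(role_name: str) -> None:
    """Attach the inline policy to a role (requires IAM permissions)."""
    import boto3  # deferred so the policy document is usable without AWS configured

    boto3.client("iam").put_role_policy(
        RoleName=role_name,
        PolicyName="ReadExampleBucket",
        PolicyDocument=json.dumps(READ_ONLY_S3_POLICY),
    )
```

Note that nothing here grants write or delete permissions: if the role’s credentials leak, the blast radius is limited to reading one bucket.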

Delegation

Sometimes one company needs to give another company access to its resources. Before IAM Roles (and before AWS), the common ways to do that were to share account logins (with the same issues identified earlier with hard-coded credentials) or to use complicated PKI/certificate-based systems. If both companies are using AWS, sharing access is much easier with Role-based Delegation. There are several ways to configure IAM Roles for delegation, but for now we will just focus on delegation between accounts from two different organizations.

At ParkMyCloud, our customers use Delegation to let us read the state of their EC2, RDS, and scaling group instances, and then start and stop them per the schedules they configure in our management console.

To configure Role Delegation, a customer first creates an account with the service provider, and is given the provider’s AWS Account ID and an External ID. The External ID is a unique number for each customer generated by the service provider.

The administrator of the customer environment creates an IAM Policy with a constrained set of access (the principle of “least privilege” again), and then assigns that policy to a new Role (like “ParkMyCloudAccess”), specifically assigned to the provider’s Account ID and External ID. When done, the resulting IAM Role is given a specific Amazon Resource Name (ARN), which is a unique string that identifies the role. The customer then enters that Role in the service provider’s management console, which is then able to assume the Role. As in the EC2 example, when the ParkMyCloud service needs to start a customer EC2 instance, it calls the AssumeRole API, which verifies our service is properly authenticated and returns the temporary security credentials needed to manage the customer environment.
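A sketch of the provider’s side of that exchange, using the STS AssumeRole call via boto3; the role ARN, session name, and external ID values are placeholders.

```python
def assume_role_params(role_arn: str, external_id: str) -> dict:
    """Arguments for STS AssumeRole; the session name is arbitrary."""
    return {
        "RoleArn": role_arn,
        "RoleSessionName": "provider-session",
        "ExternalId": external_id,
    }

def customer_credentials(role_arn: str, external_id: str) -> dict:
    """Exchange the customer's role ARN and external ID for temporary keys."""
    import boto3  # deferred so assume_role_params() is usable without AWS configured

    resp = boto3.client("sts").assume_role(**assume_role_params(role_arn, external_id))
    return resp["Credentials"]  # AccessKeyId, SecretAccessKey, SessionToken, Expiration
```

STS rejects the call unless the external ID matches the one baked into the customer’s role trust policy, which is what prevents one customer’s role from being assumed on behalf of another.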

Conclusions

AWS IAM Roles make some tasks a lot simpler by flexibly assigning roles to instances and other accounts. IAM Roles can help make your environment more secure by:

  • Using the principle of least privilege in IAM policies, limiting systems and services to only the access needed to do a specific job.
  • Preventing hard-coding of credentials in code or files, minimizing the danger of exposure and removing the risk of long-unchanged passwords.
  • Minimizing shared accounts and passwords by allowing controlled cross-account access.

Read more ›

AWS Lambda + ParkMyCloud = Supercharged Automation

Among the variety of AWS services and functionality, AWS Lambda seems to be taking off with hackers and tinkerers. The idea of “serverless” architecture is quite a shift in the way we think about applications, tools, and services, but it’s a shift that is opening up some new ideas and approaches to problem solving.  

If you haven’t had a chance to check out Lambda, it’s a “function-as-a-service” platform that allows you to run scripts or code on demand, without having to set up servers with the proper packages and environments installed. Your Lambda function can be triggered by a variety of sources and events, such as HTTP requests, API calls, S3 bucket changes, and more. The function can scale up automatically, so more compute resources will be used if necessary without any human intervention. The code can be written in Node.js, Python, Java, and C#.

Some pretty cool ideas already exist for Lambda functions to automate processes. One example from AWS is to respond to a GitHub event to trigger an action, such as the next step in a build process. There’s also a guide on how to use React and Lambda to make an interactive website that has no server.

For those of you who are already using ParkMyCloud to schedule resources, you may be looking to plug in to your CI/CD pipeline to achieve Continuous Cost Control.  I’ve come up with a few ideas of how to use Lambda along with ParkMyCloud to supercharge your AWS cloud savings.  Let’s take a look at a few options:

Make ParkMyCloud API calls from Lambda

With ParkMyCloud’s API available to control your schedules programmatically, you could make calls to ParkMyCloud from Lambda based on events that occur.  The API allows you to do things like list resources and schedules, assign schedules to resources, snooze schedules to temporarily override them, or cancel a snooze or schedule.

For instance, if a user logs in remotely to the VPN, it could trigger a Lambda call to snooze the schedules for that user’s instances.  Alternatively, a Lambda function could change the schedules of your Auto Scaling Group based on average requests to your website.  If you store data in S3 for batch processing, a trigger from an S3 bucket can tell Lambda to notify ParkMyCloud that the batch is ready and the processing servers need to come online.
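The VPN-login idea might look something like the following as a Lambda handler. The event shape and the ParkMyCloud endpoint path are assumptions for illustration, not a documented interface.

```python
import json
import urllib.request

def snooze_request(user: str, hours: int) -> urllib.request.Request:
    """Build the (assumed) snooze call for one user's instances."""
    return urllib.request.Request(
        f"https://console.parkmycloud.com/api/users/{user}/snooze",  # assumed path
        data=json.dumps({"snooze_hours": hours}).encode(),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def lambda_handler(event, context):
    """Entry point wired to the VPN login trigger; the event field is assumed."""
    user = event["user"]
    urllib.request.urlopen(snooze_request(user, hours=4))
    return {"statusCode": 200, "body": f"snoozed instances for {user}"}
```

The VPN’s login hook (or an API Gateway endpoint in front of it) would invoke this function with the user’s name, bringing that user’s parked instances online for a few hours.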

Send notifications from ParkMyCloud to Lambda

With ParkMyCloud’s notification system, you can send events that occur in the ParkMyCloud system to a webhook or email.  The events can be actions taken by schedules that are applied to resources, user actions that are done in the UI, team and schedule assignments from policies, or errors that occur during parking.

By sending schedule events, you could use a Lambda function to tell your monitoring tool when servers are being shut down from schedules.  This could also be a method for letting your build server know that the build environment has fully started before the rest of your CI/CD tools take over.  You could also send user events to Lambda to feed into a log tool like Splunk or Logstash.  Policy events can be sent to Lambda to trigger an update to your CMDB with information on the team and schedule that’s applied to a new server.
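On the receiving side, a hypothetical Lambda function might route those notifications like this; the notification field names are assumed, not the documented webhook format.

```python
import json

def route_event(notification: dict) -> str:
    """Pick a destination based on the notification type (field names assumed)."""
    kind = notification.get("type", "")
    if kind == "schedule":
        return "monitoring"   # e.g. silence alerts while servers are parked
    if kind == "user":
        return "audit-log"    # e.g. forward to Splunk or Logstash
    if kind == "policy":
        return "cmdb"         # e.g. record the team/schedule assignment
    return "ignore"

def lambda_handler(event, context):
    """Receives the webhook POST and forwards it to the chosen system."""
    notification = json.loads(event["body"])
    destination = route_event(notification)
    # ...call the monitoring/logging/CMDB integration here...
    return {"statusCode": 200, "body": destination}
```

Keeping the routing decision in a small pure function like `route_event` makes it easy to unit test without invoking any downstream systems.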

Think outside the box!

Are you already using AWS Lambda to kick off functions and run scripts in your environment?  Try combining Lambda with ParkMyCloud and let us know what cool tricks you come up with for supercharging your automation and saving on your cloud bill! Stop by Booth 1402 at AWS re:Invent this year and tell us.

Read more ›

5 Favorite AWS Training Resources

When it comes to AWS training resources, there’s no shortage of information out there. Considering the wide range of videos, tutorials, blogs, and more, it’s hard knowing where to look or how to begin. Finding the best resource depends on your learning style, your needs for AWS, and getting the most updated information available. With this in mind, we came up with our 5 favorite AWS training resources, sure to give you the tools you need to learn AWS:

1. AWS Self-Paced Labs

What better way to learn than at your own pace? AWS self-paced labs give you hands-on learning in a live AWS environment, with AWS services, and actual scenarios you would encounter in the cloud. Among the recommended labs you’ll find an Introduction to Amazon Elastic Compute Cloud (EC2), and for more advanced users, a lab on Creating Amazon EC2 Instances with Microsoft Windows. If you’re up for an adventure, enroll in a learning quest and immerse yourself in a collection of labs that will help you master any AWS scenario at your own pace. Once completed, you will earn a badge that you can boast on your resume, LinkedIn, website, etc.

2. AWS Free Tier

Sometimes the best way to learn something is by jumping right in. With the AWS Free Tier, you can try AWS services for free. This is a great way to test out AWS for your business, or for the developers out there, to try services like AWS CodePipeline, AWS Data Pipeline, and more. While you are still getting a hands-on opportunity to learn a number of AWS services, the only downside is that there are certain usage limits. You can track your usage with a billing alarm to avoid unwanted charges, or you can try ParkMyCloud and park your instances when they’re not in use to get the most out of your free tier experience. In fact, ParkMyCloud started its journey by using AWS’ free tier – we eat our own dog food!

3. AWS Documentation

AWS Documentation is like a virtual encyclopedia of tools, terms, training, and everything AWS. You’ll find white papers, case studies, tutorials, cloud computing basics, and so much more. This resource is a one-stop-shop for all of your AWS documentation needs, whether you’re a beginner or advanced user. No matter where you are in your AWS training journey, AWS documentation is always a useful reference and certainly deserves a spot in your bookmarks.

4. YouTube

So far, we’ve gone straight to the source for 3 out of 5 of our favorite AWS training resources. Amazon really does a great job of providing hands-on training, tutorials, and documentation for users with a range of experience. However, YouTube opens up a whole new world of video training that includes contributions from not only Amazon, but other great resources as well. Besides the obvious Amazon Web Services channel, there are also popular and highly rated videos by Edureka, Simplilearn, Eli the Computer Guy, and more.

5. Bloggers

As cloud technology usage continues to expand and evolve, blogs are a great way to stay up to speed with AWS and the world of cloud computing. Of course, in addition to labs, a free tier, extensive documentation, and their own YouTube channel, AWS also has their own blog. Since AWS actually has a number of blogs that vary by region and technology, we recommend that you start by following Jeff Barr – Chief Evangelist at Amazon Web Services and primary contributor. Edureka, mentioned above among our recommended YouTube channels, also has a blog that covers plenty of AWS topics. In addition, the CloudThat blog is an excellent resource for AWS and all things cloud; CloudThat was co-founded by Bhavesh Goswami, a former member of the AWS product development team.

There’s plenty of information out there when it comes to AWS training resources. We picked our 5 favorite resources for their reliability, quality, and range of information. Whether you’re new to AWS or consider yourself an expert, these resources are sure to help you find what you’re looking for.

Read more ›

ParkMyCloud Appoints Veteran Tech Leader Bill Supernor as CTO

Cloud Cost Optimization Platform Vendor Gears Up for Rapid Expansion with New Hire

October 16, 2017 (Dulles, VA) – ParkMyCloud, the leading enterprise platform for continuous cost control in public cloud, announced today that Bill Supernor has joined the team as Chief Technology Officer (CTO). His more than 20 years of leadership experience in engineering and management includes scaling teams and managing enterprise-grade software products such as KoolSpan’s TrustCall secure call and messaging system.

At ParkMyCloud, Supernor will be responsible for product development and software engineering as ParkMyCloud expands its platform, which currently helps enterprises like McDonald’s, Unilever, and Fox control costs on Amazon Web Services (AWS), Microsoft Azure, and Google Cloud, to more clouds and continues to add more services and integrations.

“Bill’s experience in the software industry will be a boon to us as we scale and grow the business,” said ParkMyCloud CEO Jay Chapel. “His years in the software and IT space will be a huge advantage as we grow our engineering team and continue to innovate upon the cost control platform that cloud users need.”

“This is a fast-moving company in a really hot space,” said Supernor. “I’m excited to be working with great people who have passion about what they do.”

Prior to joining ParkMyCloud, Supernor was the CTO of KoolSpan, where he led the development of a globally deployed secure voice communication system for smartphones. He has also served in engineering leadership positions at Trust Digital, Cognio, Symantec, and McAfee/Network Associates, and as an officer in the United States Navy.

About ParkMyCloud

ParkMyCloud is a SaaS platform that helps enterprises optimize their public cloud spend by automatically reducing resource waste — think “Nest for the cloud”. ParkMyCloud has helped customers such as McDonald’s, Capital One, Unilever, Fox, and Sage Software dramatically cut their cloud bills by up to 65%, delivering millions of dollars in savings on Amazon Web Services, Microsoft Azure, and Google Cloud Platform. For more information, visit http://www.parkmycloud.com.

Contact

Katy Stalcup

kstalcup@parkmycloud.com

(571) 334-3291

Read more ›

3 Enterprise Cloud Management Challenges You Should Be Thinking About

Enterprise cloud management is a top priority. As the shift towards multi-cloud environments continues, so does the need to consider the potential challenges. Whether you already use the public cloud, or are considering making the switch, you probably want to know what the risks are. Here are three you should be thinking about.

1. Multi-Cloud Environments

As the ParkMyCloud platform supports AWS, Azure, and Google, we’ve noticed that multi-cloud strategies are becoming increasingly common among enterprises. There are a number of reasons why it would be beneficial to utilize more than one cloud provider. We have discussed risk mitigation as a common reason, along with price protection and workload optimization. As multi-cloud strategies become more popular, the advantages are clear. However, every strategy comes with its challenges, and it’s important for CIOs to be aware of the associated risks.

Without the use of cloud management tools, multi-cloud management is complex and sometimes difficult to navigate. Different cloud providers have different price models, product features, APIs, and terminology. Compliance requirements are also a factor that must be considered when dealing with multiple providers. Meeting and maintaining requirements for one cloud provider is complicated enough, let alone multiple. And don’t forget you need a single pane of glass to view your multi-cloud infrastructure.

2. Cost Control

Cost control is a top priority among cloud computing trends. Enterprise Management Associates (EMA) conducted a research study and identified key reasons why there is a need for cloud cost control; among them were inefficient use of cloud resources, unpredictable billing, and contractual obligation or technological dependency.

Managing your cloud environment and controlling costs requires a great deal of time and strategy, taking away from the initiatives your enterprise really needs to focus on. The good news is that we offer a solution to cost control that can save 65% or more on your monthly cloud bills – simply by parking your idle cloud resources. ParkMyCloud was one of the top three vendors recommended by EMA as a Rapid ROI Utility. If you’re interested in seeing why, we offer a 14-day free trial.

3. Security & Governance

In discussing a multi-cloud strategy and its challenges, the bigger picture also includes security and governance. As we have mentioned, a multi-cloud environment is complex, complicated, and requires native or third-party tools to maintain vigilance. Aside from legal compliance based on your company’s industry, the cloud also comes with standard security issues and, of course, the possibility of cloud breaches. In this vein, the customers we talk to often worry about too many users being granted console access to create and terminate cloud resources, which can lead to waste. A key here is limiting user access based on roles, or Role-Based Access Control (RBAC). At ParkMyCloud we recognize that visibility and control are important in today’s complex cloud world. That’s why, in designing our platform, we provide the sysadmin the ability to delegate access based on a user’s role, and the ability to authenticate leveraging SSO using SAML integration. This approach brings security benefits without losing the appeal of a multi-cloud strategy.

Our Solution

Enterprise cloud management is an inevitable priority as the shift towards a multi-cloud environment continues. Multiple cloud services add complexity to the challenges of IT and cloud management. Cost control is time consuming and needs to be automated and monitored constantly. Security and governance is a must and it’s necessary to ensure that users and resources are optimally governed. As the need for cloud management continues to grow, cloud automation tools like ParkMyCloud provide a means to effectively manage cloud resources, minimize challenges, and save you money.

Read more ›

How to Get the Cheapest Cloud Computing

Are you looking for the cheapest cloud computing available? Depending on your current situation, there are a few ways you might find the least expensive cloud offering that fits your needs.

If you don’t currently use the public cloud, or if you’re willing to have infrastructure in multiple clouds, you’re probably looking for the cheapest cloud provider. If you have existing infrastructure, there are a few approaches you can take to minimize costs and ensure they don’t spiral out of control.

Find the Cloud Provider that Offers the Cheapest Cloud Computing

There are a variety of small cloud providers that attempt to compete by dropping their prices. If you work for a small business and prefer a no-frills experience, perhaps one of these is right for you.

However, there’s a reason the “big three” cloud providers – Amazon Web Services (AWS), Microsoft Azure, and Google Cloud – dominate the market. They offer a wide range of product lines and are continually innovating. They have a low frequency of outages, and their scale means a straightforward onboarding process and plenty of documentation.

Whatever provider you decide on, ensure that you’ll have access to all the services you need – are there compute, storage, and database products? How good is the customer support?

For more information about the three major providers’ pricing, please see this whitepaper on AWS vs. Google Cloud Pricing and this article comparing AWS vs. Azure pricing.

Locked In? How to Get the Cheapest Cloud Computing from Your Current Provider

Of course, if your organization is already locked into a cloud computing provider, comparing providers won’t do you much good. Here’s a short checklist of things you should do to ensure you’re getting the cheapest cloud computing possible from your current provider:

  • Use Reserved Instances for production – Reserved instances can save money – as long as you use them the right way. More here. (This article is about AWS RIs, but similar principles apply to Azure’s RIs and Google’s Committed Use discounts.)
  • Only pay for what you actually need – there are a few common ways that users inadvertently waste money, such as using larger instances than they need, and running development/testing instances 24/7 rather than only when they’re needed. (Here at ParkMyCloud, we’re all about reducing this waste – try it out.)
  • Ask – it never hurts to contact your provider and ask if there’s anything you could be doing to get a cheaper price. If you use Microsoft Azure, you may want to sign up for an Enterprise License Agreement. Or maybe you qualify for AWS startup credits.

Get Credit for Your Efforts

While finding the cheapest cloud computing is, of course, beneficial to your organization’s common good, there’s no need to let your work in spending reduction go unnoticed. Make sure that you track your organization’s spending and show your team where you are reducing spend.

We’ve recently made this task easier than ever for ParkMyCloud users. Now, you can not only create and customize reports of your cloud spending and savings, but you can also schedule these reports to be emailed out. Users are already putting this to work by having savings reports automatically emailed to their bosses and department heads, to ensure that leadership is aware of the cost savings gained… and so users can get credit for their efforts.

Read more ›

Why We Love the AWS IoT Button

The AWS IoT button is a simple wi-fi device with endless possibilities. If you’re an Amazon Prime member, you’re probably familiar with the hardware that inspired the IoT button – the Amazon Dash button. The wi-fi connected Dash Button can be used to reorder your favorite Amazon products automatically, making impulse buys with the click of a button. The Dash Button makes ordering fast and easy, products are readily accessible, and you’ll never run out of toilet paper again. The AWS IoT button can do that and so much more. A lot more.  

Beyond the singular function of making Amazon Prime purchases, the IoT button can be used to control just about anything that uses the internet. Based on the Amazon Dash Button hardware, the IoT button is programmable, easy to configure, and can be integrated with virtually any internet-connected device. It was designed for developers to help them get acquainted with Amazon Web Services like AWS IoT, AWS Lambda, Amazon DynamoDB, and more, without the need to write device-specific code.

How to Use the AWS IoT button

  • Configure the button to connect to your wi-fi network
  • Provision the button with an AWS IoT certificate and private key
  • From there, the button connects to AWS IoT and publishes a message on a topic when clicked
  • Use the rules engine to set up a rule – configure single-click, double-click, or long-press events to be routed to any AWS service
  • Configure the button to send notifications through Amazon SNS, store clicks in an Amazon DynamoDB table, or code custom logic in an AWS Lambda function
  • Configure the function to connect to third-party services or AWS IoT-powered devices

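The last few steps above can be sketched as a minimal Lambda function. This is only an illustration, not an official AWS example: the SNS topic ARN is a placeholder, and the message wording is our own. The `clickType` and `serialNumber` fields, however, are part of the event payload the IoT button actually publishes.

```python
# Message text per click type, as delivered in the AWS IoT button's
# event payload ("clickType" is SINGLE, DOUBLE, or LONG).
MESSAGES = {
    "SINGLE": "Button clicked once",
    "DOUBLE": "Button double-clicked",
    "LONG": "Button long-pressed",
}

def route_click(event):
    """Turn a button event into a human-readable notification string."""
    click = event.get("clickType", "SINGLE")
    text = MESSAGES.get(click, f"Unrecognized click type: {click}")
    return f"{text} (button {event.get('serialNumber', 'unknown')})"

def lambda_handler(event, context):
    """Lambda entry point: publish the notification to an SNS topic.

    The topic ARN below is a placeholder -- substitute your own.
    """
    import boto3
    sns = boto3.client("sns")
    sns.publish(
        TopicArn="arn:aws:sns:us-east-1:123456789012:iot-button-clicks",
        Message=route_click(event),
    )
```

From here, swapping the SNS publish for a DynamoDB `put_item` or a call to a third-party API is the usual next step.
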
What You Can Do with It

The AWS IoT button can be configured to trigger a variety of actions. With so much potential for what you can do, it’s hard to know where to begin. Rest assured, Amazon has a few suggestions:

  • Count or track items
  • Call or alert someone
  • Start or stop something
  • Order devices
  • Remotely control home appliances

With this in mind, here are some ways that creative developers are using the AWS IoT button:

A Challenge

The AWS IoT button opens the door for developers to create an unlimited number of functions. You can use it to do just about anything on the internet – including parking your instances.

So here’s our challenge: create a function to park your instances (or perhaps, to snooze your parking schedules) using the AWS IoT button in configuration with ParkMyCloud. If you do, tell us about it and we’ll send you some ParkMyCloud swag.

Read more ›

Managing Microsoft Azure VMs with ParkMyCloud

Microsoft has made it easy for companies to get started using Microsoft Azure VMs for development and beyond. However, as an organization’s usage grows past a few servers, managing both costs and users becomes necessary – and can become complex quickly. ParkMyCloud simplifies cloud management of Microsoft Azure VMs by giving you options to create teams of users, group instances, and schedule resources easily.

Consider the case of a large Australian financial institution that uses Microsoft Azure as its sole cloud provider. They currently have 125 VMs, costing them over $100k on their monthly cloud bill with Microsoft. Their compute spend is about 95% of their total Azure bill.

Using one Azure account for the entire organization, they chose to split it into multiple divisions, such as DEV, UAT, Prod, and DR. These divisions are then split further into the multiple applications that run within each division. To best optimize their cloud costs with ParkMyCloud, they created teams of users (one per division). They gave each team permissions to shut down and start up individual applications/VMs. A few select admin users have the ability to control all VMs, regardless of where the applications are placed.

The organization also required specific startup/shutdown ordering for their servers. How would ParkMyCloud handle this need? This looks like a perfect use case for logical groups in ParkMyCloud.

For detailed instructions on how to manage logical groups with ParkMyCloud, see our user guide.

Putting this into context, let’s say that you have a DB and a web server grouped together. You want the DB to start first and stop last, therefore you would need to set the DB to have a start delay of 0 and a stop delay of 5. For the web server, you would set a start delay of 5 and stop delay of 0.
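
The delay mechanics can be sketched in a few lines of Python. This is not ParkMyCloud’s implementation – just an illustration of how per-member start/stop delays impose an ordering on the group from the example above:

```python
import time

# (name, start_delay_minutes, stop_delay_minutes), mirroring the
# DB/web-server example: the DB starts first and stops last.
GROUP = [
    ("db-server", 0, 5),
    ("web-server", 5, 0),
]

def ordered(members, action):
    """Return members sorted by their delay for the given action."""
    idx = 1 if action == "start" else 2
    return sorted(members, key=lambda m: m[idx])

def run(members, action, do_action, wait=time.sleep):
    """Apply `do_action` to each member, honoring its configured delay."""
    previous = 0
    for member in ordered(members, action):
        delay = member[1 if action == "start" else 2]
        if delay > previous:
            wait((delay - previous) * 60)  # delays are in minutes
        previous = delay
        do_action(member[0])
```

With this group, a start acts on the DB immediately and the web server five minutes later; a stop reverses the order.
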

Of course, you could also manage logical groups of Microsoft Azure VMs with tags, scripts, and Azure automation. However, we know firsthand that the alternative solution involves complexities and requires constant upkeep – and who wants that?

ParkMyCloud offers the advantage of not only cutting your cloud costs, but also making cloud management simpler, easier, and more effective. To experience all the great benefits of our platform, start a free trial today!

Read more ›

7 AWS Security Best Practices with ParkMyCloud

Besides cost control, one of the biggest concerns from IT administrators is utilizing AWS security best practices to keep their infrastructure safe.  While there are some great tools that specialize in cloud and information security, there are some security benefits of ParkMyCloud that are not often considered when hardening a cloud infrastructure.

1. Keep Instances Off When Not In Use

Scheduling your instances to be turned off on nights and weekends when you aren’t using them saves you a ton of money on your cloud bill, but  also provides security and protection.  Leaving servers and databases on 24/7 is just asking for someone to try to break in and connect to servers within your infrastructure, especially during off-hours when you don’t have as many IT staff keeping an eye on things.  By aggressively scheduling your resources to be off as much as possible, you minimize the opportunity for outside attacks on those servers.

2. User Governance

Your users are trustworthy and need to access lots of servers to do their job, but why give them more access than necessary?  Limiting what servers, databases, and auto scaling groups everyone can see to only what they need keeps accidents from happening and limits mistakes.  ParkMyCloud lets you separate users into teams, with designated Team Leads to manage the individual Team Members and limits their control to just start / stop.

3. Single Sign On

In addition to governing user access to resources, ParkMyCloud integrates with all major SSO providers for SAML authentication for your users.  This includes Okta, Ping Identity, OneLogin, Centrify, Azure AD, ADFS, and Google Apps.  By using one of these providers, you can keep identity management centralized and offer multi-factor authentication through those SAML connections.

4. Audit Logs and Notifications

Every user action in ParkMyCloud is tracked in an Audit Log that is available to super admins.  These audit logs can also be downloaded as a CSV if you want to import them into something like Splunk or Logstash for log management.  Audit logs can help you see when schedules are snoozed or changed, policies are updated, or teams are created or changed.
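
Once exported as CSV, the audit log is easy to slice with standard tooling before shipping it to Splunk or Logstash. A minimal sketch – the column names below are illustrative assumptions, so check the header row of your actual export:

```python
import csv
import io

# Hypothetical audit-log export; real exports will have their own columns.
SAMPLE_EXPORT = """timestamp,user,action,resource
2017-10-16T09:12:00Z,alice,schedule_snoozed,i-0abc1234
2017-10-16T09:45:00Z,bob,policy_updated,team-dev
2017-10-16T10:02:00Z,alice,schedule_snoozed,i-0def5678
"""

def entries_for_action(csv_text, action):
    """Return the audit rows matching one action, e.g. schedule snoozes."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row for row in reader if row["action"] == action]
```
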

In addition, those audit log entries can be sent as notifications to Slack channels, email addresses, or through webhooks to other tools.  This lets you keep an eye on either specific teams or the entire organization within ParkMyCloud.

5. Minimal Connection Permissions

ParkMyCloud connects to AWS through an IAM Role (preferred) or an IAM User.  The AWS policy that is required uses the bare minimum of necessary actions, which boils down to Describe, Start, and Stop for each resource type (EC2, ASG, and RDS). This means you don’t have to worry about ParkMyCloud doing something to your AWS account that you don’t intend.  For Azure connections, ParkMyCloud requires a similarly-limited Limited Access Role, and the connection to Google Cloud requires a limited Service Account.
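
As an approximation of what such a least-privilege policy looks like – consult ParkMyCloud’s documentation for the exact statement list; note that ASGs have no start/stop call, so resizing is the equivalent action:

```python
import json

# A sketch of a describe/start/stop-only IAM policy for EC2, RDS, and ASGs.
# Crucially, nothing here permits terminating or creating resources.
PARKING_POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "ec2:DescribeInstances",
            "ec2:StartInstances",
            "ec2:StopInstances",
            "rds:DescribeDBInstances",
            "rds:StartDBInstance",
            "rds:StopDBInstance",
            "autoscaling:DescribeAutoScalingGroups",
            # ASGs are "parked" by resizing min/max/desired rather than
            # by a start/stop call.
            "autoscaling:UpdateAutoScalingGroup",
        ],
        "Resource": "*",
    }],
}

print(json.dumps(PARKING_POLICY, indent=2))
```
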

6. Restrict Scheduling Based on Names or Tags

The ParkMyCloud policy engine is a powerful way to automate your resource scheduling and team management, but it can also be used to prevent schedules from being applied to certain systems. For instance, if you have a prod database that you want to keep up 24/7, you can use a policy to never let any user apply a schedule (even if they wanted to).  These policies can be applied based on tags, naming conventions, AWS regions, or account names.

7. Full Cloud Visibility

One great benefit of ParkMyCloud is the ability to see across all of your cloud providers (AWS, Microsoft Azure, and Google Cloud), cloud accounts, and regions within a cloud. This visibility not only provides management benefits, but helps with security by keeping all resources in one list. It prevents rogue instances from running unnoticed in regions you don’t normally look at, and can help you identify resources that don’t need to be running – or that could even be terminated.

Conclusion

As you continue to strive to follow AWS security best practices, consider adding ParkMyCloud to your security toolkit.  While you’re saving money for your team, you can also get these 7 benefits to help secure your infrastructure and sleep better at night.  Start a free trial of ParkMyCloud today to start reaping the benefits!

Read more ›

Reduce RDS Costs with ParkMyCloud

Thanks to the ability to shut down instances with a start/stop scheduler, users of Amazon’s database service can finally save time and reduce RDS costs. Until June 2017, the only way to accomplish this feat was by copying and deleting instances, running the risk of losing transaction logs and automatic backups. While Amazon’s development of the start/stop scheduler is useful and provides a level of cost savings, it also comes with issues of its own.

For one, the start/stop scheduler is not foolproof. The process for stopping and starting non-production RDS instances is manual, relying on the user to create and consistently manage the schedule. Being able to switch instances off when not in use and restart them when access is needed again is a helpful advantage, but manual operation leaves room for human error. Complicating things further, RDS instances that have been shut down are automatically restarted after seven days, again relying on the user to switch those instances back off if they’re not needed at the time.

Why Scripting is not the Best Answer

One way of minimizing the potential for error is to automate the stop/start schedule yourself by writing your own scripts. While that could work, you would need to account for the number of non-production instances deployed on AWS RDS, and plan a schedule that gives developers access when needed – which could very well be at varying times throughout the day. All factors considered, writing and maintaining scheduling scripts takes extra time and costs money as well. Ultimately, setting up and maintaining your own schedule could increase your cloud spend more than it reduces RDS costs.
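
To see why, consider what even a minimal DIY parking script entails. The sketch below uses boto3 to stop RDS instances carrying an assumed `environment=dev` tag – and it still handles none of the cron scheduling, startup ordering, multi-account coverage, or seven-day automatic restart concerns discussed here:

```python
def parkable(instances, tag_key="environment", tag_value="dev"):
    """Pick the running instances tagged as non-production.

    `instances` is a list of dicts shaped like:
    {"id": ..., "status": ..., "tags": {...}}
    """
    return [i["id"] for i in instances
            if i["status"] == "available" and i["tags"].get(tag_key) == tag_value]

def park_tagged_databases():
    """Describe every RDS instance, then stop the non-production ones.

    Requires AWS credentials; covers only a single region and account.
    """
    import boto3
    rds = boto3.client("rds")
    described = []
    for db in rds.describe_db_instances()["DBInstances"]:
        tags = rds.list_tags_for_resource(
            ResourceName=db["DBInstanceArn"])["TagList"]
        described.append({
            "id": db["DBInstanceIdentifier"],
            "status": db["DBInstanceStatus"],
            "tags": {t["Key"]: t["Value"] for t in tags},
        })
    for db_id in parkable(described):
        rds.stop_db_instance(DBInstanceIdentifier=db_id)
```
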

When you start thinking about the cost of paying developers, the number of scripts that would have to be written, and the ongoing maintenance required, buying into an automated scheduling process is a no-brainer.

How ParkMyCloud Reduces RDS Costs

Automated Scheduling

ParkMyCloud saves you time and money by automating the scheduling process of stopping and starting AWS RDS instances (in addition to Microsoft Azure VMs and Google Cloud Compute instances, but that’s another post). At the same time, you get total visibility and full autonomy over your account.

The process is simple. With you as the account manager, ParkMyCloud conducts a discovery of all the company accounts, and determines which instances are most suitable for parking. From there, you have the option of implementing company-wide schedules for non-production instances, or giving each development team the ability to create schedules of their own.

Flexible Parking

ParkMyCloud takes saving on RDS costs to a whole new level with parking schedules. Different schedules can be applied to different instances, or they can be parked permanently and put on “snooze” when access is needed. Amazon’s seven-day automatic restart of switched off instances is a non-issue with our platform, and snoozed instances can be re-parked when access is no longer needed, so there’s no more relying on the user to do it manually.

For the most part, we find that companies will want to park their non-production instances outside the normal working hours of Monday to Friday, let’s say from 8:00am to 8:00pm. By parking your instances outside of those days and hours, ParkMyCloud can reduce your cloud spend by 65% – even more if you implement a parking schedule and use the snooze option.
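
That figure falls straight out of the calendar: a week has 168 hours, and a Monday-to-Friday, 8:00am-8:00pm schedule keeps instances on for only 60 of them.

```python
HOURS_PER_WEEK = 7 * 24        # 168 hours in a week
on_hours = 5 * 12              # Mon-Fri, 8:00am to 8:00pm
savings = 1 - on_hours / HOURS_PER_WEEK
print(f"Parked {HOURS_PER_WEEK - on_hours} of {HOURS_PER_WEEK} hours: "
      f"{savings:.1%} saved")  # -> 64.3% saved
```

Any hours trimmed beyond that baseline schedule push the savings past the quoted 65%.
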

Valuable Insight

Because you have total visibility over the account, you can reduce RDS costs even further with a bird’s-eye view of your company’s cloud use. You’ll be able to tell which of your instances are underused, terminate them, and possibly benefit further from selecting a cheaper plan (really soon). You’ll be able to see all RDS instances across all regions and AWS accounts in one simple view. You can also view the parking schedules for each instance and see how much each schedule is saving, potentially reducing costs even further. This visibility into your account, and the information it provides, makes a great resource for budgeting and planning.

Conclusion

The AWS start/stop scheduler is useful, but has to be done manually. Writing your own scripts sounds helpful, but it’s actually time consuming, and not fully cost-effective. ParkMyCloud automates the process while still putting you in control, reducing RDS costs and saving you time and money.

See the benefits of ParkMyCloud for yourself by taking advantage of our two-week free trial. Test our cloud cost control platform in your own environment, without any need for a credit card or signed contract, and see why our simple, cost-effective tool is the key to reducing RDS costs. We offer a variety of competitive pricing plans to choose from, or a limited-function version that you can continue to use for free after the trial ends.

To start your free trial today, sign up here.

Read more ›

Cloud Optimization Tools = Cloud Cost Control (Part II)

A couple of weeks ago in Part 1 of this blog topic we discussed the need for cloud optimization tools to help enterprises with the problem of cloud cost control. Amazon Web Services (AWS) even goes as far as suggesting the following simple steps to control their costs (which can also be applied  to Microsoft Azure and Google Cloud Platform, but of course with slightly different terminology):

    1. Right-size your services to meet capacity needs at the lowest cost;
    2. Save money when you reserve;
    3. Use the spot market;
    4. Monitor and track service usage;
    5. Use Cost Explorer to optimize savings; and
    6. Turn off idle instances (we added this one).

A variety of third-party tools and services have popped up in the market over the past few years to help with cloud cost optimization – why? Because upwards of $23B was spent on public cloud infrastructure in 2016, and spending continues to grow at a rate of 40% per year. Furthermore, depending on who you talk to, roughly 25% of public cloud spend is wasted or not optimized – that’s a huge market! If left unchecked, this waste problem is projected to triple to over $20B by 2020 – enter the vultures (full disclosure, we are also a vulture, but the nice kind). Most of these tools are lumped under the Cloud Management category, which includes subcategories like Cost Visibility and Governance, Cost Optimization, and Cost Control vendors – we are a cost control vendor, to be sure.

Why do you, an enterprise, care? Because there are unique and subtle differences between the tools that fit into these categories, so your use case should dictate where you go for what – and that’s what I am trying to help you with. So, why am I a credible source to write about this (and not just because ParkMyCloud is the best thing since sliced bread)?

Well, yesterday we had a demo with a FinTech company in California that was interested in Cost Control, or thought they were. It turns out that what they were actually interested in was Cost Visibility and Reporting; the folks we talked to were in Engineering Finance, so their concerns were primarily with billing metrics, business unit chargeback for cloud usage, RI management, and dials and widgets to view everything related to AWS and GCP billing. Instead of trying to force a square peg into a round hole, we passed them on to a company in this space that’s better suited to solve their immediate needs. In return, the Finance folks are going to put us in touch with the FinTech Cloud Ops folks who care about automating cloud cost control as part of their DevOps processes.

This type of situation happens more often than not. We have a lot of enterprise customers using ParkMyCloud along with CloudHealth, CloudCheckr, Cloudability, and Cloudyn because, in general, they provide Cost Visibility and Governance, and we provide actionable, automated Cost Control.

As this is our blog, and my view from the street: we have 200+ customers now using ParkMyCloud, and we demo to 5-10 enterprises per week. Based on a couple of generic customer use cases where we have strong familiarity, here’s what you need to know to stay ahead of the game:

  • Cost Visibility and Governance: CloudHealth, CloudCheckr, Cloudability, and Cloudyn (now owned by Microsoft)
  • Reserved Instance (RI) management – all of the above
  • Spot Instance management – SpotInst
  • Monitor and Track Usage: CloudHealth, CloudCheckr, Cloudability, and Cloudyn
  • Turn off (park) Idle Resources – ParkMyCloud, Skeddly, Gorilla Stack, BotMetric
  • Automate Cost Control as part of your DevOps Process: ParkMyCloud
  • Govern User Access to Cloud Console for Start/Stop: ParkMyCloud
  • Integrate with Single Sign-On (SSO) for Federated User Access: ParkMyCloud

To summarize, cloud cost control is important, and there are many cloud optimization tools available to assist with visibility, governance, management, and control of your single or multi-cloud environments. However, there are very few tools which allow you to set up automated actions leveraging your existing enterprise tools like Ping, Okta, Atlassian, Jenkins, and Slack.  Make sure you are not only focusing on cost visibility and recommendations, but also on action-oriented platforms to really get the best bang for your buck.

Read more ›

How to Optimize Cloud Spend with ParkMyCloud

The focus on how to optimize cloud spend is now as relentless as the initial surge to migrate workloads from ‘on-prem’ to public cloud. A lot of this focus, and the resulting discussion, concerns options related to the use of Reserved Instances (RIs), Spot Instances, or other pre-pay options. The pay-up-front discount plan makes sense when you have some degree of visibility into future needs, and when there is no ‘turn-it-off’ option – which we here at ParkMyCloud call “parking”.

When it comes to the ability to ‘park’ instances, we like to divide the world into two halves. There are Production Systems, which typically need to be running 24/7/365, and then there are Non-Production Systems, which at least in theory have the potential to be parked when not in use. The former are typically your end-customer or enterprise-facing systems, which need to be online and available at all times. In this case, RIs typically make sense. When it comes to those non-production systems, that’s where a tool such as ParkMyCloud comes into play. Here you have an opportunity to review the usage patterns and needs of your organization and optimize cloud spend accordingly. For example, you may well discover that your QA team never works on weekends, so you can turn their EC2 instances off on a Friday night and turn them back on first thing Monday morning. Elsewhere, you might find workloads that can be turned off in the small hours, or even workloads that can be left off for extended periods.

Our customers typically like to view both their production and non-production systems in our simple dashboard. Here they can view all their public cloud infrastructure and  simply lock those production systems which cannot be touched. Once within the dashboard the different non-production workloads can then be reviewed and either centrally managed by an admin or have their management delegated to individual business units or teams.

Based on the customer usage we track, we see these non-production systems typically accounting for about 50% of what companies spend on compute (i.e. instances/VMs). We then see those who aggressively manage these non-production instances saving up to 65% of that cost, which makes a large dent in their overall cloud bill.

So, when you are thinking about how to optimize cloud spend, there are a lot more opportunities than just committing to purchase in advance, especially for your non-production workloads.

Read more ›

Shutting Down RDS Instances in AWS – Introducing the Start/Stop Scheduler

Users of Amazon’s database service have been clamoring for a solution to shutting down RDS instances with an automatic schedule ever since 2009, when the PaaS service was first released.  Once Amazon announced the ability to power off and on RDS instances earlier this year, AWS users started planning out ways to schedule these instances using scripts or home-grown tools.  However, users of ParkMyCloud were happy to find out that support for RDS scheduling was immediately available in the platform.  If you were planning on writing your own scripts for RDS parking, let’s take a look at some of the additional features that ParkMyCloud could provide for you.

Schedule EC2 and ASG in addition to RDS

Very few AWS users run RDS databases without also running EC2 instances as compute resources. This means that writing your own scheduling scripts for shutting down RDS instances would involve scheduling EC2 instances as well.
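As a rough sense of what a home-grown script has to cover, here is a minimal sketch of an EC2-plus-RDS parking helper. The parking window and instance identifiers are illustrative assumptions; the actual AWS calls (which require boto3 and valid credentials) are isolated in one function so the scheduling logic can be tested on its own:

```python
# Hedged sketch of a home-grown EC2 + RDS parking script.
# The 7pm-7am window and the resource IDs are illustrative assumptions.
from datetime import time

PARK_START, PARK_END = time(19, 0), time(7, 0)  # park overnight, 7pm-7am

def is_parked(t):
    """True if clock time t falls inside the overnight parking window."""
    return t >= PARK_START or t < PARK_END

def stop_environment(ec2_ids, rds_ids):
    """Stop EC2 instances and RDS databases together.

    Requires boto3 and AWS credentials; imported here so the scheduling
    logic above stays importable without AWS access.
    """
    import boto3
    boto3.client("ec2").stop_instances(InstanceIds=ec2_ids)
    rds = boto3.client("rds")
    for db_id in rds_ids:
        rds.stop_db_instance(DBInstanceIdentifier=db_id)
```

Even this toy version hints at the real work: you would still need to add the start path, error handling, retries, and a cron or Lambda trigger to run it on schedule.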

ParkMyCloud has support for parking EC2 resources, RDS databases, and Auto Scaling Groups all from the same interface, so it’s easy to apply on/off schedules to all of your cloud resources.

Logical Groups to tie instances together

Let's say you have a QA environment with a couple of RDS databases and multiple EC2 instances running a specific version of your software. With custom scripts, you have to implement logic that shuts down and starts up all of those instances together, potentially in a specific order. ParkMyCloud lets users create Logical Groups, which appear as a single entity in the interface but schedule multiple instances behind the scenes. You can also set start or stop delays within a Logical Group to customize the order, so if databases need to be started first and stopped last, you can set that level of granularity.
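The ordering behavior can be sketched in a few lines. The member names and delay values below are illustrative, not ParkMyCloud's internal representation; the point is that per-member start and stop delays naturally produce "databases first on start, databases last on stop":

```python
# Sketch of Logical Group ordering: per-member delays determine sequence.
# Names and delay values (in seconds) are illustrative assumptions.
group = [
    {"name": "qa-rds-1", "start_delay": 0,   "stop_delay": 300},  # DB: first up, last down
    {"name": "qa-app-1", "start_delay": 120, "stop_delay": 0},
    {"name": "qa-app-2", "start_delay": 120, "stop_delay": 0},
]

def start_order(members):
    """Members sorted by start delay: smallest delay starts first."""
    return [m["name"] for m in sorted(members, key=lambda m: m["start_delay"])]

def stop_order(members):
    """Members sorted by stop delay: smallest delay stops first."""
    return [m["name"] for m in sorted(members, key=lambda m: m["stop_delay"])]

print(start_order(group))  # database first
print(stop_order(group))   # database last
```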

Govern user access to databases

If your AWS account includes RDS databases for dev, QA, staging, production, test, and UAT, you'll want to allow different users to access different databases based on their role or current project. Implementing user governance in your own scripts can be a huge hassle, but ParkMyCloud makes it easy to split your user base into teams. Users can belong to multiple teams if necessary, but by default they will only see the RDS databases in the teams they have access to.

High visibility into all AWS accounts and regions

Scripting your own schedules can be a challenge even with a single region or account, and once you're using RDS databases across regions or AWS accounts, it gets harder still. ParkMyCloud pulls all resources from all accounts and all AWS regions into a single pane of glass, so it's easy to apply schedules and keep an eye on all your RDS databases.

RDS DevOps automation

It can be a challenge to integrate your own custom scripts with your DevOps processes. With ParkMyCloud, you have multiple options for automation. With the Policy Engine, RDS instances can have schedules applied automatically based on tags, names, or locations. The ParkMyCloud API also makes it easy to override schedules and toggle instances from Slack channels, CI/CD tools, load-testing apps, and any other automated process that might need a database instance powered on for a brief time.
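As a flavor of what a CI/CD integration might look like, here is a hedged sketch of building an API call to power a database on before a test run. The endpoint path and payload shape are hypothetical, not the documented ParkMyCloud API; consult the real API documentation for actual routes and authentication:

```python
# Hypothetical sketch of calling a parking API from a CI/CD step.
# The route and payload are illustrative assumptions, NOT the real
# ParkMyCloud API -- see the official API docs for actual endpoints.
import json

def toggle_request(base_url, resource_id, state, token):
    """Build the HTTP request we'd send (e.g. with requests.put)."""
    return {
        "url": f"{base_url}/resources/{resource_id}/state",  # hypothetical route
        "headers": {
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        "body": json.dumps({"state": state}),  # e.g. "on" or "off"
    }

req = toggle_request("https://api.example.com/v1", "qa-rds-1", "on", "TOKEN")
print(req["url"])
```

A CI pipeline would issue this before the test stage and send the matching "off" request in a teardown step.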

Conclusion

Shutting down RDS instances is a huge money-saver. Anyone looking to implement their own enterprise-grade AWS RDS start/stop scheduler is going to run into many challenges along the way. Luckily, ParkMyCloud is on top of things and has implemented RDS parking alongside the robust feature set you already use for cost savings. Sign up for a free trial today to supercharge your RDS database scheduling!


Interview: Hybrid Events Group + ParkMyCloud to Automate EC2 Instance Scheduling and Optimize AWS Infrastructure

We talked with Jedidiah Hurt, DevOps and technical lead at Hybrid Events Group, about how his company is using ParkMyCloud to automate EC2 instance scheduling, saving hours of development work. Below is a transcript of our conversation.

Appreciate you taking the time to speak with us today. Can you start off by giving us some background on your role, what Hybrid Events Group does, and why you got into doing what you do?

I do freelance work for Hybrid Events Group and am now moving into the role of technical lead. We had a big client we were working with this spring and we needed to fire up several EC2 instances. We were doing live broadcasting events across the U.S., which is what the company specializes in – event A/V services. So we do live webcasting, and we can do CapturePro, another service we offer where we basically just show up to any event that someone wants to record – usually workshops and keynotes at tech conferences – and we record on video and also capture the presenter's presentation in real time.

ParkMyCloud, what we used it for, was just to automate EC2 instances for doing live broadcasts.

Was there any reason you chose AWS over others like Azure or Google Cloud, out of curiosity?

I just had the most experience with AWS; I was using AWS before Azure and Google Cloud existed. So I haven’t, or I can’t say that I’ve actually really given much of a trial to Azure or Google Cloud. I might have to give them a look here sometime in the future.

Do you use any PaaS services in AWS, or do you focus on compute, databases, and storage?

Yeah, not a whole lot right now. Just your basic S3, EC2, and I think we are probably going to move into elastic load balancing and auto scaling groups within the next few months or so as we build out our platform.

Do you use an Agile development process to build out your platform and provide continuous delivery?

So, I am an Agile practitioner, but we are just kind of brownfielding the platform. We are in the architecture stage right now, so we will be doing all of that, as far as continuous deployment, and hopefully continuous integration where we actually have some automated testing.

As far as tools, I'm the only developer on the team right now, so we won't really have a full Agile process or be fully into Agile. We haven't got boards and sprints and planning, weekly meetings, and all those things, because it's just me. But we integrate portions of it, as far as having stakeholders kind of figuring out what our minimum viable product is.

What drove you to look for something like ParkMyCloud, and how did you come across it?

ParkMyCloud enabled us to automate a process that we were going to do manually, or that I was going to have to write scripts for and maintain. I think initially I was looking into just using the AWS CLI, and some other kind of test scheduler, to bring up the instances and then turn them off after our daily broadcast session was over. I did a little bit of googling to see if there were any time-based solutions available and found ParkMyCloud, and this platform does exactly what’s needed and more.

And you are using the free tier of ParkMyCloud, correct?

Yes. I don’t remember what the higher tiers offered, but this was all we really needed. We just had three or four large EC2 instances that we wanted to bring up for four to five hours a day, Monday through Friday, so it had all the core features that we currently need.

Anything that stood out for you in terms of using the product?

I’d say on the plus side I was a little bit concerned at the beginning as far as the reliability of the tool, because we would have been in big trouble with our client if ParkMyCloud failed to bring up an instance at a scheduled start time. We used it, or I guess I would say we relied on it, every day for 2 months solid, and never saw any issues as far as instances not coming up when they were supposed to, or shutting down when they were not supposed to. I was really pleased with, what I would say, the reliability of the tool – that definitely stuck out to me.

From an ROI standpoint, are you satisfied with savings and the way the information is presented to you?

Yeah, absolutely. And I think for us, the ROI wasn’t so much the big difference between having the instances running all the time, or having the instances on a schedule. The ROI was more from the fact that I didn’t have to build the utility to accomplish that because you guys already did that. So in that sense, it probably saved me many hours of development work.

Also, there's that kind of uneasy feeling you get when you hack up a little script and put it into production versus having a well-tested, fully-automated platform. I'm really happy that we found ParkMyCloud; it has definitely become an important part of our infrastructure management over the last few months.

As our final question, how much overhead or time did you have to spend in getting ParkMyCloud set up to manage your environment, and did you have to do anything on a daily or weekly basis to maintain it?

So, as I said, our particular use case was very basic, so it ended up being three instances that we needed to bring up for three or four hours a day and then shut them down. I’d say it took me ten to fifteen minutes to get rolling with ParkMyCloud and automate EC2 instance scheduling. And now we save thousands of dollars per month on our AWS bill.


Cloud Optimization Tools = Cloud Cost Control

Over the past couple of years we have had a lot of conversations with large and small enterprises regarding cloud management and cloud optimization tools, all of whom were looking for cost control. They wanted to reduce their bills, just like any utility you might run at home — why spend more than you need to? Amazon Web Services (AWS) actively promotes optimizing cloud infrastructure, and where they lead, others follow. AWS even goes so far as to suggest the following simple steps to control AWS costs:

  1. Right-size your services to meet capacity needs at the lowest cost;
  2. Save money when you reserve;
  3. Use the spot market;
  4. Monitor and track service usage;
  5. Use Cost Explorer to optimize savings; and
  6. Turn off idle instances (we added this one).

It's interesting to note the use of the word "control," even though the section is labeled Cost Optimization.

So where is all of this headed? It's great that AWS offers their own solutions, but what if you want automation in your DevOps processes, multi-cloud support (or plan to be multi-cloud), real-time reporting on these savings, and the ability to turn stuff off when you are not using it? Then you likely need a third-party tool to help with these tasks.

Let's take a quick look at a description of each AWS recommendation above to get a better understanding of each offering. Following this, we will explore whether these cost optimization options can be automated as part of a continuous cost control process:

  1. Right-sizing – Both the EC2 Right Sizing solution and AWS Trusted Advisor analyze utilization of EC2 instances running during the prior two weeks. The EC2 Right Sizing solution analyzes all instances with a max CPU utilization less than 50% and determines a more cost-effective instance type for that workload, if available.
  2. Reserved Instances (RIs) – For certain services like Amazon EC2 and Amazon RDS, you can invest in reserved capacity. With RIs, you can save up to 75% over equivalent on-demand capacity. RIs are available in three payment options: (1) all up-front, (2) partial up-front, or (3) no up-front payment.
  3. Spot – Amazon EC2 Spot instances allow you to bid on spare Amazon EC2 computing capacity. Since Spot instances are often available at a discount compared to On-Demand pricing, you can significantly reduce the cost of running your applications, grow your application’s compute capacity and throughput for the same budget, and enable new types of cloud computing applications.
  4. Monitor and Track Usage – You can use Amazon CloudWatch to collect and track metrics, monitor log files, set alarms, and automatically react to changes in your AWS resources. You can also use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health.
  5. Cost Explorer – AWS Cost Explorer gives you the ability to analyze your costs and usage. Using a set of default reports, you can quickly get started with identifying your underlying cost drivers and usage trends. From there, you can slice and dice your data along numerous dimensions to dive deeper into your costs.
  6. Turn off Idle Instances – "Park" your cloud resources by assigning them schedules of operating hours during which they will run or be temporarily stopped – i.e., parked. Most non-production resources (dev, test, staging, and QA) can be parked at nights and on weekends, when they are not being used. On the flip side, some batch processing or load-testing applications can only run during non-business hours, so they can be shut down during the day.
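The right-sizing rule in item 1 is simple enough to sketch directly: flag any instance whose maximum CPU over the lookback window never reached 50%. The sample data below is illustrative; in practice the per-instance maxima would come from CloudWatch metrics over the prior two weeks:

```python
# Sketch of the right-sizing rule described above: flag instances whose
# max CPU over the lookback window stayed below 50%. The sample numbers
# are illustrative; real data would come from CloudWatch.
CPU_THRESHOLD = 50.0  # percent

def rightsizing_candidates(max_cpu_by_instance):
    """Return instance IDs whose two-week max CPU never hit the threshold."""
    return [iid for iid, max_cpu in max_cpu_by_instance.items()
            if max_cpu < CPU_THRESHOLD]

sample = {"i-web-1": 82.0, "i-batch-2": 34.5, "i-dev-3": 12.1}
print(rightsizing_candidates(sample))  # → ['i-batch-2', 'i-dev-3']
```

Instances on the resulting list are candidates for a smaller (cheaper) instance type, subject to a sanity check on memory and network utilization as well.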

Many of these AWS solutions offer recommendations, but they require manual effort to gain the benefits. This is why third-party solutions have seen widespread adoption, including cloud management, cloud governance and visibility, and cloud optimization tools. In part two of this blog we will look at some of those tools, the benefits and approach of each, and the level of automation to be gained.

Copyright © ParkMyCloud 2016. All rights reserved|Privacy Policy