Cloud Management Archives - ParkMyCloud

As public cloud computing has grown over the past decade, the concept of cloud management has developed in order to maintain control of resource allocation, governance, security, and – most important of all – cost.

Due to the complexity of optimizing cloud compute resources, many organizations have implemented cloud management software. However, in the same way that no two organizations are the same, no two cloud management solutions are identical.

All cloud service providers offer tools to help manage their cloud environments. Some are excellent at helping organizations optimize their resources. Others are more practical for organizations that wish to create permission tiers and allow team members different levels of access to their projects.

However, many solutions overcomplicate cloud management by providing functions that most organizations find unnecessary. These additional functions are factored into the cost of the solution, eating into the financial benefits of the cloud management software.

ParkMyCloud is an ideal solution for organizations that want an effective tool for minimizing cloud management costs. By “parking” non-productive instances and VMs when they are not required, organizations can save up to 60% on AWS, Microsoft Azure, and Google Cloud computing costs.

By providing a single dashboard view of instances and VMs across multiple accounts, types and zones, ParkMyCloud also enables administrators to make informed decisions about resource allocation and governance – simplifying cloud management and reducing costs further.

To find out more about our versatile cloud management software, contact us today.

New in ParkMyCloud: Visualize AWS Usage Data Trends

We are excited to share the latest release in ParkMyCloud: animated heat map displays. This builds on our previous release of static heat maps displaying AWS EC2 instance utilization metrics from CloudWatch. Now, this utilization data is animated to help you better identify usage patterns over time and create automated parking schedules.

The heatmaps will display data from a sequence of weeks, in the form of an animated “video”, letting you see patterns of usage over a period of time. You can take advantage of this feature to better plan ParkMyCloud parking schedules based on your actual instance utilization.

Here is an example of an animated heatmap, which allows you to visualize when instances are used over a period of eight weeks:

The latest ParkMyCloud update also includes:

  • CloudWatch data collection improvements to reduce the number of API calls required to pull instance utilization metrics data
  • Various user interface improvements to a number of screens in the ParkMyCloud console.

As noted in our last release, this utilization data also gives ParkMyCloud the information it needs to make optimal parking and rightsizing recommendations (SmartParking) when that feature is released next month – part of our ongoing effort to do what we do best: save you money, automatically.

AWS users who sign up now can take advantage of the latest release as we ramp up for automated SmartParking. To get the best cost control over your cloud bill, start your ParkMyCloud trial today to collect several weeks’ worth of CloudWatch data, track your usage patterns, and get recommendations as soon as the SmartParking feature becomes available in a few weeks.

If you are an existing customer, be sure to update your AWS policies to enable ParkMyCloud to access your AWS CloudWatch data. Detailed instructions can be found in our support portal.

Feedback? Anything else you’d like to see ParkMyCloud do? Let us know!

Read more ›

Three Big AWS re:Invent Announcements from 2017

AWS re:Invent announcements were in full swing at the conference last week. In addition to the sessions, workshops, pub crawls, DJ sets, and all sorts of other educational and entertainment options, the technology announcements are really what drive the buzz. While it’s impossible to cover them all, we picked three big announcements from this year’s AWS re:Invent that will certainly be game-changers:

New EC2 Instance Types

M5 EC2 instances are the next generation of general purpose EC2 instances. As general purpose instances, they are well suited to running web and app servers, hosting enterprise applications, supporting online games, and building cache fleets.

What sets M5 instances apart is that they were built for high-demand workloads, deliver 14% better price/performance per core than M4 instances, and are based on custom Intel® Xeon® Platinum 8175M series processors running at 2.5 GHz. The new instances come with a full package of resource allocations, complete with optimized compute, memory, and storage.

H1 EC2 instances are the latest in storage optimized instances, designed with ample local storage for high-performance big data workloads. These instances are powered by 2.3 GHz Intel® Xeon® E5-2686 v4 (“Broadwell”) processors and offer more memory compared to D2 instances. H1 instances provide low-cost storage, high disk throughput, and high sequential disk I/O access to large data sets. They’re designed for data scientists running big data applications like Elastic MapReduce, big workload clusters, data processing applications like Apache Kafka, distributed file systems, and network file systems.

Public Preview of EC2 Bare Metal Instances

AWS customers now have the ability to run workloads on bare metal servers. Peter DeSantis, VP of Global Infrastructure at AWS, calls it the “best of both worlds” because customers can run an operating system directly on the hardware, yet still reap the benefits of using the cloud by paying as they go instead of up-front. Bare metal instances are ideal in scenarios where workloads need access to specific hardware features, and where workloads with licensing restrictions can still benefit from AWS cloud offerings. The public preview of the i3.metal instance is only the first of an entire series of bare metal instances, with more slated to roll out over the next few months.

Spot Instance Hibernation

Among several changes announced to AWS spot instances was a notable new feature – hibernation for spot instances. Instead of terminating a spot instance when it is interrupted, it will now hibernate instead, saving all of your data into an EBS volume. The instance will reboot as soon as spare capacity is available for the given instance type. Hibernation is useful because you won’t be charged for using the instance while it’s in hibernation, storage is charged at standard EBS storage rates, and you can still terminate your instances in hibernation by cancelling your bid at any time.

Conclusion

AWS re:Invent announcements are always exciting. As the largest and most successful public cloud provider to date, Amazon keeps us on our toes and continues giving us so much to look forward to. In the ongoing war between the big three cloud providers, these innovations will certainly drive the competition to innovate and provide even better options for enterprises to choose from. As always, we’ll continue to cover new announcements, product launches, and more as AWS continues to innovate and increase their offerings at a frenetic pace.

Read more ›

Cloud Service Provider Comparison – Part Two: IBM vs Oracle

For the second part of our cloud service provider comparison, we’ll continue our discussion of “secondary” cloud providers with two longtime tech industry giants: IBM vs Oracle.

We always talk about the “big three” cloud providers: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). We’ve covered Azure vs AWS, Google vs AWS, and most recently, the rise of Alibaba as the next biggest cloud provider. But what about the rest? IBM and Oracle have solidified themselves in the technology world, but will their offerings bring them success in the public cloud? And if so, does one of them have a better chance?

IBM

  • At the end of June 2017, IBM made waves when it outperformed Amazon in total cloud computing revenue at $15.1 billion to $14.5 billion over a year-long period
  • However, Amazon is still way ahead when it comes to the IaaS market
    • For 2016, Amazon had the highest IaaS revenue, followed by Microsoft, Alibaba, and Google, respectively. IBM did not make the top 5.
    • Alibaba had the highest IaaS growth rate, followed by Google, Microsoft, and Amazon, respectively.
  • IBM was the fourth biggest cloud provider – before Alibaba took over
  • In Q1 of 2017, Synergy rankings showed that IBM has 4 percent of the public cloud market share, just behind Alibaba’s 5 percent
    • AWS had 44 percent, Azure – 11 percent, and Google Cloud – 6 percent

The reality that Alibaba knocked IBM out of fourth place in the ongoing saga of the cloud wars is a bit unsettling, but remember that the enterprise cloud is still just beginning. After all, the term “cloud computing” was only coined in 2006. As we look forward, IBM and Amazon have just released their own television ad campaigns, and the differences in their messaging are an indication of how each provider plans to move forward.

As enterprises continue their shift to the cloud, TV ads tell us a lot about a provider’s purpose, overall message, and target audience. In the IBM ad, “The Cloud for Enterprise, Yours,” the cloud is presented not as a cloud at all, but as an entity “built for your business, designed for your data, and secure to the core.” This messaging turns an otherwise confusing, sometimes difficult-to-understand service into something tangible for business leaders – something that makes sense, something built for their enterprise. That type of message goes a long way with people who don’t know the first thing about cloud computing.

On the other hand, Amazon’s ad targets a different audience entirely: “the builders” – developers, programmers, and architects who already understand and rely on AWS for their building needs. In contrast to IBM, whose ad is all about how their cloud is helping businesses through the power of data and innovation, blockchain, and more, Amazon went straight for the technical experts who know exactly what they’re doing, no explanation necessary. The ad was also perfectly timed for AWS re:Invent in Las Vegas this past week (ParkMyCloud was there as a sponsor!), gearing up their technical followers for the big event.

IBM positioned itself as a cloud provider for business leaders as the shift to the cloud only gets bigger. Amazon positioned itself as a haven for technical experts, the people writing code and continuously managing applications and other processes. It will take some time before we see the results of how this messaging plays out, but it certainly says something about who each provider wants to impress. And while pretty much everyone agrees that Amazon is currently leading the cloud, the numbers don’t lie – let’s not forget that IBM outperformed them in overall cloud revenue.

Oracle

  • Oracle’s cloud business is still ramping up, particularly in terms of IaaS
  • In fiscal Q1 of 2018, growth was at 51 percent, down from a 60 percent average in the last four quarters
    • Q4 for fiscal 2017 was at 58 percent
  • Since last quarter, shares have gone down by 10 percent

If things weren’t looking good for Oracle before, they may have just taken a turn for the worse. This past week at AWS re:Invent, we watched CEO Andy Jassy take quite a dig at Oracle during his keynote speech. There was a cartoon involved, featuring Oracle founder Larry Ellison, and the message was clear: AWS is taking business away from Oracle.

Oracle’s success largely comes from its database business, which is still its biggest revenue producer, as many companies use Oracle databases to run critical parts of their operations. AWS decided to take them head on with a database of its own, directly targeting Oracle’s enterprise customers. After AWS launched Aurora to compete with Oracle’s SQL database, it steadily started peeling away longtime Oracle customers. Oracle’s response? Build its own cloud, competing directly with the biggest and most successful cloud provider thus far, AWS.

But in spite of its current position, we can’t rule out Oracle just yet. For customers who still rely on Oracle’s database or other software, the cloud is a welcome offering and probably an easier option. And in an attempt to make things harder for AWS, Oracle made some changes to its licensing and doubled the cost of using its database on AWS, in hopes that customers who already use Oracle’s database will find Oracle’s cloud a cheaper, more appealing option. However, this backfired to some degree, as customers using the AWS cloud in conjunction with Oracle’s database did not appreciate the spike in price. Ultimately, the decision could prove more beneficial to AWS, prompting customers to switch their database instead of their cloud provider.

And this brings us back to re:Invent, where Andy Jassy announced a new, serverless database service – Aurora Serverless. Again, this new offering is in direct competition with Oracle’s database, and once it goes live, only time will tell if Oracle can take the heat.

IBM vs Oracle: The Takeaway

IBM vs Oracle – does either of them stand a chance against the bigger, better-known cloud providers? So far, it’s looking pretty good for IBM. They have their sights set on huge success with Watson, the AI supercomputer that generated a lot of buzz when it won Jeopardy. They’ve also taken a new approach with their TV ad campaign, setting themselves apart from Amazon with an entirely different audience, wooing business leaders as the best choice for business and innovation strategy. And of course, they’ve taken the lead in overall cloud computing revenue, which is nothing to scoff at.

On the other hand, Oracle is struggling to find its place, and Amazon is calling them out. With the announcement of Aurora Serverless, we’ll be looking to see how this new offering impacts Oracle as it takes a direct hit at Oracle’s flagship product – its database business. If Oracle wants to keep up and hold its own against other cloud providers, it might be wise to take a note from IBM and innovate with a new approach entirely.

In the ongoing battle for the ultimate cloud provider, Amazon’s lead is certainly not a guarantee. Not only are Google and Azure coming in strong, but Alibaba is well on its way, and other secondary providers like IBM and Oracle are working on innovations and improvements to secure their place in the ranks.

In the end, we always find it helpful to come back to one of our favorite Andy Jassy quotes regarding the cloud battle:

“There won’t be just one successful player. There won’t be 30 because scale really matters here in regards to cost structure, as well as the breadth of services, but there are going to be multiple successful players, and who those are I think is still to be written.”

As we continue making comparisons between cloud providers and keeping up to date with the ongoing advancements and innovations behind their offerings, we welcome you to participate! Please share any thoughts or feedback in the comment section – we’d love to hear your take!

Read more ›

New in ParkMyCloud: AWS Utilization Metric Tracking

We are happy to share the latest release in ParkMyCloud: you can now see resource utilization data for your AWS EC2 instances! This data is viewable through customizable heatmaps.

This update gives you information about how your resources are being used – and it also provides the necessary information that will allow ParkMyCloud to make optimal parking and rightsizing recommendations when this feature is released next month. This is part of our ongoing efforts to do what we do best – save you money, automatically.

Utilization metrics that ParkMyCloud will now report on include:

  • Average CPU utilization
  • Peak CPU utilization
  • Total instance store read operations
  • Total instance store write operations
  • Average network data in
  • Average network data out
  • Average network packets in
  • Average network packets out

Here is an example of an instance utilization heatmap, which allows you to see when your instances are used most often:

In a few weeks, we will release the ability for ParkMyCloud to recommend parking schedules for your instances based on these metrics. In order to take advantage of this, you will need to have several weeks’ worth of CloudWatch data already logged, so that we can recommend based on your typical usage. Start your ParkMyCloud trial today to start tracking your usage patterns so you can get usage-based parking recommendations.

If you are an existing customer, you will need to update your AWS policies to enable ParkMyCloud to access your AWS CloudWatch data. Detailed instructions can be found in our support portal.

Feedback? Anything else you’d like to see ParkMyCloud do? Let us know!

Read more ›

Cloud Service Provider Comparison – Who Will be the Next Big Provider? Part One: Alibaba

When making a cloud service provider comparison, you would probably think of the “big three” providers: Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). Thus far, AWS has led the cloud market, but the other two are gaining market share, driving us to make comparisons between Azure vs AWS and Google vs AWS. But that’s not the whole story.

In recent years, a few other “secondary” cloud providers have made their way into the market, offering more options to choose from. Are they worth looking at, and could one of them become the next big provider?

Andy Jassy, CEO of AWS, says: “There won’t be just one successful player. There won’t be 30 because scale really matters here in regards to cost structure, as well as the breadth of services, but there are going to be multiple successful players, and who those are I think is still to be written. But I would expect several of the older guard players to have businesses here as they have large installed enterprise customer bases and a large sales force and things of that sort.”

So for our next cloud service provider comparison, we are going to do an overview of what could arguably be the next biggest provider in the public cloud market (after all, we need to add a 4th cloud provider to the ParkMyCloud arsenal):

Alibaba

Alibaba is a cloud provider not widely known in the U.S., but it’s taking China by storm and giving Amazon a run for its money in Asia. It’s hard to imagine a cloud provider (or e-commerce giant) more successful than Amazon, let alone a provider that isn’t part of the big three, but Alibaba has its sights set on surpassing AWS to dominate the worldwide cloud computing market.

Take a look at some recent headlines:

Guess Who’s King of Cloud Revenue Growth? It’s Not Amazon or Microsoft

Alibaba Just Had Its Amazon AWS Moment

Alibaba Declares War on Amazon’s Surging Cloud Computing Business

What we know so far about Alibaba:

  • In 2016: Cloud revenue was $675 million, surpassing Google Cloud’s $500 million. First quarter revenue was $359 million and in the second quarter rose to $447 million.
  • Alibaba was dubbed the highest ranking cloud provider in terms of revenue growth, with sales increasing 126.5 percent from 2015 ($298 million) to 2016
  • Gartner research places Alibaba’s cloud in fourth place among cloud providers, ahead of IBM and Oracle

Alibaba Cloud entered cloud computing just three years after Amazon launched AWS. Since then, Alibaba has grown at a faster pace than Amazon, largely due to its domination of the Chinese market, and is now the 5th largest cloud provider in the world.

Alibaba’s growth is attributed in part to the booming Chinese economy, as the Chinese government continues digitizing, bringing its agencies online and into the cloud. In addition, as the principal e-commerce system in China, Alibaba holds the status as the “Amazon of Asia.” Simon Hu, senior vice president of Alibaba Group and president of Alibaba Cloud, claims that Alibaba will surpass AWS as the top provider by 2019.

Our Take

For the time being, Amazon is still dominating the U.S. cloud market, exceeding $400 billion in comparison to Alibaba’s $250 billion. Still, Alibaba Cloud is growing at incredible speed, with triple digit year-over-year growth over the last several quarters. As the dominant cloud provider in China, Alibaba is positioned to continue growing, and is still in its early stages of growth in the cloud computing market. Only time will reveal what Alibaba Cloud will do, but in the meantime, we’ll definitely be keeping a lookout. After all, we have customers in 20 countries around the world, not just in the U.S.  

Next Up: IBM & Oracle

Apart from the big three cloud providers, Alibaba is clearly making a name for itself with a fourth place ranking in the world of cloud computing. While this cloud provider is clearly gaining traction, a few more have made their introduction in recent years. Here’s a snapshot of the next 2 providers in our cloud service provider comparison:

IBM

  • At the end of June 2017, IBM made waves when it outperformed Amazon in total cloud computing revenue at $15.1 billion to $14.5 billion over a year-long period
  • However, Amazon is still way ahead when it comes to the IaaS market
    • For 2016, Amazon had the highest IaaS revenue, followed by Microsoft, Alibaba, and Google, respectively. IBM did not make the top 5.
    • Alibaba had the highest IaaS growth rate, followed by Google, Microsoft, and Amazon, respectively.
  • IBM was the fourth biggest cloud provider – before Alibaba took over
  • In Q1 of 2017, Synergy rankings showed that IBM has 4 percent of the public cloud market share, just behind Alibaba’s 5 percent
    • AWS had 44 percent, Azure – 11 percent, and Google Cloud – 6 percent

Oracle

  • Oracle’s cloud business is still ramping up, particularly in terms of IaaS
  • In fiscal Q1 of 2018, growth was at 51 percent, down from a 60 percent average in the last four quarters
    • Q4 for fiscal 2017 was at 58 percent
  • Since last quarter, shares have gone down by 10 percent

When making a cloud service provider comparison, don’t limit yourself to the “big three” of AWS, Azure, and GCP. They might dominate the market now, but other providers are growing, innovating, and increasing their following in the cloud wars – we’ll continue to track and compare as earnings are reported.

Read more ›

How to Optimize Costs When Using Blue-Green Deployments

Blue-green deployments are a great way to minimize downtime and risk — however, users should remember to keep cost in mind as well when optimizing deployments.

Why You Should Use Blue-Green Deployments

One approach to continuous deployment of applications that has really taken off in popularity recently is the use of blue-green deployments.

The main idea behind this system is to have two full production deployments in existence that are running the last two versions of code, with only the latest version actively in use.  For instance, if the current version of your software is running in your “blue” environment, your next deployment would take place in the “green” environment. When you’re ready to flip the switch, you start pointing users at green instead of blue.

This deployment method has a couple of great benefits. First, it helps with minimizing downtime when cutting over to newly-deployed code. Instead of upgrading your current system and having to make users wait until the upgrade is complete, the cutover downtime is minimized. Second, along the same lines, you have a fresh deployment each time instead of upgrading an existing system repeatedly. Third, you have a system that has been already working for you that you can roll back to if necessary.

How to Optimize Costs With Two Production Deployments

Of course, running two production environments means that you are paying twice the cost for your infrastructure. ParkMyCloud users have asked how they can optimize costs while using the blue-green deployment strategy.  We use AWS internally for our blue-green deployments, so we’ll discuss some options in terms of AWS terminology, but you can use other clouds like Azure and Google as well.

One approach is to use AWS Auto-Scaling Groups as your deployment mechanism. With ASGs, you decide how many instances you want as a minimum, a maximum, and a desired amount for your environment. When setting up ASGs in ParkMyCloud, you can have two different settings for min/max/desired for when the ASG is “on” and “off”.  This way, you can have an ASG for blue and one for green, then use ParkMyCloud to set the min/max/desired as needed, so each of these environments is only running when necessary, and not wasting money.
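For example, here is a minimal sketch of how that parking step could be scripted with boto3 – this is an illustration of the general technique, not ParkMyCloud’s implementation, and the group names and capacity values are placeholders:

```python
# Sketch: toggle Auto Scaling Groups between "on" and "off" capacity settings.
# Group names and min/max/desired values below are illustrative only.
import boto3

asg = boto3.client("autoscaling")

def set_asg_capacity(group_name, minimum, maximum, desired):
    """Resize an Auto Scaling Group, e.g. to 0/0/0 when parking it."""
    asg.update_auto_scaling_group(
        AutoScalingGroupName=group_name,
        MinSize=minimum,
        MaxSize=maximum,
        DesiredCapacity=desired,
    )

# "Park" the blue environment and bring up green for the cutover.
set_asg_capacity("blue-asg", 0, 0, 0)
set_asg_capacity("green-asg", 2, 4, 2)
```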

Another option is to use Logical Groups in ParkMyCloud. This allows you to group together instances into one entity, so you could have a database and a web server start and stop together.  If you go this route, you can put all of your blue instances together in a group, then start the whole group up when you are ready to switch over. When going between blue and green, you can just update the logical group to have the newest instances as you deploy. Again, this allows you to park the inactive environment, saving its cost.

If your continuous deployment is fully automated, a third option is to utilize the ParkMyCloud API to change schedules and toggle servers as deployments are completed. Typically, you’ll want your current active deployment on an “always on” schedule, so ParkMyCloud will turn things on even if someone tries to turn them off, and the standby deployment on an “always off” schedule so you are saving money.
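As a rough sketch of what that automation might look like from a deployment script, the snippet below flips schedules after a cutover. The endpoint paths, payload, and token handling shown here are hypothetical placeholders – consult the ParkMyCloud API documentation for the actual interface:

```python
# Illustrative only: endpoint paths, payloads, and auth are hypothetical.
import requests

API_BASE = "https://console.parkmycloud.com/api"   # hypothetical base URL
HEADERS = {"Authorization": "Bearer <api-token>"}   # placeholder token

def set_schedule(resource_id, schedule_name):
    # Hypothetical call: attach a named schedule to a resource.
    requests.put(
        f"{API_BASE}/resources/{resource_id}/schedule",
        json={"schedule": schedule_name},
        headers=HEADERS,
    )

# After a successful green deployment, keep green always on and park blue.
set_schedule("green-web-01", "always-on")
set_schedule("blue-web-01", "always-off")
```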

This idea of using ParkMyCloud with blue-green deployments is one way to start implementing Continuous Cost Control in your pipeline. This way, you can save money while delivering software quickly and automatically. Try it out with ParkMyCloud today and get the most out of your cloud!

Read more ›

New 451 Research Report on ParkMyCloud’s Multi-Cloud Scheduling Software

Analyst 451 Research has released a new report on ParkMyCloud, highlighting that “ParkMyCloud continues to build out its multi-cloud scheduling software, maintaining the clean interface but adding functionality with a reporting dashboard, single sign-on and notifications, including a Slackbot for automated parking.”

It’s true! We’ve been steadily adding features to ParkMyCloud as our customers ask for them. Recent examples include:

  • Mobile app – easy access to your ParkMyCloud account for cost management on the go
  • RDS parking – park AWS RDS instances, just like EC2
  • Slack integration – get notifications and manage your continuous cost control via Slack

Here’s the full “451 take” on ParkMyCloud:

“ParkMyCloud is one of a handful of products that automate cloud resource scheduling via a lightweight SaaS application. With support for Azure and Google Cloud Platform as well as AWS, it offers a bird’s-eye view of provisioned public cloud resources and a slick interface for ‘parking’ idle capacity, either according to a schedule or ad hoc. With a clear ROI story and plans to improve the user experience with a mobile app and a more robust policy engine, the company benefits from a focus on doing one thing and doing it well.”

That “clear ROI story” that 451 Research noted is clear to our customers, too. In fact, most customers see a return on investment within two months of using the product – the savings quickly pay for the cost of premium features.

They also noted that the number of instances managed in the platform has tripled, just from Q2 to Q3 this year. More and more AWS, Azure, and GCP users are relying on ParkMyCloud for continuous cost control.

So if you are evaluating cloud cost control tools like ParkMyCloud, we encourage you to check out the full 451 Research analysis. Download and read the report here: ParkMyCloud automates scheduling of AWS, Azure, and GCP resources.

Ready to join the ParkMyCloud following and start controlling your cloud spend? Start a free trial of ParkMyCloud today.

Read more ›

Cloud Per-Second Billing – How Much Does It Really Save?

It has been a little over a month since Amazon and Google switched some of their cloud services to per-second billing and so the first invoices with the revised billing are hitting your inboxes right about now. If you are not seeing the cost savings you hoped for, it may be a good time to look again at what services were slated for the pricing change, and how you are using them.

Google Cloud Platform

Starting with the easiest one, Google Cloud Platform (GCP), you may not be seeing a significant change, as most of their services were already billing at the per-minute level, and some were already at the per-second level. The services moved to per-second billing (with a one-minute minimum) included Compute Engine, Container Engine, Cloud Dataproc, and App Engine VMs.  Moving from per-minute billing to per-second billing is not likely to change a GCP service bill by more than a fraction of a percent.

Let’s consider the example of an organization that has ten GCP n1-standard-8 Compute Engine machines in Oregon at a base cost of $0.3800 per hour as of the date of this blog. Under per-minute billing, the worst-case scenario would be to shut a system down one second into the next minute, for a cost difference of about $0.0063. Even if each of the ten systems were assigned to the QA or development organization, and they were shut down at the end of every work day, say 22 days out of the month, your worst-case scenario would be an extra charge of 22 days x 10 systems x $0.0063 = $1.3860. Under per-second billing, the worst case is to shut down at the beginning of a second, with a highest possible cost for these same machines (sparing you the math) being about $0.02. So, the most this example organization can hope to save in a month on these machines with per-second billing is about $1.39.

Amazon Web Services

On the Amazon Web Services (AWS) side of the fence, the change is both bigger and smaller.  It is bigger in that they took the leap from per-hour to per-second billing for On-Demand, Reserved, and Spot EC2 instances and provisioned EBS, but smaller in that it is only for Linux-based instances; Windows instances are still at per-hour.

Still, if you are running a lot of Linux instances, this change can be significant enough to notice.  Looking at the same example as before, let’s run the same calculation with the roughly equivalent t2.2xlarge instance type, charged at $0.3712 per hour. Under per-hour billing, the worst-case scenario is to shut a system down even a second into the next higher hour. In this example, the cost would be an extra charge of 22 days x 10 systems x $0.3712 = $81.664. Under per-second billing, the worst case is the same $0.02 as with GCP (with fractions of cents difference lost in the noise). So, under AWS, one can hope to see significantly different numbers in the bill.
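To make the comparison concrete, here is a quick back-of-the-envelope script that reproduces both worst-case figures above, using the same assumed rates and the 22-workday, ten-instance scenario:

```python
# Worst-case "lost time" per month: 10 instances stopped once per workday,
# 22 workdays per month. Rates are the assumed on-demand prices quoted above.
instances, stops_per_month = 10, 22

# GCP n1-standard-8: per-minute billing wastes at most ~59 seconds per stop.
gcp_rate = 0.3800 / 3600            # dollars per second
gcp_waste = instances * stops_per_month * 59 * gcp_rate
print(f"GCP per-minute worst case: ${gcp_waste:.2f} per month")   # ~$1.37

# AWS t2.2xlarge: per-hour billing wastes at most ~59 minutes 59 seconds per stop.
aws_rate = 0.3712 / 3600
aws_waste = instances * stops_per_month * 3599 * aws_rate
print(f"AWS per-hour worst case:  ${aws_waste:.2f} per month")    # ~$81.64
```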

The scenario above is equally relevant to other situations where instances get turned on and off on a frequent basis, driving those fractions of an hour or a minute of “lost” time. Another common example would be auto-scaling groups that dynamically resize based on load, and see enough change over time to bring instances in and out of the group. (Auto-scale groups are frequently used as a high-availability mechanism, so their elastic growth capabilities are not always used, and so savings will not always be seen.) Finally, Spot instances are built on the premise of bringing them up and down frequently, and they will also enjoy the shift to per-second billing.

However, as you look at your cloud service bill, do keep in mind some of the nuances that still apply:

  • Windows: GCP applies per-second billing to Windows; AWS is still on one-hour billing for Windows.
  • Marketplace Linux: Some Linux instances in the AWS Marketplace that have a separate hourly charge are also still on hourly billing (perhaps due to contracts or licensing arrangements with the vendors?), so you may want to reconsider which flavor of Linux you want to use.
  • Reserved instances: AWS does strive to “use up” all of the pre-purchased time for reserved instances, spreading it across multiple machines with fractions of usage time, and per-second billing can really stretch the value of these instances.
  • Minimum of one-minute charge: Both GCP and AWS will charge for at least a minute from instance start before per-second billing comes into play.

Overall, per-second billing is a great improvement for consumers of cloud resources…and will probably drive us all more than ever to make each second count.

Read more ›

ParkMyCloud Launches App to Meet Demand for Mobile Cloud Cost Optimization

Cloud Cost Optimization Just Got Easier With the New ParkMyCloud Mobile App

November 8, 2017 (Dulles, VA) – ParkMyCloud, the leading enterprise platform for continuous cost control in public cloud, announced today the release of a new iOS app that allows users to park idle instances directly from their mobile devices. The app makes it easy for ParkMyCloud customers to reduce cloud waste and cut monthly cloud spend by 65% or more, now with even more capability and ease of use.

Before release of the app, current users were invited to participate in a beta test and offer feedback. Keith Nichols, CTO of FurstPerson, said, “Overall love it. I was out to dinner last Friday and got an emergency call to restart an instance that was parked – and I had my phone with me and was able to use the app without needing to drive home to login to my laptop.”

ParkMyCloud CTO Bill Supernor adds: “In addition to reducing cloud costs, ParkMyCloud stands for simplicity and ease of use. Our customers are thrilled to have control over cloud resources with a mobile app, making reducing cloud spend that much easier, even when they are on the go.”

ParkMyCloud is a recognized leader in cloud cost optimization. The new mobile app is another example of how the platform provider is making the experience of managing cloud costs easier and more accessible for enterprise customers. An Android version of the app is currently in development. ParkMyCloud also plans to release utilization-based parking later this year, to further automate instance off times and maximize savings.

About ParkMyCloud

ParkMyCloud is a SaaS platform that helps enterprises optimize their public cloud spend by automatically reducing resource waste — think “Nest for the cloud”. ParkMyCloud has helped customers such as McDonald’s, Capital One, Unilever, Foster Moore, and Sage Software dramatically cut their cloud bills by up to 65%, delivering millions of dollars in savings for customers using Amazon Web Services, Microsoft Azure, and Google Cloud Platform. For more information, visit http://www.parkmycloud.com.

Contact

Katy Stalcup

kstalcup@parkmycloud.com

(571) 334-3291

Read more ›

Complex Cloud Pricing Models Mean You Need Automated Cost Control

Cloud pricing models can be complex. In fact, it’s often difficult for public cloud users to decipher a) what they’re spending, b) whether they need to be spending that much, and c) how to save on their cloud costs. The good news is that this doesn’t need to be an ongoing battle. Once you get a handle on what you’re spending, you can automate the cost control process to ensure that you only spend what you need to.

By the way, I recently talked about this on The Cloudcast podcast – if you prefer to listen, check out the episode.

All Cloud Pricing Models Require Cost Management

The major cloud service providers – Amazon Web Services, Microsoft Azure, and Google Cloud Platform – offer several pricing models for compute services: by usage, Reserved, and Spot pricing.

The basic model is by usage – typically this has been per-hour, although AWS and Google both recently announced per-second billing (more on this next week). This requires careful cost management, so users can determine whether they’re paying for resources that are running when they’re not actually needed. This could be paying for non-production instances on nights and weekends when no one is using them, or paying for oversized instances that are not optimally utilized.
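As a rough illustration of the stakes (assuming a 12-hour weekday schedule – your schedules and rates will differ), simply parking a non-production instance outside working hours eliminates most of its running time:

```python
# Rough illustration: always-on vs. a 12-hour weekday ("parked") schedule.
# The schedule is an assumption; actual savings depend on your usage.
hours_per_week = 24 * 7            # 168 hours if left running 24x7
parked_schedule = 12 * 5           # 60 hours, e.g. 7am-7pm Mon-Fri
savings = 1 - parked_schedule / hours_per_week
print(f"Savings from parking: {savings:.0%}")   # ~64%
```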

Then there are Reserved Instances, which allow you to pre-pay partially or entirely. The billing calculation is done on the back end, so it still requires management effort to ensure that the instances you are running are actually eligible for the Reserved Instances you’ve paid for.

As to whether these are actually a good choice for you, see the following blog post: Are AWS Reserved Instances Better Than On-Demand? It’s about AWS Reserved Instances, although similar principles apply to Azure Reserved Instances.

Spot instances allow you to bid on and use spare compute capacity for a cheap price, but their inherent risk means that you have to build fault-tolerant applications in order to take advantage of this cost-saving option.

However You’re Paying, You Need to Automate

The bottom line is that while visibility into the costs incurred by your cloud pricing model is an important first step, in order to actually reduce and optimize your cloud spend, you need to be able to take automated actions to reduce infrastructure costs.

To this end, our customers told us that they would like the ability to park instances based on utilization data. So, we’re currently developing this capability, which will be released in early December. Following that, we will add the ability for ParkMyCloud to give you right sizing recommendations – so not only will you be able to automatically park your idle instances, you’ll also be able to automatically size instances to correctly fit your workloads so you’re not overpaying.

Though cloud pricing can be complicated, with governance and automated savings measures in place, you can put cost worries to the back of your mind and focus on your primary objectives.

Read more ›

Google Cloud Platform vs AWS: Is the answer obvious? Maybe not.

Google Cloud Platform vs AWS: what’s the deal? A few months ago, we asked the same question about Azure vs AWS. While Microsoft continues to see growth, and Amazon maintains a steady lead among cloud providers, Google is stepping in. Now that Google Cloud Platform has solidly secured its spot to round out the “big three” cloud providers, we think it’s time to take a closer look and see how the underdog matches up to the 800-pound gorilla.

Is Google Cloud catching up to AWS?

As they’ve been known to do, Amazon, Google, and Microsoft all released their recent quarterly earnings on the same day. At first glance, the headlines tell it all.

The natural conclusion is that AWS continues to dominate in the cloud war. With all major cloud providers reporting earnings at the same time, we have an ideal opportunity to examine the numbers and determine if there’s more to the story. Here’s what the quarterly earning reports tell us:

  • AWS reported $4.6 billion in revenue for the quarter and is on its way to $18 billion in revenue for the year, a 42% year-over-year increase, taking the top spot among cloud providers
  • Google’s revenue has cloud sales lumped together with revenue from the Google Play app store, summing to a total of $3.4 billion for the last quarter
  • Although Google did not report specific revenue for Google Cloud Platform (GCP), Canalys estimates earnings at $870 million for the quarter – a 76% year-over-year growth
  • It’s also important to note that Google is just getting started: the report also included 2,495 new hires in the last quarter, most of them in cloud positions

The Obvious: Google is not surpassing AWS

When it comes to Google Cloud Platform vs AWS, presently we have a clear winner. Amazon continues to have the advantage as the biggest and most successful cloud provider on the market. While AWS is now growing at a slower rate than both Google Cloud and Azure, Amazon’s growth is still more impressive given that it has the largest market share of all three. AWS is the clear competitor to beat as the first successful cloud provider, with the widest range of services and strong familiarity among developers.

The Less Obvious: Google is gaining ground

While it’s easy to write off Google Cloud Platform, AWS is not untouchable. Let’s not forget that 76% year-over-year growth is nothing to scoff at. AWS has already solidified itself in the cloud market, but Google Cloud is just beginning to take off.

Where is Google actually gaining ground?

We know that AWS is at the forefront of cloud providers today. At the same time, AWS is now only one among three major cloud providers. Google Cloud Platform has more in store for its cloud business in 2018.

Google’s stock continues to rise. With 2,495 new hires added to the headcount, the vast majority of them in cloud-related jobs, it’s clear that Google is serious about expanding its role in the cloud market. Deals have been made with major retailer Kohl’s and payments processing giant PayPal. Google CEO Sundar Pichai lists the cloud platform as one of the top three priorities for the company, confirming that they will continue expanding their cloud sales headcount.

In discussing Google’s recent quarterly earnings, Pichai added his thoughts on why he believes the Google Cloud Platform is on a set path for strong growth. He credits their success to customer confidence in Google’s impressive technology and a lead in machine learning, naming the company’s open-source software TensorFlow as a prime example. Another key component to growth is strategic partnerships, such as the recent announcement of a deal with Cisco, in addition to teaming up with VMware and Pivotal.

Driving Google’s growth is also the fact that the cloud market itself is growing fast. The move to the cloud has prompted large enterprises to use multiple cloud providers in building their applications – Home Depot Inc. and Target Corp., for example, rely on a combination of cloud vendors. Home Depot in particular uses both Azure and Google Cloud Platform, and a spokesman for the home improvement retailer explains why that was intentional: “Our philosophy here is to be cloud agnostic, as much as we can.” This goes to show that as long as there is more than one major cloud provider in the mix, enterprises will continue trying, comparing, and adopting more than one at a time, making way for Google Cloud to gain further ground.

Andy Jassy, CEO of AWS, put it best:

“There won’t be just one successful player. There won’t be 30 because scale really matters here in regards to cost structure, as well as the breadth of services, but there are going to be multiple successful players, and who those are I think is still to be written. But I would expect several of the older guard players to have businesses here as they have large installed enterprise customer bases and a large sales force and things of that sort.”

Google Cloud Platform vs. AWS: Why does it matter?

Google Cloud Platform vs AWS is only one battle to consider in the ongoing cloud war. The truth is, market performance is only one factor in choosing the best cloud provider, and as we always say, the specific needs of your business are what will drive your decision.

What we do know: the public cloud is not just growing, it’s booming.

Referring back to our Azure vs AWS comparison, the basic questions still remain the same when it comes to choosing the best cloud provider:

  • Are the public cloud offerings to new customers easily comprehensible?
  • What is the pricing structure and how much do the products cost?
  • Are there adequate customer support and growth options?
  • Are there useful surrounding management tools?
  • Will our DevOps processes translate to these offerings?
  • Can the PaaS offerings speed time-to-value and simplify things sufficiently, to drive stickiness?
  • What security measures does the cloud provider have in place?

Right now AWS is certainly in the lead among major cloud providers, but for how long? We will continue to track and compare cloud providers as earnings are reported, offers are increased, and price options grow and change. To be continued in 2018…

Read more ›

AWS IAM Roles and Ways to Use them to Improve Security

What are AWS IAM Roles?

Within the AWS Identity and Access Management (IAM) system, there are a number of different identity mechanisms that can be configured to secure your AWS environment, such as Users, Groups, and AWS IAM Roles. Users are clearly the humans in the picture, and Groups are collections of Users, but Roles can be a bit more obscure. Roles are defined as a set of permissions that grant access to actions and resources in AWS. Unlike Users, which are tied to a specific identity and a specific AWS account, an IAM Role can be used or assumed by IAM User accounts or by services within AWS, and can give access to Users from another account altogether.

To better understand Roles, I like the metaphor of a hat.  When we say a Role is assumed by a user – it is like saying someone can assume certain rights or privileges because of what hat they are wearing.  In any company (especially startups), we sometimes say someone “wears a lot of hats” – meaning that person temporarily takes on a number of different Roles, depending on what is needed. Mail delivery person, phone operator, IT support, code developer, appliance repairman…all in the space of a couple hours.

IAM Roles are similar to wearing different hats in that they temporarily let an IAM User or a service get permissions to do things they would not normally get to do. These permissions are attached to the Role itself, and are conveyed to anyone or anything that assumes the role. Like Users, Roles have credentials that can be used to authenticate the Role identity.

Here are a couple ways in which you can use IAM Roles to improve your security:

EC2 Instances

All too often, we see software products that rely on credentials (username/password) for services or accounts that are either hard-coded into an application or written into some file on disk. Frequently the developer had no choice, as the system had to be able to automatically restart and reconnect when the machine rebooted, without anyone available to manually type in credentials. If the code is examined or the file system is compromised, the credentials are exposed and can potentially be used to compromise other systems and services. In addition, such credentials make it really difficult to periodically change the password. Even in AWS we sometimes see developers hard-code API Key IDs and Keys into apps in order to get access to some AWS service. This is a security accident waiting to happen, and can be avoided through the use of IAM Roles.

With AWS, we can assign a single IAM Role to an EC2 instance. This assignment is usually made when the instance is launched, but can also be done at runtime if needed. Applications running on the server retrieve the Role’s security credentials by pulling them out of the instance metadata through a simple web command. These credentials have an additional advantage over potentially long-lived, hard-coded credentials, in that they are changed or rotated frequently, so even if somehow compromised, they can only be used for a brief period.
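As a minimal sketch of that “simple web command,” here is how an application could read the temporary credentials for its attached role from the instance metadata endpoint. The AWS SDKs do this for you automatically; the fields shown in the returned document are standard, and error handling is omitted:

```python
# Sketch: fetch temporary credentials for the instance's IAM Role from
# the EC2 instance metadata service (only reachable from inside an instance).
import json
import urllib.request

METADATA = "http://169.254.169.254/latest/meta-data/iam/security-credentials/"

# First request returns the name of the attached role.
role_name = urllib.request.urlopen(METADATA).read().decode().strip()

# Second request returns the temporary credential document for that role.
creds = json.loads(urllib.request.urlopen(METADATA + role_name).read().decode())

# The document includes AccessKeyId, SecretAccessKey, Token, and Expiration;
# AWS rotates these automatically, so nothing long-lived sits on disk.
print(creds["AccessKeyId"], creds["Expiration"])
```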

Another key security advantage of Roles is that they can be limited to just the access/rights privileges needed to get a specific job done. Amazon’s documentation for roles gives the example of an application that only needs to be able to read files out of S3. In this case, one can assign a Role that contains read-only permissions for a specific S3 bucket, and the Role’s configuration can say that the role can only be used by EC2 instances. This is an example of the security principle of “least privilege,” where the minimum privileges necessary are assigned, limiting the risk of damage if the credential is compromised. In the same sense that you would not give all of your users “Administrator” privileges, you should not create a single “Allow Everything” Role that you assign everywhere. Instead, create a different Role specific to the needs of each system or group of systems.
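As a sketch of what such a least-privilege policy might look like, here is a read-only policy for a single S3 bucket created via boto3. The bucket and policy names are illustrative, and in practice the role’s trust policy would also restrict it to the EC2 service:

```python
# Sketch: create a least-privilege, read-only policy for one S3 bucket.
# Bucket and policy names are illustrative placeholders.
import json
import boto3

iam = boto3.client("iam")

read_only_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-bucket",      # the bucket itself (for ListBucket)
            "arn:aws:s3:::example-bucket/*",    # objects in the bucket (for GetObject)
        ],
    }],
}

iam.create_policy(
    PolicyName="S3ReadOnlyExampleBucket",
    PolicyDocument=json.dumps(read_only_policy),
)
```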

Delegation

Sometimes one company needs to give access to its resources to another company. Before IAM Roles (and before AWS), the common ways to do that were to share account logins (with the same issues identified earlier with hard-coded credentials) or to use complicated PKI/certificate-based systems. If both companies are using AWS, sharing access is much easier with Role-based Delegation. There are several ways to configure IAM Roles for delegation, but for now we will just focus on delegation between accounts from two different organizations.

At ParkMyCloud, our customers use Delegation to let us read the state of their EC2, RDS, and scaling group instances, and then start and stop them per the schedules they configure in our management console.

To configure Role Delegation, a customer first creates an account with the service provider, and is given the provider’s AWS Account ID and an External ID. The External ID is a unique number for each customer generated by the service provider.

The administrator of the customer environment creates an IAM Policy with a constrained set of access (principle of “least privilege” again), and then assigns that policy to a new Role (like “ParkMyCloudAccess”), specifically assigned to the provider’s Account ID and External ID.  When done, the resulting IAM Role is given a specific Amazon Resource Name (ARN), which is a unique string that identifies the role.  The customer then enters that role in the service provider’s management console, which is then able to assume the role.  Like the EC2 example, when the ParkMyCloud service needs to start a customer EC2 instance, it calls the AssumeRole API, which verifies our service is properly authenticated, and returns temporary security credentials needed to manage the customer environment.
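Here is a rough sketch of that last step from the provider’s side using boto3. The account ID, role name, and External ID are placeholders, not real customer values:

```python
# Sketch: a service provider assumes a customer's delegated role using the
# role ARN and External ID. All identifiers below are placeholders.
import boto3

sts = boto3.client("sts")

resp = sts.assume_role(
    RoleArn="arn:aws:iam::111122223333:role/ParkMyCloudAccess",  # customer's role ARN
    RoleSessionName="parkmycloud-session",
    ExternalId="example-external-id",  # must match the ID in the role's trust policy
)

# Temporary credentials scoped to the customer's role permissions.
creds = resp["Credentials"]
ec2 = boto3.client(
    "ec2",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
# The provider can now start/stop instances in the customer account.
```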

Conclusions

AWS IAM Roles make some tasks a lot simpler by flexibly assigning roles to instances and other accounts. IAM Roles can help make your environment more secure by:

  • Using the principle of least privilege in IAM policies to limit access to only the systems and services needed to do a specific job.
  • Preventing hard-coding of credentials in code or files, minimizing danger from exposure and removing the risk of long-unchanged passwords.
  • Minimizing shared accounts and passwords by allowing controlled cross-account access.

Read more ›

3 Things Companies Using Cloud Computing Should Make Sure Their Employees Do

These days, there’s a huge range of companies using cloud computing, especially public cloud. While your infrastructure size and range of services used may vary, there are a few things every organization should keep in mind. Here are the top 3 we recommend for anyone in your organization who touches your cloud infrastructure.

Keep it Secure

OK, so this one is obvious, but it bears repeating every time. Keep your cloud access secure.

For one, make sure your cloud provider keys don’t end up on GitHub… it’s happened too many times.

(There are a few open source tools out there that can help scan your GitHub repositories for this very problem – check out AWSLabs’ git-secrets.)

Organizations should also enforce user governance and use Role-Based Access Control (RBAC) to ensure that only the people who need access to specific resources can access them.

Keep Costs in Check

There’s an inherent problem created when you make computing a pay-as-you-go utility, as public cloud has done: it’s easy to waste money.

First of all, the default for computing resources is that they’re “always on” unless you specifically turn them off. That means you’re always paying for them.

Additionally, over-provisioning is prevalent – 55% of all public cloud resources are not correctly sized for their workloads. The last problem is perhaps the most brutal: 15% of spend goes to resources that are no longer used at all. It’s like discovering that you’re still paying for that gym membership you signed up for last year, despite the fact that you haven’t set foot inside. Completely wasted money.

In order to keep costs in check, companies using cloud computing need to ensure they have cost controls in place to eliminate and prevent cloud waste – which, by the way, is the problem we set out to solve when we created ParkMyCloud.

Keep Learning

Third, companies should ensure that their IT and development teams continue their professional development on cloud computing topics, whether by taking training courses or attending local Meetup groups to network with and learn from peers. We have a soft spot in our hearts for our local AWS DC Meetup, which we help organize, but there are great meetups in cities across the world on AWS, Azure, Google Cloud, and more.

Better yet, go to the source itself. Microsoft Azure has a huge events calendar, though AWS re:Invent is probably the biggest. It’s an enormous gathering for learning, training, and announcements of new products and services (and it’s pretty fun, too).

We’re a sponsor of AWS re:Invent 2017 – let us know if you’re going and would like to book time for a conversation or demo of ParkMyCloud while you’re there, or just stop by booth #1402!

Read more ›

AWS Lambda + ParkMyCloud = Supercharged Automation

Among the variety of AWS services and functionality, AWS Lambda seems to be taking off with hackers and tinkerers. The idea of “serverless” architecture is quite a shift in the way we think about applications, tools, and services, but it’s a shift that is opening up some new ideas and approaches to problem solving.  

If you haven’t had a chance to check out Lambda, it’s a “function-as-a-service” platform that allows you to run scripts or code on demand, without having to set up servers with the proper packages and environments installed. Your Lambda function can be triggered by a variety of sources and events, such as HTTP requests, API calls, S3 bucket changes, and more. The function can scale up automatically, so more compute resources will be used if necessary without any human intervention. The code can be written in Node.js, Python, Java, and C#.

Some pretty cool ideas already exist for Lambda functions to automate processes. One example from AWS is to respond to a GitHub event to trigger an action, such as the next step in a build process. There’s also a guide on how to use React and Lambda to make an interactive website that has no server.

For those of you who are already using ParkMyCloud to schedule resources, you may be looking to plug in to your CI/CD pipeline to achieve Continuous Cost Control.  I’ve come up with a few ideas of how to use Lambda along with ParkMyCloud to supercharge your AWS cloud savings.  Let’s take a look at a few options:

Make ParkMyCloud API calls from Lambda

With ParkMyCloud’s API available to control your schedules programmatically, you could make calls to ParkMyCloud from Lambda based on events that occur.  The API allows you to do things like list resources and schedules, assign schedules to resources, snooze schedules to temporarily override them, or cancel a snooze or schedule.

For instance, if a user logs in remotely to the VPN, it could trigger a Lambda call to snooze the schedules for that user’s instances.  Alternatively, a Lambda function could change the schedules of your Auto Scaling Group based on average requests to your website.  If you store data in S3 for batch processing, a trigger from an S3 bucket can tell Lambda to notify ParkMyCloud that the batch is ready and the processing servers need to come online.
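As one illustrative sketch of the first idea, a Lambda handler triggered by a VPN-login event could snooze the schedule on that user’s instance. The endpoint path, payload, and event fields below are hypothetical placeholders – check the ParkMyCloud API documentation for the real interface:

```python
# Illustrative Lambda handler; the ParkMyCloud endpoint, payload, and the
# "resource_id" event field are hypothetical placeholders.
import os
import json
import urllib.request

API_BASE = "https://console.parkmycloud.com/api"   # hypothetical base URL

def lambda_handler(event, context):
    resource_id = event["resource_id"]              # assumed event field
    req = urllib.request.Request(
        f"{API_BASE}/resources/{resource_id}/snooze",
        data=json.dumps({"hours": 2}).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['PMC_API_TOKEN']}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    urllib.request.urlopen(req)
    return {"snoozed": resource_id}
```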

Send notifications from ParkMyCloud to Lambda

With ParkMyCloud’s notification system, you can send events that occur in the ParkMyCloud system to a webhook or email.  The events can be actions taken by schedules that are applied to resources, user actions that are done in the UI, team and schedule assignments from policies, or errors that occur during parking.

By sending schedule events, you could use a Lambda function to tell your monitoring tool when servers are being shut down from schedules.  This could also be a method for letting your build server know that the build environment has fully started before the rest of your CI/CD tools take over.  You could also send user events to Lambda to feed into a log tool like Splunk or Logstash.  Policy events can be sent to Lambda to trigger an update to your CMDB with information on the team and schedule that’s applied to a new server.
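A receiving function might look something like the sketch below: a Lambda behind API Gateway that accepts webhook notifications and forwards parking events to a logging pipeline. The payload fields shown are assumptions for illustration, not the documented webhook format:

```python
# Illustrative webhook receiver; the notification fields ("type", "message")
# are assumptions, not the documented ParkMyCloud webhook format.
import json

def lambda_handler(event, context):
    notification = json.loads(event["body"])        # API Gateway proxy payload
    if notification.get("type") == "schedule":      # assumed field name
        # Forward to CloudWatch Logs, Splunk, Logstash, etc. here.
        print("Parking action:", notification.get("message"))
    return {"statusCode": 200, "body": "ok"}
```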

Think outside the box!

Are you already using AWS Lambda to kick off functions and run scripts in your environment?  Try combining Lambda with ParkMyCloud and let us know what cool tricks you come up with for supercharging your automation and saving on your cloud bill! Stop by Booth 1402 at AWS re:Invent this year and tell us.

Read more ›

5 Favorite AWS Training Resources

When it comes to AWS training resources, there’s no shortage of information out there. With the wide range of videos, tutorials, blogs, and more, it can be hard to know where to look or how to begin. Finding the best resource depends on your learning style, your needs for AWS, and getting the most up-to-date information available. With this in mind, we came up with our 5 favorite AWS training resources, sure to give you the tools you need to learn AWS:

1. AWS Self-Paced Labs

What better way to learn than at your own pace? AWS self-paced labs give you hands-on learning in a live AWS environment, with AWS services and actual scenarios you would encounter in the cloud. Among the recommended labs you’ll find an Introduction to Amazon Elastic Compute Cloud (EC2), and for more advanced users, a lab on Creating Amazon EC2 Instances with Microsoft Windows. If you’re up for an adventure, enroll in a learning quest and immerse yourself in a collection of labs that will help you master any AWS scenario at your own pace. Once completed, you will earn a badge that you can show off on your resume, LinkedIn, website, etc.

2. AWS Free Tier

Sometimes the best way to learn something is by jumping right in. With the AWS Free Tier, you can try AWS services for free. This is a great way to test out AWS for your business or, for the developers out there, to try services like AWS CodePipeline, AWS Data Pipeline, and more. You still get a hands-on opportunity to learn a number of AWS services; the only downside is that there are certain usage limits. You can track your usage with a billing alarm to avoid unwanted charges, or you can try ParkMyCloud and park your instances when they’re not in use to get the most out of your free tier experience. In fact, ParkMyCloud started its journey on the AWS free tier – we eat our own dog food!
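
If you want to set up that billing alarm programmatically, here is a minimal sketch using boto3. It assumes you have enabled billing metric data in your account and already have an SNS topic to notify; the threshold and topic ARN below are placeholders.

import boto3

# Billing metrics are only published in us-east-1.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="free-tier-spend-alert",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,              # evaluate the estimate every 6 hours
    EvaluationPeriods=1,
    Threshold=5.0,             # alert once estimated charges exceed $5
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # your SNS topic ARN
)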

3. AWS Documentation

AWS Documentation is like a virtual encyclopedia of tools, terms, training, and everything AWS. You’ll find white papers, case studies, tutorials, cloud computing basics, and so much more. This resource is a one-stop-shop for all of your AWS documentation needs, whether you’re a beginner or advanced user. No matter where you are in your AWS training journey, AWS documentation is always a useful reference and certainly deserves a spot in your bookmarks.

4. YouTube

So far, we’ve gone straight to the source for 3 out of 5 of our favorite AWS training resources. Amazon really does a great job of providing hands-on training, tutorials, and documentation for users with a range of experience. However, YouTube opens up a whole new world of video training that includes contributions from not only Amazon, but other great resources as well. Besides the obvious Amazon Web Services channel, there are also popular and highly rated videos by Edureka, Simplilearn, Eli the Computer Guy, and more.

5. Bloggers

As cloud technology usage continues to expand and evolve, blogs are a great way to stay up to speed with AWS and the world of cloud computing. Of course, in addition to labs, a free tier, extensive documentation, and their own YouTube channel, AWS also has their own blog. Since AWS actually has a number of blogs that vary by region and technology, we recommend that you start by following Jeff Barr – Chief Evangelist at Amazon Web Services and primary contributor. Edureka, mentioned above among our recommended YouTube channels, also has a blog that covers plenty of AWS topics. In addition, the blog from CloudThat – co-founded by Bhavesh Goswami, a former member of the AWS product development team – is an excellent resource for AWS and all things cloud.

There’s plenty of information out there when it comes to AWS training resources. We picked our 5 favorite resources for their reliability, quality, and range of information. Whether you’re new to AWS or consider yourself an expert, these resources are sure to help you find what you’re looking for.

Read more ›

3 Enterprise Cloud Management Challenges You Should Be Thinking About

Enterprise cloud management is a top priority. As the shift towards multi-cloud environments continues, so does the need to consider the potential challenges. Whether you already use the public cloud or are considering making the switch, you probably want to know what the risks are. Here are three you should be thinking about.

1. Multi-Cloud Environments

As the ParkMyCloud platform supports AWS, Azure, and Google, we’ve noticed that multi-cloud strategies are becoming increasingly common among enterprises. There are a number of reasons why it would be beneficial to utilize more than one cloud provider. We have discussed risk mitigation as a common reason, along with price protection and workload optimization. As multi-cloud strategies become more popular, the advantages are clear. However, every strategy comes with its challenges, and it’s important for CIOs to be aware of the associated risks.

Without the use of cloud management tools, multi-cloud management is complex and sometimes difficult to navigate. Different cloud providers have different price models, product features, APIs, and terminology. Compliance requirements are also a factor that must be considered when dealing with multiple providers. Meeting and maintaining requirements for one cloud provider is complicated enough, let alone multiple. And don’t forget you need a single pane to view your multi-cloud infrastructure.

2. Cost Control

Cost control is a top priority among cloud computing trends. Enterprise Management Associates (EMA) conducted a research study and identified key reasons why there is a need for cloud cost control; among them were inefficient use of cloud resources, unpredictable billing, and contractual obligations or technological dependencies.

Managing your cloud environment and controlling costs requires a great deal of time and strategy, taking away from the initiatives your enterprise really needs to focus on. The good news is that we offer a solution to cost control that will save 65% or more on your monthly cloud bills – simply by parking your idle cloud resources. ParkMyCloud was one of the top three vendors recommended by EMA as a Rapid ROI Utility. If you’re interested in seeing why, we offer a 14-day free trial.

3. Security & Governance

In discussing a multi-cloud strategy and its challenges, the bigger picture also includes security and governance. As we have mentioned, a multi-cloud environment is complex and requires native or 3rd-party tools to maintain vigilance. Aside from legal compliance based on your company’s industry, the cloud also comes with standard security issues and, of course, the possibility of cloud breaches. In this vein, customers we talk to often worry about too many users being granted console access to create and terminate cloud resources, which can lead to waste. A key here is limiting user access based on roles, also known as Role-Based Access Control (RBAC). At ParkMyCloud we recognize that visibility and control are important in today’s complex cloud world. That’s why, in designing our platform, we give the sysadmin the ability to delegate access based on a user’s role and the ability to authenticate via single sign-on (SSO) using SAML integration. This approach brings security benefits without losing the appeal of a multi-cloud strategy.

Our Solution

Enterprise cloud management is an inevitable priority as the shift towards multi-cloud environments continues. Multiple cloud services add complexity to the challenges of IT and cloud management. Cost control is time consuming and needs to be automated and monitored constantly. Security and governance are a must, and it’s necessary to ensure that users and resources are properly governed. As the need for cloud management continues to grow, cloud automation tools like ParkMyCloud provide a means to effectively manage cloud resources, minimize challenges, and save you money.

Read more ›

How to Get the Cheapest Cloud Computing

Are you looking for the cheapest cloud computing available? Depending on your current situation, there are a few ways you might find the least expensive cloud offering that fits your needs.

If you don’t currently use the public cloud, or if you’re willing to have infrastructure in multiple clouds, you’re probably looking for the cheapest cloud provider. If you have existing infrastructure, there are a few approaches you can take to minimize costs and ensure they don’t spiral out of control.

Find the Cloud Provider that Offers the Cheapest Cloud Computing

There are a variety of small cloud providers that attempt to compete by dropping their prices. If you work for a small business and prefer a no-frills experience, perhaps one of these is right for you.

However, there’s a reason that the “big three” cloud providers – Amazon Web Services (AWS), Microsoft Azure, and Google Cloud – dominate the market. They offer a wide range of product lines and are continually innovating. They have a low frequency of outages, and their scale means straightforward onboarding and plenty of documentation.

Whatever provider you decide on, ensure that you’ll have access to all the services you need – does it offer the compute, storage, and database products you require? How good is the customer support?

For more information about the three major providers’ pricing, please see this whitepaper on AWS vs. Google Cloud Pricing and this article comparing AWS vs. Azure pricing.

Locked In? How to Get the Cheapest Cloud Computing from Your Current Provider

Of course, if your organization is already locked into a cloud computing provider, comparing providers won’t do you much good. Here’s a short checklist of things you should do to ensure you’re getting the cheapest cloud computing possible from your current provider:

  • Use Reserved Instances for production – Reserved instances can save money – as long as you use them the right way. More here. (This article is about AWS RIs, but similar principles apply to Azure’s RIs and Google’s Committed Use discounts.)
  • Only pay for what you actually need – there are a few common ways that users inadvertently waste money, such as using larger instances than they need, and running development/testing instances 24/7 rather than only when they’re needed. (Here at ParkMyCloud, we’re all about reducing this waste – try it out.)
  • Ask – it never hurts to contact your provider and ask if there’s anything you could be doing to get a cheaper price. If you use Microsoft Azure, you may want to sign up for an Enterprise Agreement. Or maybe you qualify for AWS startup credits.

Get Credit for Your Efforts

While finding the cheapest cloud computing is, of course, beneficial to your organization, there’s no need to let your work on spending reduction go unnoticed. Make sure that you track your organization’s spending and show your team where you are reducing spend.

We’ve recently made this task easier than ever for ParkMyCloud users. Now, you can not only create and customize reports of your cloud spending and savings, but you can also schedule these reports to be emailed out. Users are already putting this to work by having savings reports automatically emailed to their bosses and department heads, to ensure that leadership is aware of the cost savings gained… and so users can get credit for their efforts.


Read more ›

Managing Microsoft Azure VMs with ParkMyCloud

Microsoft has made it easy for companies to get started using Microsoft Azure VMs for development and beyond. However, as an organization’s usage grows past a few servers, it becomes necessary to manage both costs and users and can become complex quickly. ParkMyCloud simplifies cloud management of Microsoft Azure VMs by giving you options to create teams of users, groups of instances, and schedule resources easily.

Consider the case of a large Australian financial institution that uses Microsoft Azure as its sole cloud provider. They currently have 125 VMs, costing them over $100k on their monthly cloud bill with Microsoft. Their compute spend is about 95% of their total Azure bill.

Using one Azure account for the entire organization, they chose to split it into multiple divisions, such as DEV, UAT, Prod, and DR. These divisions are then split further into multiple applications that run within each division. In order for them to use ParkMyCloud to best optimize their cloud costs, they created teams of users (one per division). They gave each team permissions in order to allow shutdown and startup of individual applications/VMs. A few select admin users have the ability to control all VMs, regardless of where the applications are placed.

The organization also required specific startup/shutdown ordering for their servers. How would ParkMyCloud handle this need? This looks like a perfect use case for logical groups in ParkMyCloud.

For detailed instructions on how to manage logical groups with ParkMyCloud, see our user guide.

Putting this into context, let’s say that you have a DB and a web server grouped together. You want the DB to start first and stop last, so you would set the DB to have a start delay of 0 and a stop delay of 5. For the web server, you would set a start delay of 5 and a stop delay of 0.
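
To illustrate the ordering logic (this is not ParkMyCloud’s actual configuration format, just a sketch, with delay units assumed to be minutes):

# Illustrative only -- not ParkMyCloud's configuration format.
logical_group = [
    {"name": "db-server",  "start_delay": 0, "stop_delay": 5},
    {"name": "web-server", "start_delay": 5, "stop_delay": 0},
]

# On group start, members come up in order of start_delay (DB first);
# on group stop, they go down in order of stop_delay (web server first).
start_order = [m["name"] for m in sorted(logical_group, key=lambda m: m["start_delay"])]
stop_order = [m["name"] for m in sorted(logical_group, key=lambda m: m["stop_delay"])]
print(start_order)  # ['db-server', 'web-server']
print(stop_order)   # ['web-server', 'db-server']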

Of course, you could also manage logical groups of Microsoft Azure VMs with tags, scripts, and Azure automation. However, we know firsthand that the alternative solution involves complexities and requires constant upkeep – and who wants that?

ParkMyCloud offers the advantage of not only cutting your cloud costs, but also making cloud management simpler, easier, and more effective. To experience all the great benefits of our platform, start a free trial today!

Read more ›

7 AWS Security Best Practices with ParkMyCloud

Besides cost control, one of the biggest concerns for IT administrators is following AWS security best practices to keep their infrastructure safe. While there are some great tools that specialize in cloud and information security, ParkMyCloud offers some security benefits that are not often considered when hardening a cloud infrastructure.

1. Keep Instances Off When Not In Use

Scheduling your instances to be turned off on nights and weekends when you aren’t using them saves you a ton of money on your cloud bill, but  also provides security and protection.  Leaving servers and databases on 24/7 is just asking for someone to try to break in and connect to servers within your infrastructure, especially during off-hours when you don’t have as many IT staff keeping an eye on things.  By aggressively scheduling your resources to be off as much as possible, you minimize the opportunity for outside attacks on those servers.

2. User Governance

Your users are trustworthy and need to access lots of servers to do their jobs, but why give them more access than necessary? Limiting what servers, databases, and auto scaling groups each person can see to only what they need prevents accidents and limits mistakes. ParkMyCloud lets you separate users into teams, with designated Team Leads to manage the individual Team Members, and limits their control to just start/stop.

3. Single Sign On

In addition to governing user access to resources, ParkMyCloud integrates with all major SSO providers for SAML authentication for your users.  This includes Okta, Ping Identity, OneLogin, Centrify, Azure AD, ADFS, and Google Apps.  By using one of these providers, you can keep identity management centralized and offer multi-factor authentication through those SAML connections.

4. Audit Logs and Notifications

Every user action in ParkMyCloud is tracked in an Audit Log that is available to super admins.  These audit logs can also be downloaded as a CSV if you want to import them into something like Splunk or Logstash for log management.  Audit logs can help you see when schedules are snoozed or changed, policies are updated, or teams are created or changed.

In addition, those audit log entries can be sent as notifications to Slack channels, email addresses, or through webhooks to other tools.  This lets you keep an eye on either specific teams or the entire organization within ParkMyCloud.

5. Minimal Connection Permissions

ParkMyCloud connects to AWS through an IAM Role (preferred) or an IAM User. The AWS policy that is required uses the bare minimum of necessary actions, which boils down to Describe, Start, and Stop for each resource type (EC2, ASG, and RDS). This means you don’t have to worry about ParkMyCloud doing something to your AWS account that you don’t intend. For Azure connections, ParkMyCloud requires a similarly limited access role, and the connection to Google Cloud requires a limited service account.
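
For reference, the kind of least-privilege policy described looks roughly like the sketch below. The exact actions in ParkMyCloud’s published policy may differ (for example, in how Auto Scaling Groups are suspended and resumed), so treat this as an approximation and use the policy from their documentation when you set up the IAM role.

import json
import boto3

# An approximation of a Describe/Start/Stop-only policy -- use the policy
# published in the ParkMyCloud documentation for the real thing.
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "ec2:DescribeInstances",
            "ec2:StartInstances",
            "ec2:StopInstances",
            "autoscaling:DescribeAutoScalingGroups",
            "autoscaling:SuspendProcesses",
            "autoscaling:ResumeProcesses",
            "rds:DescribeDBInstances",
            "rds:StartDBInstance",
            "rds:StopDBInstance",
        ],
        "Resource": "*",
    }],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="parking-least-privilege",  # example name
    PolicyDocument=json.dumps(policy_document),
)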

6. Restrict Scheduling Based on Names or Tags

The ParkMyCloud policy engine is a powerful way to automate your resource scheduling and team management, but it can also be used to prevent schedules from being applied to certain systems. For instance, if you have a prod database that you want to keep up 24/7, you can use a policy to never let any user apply a schedule (even if they wanted to).  These policies can be applied based on tags, naming conventions, AWS regions, or account names.

7. Full Cloud Visibility

One great benefit of ParkMyCloud is the ability to see across all of your cloud providers (AWS, Microsoft Azure, and Google Cloud), cloud accounts, and regions within a cloud. This visibility not only provides management benefits, but also helps with security by keeping all resources in one list. That prevents rogue instances from running unnoticed in regions you don’t normally look at, and can help you identify resources that don’t need to be running – or that are stopped and no longer needed at all.

Conclusion

As you continue to strive to follow AWS security best practices, consider adding ParkMyCloud to your security toolkit.  While you’re saving money for your team, you can also get these 7 benefits to help secure your infrastructure and sleep better at night.  Start a free trial of ParkMyCloud today to start reaping the benefits!

Read more ›

Reduce RDS Costs with ParkMyCloud

Thanks to the ability to stop and start instances on demand, users of Amazon’s database service can finally save time and reduce RDS costs. Until June 2017, the only way to accomplish this feat was by copying and deleting instances, running the risk of losing transaction logs and automatic backups. While Amazon’s stop/start capability is useful and provides a level of cost savings, it also comes with issues of its own.

For one, the stop/start capability is not foolproof. The process of stopping and starting non-production RDS instances is manual, relying on the user to create and consistently manage a schedule. Having to manually switch instances off when they are not in use, and then restart them when access is needed again, is a helpful option but also leaves room for human error. Complicating things further, RDS instances that have been shut down are automatically restarted after seven days, again relying on the user to switch those instances back off if they’re not needed at the time.

Why Scripting is not the Best Answer

One way of minimizing the potential for error is to automate the stop/start schedule yourself by writing your own scripts. While that could work, you would need to consider the number of non-production instances deployed on AWS RDS and plan for a schedule that allows developers to have access when needed, which could very well be at varying times throughout the day. All factors considered, writing and maintaining scheduling scripts takes extra time and costs money as well. Ultimately, setting up and maintaining your own schedule could increase your cloud spend more than it reduces RDS costs.
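
For a sense of what the DIY route involves, here is a minimal sketch: a script, run from cron or a scheduled Lambda, that stops every RDS instance carrying a hypothetical “parking: enabled” tag. Even this simple version leaves you owning the matching start script, the tag conventions, per-team schedule variations, and all future maintenance.

import boto3

rds = boto3.client("rds")

def parked_instances():
    """Yield RDS instances tagged for parking (hypothetical tag convention)."""
    for db in rds.describe_db_instances()["DBInstances"]:
        tags = rds.list_tags_for_resource(ResourceName=db["DBInstanceArn"])["TagList"]
        if any(t["Key"] == "parking" and t["Value"] == "enabled" for t in tags):
            yield db

def stop_parked():
    for db in parked_instances():
        if db["DBInstanceStatus"] == "available":
            rds.stop_db_instance(DBInstanceIdentifier=db["DBInstanceIdentifier"])

if __name__ == "__main__":
    stop_parked()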

When you start thinking about the cost of paying developers, the amount of scripts that would have to be written, and the ongoing maintenance required, buying into an automated scheduling process is a no-brainer.

How ParkMyCloud Reduces RDS Costs

Automated Scheduling

ParkMyCloud saves you time and money by automating the scheduling process of stopping and starting AWS RDS instances (in addition to Microsoft Azure VMs and Google Cloud Compute instances, but that’s another post). At the same time, you get total visibility and full autonomy over your account.

The process is simple. With you as the account manager, ParkMyCloud conducts a discovery of all the company accounts, and determines which instances are most suitable for parking. From there, you have the option of implementing company-wide schedules for non-production instances, or giving each development team the ability to create schedules of their own.

Flexible Parking

ParkMyCloud takes saving on RDS costs to a whole new level with parking schedules. Different schedules can be applied to different instances, or they can be parked permanently and put on “snooze” when access is needed. Amazon’s seven-day automatic restart of switched off instances is a non-issue with our platform, and snoozed instances can be re-parked when access is no longer needed, so there’s no more relying on the user to do it manually.

For the most part, we find that companies want to park their non-production instances outside normal working hours – let’s say Monday to Friday, 8:00am to 8:00pm. By parking your instances outside of those days and hours, ParkMyCloud can reduce your cloud spend by 65% – even more if you park instances permanently and just snooze them when access is needed.
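
The math behind that figure is straightforward: a Monday-to-Friday, 8:00am-8:00pm schedule keeps an instance running 12 hours × 5 days = 60 of the 168 hours in a week – roughly 36% of the time – so you stop paying for the other ~64% of compute hours.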

Valuable Insight

Because you have total visibility over the account, you can reduce RDS costs even further by having a bird’s-eye view of your company’s cloud use. You’ll be able to tell which of your instances are underused, terminate them, and (really soon) benefit further from selecting a cheaper plan. You’ll be able to see all RDS instances across all regions and AWS accounts in one simple view. You can also view the parking schedules for each instance and see how much each schedule is saving, potentially reducing costs even further. This visibility into your account and access to information provides a great resource for budgeting and planning.

Conclusion

AWS’s built-in stop/start capability for RDS is useful, but it has to be managed manually. Writing your own scripts sounds helpful, but it’s actually time consuming and not fully cost-effective. ParkMyCloud automates the process while still putting you in control, reducing RDS costs and saving you time and money.

See the benefits of ParkMyCloud for yourself by taking advantage of our two-week free trial. Test our cloud cost control platform in your own environment, without any need for a credit card or signed contract, and see why our simple, cost-effective tool is the key to reducing RDS costs. We offer a variety of competitive pricing plans to choose from, or a limited-function version that you can continue to use for free after the trial ends.

To start your free trial today, sign up here.

Read more ›