New Microsoft Teams Bot to Control Cloud Costs

Today we’d like to announce a new Microsoft Teams bot that allows you to fully interact with ParkMyCloud directly through your chat window, without having to access the web GUI. By combining this chatbot with a direct notifications feed of any ParkMyCloud activities through our webhook integration, you can manage your continuous cost control from the Microsoft Teams channels you live in every day — making it easy to save 65% or more on your instance costs.

Organizations that practice DevOps are increasingly turning to ChatOps to manipulate their environments and provide a self-service platform for accessing the servers and databases they need for their work. There are several chat systems and bot platforms available – we also have a chatbot for Slack – but one that is growing rapidly in popularity is Microsoft Teams.

By setting up the Microsoft Teams bot to interact with your ParkMyCloud account, you can allow users to:

  • Assign schedules
  • Temporarily override schedules on parked instances
  • Toggle instances to turn off or on as needed

Combine this with notifications from ParkMyCloud, and you can have full visibility into your cost control initiatives right from your standard Microsoft Teams chat channels. Notifications allow you to have ParkMyCloud post messages for things like schedule changes or instances that are being turned off automatically.
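To make the notifications side concrete, here is a minimal sketch of a webhook receiver in Python. The port, the payload fields (`team`, `resource`, `action`), and the message format are all illustrative assumptions, not ParkMyCloud’s documented webhook payload – adapt them to the actual payload you receive:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def format_notification(payload):
    """Turn a (hypothetical) webhook payload into a one-line chat message."""
    return "[{}] {} -> {}".format(
        payload.get("team", "unknown-team"),
        payload.get("resource", "unknown-resource"),
        payload.get("action", "no-action"),
    )

class WebhookHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body posted by the webhook integration
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        print(format_notification(payload))  # forward to your chat channel here
        self.send_response(200)
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), WebhookHandler).serve_forever()
```

In practice you would replace the `print` call with a post to your Teams channel’s incoming-webhook URL.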

Now, with the new ParkMyCloud Teams bot, you can reply back to those notifications to:

  • Snooze the schedule
  • Turn a system back on temporarily
  • Assign a new schedule

The chatbot is open-source, so feel free to modify it as necessary to fit your environment or use cases. It’s written in Node.js using Microsoft’s botbuilder library, but even if you’re not a Node.js expert, we’ve tried to make it easy to edit the commands and responses. We’d love to have you send your ideas and modifications back to us for rapid improvement.

If you haven’t already signed up for ParkMyCloud to help save you 65% on your cloud bills, then start a free trial and get the Microsoft Teams bot hooked up for easy ChatOps control. You’ll find that ParkMyCloud can make continuous cost control easy and help reduce your cloud spend, all while integrating with your favorite DevOps tools.



Why the Advantages of Multi-Cloud May Not Outweigh the Challenges

The time is ripe to take a fresh look at the advantages of multi-cloud. In the past 12 months, we’ve seen a huge increase in the number of customers using multiple public clouds – more than 20% of our customers now do. With this trend in mind, we wanted to look at the positives of a multi-cloud strategy as well as the risks – because of course there’s no “easy button.”

What is Multi-Cloud?

First off, let’s define multi-cloud. Clearly, we’re talking about using more than one cloud, but clouds come in different flavors. For example, multi-cloud incorporates the idea of hybrid cloud – a mix of public and private clouds. But multi-cloud can also mean two or more public clouds, or two or more private clouds.

According to the RightScale 2018 State of the Cloud Report, 81% of enterprises have a multi-cloud strategy.

What are the advantages of multi-cloud?

So why are businesses heading this direction with their infrastructure? Simple reasons include the following:

  • Risk Mitigation – create resilient architectures
  • Managing vendor lock-in – get price protection
  • Optimization – place your workloads to optimize for cost and performance
  • Cloud providers’ unique capabilities – take advantage of offerings in AI, IOT, Machine Learning, and more

When I asked our CTO what he sees as the advantages of a multi-cloud strategy, he highlighted risk management. ParkMyCloud’s own platform was born in the cloud: we run on AWS with a multi-region, redundant architecture (let’s call this multi-cloud ‘light’), and if we went multi-cloud, we would leverage another public cloud for risk mitigation.

Specifically, he means risk management from the perspective of one vendor having an infrastructure meltdown or attack. AWS had an issue about 15 months ago when S3 was offline in the US-East-1 region for 5+ hours, affecting many companies large and small – software from web apps to smartphone apps was affected (including ours). There have also been DDoS attacks on certain AWS regions that affected service availability.

Having a backup with another cloud service provider (CSP) or private cloud in these cases could have kept services available. Alibaba and other cloud vendors may have a much stronger presence in certain geographic regions thanks to a long-term presence there. When a vendor is just getting a toehold in a region, its environment may lack the redundancy and safeguards needed for high availability, so a more established provider in the same region may be safer from that availability perspective.

Do the advantages of multi-cloud outweigh the challenges?

Now let’s say you want to go multi-cloud – what does this mean for you? From our own experience integrating with AWS, Azure, and Google Cloud, we’ve seen that each cloud has its own set of interfaces and its own challenges. It is not a “write once, run everywhere” situation between the vendors, and any cloud or network management system needs to do the work to provide deep integration with each CSP.

Further, the nuances of configuring and managing each CSP require both broad and deep knowledge, and it is rare to find employees with the essential expertise for multiple clouds – so more staff is needed to manage multi-cloud with confidence that it is being done in a way that is both secure and highly available. With everyone trying to play catch-up with AWS, and with AWS itself evolving at a breakneck pace, it is very difficult for an individual or organization to best utilize one CSP, let alone multiple clouds.

Things like a common container environment can help mitigate these issues somewhat by isolating engineers from the nuances of virtual machine management, but the issues of network, infrastructure, cost optimization, security, and availability remain very CSP-specific.

On paper there are advantages of having a multi-cloud strategy. In practice, like many things, it ain’t easy.


Why Your Spring Cleaning Should Include Unused Cloud Resources

Given that spring is very much in the air – at least it is here in Northern Virginia – our attention has turned to tidying up the yard and getting things in good shape for summer. While things are not so seasonally-focused in the world of cloud, the metaphor of taking time out to clean things up applies to unused cloud resources as well. We have even seen some call this ‘cloud pruning’ (not to be confused with the Japanese gardening method).

Cloud pruning is important for improving both the cost and performance of your infrastructure. So what are some of the ways you can go about cleaning up, optimizing, and ensuring that your cloud environments are in great shape?

Delete Old Snapshots

Let’s start with items we no longer need. One of the most common types of unused cloud resources is old snapshots – point-in-time copies of your EBS volumes on AWS, your storage disks (blobs) on Azure, and your persistent disks on GCP. If you have any form of backup strategy, you’ll understand the need to manage the number of snapshots you keep for a particular volume and to delete older, unneeded ones. Cleaning these up immediately saves on your storage costs, and there are a number of best practices documenting how to streamline this process, as well as free and paid tools to support it.
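As a sketch of how this cleanup might be automated on AWS with boto3 (the 30-day retention window is an assumption – adjust it to match your backup policy, and note that `describe_snapshots` may need pagination for very large accounts):

```python
from datetime import datetime, timedelta, timezone

def is_stale(start_time, retention_days=30, now=None):
    """True if a snapshot's start time is older than the retention window."""
    now = now or datetime.now(timezone.utc)
    return start_time < now - timedelta(days=retention_days)

def delete_stale_snapshots(retention_days=30, dry_run=True):
    """List (and optionally delete) snapshots we own that are past retention."""
    import boto3  # requires AWS credentials to be configured
    ec2 = boto3.client("ec2")
    for snap in ec2.describe_snapshots(OwnerIds=["self"])["Snapshots"]:
        if is_stale(snap["StartTime"], retention_days):
            print("stale:", snap["SnapshotId"])
            if not dry_run:
                ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])
```

Running with `dry_run=True` first lets you review what would be removed before deleting anything.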

Delete Old Machine Images

A machine image provides the information required to launch an instance, which is a virtual server in the cloud. In AWS these are called AMIs, in Azure Managed Images, and in GCP Custom Images. When these images are no longer needed, you can deregister them. However, depending on your configuration, you are likely to continue to incur costs, because the snapshot created alongside the image will typically continue to incur storage charges. So when you are finished with an AMI, be sure to also delete its accompanying snapshot. Managing your old images does require work, but there are a number of methods to streamline the process, available both from the cloud providers and from third-party vendors.
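The AMI-plus-snapshot cleanup above can be sketched with boto3 – the helper below pulls the snapshot IDs out of an image’s block device mappings, then a `dry_run` guard controls whether anything is actually deregistered and deleted:

```python
def snapshot_ids_for_image(image):
    """Collect the EBS snapshot IDs referenced by an image's block device mappings."""
    return [
        bdm["Ebs"]["SnapshotId"]
        for bdm in image.get("BlockDeviceMappings", [])
        if "Ebs" in bdm and "SnapshotId" in bdm["Ebs"]
    ]

def retire_image(image_id, dry_run=True):
    """Deregister an AMI and delete its backing snapshots (sketch)."""
    import boto3  # requires AWS credentials to be configured
    ec2 = boto3.client("ec2")
    image = ec2.describe_images(ImageIds=[image_id])["Images"][0]
    snap_ids = snapshot_ids_for_image(image)
    if not dry_run:
        ec2.deregister_image(ImageId=image_id)  # remove the AMI itself...
        for snap_id in snap_ids:                # ...then its backing snapshots
            ec2.delete_snapshot(SnapshotId=snap_id)
    return snap_ids
```

Deregistering first, then deleting snapshots, avoids orphaning an AMI that still references a snapshot you just removed.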

Optimize Containers

With the widespread adoption of containers in the last few years and much of the focus on their specific benefits, few have paid attention to ensuring those containers are optimized for performance and cost. One of the most effective ways to maximize the benefits of containers is to host multiple containerized application workloads within a single larger instance (typically a large or x-large VM) rather than on a number of smaller, separate VMs. In particular, this is something you could utilize in your dev and test environments rather than in production, where you may have only one machine available to deploy to. As containerization continues to evolve, services such as AWS Fargate are enabling much more control over the resources required to run your containers than is available with traditional VMs – in particular, the ability to specify the exact CPU and memory your code requires (and thus the amount you pay), which scales exactly with how many containers you are running.

So alongside pruning your trees or sweeping your deck and taking care of your outside spaces this spring, remember to take a look around your cloud environment and look for opportunities to remove unused cloud resources to optimize not only for cost, but also performance.


How to Use Google Preemptible VMs to Get 80% Savings

Google Cloud has always had a knack for non-standard virtual machines, and their option of creating Google preemptible VMs is no different. Traditional virtual machines are long-running servers with standard operating systems that are only shut down when you say they can be shut down. On the other hand, preemptible VMs last no longer than 24 hours and can be stopped on a moment’s notice (and may not be available at all). So why use them?

Use Cases for Google Preemptible VMs

As with most trade-offs, the biggest reason is cost. Preemptible VMs can save you up to 80% compared to a normal on-demand virtual machine. (By the way – AWS users will want to use Spot Instances for the same reason, and Azure users can check out Low Priority VMs). This is a huge savings if the workload you’re trying to run consists of short-lived processes or things that are not urgent and can be done any time. This can include things like financial modeling, rendering and encoding, and even some parts of your CI/CD pipeline or code testing framework.

How to Create a Google Preemptible VM

To create a preemptible VM, you can use the Google Cloud Platform console, the ‘gcloud’ command line tool, or the Google Cloud API. The process is the same as creating a standard VM: you select your instance size, networking options, disk setup, and SSH keys, with the one minor change that you enable the ‘preemptible’ flag during setup. The other change you’ll want to make is to create a shutdown script to decide what happens to your processes and data if the instance is stopped without your knowledge. This script can even perform different actions if the instance was preempted as opposed to shut down from something you did.
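As a sketch of what that one flag looks like at the API level, here is a hypothetical helper that builds the request body for the Compute Engine `instances.insert` call. Disk and network settings are omitted for brevity, and while the field names follow the public Compute Engine API (preemptible instances also require `automaticRestart: false` and `onHostMaintenance: TERMINATE`), treat the exact shape as an assumption to verify against the current docs:

```python
def preemptible_instance_body(name, machine_type, zone, shutdown_script=""):
    """Build a (trimmed) Compute Engine instances.insert request body
    with the preemptible flag enabled."""
    return {
        "name": name,
        "machineType": "zones/{}/machineTypes/{}".format(zone, machine_type),
        "scheduling": {
            "preemptible": True,        # the one flag that differs from a standard VM
            "automaticRestart": False,  # required for preemptible instances
            "onHostMaintenance": "TERMINATE",
        },
        "metadata": {
            # Runs when the instance is preempted or stopped, to checkpoint work
            "items": [{"key": "shutdown-script", "value": shutdown_script}],
        },
    }
```

The equivalent with the command line is simply adding `--preemptible` to `gcloud compute instances create`.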

One nice benefit of Google preemptible VMs is the ability to attach local SSD drives and GPUs to the instances. This means you can get added extensibility and performance for the workload that you are running, while still saving money. You can also have preemptible instances in a managed instance group for high scalability when the instances are available. This can help you process more of your jobs at once when the preemptible virtual machines are able to run.

How to Use Google Preemptible VMs to Optimize Costs

Our customers who have the most cost-effective use of Google resources often mix Google preemptible VMs with other instance types based on the workloads. For instance, production systems that need to be up 24/7 can buy committed-use discounts for up to 57% savings on those servers. Non-production systems, like dev, test, QA, and staging, can use on-demand resources with schedules managed by ParkMyCloud to save 65%. Then, any batch workloads or non-urgent jobs can use Google preemptible VMs to run whenever available for up to 80% savings. Questions about optimizing cloud costs? We’re happy to help – email us or use the chat client on this page (staffed by real people, including me!).
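The blended math can be sketched in a few lines – the savings rates are the ones cited above, but the monthly dollar figures below are made up purely for illustration:

```python
def blended_cost(workloads):
    """workloads: {name: (monthly_list_cost, savings_rate)}.
    Returns (total_cost_after_discounts, total_savings)."""
    cost = sum(c * (1 - r) for c, r in workloads.values())
    savings = sum(c * r for c, r in workloads.values())
    return cost, savings

# Illustrative fleet: made-up monthly list costs, savings rates from the article
example = {
    "production (committed-use discounts)": (10_000, 0.57),
    "dev/test (parking schedules)": (6_000, 0.65),
    "batch (preemptible VMs)": (4_000, 0.80),
}
```

On this hypothetical $20,000/month fleet, the blend cuts the bill to roughly $7,200 – the point being that each workload type gets the discount mechanism suited to its usage pattern.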


New in ParkMyCloud: Park GCP Groups, Schedule Override, New Trial Experience, and More

Today, we share the latest update in ParkMyCloud, which highlights new types of GCP resources you can park, and updates for new and existing users alike.

Park GCP Groups

Now in ParkMyCloud, you can manage and optimize costs for your GCP Managed Instance Groups, both with and without Autoscaling. You can set parking schedules on these groups, but rather than simply turning them “on” and “off”, you can set “high” and “low” stages for your groups, for which you set a maximum and minimum number of resources, respectively. Some additional details:

  • GCP Managed Groups with Autoscaling can have a minimum size of 1 instance in the Low/Off state, and thus they can never be fully shut off to zero instances.
  • GCP Managed Groups without Autoscaling can have a minimum size of 0 instances in the Low/Off state, and can be fully shut off.
  • The Console will show the members of GCP Unmanaged Groups as “regular” resources, allowing them to be scheduled and controlled individually. You may wish to assign them to ParkMyCloud Logical Groups in order to start and stop them as a set.

In order to allow ParkMyCloud to support management of GCP Instance Groups, please update your ParkMyCloud Access Role to include the latest set of permissions defined here in the User Guide.

What about other cloud service providers? ParkMyCloud already supports parking for AWS Auto Scaling groups. Management of Azure’s equivalent, Azure scale sets, is coming later this month.

Schedule “Snooze” is now “Override”

ParkMyCloud has long allowed you to “snooze” parking schedules — as in, snooze the on/off actions of the schedule, not the resource. But it was confusing — when people heard “snooze”, they incorrectly assumed it meant, “put the resource to sleep”.

So we’ve renamed it “override”. When you override a schedule on a resource, you can set it to your preferred state of running or parked, either for a set duration (e.g., override the schedule for 3 hours) or until a set time (e.g., override the schedule until 8:00 a.m. on May 16). After that time, normal schedule actions will resume.

For Existing Users…

This release includes a number of other updates that will interest existing users of ParkMyCloud:

  • Recommendations Export: The recommendations screen can now be exported to CSV, via a new Export button, for easy sharing and analysis.
  • Online Help: Each page on the console now has a “?” link to context-sensitive help from the PMC User Guide.
  • Teams: Superadmins now appear as greyed-out users on all team lists, showing their visibility into all teams.
  • Notifications: User-level notifications are now more obvious with a link from the org/team-level notifications screen.
  • Resources Screen:
    • The Schedule/Start/Stop/Team/Group buttons are now always visible, and are enabled only when appropriate instances are checked, depending on the function of the button.
    • The resources screen is now more mobile-device friendly; a scrolling issue has been fixed.
    • Performance improvements for customers with large numbers of schedules and recommendations.

For New Users…

Don’t tell the existing users above, but we’ve improved the ParkMyCloud free trial for new users. When you start a 14-day free trial, you will now be given Enterprise tier access to the product – that means unlimited instances, teams, users, and cloud accounts across providers in your trial ParkMyCloud account, access to the user import/export feature, database parking, SmartParking, and more. Check it out with a free trial.


5 Ways to Get Discounts on Cloud Resources

Whether you’re just getting started on public cloud, or you’ve gotten a bill that blew your budget out of the water, it’s a good idea to research ways to get discounts on cloud resources. There’s no reason to pay list price when so many cost-saving measures are available (and your peers are probably taking advantage of them!). Here are our top five ways to get discounts on cloud.

1. Buy in Advance

By purchasing your compute power in advance, you can get a discounted rate — the notable examples being AWS Reserved Instances, Azure Reserved Instances, and Google Committed Use Discounts.

So will these save you money? It depends on several factors:

  • How much you pay upfront (for example, AWS offers all-upfront, partial-upfront, and no-upfront options)
  • Contract term: 1-year or 3-year term – the longer term will save more, but there’s risk involved in committing for that long
  • If the cloud provider cuts their prices during your contract term (and they probably will), you’ll save less

This blog post about AWS Reserved Instances digs into these issues further. Bottom line: paying in advance can save you money, but proceed with caution.
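One way to reason about these factors is a simple break-even calculation: how many months of use before the upfront payment pays for itself? The prices below are hypothetical (check current AWS/Azure/GCP rate cards), and this assumes 24/7 utilization – the reservation never pays off for an instance that sits idle:

```python
def reserved_breakeven_months(upfront, ri_hourly, on_demand_hourly,
                              hours_per_month=730):
    """Months until an upfront reservation beats on-demand pricing,
    assuming the instance runs around the clock."""
    monthly_savings = (on_demand_hourly - ri_hourly) * hours_per_month
    return upfront / monthly_savings

# Hypothetical prices for illustration only
months = reserved_breakeven_months(upfront=1200, ri_hourly=0.05,
                                   on_demand_hourly=0.25)
```

With these made-up numbers, break-even lands a bit past eight months – comfortably inside a 1-year term, but a reminder that a 3-year commitment is a bet on both your usage and the provider’s future price cuts.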

2. Use Your Resources More

The primary example of “spending more to save more” in the cloud computing world is Google Sustained Use Discounts. This is a cool option for automatic savings – as long as you use an instance for at least 25% of the month, GCP will charge you less than list price.

But just like the advance purchase options above, there are several factors to account for before assuming this will really save you “up to 60%” of the cost. It may actually be better to just turn off your resources when you’re not using them – more in this post about Google Sustained Use Discounts.

3. If You’re Big: Enterprise Agreements and Volume Discounts

Anyone who’s shopped at Costco isn’t surprised that buying in bulk can get you a discount. Last week, Twitter announced that it will be using Google Cloud Platform for cold data storage and flexible compute Hadoop clusters — at an estimated list price of $10,000,000/month. Of course, it’s unthinkable that they would actually pay that much – as such a high-profile customer, Twitter is likely to have massive discounts on GCP’s list prices. We often hear from our Azure customers that they chose Azure due to pre-existing Microsoft Enterprise Agreements that give them substantial discounts.

If you have or foresee a large volume of infrastructure costs, make sure to ask each provider about their enterprise agreement and volume discount programs.

4. If You’re Small: Startup Credits

Each of the major cloud providers offers free credit programs to startups to lure them and get locked in on their services – but that’s not a bad thing. We’ve talked to startups focused on anything from education to location services who have gotten their money’s worth out of these credits while they focus on growth.

If you work for a startup, check out the startup credit programs each of the major cloud providers offers.

5. Wait

So far, history tells us that if you wait a few months, your public cloud provider will drop their prices, giving you a built-in discount.

If you stick with existing resource types, rather than flocking to the newer, shinier models, you should be all set. The same AWS m1.large instance that cost $0.40/hour in 2008 now goes for $0.175/hour. We’ll just say that’s not exactly on pace with inflation.

It’s Okay if You Don’t Get Discounts on Cloud

What if you’re not a startup, you’re not an enterprise, and you just need some regular compute and database infrastructure now? Should you worry if you don’t get discounts on cloud list prices? No sweat. Even by paying list price, it’s still possible to optimize your spend. Make sure you’re combing through your bill every so often to find orphaned or unused resources that need to be deleted.

Additionally, right-size your resources and turn them off when you’re not using them to pay only for what you actually need – you’ll save money, even without a discount.


Alibaba Cloud Market Share 2018: The Next Big Cloud Provider?

After reviewing Q1 earnings for the ‘big three’ cloud providers last week, it’s obvious AWS is still number one overall. While the other CSPs are growing faster than expected, what about the Alibaba cloud market share? Alibaba made big waves in Asia, dominating in China with accelerating cloud revenue. Here’s the deal with Alibaba Cloud, and why it should not be overlooked in 2018.

Alibaba Cloud at a Glance

Following Amazon, Google, and Microsoft, Alibaba made headlines of its own when it reported cloud revenue for the March quarter.

Here’s what the quarterly earnings report tells us:

  • Alibaba’s cloud revenue reached $2.1 billion for fiscal 2018, with March-quarter revenue up 103 percent year-over-year.
    • In comparison, AWS growth was 49 percent for the same period, although Alibaba’s cloud revenue can’t quite compare with the $5.4 billion AWS generated in the same quarter.
  • Cloud computing revenue saw 101 percent year-over-year growth for fiscal 2018.
  • Alibaba’s IaaS segment is dominating in China, with 47.6 percent of the market, up from 43 percent only a year ago – growth credited to recent customer additions and value-added products.

What’s clear:

Alibaba is growing its market presence, not only with a firm hold over Asia, but also securing a spot as one of the top five cloud providers worldwide. Synergy Research Group reported Q1 2018 market share numbers: Amazon 33%, Microsoft 13%, IBM 8%, Google 6% and Alibaba 4%.

In comparison to other cloud providers, Alibaba might be in last place among the top five, but it also shows consistently steady, upward growth, and sits only a hair shy of Google at 6 percent. And while AWS has a third of the total market, Alibaba holding onto nearly half of China’s market share is nothing to scoff at.

Not to mention the company added 316 new products and features to their cloud platform in the fourth quarter alone, added a data center in Indonesia, and acquired or partnered with major enterprise customers including China National Petroleum Corporation, Malaysia Digital Economy Corporation, and Cathay Pacific, showing no signs of slowing down anytime soon.  

Alibaba Cloud Market Share – 2018 and Beyond:

The opening of a data center in Indonesia expanded Alibaba Cloud’s reach to 18 countries and regions worldwide, and the company has set its sights on continued growth. If that’s not a clear indication of future success, the outlook of company executives sheds more light.

Daniel Zhang, CEO of Alibaba Group, says “Alibaba Group had an excellent quarter and fiscal year, driven by robust growth in our core commerce business and investments we have made over the past several years in longer-term growth initiatives. […] During the past year we also doubled down on technology development, cloud computing, logistics, digital entertainment and local services so that we are in a position to capture consumption growth in China and other emerging markets.”

Maggie Wu, CFO of Alibaba Group, echoes this sentiment, saying “Looking ahead to fiscal 2019, we expect overall revenue growth above 60%, reflecting our confidence in our core business as well as positive momentum in new businesses. We expect our new growth initiatives will drive long-term, sustainable value for our customers and partners and increase our total addressable market.”

So as the Alibaba cloud market share grows, could they be the next big cloud provider in 2018? Will they jump into the ‘big three’ or will it become a ‘big five,’ including IBM’s market share? What we know for sure is that we can expect more growth, and that’s a good thing for all of us because growth drives competition, innovation, and better offerings for all. So while we continue looking at AWS, Azure, Google, and IBM in the next year, we’ll also be keeping an eye on Alibaba Cloud and other up-and-coming providers to see what they bring to the table.  


Do Google Sustained Use Discounts Really Save You Money?

When looking to keep Google Cloud Platform (GCP) costs under control, the first place users turn is the discount options offered by the cloud service provider itself, such as Google’s Sustained Use discounts. The question is: do Google Sustained Use discounts actually save you money, when you could just turn the instance off?

How Google Sustained Use discounts work

The idea of the Sustained Use discount is that the longer you run a VM instance in any given month, the bigger discount you will get from the list price. The following shows the incremental discount, and its cumulative impact on a hypothetical $100/month VM instance, where the percentages are against the baseline 730-hour month.

I have to say here that the GCP prices listed can be somewhat misleading unless you read the fine print where it says “Note: Listed monthly pricing includes applicable, automatic sustained use discounts, assuming the instance runs for a 730 hour month.”  What this means to us is that the list prices of the instances are actually much higher, but their progressive discount means that no one ever actually pays list price. That said – the list price is what you need to know in order to estimate the actual cost you will pay if you do not plan to leave the instance up for 730 hours/month.

For example, the price shown on the GCP pricing link for an n1-standard-8 instance in the Iowa region is (as of this writing) $194.1800. The list price for this instance would be $194.1800/0.7 = $277.40. This is the figure that must be used as the entry point for the table above to calculate the actual cost, given a certain level of utilization.
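The tier math can be sketched in a few lines of Python. The rates follow GCP’s published sustained-use tiers – 100%, 80%, 60%, and 40% of the list rate for each successive quarter of a 730-hour month (which is where the 0.7 factor for a full month comes from):

```python
# Each successive quarter of the month is billed at a lower fraction of list rate
TIER_RATES = [1.0, 0.8, 0.6, 0.4]
HOURS_PER_MONTH = 730.0

def sustained_use_cost(list_monthly_price, hours_used):
    """Effective monthly cost for a VM whose list price assumes a 730-hour month,
    given the hours it actually ran."""
    hourly_list = list_monthly_price / HOURS_PER_MONTH
    tier_size = HOURS_PER_MONTH / 4  # 182.5 hours per discount tier
    cost = 0.0
    remaining = hours_used
    for rate in TIER_RATES:
        in_tier = min(remaining, tier_size)
        cost += in_tier * hourly_list * rate
        remaining -= in_tier
        if remaining <= 0:
            break
    return cost
```

Feeding in the $277.40 list price for a full 730 hours reproduces the $194.18 advertised price, while 182.5 hours (a quarter of the month) costs exactly a quarter of list, since the first discount tier hasn’t kicked in yet.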

What if you parked the VM instance instead?

Here at ParkMyCloud, we’re all about scheduling resources to turn off when you’re not using them, i.e., “parking” them. With this mindset, I wondered about the impact of the sustained use discounts on the schedule-based savings. The following chart plots the cost of that n1-standard-8 VM instance, showing Google sustained use discounts combined with a parking schedule.

We can definitely see progressively more sustained-use savings added to progressively less schedule-based savings. I am sure this would end up being described with the typical hype of “the more you spend, the more you save!” But the reality must intrude here: the more you spend… the more you spend!

Looking at what this means for ParkMyCloud users, here is the monthly uptime for a few common parking schedules, and the associated cost:

These are a far cry from the $277.40 list price, and even from the $194.18 maximum-discount price. Even with the most wide-open “work day” schedule of 12 hours per weekday, the schedule barely nudges over the 182.5 hours needed to hit the first price break of 20% – and even then, the 20% discount applies only to the hours above 182.5. A welcome discount to be sure, but not enormously impactful to the bottom line.

Another way our users keep utilization hours low is by keeping their VM instances “always parked” and temporarily overriding the schedule for a set number of hours (such as an 8-hour workday) when their non-production resources are needed. When the override expires, the instance is automatically shut down. This gives the best possible savings, usually without even reaching the first GCP discount tier.

Do Google Sustained Use discounts save you money?

In short: definitely! At least, they do save you money over the price listed by Google. Do they save you the maximum amount of money possible? No, not if it’s a non-production VM instance that is only needed during a regular workday (although it’s close).

To get the optimal savings on your resources, keep them running only when you’re actually using them, and park them when you’re not. If you meet the threshold of 25% usage for the month, Google’s Sustained Use discounts will kick in, and further lower your cost from the list price. These two savings options combined will optimize your costs and provide the maximum savings.


AWS vs Azure vs Google Cloud Market Share 2018: Is AWS Still in the Lead?

Q1 earnings are in for the ‘big three’ cloud providers and you know what that means – it’s time for an AWS vs Azure vs Google Cloud market share comparison. Let’s take a look at all three providers side-by-side to see where they stand.

Are Azure and Google catching up to AWS?

As they’ve been known to do, Amazon, Google, and Microsoft all released their quarterly earnings reports around the same time this past week, giving us an opportunity to look at the big picture of the AWS vs Azure vs Google Cloud market share.

Here’s what the quarterly earnings reports tell us:

  • AWS, the longest-standing cloud provider of the three, has managed to maintain its growth rate with a 49 percent increase to $5.44 billion for the quarter.
  • Meanwhile, Microsoft is picking up speed and picking up share. While Microsoft did not break out cloud profits for Azure specifically, it reported that Azure revenue grew an impressive 93 percent – slightly down from 98 percent growth last quarter.
  • Microsoft’s Intelligent Cloud division as a whole, which includes Azure, was up 17 percent to $7.9 billion.
  • Alphabet, Google’s parent company, released quarterly earnings for Google but did not break out any cloud revenue. Reported revenue grew 26 percent year-over-year to $31.16 billion in Q1, up from 22 percent growth between Q1 2016 and Q1 2017.
  • Canalys ranked the three providers in terms of market share, with AWS in the lead owning about a third of the market, Microsoft in second with about 15 percent, and Google sitting around 5 percent.

What’s clear:

In the case of AWS vs Azure vs Google Cloud market share – AWS still has the lead.

AWS is maintaining their lead and holding the lion’s share of the market. Could Amazon’s head start in the cloud business still have something to do with it? Jeff Bezos seems to think so:

“AWS had the unusual advantage of a seven-year head start before facing like-minded competition. As a result, the AWS services are by far the most evolved and most functionality-rich.”

Whether it’s true or not, it says something that Microsoft and Google aren’t breaking down their cloud numbers just yet.

Nonetheless, customers do want a choice when it comes to cloud providers. Yes, AWS is still seeing big growth, but Microsoft has sustained high levels of revenue with Azure, narrowing the gap, and suggesting that they might be the cloud provider to give Amazon a run for its money.

And if one thing is for sure – it’s that the cloud isn’t going anywhere. Synergy Research Group reported that the cloud market is still growing. In Q1–Q3 of last year, year-over-year growth for cloud infrastructure services ran at 43–45 percent. In Q1 of this year, growth jumped to 51 percent. Q1 2018 market share numbers: Amazon 33%, Microsoft 13%, IBM 8%, Google 6%, and Alibaba 4%.

AWS vs Azure vs Google Cloud Market Share – And the winner is:

With similar findings to Canalys, Synergy says the cloud market continues to expand rapidly and that AWS still controls a third of it – bigger than its next four competitors combined.

“It is particularly notable that growth at AWS has actually accelerated despite its scale. The rapid growth of Microsoft, Google and Alibaba is not causing any drop-off in AWS market share.”

It looks like the story line hasn’t changed just yet. With that said, Amazon is getting some healthy competition from Microsoft, and as the cloud grows, so does customers’ willingness to use other service providers. So while AWS keeps rolling along, we’ll be looking to see what other providers offer in the next quarter, and if their dominance will prevail.


How to: ChatOps Cloud Cost Control

The latest time-saving automation to add to your DevOps tool belt: ChatOps cloud cost control. That’s right – you may already be using ChatOps to make your life easier, but did you know that amongst the advantages, you can also use it to control your cloud resources?

Whatever communication platform you’re already using for chatting with your team members, you can use for chatting with your applications and services. And with the increasing rise of ChatOps, that brings us to one of the questions we’ve been getting asked more frequently by our DevOps users: how can I manage schedules and instances from Slack, Microsoft Teams, Atlassian Stride, and other chat programs?  

One of the cool things you can do using ChatOps is control your cloud resources through ParkMyCloud. Learn how it’s done in this quick YouTube demo:

ParkMyCloud has the ability to send messages to chat rooms via notifications and receive commands from chat bots via the API. This video details the Slackbot specifically, but similar bots can be used with Microsoft Teams or Atlassian Stride. There are multiple settings you can configure within Slack to manage your account, including notifications to let you know when a schedule is shutting an instance down. You can also set up the ability to override a schedule and turn the system on from Slack. Watch the video for a brief overview of how to:

  • Set up a notification that uses the Slack type
  • Adjust settings to be notified of user actions, parking actions, policy actions, and more
  • Set up the ParkMyCloud Slackbot to respond to notifications

Once you set up Slack with ParkMyCloud, you’ll be able to do anything you normally would in the UI or API – snooze and toggle instances to override their schedules, receive notifications, and control your account directly from your Slack chat room. The Slackbot is available on our GitHub. Give it a try, and enjoy full ChatOps control of your cloud costs!
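Under the hood, a Slack notification is just an HTTP POST of a JSON payload to an incoming webhook URL. As a rough sketch only – the message wording and instance name below are invented for illustration, not ParkMyCloud’s actual payload format – posting a parking notification from Python might look like this:

```python
import json
import urllib.request

def build_notification(resource: str, action: str) -> dict:
    """Build a Slack-style message payload for a parking event.
    The message format here is illustrative, not ParkMyCloud's actual one."""
    return {"text": f"ParkMyCloud {action} instance `{resource}`"}

def post_to_slack(webhook_url: str, payload: dict) -> int:
    """POST the payload to a Slack incoming webhook; returns the HTTP status."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    payload = build_notification("i-0abc123", "parked")
    # Substitute your own webhook URL before un-commenting:
    # post_to_slack("https://hooks.slack.com/services/T000/B000/XXXX", payload)
    print(payload["text"])
```

Slack renders the `text` field as the chat message; richer formatting and interactive replies build on the same JSON payload mechanism.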


Cloud Computing Green Initiatives on the Rise

Over the past couple of months, we have seen a lot of articles about the Big Three cloud providers and their efforts to be environmentally friendly and make cloud computing green. What are Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) doing to make their IaaS services as green as possible? Does moving to the cloud help enterprises with their green initiatives and use of renewable energy?

It seems the cloud providers are focused on using renewable energy like solar and wind to power their massive data centers and are very actively touting that fact.

For example, Microsoft recently announced a new renewable energy initiative, the Sunseap project. This project, Microsoft’s first Asian clean energy deal, will install solar panels on hundreds of rooftops in Singapore, which they claim will generate 60MW to power Microsoft’s Singapore datacenter – the datacenter behind Microsoft Azure, Office 365 and numerous other cloud services. The deal is Microsoft’s third international clean energy announcement, following two wind deals announced in Ireland and The Netherlands in 2017. That’s pretty cool in my book, so kudos to them.

Google made a similar, albeit more general, announcement recently, touting that Google is now buying enough renewable energy to match the power used in its data centers and offices. Google said that last year its total purchase of energy from sources including wind and solar exceeded the amount of electricity used by its operations around the world. According to a recent blog post by Urs Hölzle, Google’s senior vice president of technical infrastructure, Google is the first public cloud provider – and the first company of its size – to achieve that feat. We can’t verify this, but let’s take it at face value given the data in the chart below:

One observation we have in looking at this chart – where are IBM and Oracle? Once again, the Big Three always seem to be several steps ahead.

Speaking of the Big Three – we’ve looked at Microsoft and Google, but what about AWS? By its own account, AWS is behind both Google and Microsoft on the path to 100% renewable energy. AWS states a long-term commitment to power its global infrastructure footprint entirely with renewable energy, and had set a goal of 50% renewable energy by the end of 2017 (we could not find a more recent 2018 update).

Moving to the cloud has many benefits – time to market, agility, innovation, lower upfront costs, and now, a commitment to renewable energy! There’s one other way for cloud computing to be more sustainable – and that’s by all of us using fewer resources. In our own small way, ParkMyCloud helps: we help you turn cloud resources off when they’re not being used, kind of like following your kids around the house and shutting off the lights – your at-home green initiative (you know you can automate that with Nest, right?). Saving money in the process? That’s a win-win.


AWS Secrets Manager Makes Security Simpler

A few weeks ago, Amazon released their AWS Secrets Manager for public use. This is a very welcome announcement: despite the fact that everyone knows security and encryption are important in cloud applications and infrastructure, simple security measures are often overlooked. More people and applications than you would think use plain-text passwords and hand-modified config files, often with the mindset that “we’ll secure it later.” This is a big security risk – anyone with access to the config file knows the password – so an easy-to-use secret management service can be a real game changer.

Generally, secret management requires knowledge, infrastructure, time, and additional complexity to ensure your security needs are met. It also usually involves an additional tool like HashiCorp Vault, Chef Vault, or git-crypt. AWS also has a tool to manage encryption keys, the Key Management Service, which some people use for secret management but which is really better suited to encryption and decryption.

Now with AWS Secrets Manager, secrets and credentials can be stored securely, while still being easily accessed from other AWS services. Setup is very quick, and doesn’t require any new instances or installation of software or tools. You also don’t need to know details about encryption or best practices, and the solution is much less complex than most free tools.

So what kinds of things will this service help with? The biggest benefit is for applications and services that have moved to a microservices architecture, where individual pieces of the application that live in AWS are all talking to each other via APIs or message queues. For example, if you’re using Amazon’s RDS service, credentials for your database can be encrypted, accessed via the API or AWS CLI, automatically rotated, and accessed based on IAM policies. There’s also built-in Lambda integration, so you can run scripts to customize things like your secret rotation policy.
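Retrieving those stored credentials comes down to a single `GetSecretValue` API call. Here is a minimal Python sketch using boto3 – the secret name and the JSON keys inside the secret are our own illustrative choices, not anything AWS prescribes:

```python
import json

def fetch_db_credentials(client, secret_id: str) -> dict:
    """Fetch a JSON-formatted secret (e.g. RDS credentials) from AWS Secrets
    Manager and parse it into a dict.

    `client` is a boto3 Secrets Manager client, created with e.g.:
        client = boto3.client("secretsmanager", region_name="us-east-1")
    """
    response = client.get_secret_value(SecretId=secret_id)
    return json.loads(response["SecretString"])

# Usage (names are illustrative):
#   import boto3
#   creds = fetch_db_credentials(boto3.client("secretsmanager"), "prod/db")
#   connect(user=creds["username"], password=creds["password"])
```

Because access to `GetSecretValue` is governed by IAM policies, each microservice can be granted permission to read only the secrets it actually needs.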

Pricing for this service is along the same general lines as other AWS services.  Currently, each secret costs $0.40 to store, and costs $0.05 for every 10,000 API calls to access those secrets.  Considering the time and effort it normally takes for proper secret management, this can be a very cost-effective way to store secrets for use in your AWS environment.
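To put those prices in concrete terms, here is a quick back-of-the-envelope calculator (the secret and call counts are made-up examples):

```python
def secrets_manager_monthly_cost(num_secrets: int, api_calls: int) -> float:
    """Estimate a monthly AWS Secrets Manager bill at the prices quoted above:
    $0.40 per secret stored, $0.05 per 10,000 API calls."""
    return num_secrets * 0.40 + (api_calls / 10_000) * 0.05

# e.g. 50 secrets fetched a combined 1,000,000 times in a month:
print(secrets_manager_monthly_cost(50, 1_000_000))  # about $25/month
```

Even at fairly heavy API volumes, the bill stays small compared to the effort of running your own vault infrastructure.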

Data breaches happen all the time — in 2018 alone, there have already been breaches involving Facebook, Under Armour/MyFitnessPal, and Saks Fifth Avenue. There is no better time than now to review your system and account security. AWS Secrets Manager is a quick and easy way to implement some security best practices for your microservices-based applications so you and your team can securely store and rotate secrets that might have normally been in plain-text or sitting in a config file. We look forward to implementing this in our own AWS accounts!


How Upgrade Cycles Impact the Cost per Instance in Cloud Computing

I have recently spent an increasing amount of time discussing (arguing about) whether the cost per instance in cloud computing is going up or down. The reason: while objective analysis by reputable third parties shows that computing costs are falling, what we observe from our own standpoint is that the average cost per instance that customers manage in the ParkMyCloud platform is actually increasing. Following on from a recent blog by our CTO (The Cost of Cloud Computing Is, in Fact, Dropping Dramatically), we decided to undertake some more detailed analysis of this phenomenon.

We identified a cohort of our customers who had been with ParkMyCloud for at least one full year and looked at what happened to their average cost per instance over that year. What we discovered was that the average cost per instance, as charged by the cloud provider, had indeed increased from $214 to $329 per instance per month for our customers using Amazon, Microsoft and Google clouds – a 54% increase. Set against the backdrop of the reported falling costs of cloud computing, this clearly seems to be an anomaly. Or is it?

Digging a little deeper, we discovered that two-thirds of our customers were spending an increased amount per instance per month over the last 12 months and only one third were paying the same amount or less than before. Interestingly, of those who saw a price increase, one third saw their average cost per instance increase by more than 25%.

So what do we think is happening? One possible explanation is something we will refer to as The Apple Upgrade Syndrome. Each time there is an iPhone upgrade cycle, Apple’s product marketing gurus carefully price the new products — and they also adjust the pricing on their older products. When we walk into the Apple Store to peruse the new offerings, we have a clear choice of either purchasing the previous flagship model at a discounted price, or the new, sexy upgraded model at a price premium. A rational actor should buy the discounted model, which just the day before was hundreds of dollars more. But that’s not what most of us do. What we want is the new model with the additional bells and whistles (e.g. face tracking technology and studio lighting settings for the camera) and are willing to pay the extra. As a result, despite the overall cost of mobile computing falling, your monthly phone bill keeps increasing.

We believe the same phenomenon is at work in cloud computing. When new generations of instances are released, cloud buyers decide to trade up to these new, more powerful instances (e.g. more cores, more memory), despite the fact that previous generations of instances may actually have their prices reduced. So while Amazon, Microsoft or Google might pronounce a “25 percent improvement in price-performance” for a new generation of instances, the reality is that the new instances cost more and therefore drive up monthly spend.

Next, we’ll share a more in-depth analysis that will review the instance types driving these increases. At the end of the day, we are all likely correct. The cost of cloud computing is indeed going down, but the average cost per instance is actually going up.


Application Containerization: Pros and Cons

What is Application Containerization?

Application containerization is more than just a new buzz-word in cloud computing; it is changing the way in which resources are deployed into the cloud. However, many people are still coming to grips with the concept of application containerization, how it works, and the benefits it can deliver.

Most people understand the term “cloud computing” relates to the renting of computing services over the Internet from Cloud Service Providers (AWS, Azure, Google, etc.). Cloud computing breaks down into three broad categories – Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS) – often called the “cloud computing stack” because they build on top of one another.

The benefits of cloud computing are most easily seen at the IaaS level, where, rather than building a physical, on-premises IT infrastructure, businesses can simply pay for the computing services they need, on demand. The advantages of cost, scalability, flexibility and low maintenance overhead have driven IaaS cloud computing to become a $50 billion industry in little more than a decade.

However, IaaS cloud computing also has its issues. In order to take advantage of the benefits, businesses have to rent virtual machines (VMs or “instances”) which replicate the features of a physical IT environment. This means paying for a server complete with its own operating system and the software required to run the operating system, even if you only want to launch a single application.

Where Application Containerization Comes Into the Picture

By comparison, application containerization allows businesses to launch individual applications without the need to rent an entire VM. It does this by “virtualizing” an operating system and giving containers access to a single operating system kernel – each container comprising the application and the software required for the application to run (settings, libraries, storage, etc.).

The process of application containerization allows multiple applications to be distributed across a single host operating system without each requiring its own VM, which can lead to significant cost savings. Whereas previously a server hosting eight applications in eight VMs would run eight copies of the operating system – one per VM – those same eight applications in containers can share a single operating system.

In addition to significant cost savings, application containerization allows for greater portability. This can accelerate the process of testing applications across different operating systems because there is no waiting for the operating system to boot up. Furthermore, if the application crashes during testing, it only takes down the isolated container rather than the entire operating system.

One further benefit of application containerization is that containers can be clustered together for easy scalability or to work together as micro-services. In the latter case, if an application requires updating or replacing, it can be done in isolation of other applications and without the need to stop the entire service. The lower costs, greater portability and minimal downtime are three reasons why application containerization has become more than just a new buzzword in cloud computing and is changing the way in which resources are deployed into the cloud.

The Downsides of Application Containerization

Unfortunately, there are downsides to application containerization. Some of these – for example, container networking – are being resolved as more businesses adopt the technology. However, container security and complexity remain issues, as does the potential for costs to spiral out of control, as they often do when businesses adopt new technologies.

The security issue stems from containers sharing the same operating system. If a vulnerability in the operating system or the kernel is exploited, it affects the security of every application running on that host. Consequently, security policies have to be applied to every application, with all non-essential activities forbidden.

Containers also add more operational complexity than you might at first assume, adding more to orchestrate and requiring additional management.

With regard to costs, the risk exists that developers will launch multiple containers and fail to terminate them when they are no longer required. Given the number of containers being launched compared to VMs, it will not take long for container-related cloud waste to match VM-related cloud waste – which we have estimated at $12.9 billion per year.

The problem with controlling cloud spend using cloud management software is that many solutions fail to identify unused containers because the solutions are host-centric rather than role-centric. For an effective way to control cloud spend, speak with ParkMyCloud about our cloud cost management software.  


DevFinOps: Why Finance Needs to be Integrated with Development and Operations

The formation of DevOps brought together two distinct worlds, causing a shift in IT culture that can only be made better (and more cost effective) by the integration of financial strategy  – enter DevFinOps. We say this partially in jest… yeah, we know, you’ve had enough of the Dev-blank-blank mashups. But really, this is something that we’ve been preaching about since the start of ParkMyCloud. As long as the public cloud remains a utility, everyone should be responsible for controlling the cost of their cloud use, meaning “continuous cost control” should be integrated into the processes of continuous integration and delivery.  

What is DevFinOps?

Hear us out — you at least need to start thinking of financial management as an element in the DevOps process. Time and time again, we see DevOps teams overspend and face major organizational challenges when inevitably the Finance team (or the CTO) starts enforcing a stricter budget. Cost control becomes a project, derailing forward development motion by rerouting valuable resources toward implementing spend management processes.  

It doesn’t need to be this way.

Because financial resources are finite, finance should be an integrated element from the very beginning when possible – and as soon as possible otherwise. Our product manager, Andy Richman, recently discussed this concept further on a podcast for The CloudCast.

There are a number of ways that finance can be integrated into DevOps, but one near and dear to our hearts is with automated cloud cost control. A mental disconnect between cloud resources and their costs causes strain on budgets and top-down pressure to get spending under control.

Changing the Mindset: Cloud is a Utility

The reason for this disconnect is that as development and operations have moved to the cloud, the way we assess costs has changed profoundly in the same way that infrastructure has changed. A move to the cloud is a move to pay-as-you-go compute resources.

This is due to the change in pricing structure and mindset that happened with the shift from traditional infrastructure to public cloud. As one of our customers put it:

“It’s been a challenge educating our team on the cloud model. They’re learning that there’s a direct monetary impact for every hour that an idle instance is running. The world of physical servers was all CapEx driven, requiring big up-front costs, and ending in systems running full time. Now the model is OpEx, and getting our people to see the benefits of the new cost-per-hour model has been challenging but rewarding.”

In a world where IT costs already tend to exceed budgets, there’s an added struggle to calculating long-term cost estimates for applications that are developed, built and run on a utility. But wasn’t the public cloud supposed to be more cost effective? Yes, but only if every team and individual is aware of their usage, accountable for it, and empowered with tools that will give them insight and control over what they use. The public cloud needs to be thought of like any other utility.

Take your monthly electric bill, for example. If everyone in the office left the lights on 24 hours a day, 7 days a week, those costs would add up quickly – you’d be wasting money on all those nights and weekends when your beautifully lit office is completely empty. In most cases that doesn’t happen, because people understand that lights cost money, and many offices automate lighting with sensors based on motion (usage) or with time-based schedules. Apply that same thinking to the cloud and it’s easy to see why cost-effectiveness goes down the drain when individuals and teams aren’t aware of, or accountable for, the resources they’re using.

Financial decisions regarding IT infrastructure fall into the category of IT asset management (ITAM), an area that merges the financial, contractual and inventory components of an IT project to support lifecycle management and strategic decision-making. That brings us back to DevFinOps: an expansion of ITAM, fixing financial cost and value of IT assets directly into IT infrastructure, updating calculations in real time and simplifying the budgeting process.

Why this is important now that you’re in the cloud

DevFinOps proposes a more effective way to estimate costs: break them down into smaller estimates over time as parts of the work are completed, integrating financial planning directly into IT and cloud development operations. To do this, the DevOps team needs visibility into how and when resources are being used, and an understanding of the opportunities for saving.

Like we’ve been saying: the public cloud is a utility – you pay for what you use. With that in mind, the easiest way to waste money is to leave your instances or VMs running 24 hours a day, 7 days a week, and the easiest way to save money is just as simple: turn them off when they’re idle. In a future post, we’ll discuss how you can implement this process in your organization using automated cost control – stay tuned.


Dear Daniel Ek: We Made You a Playlist About Your Google Cloud Spend.

Dear Daniel Ek,

Congrats on Spotify’s IPO! It’s certainly an exciting time for you and the whole company. We’re a startup ourselves, and it’s inspiring to see you shaking up the norms and succeeding on your first day on the stock exchange.

Of course, with big growth comes big operational changes. Makes sense. As cloud enthusiasts ourselves, we were particularly interested to see that you committed to 365 million euros/$447 million in Google Cloud spend over the next three years.

Congrats on choosing an innovative cloud provider that will surely serve your infrastructure needs well.

But we’d like to issue a word of warning. No, not about competing with Google – about something that hits the bottom line more directly, which I’m sure will concern you.

Maybe a playlist on our favorite music streaming service is the best way to say this:

What do we mean when we say not to waste money on Google Cloud resources you don’t need?

In fact, we estimate that up to $90 million of that spend could go to compute hours that no one is actually using – meaning it’s completely wasted.

How did we get there? On average, two-thirds of cloud spend goes to compute. Of that, 44% is on non-production resources such as those used for development, testing, staging, and QA. Typically, those resources are only needed for about 35% of the hours in the week (a 40-hour work week plus a margin of error), meaning they sit unneeded for the other 65% of hours.
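For the curious, the arithmetic behind that estimate can be reproduced in a few lines of Python using the shares quoted above:

```python
# Rough reproduction of the waste estimate, using the shares stated in the text.
commitment = 447_000_000   # Spotify's 3-year Google Cloud commitment, USD
compute_share = 2 / 3      # share of cloud spend that goes to compute
nonprod_share = 0.44       # share of compute spend on non-production resources
idle_share = 0.65          # share of weekly hours non-prod resources sit idle

waste = commitment * compute_share * nonprod_share * idle_share
print(round(waste / 1e6, 1))  # roughly $85M, i.e. "up to $90 million"
```

Any one of those shares will vary by organization, which is why we frame the result as an upper bound rather than a prediction.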

That’s not to mention potential waste on oversized resources, orphaned volumes, PaaS services, and more.

Companies like McDonald’s, Unilever, and Sysco have chosen ParkMyCloud to reduce that waste by automatically detecting usage and then turning those resources off when they’re not needed – all while providing simple, governed access to their end users.

Daniel, we know you won’t want your team to waste money on your Google Cloud spend.

We’re here when you’re ready.

Cheers,

Jay Chapel

CEO, ParkMyCloud


Announcing SmartParking for Google Cloud Platform: Automated, Custom On/Off Schedules Based on GCP Metric Data

Today we’re excited to announce the latest cloud provider compatible with ParkMyCloud’s SmartParking™ – Google Cloud Platform! In addition to AWS and Azure, Google users will now benefit from SmartParking: automatic, custom on/off schedules for cloud resources based on actual usage metrics.

The method is simple: ParkMyCloud will import GCP metric data to look for usage patterns for your GCP virtual machine instances. With your utilization data, ParkMyCloud creates recommended schedules for each instance to turn off when they are typically idle, eliminating potential cloud waste and saving you money on your Google Cloud bill every month. You will no longer have to go through the process of creating your own schedule or manually shutting your VMs off – unless you want to. SmartParking automates the scheduling for you, minimizing idle time and cutting costs in the process.

Customized Scheduling – Not “One-Size-Fits-All”

SmartParking’s benefits are not “one-size-fits-all.” The recommended schedules can be customized like an investment portfolio – choose between “conservative”, “balanced”, or “aggressive” based on your preferences.

And like an investment, a bigger risk comes with a bigger reward. When receiving recommendations based on your GCP metric data, you’ll have the power to decide which of the custom schedules is best for you. If you’re going for maximum savings, aggressive SmartParking is your best bet since you’ll be parked most of the time, but with a small “risk” of occasionally finding an instance parked when needed. But in the event that this does happen – no fear! You can still use ParkMyCloud’s “snooze button” to override the schedule and get the instance turned back on — and you can give your team governed access to do the same.

If you’d rather never have your instances shut off when they’re needed, you can opt for a conservative schedule. Conservative SmartParking only recommends parking during times when an instance has never been used, ensuring you won’t miss a beat.

If you’re worried about the risk of aggressive parking for maximum savings, but want more opportunities to save than conservative schedules will give you, then a “balanced” SmartParking schedule is a happy medium.

What People are Saying: Save More, Easier than Ever

Since ParkMyCloud debuted SmartParking for AWS in January and added Azure in March, customers have given positive feedback on the new functionality:

“ParkMyCloud has helped my team save so much on our AWS bill already, and SmartParking will make it even easier,” said Tosin Ojediran, DevOps Engineer at a FinTech company. “The automatic schedules will save us time and make sure our instances are never running when they don’t need to be.”

ParkMyCloud customer Sysco Foods has more than 500 users across 50 teams using ParkMyCloud to manage their AWS environments. “When I’m asked by a team how they should use the tool, they’re exceedingly happy that they can go in and see when systems are idle,” Kurt Brochu, Sysco Foods’ Senior Manager of the Cloud Enablement Team, said of SmartParking. “To me, the magic is that the platform empowers the end user to make decisions for the betterment of the business.”

Already a ParkMyCloud user? Log in to your account to try out the new SmartParking. Note that you will need to update the permissions that ParkMyCloud has to access your GCP metric data — see the user guide for instructions on that.

Not yet a ParkMyCloud user? Start a free trial here.


ParkMyCloud Announces “SmartParking” for Google Cloud Platform

Cloud Cost Optimization Engine Automates Savings for “Big Three” Cloud Providers

Example of an automatically-generated SmartParking recommendation based on instance usage patterns.

April 3, 2018 (Dulles, VA) – ParkMyCloud, the leading enterprise platform for continuous cost control in public cloud, announced today that it has released SmartParking™ for Google Cloud Platform (GCP), allowing GCP users to automate cloud cost optimization.

The ParkMyCloud platform helps customers of the “big three” cloud providers, Amazon Web Services (AWS), Microsoft Azure, and GCP, save money on cloud resources by automatically integrating cost control into their DevOps processes. ParkMyCloud does this by scheduling cloud resources to turn off when they are not needed – which they call “parking”.

This expansion of SmartParking to Google Cloud Platform comes on the heels of the release of SmartParking for Microsoft Azure in March. SmartParking uses Google Cloud Platform metric data to find patterns in instance utilization, then automatically recommends specific parking schedules for each instance, designed to turn it off when it is typically idle. This maximizes savings on cloud resources by shutting instances down when they are not being used. Using ParkMyCloud’s automated cost-saving technology, individual AWS and Azure customers have saved over $1 million on their cloud bills. Now, Google Cloud users, too, can eliminate wasted spend.

“We are consistently saving $15,000-25,000 a month using ParkMyCloud,” said Bill Gullicksen, Director of IT, QCentive. “It’s the perfect tool for what we needed, and in addition to the automation it also lets us provide access to developers, QA testers, and even data analysts and business folks to turn instances on and off without having to write a bunch of complex policies.”

“We’ve gotten great feedback on SmartParking from our AWS and Azure customers,” said ParkMyCloud CTO Bill Supernor. “Our Google customers have been looking forward to the same optimization opportunities, and we’re happy to deliver.”

Now that SmartParking has been achieved for all three major cloud providers, ParkMyCloud plans to develop cost-savings measures for other services within each cloud provider, and to expand to additional cloud providers.

About ParkMyCloud

ParkMyCloud is a SaaS platform that automatically identifies and eliminates public cloud resource waste, reducing spending by 65% or more — think “Nest for the cloud.” AWS, Azure and Google users such as McDonald’s, Sysco Foods, Unilever, Avid, and Sage Software have used ParkMyCloud to cut their cloud spending by millions of dollars. ParkMyCloud helps companies like these optimize and govern cloud usage by integrating cost control into their DevOps processes. For more information, visit https://www.parkmycloud.com.

Contact

Katy Stalcup

kstalcup@parkmycloud.com

(571) 334-3291


Looking for a Google Cloud Instance Scheduling Solution? As You Wish

Like other cloud providers, the Google Cloud Platform (GCP) charges for compute virtual machine instances by the amount of time they are running — which may lead you to search for a Google Cloud instance scheduling solution. If your GCP instances are only busy during or after normal business hours, or only at certain times of the week or month, you can save money by shutting these instances down when they are not being used.

GCP set-scheduling Command

If you were to do a Google search on “google cloud instance scheduling,” hoping to find out how to shut your compute instances down when they are not in use, you would see numerous promising links. The first couple of references appear to discuss how to set instance availability policies and mention a gcloud command line interface for “compute instances set-scheduling”. However, a little digging shows that these interfaces and commands simply describe how to fine-tune what happens when the underlying hardware for your virtual machine goes down for maintenance. The options in this case are to migrate the VM to another host (which appears to be a live migration) or to terminate the VM, and whether the instance should be restarted if it is terminated. The documentation goes so far as to say that the command is intended to let you set “scheduling options.” While it is great to have control over these behaviors, I feel I have to paraphrase Inigo Montoya: you keep using that word “scheduling” – I do not think it means what you think it means…

GCP Compute Task Scheduling

The next thing that looks schedule-like is the GCP Cron Service. This is a highly reliable networked version of the Unix cron service, letting you leverage the GCP App Engine services to do all sorts of interesting things. One article describes how to use the Cron Service and App Engine to schedule tasks to execute on your Compute Instances. With some App Engine code, you could use this system to start and stop instances as part of regularly recurring task sequences. This could be an excellent technique for controlling instances for scheduled builds, or for calculations that happen at the same time each day, week, or month.

While very useful for certain tasks, this technique really lacks flexibility. GCP Cron Service schedules are configured by creating a cron.yaml file inside the App Engine application. The Cron Service triggers events in the application, and getting the application to do things like start/stop instances is left as an exercise for the developer. If you need to modify the schedule, you need to go back in and modify the cron.yaml. Also, it can be non-intuitive to build a schedule around your working hours, in that you would need one event for when you want to start an instance, and another for when you want to stop it. If you want to put multiple instances on different schedules, each would need its own events. This brings us to the final issue, which is that any given application is limited to 20 events for free, up to a maximum of 250 events for a paid application.
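As a concrete illustration, a cron.yaml for a simple "business hours" schedule might look like the following sketch. The handler URLs are hypothetical, and you would still need to write App Engine code behind them that calls the Compute Engine API to actually start and stop the instances:

```yaml
cron:
- description: start dev instances on weekday mornings
  url: /tasks/start-instances
  schedule: every mon,tue,wed,thu,fri 08:00
  timezone: America/New_York
- description: stop dev instances on weekday evenings
  url: /tasks/stop-instances
  schedule: every mon,tue,wed,thu,fri 19:00
  timezone: America/New_York
```

Note that each start/stop pair consumes two of your event quota, so scheduling many instance groups independently eats into the 20-event free limit quickly.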

ParkMyCloud Google Cloud Instance Scheduling

Google Cloud Platform and ParkMyCloud – mawwage – that dweam within a dweam….

Given the lack of other viable instance scheduling options, we at ParkMyCloud created a SaaS app to automate instance scheduling, helping organizations cut their monthly cloud bills by 65% or more with AWS, Azure, and, of course, Google Cloud.

We aim to provide a number of benefits that you won’t find with, say, the GCP Cron Service. ParkMyCloud:

  • Automates the process of switching non-production instances on and off with a simple, easy-to-use platform – more reliable than the manual process of switching GCP Compute instances off via the GCP console.
  • Provides a single-pane-of-glass view, allowing you to consolidate multiple clouds, multiple accounts within each cloud, and multiple regions within each account, all in one easy-to-use interface.
  • Does not require a developer background, coding, or custom scripting. It is also more flexible and cost-effective than having developers write scheduling scripts.
  • Can be used with a mobile phone or tablet.
  • Avoids the hard-coded schedules of the Cron Service. Users can temporarily override schedules if they need to use an instance on short notice.
  • Supports Teams and User Roles (with optional SSO), ensuring users will only have access to the resources you grant.
  • Helps you identify idle instances by monitoring instance performance metrics, displaying utilization heatmaps, and automatically generating utilization-based “SmartParking” schedule recommendations, which you can accept or modify as you wish.

Getting started with ParkMyCloud is easy. Simply register for a free trial with your email address and connect to your Google Cloud Platform account to allow ParkMyCloud to discover and manage your resources. A 14-day free trial gives your organization the opportunity to evaluate the benefits of ParkMyCloud while you only pay for the cloud computing power you use. At the end of the trial, there is no obligation to continue with our service, and all the money your organization has saved is, of course, yours to keep.

Have fun storming the castle!


How to Choose a CI/CD Tool: Cost Scaling, Languages and Platforms, and More

How should CI/CD tool cost scaling, language support, and platform support affect your implementation decisions? In a previous post, we looked at the factors you should consider when choosing between a SaaS CI/CD tool and a self-hosted CI/CD solution. In this post, we will take a look at a number of other factors that should be considered when evaluating a SaaS CI/CD tool to determine if it’s the right fit for your organization, including cost scalability and language/platform support.

CI/CD Tool Cost Scaling

One thing that is important to keep in mind when deciding to use a paid subscription-based service is how the cost scales with your usage. There are a number of factors that can affect cost. In particular, some CI/CD SaaS services limit the number of build processes that can run concurrently. For example, Codeship’s free plan allows only one concurrent build at a time. Travis CI’s travis-ci.org product offers up to 5 concurrent builds for open source projects, but (interestingly) their $69 USD/mo plan on travis-ci.com only offers 1 concurrent build. All of this means that increased throughput will likely result in increased cost. If you expect to maintain a steady level of throughput (that is, you don’t expect to add significantly more developers, which would require additional CI/CD throughput), then perhaps limits on the number of concurrent build processes are not a concern for you. However, if you’re planning on adding more developers to your team, you’ll likely end up having more build/test jobs that need to be executed, and limits may hamper your team’s productivity.

Another restriction you may run across is a limit on the total number of “build minutes” for a given subscription. In other words, the cumulative number of minutes that all build/test processes can run during a given subscription billing cycle (typically a month) is capped at a certain amount. For example, CircleCI’s free plan is limited to 1,500 build minutes per month, while their paid plans offer unlimited build minutes. Adding more developers to your team will likely result in additional build jobs, which will increase the required amount of build minutes per month, which may affect your cost. Additionally, increasing the complexity of your build/test process may result in longer build/test times, which will further increase the number of build minutes you’ll need during each billing cycle. The takeaway here is that if you have a solid understanding of how your team and your build processes are likely to scale in the future, then you should be well equipped to make a decision on whether the cost of a build minute-limited plan will scale adequately to meet your organization’s needs.
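A rough sizing calculation makes this concrete. The numbers below are purely illustrative assumptions about team size and build habits, not any provider's pricing:

```shell
# Estimate monthly build minutes from team size and build frequency.
developers=8
builds_per_dev_per_day=6    # pushes/PRs that trigger a CI run
minutes_per_build=10
workdays_per_month=22
monthly_minutes=$((developers * builds_per_dev_per_day * minutes_per_build * workdays_per_month))
echo "Estimated usage: ${monthly_minutes} build minutes/month"
```

Under these assumptions the team would need over 10,000 build minutes a month, roughly seven times CircleCI's 1,500-minute free tier, so a paid or unlimited plan would be the realistic comparison point.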

Though not directly related to cost scaling, it’s important to note that some CI/CD SaaS providers place a limit on the length of time allowed for any single build/test job, independent of any cumulative per-billing-cycle limitations. For example, Travis CI’s travis-ci.org product limits build jobs to 50 minutes, while jobs on their travis-ci.com product are limited to 120 minutes per build. Similarly, Atlassian’s Bitbucket Pipelines limits builds to 2 hours per job. These limits are probably more than sufficient for most teams, but if you have any long-running build/test processes, you should make sure that your jobs will fit within the time constraints set by your CI/CD provider.

CI/CD Language and Platform Support

Not all languages and platforms are supported by all SaaS CI/CD providers. Support for programming languages, operating systems, containers, and third-party software installation are just a few of the factors that need to be considered when evaluating a SaaS CI/CD tool. If your team requires Microsoft Windows build servers, you are immediately limited to a very small set of options, of which AppVeyor is arguably the most popular. If you need to build and test iOS or Android apps, you have a few more options, such as Travis CI, fastlane, and Bitrise, among others.

Programming languages are another area of consideration. Most providers support the most popular languages, but if you’re using a less popular language, you’ll need to choose carefully. For instance, Travis CI supports a huge list of programming languages, but most other SaaS CI/CD providers support only a handful by comparison. If your project is written in D, Erlang, Rust, or some other less mainstream language, many SaaS CI/CD providers may be a no-go right from the start.

Further consideration is required when dealing with Docker containers. Some SaaS CI/CD providers offer first-class support for Docker containers, while other providers do not support them at all. If Docker is an integral part of your development and build process, some providers may be immediately disqualified from consideration due to this point alone.

Final Thoughts

As you can see, when it comes to determining the CI/CD tool that’s right for your team, there are numerous factors that should be considered, especially with regard to CI/CD tool cost. Fortunately, many SaaS CI/CD providers offer a free version of their service, which gives you the opportunity to test drive the service and ensure that it supports the languages, platforms, and services that your team uses. Just remember to keep cost scaling in mind before making your decision, as “changing horses” can be expensive should you find that your CI/CD tool cost scales disproportionately with the rest of your business.

In a future post, we will explore third-party integrations with CI/CD tools, with a focus on continuous delivery.
