Like other cloud providers, the Google Cloud Platform (GCP) charges for compute virtual machine instances by the amount of time they are running — which may lead you to search for a Google Cloud instance scheduling solution. If your GCP instances are only busy during or after normal business hours, or only at certain times of the week or month, you can save money by shutting these instances down when they are not being used.
GCP set-scheduling Command
If you were to do a Google search on “google cloud instance scheduling,” hoping to find out how to shut your compute instances down when they are not in use, you would see numerous promising links. The first couple of references appear to discuss how to set instance availability policies and mention a gcloud command-line interface command, “compute instances set-scheduling”. However, a little digging shows that these interfaces and commands simply describe how to fine-tune what happens when the underlying hardware for your virtual machine goes down for maintenance. The options in this case are to migrate the VM to another host (which appears to be a live migration) or to terminate the VM, plus whether the instance should be restarted if it is terminated. The documentation for the command goes so far as to say that the command is intended to let you set “scheduling options.” While it is great to have control over these behaviors, I feel I have to paraphrase Inigo Montoya – You keep using that word “scheduling” – I do not think it means what you think it means…
GCP Compute Task Scheduling
The next thing that looks schedule-like is the GCP Cron Service. This is a highly reliable networked version of the Unix cron service, letting you leverage the GCP App Engine services to do all sorts of interesting things. One article describes how to use the Cron Service and App Engine to schedule tasks to execute on your Compute Instances. With some App Engine code, you could use this system to start and stop instances as part of regularly recurring task sequences. This could be an excellent technique for controlling instances for scheduled builds, or calculations that happen at the same time each day, week, or month.
While very useful for certain tasks, this technique really lacks flexibility. GCP Cron Service schedules are configured by creating a cron.yaml file inside the App Engine application. The Cron Service triggers events in the application, and getting the application to do things like start/stop instances is left as an exercise for the developer. If you need to modify the schedule, you need to go back in and modify the cron.yaml. Also, it can be non-intuitive to build a schedule around your working hours, in that you would need one event for when you want to start an instance, and another for when you want to stop it. If you want to set multiple instances to be on different schedules, they would each need their own events. This brings us to the final issue: any given application is limited to 20 events for free, up to a maximum of 250 events for a paid application.
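To make that start/stop split concrete, here is a sketch of what such a cron.yaml might look like. The handler URLs are hypothetical (the App Engine code behind them is the part left to the developer), but the schedule syntax follows the App Engine cron format:

```yaml
cron:
# One event to start dev instances each weekday morning...
- description: start dev instances
  url: /tasks/start-instances    # hypothetical handler in your App Engine app
  schedule: every mon,tue,wed,thu,fri 08:00
  timezone: America/New_York
# ...and a second, separate event to stop them each evening.
- description: stop dev instances
  url: /tasks/stop-instances     # hypothetical handler
  schedule: every mon,tue,wed,thu,fri 18:00
  timezone: America/New_York
```

Note that one working-hours schedule for one group of instances already costs two events; with per-instance schedules, the 20-event free limit runs out quickly.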
ParkMyCloud Google Cloud Instance Scheduling
Google Cloud Platform and ParkMyCloud – mawwage – that dweam within a dweam….
Given the lack of other viable instance scheduling options, we at ParkMyCloud created a SaaS app to automate instance scheduling, helping organizations cut their monthly cloud bills by 65% or more on AWS, Azure, and, of course, Google Cloud.
We aim to provide a number of benefits that you won’t find with, say, the GCP Cron Service. ParkMyCloud:
- Automates the process of switching non-production instances on and off with a simple, easy-to-use platform – more reliable than the manual process of switching GCP Compute instances off via the GCP console.
- Provides a single-pane-of-glass view, allowing you to consolidate multiple clouds, multiple accounts within each cloud, and multiple regions within each account, all in one easy-to-use interface.
- Does not require a developer background, coding, or custom scripting. It is also more flexible and cost-effective than having developers write scheduling scripts.
- Can be used with a mobile phone or tablet.
- Avoids the hard-coded schedules of the Cron Service. Users can temporarily override schedules if they need to use an instance on short notice.
- Supports Teams and User Roles (with optional SSO), ensuring users will only have access to the resources you grant.
- Helps you identify idle instances by monitoring instance performance metrics, displaying utilization heatmaps, and automatically generating utilization-based “SmartParking” schedule recommendations, which you can accept or modify as you wish.
Getting started with ParkMyCloud is easy. Simply register for a free trial with your email address and connect your Google Cloud Platform account to allow ParkMyCloud to discover and manage your resources. A 14-day free trial gives your organization the opportunity to evaluate the benefits of ParkMyCloud while you only pay for the cloud computing power you use. At the end of the trial, there is no obligation on you to continue with our service, and all the money your organization has saved is, of course, yours to keep.
Have fun storming the castle!
How should CI/CD tool cost scaling, language support, and platform support affect your implementation decisions? In a previous post, we looked at the factors you should consider when choosing between a SaaS CI/CD tool vs. a self-hosted CI/CD solution. In this post, we will take a look at a number of other factors that should be considered when evaluating a SaaS CI/CD tool to determine if it’s the right fit for your organization, including cost scalability and language/platform support.
CI/CD Tool Cost Scaling
One thing that is important to keep in mind when deciding to use a paid subscription-based service is how the cost scales with your usage. There are a number of factors that can affect cost. In particular, some CI/CD SaaS services limit the number of build processes that can be run concurrently. For example, Codeship’s free plan allows only one concurrent build at a time. Travis CI’s travis-ci.org product offers up to 5 concurrent builds for open source projects, but (interestingly) their $69 USD/mo plan on travis-ci.com only offers 1 concurrent build. All of this means that increased throughput will likely result in increased cost. If you expect to maintain a steady level of throughput (that is, you don’t expect to add significantly more developers, which would require additional CI/CD throughput), then perhaps limits on the number of concurrent build processes are not a concern for you. However, if you’re planning on adding more developers to your team, you’ll likely end up having more build/test jobs that need to be executed. Limits may hamper your team’s productivity.
Another restriction you may run across is a limit on the total number of “build minutes” for a given subscription. In other words, the cumulative number of minutes that all build/test processes can run during a given subscription billing cycle (typically a month) is capped at a certain amount. For example, CircleCI’s free plan is limited to 1,500 build minutes per month, while their paid plans offer unlimited build minutes. Adding more developers to your team will likely result in additional build jobs, which will increase the required amount of build minutes per month, which may affect your cost. Additionally, increasing the complexity of your build/test process may result in longer build/test times, which will further increase the number of build minutes you’ll need during each billing cycle. The takeaway here is that if you have a solid understanding of how your team and your build processes are likely to scale in the future, then you should be well equipped to make a decision on whether the cost of a build minute-limited plan will scale adequately to meet your organization’s needs.
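A rough back-of-the-envelope estimator makes the build-minute math above concrete. This is only a sketch with illustrative assumptions (build counts, durations, and workdays are placeholders, not any provider's pricing):

```python
def monthly_build_minutes(developers, builds_per_dev_per_day,
                          minutes_per_build, workdays=22):
    """Rough estimate of build minutes consumed in one billing month."""
    return developers * builds_per_dev_per_day * minutes_per_build * workdays

def fits_plan(developers, builds_per_dev_per_day, minutes_per_build,
              plan_limit=1500):
    """Check an estimated workload against a build-minute cap,
    e.g. the 1,500-minute free tier mentioned above."""
    used = monthly_build_minutes(developers, builds_per_dev_per_day,
                                 minutes_per_build)
    return used <= plan_limit

# 3 developers, 2 builds/day each, 10-minute builds:
# 3 * 2 * 10 * 22 = 1,320 minutes, just under a 1,500-minute cap.
print(monthly_build_minutes(3, 2, 10))  # 1320
print(fits_plan(3, 2, 10))              # True
# Add a fourth developer with the same habits and the cap is blown.
print(fits_plan(4, 2, 10))              # False
```

Running the numbers this way for your own team size and build durations shows quickly whether a minute-capped plan leaves any headroom for growth.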
Though not directly related to cost scaling, it’s important to note that some CI/CD SaaS providers place a limit on the length of time allowed for any single build/test job, independent of any cumulative per-billing-cycle limitations. For example, Travis CI’s travis-ci.org product limits build jobs to 50 minutes, while jobs on their travis-ci.com product are limited to 120 minutes per build. Similarly, Atlassian’s Bitbucket Pipelines limits builds to 2 hours per job. These limits are probably more than sufficient for most teams, but if you have any long-running build/test processes, you should make sure that your jobs will fit within the time constraints set by your CI/CD provider.
CI/CD Language and Platform Support
Not all languages and platforms are supported by all SaaS CI/CD providers. Support for programming languages, operating systems, containers, and third-party software installation are just a few of the factors that need to be considered when evaluating a SaaS CI/CD tool. If your team requires Microsoft Windows build servers, you are immediately limited to a very small set of options, of which AppVeyor is arguably the most popular. If you need to build and test iOS or Android apps, you have a few more options, such as Travis CI, fastlane, and Bitrise, among others.
Programming languages are another area of consideration. Most providers support the most popular languages, but if you’re using a less popular language, you’ll need to choose carefully. For instance, Travis CI supports a huge list of programming languages, but most other SaaS CI/CD providers support only a handful by comparison. If your project is written in D, Erlang, Rust, or some other less mainstream language, many SaaS CI/CD providers may be a no-go right from the start.
Further consideration is required when dealing with Docker containers. Some SaaS CI/CD providers offer first-class support for Docker containers, while other providers do not support them at all. If Docker is an integral part of your development and build process, some providers may be immediately disqualified from consideration due to this point alone.
As you can see, when it comes to determining the CI/CD tool that’s right for your team, there are numerous factors that should be considered, especially with regard to CI/CD tool cost. Fortunately, many SaaS CI/CD providers offer a free version of their service, which gives you the opportunity to test drive the service to ensure that it supports the languages, platforms, and services that your team uses. Just remember to keep cost scaling in mind before making your decision, as the cost of “changing horses” can be expensive should you find that your CI/CD tool cost scales disproportionately with the rest of your business.
In a future post, we will explore third-party integrations with CI/CD tools, with a focus on continuous delivery.
We talked with Kurt Brochu, Senior Manager of the Cloud Enablement Team at Sysco Foods, about how his company has been using ParkMyCloud to empower end users to keep costs in check with the implementation of their cloud-only strategy.
Thanks for taking the time to speak with us today. I know we chatted before at re:Invent, where you gave us some great feedback, and we’re excited to hear more about your use of ParkMyCloud since it rolled out to your other teams.
To get started, can you describe your role at Sysco and what you do?
I’m senior manager here in charge of the cloud enablement team. The focus is on public cloud offerings, where we function as the support tier for the teams that consume those services. I also have ownership of ensuring that cost containment and appropriateness of use is being performed, as well as security and connectivity, network services, authentication, and DNS.
We don’t consider ourselves IT; our department is referred to as Business Technology. Our CTO brought us on 3 or 4 years ago with the expectation that we understand the business needs, wants, and desires, to actually service them as they would need versus passively telling them that their server is up or down.
Besides security and the dev team, the teams using cloud also include areas that are customer facing, like sales, and internal ones, like finance, business reporting, and asset management, and the list goes on.
Tell us about your company’s cloud usage.
We’ve had our own private cloud since 2003, offered on-prem. We’ve been in public cloud since 2013. Now, our position has gone from a “cloud-first” to a “cloud-only” strategy in the sense that any new workload that comes along is primarily put in public cloud. We primarily use AWS and are adding workloads to Azure as well.
Talk to me about how cost control fits into your cloud-only strategy. How did you realize there was a problem?
We were seeing around 20% month over month growth in expenditure between our two public clouds. Our budget wasn’t prepared for that type of growth.
We realized that some of the teams that had ability to auto-generate workloads weren’t best managing their resources. There wasn’t an easy way to show the expenses in a visual manner to present them to Sysco, or to give them some means to manage the state of their workloads.
The teams were good at building other pipelines for bringing workloads online but they didn’t have day-to-day capabilities.
How did you discover ParkMyCloud as a solution to your cost control problem?
We first stumbled upon ParkMyCloud at the 2016 AWS re:Invent conference and were immediately intrigued but didn’t have the cycles to look into it until this past summer, when we made the switch from a cloud-first to a cloud-only strategy.
We’ve been running ParkMyCloud since the week before re:Invent in 2017. From there, we had our first presentation to our leadership team in December 2017, where we showed that the uptick in savings was dramatic. It’s leveled off right now because we have a lot of new workloads coming in, but the savings are still noticeable. We still have developers who think that their dev system always has to be on and available at will, but they don’t understand that now that we have ParkMyCloud, making it “at will” is as simple as an API call or the click of a button. I expect our savings to grow over the rest of the calendar year.
We have 50+ teams and over 500 users on ParkMyCloud now.
That’s great to hear! So how much are you saving on your cloud costs with ParkMyCloud?
Our lifetime savings thus far is $28,000, and the tool has paid for itself pretty quickly.
We have one team who has over 40% savings on their workloads. They were spending on average about $10,000 a month, and now it’s at $5,800 because they leverage ParkMyCloud’s simplified scheduling start/stop capabilities.
What other benefits are you getting from your use of the platform?
What I really like is that we have given most of our senior directors, who actually own the budgets, access to the tool as well. It lets the senior directors, as well as the executives when I present to them, see the actual cost savings. It gives you the ability to shine light in places that people don’t like to have the light shine.
The development team at ParkMyCloud has also been very open to receiving suggestions and capabilities that will help us improve savings and increase user adoption.
That’s great, and please continue to submit your feedback and requests to us! And in that regard, have you tried our SmartParking feature to get recommended schedules based on your usage?
Yes, we have started to. When I’m asked by a team to show them how we suggest they use the tool, they get to decide whether or not to enforce it. I’ll say that they are exceedingly happy by the fact that they can go and see their usage. One developer is telling their team that the feature has to be on at all times.
Are there any other cost savings measures that you use in conjunction with ParkMyCloud or in addition?
We pull numbers and look at Amazon’s best prices guide for sizing. We also take the recommendations from ParkMyCloud and we cross compare those.
Do you have any other feedback for us?
The magic of ParkMyCloud is that it empowers the end user to make decisions for the betterment of business, and gives us the needed visibility to do our jobs effectively. That’s the bottom line. Each user has a decision: I can spend money on wasted resources or I can save it where I can and apply the savings to other projects. Once you start to understand that, then you have that “AHA” moment.
Before using ParkMyCloud, most developers have no awareness of the expense of their workloads. This tool allows me to unfilter that data so they can see, for example: this workload is $293 a month, every month. If you look at your entire environment, you’re spending $17,000 a month, but if you take it down just for the weekend, you could be saving $2-3,000 a month or more depending on how aggressive you want to be, without hurting your ability to support the business. It’s that “AHA” moment that is satisfying to watch.
That’s what we noticed immediately when we looked at the summary reports – the uptick that appears right after you have these presentations with the team makes your heart feel good.
Well thank you Kurt, again we really appreciate you taking the time to speak with us.
One common thing I hear from people evaluating tools for individual or team use is the need for a wide range of available service and cloud API integrations. At ParkMyCloud, we have the same mindset, which is why we’ve recently published a whole list of API endpoints that can be used to combine cloud cost savings with your existing tools and services to achieve continuous cost control!
The updated list of APIs is available at https://prod-api.parkmycloud.com/ and is publicly documented, so you can check it out now. Once you’re a ParkMyCloud customer, you can get an API key from us to start leveraging the cost-saving power of ParkMyCloud outside of our simple-to-use UI. There are tons of things you can write with these new APIs, so here are a few ideas, based on what pain points you need to automate:
1. If You Have Lots of Users: Active Directory Group-to-Role Mapping
With exposed APIs for creating Teams and Users in ParkMyCloud, pulling information from Active Directory or LDAP can make your initial setup a breeze. Each instance in ParkMyCloud is on exactly one team, while users can be a part of zero or more teams. This team membership is what determines the list of resources available to that user.
AD users typically have group memberships based on what level of access they need to AD-connected systems. By querying Active Directory programmatically, you can get a list of users and what groups they are a part of, then create corresponding teams and assign access based on those groups — saving yourself from doing it manually and keeping governance measures in place.
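A minimal sketch of that group-to-team mapping, assuming a naming convention where ParkMyCloud-relevant AD groups share a prefix. The prefix convention and the request payload shape are illustrative assumptions; check the published API documentation for the real field names:

```python
# Map raw AD/LDAP group memberships to ParkMyCloud team assignments.
# Convention (assumed): groups named "pmc-<team>" correspond to teams.

def teams_for_user(user_groups, prefix="pmc-"):
    """Extract team names from AD group names like 'pmc-devops'."""
    return sorted(g[len(prefix):] for g in user_groups if g.startswith(prefix))

def build_user_payload(email, user_groups):
    """Build a (hypothetical) create-user request body from AD data."""
    return {"email": email, "teams": teams_for_user(user_groups)}

payload = build_user_payload(
    "dev@example.com",
    ["pmc-devops", "pmc-qa", "domain-users"],  # raw AD memberships
)
print(payload)  # {'email': 'dev@example.com', 'teams': ['devops', 'qa']}
```

Running this over the full user list from an LDAP query, then POSTing each payload to the user-creation endpoint, handles initial setup in one pass instead of clicking through the UI per user.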
2. If You Have Lots of Cloud Accounts: Programmatically Import Them for Cost Control
More and more companies are moving to separate AWS accounts, Azure subscriptions, or Google Cloud projects and accounts. This makes it much easier to separate dev from test, QA from staging, and production from everything else that is non-production. I’ve talked to customers who have hundreds of accounts, to whom I’d recommend creating a programmatic way to add them.
3. If You’re a Managed Service Provider: Manage Customer Accounts in One Fell Swoop
If you’re a managed service provider (MSP), you’re always looking for additional value to add to the services you provide to your customers. The challenge you’ll face is that you have many customers, each with multiple cloud accounts, but you must keep distinct separation between clients. With ParkMyCloud, you can easily add value by automating your customers’ cloud cost savings. To manage several customers at once, use the ParkMyCloud API to easily connect multiple cloud accounts and create policies to handle each customer’s resources automatically. As the admin, you can use teams to keep things separated for multi-tenancy and user governance.
4. If You Spend Your Day in CI/CD Tools: Integrate Continuous Cost Control, Too
Plugging into CI/CD tools like Bamboo and Jenkins has always been the bread-and-butter use case for our API. Software build and continuous deployment pipelines often have lots of large servers sitting idle in between code commits, so ParkMyCloud can manage those instances with schedules that are easily overridden using the “snooze” API. Once an instance is snoozed (which is like a temporary override), it can then be toggled, used for the build or deployment, and then at the end of the snooze it gets powered back off. These few REST API calls can be added to the beginning of any build job, so you get cost savings with minimal changes to your setup.
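As a sketch of what the first step of that build-job wiring might look like: the code below constructs the snooze call a CI job would fire before using a parked build server. The endpoint path, header name, and resource ID here are placeholders for illustration; consult the published API documentation for the real ones:

```python
import json

API_BASE = "https://prod-api.parkmycloud.com"  # documented API host

def snooze_request(resource_id, minutes, api_key):
    """Describe the (hypothetical) snooze call: override the parking
    schedule for `minutes`, after which the schedule resumes and the
    instance is powered back off."""
    return {
        "method": "PUT",
        "url": f"{API_BASE}/resources/{resource_id}/snooze",  # placeholder path
        "headers": {
            "X-Api-Key": api_key,              # placeholder auth header
            "Content-Type": "application/json",
        },
        "body": json.dumps({"minutes": minutes}),
    }

# A Jenkins or Bamboo job might snooze for the expected build length
# plus some padding, then start the instance and run the build:
req = snooze_request("i-0abc123", minutes=45, api_key="YOUR_KEY")
print(req["method"], req["url"])
```

Sending this request (e.g. with the `requests` library) at the top of the build job, then toggling the instance on, is all the integration a typical pipeline needs.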
5. If The Higher-Ups are On Your Back: Visually Track Savings Data
The ParkMyCloud API allows you to pull the current month-to-date savings information, along with the current estimated 30-day savings that your instance schedules are creating. You can take this data and show it proudly on your existing intranet homepage or monitoring dashboard. You’ll show the bosses how much you’re saving on cloud costs and be the hero of the office, and it can really help drive more users and groups to save even more money.
6. If You’re A Huge Slacker (or Hipchatter): Utilize ChatOps For Quick Instance Management
Slack and HipChat have changed the game in the DevOps world. Not only can you get rapid feedback (one of the core DevOps tenets) from all of your tools and colleagues, but you can also use ChatBots to control your environment and workloads. ParkMyCloud can send notifications to your chat programs via webhooks, and you can use ChatBots to send API calls back to ParkMyCloud. Get notified in your dev Slack channel about a system being shut down, then immediately respond in the channel to override the schedule and let your team know that you’ve got it taken care of.
Got Any Cool Cloud API Integration Ideas?
We’d love to hear your ideas for how you might utilize these cloud API integrations with ParkMyCloud. Once you get that script written, open a pull request on our public GitHub repo so we can share it with the ParkMyCloud community. If you’ve got an idea but don’t know where to start, comment below or share the idea with us on Twitter and we can chat about implementation details. If you’re a programmer who wants to work on some integrations but aren’t yet saving money on your cloud bill, check out our free trial of ParkMyCloud today!
NCAA, Google Cloud? What does the cloud have to do with March Madness? Actually, public cloud is increasingly being used and promoted in sports. When you watch the tournament, Google Cloud ads will show prominently. Plus, the NCAA has chosen to run its infrastructure on Google Cloud Platform (GCP).
(By the way, have you done your bracket yet? I just did mine – I went chalk and picked Villanova. Couldn’t see my WVU Mountaineers winning it all).
So we will see and hear a lot of Google Cloud in the coming weeks. Google recently announced a multiyear sponsorship deal with the NCAA and will run these ads throughout the upcoming NCAA basketball tournament. Google is hoping to expand its cloud business by taking complex topics such as cloud computing, machine learning and artificial intelligence and making them relatable to a wider audience.
So why does it matter that the NCAA and Google Cloud will appear so prominently together this March Madness?
First of all, Google Cloud is always matching wits with the other major cloud providers — and in this case, they’ve had their hooks in various mainstream sporting leagues and events for several years. For example, did you notice the partnership between AWS and the National Football League (NFL)? Both AWS and NFL promote machine-learning capabilities — software that helps recognize patterns and make predictions — to quickly analyze data captured during games. The data could provide new kinds of statistics for fans and insights that could help coaches.
Second, there’s the infrastructure that supports these huge events. I can tell you as a sports fan that my mates and I will all be live streaming football, basketball, golf and soccer (yes, the English Premier League) on our phones and tablets wherever we are. We do this while watching the kids play sports, working in the office, and even while we are playing golf – hook it up to the cart (a buggy for my UK mates). Many of these content providers are using AWS, Microsoft Azure, GCP, and IBM Cloud to get this content to us in real time, and to analyze it and provide valuable insights for a better user experience.
Or take a look at the Masters golf tournament. Usually IBM and AT&T are big sponsors, although the Masters is usually very hush-hush about a lot of this. Last year there was a lot of talk of IBM Watson, the Masters, and the surreal experience they were able to deliver. This is a really good read on what went on behind the scenes and how Watson and IBM’s cloud delivered that experience. IBM used machine learning, visual recognition, speech-to-text, and cognitive computing to build a phenomenal user experience for Masters viewers and visitors.
The NCAA and Google Cloud are not just ad partners, but the NCAA is also a GCP customer. The NCAA is migrating 80+ years of historical and play-by-play data, from 90 championships and 24 sports to GCP. To start, the NCAA will tap into decades of historical basketball data using BigQuery, Cloud Spanner, Datalab, Cloud Machine Learning and Cloud Dataflow, to power the analysis of team and player performance. So Google Cloud not only gets advertising prominence for one of the most-watched events of the year, it gets a high-profile customer and one of the coolest use cases out there.
Enjoy the tournament – let’s go Cats!