How should CI/CD tool cost scaling, language support, and platform support affect your implementation decisions? In a previous post, we looked at the factors to consider when choosing between a SaaS CI/CD tool and a self-hosted CI/CD solution. In this post, we'll examine several other factors to weigh when evaluating a SaaS CI/CD tool, including cost scalability and language/platform support, to determine whether it's the right fit for your organization.
CI/CD Tool Cost Scaling
When deciding to use a paid subscription-based service, it's important to keep in mind how the cost scales with your usage. Several factors can affect cost. In particular, some CI/CD SaaS services limit the number of build processes that can run concurrently. For example, Codeship's free plan allows only one concurrent build at a time. Travis CI's travis-ci.org product offers up to 5 concurrent builds for open source projects, but (interestingly) their $69 USD/mo plan on travis-ci.com offers only 1 concurrent build. All of this means that increased throughput will likely result in increased cost. If you expect to maintain a steady level of throughput (that is, you don't expect to add significantly more developers, which would require additional CI/CD throughput), then limits on the number of concurrent build processes may not be a concern for you. However, if you're planning on adding more developers to your team, you'll likely end up with more build/test jobs to execute, and limits may hamper your team's productivity.
Another restriction you may run across is a limit on the total number of “build minutes” for a given subscription. In other words, the cumulative number of minutes that all build/test processes can run during a given billing cycle (typically a month) is capped. For example, CircleCI’s free plan is limited to 1,500 build minutes per month, while their paid plans offer unlimited build minutes. Adding more developers to your team will likely result in additional build jobs, which will increase the number of build minutes you need per month and may therefore affect your cost. Additionally, increasing the complexity of your build/test process may result in longer build/test times, further increasing the number of build minutes you’ll need during each billing cycle. The takeaway: if you have a solid understanding of how your team and your build processes are likely to scale, you should be well equipped to decide whether the cost of a build-minute-limited plan will scale adequately to meet your organization’s needs.
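To make the scaling concrete, here is a minimal back-of-the-envelope model of build-minute consumption. All of the team sizes, job counts, and job durations below are illustrative assumptions, not any provider's actual figures; only the 1,500-minute free-tier cap comes from the CircleCI example above.

```python
# Rough sketch of how build-minute usage scales with team size.
# The team sizes and job durations are illustrative assumptions.

def monthly_build_minutes(developers, jobs_per_dev_per_day, minutes_per_job, workdays=22):
    """Estimate total build minutes consumed per billing cycle."""
    return developers * jobs_per_dev_per_day * minutes_per_job * workdays

# A 5-person team each running 8 ten-minute jobs per workday:
current = monthly_build_minutes(5, 8, 10)     # 8,800 minutes/month
# Doubling the team doubles the minutes -- and may change your pricing tier:
projected = monthly_build_minutes(10, 8, 10)  # 17,600 minutes/month

free_tier_limit = 1500  # e.g., the CircleCI free-plan cap cited above
print(current > free_tier_limit)  # True -- a free plan won't cover this team
```

Running this kind of projection against each provider's tiers is a quick way to see where a plan's limits stop scaling with your headcount.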
Though not directly related to cost scaling, it’s important to note that some CI/CD SaaS providers place a limit on the length of time allowed for any single build/test job, independent of any cumulative per-billing-cycle limitations. For example, Travis CI’s travis-ci.org product limits build jobs to 50 minutes, while jobs on their travis-ci.com product are limited to 120 minutes per build. Similarly, Atlassian’s Bitbucket Pipelines limits builds to 2 hours per job. These limits are probably more than sufficient for most teams, but if you have any long-running build/test processes, you should make sure that your jobs will fit within the time constraints set by your CI/CD provider.
CI/CD Language and Platform Support
Not all languages and platforms are supported by all SaaS CI/CD providers. Support for programming languages, operating systems, containers, and third-party software installation are just a few of the factors that need to be considered when evaluating a SaaS CI/CD tool. If your team requires Microsoft Windows build servers, you are immediately limited to a very small set of options, of which AppVeyor is arguably the most popular. If you need to build and test iOS or Android apps, you have a few more options, such as Travis CI, fastlane, and Bitrise, among others.
Programming languages are another area of consideration. Most providers support the most popular languages, but if you’re using a less popular language, you’ll need to choose carefully. For instance, Travis CI supports a huge list of programming languages, but most other SaaS CI/CD providers support only a handful by comparison. If your project is written in D, Erlang, Rust, or some other less mainstream language, many SaaS CI/CD providers may be a no-go right from the start.
Further consideration is required when dealing with Docker containers. Some SaaS CI/CD providers offer first-class support for Docker containers, while other providers do not support them at all. If Docker is an integral part of your development and build process, some providers may be immediately disqualified from consideration due to this point alone.
As you can see, when it comes to determining the CI/CD tool that’s right for your team, there are numerous factors that should be considered, especially with regard to CI/CD tool cost. Fortunately, many SaaS CI/CD providers offer a free version of their service, which gives you the opportunity to test drive the service to ensure that it supports the languages, platforms, and services that your team uses. Just remember to keep cost scaling in mind before making your decision, as “changing horses” can be expensive should you find that your CI/CD tool cost scales disproportionately with the rest of your business.
In a future post, we will explore third-party integrations with CI/CD tools, with a focus on continuous delivery.
We talked with Kurt Brochu, Senior Manager of the Cloud Enablement Team at Sysco Foods, about how his company has been using ParkMyCloud to empower end users to keep costs in check with the implementation of their cloud-only strategy.
Thanks for taking the time to speak with us today. I know we chatted before at re:Invent, where you gave us some great feedback, and we’re excited to hear more about your use of ParkMyCloud since it rolled out to your other teams.
To get started, can you describe your role at Sysco and what you do?
I’m a senior manager here in charge of the cloud enablement team. The focus is on public cloud offerings, where we function as the support tier for the teams that consume those services. I also have ownership of ensuring that cost containment and appropriateness of use is being performed, as well as security and connectivity, network services, authentication, and DNS.
We don’t consider ourselves IT; our department is referred to as Business Technology. Our CTO brought us on 3 or 4 years ago with the expectation that we understand the business’s needs, wants, and desires and actually serve them as they would need, versus passively telling them that their server is up or down.
Besides security and the dev team, teams using the cloud also include areas that are customer facing, like sales, and internal ones, like finance, business reporting, and asset management; the list goes on.
Tell us about your company’s cloud usage.
We’ve had our own private cloud since 2003, offered on-prem. We’ve been in public cloud since 2013. Now, our position has gone from a “cloud-first” to a “cloud-only” strategy in the sense that any new workload that comes along is primarily put in public cloud. We primarily use AWS and are adding workloads to Azure as well.
Talk to me about how cost control fits into your cloud-only strategy. How did you realize there was a problem?
We were seeing around 20% month over month growth in expenditure between our two public clouds. Our budget wasn’t prepared for that type of growth.
We realized that some of the teams that had the ability to auto-generate workloads weren’t best managing their resources. There wasn’t an easy way to show the expenses in a visual manner to present them to Sysco, or to give them some means to manage the state of their workloads.
The teams were good at building other pipelines for bringing workloads online but they didn’t have day-to-day capabilities.
How did you discover ParkMyCloud as a solution to your cost control problem?
We first stumbled upon ParkMyCloud at the 2016 AWS re:Invent conference and were immediately intrigued but didn’t have the cycles to look into it until this past summer, when we made the switch from cloud-first to a cloud-only strategy.
We’ve been running ParkMyCloud since the week before re:Invent in 2017. From there, we had our first presentation to our leadership team in December 2017, where we showed that the uptick in savings was dramatic. It’s leveled off right now because we have a lot of new workloads coming in, but the savings are still noticeable. We still have developers who think that their dev system has to always be on and at will, but they don’t understand that now that we have ParkMyCloud, making it “at will” is as simple as an API call or the click of a button. I expect to see our savings grow over the rest of the calendar year.
We have 50+ teams and over 500 users on ParkMyCloud now.
That’s great to hear! So how much are you saving on your cloud costs with ParkMyCloud?
Our lifetime savings thus far is $28,000, and the tool has paid for itself pretty quickly.
We have one team who has over 40% savings on their workloads. They were spending on average about $10,000 a month, and now it’s at $5,800 because they leverage ParkMyCloud’s simplified scheduling start/stop capabilities.
What other benefits are you getting from your use of the platform?
What I really like is that we have given most of our senior directors, who actually own the budgets, access to the tool as well. It lets the senior directors, as well as the executives when I present to them, see the actual cost savings. It gives you the ability to shine light in places that people don’t like to have the light shine.
The development team at ParkMyCloud has also been very open to receiving suggestions and capabilities that will help us improve savings and increase user adoption.
That’s great, and please continue to submit your feedback and requests to us! And in that regard, have you tried our SmartParking feature to get recommended schedules based on your usage?
Yes, we have started to. When I’m asked by a team to show them how we suggest they use the tool, they get to decide whether or not to enforce it. I’ll say that they are exceedingly happy with the fact that they can go and see their usage. One developer is telling their team that the feature has to be on at all times.
Are there any other cost-saving measures that you use in conjunction with or in addition to ParkMyCloud?
We pull numbers and look at Amazon’s best prices guide for sizing. We also take the recommendations from ParkMyCloud and we cross compare those.
Do you have any other feedback for us?
The magic of ParkMyCloud is that it empowers the end user to make decisions for the betterment of business, and gives us the needed visibility to do our jobs effectively. That’s the bottom line. Each user has a decision: I can spend money on wasted resources or I can save it where I can and apply the savings to other projects. Once you start to understand that, then you have that “AHA” moment.
Before using ParkMyCloud, most developers had no awareness of the expense of their workloads. This tool allows me to unfilter that data so they can see, for example: this workload is $293 a month, every month. If you look at your entire environment, you’re spending $17,000 a month, but if you take it down just for the weekend, you could be saving $2-3,000 a month or more depending on how aggressive you want to be, without hurting your ability to support the business. It’s that “AHA” moment that is satisfying to watch.
That’s what we noticed immediately when we looked at the summary reports – the uptick that appears right after you have these presentations with the team makes your heart feel good.
Well thank you Kurt, again we really appreciate you taking the time to speak with us.
One common thing I hear from people evaluating tools for individual or team use is the need for a wide range of available service and cloud API integrations. At ParkMyCloud, we have the same mindset, which is why we’ve recently published a whole list of API endpoints that can be used to combine cloud cost savings with your existing tools and services to achieve continuous cost control!
The updated list of APIs is available at https://prod-api.parkmycloud.com/ and is publicly documented so you can check it out now. Once you’re a ParkMyCloud customer, you can get an API key from us to start leveraging the cost-saving power of ParkMyCloud outside of our simple-to-use UI. There are tons of things you can write with these new APIs, so here are a few ideas, based on what pain points you need to automate:
1. If You Have Lots of Users: Active Directory Group-to-Role Mapping
With exposed APIs for creating Teams and Users in ParkMyCloud, pulling information from Active Directory or LDAP can make your initial setup a breeze. Each instance in ParkMyCloud is on exactly one team, while users can be a part of zero or more teams. This team membership is what determines the list of resources available to that user.
AD users typically have group memberships based on what level of access they need to AD-connected systems. By querying Active Directory programmatically, you can get a list of users and what groups they are a part of, then create corresponding teams and assign access based on those groups — saving yourself from doing it manually and keeping governance measures in place.
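The core of such a script is just a mapping from AD group names to team rosters. The sketch below shows that mapping logic only; the `pmc-` group-naming convention, the sample users, and the idea that you'd feed the result into the Teams/Users endpoints are all assumptions for illustration, not part of the documented API.

```python
# Hypothetical sketch: derive ParkMyCloud team rosters from AD group
# memberships. The "pmc-" prefix convention is an assumption.

AD_GROUP_PREFIX = "pmc-"  # assumed convention: AD groups like "pmc-dev-team"

def groups_to_teams(users):
    """Given {username: [ad_groups]}, return {team_name: [usernames]}."""
    teams = {}
    for user, groups in users.items():
        for group in groups:
            if group.startswith(AD_GROUP_PREFIX):
                team = group[len(AD_GROUP_PREFIX):]
                teams.setdefault(team, []).append(user)
    return teams

# Sample data standing in for a real LDAP/AD query:
ad_users = {
    "alice": ["pmc-dev-team", "vpn-users"],
    "bob":   ["pmc-dev-team", "pmc-finance"],
}
print(groups_to_teams(ad_users))
# A real script would then create these teams and users via the API.
```

Because the mapping is derived from AD rather than maintained by hand, access in ParkMyCloud stays in sync with your existing governance process.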
2. If You Have Lots of Cloud Accounts: Programmatically Import Them for Cost Control
More and more companies are moving to separate AWS accounts, Azure subscriptions, or Google Cloud projects and accounts. This makes it much easier to separate dev from test, QA from staging, and production from everything else that is non-production. I’ve talked to customers who have hundreds of accounts, to whom I’d recommend creating a programmatic way to add them.
3. If You’re a Managed Service Provider: Manage Customer Accounts in One Fell Swoop
If you’re a managed service provider (MSP), you’re always looking for additional value to add to the services you provide to your customers. The challenge you’ll face is that you have many customers, each with multiple cloud accounts, but you must keep a distinct separation between clients. With ParkMyCloud, you can easily add additional value by automating your customers’ cloud cost savings. To manage several customers at once, use the ParkMyCloud API to easily connect multiple cloud accounts and create policies to handle each customer’s resources automatically. As the admin, you can use teams to keep things separated for multi-tenancy and user governance.
4. If You Spend Your Day in CI/CD Tools: Integrate Continuous Cost Control, Too
Plugging into CI/CD tools like Bamboo and Jenkins has always been the bread-and-butter use case for our API. Software build management and continuous deployment often leave lots of large servers sitting idle between code commits, so ParkMyCloud can manage those instances with schedules that are easily overridden using the “snooze” API. Once an instance is snoozed (a temporary schedule override), it can be started, used for the build or deployment, and then powered back off when the snooze ends. This pair of REST API calls can be added to the beginning of any build job, so you get cost savings with minimal changes to your setup.
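As a rough illustration, a build job might construct a snooze request like the sketch below before kicking off its work. Only the base URL comes from this post; the endpoint path, header name, payload field, and resource ID are assumptions for illustration, so check the published API docs for the real interface.

```python
# Hypothetical sketch of wrapping a build step with the "snooze" API.
# Endpoint path, header name, and payload field are assumptions.
import json
from urllib import request

API_BASE = "https://prod-api.parkmycloud.com"

def snooze_request(api_key, resource_id, minutes):
    """Build (but do not send) a snooze request for one instance."""
    body = json.dumps({"snooze": minutes}).encode()
    return request.Request(
        f"{API_BASE}/resources/{resource_id}/snooze",  # assumed path
        data=body,
        headers={"X-Auth-Token": api_key,              # assumed header
                 "Content-Type": "application/json"},
        method="PUT",
    )

# In a Jenkins/Bamboo job: snooze for 60 minutes, run the build, and let
# the normal schedule power the instance back off when the snooze expires.
req = snooze_request("my-key", "i-0abc123", 60)
print(req.get_method(), req.full_url)
```

A real job would send the request with `urllib.request.urlopen(req)` (or an equivalent HTTP client) as its first build step, then proceed with the build on the now-running instance.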
5. If The Higher-Ups are On Your Back: Visually Track Savings Data
The ParkMyCloud API allows you to pull the current month-to-date savings information, along with the current estimated 30-day savings that your instance schedules are creating. You can take this data and show it proudly on your existing intranet homepage or monitoring dashboard. You’ll show the bosses how much you’re saving on cloud costs and be the hero of the office, and it can really help drive more users and groups to save even more money.
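Turning that API response into a dashboard widget can be as simple as formatting a couple of numbers. In the sketch below, the JSON field names and dollar figures are assumptions about the payload shape, not the documented schema.

```python
# Hypothetical sketch of turning a savings API response into a dashboard
# banner. The response fields and values below are assumed, not documented.
import json

sample_response = json.dumps({
    "month_to_date_savings": 1843.27,
    "projected_30_day_savings": 2410.00,
})

def savings_banner(raw):
    """Format a savings payload as a one-line dashboard string."""
    data = json.loads(raw)
    return (f"Cloud savings so far this month: ${data['month_to_date_savings']:,.2f} "
            f"(projected 30-day: ${data['projected_30_day_savings']:,.2f})")

print(savings_banner(sample_response))
```

Dropping a line like this onto an intranet homepage keeps the savings visible without anyone having to log in to the console.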
6. If You’re A Huge Slacker (or Hipchatter): Utilize ChatOps For Quick Instance Management
Slack and HipChat have changed the game in the DevOps world. Not only can you get rapid feedback (one of the core DevOps tenets) from all of your tools and colleagues, but you can also use ChatBots to control your environment and workloads. ParkMyCloud can send notifications to your chat programs via Webhooks, and you can use ChatBots to send API calls back to ParkMyCloud. Get notified in your dev Slack channel about a system being shut down, then immediately respond in the channel to override the schedule and let your team know that you’ve got it taken care of.
Got Any Cool Cloud API Integration Ideas?
We’d love to hear your ideas for how you might utilize these cloud API integrations with ParkMyCloud. Once you get that script written, open a pull request on our public GitHub repo so we can share it with the ParkMyCloud community. If you’ve got an idea but don’t know where to start, comment below or share the idea with us on Twitter and we can chat about implementation details. If you’re a programmer who wants to work on some integrations but aren’t yet saving money on your cloud bill, check out our free trial of ParkMyCloud today!
NCAA, Google Cloud? What does the cloud have to do with March Madness? Actually, public cloud is increasingly being used and promoted in sports. When you watch the tournament, Google Cloud ads will feature prominently. Plus, the NCAA has chosen to run its infrastructure on Google Cloud Platform (GCP).
(By the way, have you done your bracket yet? I just did mine – I went chalk and picked Villanova. Couldn’t see my WVU Mountaineers winning it all.)
So we will see and hear a lot of Google Cloud in the coming weeks. Google recently announced a multiyear sponsorship deal with the NCAA and will run these ads throughout the upcoming NCAA basketball tournament. Google is hoping to expand its cloud business by taking complex topics such as cloud computing, machine learning, and artificial intelligence and making them relatable to a wider audience.
So why does it matter that the NCAA and Google Cloud will appear so prominently together this March Madness?
First of all, Google Cloud is always matching wits with the other major cloud providers — and in this case, they’ve had their hooks in various mainstream sporting leagues and events for several years. For example, did you notice the partnership between AWS and the National Football League (NFL)? Both AWS and NFL promote machine-learning capabilities — software that helps recognize patterns and make predictions — to quickly analyze data captured during games. The data could provide new kinds of statistics for fans and insights that could help coaches.
Second, there’s the infrastructure that supports these huge events. I can tell you as a sports fan that my mates and I will all be live streaming football, basketball, golf, and soccer (yes, the English Premier League) on our phones and tablets wherever we are. We do this while watching the kids play sports, working in the office, and even while playing golf – hook the phone up to the cart (a buggy for my UK mates). Many of these content providers are using AWS, Microsoft Azure, GCP, and IBM Cloud to get this content to us in real time, and to analyze it and provide valuable insights for a better user experience.
Or take a look at the Masters golf tournament. IBM and AT&T are usually big sponsors, although the Masters tends to be very hush-hush about a lot of this. Last year there was a lot of talk about IBM Watson, the Masters, and the surreal experience they were able to deliver. There is a really good read on what went on behind the scenes and how Watson and IBM’s cloud delivered that experience. IBM used machine learning, visual recognition, speech-to-text, and cognitive computing to build a phenomenal user experience for Masters viewers and visitors.
The NCAA and Google Cloud are not just ad partners; the NCAA is also a GCP customer. The NCAA is migrating 80+ years of historical and play-by-play data from 90 championships and 24 sports to GCP. To start, the NCAA will tap into decades of historical basketball data using BigQuery, Cloud Spanner, Datalab, Cloud Machine Learning, and Cloud Dataflow to power the analysis of team and player performance. So Google Cloud not only gets advertising prominence for one of the most-watched events of the year, it gets a high-profile customer and one of the coolest use cases out there.
In this blog we are going to examine how Microsoft Azure region pricing varies and how region selection can help you reduce cloud spending.
How Organizations Select Public Cloud Regions
Many comparisons go into the pricing differences between AWS, Azure, GCP, and the rest. At the end of the day, however, most organizations select one primary cloud service provider (CSP) for most of their workloads, plus perhaps another for multi-cloud redundancy of critical services. Once a provider is selected, organizations typically put many of their workloads in the region closest to their offices, plus maybe some geographic redundancy for their production systems. In other cases, a certain region is selected because it is the first to support some new CSP feature. As time goes by, other regions become options, either because those new features propagate through the system or because whole new regions are created.
CSP regions tend to cluster around certain larger geographic regions, which I will call “areas” for the purposes of this blog. Looking at Azure in particular, we can see that Azure has three major US areas (Western, Central, and Eastern). The Western and Eastern US areas each have two Azure regions, and the Central area has four. The UK, Europe, and Australia areas each have two Azure regions. There are a number of other Azure regions as well, but they are dispersed enough that I would consider them areas with a single region.
How Does Azure Region Pricing Vary?
With this regional distribution as a starting point, let’s look next at costs for instances. Here is a somewhat random selection of Azure region pricing data, looking at a variety of instance types (cost data as of approximately March 1, 2018).
While this graphic is a bit busy, a couple of things jump out at us:
Within most of the areas, there are clearly more expensive regions and less expensive regions.
The least expensive regions, on average across these instance types are us-west-2, us-west-central, and korea-south.
The most expensive regions are asia-pacific-east, japan-east, and australia-east.
Windows instances are about 1.5-3 times more expensive than their Linux-based counterparts.
Let’s zoom in on the Azure Standard_DS2_v2 instance type, which comprises almost 60% of the Azure instances customers are managing in the ParkMyCloud platform.
We can clearly see the relative volatility in the cost of this instance type across regions. And, while the Windows instance is about 1.5-2 times the cost of the Linux instance, the volatility is fairly closely mirrored across the regions.
Of more interest, however, is how costs can differ within a given area. From that comparison, we can see that there are real savings to be gained through careful region selection within an area:
Over the course of a year, strategic region selection of a Windows DS2 instance could save up to $578 for the asia-pacific regions, $298 for the us-east regions, and $228 for the Korean regions.
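The arithmetic behind those annual figures is straightforward: a small hourly price gap compounds over a year of 24/7 operation. The hourly rates in this sketch are illustrative placeholders, not actual Azure prices; only the rough $578 asia-pacific figure comes from the comparison above.

```python
# Annual savings from running one always-on instance in the cheaper of two
# regions. The hourly prices used here are illustrative, not Azure's.

HOURS_PER_YEAR = 24 * 365  # 8,760 hours

def annual_savings(price_expensive, price_cheap, hours=HOURS_PER_YEAR):
    """Yearly savings from choosing the cheaper region for a 24/7 instance."""
    return round((price_expensive - price_cheap) * hours, 2)

# A hypothetical $0.066/hr gap between two regions in the same area:
print(annual_savings(0.466, 0.400))  # roughly the $578 asia-pacific figure
```

Multiply that by a fleet of always-on instances and the region choice within an area becomes a meaningful budget line.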
How to Save Using Regions
By comparing regions within your desired “area” as illustrated above, the savings over a quantity of instances can be significant. Good region selection is fundamental to controlling Azure costs, and for costs across the other clouds as well.