The latest time-saving automation to add to your DevOps tool belt: ChatOps cloud cost control. That’s right – you may already be using ChatOps to make your life easier, but did you know that, among its other advantages, you can also use it to control your cloud resources?
Whatever communication platform you’re already using for chatting with your team members, you can use for chatting with your applications and services. And with the rise of ChatOps, that brings us to one of the questions we’ve been getting asked more frequently by our DevOps users: how can I manage schedules and instances from Slack, Microsoft Teams, Atlassian Stride, and other chat programs?
One of the cool things you can do using ChatOps is control your cloud resources through ParkMyCloud. Learn how it’s done in this quick YouTube demo:
ParkMyCloud has the ability to send messages to chat rooms via notifications and receive commands from chat bots via the API. This video details the Slackbot specifically, but similar bots can be used with Microsoft Teams or Atlassian Stride. There are multiple settings you can configure within Slack to manage your account, including notifications to let you know when a schedule is shutting an instance down. You can also set up the ability to override a schedule and turn the system on from Slack. Watch the video for a brief overview of how to:
Set up a notification that uses the Slack type
Adjust settings to be notified of user actions, parking actions, policy actions, and more
Set up the ParkMyCloud Slackbot to respond to notifications
Once you set up Slack with ParkMyCloud, you’ll be able to do anything you normally would in the UI or API: snooze and toggle instances to override their schedules, receive notifications, and control your account directly from your Slack chat room. The Slackbot is available on our GitHub. Give it a try, and enjoy full ChatOps control of your cloud costs!
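To make the notification side of this concrete, here is a minimal sketch of posting a parking notification to a Slack incoming webhook. The message text and the helper names are illustrative assumptions, not ParkMyCloud’s actual payload format – consult the Slackbot on our GitHub for the real implementation.

```python
import json
from urllib import request

def build_parking_notification(instance_name: str, action: str) -> dict:
    """Build a Slack message payload describing a parking action.

    The message wording here is a hypothetical example, not
    ParkMyCloud's actual notification format.
    """
    return {
        "text": f"ParkMyCloud: instance '{instance_name}' was {action} by its schedule."
    }

def post_to_slack(webhook_url: str, payload: dict) -> int:
    """POST the payload to a Slack incoming webhook; returns the HTTP status."""
    req = request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with request.urlopen(req) as resp:
        return resp.status

if __name__ == "__main__":
    payload = build_parking_notification("dev-web-01", "stopped")
    print(payload["text"])
```

To actually send a message, you would pass your workspace’s incoming-webhook URL (created in Slack’s app settings) to `post_to_slack`.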
How should CI/CD tool cost scaling, language support, and platform support affect your implementation decisions? In a previous post, we looked at the factors you should consider when choosing between a SaaS CI/CD tool vs. a self-hosted CI/CD solution. In this post, we will take a look at a number of other factors that should be considered when evaluating a SaaS CI/CD tool to determine if it’s the right fit for your organization, including cost scalability and language/platform support.
CI/CD Tool Cost Scaling
One thing that is important to keep in mind when deciding to use a paid subscription-based service is how the cost scales with your usage. There are a number of factors that can affect cost. In particular, some CI/CD SaaS services limit the number of build processes that can be run concurrently. For example, Codeship’s free plan allows only one concurrent build at a time. Travis CI’s travis-ci.org product offers up to 5 concurrent builds for open source projects, but (interestingly) their $69 USD/mo plan on travis-ci.com only offers 1 concurrent build. All of this means that increased throughput will likely result in increased cost. If you expect to maintain a steady level of throughput (that is, you don’t expect to add significantly more developers, which would require additional CI/CD throughput), then perhaps limits on the number of concurrent build processes are not a concern for you. However, if you’re planning on adding more developers to your team, you’ll likely end up having more build/test jobs that need to be executed, and limits may hamper your team’s productivity.
Another restriction you may run across is a limit on the total number of “build minutes” for a given subscription. In other words, the cumulative number of minutes that all build/test processes can run during a given subscription billing cycle (typically a month) is capped at a certain amount. For example, CircleCI’s free plan is limited to 1,500 build minutes per month, while their paid plans offer unlimited build minutes. Adding more developers to your team will likely result in additional build jobs, which will increase the required amount of build minutes per month, which may affect your cost. Additionally, increasing the complexity of your build/test process may result in longer build/test times, which will further increase the number of build minutes you’ll need during each billing cycle. The takeaway here is that if you have a solid understanding of how your team and your build processes are likely to scale in the future, then you should be well equipped to make a decision on whether the cost of a build minute-limited plan will scale adequately to meet your organization’s needs.
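A quick back-of-envelope calculation makes it easy to compare your likely usage against a build-minute cap like CircleCI’s 1,500 free minutes per month. The team size, push frequency, and build duration below are hypothetical inputs, not any provider’s published figures.

```python
def monthly_build_minutes(developers: int, pushes_per_dev_per_day: int,
                          minutes_per_build: int, workdays: int = 21) -> int:
    """Estimate cumulative build minutes consumed in one billing month.

    Assumes every push triggers exactly one build/test job; adjust for
    your own pipeline (e.g. matrix builds multiply this number).
    """
    builds_per_month = developers * pushes_per_dev_per_day * workdays
    return builds_per_month * minutes_per_build

# 5 developers pushing 4 times a day with 8-minute builds:
# 5 * 4 * 21 * 8 = 3,360 minutes/month -- more than double a
# 1,500-minute cap, so a minute-limited free plan would not fit.
print(monthly_build_minutes(5, 4, 8))
```

Running the same estimate with your projected headcount for next year shows quickly whether a minute-limited plan will keep scaling with you.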
Though not directly related to cost scaling, it’s important to note that some CI/CD SaaS providers place a limit on the length of time allowed for any single build/test job, independent of any cumulative per-billing-cycle limitations. For example, Travis CI’s travis-ci.org product limits build jobs to 50 minutes, while jobs on their travis-ci.com product are limited to 120 minutes per build. Similarly, Atlassian’s Bitbucket Pipelines limits builds to 2 hours per job. These limits are probably more than sufficient for most teams, but if you have any long-running build/test processes, you should make sure that your jobs will fit within the time constraints set by your CI/CD provider.
CI/CD Language and Platform Support
Not all languages and platforms are supported by all SaaS CI/CD providers. Support for programming languages, operating systems, containers, and third-party software installation are just a few of the factors that need to be considered when evaluating a SaaS CI/CD tool. If your team requires Microsoft Windows build servers, you are immediately limited to a very small set of options, of which AppVeyor is arguably the most popular. If you need to build and test iOS or Android apps, you have a few more options, such as Travis CI, fastlane, and Bitrise, among others.
Programming languages are another area of consideration. Most providers support the most popular languages, but if you’re using a less popular language, you’ll need to choose carefully. For instance, Travis CI supports a huge list of programming languages, but most other SaaS CI/CD providers support only a handful by comparison. If your project is written in D, Erlang, Rust, or some other less mainstream language, many SaaS CI/CD providers may be a no-go right from the start.
Further consideration is required when dealing with Docker containers. Some SaaS CI/CD providers offer first-class support for Docker containers, while other providers do not support them at all. If Docker is an integral part of your development and build process, some providers may be immediately disqualified from consideration due to this point alone.
As you can see, when it comes to determining the CI/CD tool that’s right for your team, there are numerous factors that should be considered, especially with regard to CI/CD tool cost. Fortunately, many SaaS CI/CD providers offer a free version of their service, which gives you the opportunity to test drive the service to ensure that it supports the languages, platforms, and services that your team uses. Just remember to keep cost scaling in mind before making your decision, as the cost of “changing horses” can be expensive should you find that your CI/CD tool cost scales disproportionately with the rest of your business.
In a future post, we will explore third-party integrations with CI/CD tools, with a focus on continuous delivery.
You may find yourself deciding whether to choose a CI/CD SaaS tool, or a self-hosted option. The continuous integration/continuous delivery platform market has grown over the last several years as DevOps becomes more mainstream, and now encompasses a huge variety of tools, each with their own flavor. While it’s great to have choices, it means that choosing the right tool can be a difficult decision. There are several factors to consider when choosing the right fit, including hosting, cost, scalability, and integration support. In this post, we will look at one of the biggest points of consideration: whether to choose a SaaS CI/CD service or a self-hosted system. This will be the first entry in a series of posts about how to choose the CI/CD system that is right for your team. Like everything, there are pros and cons to all solutions, and with the vast amount of CI/CD options available today, there’s no such thing as “one size fits all”.
Considerations for Choosing a CI/CD SaaS Tool
First, let’s take a look at the up-side to choosing a CI/CD SaaS tool. Like most SaaS products, one of the biggest benefits is that there is no hardware or software infrastructure to maintain. There’s no need to worry about server maintenance or applying software updates/patches: that’s all handled for you. In addition to the reduced ongoing maintenance burden, most SaaS CI/CD systems tend to be easy to set up, especially if you’re using a SaaS VCS (Version Control System) like GitHub or Bitbucket.
These are great points, but there are potential down-sides that must be considered. The cost of usage for a SaaS CI/CD solution may not scale nicely with your business. For example, the price of a SaaS CI/CD service may go up as your team gets larger. If you plan on scaling your team significantly, the cost of your CI/CD system could inflate dramatically. Furthermore, not all services support all platforms, tools, and environments. If you plan on introducing any new development technologies, you should make sure that they are supported by the CI/CD provider you choose.
Considerations for a Self-Hosted CI/CD Tool
While there are many attractive points in favor of a SaaS CI/CD service, a self-hosted solution is not without its merits. One potential benefit of a self-hosted solution is extensibility. Some self-hosted services can be customized with plugins/extensions to enable functionality that is not included “out of the box”. Jenkins is a prime example of this, with over 1,000 plugins available. Even without plugins, self-hosted CI/CD tools often have more support for development platforms, languages, and testing frameworks than many SaaS solutions. If there’s not first-class support (or a plugin/extension) for a technology that you use, you can usually make things work with some shell scripts and a little bit of creativity. In addition to extensibility, self-hosted solutions typically have fewer limitations on things like build configurations and concurrent build jobs. This isn’t always the case, however. The free version of TeamCity, a CI/CD tool from JetBrains, is limited to 100 build configurations and 3 build agents. Licenses for additional configurations and build agents are available for purchase, though.
Conversely, there are some potential down-sides to a self-hosted system. Perhaps the biggest of these is that you are required to manage your own infrastructure. This includes applying software updates/patches, and may include management of hardware if you’re not hosting the service on an IaaS platform like AWS, GCP, or Azure. In contrast to a SaaS solution, self-hosted systems may require a time-intensive process to get set up. Between getting the system linked up to your VCS (Version Control System), issue/ticket tracking software, and notification system(s), there can be a steep up-front effort in getting your CI/CD system initialized. In addition to the first-time setup, you may be required to manage authentication and authorization for users in your organization if the system you choose doesn’t support your organization’s user management system (LDAP, Google GSuite, etc.).
It is worth noting that some CI/CD SaaS tool providers offer self-hosted variants of their services. For instance, CircleCI offers an enterprise solution that can be self-hosted on your own networks, and Travis CI offers Travis CI Enterprise, which is optimized for deployment on Amazon EC2 instances. These offerings throw even more into the mix, and should be part of your consideration when determining which tool has the best fit.
As you can see, there are several factors that must be considered when choosing the CI/CD tool that is right for you. In this post, we discussed some of the trade-offs between SaaS and self-hosted systems. In future posts, we will look at other factors such as scalability, cost, and restrictions/limitations.
Recently, I’ve been on a few phone calls where I get asked about cost management of resources built in AWS using Terraform provisioning. One of the great things about working with ParkMyCloud customers is that I get a chance to talk to a lot of different technical teams from various types of businesses. I get a feel for how the modern IT landscape is shifting and trending, plus I get exposed to the variety of tools that are used in real-world use cases, like Atlassian Bamboo, Jenkins, Slack, Okta, and Hashicorp’s Terraform.
Terraform seems to be the biggest player in the “infrastructure as code” arena. If you’re not already familiar with it, the workflow is fairly straightforward and the benefits quickly become apparent. You take a text file, use it to describe your infrastructure down to the finest detail, then run “terraform apply” and it just happens. Then, if you need to change your infrastructure, or revert any unwanted changes, Terraform can apply the update or roll back to a known state. With support for AWS, Azure, VMware, Oracle, and many more, Terraform can be your one place for infrastructure deployment and provisioning.
How to Use Terraform Provisioning and ParkMyCloud with AWS Autoscaling Groups
I’ve talked to a few customers recently who utilize Terraform as their main provisioning tool, while ParkMyCloud is their ongoing cloud governance and cost control tool. Using these two systems together works well, but one common point of confusion is AWS’s Auto Scaling Groups. The question I usually get asked is how Terraform handles the changes that ParkMyCloud makes when scheduling ASGs, so let’s take a look at the interaction.
When ParkMyCloud “parks” an ASG, it sets the Min/Max/Desired to 0/0/0 by default, then sets the values for “started” to the values you had originally entered for that ASG. If you run “terraform apply” while the ASG is parked, then terraform will complain that the Min/Max/Desired values are 0 and will change them to the values you state. Then, when ParkMyCloud notices this during the next time it pulls from AWS (which is every 10 minutes), it will see that it is started and stop the ASG as normal.
If you change the value of the Min/Max/Desired in Terraform, this will get picked up by ParkMyCloud as the new “on” values, even if the ASG was parked when you updated it. This means you can keep using Terraform to deploy and update the ASG, while still using ParkMyCloud to park the instances when they’re idle.
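The interaction described above can be sketched as a small state model. In AWS, the zeroing and restoring of an ASG maps to boto3’s `autoscaling.update_auto_scaling_group(AutoScalingGroupName=..., MinSize=..., MaxSize=..., DesiredCapacity=...)` call; the class below is a self-contained illustration of the logic, not ParkMyCloud’s implementation.

```python
class AsgSchedule:
    """Toy model of how a parking tool treats an Auto Scaling Group.

    'live' is what AWS currently reports; 'started' is the sizing to
    restore when the ASG is un-parked. Purely illustrative.
    """

    def __init__(self, min_size: int, max_size: int, desired: int):
        self.live = {"min": min_size, "max": max_size, "desired": desired}
        self.started = dict(self.live)

    def park(self) -> None:
        """Remember the current sizing, then zero Min/Max/Desired."""
        self.started = dict(self.live)
        self.live = {"min": 0, "max": 0, "desired": 0}

    def start(self) -> None:
        """Restore the sizing that was in effect before parking."""
        self.live = dict(self.started)

    def terraform_apply(self, min_size: int, max_size: int, desired: int) -> None:
        """Terraform overwrites the live values; on its next poll of AWS,
        the parking tool adopts these as the new 'on' values (and re-parks
        the ASG if its schedule says it should be off)."""
        self.live = {"min": min_size, "max": max_size, "desired": desired}
        self.started = dict(self.live)

asg = AsgSchedule(min_size=1, max_size=4, desired=2)
asg.park()
print(asg.live)    # zeroed while parked
asg.start()
print(asg.live)    # original sizing restored
```

The key point the model captures is that parking never loses the Terraform-declared sizing: it is stashed as the “started” values and re-applied on start.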
How to Use Terraform to Set Up ParkMyCloud
If you currently leverage Terraform provisioning for AWS resources but don’t have ParkMyCloud connected yet, you can also utilize Terraform to do the initial setup of ParkMyCloud. Use this handy Terraform script to create the necessary IAM Role and Policy in your AWS account, then paste the ARN output into your ParkMyCloud account for easy setup. Now you’ll be deploying your instances as usual using Terraform provisioning while parking them easily to save money!
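For context on what that Terraform script sets up, the heart of it is a cross-account IAM trust policy that lets ParkMyCloud assume a role in your account. The sketch below shows the general shape of such a policy; the account ID and external ID are placeholders – use the values provided in your own ParkMyCloud account, not these.

```python
import json

# Generic shape of a cross-account AssumeRole trust policy. The
# <PARKMYCLOUD_ACCOUNT_ID> and <YOUR_EXTERNAL_ID> values are
# placeholders, not real identifiers.
TRUST_POLICY = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            # The external account allowed to assume this role.
            "Principal": {"AWS": "arn:aws:iam::<PARKMYCLOUD_ACCOUNT_ID>:root"},
            "Action": "sts:AssumeRole",
            # External ID guards against the "confused deputy" problem.
            "Condition": {
                "StringEquals": {"sts:ExternalId": "<YOUR_EXTERNAL_ID>"}
            },
        }
    ],
}

print(json.dumps(TRUST_POLICY, indent=2))
```

Whether you create the role with Terraform, the AWS console, or boto3, the resulting role ARN is what you paste into ParkMyCloud to link the account.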
One of the more popular trends in public cloud adoption is the use of serverless computing in AWS, Microsoft Azure, and Google Cloud. All of the major public cloud vendors offer serverless computing options, including databases, functions/scripts, load balancers, and more. When designing new or updated applications, many developers are looking at serverless components as an option. This new craze is coming at a time when the last big thing, containers, is still around and a topic of conversation. So, when users are starting up new projects or streamlining applications, will they stick with traditional virtual machines or go with a new paradigm? And out of all these buzzy trends, will anything come out on top and endure?
Virtual Machines: The Status Quo
The “traditional” approach to deployment of an application is to use a fleet of virtual machines running software on your favorite operating system. This approach is what most deployments have been like for 20 years, which means that there are countless resources available for installation, management, and upkeep. However, that also means you and your team have to spend the time and energy to install, manage, and keep that fleet going. You also have to plan for things like high availability, load balancing, and upgrades, as well as decide if these VMs are going to be on-prem or in the cloud. I don’t see the use of virtual machines declining anytime soon, but there are better options for some use cases.
Containers: The New Hotness, But Too Complex to be Useful
Containerization involves isolating an application by making it think it’s the only application on a server, with only the hardware available that you allow. Containers can divide up a virtual machine in a similar way that virtual machines can divide up a physical server. This idea has been around since the early 1980s, but has really started to pick up steam due to the release of Docker in 2013. The main benefits of containerization are the ability to maximize the utilization of physical hardware while deploying pieces of a microservices architecture that can easily run on any OS.
This sounds great in theory, but there are a couple of downsides to this approach. The primary problem is the additional operational complexity: you still have to manage the physical hardware and the virtual machines, along with the container orchestration, without much of a performance boost. The added complexity without removing any current orchestration means that you now have to think about more, not less. You also need to build in redundancy, train your users and developers, and ensure communication between pieces on top of your existing physical and virtual infrastructure.
Speaking of container orchestration, the other main downside is the multitude of options surrounding containers and their management, as there’s no one clear choice of what to use (and it’s hard to tell if any of the existing ones will just go away one day and leave you with a mess). Kubernetes seems to be the front runner in this area, but Apache Mesos and Docker Swarm are big players as well. Which do you choose, and do you force all users and teams to use the same one? What if the company who manages those applications makes a change that you didn’t plan for? There are a lot of questions and unknowns, along with the fact that you have to make a choice that could have ramifications for years to come.
Serverless Computing: Less Setup, More Functionality
When users or developers are working on a project that involves a database and some Python scripts, they just want the database and the scripts, not a server running database software and a server running scripts. The main idea behind serverless architecture is to eliminate all the overhead that comes along with these requests for specific software. This is a big benefit to those who just want to get something up and running without installing operating systems, tweaking configuration files, and worrying about redundancy and uptime.
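To see how little surrounds the business logic, here is a minimal AWS Lambda-style handler: the function itself is the entire deployment unit, with no OS to install or server to patch. The event shape below is a made-up example, not any specific AWS trigger format.

```python
def handler(event: dict, context=None) -> dict:
    """Lambda-style entry point: takes an event, returns a response.

    The {"name": ...} event and the response shape are illustrative
    assumptions for this sketch.
    """
    name = event.get("name", "world")
    return {"statusCode": 200, "body": f"Hello, {name}!"}

if __name__ == "__main__":
    # Simulate an invocation locally; in the cloud, the platform calls
    # handler() for you on each trigger (HTTP request, queue message, etc.).
    print(handler({"name": "serverless"})["body"])
```

Everything else – scaling, patching, availability – is the provider’s problem, which is exactly the appeal described above.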
This isn’t all sunshine and rainbows, however. One of the big downsides to serverless comes hand-in-hand with that reduced complexity, in that you also typically have reduced customization. Running an older database version or having a long-running python function might not be possible using serverless services. Another possible downside is that you are typically locked in to a vendor once you start developing your applications around serverless architecture, as the APIs are often going to be vendor-specific.
That being said, it appears that the reduced complexity is a big deal for the users who want things to “just work”. Dealing with fewer headaches and less management so they can get creative and deploy some cool applications is one of the main goals of folks who are trying to push the boundaries of what’s possible. If Amazon, Microsoft, or Google want to handle database patching and Python versioning so you don’t have to, then let them deal with it and move on to the fun stuff!
Here at ParkMyCloud, we’re doing a mix of serverless and traditional virtual machines to maximize the benefits and minimize the overhead for what we do. By using serverless where it makes sense without forcing a square peg into a round hole, we can run virtual machines to handle the code we’ve already written while using serverless architecture for things like databases, load balancing, and email messages. We’re starting to see more customers going with this approach as well, who then use ParkMyCloud to keep the costs of virtual machines low when they aren’t in use. (If you’d like to do the same, check out a trial of ParkMyCloud to get your hybrid infrastructure optimized.)
When it comes to development and operations, there are numerous decisions to make that all have pros and cons. Serverless architecture is the latest deployment option available, and it clearly helps reduce complexity and accounts for things that may give you headaches. The reduced portability is something that containers handle really well, but they involve more complexity in deployment and ongoing management. Software installed on virtual machines is a tried-and-true method, but does mean you are doing a lot of the work yourself. It’s the fact that serverless computing is so simple to implement that makes it more than a trend: this is a paradigm that will endure, where containers won’t.
2017 was a big year for ParkMyCloud and automated cloud cost control. From working closely with our customers and understanding industry trends, we continued to strengthen and grow our cloud cost control platform, continuously innovating and adding new features to make ParkMyCloud easier to use, more automated, and better at what we do best: saving you money on your cloud costs. Here are the highlights of what improved in ParkMyCloud during 2017:
Auto-Scheduling for Microsoft Azure
You asked, we answered. After a year of growth and success with optimizing cloud resources for users of Amazon Web Services (AWS), ParkMyCloud broadened its appeal by optimizing and reducing cloud spend for Microsoft Azure. CEO Jay Chapel weighed in, “Support for Azure was the top requested feature, so today’s launch will help us drive even bigger growth during 2017 as we become a go-to resource for DevOps and IT users on all the major cloud service providers.”
Single Sign-On with SAML
In February, signing into ParkMyCloud became easier than ever with support for single sign-on using SAML. Signing in is simple – use your preferred identity provider for a more streamlined experience, reduce the number of credentials you need to remember and type in, and use SSO for security by keeping a single point of authentication.
Free Tier for ParkMyCloud
This release gave users the option for free cloud optimization using ParkMyCloud – forever. The free tier option was created to support developers who were resorting to writing their own scheduling scripts in order to turn off non-production resources when not in use, saving not only money, but lots of time.
Support for Google Cloud Platform
In addition to AWS and Azure, ParkMyCloud added support for Google Cloud Platform, making automated cost savings available for all of the ‘big three’ cloud service providers. With the new addition, ParkMyCloud’s continuous cost control platform covered a majority of the $23 billion public cloud market, enabling enterprises to eliminate wasted cloud spend – an estimated $6 billion problem for 2017, projected to become a $17 billion problem by 2020.
Stop/Start for AWS RDS Instances
In June, ParkMyCloud announced that it would now be offering “parking” for AWS RDS instances, allowing users to automatically put database resources on on/off schedules, so they only pay for what they’re actually using. This was the first parking feature on the market to be fully integrated with AWS’s RDS start/stop capability.
Notifications via Slack and Email
You asked, we answered (again). This user-requested feature improved the user experience by providing notifications about your environment and ParkMyCloud account via email, Slack, and other webhooks. Notifications include information about parking actions, system errors, and more. Additionally, ParkMyCloud’s SlackBot allows users to manage resources and schedules through their Slack channel.
Cloud Savings Dashboard
After turning two, ParkMyCloud continued shaping and growing its vision with a new reporting dashboard. This feature made it easy to access reports, providing greater insight into cloud costs, team rosters, and more.
Mobile App for Cloud Cost Optimization
In the last two months of 2017, ParkMyCloud was not about to slow down. Cloud cost optimization reached a new level with the addition of the new ParkMyCloud mobile app. Users are now able to park idle instances directly from their mobile devices. Reduce cloud waste and cut monthly spend by 65% or more, now with even more capability and ease of use.
AWS Utilization Metric Tracking
With this release, ParkMyCloud integrated with AWS CloudWatch to give AWS users resource utilization data for EC2 instances, viewable through customizable heatmaps. The update gives information about how resources are being used, providing the groundwork for ParkMyCloud’s next release coming soon – SmartParking and SmartSizing.
Animated Heat Maps
Building on the November release of static heat maps displaying AWS EC2 utilization metrics, ParkMyCloud used the utilization data to create animated heat maps. This new feature helps users better identify usage patterns over time and create automated parking schedules. Data is displayed and mapped to a sequence of time, in the form of an animated “video.”
Coming in 2018…
2017 is over, but there’s no end in sight for ParkMyCloud and automated cloud cost control. In addition to all the features we added last year to make cloud cost automation easy, simple, and more available, we have even more in store for our users in 2018. Coming soon, ParkMyCloud will introduce SmartParking, SmartSizing, PaaS ‘parking’, support for AliCloud and more. Stay tuned for another year of updates, new releases, and saving money on cloud costs with ParkMyCloud.