Historically, the primary benefit of software development in the cloud has been the opportunity to access a massive computing infrastructure without the capital costs of procurement. This opportunity has driven the growth of cloud computing services over the past decade into an industry now worth more than $200 billion.

As cloud computing services have developed, other factors – such as high-speed processing, advances in security and smart computing architectures – have influenced organizations to adopt a digital business strategy and make the change from legacy IT systems to cloud-based services. The opportunity to work from anywhere has also been a driving factor.

However, cloud computing services come at a price. They are not always as scalable as implied, and the management of cloud-based applications can be complex. For this reason, organizations evaluating the benefits of software development in the cloud should also evaluate the benefits of cloud management software.

Cloud management software overcomes many of the issues associated with software development in the cloud. Organizations can reduce cloud computing costs by temporarily stopping non-production instances and VMs when not required. Administrators can also obtain a single view of all their cloud-based applications, data and services to facilitate budget and capacity planning.

If your organization would like to enjoy the benefits of software development in the cloud without experiencing cost, scalability and management issues, you are invited to take advantage of a free thirty-day trial of ParkMyCloud – a versatile Software-as-a-Service app that can reduce cloud compute costs by up to 60% and save organizations valuable time in cloud administration and management.

For further details of our free trial offer, contact us today.

How to Approach the Challenges of DevOps in Large Organizations

Implementing DevOps practices in small organizations seems like standard practice, but what if you’re trying to utilize DevOps in large organizations? Trying to modernize workflows can be a challenge for any company, but bigger companies face different challenges, risks, and benefits. Let’s take a look at how enterprises might approach a DevOps transformation through a few of the core tenets of DevOps.

Rapid Feedback

There are a few different forms of feedback that come with DevOps: automated feedback about specific code (typically through unit and integration testing software), personal feedback from other team members, consumer feedback from customers using your product, and cross-team feedback throughout the organization. Startups and small companies may find it easier to have open lines of communication between individual team members as well as across teams.

Large organizations will need to make a conscious effort to keep team communication open. On the other hand, they will have more resources available (both money and employees) to field customer and in-house feedback about individual services or larger products. They may also be better positioned to purchase and implement automated testing and CI/CD tools, which leads to…

Automation

One of the biggest technical benefits of a DevOps approach is automating away the manual tasks that bog down critical projects. Large organizations often have the time, money, and people to set up automated tools, like CI/CD pipelines, unit and integration test suites, and config management systems. The biggest challenge in the enterprise world is trying to make everyone happy.

One approach is to standardize on a single tool for each purpose, such as Jenkins or Chef. This can enable your IT staff to specialize in those tools, but may make some users unhappy with being forced into a tool they may not prefer. The alternative is to allow each team or business unit to use their own preferred software, but this can turn into a “toolset hell” with a mashup of every combination of applications within your organization. Each approach has its pros and cons, and often comes down to a management decision.

Eliminating Silos

Having individual teams that handle their part of the puzzle and nothing else is the biggest hurdle that enterprises face when trying to apply DevOps principles. The combination of ‘dev’ and ‘ops’ (and other disciplines, like ‘sec’ and ‘fin’) is naturally split out in a large organization, so recombining them can be a huge undertaking. Then again, that gap is exactly the problem the DevOps approach seeks to solve.

Some companies solve this by having a separate team that handles the cross-team support and communication. Other companies break down these silos by enabling employees to seamlessly migrate between teams depending on the project or application. The more “devopsy” method is to utilize ChatOps and centralized documentation repositories for open communication and collaboration, which can help unify the distinct teams.

Holistic Thinking

The idea of holistic thinking tends to come easier to larger organizations, as successful enterprises typically have a system in place for “big picture” thinking, either through a management or product team, or through a cross-functional committee. That said, communication of this vision down to the employees, along with communication up to that management team, is crucial for enabling outside-the-box thinking to get past any roadblocks and hurdles that are in the way of creating and deploying the end product. Sometimes, the hardest part is convincing programmers that not everything needs to be solved with code!

DevOps in Large Organizations: Challenging but Rewarding

Some folks think that DevOps only applies to startups and small companies, but we’re seeing more and more teams benefit from implementing DevOps in large organizations. The benefits of the above DevOps principles are numerous, but frequently come with a different set of challenges based on your organizational size. Once you are aware of those challenges and have a plan to overcome them, you can start to transform your enterprise to a DevOps shop.


The #1 ParkMyCloud Alternative

Sometimes we ask potential customers what their top ParkMyCloud alternative is. Usually, they don’t have one, but sometimes, they’re considering scripting their own on/off solution instead.

It makes sense: at first glance, scheduling cloud resources looks like a simple problem – it’s easy to say, “my team can write a scheduler.” However, there are more factors than you may have considered – including cost optimization over a variety of resources, maintenance time, visibility and reporting, opportunity cost, and more.

11 Things to Include in Your Scripts – Besides Scheduling

While you may be able to write scripts to turn resources on and off on a schedule, there are a number of associated functionalities that would be more difficult and time consuming to build (a minimal example of such a script follows the list):

  1. Multi-account/user – scripting typically doesn’t support multi-cloud/multi-user/multi-account access, and it is difficult to support existing team structures and ensure appropriate controls
  2. Schedule override – difficult to let users override schedules when they need access to resources during scheduled off-hours
  3. Custom usage-based schedules – must determine a way to create custom schedules per resource based on usage analytics
  4. Logical Groups – hard to find a way to let users group resources and start/stop sequentially
  5. Scale group parking – must develop means to create a single view and the ability to manage and start/stop scale groups
  6. On-demand access – must develop a process to enable on-demand access to stopped instances in off hours
  7. Visibility – need to develop custom application to determine cost savings based upon application of automation or removal of schedules (to date we have not encountered anyone who has developed such an application)
  8. Reporting – not only do cost savings need to be tracked, they need to be reportable via ad hoc utilization, savings, and scheduling reports over arbitrary date ranges
  9. Policies – difficult to build custom policies regarding the scheduling of instances like “Never Park” or “Snooze Only”
  10. Standardization – difficult to ensure consistency and standardization of automation approach across entire organization unless highly centralized
  11. Easy-to-use UI for non-developers – no easy way to create a UI that allows you to devolve management of cloud resources to non-technical teams who may not be familiar with the cloud provider console
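
For a sense of what “scripting your own on/off solution” typically looks like, here is a minimal sketch in Python with boto3 – the tag name, hours, and single-region scope are illustrative assumptions, not a recommended design. Note how little of the list above it covers:

    # Minimal homegrown scheduler: stops EC2 instances tagged
    # "Schedule=office-hours" outside 8am-6pm, one region, one account.
    # The tag name and hours are illustrative assumptions.
    import datetime

    import boto3

    def enforce_schedule(region="us-east-1"):
        ec2 = boto3.client("ec2", region_name=region)
        in_office_hours = 8 <= datetime.datetime.now().hour < 18

        # Find instances that opted in to the schedule via a tag.
        reservations = ec2.describe_instances(
            Filters=[{"Name": "tag:Schedule", "Values": ["office-hours"]}]
        )["Reservations"]
        instance_ids = [i["InstanceId"]
                        for r in reservations for i in r["Instances"]]

        if not instance_ids:
            return
        if in_office_hours:
            ec2.start_instances(InstanceIds=instance_ids)
        else:
            ec2.stop_instances(InstanceIds=instance_ids)

    if __name__ == "__main__":
        enforce_schedule()  # typically run from cron

Even this toy version needs credentials, a place to run, and a cron entry – before it addresses any of the eleven items above.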

ParkMyCloud provides you the ability to do all of the above – with no scripting necessary. See a full comparison here.

The Cost of Scripting

If you’re interested in automating on/off times for your cloud resources, then you’re probably interested in optimizing costs. So don’t lose sight of the cost behind “building” – the man-hours and opportunity cost. After all, every time you have your team working on creating solutions for side projects, you distract them from your core business activities.

And it will take more time than you think. In addition to the functionality listed above, consider the following maintenance tasks:

  • Must keep up-to-date on changes to public cloud APIs
  • Must keep up-to-date on change/updates to public cloud services
  • When your business’s desired policies, schedules, or behavior change, must update and test

Is Scripting a Viable ParkMyCloud Alternative?

Of course, it’s up to you to determine whether scripting is a worthwhile ParkMyCloud alternative for your business. We’d say it’s not worth the cost and the sacrifice of value. Besides, ParkMyCloud users save an average of $12 on their cloud bills per dollar spent on the product – that’s an ROI that will keep your finance team happy. And that’s just the paid versions. If it’s still hard for you to justify, then use ParkMyCloud’s free tier – with no cost, there’s no reason to waste your time scripting.

Ready to try the easy way? Get started.


How to: ChatOps Cloud Cost Control

The latest time-saving automation to add to your DevOps tool belt: ChatOps cloud cost control. That’s right – you may already be using ChatOps to make your life easier, but did you know that, among its other advantages, you can also use it to control your cloud resources?

Whatever communication platform you’re already using for chatting with your team members, you can use for chatting with your applications and services. And with the rise of ChatOps, that brings us to one of the questions we’ve been asked more frequently by our DevOps users: how can I manage schedules and instances from Slack, Microsoft Teams, Atlassian Stride, and other chat programs?

One of the cool things you can do using ChatOps is control your cloud resources through ParkMyCloud. Learn how it’s done in this quick YouTube demo:

ParkMyCloud has the ability to send messages to chat rooms via notifications and receive commands from chat bots via the API. This video details the Slackbot specifically, but similar bots can be used with Microsoft Teams or Atlassian Stride. There are multiple settings you can configure within Slack to manage your account, including notifications to let you know when a schedule is shutting an instance down. You can also set up the ability to override a schedule and turn the system on from Slack. Watch the video for a brief overview of how to:

  • Set up a notification that uses the Slack type
  • Adjust settings to be notified of user actions, parking actions, policy actions, and more
  • Set up the ParkMyCloud Slackbot to respond to notifications
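
To make the notification mechanics concrete, here is a rough sketch of posting a parking-style message to a Slack incoming webhook in Python – the webhook URL and message text are placeholders, not ParkMyCloud’s actual notification payload:

    # Post a parking-style notification to a Slack incoming webhook.
    # The webhook URL and message text are placeholders.
    import json
    import urllib.request

    WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder

    def notify(text):
        req = urllib.request.Request(
            WEBHOOK_URL,
            data=json.dumps({"text": text}).encode("utf-8"),
            headers={"Content-Type": "application/json"},
        )
        urllib.request.urlopen(req)

    notify("Schedule 'office-hours' will stop instance web-dev-01 in 15 minutes.")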

Once you set up Slack with ParkMyCloud, you’ll be able to do anything you normally would in the UI or API – snooze and toggle instances to override their schedules, receive notifications, and control your account – directly from your Slack chat room. The Slackbot is available on our GitHub. Give it a try, and enjoy full ChatOps control of your cloud costs!


How to Choose a CI/CD Tool: Cost Scaling, Languages, Platforms, and More

How should CI/CD tool cost scaling, language support, and platform support affect your implementation decisions? In a previous post, we looked at the factors you should consider when choosing between a SaaS CI/CD tool vs. a self-hosted CI/CD solution. In this post, we will take a look at a number of other factors that should be considered when evaluating a SaaS CI/CD tool to determine if it’s the right fit for your organization, including cost scalability and language/platform support.

CI/CD Tool Cost Scaling

One thing that is important to keep in mind when deciding to use a paid subscription-based service is how the cost scales with your usage. There are a number of factors that can affect cost. Particularly, some CI/CD SaaS services limit the number of build processes that can be run concurrently. For example, Codeship’s free plan allows only one concurrent build at a time. Travis CI’s travis-ci.org product offers up to 5 concurrent builds for open source projects, but (interestingly) their $69 USD/mo plan on travis-ci.com only offers 1 concurrent build. All of this means that increased throughput will likely result in increased cost. If you expect to maintain a steady level of throughput (that is, you don’t expect to add significantly more developers, which would require additional CI/CD throughput) then perhaps limits on the number of concurrent build processes is not a concern for you. However, if you’re planning on adding more developers to your team, you’ll likely end up having more build/test jobs that need to be executed. Limits may hamper your team’s productivity.

Another restriction you may run across is a limit on the total number of “build minutes” for a given subscription. In other words, the cumulative number of minutes that all build/test processes can run during a given subscription billing cycle (typically a month) is capped at a certain amount. For example, CircleCI’s free plan is limited to 1,500 build minutes per month, while their paid plans offer unlimited build minutes. Adding more developers to your team will likely result in additional build jobs, which will increase the required amount of build minutes per month, which may affect your cost. Additionally, increasing the complexity of your build/test process may result in longer build/test times, which will further increase the number of build minutes you’ll need during each billing cycle. The takeaway here is that if you have a solid understanding of how your team and your build processes are likely to scale in the future, then you should be well equipped to make a decision on whether the cost of a build minute-limited plan will scale adequately to meet your organization’s needs.
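
To make that concrete with illustrative numbers: a hypothetical team of ten developers, each triggering five 10-minute builds per working day, would consume roughly 10 × 5 × 10 × 21 ≈ 10,500 build minutes a month – about seven times CircleCI’s 1,500-minute free allotment – so a team like that would need a paid plan from day one.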

Though not directly related to cost scaling, it’s important to note that some CI/CD SaaS providers place a limit on the length of time allowed for any single build/test job, independent of any cumulative per-billing-cycle limitations. For example, Travis CI’s travis-ci.org product limits build jobs to 50 minutes, while jobs on their travis-ci.com product are limited to 120 minutes per build. Similarly, Atlassian’s Bitbucket Pipelines limits builds to 2 hours per job. These limits are probably more than sufficient for most teams, but if you have any long-running build/test processes, you should make sure that your jobs will fit within the time constraints set by your CI/CD provider.

CI/CD Language and Platform Support

Not all languages and platforms are supported by all SaaS CI/CD providers. Support for programming languages, operating systems, containers, and third-party software installation are just a few of the factors that need to be considered when evaluating a SaaS CI/CD tool. If your team requires Microsoft Windows build servers, you are immediately limited to a very small set of options, of which AppVeyor is arguably the most popular. If you need to build and test iOS or Android apps, you have a few more options, such as Travis CI, fastlane, and Bitrise, among others.

Programming languages are another area of consideration. Most providers support the most popular languages, but if you’re using a less popular language, you’ll need to choose carefully. For instance, Travis CI supports a huge list of programming languages, but most other SaaS CI/CD providers support only a handful by comparison. If your project is written in D, Erlang, Rust, or some other less mainstream language, many SaaS CI/CD providers may be a no-go right from the start.

Further consideration is required when dealing with Docker containers. Some SaaS CI/CD providers offer first-class support for Docker containers, while other providers do not support them at all. If Docker is an integral part of your development and build process, some providers may be immediately disqualified from consideration due to this point alone.

Final Thoughts

As you can see, when it comes to determining the CI/CD tool that’s right for your team, there are numerous factors that should be considered, especially with regard to CI/CD tool cost. Fortunately, many SaaS CI/CD providers offer a free version of their service, which gives you the opportunity to test drive the service to ensure that it supports the languages, platforms, and services that your team uses. Just remember to keep cost scaling in mind before making your decision, as “changing horses” can be expensive should you find that your CI/CD tool cost scales disproportionately with the rest of your business.

In a future post, we will explore third-party integrations with CI/CD tools, with a focus on continuous delivery.


How to Decide Between a CI/CD SaaS Tool and a Self-Hosted Option

You may find yourself deciding whether to choose a CI/CD SaaS tool or a self-hosted option. The continuous integration/continuous delivery platform market has grown over the last several years as DevOps becomes more mainstream, and now encompasses a huge variety of tools, each with their own flavor. While it’s great to have choices, it means that choosing the right tool can be a difficult decision. There are several factors to consider when choosing the right fit, including hosting, cost, scalability, and integration support. In this post, we will look at one of the biggest points of consideration: whether to choose a SaaS CI/CD service or a self-hosted system. This will be the first entry in a series of posts about how to choose the CI/CD system that is right for your team. Like everything, there are pros and cons to all solutions, and with the vast amount of CI/CD options available today, there’s no such thing as “one size fits all”.

Considerations for Choosing a CI/CD SaaS Tool

First, let’s take a look at the up-side to choosing a CI/CD SaaS tool. Like most SaaS products, one of the biggest benefits is that there is no hardware or software infrastructure to maintain. There’s no need to worry about server maintenance or applying software updates/patches: that’s all handled for you. In addition to the reduced ongoing maintenance burden, most SaaS CI/CD systems tend to be easy to get set up, especially if you’re using a SaaS VCS (Version Control System) like GitHub or Bitbucket. 

These are great points, but there are potential down-sides that must be considered. The cost of usage for a SaaS CI/CD solution may not scale nicely with your business. For example, the price of a SaaS CI/CD service may go up as your team gets larger. If you plan on scaling your team significantly, the cost of your CI/CD system could inflate dramatically. Furthermore, not all services support all platforms, tools, and environments. If you plan on introducing any new development technologies, you should make sure that they are supported by the CI/CD provider you choose.

Considerations for a Self-Hosted CI/CD Tool

While there are many attractive points in favor of a SaaS CI/CD service, a self-hosted solution is not without its merits. One potential benefit of a self-hosted solution is extensibility. Some self-hosted services can be customized with plugins/extensions to enable functionality that is not included “out of the box”. Jenkins is a prime example of this, with over 1,000 plugins available. Even without plugins, self-hosted CI/CD tools often have more support for development platforms, languages, and testing frameworks than many SaaS solutions. If there’s not first-class support (or a plugin/extension) for a technology that you use, you can usually make things work with some shell scripts and a little bit of creativity. In addition to extensibility, self-hosted solutions typically have fewer limitations on things like build configurations and concurrent build jobs. This isn’t always the case, however. The free version of TeamCity, a CI/CD tool from JetBrains, is limited to 100 build configurations and 3 build agents. Licenses for additional configurations and build agents are available for purchase, though.

Conversely, there are some potential down-sides to a self-hosted system. Perhaps the biggest of these is that you are required to manage your own infrastructure. This includes applying software updates/patches, and may include management of hardware if you’re not hosting the service on an IaaS platform like AWS, GCP, or Azure. In contrast to a SaaS solution, self-hosted systems may require a time-intensive process to get set up. Between getting the system linked up to your VCS (Version Control System), issue/ticket tracking software, and notification system(s), there can be significant up-front effort in getting your CI/CD system initialized. In addition to the first-time setup, you may be required to manage authentication and authorization for users in your organization if the system you choose doesn’t support your organization’s user management system (LDAP, Google GSuite, etc.).

Final Thoughts

It is worth noting that some CI/CD SaaS tool providers offer self-hosted variants of their services. For instance, CircleCI offers an enterprise solution that can be self-hosted on your own networks, and Travis CI offers Travis CI Enterprise, which is optimized for deployment on Amazon EC2 instances. These offerings throw even more into the mix, and should be part of your consideration when determining which tool has the best fit.

As you can see, there are several factors that must be considered when choosing the CI/CD tool that is right for you. In this post, we discussed some of the trade-offs between SaaS and self-hosted systems. In future posts, we will look at other factors such as scalability, cost, and restrictions/limitations.


How to Use Terraform Provisioning and ParkMyCloud to Manage AWS

Recently, I’ve been on a few phone calls where I get asked about cost management of resources built in AWS using Terraform provisioning. One of the great things about working with ParkMyCloud customers is that I get a chance to talk to a lot of different technical teams from various types of businesses. I get a feel for how the modern IT landscape is shifting and trending, plus I get exposed to the variety of tools that are used in real-world use cases, like Atlassian Bamboo, Jenkins, Slack, Okta, and Hashicorp’s Terraform.

Terraform seems to be the biggest player in the “infrastructure as code” arena. If you’re not already familiar with it, it’s fairly straightforward to use, and the benefits quickly become apparent. You take a text file, use it to describe your infrastructure down to the finest detail, then run “terraform apply” and it just happens. Then, if you need to change your infrastructure, or revoke any unwanted changes, Terraform can apply the update or roll back to a known state. By working with AWS, Azure, VMware, Oracle, and many more, Terraform can be your one place for infrastructure deployment and provisioning.

How to Use Terraform Provisioning and ParkMyCloud with AWS Autoscaling Groups

I’ve talked to a few customers recently who utilize Terraform as their main provisioning tool, while ParkMyCloud is their ongoing cloud governance and cost control tool. Using these two systems together is great, but one common point of confusion is AWS Auto Scaling Groups. The question I usually get asked is how Terraform handles the changes that ParkMyCloud makes when scheduling ASGs, so let’s take a look at the interaction.

When ParkMyCloud “parks” an ASG, it sets the Min/Max/Desired values to 0/0/0 by default, then sets the “started” values to the values you had originally entered for that ASG. If you run “terraform apply” while the ASG is parked, Terraform will complain that the Min/Max/Desired values are 0 and will change them back to the values you state. Then, when ParkMyCloud notices this the next time it pulls from AWS (which is every 10 minutes), it will see that the ASG is started and will stop it as normal.
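
At the API level, the “parking” step amounts to something like the following Python/boto3 sketch – the group name is hypothetical, and this is not ParkMyCloud’s actual implementation:

    # Zero out an Auto Scaling Group's Min/Max/Desired to "park" it,
    # remembering the original values so it can be "started" later.
    # The group name is a hypothetical example.
    import boto3

    asg = boto3.client("autoscaling", region_name="us-east-1")

    def park(group_name="web-dev-asg"):
        group = asg.describe_auto_scaling_groups(
            AutoScalingGroupNames=[group_name]
        )["AutoScalingGroups"][0]
        saved = (group["MinSize"], group["MaxSize"], group["DesiredCapacity"])
        asg.update_auto_scaling_group(
            AutoScalingGroupName=group_name,
            MinSize=0, MaxSize=0, DesiredCapacity=0,
        )
        return saved  # restore these values to bring the ASG back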

If you change the value of the Min/Max/Desired in Terraform, this will get picked up by ParkMyCloud as the new “on” values, even if the ASG was parked when you updated it. This means you can keep using Terraform to deploy and update the ASG, while still using ParkMyCloud to park the instances when they’re idle.

How to Use Terraform to Set Up ParkMyCloud

If you currently leverage Terraform provisioning for AWS resources but don’t have ParkMyCloud connected yet, you can also utilize Terraform to do the initial setup of ParkMyCloud. Use this handy Terraform script to create the necessary IAM Role and Policy in your AWS account, then paste the ARN output into your ParkMyCloud account for easy setup. Now you’ll be deploying your instances as usual using Terraform provisioning while parking them easily to save money!


Why Serverless Computing Will Be Bigger Than Containers

One of the more popular trends in public cloud adoption is the use of serverless computing in AWS, Microsoft Azure, and Google Cloud. All of the major public cloud vendors offer serverless computing options, including databases, functions/scripts, load balancers, and more. When designing new or updated applications, many developers are looking at serverless components as an option. This new craze is coming at a time when the last big thing, containers, is still around and a topic of conversation. So, when users are starting up new projects or streamlining applications, will they stick with traditional virtual machines or go with a new paradigm? And out of all these buzzy trends, will anything come out on top and endure?

Virtual Machines: The Status Quo

The “traditional” approach to deployment of an application is to use a fleet of virtual machines running software on your favorite operating system. This approach is what most deployments have been like for 20 years, which means that there are countless resources available for installation, management, and upkeep. However, that also means you and your team have to spend the time and energy to install, manage, and keep that fleet going. You also have to plan for things like high availability, load balancing, and upgrades, as well as decide if these VMs are going to be on-prem or in the cloud. I don’t see the use of virtual machines declining anytime soon, but there are better options for some use cases.

Containers: The New Hotness, But Too Complex to be Useful

Containerization involves isolating an application by making it think it’s the only application on a server, with access only to the hardware you allow. Containers can divide up a virtual machine in a similar way that virtual machines divide up a physical server. This idea has been around since the early 1980s, but it really started to pick up steam with the release of Docker in 2013. The main benefits of containerization are the ability to maximize the utilization of physical hardware while deploying pieces of a microservices architecture that can easily run on any OS.

This sounds great in theory, but there are a couple of downsides to this approach. The primary problem is the additional operational complexity: you still have to manage the physical hardware and the virtual machines, along with the container orchestration, without much of a performance boost. The added complexity, without removing any current orchestration, means that you now have to think about more, not less. You also need to build in redundancy, train your users and developers, and ensure communication between pieces on top of your existing physical and virtual infrastructure.

Speaking of container orchestration, the other main downside is the multitude of options surrounding containers and their management, as there’s no one clear choice of what to use (and it’s hard to tell if any of the existing ones will just go away one day and leave you with a mess). Kubernetes seems to be the front runner in this area, but Apache Mesos and Docker Swarm are big players as well. Which do you choose, and do you force all users and teams to use the same one? What if the company that manages those applications makes a change that you didn’t plan for? There are a lot of questions and unknowns, along with just having to make a choice that could have ramifications for years to come.

Serverless Computing: Less Setup, More Functionality

When users or developers are working on a project that involves a database and some Python scripts, they just want the database and the scripts, not a server that runs database software and a server that runs scripts. The main idea behind serverless architecture is to eliminate all the overhead that comes along with these requests for specific software. This is a big benefit to those who just want to get something up and running without installing operating systems, tweaking configuration files, and worrying about redundancy and uptime.
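
For example, the entire deployable artifact for a simple AWS Lambda function can be a single Python handler like the sketch below – no OS to patch, no server to size (the “name” field is an illustrative assumption about the incoming event):

    # handler.py -- a complete serverless "application": no OS to patch,
    # no server to size, no uptime to engineer. The "name" field is an
    # illustrative assumption about the incoming event.
    def handler(event, context):
        name = event.get("name", "world")
        return {"statusCode": 200, "body": f"Hello, {name}!"}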

This isn’t all sunshine and rainbows, however. One of the big downsides to serverless comes hand-in-hand with that reduced complexity: you also typically have reduced customization. Running an older database version or having a long-running Python function might not be possible using serverless services. Another possible downside is that you are typically locked in to a vendor once you start developing your applications around serverless architecture, as the APIs are often vendor-specific.

That being said, it appears that the reduced complexity is a big deal for the users who want things to “just work”. Dealing with fewer headaches and less management so they can get creative and deploy some cool applications is one of the main goals of folks who are trying to push the boundaries of what’s possible. If Amazon, Microsoft, or Google want to handle database patching and Python versioning so you don’t have to, then let them deal with it and move on to the fun stuff!

Here at ParkMyCloud, we’re doing a mix of serverless and traditional virtual machines to maximize the benefits and minimize the overhead for what we do.  By using serverless where it makes sense without forcing a square peg into a round hole, we can run virtual machines to handle the code we’ve already written while using serverless architecture for things like databases, load balancing, and email messages.  We’re starting to see more customers going with this approach as well, who then use ParkMyCloud to keep the costs of virtual machines low when they aren’t in use. (If you’d like to do the same, check out a trial of ParkMyCloud to get your hybrid infrastructure optimized.)

When it comes to development and operations, there are numerous decisions to make that all have pros and cons. Serverless architecture is the latest deployment option available, and it clearly helps reduce complexity and takes care of things that may give you headaches. Portability – which serverless sacrifices through vendor-specific APIs – is something containers handle really well, though at the cost of more complexity in deployment and ongoing management. Software installed on virtual machines is a tried-and-true method, but it does mean you are doing a lot of the work yourself. It’s the fact that serverless computing is so simple to implement that makes it more than a trend: this is a paradigm that will endure, where containers won’t.


Why ParkMyCloud is the leader in Automated Cloud Cost Control – It’s About the Platform: 2017 Year in Review

2017 was a big year for ParkMyCloud and automated cloud cost control. From working closely with our customers and understanding industry trends, we continued to strengthen and grow our cloud cost control platform, continuously innovating and adding new features to make ParkMyCloud easier to use and more automated, while continuing to do what we do best: saving you money on your cloud costs. Here are the highlights of what improved in ParkMyCloud during 2017:

January

Auto-Scheduling for Microsoft Azure

You asked, we answered. After a year of growth and success with optimizing cloud resources for users of Amazon Web Services (AWS), ParkMyCloud broadened its appeal by optimizing and reducing cloud spend for Microsoft Azure. CEO Jay Chapel weighed in, “Support for Azure was the top requested feature, so today’s launch will help us drive even bigger growth during 2017 as we become a go-to resource for DevOps and IT users on all the major cloud service providers.”

February

Single Sign-On

In February, signing into ParkMyCloud became easier than ever with support for single sign-on using SAML. Signing in is simple – use your preferred identity provider for a more streamlined experience, reduce the number of passwords you need to remember and type in, and improve security by keeping a single point of authentication.

April

Free Tier for ParkMyCloud

This release gave users the option for free cloud optimization using ParkMyCloud – forever. The free tier option was created to support developers who were resorting to writing their own scheduling scripts in order to turn off non-production resources when not in use, saving not only money, but lots of time.

Support for OneLogin for Single Sign-On

ParkMyCloud integrated with OneLogin’s App Catalog marketplace, further simplifying Single Sign-On configuration using SAML 2.0. Benefits included reducing the number of passwords needed to track and allowing administrators to control user access from one place.

May

More support for Single Sign-On

In May, ParkMyCloud added more SSO integrations to make signing in easy and simple. You can connect with Okta through the Okta App Network (OAN), Centrify, and Microsoft Active Directory Federation Services (ADFS). The updates rounded out support for six major SSO providers that can be used to connect to ParkMyCloud: ADFS, Azure Active Directory, Google G-Suite, Okta, OneLogin, and Ping Identity.

June

Support for Google Cloud Platform

In addition to AWS and Azure, ParkMyCloud added support for Google Cloud Platform, making automated cost savings available for all of the ‘big three’ cloud service providers. With the new addition, ParkMyCloud’s continuous cost control platform covered a majority of the $23 billion public cloud market, enabling enterprises to eliminate wasted cloud spend – an estimated $6 billion problem for 2017, projected to become a $17 billion problem by 2020.

Stop/Start for AWS RDS Instances

In June, ParkMyCloud announced that it would now be offering “parking” for AWS RDS instances, allowing users to automatically put database resources on on/off schedules, so they only pay for what they’re actually using. This was the first parking feature on the market to be fully integrated with AWS’s RDS start/stop capability.

July

Notifications via Slack and Email

You asked, we answered (again). This user-requested feature improved the user experience by providing notifications about your environment and ParkMyCloud account via email, Slack, and other webhooks. Notifications include information about parking actions, system errors, and more. Additionally, ParkMyCloud’s SlackBot allows users to manage resources and schedules through their Slack channel.

August

Cloud Savings Dashboard

After turning two, ParkMyCloud continued shaping and growing its vision with a new reporting dashboard. This feature made it easy to access reports, providing greater insight into cloud costs, team rosters, and more.

November

Mobile App for Cloud Cost Optimization

In the last two months of 2017, ParkMyCloud was not about to slow down. Cloud cost optimization reached a new level with the addition of the new ParkMyCloud mobile app. Users are now able to park idle instances directly from their mobile devices. Reduce cloud waste and cut monthly spend by 65% or more, now with even more capability and ease of use.

AWS Utilization Metric Tracking

With this release, ParkMyCloud integrated with AWS CloudWatch to give AWS users resource utilization data for EC2 instances, viewable through customizable heatmaps. The update gives information about how resources are being used, laying the groundwork for the next release coming soon – SmartParking and SmartSizing.

December

Utilization Heatmaps

Building on the November release of static heat maps displaying AWS EC2 utilization metrics, ParkMyCloud used the utilization data to create animated heat maps. This new feature helps users better identify usage patterns over time and create automated parking schedules. Data is mapped over a time sequence and displayed as an animated “video.”

Coming in 2018…

2017 is over, but there’s no end in sight for ParkMyCloud and automated cloud cost control. In addition to all the features we added last year to make cloud cost automation easy, simple, and more available, we have even more in store for our users in 2018. Coming soon, ParkMyCloud will introduce SmartParking, SmartSizing, PaaS ‘parking’, support for AliCloud and more. Stay tuned for another year of updates, new releases, and saving money on cloud costs with ParkMyCloud.


DevOps cloud cost optimization: It’s not an oxymoron

DevOps cloud cost optimization… is there such a thing? After all, if you’re concerned with your software’s development and operations, you want to make sure things work and work quickly. In dozens of companies we’ve spoken with, infrastructure cost is an afterthought.

Until it’s not.

Here’s what happens: someone in Finance, or the CTO, or the CIO, takes a look at the line-item expenses for DevOps, and realizes just how much of the budget is eaten up by cloud costs. All of a sudden, DevOps folks are facing top-down directives to reduce cloud costs, and need to find ways to do so without interrupting their “regular” work.

This is a common scenario. In 2016, enterprises spent $23B on public cloud IaaS services. By 2020, that figure is expected to reach $65B. Wasted spend makes up a quarter or more of that total, much of it in the form of services running when they don’t need to be, improperly sized infrastructure, orphaned resources, and shadow IT.

DevOps teams: this is a problem you can get in front of. In fact, you can even apply some of the core tenets of DevOps to reducing cloud waste, including holistic thinking, eliminating silos, rapid feedback, and automation.

Our Director of Cloud Solutions, Chris Parlette, heard these problems from cloud users and put together a presentation on a DevOps cloud cost optimization approach. Watch it on demand now and learn how you can get started: How to Eliminate Cloud Waste with a Holistic DevOps Strategy.


5 Favorite AWS Training Resources

When it comes to AWS training resources, there’s no shortage of information out there. Considering the wide range of videos, tutorials, blogs, and more, it’s hard to know where to look or how to begin. Finding the best resource depends on your learning style, your needs for AWS, and getting the most updated information available. With this in mind, we came up with our 5 favorite AWS training resources, sure to give you the tools you need to learn AWS:

1. AWS Self-Paced Labs

What better way to learn than at your own pace? AWS self-paced labs give you hands-on learning in a live AWS environment, with AWS services and actual scenarios you would encounter in the cloud. Among the recommended labs you’ll find an Introduction to Amazon Elastic Compute Cloud (EC2) and, for more advanced users, a lab on Creating Amazon EC2 Instances with Microsoft Windows. If you’re up for an adventure, enroll in a learning quest and immerse yourself in a collection of labs that will help you master any AWS scenario at your own pace. Once completed, you will earn a badge that you can showcase on your resume, LinkedIn, website, etc.

2. AWS Free Tier

Sometimes the best way to learn something is by jumping right in. With the AWS Free Tier, you can try AWS services for free. This is a great way to test out AWS for your business or, for the developers out there, to try services like AWS CodePipeline, AWS Data Pipeline, and more. While you are still getting a hands-on opportunity to learn a number of AWS services, the only downside is that there are certain usage limits. You can track your usage with a billing alarm to avoid unwanted charges, or you can try ParkMyCloud and park your instances when they’re not in use to get the most out of your free tier experience. In fact, ParkMyCloud started its journey by using AWS’ free tier – we eat our own dog food!
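
Setting up that billing alarm is a single API call. Here is a hedged sketch using Python and boto3 – note that billing metrics require billing alerts to be enabled and live in us-east-1, and the $5 threshold and SNS topic ARN are illustrative placeholders:

    # Alert when estimated AWS charges exceed $5 for the month.
    # Threshold and SNS topic ARN are illustrative placeholders.
    import boto3

    cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")
    cloudwatch.put_metric_alarm(
        AlarmName="free-tier-spend-alert",
        Namespace="AWS/Billing",
        MetricName="EstimatedCharges",
        Dimensions=[{"Name": "Currency", "Value": "USD"}],
        Statistic="Maximum",
        Period=21600,  # evaluate every 6 hours
        EvaluationPeriods=1,
        Threshold=5.0,
        ComparisonOperator="GreaterThanThreshold",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
    )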

3. AWS Documentation

AWS Documentation is like a virtual encyclopedia of tools, terms, training, and everything AWS. You’ll find white papers, case studies, tutorials, cloud computing basics, and so much more. This resource is a one-stop-shop for all of your AWS documentation needs, whether you’re a beginner or advanced user. No matter where you are in your AWS training journey, AWS documentation is always a useful reference and certainly deserves a spot in your bookmarks.

4. YouTube

So far, we’ve gone straight to the source for 3 out of 5 of our favorite AWS training resources. Amazon really does a great job of providing hands-on training, tutorials, and documentation for users with a range of experience. However, YouTube opens up a whole new world of video training that includes contributions from not only Amazon, but other great resources as well. Besides the obvious Amazon Web Services channel, there are also popular and highly rated videos by Edureka, Simplilearn, Eli the Computer Guy, and more.

5. Bloggers

As cloud technology usage continues to expand and evolve, blogs are a great way to stay up to speed with AWS and the world of cloud computing. Of course, in addition to labs, a free tier, extensive documentation, and their own YouTube channel, AWS also has their own blog. Since AWS actually has a number of blogs that vary by region and technology, we recommend that you start by following Jeff Barr – Chief Evangelist at Amazon Web Services and primary contributor. Edureka, mentioned among our recommended YouTube channels, also has a blog that covers plenty of AWS topics. In addition, the CloudThat blog – co-founded by Bhavesh Goswami, a former member of the AWS product development team – is an excellent resource for AWS and all things cloud.

There’s plenty of information out there when it comes to AWS training resources. We picked our 5 favorite resources for their reliability, quality, and range of information. Whether you’re new to AWS or consider yourself an expert, these resources are sure to help you find what you’re looking for.


Continuous Integration and Delivery Require Continuous Cost Control

Today, we propose a new concept to add to the DevOps mindset: Continuous Cost Control.

In DevOps, speed and continuity are king. Continuous Operations, Continuous Delivery, Continuous Integration. Keep everything running and get new features in the hands of users quickly.

For some organizations, this approach leads to a mindset of “speed at any cost”. Especially in the era of easily consumable public cloud, this results in a habit of wasted spend and blown budgets – which may, of course, meet the goals for delivery. But remember that a goal of Continuous Delivery is sustainability. This applies to the coding and backend of the application, but also to the business side.

With that in mind, we get to the cost of development and operations. At some point in every organization’s lifecycle comes the need to control costs. Perhaps it’s when your system or product reaches a certain level of predictability or maturity – i.e. maintenance mode – or perhaps earlier, depending on your organization.

We all know that agility has helped companies create competitive advantage; but customers and others tell us it can’t be “agility at any cost.” That’s why we believe the next challenge is cost-effective agility. That’s what Continuous Cost Control is all about.

What is Continuous Cost Control?

Think of it as the ability to see and automatically take action on development and operations resources, so that the amount spent is a controlled factor and not merely a result. This should occur with no impact to delivery.

Think of the spend your department manages. It likely includes software license costs and true-ups and perhaps various service costs. If you’re using private cloud/on-premise infrastructure, you’ve got equipment purchases and depreciations, plus everything to support that equipment, down to the fuel costs for backup generators, to consider.

However, the second biggest line item (after personnel) for many agile teams is public cloud. Within this bucket, consider the compute costs, bandwidth costs, database costs, storage, transactions… and the list goes on.

While private cloud/on-premise infrastructure requires continuous monitoring and cost control, the problem becomes acute when you change to the utility model of the public cloud. Now, more and more people in your organization have the ability to spin up virtual servers. It can be easy to forget that every hour (or minute, depending on the cloud provider) of this compute time costs money – not to mention all the surrounding costs.

Continually controlling these costs means automating your cost savings at all points in the development pipeline.  Early in the process, development and test systems should only be run while actually in use.  Later, during testing and staging, systems should be automatically turned on for specific tests, then shut down once the tests are complete.  During maintenance and production support, make sure your metrics and logs keep you updated on what is being used – and when.
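
As a sketch of that middle step – systems turned on for specific tests, then shut down when the tests complete – a test run can be wrapped like this in Python with boto3 (the instance IDs and test command are illustrative assumptions):

    # Start a staging fleet, run the tests, and always stop the fleet
    # afterward -- even if the tests fail. IDs and command are examples.
    import subprocess

    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    TEST_FLEET = ["i-0123456789abcdef0"]  # hypothetical staging instances

    ec2.start_instances(InstanceIds=TEST_FLEET)
    ec2.get_waiter("instance_running").wait(InstanceIds=TEST_FLEET)
    try:
        subprocess.run(["pytest", "integration/"], check=True)
    finally:
        ec2.stop_instances(InstanceIds=TEST_FLEET)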

How to get started with Continuous Cost Control

While Continuous Cost Control is an idea that you should apply to your development and operations practices throughout all project phases, there are a few things you can do to start a cultural behavior of controlled costs.

  • Create a mindset. Apply principles of DevOps to cloud cost control.
  • Take a few “easy wins” to automate cost control on your public cloud resources.
    • Schedule your non-production resources to turn off when not needed
    • Build in a process to “right size” your instances, so you’re not paying for more capacity than you need
    • Use alternate services besides the basic compute services where applicable. In AWS, for example, this includes Auto Scaling groups, Spot Instances, and Reserved Instances
  • Integrate cost control into your continuous delivery process. The public cloud is a utility which needs to be optimized from day one – or if not then, as soon as possible.
  • Analyze usage patterns of your development team to apply rational schedules to your systems to increase adoption rates
  • Allow deviations from the normal schedules, but make sure your systems revert back to the schedule when possible
  • Be honest about what is being used, and don’t just leave it up for convenience

We hope this concept of Continuous Cost Control is useful to you and your organization – and we welcome your feedback.


DevOps Cloud Cost Control: How DevOps Can Solve the Problem of Cloud Waste

DevOps cloud cost control: an oxymoron? If you’re in DevOps, you may not think that cloud cost is your concern. When asked what your primary concern is, you might say speed of delivery, or integrations, or automation. However, if you’re using public cloud, cost should be on your list of problems to control.

The Cloud Waste Problem

If DevOps is the biggest change in IT process in decades, then renting infrastructure on demand is the most disruptive change in IT operations. With the switch from traditional datacenters to public cloud, infrastructure is now used like a utility. Like any utility, there is waste. (Think: leaving the lights on or your air conditioner running when you’re not home.)  

How big is the problem? In 2016, enterprises spent $23B on public cloud IaaS services. We estimate that about $6B of that was wasted on unneeded resources. The excess expense known as “cloud waste” comprises several interrelated problems: services running when they don’t need to be, improperly sized infrastructure, orphaned resources, and shadow IT.

Everyone who uses AWS, Azure, and Google Cloud Platform is either already feeling the pressure — or soon will be — to reel in this waste. As DevOps teams are primary cloud users in many companies, DevOps cloud cost control processes become a priority.

4 Principles of DevOps Cloud Cost Control

Let’s put this idea of cloud waste in the framework of some of the core principles of DevOps. Here are four key DevOps principles, applied to cloud cost control:

1. Holistic Thinking

In DevOps, you cannot simply focus on your own favorite corner of the world, or any one piece of a project in a vacuum. You must think about your environment as a whole.

For one thing, this means that, as mentioned above, cost does become your concern. Businesses have budgets. Technology teams have budgets. And, whether you care or not, that means DevOps has a budget it needs to stay within. Whether it’s a concern upfront or doesn’t become one until you’re approached by your CTO or CFO, at some point, infrastructure cost is going to be under scrutiny – and if you go too far out of budget, under direct mandates for reduction.

Solving problems not only speedily and elegantly, but also cost-efficiently, becomes a necessity. You can’t just be concerned about Dev and Ops; you need to think about BizDevOps.

Holistic thinking also means that you need to think about ways to solve problems outside of code… more on this below.

2. No Silos

The principle of “no silos” means not only no communication silos, but also, no silos of access. This applies to the problem of cloud cost control when it comes to issues like leaving compute instances running when they’re not needed. If only one person in your organization has the ability to turn instances on and off, then all responsibility to turn those instances off falls on his or her shoulders.

It also means that if you want to use an instance that is scheduled to be turned off… well, too bad. You either call the person with the keys to log in and turn your instance on, or you wait until it’s scheduled to come on.  Or if you really need a test environment now, you spin up new instances – completely defeating the purpose of turning the original instances off.

The solution is eliminating the control silo by allowing users to access their own instances to turn them on when they need them and off when they don’t — of course, using governance via user roles and policies to ensure that cost control tactics remain uninhibited.

(In this case, we’re thinking of providing access to outside management tools like the one we provide, but this can apply to your public cloud accounts and other development infrastructure management portals as well.)

3. Rapid, Useful Feedback

In the case of eliminating cloud waste, the feedback you need is where, in fact, waste is occurring. Are your instances sized properly? Are they running when they don’t need to be? Are there orphaned resources chugging away, eating at your budget?

Useful feedback can also come in the form of total cost savings, percentages of time your instances were shut down over the past month, and overall coverage of your cost optimization efforts.  Reporting on what is working for your environment helps you decide how to continually address the problem that you are working on next.

You need monitoring tools in place in order to discover the answers to these questions. Preferably, you should be able to see all of your resources in a single dashboard, to ensure that none of these budget-eaters slip through the cracks. Multi-cloud and multi-region environments make this even more important.

4. Automation

The principle of Automation means that you should not waste time creating solutions when you don’t have to. This relates back to the problem of solving problems outside of code mentioned above.

Also, when “whipping up a quick script”, always remember the time cost to maintain such a solution. More about why scripting isn’t always the answer.

So when automating, keep your eyes open and do your research. If there’s already an existing tool that does what you’re trying to code, it could be a potential time-saver and process-simplifier.

Take Action

So take a look at your DevOps processes today, and see how you can incorporate a DevOps cloud cost control – or perhaps, “continuous cost control”  – mindset to help with your continuous integration and continuous delivery pipelines. Automate cost control to reduce your cloud expenses and make your life easier.


“Is that old cloud instance running?” How visibility saves money in the cloud

“Is that old cloud instance running?”

Perhaps you’ve heard this around the office. It shouldn’t be too surprising: anyone who’s ever tried to load the Amazon EC2 console has quickly found how difficult it is to keep a handle on everything that is running.  Only one region gets displayed at a time, which makes it common for admins to be surprised when the bill comes at the end of the month.  In today’s distributed world, it not only makes sense for different instances to be running in different geographical regions, but it’s encouraged from an availability perspective.

On top of this multi-region setup, many organizations are moving to a multi-cloud strategy as well.  Many executives are stressing to their operations teams that it’s important to run systems in both Azure and AWS.  This provides extreme levels of reliability, but also complicates the day-to-day management of cloud instances.

So is that old cloud instance running?

You may get a chuckle out of the idea that IT administrators can lose servers, but it happens more frequently than we like to admit.  If you only ever log in to US-East1, then you might forget that your dev team that lives in San Francisco was using US-West2 as their main development environment. Or perhaps you set up a second cloud environment to make sure your apps all work properly, but forgot to shut them down prior to going back to your main cloud.
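
Before any tooling, you can at least answer the question with a quick sweep across every region – a Python/boto3 sketch of the visibility problem:

    # Count running EC2 instances in every region -- the cross-region
    # view the AWS console doesn't show on one screen.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]

    for region in regions:
        client = boto3.client("ec2", region_name=region)
        pages = client.get_paginator("describe_instances").paginate(
            Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
        )
        count = sum(len(r["Instances"])
                    for page in pages for r in page["Reservations"])
        if count:
            print(f"{region}: {count} running instance(s)")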

That’s where a single-view dashboard (like the view you get with ParkMyCloud) can provide administrators with unprecedented visibility into their cloud accounts.  This is a huge benefit that leads to cost savings right off the bat, as the cloud servers running that you forgot about or thought you turned off can be seen in a single pane of glass. Knowledge is power: now that you know it exists, you can turn it off. You also get an easy view into how your environment changes over time, so you’ll be aware if instances get spun up in various regions.

This level of visibility also has a freeing effect, as it can lead you to utilizing more regions without fear of losing instances.  Many folks know they should be distributed geographically, but don’t want to deal with the headache of keeping track of the sprawl.  By tracking all of your regions and accounts in one easy-to-use view, you can start to fully benefit from cloud computing without wasting money on unused resources.

Now with ParkMyCloud’s core functionality available for free, it’s easy to get this single view of your AWS and Azure environments.  We think you’ll get a new perspective on your existing cloud infrastructure – and maybe you’ll find a few lost servers! Get started with the free version of ParkMyCloud.


The Cloud Waste Problem That’s Killing Your Business (and What To Do About It)

Waste not, want not. That was one of the well-worn quips of one of the United States’ Founding Fathers, Benjamin Franklin. It couldn’t be more timely advice in today’s cloud computing world – the world of cloud waste. (When he was experimenting with static electricity and lightning, I wonder if he saw the future of Cloud? :^) )

Organizations are moving to the Cloud in droves. And why not? The shift from CapEx to monthly OpEx, the elasticity, the reduced deployment times and faster time-to-market: what’s not to love?

The good news: the public cloud providers have made it easy to deploy their services. The bad news: the public cloud providers have made it easy to deploy their services…really easy.  

And, experience over the past decade has shown that leads to cloud waste. What is “cloud waste” and where does it come from? What are the consequences? What can you do to reduce it?

What is Cloud Waste?

“Cloud waste” occurs when you consume more cloud resources than you actually need to run your business.

It takes several forms:

  • Resources left running 24×7 in development, test, demo, and training environments where they don’t need to be running 24×7.  (Thoughts of parents yelling at children to “turn the lights out” if they are the last one in a room.) I believe this is a bad habit reinforced by the previous era of on-premises data centers. The thinking: it’s a sunk cost anyway, why bother turning it off?  Of course, it’s not a sunk cost anymore.

This manifests itself in various ways:

    • Instances or VMs which are left running, chewing up $/CPU-Hr costs and network charges
    • Orphaned volumes (volumes not attached to any servers), which are not being used yet still incur monthly $/GB charges – these are easy to surface programmatically, as the sketch after this list shows
    • Old snapshots of those or other volumes
    • Old, out-of-date machine images
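
Orphaned volumes in particular take only a few lines to flag. A minimal boto3 sketch, assuming a single region (a real cleanup job would loop over every region):

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Volumes in the "available" state are attached to nothing,
# yet still accrue monthly $/GB charges.
pages = ec2.get_paginator("describe_volumes").paginate(
    Filters=[{"Name": "status", "Values": ["available"]}]
)
for page in pages:
    for volume in page["Volumes"]:
        print(volume["VolumeId"], volume["Size"], "GiB, created", volume["CreateTime"])
```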

However, cloud consumers are not the only ones to blame. The public cloud providers are also responsible when it comes to their PaaS (platform as a service) offerings for which there is no OFF switch (e.g., AWS’ RDS, Redshift, DynamoDB and others). If you deliver a PaaS offering, make sure it has an OFF switch.

  • Resources that are larger than needed to do the job. Many developers don’t know what size instance to spin up for their development work, so they often spin up larger ones. (Hey, if 1 core and 4 GB of RAM is good, then 16 cores and 64 GB of RAM must be even better, right?) I think this habit also arose in the previous era of on-premises data centers: “We already paid for all this capacity anyway, so why not use it?” (Wrong again.)

This, too, rears its ugly head in several ways:

    • Instances or VMs which are much larger than they need to be
    • Block volumes which are larger than they need to be
    • Databases which are way over-provisioned relative to their actual IOPS or sequential throughput requirements

Who is Affected by Cloud Waste?

The consequences of cloud waste are quite apparent. It is killing everyone’s business bottom line. For consumers, it erodes their return on assets, return on equity and net revenue.  All of these ultimately impact earnings per share for their investors as well.

Believe it or not, it also hurts the public cloud providers and their bottom line.  Public cloud providers are most profitable when they can oversubscribe their data centers. Cloud waste forces them to build more very expensive data centers than they need, killing their oversubscription rates and hurting their profitability as well. This is why you see cloud providers offering certain types of cost-cutting solutions. For example, AWS offers Reserved Instances, where you pay up front in exchange for a break on on-demand pricing. They also offer Spot Instances, Auto Scaling Groups and Lambda.  Azure also offers price breaks to their ELA customers, plus Scale Sets (the equivalent of ASGs).

How to Prevent Cloud Waste

So, what can you do to address this? Ultimately, the solution to this problem exists between your ears. Most of it is common sense: It requires rethinking… rewiring your brain to look at cloud computing in a different way. We all need to become honorary Scotsmen (short arms and deep pockets… with apologies to my Scottish friends).

  • When you turn on resources in non-production environments, turn on the minimum size needed to get the job done and only grudgingly move up to the next size.
  • Turn stuff off in non-production environments when you are not using it. And for Pete’s sake, when it comes to compute time, don’t waste your time and money writing your own scripts…that just exacerbates the waste (the sketch after this list shows how quickly a homegrown script turns into a project). Those DevOps people should spend that time on your bread-and-butter applications. Use ParkMyCloud instead! (Okay, yes, that was a shameless plug, but it is true.)
  • Clean up old volumes, snapshots and machine images.
  • Buy Reserved Instances for your production environments, but make sure you manage them closely, so that they actually match what your users are provisioning, otherwise you could be double paying.
  • Investigate Spot fleets for your production batch workloads that run at night. It could save you a bundle.
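
To make the first two habits concrete – and to show why homegrown automation becomes its own chore – here’s a bare-bones version of the stop-at-night script many teams start with. The “env” tag convention is an assumption, and note everything it doesn’t do: the matching morning start job, per-team schedules, overrides for someone working late, multiple regions, accounts and clouds.

```python
import boto3

# Assumed tagging convention: non-production instances carry env=dev/test/demo/training.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Find running non-production instances (pagination omitted for brevity).
resp = ec2.describe_instances(Filters=[
    {"Name": "tag:env", "Values": ["dev", "test", "demo", "training"]},
    {"Name": "instance-state-name", "Values": ["running"]},
])
ids = [i["InstanceId"] for r in resp["Reservations"] for i in r["Instances"]]
if ids:
    # Run on a schedule (e.g., cron at 7 p.m.) to park non-production instances.
    ec2.stop_instances(InstanceIds=ids)
```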

These good habits, over time, can benefit everyone economically: Cloud consumers and cloud producers alike.  


Where the Traditional IT Companies Will Never Catch Up to Those Born in the Cloud

Traditional IT companies may dominate in a few fields, but in others, they will never catch up to those companies “born in the cloud.”

I actually have a unique perspective on these two worlds, as prior to this adventure at ParkMyCloud, I worked at IBM for many years. I was originally with Micromuse, where we had a fault and service assurance solution (Netcool) to manage and optimize network and IT operations. Micromuse was acquired by IBM in 2006 and folded into the Tivoli Software Group business unit (later renamed Smarter Cloud). IBM was great – I learned a lot and met a lot of very smart, bright people. I was in Worldwide Sales Management, so I had visibility into IT trends across the globe.

In the 2012/2013 timeframe, I noticed we were losing a lot of IT management, monitoring and assurance deals to companies like ServiceNow, New Relic, Splunk, Microsoft, and the like – all these “born in cloud” companies offering SaaS-based solutions to solve complex enterprise problems (that is, “born in the cloud” other than Microsoft – I’ll come back to them).

At first these SaaS-based IT infrastructure management companies were managing traditional on-premises servers and networks, but as more and more companies moved their infrastructure into the cloud, the SaaS companies were positioned to manage that as well – but at IBM, we were not. All of a sudden we were trying to sell complex, expensive IT management solutions for stuff running in this “cloud” called Amazon Web Services (AWS) – a mere 5 years ago. And then SoftLayer, Rackspace, and Microsoft Azure popped up. I started thinking: there must be something here, but what is it, and who’s going to manage and optimize this infrastructure?

After a few years sitting on the SaaS side of the table, now I know. Many meetings and discussions with very large Fortune 100 enterprises have taught me several very salient points about the cloud:

  1. Public cloud is here to stay – see Capital One or McDonald’s at recent AWS re:Invent Keynotes (both customers of ParkMyCloud, by the way)
  2. Enterprises are NOT using “traditional” IT tools to build, test, run and manage infrastructure and applications in the cloud
  3. What’s different about the cloud is that it’s a YUGE utility, which means companies now focus on cost control. Since it’s an OpEx model rather than a CapEx model, they want to continually optimize their spend

Agility and innovation drive public cloud adoption but as cloud maturity grows so does the need for optimization – governance, cost control, and analytics.

So where does this leave the traditional companies like Oracle, HPE, and IBM? How are they involved in the migration to and lifecycle management of cloud-based applications? Well, from what I have seen, they are on the outside looking in – which is why, when my good friend sent this to me the other day, I was shocked. I guess Oracle decided to spot AWS a $13B lead – pretty smart; I am sure they will make that gap up by, oh, let’s say 2052… brilliant strategy.

That said, one company that “gets it” seems to be Microsoft, both in terms of providing cloud infrastructure (Azure) but also being progressive enough to license their technologies for even the smallest of companies to adopt and grow using their applications.

To put a bow on this point, I was at a recent meeting where a Fortune 25 company was talking to us about their migration into the cloud, and the tools they are using:

  • Clouds – AWS / Azure
  • Migration – service partner
  • Monitoring – DataDog
  • Service Desk and CMDB – ServiceNow
  • Application Management – NewRelic
  • Log analytics – Splunk
  • Pipeline automation – Jenkins
  • Cost control (yes, that’s a category now) – ParkMyCloud

Now that’s some pretty good company! And not a single “traditional” IT tool on the list. I guess it takes one born in the cloud to manage it.


How to Save Money in DevOps: Interview with FinTech Company Using ParkMyCloud

We spoke to Tosin Ojediran, a DevOps Engineer at a FinTech company, about how he’s using ParkMyCloud as part of his approach to save money in DevOps.

Hi Tosin. So you work in FinTech. Can you tell us about what your team does within the company?

I’m on the DevOps team. We’re in charge of the cloud infrastructure, which ranges from servers to clusters and beyond. We have the task of maintaining the integrations between all the different services we use. Our main goal is to make sure our infrastructure is up and running and to maintain it. Our team just grew from two to three people.

What drove you to search for a cost optimization tool?

Last year, we were scaling our business, and with all the new development and testing, we kept needing to launch new clusters, databases, and instances. We did monitor the costs, but it was the Finance team that came to us and said, “hey, what’s going on with AWS? The costs keep going up, can you guys find a way to reduce this bill or move to a cheaper provider?”

So we looked into different options. We could move to Google, for example, or we could move on-prem, but at the time we were a team of two running a new project, trying to get things up and running, so we didn’t have the time. We had to find out how we could save money in DevOps without spending all our time moving to new infrastructure. We went online to do research, came across ParkMyCloud, and started a trial.

What challenges did you experience in using AWS prior to using ParkMyCloud?

Like I mentioned, we were trying to cut costs. To do that, we were brainstorming about how we could write scripts to shut down machines during certain hours and spin them up. The problem was that this would require our time to write, integrate, and maintain.

We have different automation tools and containers – Chef, Docker machines, and Auto Scaling. Each of these takes time to script up. This all takes away from the limited time we have. With ParkMyCloud, we didn’t need to spend time on this automation – it was fast and simple. It allowed me to have all teams, including Analysts and others outside of the DevOps team, park their own resources. If you have a script that you run and a two-man DevOps team, every time someone wants to park their machine, or start it outside of hours, they have to call me and ask me to start their machines for them. But now with ParkMyCloud, I can assign machines to individual teams, and they can start their machines whenever they want – and it’s easy to use; you don’t have to know programming to use it.

It frees up my time, because now everyone can control their own resources, when they used to have to ask me to do it for them.

Can you describe your experience so far using ParkMyCloud?

It’s been great for us to reduce AWS costs. We’re doing better at staying within budget now. ParkMyCloud actually really exceeded my expectations. We sent the savings numbers to our CTO, and he said, “wow, this is awesome.” It’s easy to use, and it does what it’s supposed to do. We’re reducing our bill by about 25-30%.

One other thing I love about ParkMyCloud. So, I work with a lot of vendors. A lot of times, they promise you one thing, and you get something else. There’s different terms and conditions, or you have to pay extra to actually qualify for different features. But with ParkMyCloud, it was up and running in 5-10 minutes, it was easy to integrate, easy to use, and you all deliver what you promise.


In 2017, I will… Not “build” when I should “buy”. (When to buy vs. build software.)

Buy vs. build software: The eternal question

The question of whether to buy vs. build software may be an old one, but it’s still relevant. Particularly as companies face rising IT costs, it’s important to consider the most cost-effective options for your business.

When you have an internal development team, it’s tempting to believe that “just having them whip something up” is cheaper than purchasing an off-the-shelf software solution. However, this ignores the opportunity cost of having your skilled developers focus their efforts on non-core activities – ones that typically deliver less value to the business.

To put a number on it, the national average salary for a software developer is $85,000. Including benefits, that’s about $110,000 – and at roughly 2,000 working hours a year, a back-of-the-napkin estimate puts an hour of a developer’s time at $55. Then, consider the number of developers involved, and that you may not be as stringent in budgeting their time for “side projects” as you might for your core work.

So it’s expensive to build. Isn’t the outcome the same?

Actually, probably not. Though internally developed solutions may in theory have the same functionality as purchased software – for example, “it turns instances off when you don’t need them” – they will require additional work to integrate with team structures and to cover a broad variety of use cases. In that example, what about the reporting and savings information? After all, isn’t that the point of turning the instances off in the first place? And then there are the advanced features, and the cost of maintaining homegrown solutions over time as new requirements creep in.

For one look at how an off-the-shelf solution may compare in functionality to homegrown scripted solutions, here’s a simple side-by-side comparison we put together, showing ParkMyCloud vs. an in-house developed solution.

Functionality: Multi-User / Multi-Team

In-house developed scripting:

  • In small environments, it may be difficult to meet demand for skilled DevOps personnel with knowledge of scripting & automation
  • In small environments, there is significant risk if knowledge of infrastructure and scripting is held by a single individual (knowledge transfer)
  • In large environments, unless highly centralized, it is difficult to ensure consistency and standardization of the automation approach across the entire organization
  • DevOps support for all AWS environments across multiple teams / business units will get complex and resource-intensive
  • DevOps resources distracted from core business activities
  • Opportunity cost
  • Supporting existing team structures and ensuring appropriate controls is difficult to achieve without building out a complete custom solution

ParkMyCloud:

  • Ability to devolve management of AWS instances to non-technical teams for scheduling on/off (PMC requires NO scripting)
  • API for integration into the DevOps process
  • Role-based access controls (RBAC) and access-based enumeration (ABE) for enhanced security
  • Unlimited teams
  • Unlimited users
  • Laser development focus on EC2 cost optimization
  • One way to automate on/off times with enterprise-wide visibility
  • Options for centralizing or decentralizing control to departments, teams & individuals
  • Designed to support global operations
  • Single view of all resources across locations, accounts and cloud service providers (CSPs)
  • Reporting
  • $3.00 or less per instance per month
  • Configures in 15 minutes or less

Functionality: Multiple Credentials / Multiple CSPs (coming soon)

In-house developed scripting:

  • Must develop a means to securely handle and manage credentials and other sensitive account information
  • Must keep up to date on changes / updates to the public clouds, which are constantly evolving, adding and changing services
  • Must develop an approach to assign access to different credentials to different teams (handled in PMC by RBAC)
  • Must develop an approach and interface across multiple CSPs

ParkMyCloud:

  • Unlimited number of credentials / accounts
  • IAM Role and IAM User support (for AWS)
  • Secure credential management (AES-256 encryption)
  • Multiple public CSPs (coming soon) – ability to manage AWS, Azure and Google from a single platform

Functionality: Platform Coverage

In-house developed scripting:

  • Must develop a means to create a single view and the ability to manage and start/stop ASGs
  • Must develop a means to create, manage and start/stop logical groups

ParkMyCloud:

  • Ability to manage & park Auto Scaling Groups
  • Ability to create, manage and park Logical Groups
  • Global view of ALL AWS Regions and Availability Zones in a single pane of glass

Functionality: Always-‘off’ Scheduling

In-house developed scripting:

  • Must develop a process to enable on-demand access to stopped instances in off hours
  • Must be able to re-apply the schedule when off-hours work is done
  • Must do this across multiple accounts and CSPs

ParkMyCloud:

  • Ability to temporarily suspend parking schedules during off-hours to enable ad hoc instance control

Functionality: Cost Visibility

In-house developed scripting:

  • Need to develop a custom application to determine cost savings based upon the application or removal of schedules (to date we have not encountered anyone who has developed such an application)
  • Would need the ability to run ad hoc reports over arbitrary date ranges

ParkMyCloud:

  • Forecasts & displays future savings based upon selected schedules
  • Displays real-time actual month-to-date savings
  • Generates & distributes ad hoc detailed cost and savings reports

Functionality: Policy Engine

In-house developed scripting:

  • Hard to enforce consistent and standardized policies within an organization with decentralized structures where different automation tools are being used
  • This would need to be done across all CSP accounts and across CSPs
  • Difficult to build something like Never Park or Snooze Only

ParkMyCloud:

  • Enterprise-wide policies based on tags to auto-enforce actions (automate parking schedule assignment, Never Park for production instances, & assignment of instances to teams)

Resolution

As you can see, there is a technical advantage to purchasing software that’s been purpose-built by a dedicated development team over a long period of time. You’ll get more functionality for less money.

This year, we resolve not to “build” when we should “buy.”

Do you?

 


Cloud applications in 2017: How long until full cloud takes over?

We were recently asked about our vision for cloud applications in 2017: are we still seeing ported versions of legacy on-premises Software-as-a-Service (SaaS) applications? Or are most applications – even outside of pure-play startups – being built and hosted in the cloud? In other words, how long until full cloud takes over?

Actually, it already has.

Native cloud applications like ours – an 18-month-old startup – that have been built, tested, and run in the cloud are no longer the fringe innovators, but the norm. In fact, outside of a printer, we have no infrastructure at all – we are BYOD, and every application we use for development, marketing, sales and finance is a SaaS-based, cloud-hosted solution that we either use for free or rent and pay month-to-month or year-to-year.

This reliance on 100% cloud solutions has allowed us to rapidly scale our entire business – the cloud, and cloud-based SaaS solutions, have provided ParkMyCloud with the agility, speed, and cost control needed to manage to an OpEx model rather than a CapEx model.

We were able to rapidly prototype our technology, test it, iterate, and leverage “beta” communities in the cloud in a matter of months. We even outsource our development efforts, and seamlessly run agile remotely using the cloud and cloud-based tools. For a peek into the process, here’s a sampling of software development tools we use in a cloud-shrouded nutshell:

  • Amazon Web Services (AWS) for development, test, QA and production
  • VersionOne for agile management
  • Skype for scrum and video communication
  • GitHub for version control
  • Zoho for customer support
  • LogEntries for log integration
  • Confluence for documentation
  • Swagger for API management

And I could repeat the same for our Marketing, Sales, and Finance process and tools – the cloud has truly taken over.

We don’t know if these applications are built and run in the public cloud or the private cloud – that’s irrelevant to us. What’s important is that they solve a problem, are easily accessible, and meet our price point. We do know that these are all cloud-based SaaS offerings – we don’t use any on-premises, traditional software.

The net net is that many companies are just like ParkMyCloud. The question is no longer about how us newbies will enter the world – the question is, how fast will legacy enterprises migrate ALL their applications to cloud? And where will they strike the balance between public and private cloud?


Why ParkMyCloud Uses ParkMyCloud: A Story of the Importance of Drinking Your Own Champagne

I think most people can agree that eating your own dog food – or drinking your own champagne, to the glass-half-full crowd – is a hallmark of a business that has created a successful product. The opposite is clearly true: when Alan Mulally was brought in to Ford, he knew there was a problem when he was picked up from the airport in a Land Rover rather than a Ford car – and when he couldn’t find a single Ford vehicle in the executive parking garage.

For those of us in the software world, there’s another piece to that picture. To tell you how we discovered this for ourselves, I’m going to tell you a story.

It was six weeks after ParkMyCloud’s founding. We had the very first beta version of the product at our fingertips – but before sending it out to beta testers, we gathered the ParkMyCloud team in a conference room to do a bit of usability testing for ourselves. I created a ParkMyCloud user account and hooked up our AWS account so there would be instances to display.

“Now try it out, and let me know if you see any problems,” I told the group.

Heads down, focused on laptops, everyone diligently began to click around, playing with the first generation dashboard and parking schedule interface. For a moment, the room was quiet. Then a chorus went around.

“Hey, what happened?”

“Is anyone else getting this error?”

All at once, everyone around the table lost access to the application. It was gone. For a minute, we were left scratching our heads.

“Okay, what was everyone doing just before it shut down? Did anyone park anything?”

Finally, a sheepish marketing contractor spoke up. “I may have parked an instance.”

As it turned out, he had parked a production server. In particular, the production server running the ParkMyCloud application. D’oh!

Apparently, we needed governance. And we needed it fast. We got to work, and soon after, we released a version of ParkMyCloud that allowed for multiple users and teams for each ParkMyCloud account, all governed with role-based access control (RBAC).

We still use those roles today (incidentally, the “demo” team does not have access to production servers).

The lesson here is that using your application for yourself uncovers important usability issues. Some of these can’t be discovered as quickly as the one above, but only over time – like awkward flows, and reports that skip over meaningful data.

But of course, we also get the same benefits that the product gives to our customers – like saving money. In fact, after the approach was suggested to us by one of our customers, we adopted an “always off” schedule for ourselves. All of our non-production servers are parked 24×7. When our developers need to use them, they log in to ParkMyCloud and “snooze” the schedules for the length of time they need to use them.

This eliminates the need for central schedules, which works especially well for our multi-time-zone development team. Using this schedule, we save about 81% on our non-production servers.

I would encourage anyone who creates products to lead by example and use your product internally — and I assure potential ParkMyCloud customers that we drink our own champagne every day.


How one startup used AWS tools to build an MVP in 7 sprints

Below is the transcript of an interview with our friend Jonathan Chashper of Product Savvy about his experience rapidly building an app, Wolfpack, using various AWS tools. From getting his team in a room and unpacking laptops to releasing a minimum viable product (MVP) for beta testing took just 14 weeks – which Jonathan attributes not only to the skill of his team but to the ease of use and agility they gained from AWS.

Thanks for speaking with us, Jonathan! First of all, can you tell us a little bit about Wolfpack? What is it, and why did you decide to start it?

I am a motorcycle rider. A few years ago, I went on a group ride, and very quickly, the group broke apart. Some people missed a turn, some people got stuck at a red light, and a group of six suddenly became three groups of two. It took us about half an hour to figure out where everyone was, since you need to pull over, call everyone, and then – since everyone is riding their motorcycles – wait for them to pull over and call you back. It’s one big mess.

So I thought, there has to be a technical solution to this. I decided we should build a system that would allow me to track everyone I’m riding with, so I could see where the people riding with me are at any given time. If I got disconnected from the group, I could see where they are and pull over to gather back together. This was Eureka #1.

Eureka #2 was understanding that communication is the second big problem for moving in groups. When you ride in a group on motorcycles, you’re usually riding in a column. Let’s say you’re rider #4 and you need gas. You cannot just pull over into a gas station, because you will get separated from the group. So usually what happens is that you speed up, you try to signal to the guy at the head of the column, you point to the gas tank, and you hope he understands and actually pulls into a gas station. It’s dangerous. So this is the second problem that people have when they move in packs, and these are the two problems that Wolfpack is solving: keeping the group together and allowing for communication during the ride.

Wolfpack is a system for moving in groups. It doesn’t have to be motorcycles, but that’s the first niche we’re releasing it for. It’s also relevant for a group of cars, or even walking on foot with ten people around you, people get separated, and so on.

So we built a system that allows you as a user to install an app on a mobile device (both iOS and Android), that will allow you to manage the groups you want to travel with. Then, once you have the groups defined, you can define a trip with a starting point and an ending point. Everyone in the group then gets a map, and everyone can hop on it and start traveling together.


What AWS tools did you leverage when building Wolfpack?

Wolfpack is built on AWS, and we’re using CloudFront, we’re using SNS, we’re using S3 buckets, we’re using RDS, and of course EC2 instances, load balancing, Auto Scaling Groups, all the pretty buzzwords. We use them all – even AWS IoT, actually.

Have you had any interaction with AWS?

No, we’ve done it 100% ourselves. We’ve never talked to any solutions architects or anyone at AWS. It’s that easy to use.

What Amazon is doing is unbelievable. Things that used to take months or years to accomplish, you can now accomplish in days by clicking a couple of buttons and writing a little bit of code.

Why did you choose to develop on AWS?

The ecosystem they’ve created. This is why I think AWS is awesome: they’ve identified the pain points for people who want to build software.

The basic problem they identified is the need to buy servers. That’s the very basic solution they’ve given you: you can stand up a server in two minutes, you don’t need to buy or pay ten thousand dollars out of pocket, and so on and so forth, these are the good old EC2 Instances.

Then they went step by step and they said, okay, the next problem is managing databases. Before RDS, I had to have my own database from Oracle, and you’d have to buy a solution for load balancing, a solution for failover, back-up, recovery, etc., and this would cost tens of thousands, if not hundreds of thousands of dollars. AWS took that pain away by providing RDS.

The next step was message queues. Again, in the past, we would go to IBM or Oracle, back in the day, and you would use their message queues. It was complex, one message queue didn’t work with the other, and it was a mess. So AWS created SNS to solve that.

And so on and so forth, like a domino. They have the buckets to solve the storage issue. Now the newest thing is IoT, where they understand that there’s billions of devices out there trying to send messages to each other, and very quickly, you clog the system. So AWS said, “okay, we’ll solve that problem now.” And they created the AWS IoT system which allows you to connect any device you want, very quickly, and support, I don’t know, probably billions and billions of messages. Almost for free, it doesn’t really cost anything. It’s a great system.

Have you had any challenges with AWS so far?

No, actually, no technological challenges so far. What they offer is really easy to use and understand. The one thing we do want to do is pay as little as we can for the EC2 servers, which is where we’re using ParkMyCloud to schedule on/off times for our non-production servers.

Are you using any other tools for automation and DevOps?

Yes, we are using Jenkins – we have a continuous integration machine. Our testing is still manual, unfortunately.

Continuous integration is the idea that every time someone completes a piece of code, they submit it to a repository. Jenkins has a script that takes it out of the repository, compiles everything, and deploys it. So at any given time, every time someone submits something, it’s immediately ready for my QA guy to test. The need for “integration sessions” went down drastically.

How long has the development taken?

From the minute we put the team together until we had an MVP, we had seven sprints, which is just 14 weeks. And when I say “putting the team together,” I mean they went into a room and unpacked their laptops on March 1st. Fourteen weeks later, we had our MVP, which we’re now using for beta testing.

And did your team have deep AWS experience, or were some of them beginners?

Some of them had a little bit of AWS experience, but most of it came from us as on-the-job training. If you’re a software engineer, it’s really easy to get it.

On your non-production servers, where you’re using ParkMyCloud, do you know what percent of savings you’re getting?

We’re running those instances 12 hours a day, 5 days a week. So we’re running them 60 hours a week, so, let’s see, we’re getting about 65% savings. That’s pretty awesome.

Thanks so much for speaking with us, Jonathan.

Thank you!
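
Jonathan’s estimate checks out: 60 running hours out of the 168 in a week is about 36% uptime, or roughly 64% savings. For readers who want to run the same numbers on their own schedules, the parking arithmetic fits in a few lines – a quick sketch, assuming a flat hourly rate:

```python
def parking_savings(hours_on_per_week: float) -> float:
    """Fraction of on-demand cost saved by parking an instance,
    assuming a flat hourly rate and a 168-hour (24x7) week."""
    return 1 - hours_on_per_week / 168

# 12 hours a day, 5 days a week = 60 running hours:
print(f"{parking_savings(60):.0%}")  # prints 64% - roughly the 65% quoted above
```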
