After hearing a lot of buzz about this concept in AI, we decided to see what’s next for robotic process automation. The promise of the technology is that it can automate processes that employees currently perform manually, saving employees’ time and potentially reducing operational costs. While interest in robotic process automation (RPA) has been high for a while, actual adoption is now catching up and will only continue to grow. As organizations recognize the power of process automation, more industries are expected to deploy RPA bots to eliminate the manual, repetitive actions performed by employees.
RPA software is en route to becoming a billion-dollar category in 2020. Last year, Gartner projected that spending on RPA software would hit $1.3 billion. However, RPA still has some growing pains to address and is not exactly perfect, but it fits right in with the current trend in cloud computing toward optimization. And since we’re all about saving time and money, let’s recap this trend to see how it can help do both.
What is Robotic Process Automation?
To recap, RPA, whether it ends up being called “intelligent automation” or “cognitive automation” in the future, is a way to automate business processes by creating software robots, paired with artificial intelligence (AI) and machine learning capabilities, to perform manual and mundane tasks. Users configure these bots within an application and hand them a variety of repetitive tasks, which the bots complete by processing, employing, generating, and communicating information automatically. For example, you might program RPA bots to handle first-level customer support tasks by searching for answers; copy and paste data from one system to another for invoicing or expense management; or issue refunds. This video from IBM shows an example in action.
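To make the invoicing example concrete, here is a minimal sketch of what such a bot’s logic might look like in Python. Everything in it is a hypothetical placeholder – the CSV export, the `billing.example.com` endpoint, and the field names – rather than any particular RPA product’s syntax.

```python
import csv

import requests

# Hypothetical source: a CSV export from a legacy expense system.
SOURCE_FILE = "expense_report.csv"
# Hypothetical target: a REST endpoint on the invoicing system.
INVOICE_API = "https://billing.example.com/api/invoices"


def sync_expenses_to_invoices():
    """Copy each expense row into the invoicing system, replacing
    the copy-and-paste work a human would otherwise do."""
    with open(SOURCE_FILE, newline="") as f:
        for row in csv.DictReader(f):
            payload = {
                "employee_id": row["employee_id"],
                "amount": float(row["amount"]),
                "description": row["description"],
            }
            response = requests.post(INVOICE_API, json=payload, timeout=10)
            response.raise_for_status()  # stop on the first failed transfer


if __name__ == "__main__":
    sync_expenses_to_invoices()
```

Commercial RPA suites wrap this kind of logic in visual designers and screen-scraping connectors, but the underlying job – read from one system, write to another – is the same.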
RPA software is not part of an organization’s IT infrastructure. Instead, it sits on top of it, enabling a company to implement the technology quickly and efficiently. Furthermore, RPA tools can be trained to make judgments about future outputs. Many users appreciate its non-intrusive nature and its ability to integrate with existing infrastructure without disrupting the systems already in place.
How can you use Robotic Process Automation?
RPA technology can help organizations on their digital transformation journeys by:
- Enabling better customer service.
- Ensuring business operations and processes comply with regulations and standards.
- Allowing processes to be completed much more rapidly.
- Providing improved efficiency by digitizing and auditing process data.
- Creating cost savings for manual and repetitive tasks.
- Enabling employees to be more productive.
Companies like Walmart, AT&T, and Walgreens are adopting RPA. Clay Johnson, the CIO of Walmart, says they use RPA bots to automate pretty much anything from answering employee questions to retrieving useful information from audit documents. David Thompson, the CIO of American Express Global Business Travel, says they use RPA to automate the process of canceling an airline ticket and issuing refunds. In addition, Thompson is looking to use RPA to facilitate automatic rebooking recommendations and to automate certain expense management tasks in the company.
But more specific to cloud computing and IT, one great application for RPA is automated software testing. If testing involves multiple applications and monotonous work, RPA bots can take over the hours workers spend running tests by hand. Automated tests can run repeatedly at any time of day, which fits in with continuous testing as well as continuous integration (CI) and continuous delivery (CD) software development practices. Additionally, RPA can automate processes in monolithic legacy systems that are not worth developers’ time to update, providing stopgap automation while work on newer microservices systems is in progress.
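As a sketch of the testing use case, here is what one such repetitive check might look like using Selenium WebDriver from Python. The URL and element IDs are hypothetical, and a real suite would be triggered from a CI pipeline on a schedule rather than run by hand.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

# Hypothetical application under test; swap in your own URL and element IDs.
APP_URL = "https://app.example.com/login"


def test_login_flow():
    """Repetitive login check that a tester would otherwise run by hand."""
    driver = webdriver.Chrome()
    try:
        driver.get(APP_URL)
        driver.find_element(By.ID, "username").send_keys("test-user")
        driver.find_element(By.ID, "password").send_keys("test-password")
        driver.find_element(By.ID, "submit").click()
        # A trivial assertion; real tests would also verify application state.
        assert "Dashboard" in driver.title
    finally:
        driver.quit()


if __name__ == "__main__":
    test_login_flow()
```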
Is Robotic Process Automation the Best Way to Automate Cost Control?
Not all automation is achievable with RPA, though. One study concluded that only three percent of organizations have managed to scale RPA to a high level. Additionally, Gartner placed RPA tools at the “Peak of Inflated Expectations” in its Hype Cycle guide for artificial intelligence – another vote for more buzz than potential. In reality, RPA is only as efficient as the person configuring the automation flow, and organizations that have overly idealized expectations of the technology’s capabilities, or that don’t have a solid grasp of their own processes, may find it difficult to find the right tool to automate jobs.
However, RPA is expected to deliver tangible results to organizations that make automation a key component of their digital transformation, as the collaboration between digital workers and human talent becomes more efficiently aligned in the future.
So can it save you time and money? If employees at your company are spending a large percentage of their time on repetitive tasks that require little to no decision making, then yes, it probably can. It can also free up developer time spent on automatable tasks, like scripting, so developers can focus on creating value for your business.
For complex and long-term automation, though, purpose-built software is a better solution. If a solution to your automation needs already exists on the market, it will probably serve you better than RPA: there is no upfront period needed to program bots, and you won’t need to make the frequent process changes that many RPA bots require.
In the search to accelerate and simplify the DevOps process, we take a look at Microsoft’s Azure DevOps, a hosted service providing development and collaboration tools that was formerly known as Visual Studio Team Services (VSTS). Last year, Microsoft split VSTS into five separate Azure-branded services under the banner Azure DevOps – a complete public cloud offering that makes it easier for developers to adopt portions of the platform without going “all in” as the former VSTS required.
Azure DevOps supports both public and private cloud configurations – the services include:
- Azure Boards – A work tracking system with Kanban boards, dashboards, and reporting
- Azure Pipelines – A CI/CD, testing, and deployment system that can connect to any Git repository
- Azure Repos – A cloud-hosted private Git repository service
- Azure Test Plans – A solution for tests and capturing data about defects
- Azure Artifacts – A hosting facility for Maven, npm, and NuGet packages
Each of these Azure DevOps services is open and extensible and can be used with all varieties of applications, regardless of framework, platform, or cloud. Built-in cloud-hosted agents are provided for Windows, macOS, and Linux, and workflows support native containers, Kubernetes deployment options, virtual machines, and serverless environments.
With all five services together, users can take advantage of an integrated suite that provides end-to-end DevOps functionality. But since they are broken up into separate components, Azure DevOps gives users the flexibility to pick which services to employ without needing the full suite. For example, with Kubernetes having a standard interface and running the same way on all cloud providers, Azure Pipelines can be used for deploying to Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), or clusters from any other cloud provider, without requiring the use of any of the other Azure DevOps components.
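And because every Azure DevOps service is exposed through a REST API, Azure Pipelines can be driven entirely on its own from a script. The sketch below queues a run of an existing pipeline definition; the organization, project, definition ID, and personal access token (PAT) are placeholders, and the endpoint shown is our reading of the Builds REST API as documented at the time of writing.

```python
import requests

# Placeholders; substitute your own organization, project, and PAT.
ORGANIZATION = "my-org"
PROJECT = "my-project"
PAT = "my-personal-access-token"
BUILD_DEFINITION_ID = 42  # hypothetical pipeline definition ID


def queue_build():
    """Queue a run of an Azure Pipelines definition via the REST API."""
    url = (
        f"https://dev.azure.com/{ORGANIZATION}/{PROJECT}"
        "/_apis/build/builds?api-version=5.1"
    )
    # Azure DevOps accepts a PAT as the password in HTTP basic auth.
    response = requests.post(
        url,
        auth=("", PAT),
        json={"definition": {"id": BUILD_DEFINITION_ID}},
        timeout=30,
    )
    response.raise_for_status()
    build = response.json()
    print(f"Queued build {build['id']} ({build['status']})")


if __name__ == "__main__":
    queue_build()
```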
Embracing Azure DevOps
One of the main benefits for teams using Azure DevOps is that developers can work securely from anywhere, in any format, and embrace open-source technology. Azure DevOps addresses the vendor lock-in concerns of its earlier incarnation by providing extensive integration with industry and community tools.
With the many integrations available, users can log in using SSO tools like Azure AD or communicate with their team via Slack integration while accessing both cloud and on-premises resources.
Azure Pipelines offers free CI/CD with unlimited minutes and 10 parallel jobs for every open-source project, and many of the top open-source projects already use Azure Pipelines for CI/CD, including Atom, CPython, Pipenv, Tox, Visual Studio Code, and TypeScript.
Benefits of Azure DevOps
Azure DevOps use cases include:
- Planning – Azure DevOps makes it easy for DevOps teams to manage their work with full visibility across products and projects, helping them keep development efforts transparent and on schedule. Teams can define, track, and lay out work with Kanban boards, backlogs, custom dashboards, and reporting capabilities using Azure Boards.
- Developing – Allows teams to share code and collaborate with Visual Studio and Visual Studio Code. Users can create automatic workflows for automated testing and continuous integration in the cloud with Azure Pipelines.
- Delivery – Helps teams deploy applications to any Azure service automatically and with full control. Users can define and spin up multiple cloud environments with Azure Resource Manager or HashiCorp Terraform, and then create continuous delivery pipelines into these environments using Azure Pipelines or tools such as Jenkins and Spinnaker.
- Operations – With Azure Monitor, users can implement full stack monitoring, get actionable alerts, and gain insights from logs and telemetry.
As for Azure DevOps pricing, there are plenty of open-source tools that can be combined to deliver the functionality Azure DevOps promises, but the basic plan is free for open-source projects and small teams of up to five users. For larger teams, the cost ranges from $30 per month for 10 users to $90 per month for 20 users, and so forth.
In summary, Azure DevOps is an all-in-one project tracking and planning tool mixed with developer and DevOps tools for writing, building, and deploying code, and it is relatively quick and easy to use. As a hosted service it reduces maintenance costs, and developers only need an active subscription to have constant access to the latest version. Note, though, that Azure DevOps will indirectly consume Azure Storage and compute services, which will increase usage and impact costs.
As we continue to evaluate ways to automate various aspects of software development, today we’ll take a look at Google Cloud Composer. This is a fully managed workflow orchestration service built on Apache Airflow that makes workflow management and creation simple and consistent.
Hybrid and multi-cloud environments continue to grow as enterprises look to take advantage of the cloud’s scalability, flexibility, and global reach. Of the three major providers, Google Cloud has been the most open to supporting this multi-cloud reality. For example, earlier this year Google launched Anthos, a new managed service offering for hybrid and multi-cloud environments that gives enterprises operational consistency by running quickly on any existing hardware, leveraging open APIs, and giving developers the freedom to modernize. But managing these environments can either be an invaluable proposition for your company or a serious challenge to your infrastructure – which brings us to Google’s solution, Cloud Composer.
How does Google Cloud Composer work?
With Cloud Composer, you can monitor, schedule and manage workflows across your hybrid and multi-cloud environment. Here is how:
- As part of Google Cloud Platform (GCP), Cloud Composer integrates with tools like BigQuery, Dataflow, Dataproc, Datastore, Cloud Storage, Pub/Sub and Cloud ML Engine, giving users the ability to orchestrate end-to-end GCP workloads.
- You can code directed acyclic graphs (DAGs) using Python to improve workflow readability and pinpoint areas in need of assistance.
- It has one-click deployment built-in to give you instant and easy access to a range of connectors and graphical representations that show your workflow in action.
- Cloud Composer allows you to pull workflows together from wherever they live, supporting a fully-functioning and connected cloud environment.
- Since Cloud Composer is built on Apache Airflow – an open-source technology – it provides freedom from vendor lock-in as well as integration with a wide variety of platforms.
Simplifying hybrid and multi-cloud environment management
Cloud Composer is ideal for hybrid and multi-cloud management because it’s built on Apache Airflow and operated with the Python programming language. This open-source foundation, “no lock-in” approach, and portability give users the flexibility to create and deploy workflows seamlessly across clouds for a unified data environment.
Setting up your environment is quick and simple. Pipelines created with Cloud Composer are configured as DAGs, with easy integration for any required Python libraries, giving users of almost any skill level the ability to create and schedule their own workflows.
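As a sketch of what this looks like in practice, here is a minimal Airflow DAG of the kind you would upload to a Cloud Composer environment’s DAG folder. The task names and commands are placeholders, and the imports follow the Airflow 1.x module layout that Cloud Composer used at the time of writing.

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash_operator import BashOperator

default_args = {
    "owner": "data-team",  # hypothetical owner
    "retries": 1,
    "retry_delay": timedelta(minutes=5),
    "start_date": datetime(2019, 1, 1),
}

# The DAG object ties the tasks together and defines the schedule.
with DAG(
    dag_id="example_etl",
    default_args=default_args,
    schedule_interval="@daily",
    catchup=False,
) as dag:

    # Placeholder commands; real tasks might use the BigQuery or Dataflow
    # operators mentioned above instead.
    extract = BashOperator(
        task_id="extract",
        bash_command="echo 'pull data from the source system'",
    )
    load = BashOperator(
        task_id="load",
        bash_command="echo 'load data into the warehouse'",
    )

    extract >> load  # run extract before load
```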
However, costs can be a drawback to making the most of your cloud environment when using Cloud Composer. Specific costs for Cloud Composer can be hard to calculate, as Google measures the resources your deployments use and adds the total cost of your Apache Airflow deployments onto your wider GCP bill.
Cloud Composer Pricing
Pricing for Cloud Composer is based on the size of a Cloud Composer environment and the duration the environment runs, so you pay for what you use, as measured by vCPU/hour, GB/month, and GB transferred/month. Google offers multiple pricing units for Cloud Composer because it uses several GCP products as building blocks. You can also use the Google Cloud Platform pricing calculator to estimate the cost of using Cloud Composer.
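To see how those units combine, here is a back-of-the-envelope sketch in Python. The unit rates below are hypothetical placeholders, not Google’s published prices, so use the pricing calculator for real numbers.

```python
# Hypothetical unit rates for illustration only; real rates vary by
# region and are published on the GCP pricing page.
RATE_PER_VCPU_HOUR = 0.075  # $ per vCPU/hour
RATE_PER_GB_MONTH = 0.35    # $ per GB of storage per month
RATE_PER_GB_EGRESS = 0.12   # $ per GB of network egress


def estimate_monthly_cost(vcpus, storage_gb, egress_gb, hours=730):
    """Rough monthly estimate for an always-on Composer environment."""
    compute = vcpus * hours * RATE_PER_VCPU_HOUR
    storage = storage_gb * RATE_PER_GB_MONTH
    network = egress_gb * RATE_PER_GB_EGRESS
    return compute + storage + network


# Example: a small three-node environment with 6 vCPUs in total.
print(f"${estimate_monthly_cost(vcpus=6, storage_gb=100, egress_gb=50):,.2f}")
```

Note that the compute term dominates, because the environment is billed for every hour it exists – which leads directly to the question below.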
So, should you use Google Cloud Composer? Cloud Composer environments are meant to be long-running compute resources that are always online, so you can schedule repeating workflows whenever necessary. Unfortunately, you can’t turn a Cloud Composer environment on and off – you can only create or destroy it – so it may not be right for every environment and could cost more than the advantages are worth.
Earlier this year at the Google Cloud Next event, Google announced the launch of its new managed service offering for multi-cloud environments, Google Cloud Anthos.
The benefits of public cloud, like cost savings and higher levels of productivity, are often presented as an “all or nothing” choice to enterprises. However, with this offering, Google is acknowledging that multi-cloud environments are the reality as organizations see the value of expanding their cloud platform portfolios. Anthos is Google’s answer to the challenges enterprises face when adopting cloud solutions alongside their on-prem environments. It aims to let customers evolve into a hybrid and multi-cloud environment to take advantage of scalability, flexibility, and global reach. In the spirit of “write once, run anywhere,” Anthos also promises to give developers the ability to build once and run apps anywhere in their multi-cloud environments.
Anthos embraces open-source technology
Google Cloud Anthos is based on the Cloud Services Platform that Google introduced last year. Google’s vision is to integrate its family of cloud services.
Anthos is generally available both on Google Cloud Platform (GCP) with Google Kubernetes Engine (GKE) and in data centers with GKE On-Prem. So how does Google aim to deliver on the multi-cloud promise? It embraces open-source technology standards to let you build, manage, and run modern hybrid applications on existing on-prem environments or in the public cloud. Moreover, Anthos offers a flexible way to shift workloads from third-party clouds, such as Amazon Web Services (AWS) and Microsoft Azure, to GCP and vice versa, so users don’t have to worry about getting locked in to a provider.
As a 100% software solution, Anthos gives businesses operational consistency by running quickly on any existing hardware. Anthos leverages open APIs, giving developers the freedom to modernize. And because it is based on GKE, it automatically receives the latest feature updates and security patches.
Rapid cloud transformation from Anthos
Google also introduced Migrate for Anthos at Cloud Next, which automates the process of migrating virtual machines (VMs) directly into containers in GKE, regardless of whether the VM is set up on-prem or in the cloud. Migrate for Anthos makes workload portability less difficult, both technically and in terms of the developer skills required for migration.
Though most digital transformations are a mix of different strategies, for the workloads that will benefit most from containers, migrating with Anthos will deliver a fast, smooth path to modernization, according to the Migrate for Anthos beta.
Streamlining multi-cloud management with Anthos
Another piece of the offering is Anthos Config Management, which lets users streamline configuration so they can create multi-cluster policies out of the box, set and enforce secure role-based access controls and resource quotas, and create namespaces. The capability to automate policy and security also works with Istio, the open-source service mesh for microservices.
The management platform also lets users create common configurations for all administrative policies that apply to their Kubernetes clusters, both on-prem and in the cloud. Users can define and enforce configurations globally, validate them with a built-in validator that reviews every line of code before it reaches the repository, and actively monitor them.
Expanded Services for Anthos
Google Cloud is expanding its Anthos platform with Anthos Service Mesh and Cloud Run for Anthos serverless capabilities, announced last week and currently in beta.
The first, Anthos Service Mesh, is built on Istio APIs and is designed to connect, secure, monitor, and manage microservices running in containerized environments, all through a single administrative dashboard that tracks the application’s traffic. This new service aims to improve the developer experience by making it easier to manage and troubleshoot the complexities of a multi-cloud environment.
Another update Google introduced was Cloud Run for Anthos. This managed service for serverless computing allows users to easily run stateless workloads on a fully managed Anthos environment without having to manage the underlying cloud resources, and it only charges when the application uses resources. Cloud Run for Anthos can run workloads on Google Cloud or on-premises, but it is limited to Google’s Cloud Platform (GCP) only.
Both AWS and Azure have hybrid cloud offerings, but they are not the same as Anthos, mostly for one single reason.
AWS Outposts brings native AWS services, infrastructure, and operating models to virtually any data center, co-location space, or on-premises facility. In the same operating spirit as Anthos, it uses the same AWS APIs, tools, and infrastructure across on-prem and the AWS cloud to deliver a seamless and consistent hybrid AWS experience.
Azure Stack, an extension of Azure for consistently building and running hybrid applications across cloud and on-prem environments, delivers a solution for workloads wherever they reside and gives them a connection to Azure cloud services.
As you can see, the main difference is that both AWS Outposts and Azure Stack are limited to combining on-premises infrastructure with the respective cloud provider itself, with no support for other cloud providers, unlike Anthos. Google Cloud Anthos manages hybrid multi-cloud environments, not just hybrid cloud environments, making it a unique offering for multi-cloud users.
Is green computing something cloud providers like Amazon, Microsoft, and Google care about? And whether they do or not – how much does it matter? As the data center market continues to grow, it’s making an impact not only on the economy but on the environment as well.
Public cloud offers enterprises more scalability and flexibility compared to their on-premises infrastructure. One benefit occasionally touted by the major cloud providers is that organizations become more socially responsible when moving to the cloud by reducing their carbon footprint. But is this true?
Here is one example: Northern Virginia is the east coast’s capital of data centers, where “Data Center Alley” is located (and, as it happens, the ParkMyCloud offices), home to more than 100 data centers and more than 10 million square feet of data center space. Northern Virginia welcomed the data center market because of its positive economic impact. But as the demand for cloud services continues to grow, the expansion of data centers also increases dramatically. Earlier this year, the cloud boom in Northern Virginia alone was reaching over 4.5 gigawatts in commissioned energy, about the same power output needed from nine large (500-megawatt) coal power plants.
Environmental groups like Greenpeace have accused major cloud providers like Amazon Web Services (AWS) of not doing enough for the environment when operating data centers. According to them, the problem is that cloud providers rely on commissioned energy from energy companies focused mostly on dirty energy (coal and natural gas), with very little coming from renewable energy initiatives. While the claims put the spotlight on energy companies as well, we wanted to know what (if anything) the major cloud providers are doing to rely less on these types of energy and to supply data centers with cleaner energy, making green computing a reality.
Data Center Sustainability Projects from AWS
According to AWS’s sustainability team, they’re investing in green energy initiatives and striving toward an ambitious goal of 100% renewable energy use by 2040. They are doing this by proposing and supporting smart environmental policies, and by leveraging technology expertise that drives sustainable innovation, working with state and local environmental groups and through power purchase agreements (PPAs) with power companies.
AWS’s Environmental Layer, which is dedicated to site selection, construction, operations, and the mitigation of environmental risks for data centers, also includes sustainability considerations when making such decisions. According to them, “When companies move to the AWS Cloud from on-premises infrastructure, they typically reduce carbon emissions by 88%.” This is because their data suggests companies generally use 77% fewer servers, 84% less power, and gain access to a 28% cleaner mix of energy (solar and wind power) compared to using on-premises infrastructure. The arithmetic roughly checks out: consuming 16% of the power on a 28% cleaner energy mix works out to about 0.16 × 0.72 ≈ 0.12 of the original emissions, or an 88% reduction.
So, how much of this commitment has AWS been able to achieve, and is it enough? In 2018, AWS said it had made a lot of progress on its sustainability commitment and had exceeded 50% renewable energy use. Currently, AWS has nine renewable energy farms in the US: six solar farms in Virginia and three wind farms in North Carolina. AWS plans to add three more renewable energy projects – one more here in the US, one in Ireland, and one in Sweden. Once completed, these projects are expected to create approximately 2.7 gigawatts of renewable energy annually.
Microsoft’s Environmental Initiatives for Data Centers
Microsoft has stated that they are committed to change and to making a positive impact on the environment by “leveraging technology to solve some of the world’s most urgent environmental issues.”
In 2016, they announced they would power their data centers with more renewable energy, setting a target of 50% renewable energy by the end of 2018. According to them, they achieved that goal in 2017, earlier than expected. Looking ahead, they plan to surpass their next milestone of 70% and hope to reach 100% renewable energy by 2023. If they meet these targets, they will be far ahead of AWS.
Beyond renewable energy, Microsoft plans to use IoT, AI and blockchain technology to measure, monitor and streamline the reuse, resale, and recycling of data center assets. Additionally, Microsoft will implement new water replenishment initiatives that will utilize rainfall for non-drinking water applications in their facilities.
Google’s Focus for Efficient Data Centers
Google claims that making data centers run as efficiently as possible is a very big deal, and that reducing energy usage has been a major focus for them over the past 10 years.
Google’s innovation in the data center market came from building facilities from the ground up instead of buying existing infrastructure. According to Google, using machine learning technology to monitor and improve power usage effectiveness (PUE) – the ratio of total facility energy to the energy used by computing equipment – and to find new ways to save energy enabled them to implement new cooling technologies and operational strategies that reduced energy consumption in their buildings by 30%. Additionally, they deployed custom-designed, high-performance servers that use as little energy as possible, stripped of unnecessary components, helping them reduce their footprint and add more load capacity.
By 2017, Google announced it was using 100% renewable energy, purchased through power purchase agreements (PPAs) with wind and solar farms and resold back to the wholesale markets where its data centers are located.
The Environmental Argument
Despite the renewable energy pledges cloud providers are committing to, cloud services continue to grow beyond those commitments, and the energy needed to operate data centers is still very dependent on “dirty energy.”
Breakthroughs in cloud sustainability are taking place, whether big or small, giving the cloud better infrastructure, higher-performance servers, and reduced carbon emissions through greater access to renewable resources like wind and solar power.
Some may argue that time is against us, but if cloud providers continue to improve on their existing commitments and keep pace with growth, then data centers – and ultimately the environment – will benefit.