Google Sustainability Efforts in the Cloud Now Claim to be “Carbon Intelligent”

Google Sustainability is an effort that ranges across their business, from the Global Fishing Watch to environmental consciousness in the supply chain. Cloud computing has been a major draw on global energy in recent years: the amount of computing done in data centers more than quintupled between 2010 and 2018. Yet the energy consumed by the world’s data centers grew only six percent during that period, thanks to improvements in energy efficiency. Still, that’s a lot of power, which is why Google’s sustainability efforts for data centers and cloud computing are especially important.

Google Cloud Sustainability Efforts – As Old as Their Data Centers

Reducing energy usage has been an initiative at Google for more than 10 years. Google has been carbon neutral since 2007, and 2019 marked the third year in a row that the company matched its energy usage with 100 percent renewable energy purchases. Google’s innovation in the data center market also comes from building facilities from the ground up instead of buying existing infrastructure, and from using machine learning to monitor and improve power usage effectiveness (PUE) and find new ways to save energy in its data centers.

When comparing the big three cloud providers in terms of sustainability efforts, AWS is by far the largest source of carbon emissions from the cloud globally, due to its dominance. However, AWS’s sustainability team is investing in green energy initiatives and is working toward an ambitious goal of 100 percent renewable energy usage by 2040, to become as carbon-neutral as Google has been. Microsoft Azure, on the other hand, has run on 100 percent renewable energy since 2014, but it would be considered a low-carbon electricity consumer in part because it runs a smaller share of the world’s computing than Amazon or Google.

Nonetheless, data centers from the big three cloud providers, wherever they are, all run on electricity. How the electricity is generated is the important factor in whether they are more or less favorable for the environment. For Google, reaching 100% renewable energy purchasing on a global and annual basis was just the beginning. In addition to continuing their aggressive move forward with renewable energy technologies like wind and solar, they wanted to achieve the much more challenging long-term goal of powering operations on a region-specific, 24-7 basis with clean, zero-carbon energy.

Why Renewable Energy Needs to Be the Norm for Cloud Computing

It’s no secret that cloud computing is a drain on resources, consuming roughly three percent of all electricity generated on the planet. That’s why it’s important for Google and other cloud providers to be part of the solution to global climate change. Renewable energy is an important element, as is matching operational energy use with clean energy purchases and helping to create pathways for others to purchase clean energy. However, it’s not just about fighting climate change. Purchasing energy from renewable resources also makes good business sense, for two key reasons:

  • Renewables are cost-effective – The cost to produce renewable energy technologies like wind and solar has come down precipitously in recent years. By 2016, the levelized cost of wind had come down 60% and the levelized cost of solar had come down 80%. In fact, in some areas, renewable energy is the cheapest form of energy available on the grid. Reducing the cost to run servers reduces the cost for public cloud customers – and we’re in favor of anything that does that.
  • Renewable energy inputs like wind and sunlight are essentially free – Having no fuel input for most renewables allows Google to eliminate exposure to fuel-price volatility, which is especially helpful when managing a global portfolio of operations in a wide variety of markets.

Google Sustainability in the Cloud Goes “Carbon Intelligent”

In keeping with its goal of having data centers consume more energy from renewable resources, Google recently announced that it will also time-shift workloads to take advantage of these resources and make data centers run harder when the sun shines and the wind blows.

“We designed and deployed this first-of-its-kind system for our hyperscale (meaning very large) data centers to shift the timing of many compute tasks to when low-carbon power sources, like wind and solar, are most plentiful,” Google announced.

Google’s latest advancement in sustainability is a newly developed carbon-intelligent computing platform that seems to work by using two forecasts – one indicating the future carbon intensity of the local electrical grid near its data center and another of its own capacity requirements – and using that data to “align compute tasks with times of low-carbon electricity supply.” The result is that workloads run when Google believes it can do so while generating the lowest possible CO2 emissions.
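
Google hasn’t published the platform’s internals, but the core idea of aligning flexible work with a carbon-intensity forecast is easy to illustrate. Below is a minimal, hypothetical sketch (not Google’s actual system): given an hourly forecast of grid carbon intensity and a forecast of spare data center capacity, it picks the lowest-carbon hours that still have headroom for a deferrable batch job. All values, names, and thresholds are illustrative assumptions.

```python
from typing import Dict, List

def pick_low_carbon_hours(
    carbon_forecast: Dict[int, float],    # hour of day -> forecast grid carbon intensity (gCO2/kWh)
    capacity_forecast: Dict[int, float],  # hour of day -> forecast spare compute capacity (0.0-1.0)
    hours_needed: int,                    # how many hours of runtime the deferrable job needs
    min_headroom: float = 0.2,            # only consider hours with at least this much spare capacity
) -> List[int]:
    """Choose the hours to run a deferrable job, favoring the lowest-carbon electricity."""
    eligible = [h for h, spare in capacity_forecast.items() if spare >= min_headroom]
    # Prefer the hours when low-carbon power (solar at midday, wind overnight) is most plentiful.
    eligible.sort(key=lambda h: carbon_forecast[h])
    return sorted(eligible[:hours_needed])

# Toy forecasts: solar pushes grid carbon intensity down around midday,
# and the data center has spare capacity during the day.
carbon = {h: 450 - 250 * max(0.0, 1 - abs(h - 13) / 6) for h in range(24)}
capacity = {h: 0.7 if 9 <= h <= 17 else 0.1 for h in range(24)}

print(pick_low_carbon_hours(carbon, capacity, hours_needed=4))  # -> [11, 12, 13, 14]
```

A production system would also have to respect task deadlines, decide which work is actually flexible, and re-plan as forecasts change; the sketch ignores all of that.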

The carbon-intelligent computing platform’s first version will focus on shifting tasks to different times of the day, within the same data center. But Google already has plans to expand its capability: in addition to shifting tasks in time, the platform will also move flexible compute tasks between different data centers, so that more work is completed when and where doing so is more environmentally friendly. As the platform continues to generate data, Google will document its research and share it with other organizations in hopes they can also develop similar tools and follow suit.

Forecasting with artificial intelligence and machine learning is a powerful combination, and Google is using it in this platform to anticipate workloads and improve the overall health, performance, and efficiency of its data centers. Combined with efforts to use cloud resources efficiently, such as only running VMs when needed and not oversizing them, improved resource utilization can reduce your carbon footprint and save money.

NEW in ParkMyCloud: Now Offering Azure AKS Cost Optimization!

We’re excited to share the latest in cost optimization for container services: ParkMyCloud now enables enterprises to optimize their Azure AKS (managed Azure Kubernetes Service) cloud costs. This is the second managed container service supported in the platform, following our announcement of scheduling support for Amazon EKS (managed Elastic Kubernetes Service) last month.

Why is Container Cost Optimization Essential?

As we continue to expand our container management offering, it’s essential to understand that container management, like cloud management more broadly, includes orchestration, security, monitoring, and of course, optimization.

Containers provide opportunities for efficiency and more lightweight application development, but like any on-demand computing resource, they also leave the door open for wasted spend. If not managed properly, unused, idle, and otherwise suboptimal container resources will contribute billions more to the estimated $17.6 billion in wasted cloud spend expected this year alone.

AKS Scheduling in ParkMyCloud

The opportunities to save money through container optimization are essentially no different from those for your non-containerized resources. ParkMyCloud analyzes resource utilization history and creates recommended schedules for compute, database, and container resources, and programmatically schedules and resizes them, saving enterprises around the world tens of millions of dollars.

You can reduce your AKS costs by setting schedules for AKS nodes based on working hours and usage, and automatically assign those schedules using the platform’s policy engine and tags. Or, use ParkMyCloud’s schedule recommendations for your resources based on your utilization data. 
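
ParkMyCloud handles this scheduling for you, but the underlying idea is simple: scale AKS node pools down outside working hours and back up before the team starts. As a rough illustration (not how ParkMyCloud is implemented), the hypothetical sketch below calls the Azure CLI’s az aks nodepool scale command based on the current hour; the resource group, cluster, and node pool names are placeholders.

```python
import subprocess
from datetime import datetime

# Placeholder names -- substitute your own resource group, cluster, and node pool.
RESOURCE_GROUP = "my-rg"
CLUSTER_NAME = "my-aks-cluster"
NODE_POOL = "userpool"

WORK_HOURS = range(8, 19)   # 8am-7pm local time
WORKDAY_NODES = 3
OFF_HOURS_NODES = 0         # user node pools can scale to zero; system node pools cannot

def scale_node_pool(count: int) -> None:
    """Scale the AKS node pool to the given node count via the Azure CLI."""
    subprocess.run(
        [
            "az", "aks", "nodepool", "scale",
            "--resource-group", RESOURCE_GROUP,
            "--cluster-name", CLUSTER_NAME,
            "--name", NODE_POOL,
            "--node-count", str(count),
        ],
        check=True,
    )

if __name__ == "__main__":
    # Run hourly from cron or a scheduled job: park the pool overnight and on
    # weekends, and bring it back up for the working day.
    now = datetime.now()
    in_work_hours = now.weekday() < 5 and now.hour in WORK_HOURS
    scale_node_pool(WORKDAY_NODES if in_work_hours else OFF_HOURS_NODES)
```

The difference with ParkMyCloud is that schedules are recommended from utilization data and applied automatically through policies and tags, rather than maintained by hand.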

Already a ParkMyCloud user? Log in to your account to optimize your AKS costs. Please note that you’ll have to update your Azure permissions; details are available in the release notes.

Not yet a ParkMyCloud user? Start a free trial to get started.

What’s Next for Container Optimization?

This is the second release for container optimization in ParkMyCloud. The platform already offers support for Amazon EKS (managed Elastic Kubernetes Service). Scheduling support for Amazon ECS, AWS Fargate, and Google Kubernetes Engine (GKE) will be available in the next few months, so stay tuned.

Questions? Feature requests? We’d love to hear them. Comment below or contact us directly.

Will Robotic Process Automation Save Your Company Time and Money in the Cloud?

After hearing a lot of buzz about this concept in AI, we decided to see what’s next for robotic process automation. The promise of the technology is that it can automate processes that employees are doing manually, saving your employees’ time and potentially reducing operational costs. While interest in robotic process automation (RPA) has been high for a while, actual adoption is now catching up and will only continue to grow. As organizations come to understand the power of process automation, more industries are expected to deploy RPA bots to eliminate repetitive manual actions performed by employees.

RPA software is en route to becoming a billion-dollar category in 2020. Last year, Gartner projected that spending on RPA software would hit $1.3 billion. However, RPA still has some growing pains to address and is not exactly 100 percent perfect, but it fits right in with the current trends in cloud computing toward optimization. And since we’re all about saving time and money, let’s recap this trend to see how it can help you do both.

What is Robotic Process Automation?

To recap, RPA, whether it’s called “intelligent automation” or “cognitive automation” in the future, is a way to automate business processes by creating software robots, paired with artificial intelligence (AI) and machine learning capabilities, that perform manual and mundane work tasks. It allows users to configure bots within an application that can handle a variety of repetitive tasks by processing, employing, generating, and communicating information automatically. For example, you might program RPA bots to handle first-level customer support tasks by searching for answers, copy and paste data from one system to another for invoicing or expense management, or issue refunds. This video from IBM shows an example in action.
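
To make that concrete, here is a toy, hypothetical sketch of the “copy data from one system to another” pattern: it reads expense rows exported from one system as CSV and posts each one to another system’s REST endpoint. Real RPA platforms do this through configurable bots, often by driving the application UI itself rather than an API; the URL and field names here are placeholders.

```python
import csv
import requests

# Placeholder endpoint for the target system (e.g. an invoicing or expense tool).
TARGET_API = "https://example.com/api/expenses"

def copy_expenses(csv_path: str) -> int:
    """Read rows exported from the source system and create them in the target system."""
    created = 0
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            payload = {
                "employee": row["employee"],    # hypothetical column names in the export
                "amount": float(row["amount"]),
                "category": row["category"],
            }
            response = requests.post(TARGET_API, json=payload, timeout=10)
            response.raise_for_status()
            created += 1
    return created

if __name__ == "__main__":
    print(f"Copied {copy_expenses('expenses_export.csv')} expense records")
```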

RPA software is not part of an organization’s IT infrastructure. Instead, it sits on top of it, enabling a company to implement the technology quickly and efficiently. Furthermore, RPA tools can be trained to make judgments about future outputs. Many users appreciate its non-intrusive nature and the ability to integrate within infrastructures without causing disruption to systems already in place.

How can you use Robotic Process Automation?

RPA technology can help organizations on their digital transformation journeys by:

  • Enabling better customer service.
  • Ensuring business operations and processes comply with regulations and standards.
  • Allowing processes to be completed much more rapidly.
  • Providing improved efficiency by digitizing and auditing process data.
  • Creating cost savings for manual and repetitive tasks.
  • Enabling employees to be more productive.

Companies like Walmart, AT&T, and Walgreens are adopting the use of RPA. Clay Johnson, the CIO of Walmart, says they use RPA bots to automate pretty much anything from answering employee questions to retrieving useful information from audit documents. The CIO of American Express Global Business Travel, David Thompson, says they use RPA to automate the process of canceling an airline ticket and issuing refunds. In addition, Thompson is looking to use RPA to facilitate automatic rebooking recommendations and to automate certain expense management tasks in the company.

But more specific to cloud computing and IT, one great application for RPA is in automated software testing. If testing involves multiple applications and monotonous work, RPA can replace workers’ time spent testing. Automated tests can run repeatedly at any time of day. This approach fits in with continuous testing as well as continuous integration (CI) and continuous delivery (CD) software development practices. Additionally, RPA can be used to automate processes in monolithic legacy systems that are not worth developers’ time to update, to bring automation while work on newer microservices systems is in progress. 

Is Robotic Process Automation the Best Way to Automate Cost Control?

A study found that not all automation is achievable with RPA, concluding that only three percent of organizations have managed to scale RPA to a high level. Additionally, Gartner placed RPA tools at the “Peak of Inflated Expectations” in its Hype Cycle guide for artificial intelligence – another vote for more buzz than potential. In reality, RPA is only as efficient as the person configuring the automation flow, and organizations that have overly idealized expectations of the technology’s capabilities, or that don’t have a solid grasp of their own processes, may find it difficult to find the right tool to automate jobs.

However, RPA is expected to deliver tangible results to organizations that make automation a key component of their digital transformation, as the collaboration between digital workers and human talent becomes more efficiently aligned in the future.

So can it save you time and money? If employees at your company are spending a large percentage of their time on repetitive tasks that require little to no decision making, then yes, it probably can. It’s also important because it will free up developer time that is spent on automatable tasks, like scripting, so they can focus on creating value for your business. 

For complex and long-term automation, though, purpose-built software is a better solution. If there is already a solution to your automation needs on the market, it will probably serve you better than RPA because there won’t be an upfront period needed to program bots, you won’t need to make frequent changes to your processes like many RPA bots will require, and it’s a better solution for the long run. 

The 5 Components of Azure DevOps

In the search to accelerate and simplify the DevOps process, we take a look at Microsoft’s Azure DevOps, a hosted service providing development and collaboration tools that was formerly known as Visual Studio Team Services (VSTS). Last year, Microsoft split VSTS into five separate Azure-branded services under the banner of Azure DevOps, a complete public cloud offering that makes it easier for developers to adopt portions of the platform without requiring them to go “all in” as the former VSTS did.

Azure DevOps supports both public and private cloud configurations – the services include:

  • Azure Boards – A work tracking system with Kanban boards, dashboards, and reporting
  • Azure Pipelines – A CI/CD, testing, and deployment system that can connect to any Git repository
  • Azure Repos – A cloud-hosted private Git repository service
  • Azure Test Plans – A solution for tests and capturing data about defects
  • Azure Artifacts – A hosting facility for Maven, npm, and NuGet packages

Each of these Azure DevOps services is open and extensible and can be used with all varieties of applications, regardless of the framework, platform or cloud. Built-in cloud-hosted agents are provided for Windows, Mac OS and Linux and workflows are enabled for native container support and Kubernetes deployment options, virtual machines, and serverless environments.

With all five services together, users can take advantage of an integrated suite that provides end-to-end DevOps functionality. But since they are broken up into separate components, Azure DevOps gives users the flexibility to pick which services to employ without needing to use the full suite. For example, because Kubernetes has a standard interface and runs the same way on all cloud providers, Azure Pipelines can be used for deploying to Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), or clusters from any other cloud provider without requiring the use of any of the other Azure DevOps components.
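
Each service can also be driven programmatically on its own. As a hedged example, the sketch below queues a run of an existing pipeline through the Azure DevOps REST API (the “Runs” endpoint); the organization, project, pipeline ID, and branch are placeholders, and you should confirm the API version against the current Azure DevOps REST documentation.

```python
import os
import requests

# Placeholders -- substitute your own organization, project, and pipeline ID.
ORG = "my-org"
PROJECT = "my-project"
PIPELINE_ID = 42
PAT = os.environ["AZURE_DEVOPS_PAT"]  # personal access token with Build (read & execute) scope

def queue_pipeline_run(branch: str = "refs/heads/main") -> dict:
    """Queue a run of an existing Azure Pipelines definition via the REST API."""
    url = (
        f"https://dev.azure.com/{ORG}/{PROJECT}/_apis/pipelines/"
        f"{PIPELINE_ID}/runs?api-version=6.0-preview.1"
    )
    body = {"resources": {"repositories": {"self": {"refName": branch}}}}
    # Azure DevOps accepts basic auth with an empty username and a PAT as the password.
    response = requests.post(url, json=body, auth=("", PAT), timeout=30)
    response.raise_for_status()
    return response.json()

if __name__ == "__main__":
    run = queue_pipeline_run()
    print(f"Queued run {run['id']} (state: {run['state']})")
```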

Embracing Azure DevOps

One of the main benefits for teams using Azure DevOps is that developers can work securely from anywhere, in any format, and embrace open-source technology. Azure DevOps addresses the vendor lock-in concerns of its earlier incarnation by providing extensive integration with industry and community tools.

With the many integrations available, users can log in using SSO tools like Azure AD or communicate with their team via Slack integration while accessing both cloud and on-premises resources.

Azure Pipelines offers free CI/CD with unlimited minutes and 10 parallel jobs for every open-source project, and many of the top open-source projects already use Azure Pipelines for CI/CD, including Atom, CPython, Pipenv, Tox, Visual Studio Code, and TypeScript.

Benefits of Azure DevOps 

Azure DevOps use cases include:

  • Planning – Azure DevOps makes it easy for DevOps teams to manage their work with full visibility across products and projects, helping them keep development efforts transparent and on schedule. Teams can define, track, and lay out work with Kanban boards, backlogs, custom dashboards, and reporting capabilities using Azure Boards.
  • Developing – Allows teams to share code and collaborate with Visual Studio and Visual Studio Code. Users can create workflows for automated testing and continuous integration in the cloud with Azure Pipelines.
  • Delivery – Helps teams deploy applications to any Azure service automatically and with full control. Users can define and spin up multiple cloud environments with Azure Resource Manager or HashiCorp Terraform, and then create continuous delivery pipelines into these environments using Azure Pipelines or tools such as Jenkins and Spinnaker.
  • Operations –  With Azure Monitor, users can implement full stack monitoring, get actionable alerts, and gain insights from logs and telemetry.

Pricing

As for Azure DevOps pricing, there are many open-source tools that can be combined to deliver the functionality Azure DevOps promises, but the Basic plan is free for open-source projects and for small teams of up to five users. For larger teams, the cost ranges from $30 per month for 10 users to $90 per month for 20 users, and so on.
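
Those figures are consistent with per-user pricing in which the first five users are free and each additional user costs roughly $6 per month (verify against current Azure DevOps pricing before budgeting). A quick sanity check of the numbers quoted above:

```python
FREE_USERS = 5
PRICE_PER_EXTRA_USER = 6.00  # USD per user per month for the Basic plan (verify current pricing)

def monthly_cost(users: int) -> float:
    """Estimated monthly cost of the Azure DevOps Basic plan for a given team size."""
    return max(0, users - FREE_USERS) * PRICE_PER_EXTRA_USER

for team in (5, 10, 20):
    print(f"{team} users: ${monthly_cost(team):.0f}/month")
# 5 users: $0/month, 10 users: $30/month, 20 users: $90/month
```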

In summary, Azure DevOps is an all-in-one project tracking and planning tool combined with developer and DevOps tools for writing, building, and deploying code, and it’s relatively quick and easy to use. Maintenance costs are reduced because developers only need an active subscription to have constant access to the latest version. Keep in mind, however, that Azure DevOps indirectly uses Azure Storage and compute services, which will increase usage and impact costs.

How to Manage Hybrid & Multi-Cloud Environments with Google Cloud Composer

As we continue to evaluate ways to automate various aspects of software development, today we’ll take a look at Google Cloud Composer. This is a fully managed workflow orchestration service built on Apache Airflow that makes workflow management and creation simple and consistent.

The use of hybrid and multi-cloud environments continues to grow as enterprises look to take advantage of the cloud’s scalability, flexibility, and global reach. Of the three major providers, Google Cloud has been the most open to supporting this multi-cloud reality. For example, earlier this year, Google launched Anthos, a new managed service offering for hybrid and multi-cloud environments that gives enterprises operational consistency by running on existing hardware, leveraging open APIs, and giving developers the freedom to modernize. But managing these environments can either be an invaluable proposition for your company or one that completely challenges your infrastructure, which brings us to Google’s solution: Cloud Composer.

How does Google Cloud Composer work?

With Cloud Composer, you can monitor, schedule and manage workflows across your hybrid and multi-cloud environment. Here is how:

  • As part of Google Cloud Platform (GCP), Cloud Composer integrates with tools like BigQuery, Dataflow, Dataproc, Datastore, Cloud Storage, Pub/Sub and Cloud ML Engine, giving users the ability to orchestrate end-to-end GCP workloads.
  • You can code directed acyclic graphs (DAGs) using Python to improve workflow readability and pinpoint areas in need of assistance.
  • It has one-click deployment built-in to give you instant and easy access to a range of connectors and graphical representations that show your workflow in action.
  • Cloud Composer allows you to pull workflows together from wherever they live, supporting a fully-functioning and connected cloud environment.
  • Since Cloud Composer is built on Apache Airflow – an open-source technology – it provides freedom from vendor lock-in as well as integration with a wide variety of platforms.  

Simplifying hybrid and multi-cloud environment management

Cloud Composer is ideal for hybrid and multi-cloud management because it’s built on Apache Airflow and operated with the Python programming language. The open-source technology, “no lock-in” approach, and portability give users the flexibility to create and deploy workflows seamlessly across clouds for a unified data environment.

Setting up your environment is quick and simple. Pipelines created with Cloud Composer are configured as DAGs with easy integration for any required Python libraries, giving users of almost any level the ability to create and schedule their own workflows.
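
For instance, a minimal Airflow DAG for Cloud Composer looks something like the sketch below: a daily two-step workflow defined in Python. The DAG ID, schedule, and commands are placeholders, and the exact import paths depend on your Airflow version (Cloud Composer environments have historically run Airflow 1.10.x).

```python
from datetime import datetime, timedelta

from airflow import DAG
from airflow.operators.bash_operator import BashOperator  # Airflow 1.10.x path; 2.x uses airflow.operators.bash

default_args = {
    "owner": "data-team",               # hypothetical owner
    "retries": 1,
    "retry_delay": timedelta(minutes=5),
}

# A simple daily workflow: extract data from an external source, then load it.
with DAG(
    dag_id="example_multi_cloud_etl",   # hypothetical DAG name
    default_args=default_args,
    schedule_interval="@daily",
    start_date=datetime(2020, 1, 1),
    catchup=False,
) as dag:
    extract = BashOperator(
        task_id="extract",
        bash_command="echo 'pull data from an external source'",
    )
    load = BashOperator(
        task_id="load",
        bash_command="echo 'load data into BigQuery'",
    )

    extract >> load  # run extract before load
```

Uploading a file like this to the environment’s DAGs bucket is all it takes for Cloud Composer to pick up and schedule the workflow.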

However, costs can be a drawback to making the most of your cloud environment when using Cloud Composer. Specific costs for Cloud Composer can be hard to calculate, as Google measures the resources your deployments use and adds the total cost of your Apache Airflow deployments onto your wider GCP bill.

Cloud Composer Pricing 

Pricing for Cloud Composer is based on the size of a Cloud Composer environment and the duration the environment runs, so you pay for what you use, as measured by vCPU/hour, GB/month, and GB transferred/month. Google offers multiple pricing units for Cloud Composer because it uses several GCP products as building blocks. You can also use the Google Cloud Platform pricing calculator to estimate the cost of using Cloud Composer. 

So, should you use Google Cloud Composer? Cloud Composer environments are meant to be long-running compute resources that are always online so that you can schedule repeating workflows whenever necessary. Unfortunately, since you can’t turn a Cloud Composer environment on and off (you can only create or destroy one), it may not be right for every environment and could cost more than the advantages are worth.