Google Sustainability Efforts in the Cloud Now Claim to be “Carbon Intelligent”

Google’s sustainability efforts range across its business, from Global Fishing Watch to environmental consciousness in its supply chain. Cloud computing has become a major consumer of global energy: the amount of computing done in data centers more than quintupled between 2010 and 2018. Yet the energy consumed by the world’s data centers grew only six percent during that period, thanks to improvements in energy efficiency. Still, that’s a lot of power, which is why Google’s sustainability efforts for data centers and cloud computing are especially important.

Google Cloud Sustainability Efforts – As Old as Their Data Centers

Reducing energy usage has been an initiative for Google for more than 10 years. Google has been carbon neutral since 2007, and 2019 marked the third year in a row that it matched its energy usage with 100 percent renewable energy purchases. Google’s innovation in the data center market also comes from building facilities from the ground up instead of buying existing infrastructure, and from using machine learning to monitor and improve power usage effectiveness (PUE) and find new ways to save energy in its data centers.

When comparing the big three cloud providers in terms of sustainability efforts, AWS is by far the largest source of carbon emissions from the cloud globally, due to its dominance. However, AWS’s sustainability team is investing in green energy initiatives and has committed to an ambitious goal of 100% renewable energy by 2040, aiming to become as carbon neutral as Google already is. Microsoft Azure, on the other hand, has run on 100 percent renewable energy since 2014; it is a comparatively low-carbon electricity consumer, in part because it runs a smaller share of the world’s workloads than Amazon or Google.

Nonetheless, data centers from the big three cloud providers, wherever they are, all run on electricity. How the electricity is generated is the important factor in whether they are more or less favorable for the environment. For Google, reaching 100% renewable energy purchasing on a global and annual basis was just the beginning. In addition to continuing their aggressive move forward with renewable energy technologies like wind and solar, they wanted to achieve the much more challenging long-term goal of powering operations on a region-specific, 24-7 basis with clean, zero-carbon energy.

Why Renewable Energy Needs to Be the Norm for Cloud Computing

It’s no secret that cloud computing is resource-hungry, consuming roughly three percent of all electricity generated on the planet. That’s why it’s important for Google and other cloud providers to be part of the solution to global climate change. Renewable energy is an important element, as is matching operational energy use with renewable purchases and helping create pathways for others to buy clean energy. However, it’s not just about fighting climate change. Purchasing energy from renewable resources also makes good business sense, for two key reasons:

  • Renewables are cost-effective – The cost to produce renewable energy technologies like wind and solar has come down precipitously in recent years. By 2016, the levelized cost of wind had fallen 60% and the levelized cost of solar had fallen 80%. In fact, in some areas, renewable energy is now the cheapest form of energy available on the grid. Reducing the cost to run servers reduces the cost for public cloud customers – and we’re in favor of anything that does that.
  • Renewable energy inputs like wind and sunlight are essentially free – Because most renewables require no fuel input, Google can eliminate its exposure to fuel-price volatility, which is especially helpful when managing a global portfolio of operations across a wide variety of markets.

Google Sustainability in the Cloud Goes “Carbon Intelligent”

In keeping with its goal of having data centers consume more energy from renewable resources, Google recently announced that it will also time-shift workloads to take advantage of those resources – making data centers run harder when the sun shines and the wind blows.

“We designed and deployed this first-of-its-kind system for our hyperscale (meaning very large) data centers to shift the timing of many compute tasks to when low-carbon power sources, like wind and solar, are most plentiful,” Google announced.

Google’s latest advancement in sustainability is a newly developed carbon-intelligent computing platform that appears to work by using two forecasts – one indicating the future carbon intensity of the local electrical grid near its data center, and another of its own capacity requirements – and using that data to “align compute tasks with times of low-carbon electricity supply.” The result is that workloads run when Google believes it can do so while generating the lowest possible CO2 emissions.
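Based on that description, here is a minimal sketch of how such carbon-aware scheduling might work. Everything below – the hourly forecast inputs, the task shapes, and the greedy assignment – is an illustrative assumption, not Google’s actual system:

```python
# Hypothetical inputs: hourly forecasts for the next 24 hours.
#   carbon_forecast[h]   = grams of CO2 per kWh expected on the local grid
#   capacity_forecast[h] = fraction of data center capacity expected to be free
def plan_flexible_tasks(tasks, carbon_forecast, capacity_forecast):
    """Greedily assign delay-tolerant tasks to the lowest-carbon hours with spare capacity."""
    hours_by_cleanliness = sorted(range(24), key=lambda h: carbon_forecast[h])
    schedule = {h: [] for h in range(24)}
    remaining = dict(enumerate(capacity_forecast))
    # Place the biggest tasks first so they get the cleanest hours that can fit them.
    for task in sorted(tasks, key=lambda t: t["load"], reverse=True):
        for h in hours_by_cleanliness:
            if remaining[h] >= task["load"]:
                schedule[h].append(task["name"])
                remaining[h] -= task["load"]
                break
    return schedule

# Example: batch jobs that can run at any point in the next day.
tasks = [
    {"name": "video-transcode", "load": 0.30},
    {"name": "ml-training", "load": 0.50},
    {"name": "log-compaction", "load": 0.10},
]
carbon = [400 - 150 * (6 <= h <= 18) for h in range(24)]   # grid is cleaner in daylight (solar)
capacity = [0.6 if h < 8 else 0.4 for h in range(24)]      # more headroom overnight
print(plan_flexible_tasks(tasks, carbon, capacity))
```

The real system presumably weighs far more factors (deadlines, data locality, reliability), but the core idea is the same: flexible work drifts toward the hours when the grid is cleanest.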

The carbon-intelligent computing platform’s first version will focus on shifting tasks to different times of the day within the same data center. But Google already has plans to expand its capability: in addition to shifting tasks in time, the platform will also move flexible compute tasks between different data centers, so that more work is completed when and where doing so is more environmentally friendly. As the platform continues to generate data, Google will document its research and share it with other organizations in hopes that they can develop similar tools and follow suit.

Forecasting paired with artificial intelligence and machine learning is a powerful combination, and Google is using it in this platform to anticipate workloads and improve the overall health, performance, and efficiency of its data centers. Combine that with using cloud resources efficiently on your end – running VMs only when needed and not oversizing them – and you can improve utilization, reduce your carbon footprint, and save money.

How Containerization in the Cloud Reduces Vendor Lock-in

As you accelerate your organization’s containerization in the cloud, key stakeholders may worry about putting all your eggs in one cloud provider’s basket. This combination of fears – both a fear of converting your existing (or new) workloads into containers, plus a fear of being too dependent on a single cloud provider like Amazon AWS, Microsoft Azure, or Google Cloud – can lead to hasty decisions to use less-than-best-fit technologies. But what if using more of your chosen cloud provider’s features meant you were less reliant on that cloud provider?

The Core Benefit of Containers

Something that can get lost in the debate about whether containerization is good or worthwhile is portability. When Docker containers were first being discussed, one of the main use cases was the ability to run a container on any hardware in any datacenter without worrying about compatibility. This seemed a logical progression from virtual machines, which had provided the ability to run a machine image on different hardware, or even multiple machines on the same hardware. Most container advocates latch on to this from the perspective of container density and maximizing hardware resources, which makes much more sense in the on-prem datacenter world.

In the cloud, however, hardware resource utilization is someone else’s problem. You choose your VM or container size and pay just for that size, instead of having to buy a whole physical server and pay for the entirety of it up front. Workload density still matters, but it is much more flexible than in an on-prem datacenter. With a shift to containers as the base unit instead of virtual machines, your deployment options in the cloud are numerous. This is where container portability comes into play.

The Dreaded “Vendor Lock-in”

Picking a cloud provider is a daunting task: choosing one and later migrating away from it can cost enormous amounts of money and time. But do you need to worry about vendor lock-in? What if, in fact, you could pivot to another provider down the road with minimal disruption and no application refactoring?

Implementing containerization in the cloud means that if you ever choose to move your workloads to a different cloud provider, you’ll only need to focus on pointing your tooling to the new provider’s APIs, instead of having to test and tinker with the packaged application container. You also have the option of running the same workload on-prem, so you could choose to move out of the cloud as well. That’s not to say that there would be no effort involved, but the major challenge of “will my application work in this environment” is already solved for you. This can help your Operations team and your Finance team to worry less about the initial choice of cloud, since your containers should work anywhere. Your environment will be more agile, and you can focus on other factors (like cost) when considering your infrastructure options. 
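As a minimal sketch of how small that re-pointing can be, the script below moves a container image from one provider’s registry to another using nothing but standard docker commands. The registry URLs are hypothetical placeholders, and it assumes you have already authenticated docker against both registries:

```python
import subprocess

# Hypothetical registries; substitute your own account IDs, regions, and repos.
SOURCE_IMAGE = "123456789012.dkr.ecr.us-east-1.amazonaws.com/myapp:1.4.2"
TARGET_IMAGE = "us-central1-docker.pkg.dev/my-project/myrepo/myapp:1.4.2"

def migrate_image(source: str, target: str) -> None:
    """Pull an image from one cloud's registry and push it to another's.

    The container itself is unchanged; only the registry endpoint differs.
    """
    subprocess.run(["docker", "pull", source], check=True)
    subprocess.run(["docker", "tag", source, target], check=True)
    subprocess.run(["docker", "push", target], check=True)

if __name__ == "__main__":
    migrate_image(SOURCE_IMAGE, TARGET_IMAGE)
```

The application inside the image never changes; only the address it is pulled from does. Your orchestration tooling then just references the new image location.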

How to Communicate Software Development Costs to Your Finance Department

If you’re in engineering or development, communicating about cloud infrastructure and other software development costs with your finance department is tricky. For one thing, those costs are almost certainly rising.

Also, you are in different roles with different priorities, which naturally creates communication barriers. You may think your development costs are perfectly reasonable while your CFO thinks there’s a problem – or you may be focused on different parts of the bill than your colleagues in finance are.

Here are some ways to break down that communication barrier and make your software development costs sound a little less scary. 

Use the CFO’s Language 

Engineering and finance use different language to talk about the same things – which means there’s going to be an element of translation involved. Before meeting with someone who lives in a different day-to-day world than you do, consider how they may talk about cost areas in a way that’s meaningful to their role. For example: 

  • Dev-speak: “Non-production workloads” – dev, test, or staging
  • Finance-speak: R&D costs

  • Dev-speak: “Production workloads”
  • Finance-speak: Cost of goods sold (COGS)

Focus on Business Growth Impact

So your software development costs are probably going up. There will be some wasted spend that can be eliminated, but for the most part, this growth is unavoidable for a growing business. Highlight the end results that drove decisions to increase spending on software development, for example:

  • We increased our headcount and sprint velocity to speed time to market and beat our competition for offering A.
  • We are developing multiple applications in parallel.
  • Our user base is growing, which is increasing our infrastructure costs. 
  • Our open bug count is down by 50% YOY, increasing customer satisfaction and retention.

Know the Details, But Don’t Get Bogged Down in Them

Are your S3 costs surging? Did you just commit to a bunch of 3-year reserved instances, paid upfront (wait – did you really?)? Did your average salary per developer increase due to specialized skill requirements, or from moving outsourced QA in-house?

You should know the answers to all of these questions, but there’s no need to lead with them in a conversation. Use them as supporting information to answer questions, but not the headline.

Share Your Cost Control Plans – and Automate

Everybody likes an action plan. Identify the areas where you can reduce costs.

  • Consider roles where outsourcing may be prudent – such as apps outside your core offering
  • Automate QA testing – you’re not going to replace human software developers with bots (yet), but testing is one area where automation can reduce costs
  • Optimize your existing infrastructure to automatically turn off resources when they’re not needed and size them to match demand based on utilization metrics – see the sketch below
  • Reduce other wasted infrastructure spend by decommissioning legacy systems
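As one example of that kind of automation, here is a minimal sketch, assuming AWS with boto3 credentials configured and a hypothetical schedule:off-hours tagging convention, of stopping tagged instances outside business hours:

```python
from datetime import datetime

import boto3

BUSINESS_HOURS = range(8, 19)  # 8:00-18:59 local time; adjust to taste

def stop_off_hours_instances():
    """Stop running EC2 instances tagged for off-hours parking."""
    if datetime.now().hour in BUSINESS_HOURS:
        return  # nothing to do during the workday
    ec2 = boto3.client("ec2")
    paginator = ec2.get_paginator("describe_instances")
    pages = paginator.paginate(
        Filters=[
            {"Name": "tag:schedule", "Values": ["off-hours"]},  # hypothetical tag
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )
    instance_ids = [
        i["InstanceId"]
        for page in pages
        for reservation in page["Reservations"]
        for i in reservation["Instances"]
    ]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)

if __name__ == "__main__":
    stop_off_hours_instances()
```

Run it from cron or a scheduled Lambda and instances park themselves every evening without anyone having to remember.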

Like many things in business, effective communication and collaboration go a long way. While it’s important to optimize your spend so that every software development dollar goes further, those costs are going to continue to rise. And that’s okay.

7 Favorite AWS Training Resources

When it comes to AWS training resources, there’s no shortage of information out there. Considering the wide range of videos, tutorials, blogs, and more, it’s hard to know where to look or how to begin. Finding the best resource depends on your learning style, your needs for AWS, and getting the most up-to-date information available. Whether you’re just getting started in AWS or consider yourself an expert, there’s an abundance of resources for every learning level. With this in mind, we came up with our 7 favorite AWS training resources, sure to give you the tools you need to learn AWS:

1. AWS Self-Paced Labs

What better way to learn than at your own pace? AWS self-paced labs give you hands-on learning in a live AWS environment, with real AWS services and actual scenarios you would encounter in the cloud. There are two ways to learn with these labs: you can take an individual lab or follow a learning quest. Individual labs are intended to get users familiar with an AWS service in as little as 15 minutes. Learning quests guide you through a series of labs so you can master any AWS scenario at your own pace. Once completed, you will earn a badge that you can boast on your resume, LinkedIn, website, etc.

Whatever your experience level may be, there are plenty of different options offered. Among the recommended labs you’ll find an Introduction to Amazon Elastic Compute Cloud (EC2), and for more advanced users, a lab on Maintaining High Availability with Auto Scaling (for Linux).

2. AWS Free Tier

Sometimes the best way to learn something is by jumping right in. With the AWS Free Tier, you can try AWS services for free. This is a great way to test out AWS for your business, or for the developers out there, to try services like AWS CodePipeline, AWS Data Pipeline, and more. While you still get a hands-on opportunity to learn a number of AWS services, the only downside is that there are usage limits. You can track your usage with a billing alarm to avoid unwanted charges, or you can try ParkMyCloud and park your instances when they’re not in use so you get the most out of your free tier experience. In fact, ParkMyCloud started its journey by using AWS’s free tier!
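For instance, here is a minimal sketch of setting up that billing alarm with boto3. Billing metrics only publish to us-east-1 and must first be enabled in your billing preferences, and the SNS topic ARN below is a placeholder:

```python
import boto3

# Billing metrics live only in us-east-1, regardless of where your resources run.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="free-tier-spend-alert",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,              # evaluate every 6 hours
    EvaluationPeriods=1,
    Threshold=5.0,             # alert if the estimated bill exceeds $5
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder topic
)
```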

3. AWS Documentation and Whitepapers

AWS Documentation is like a virtual encyclopedia of tools, terms, training, and everything AWS. You’ll find case studies, tutorials, cloud computing basics, and so much more. This resource is a one-stop-shop for all of your AWS documentation needs, whether you’re a beginner or advanced user. No matter where you are in your AWS training journey, AWS documentation is always a useful reference and certainly deserves a spot in your bookmarks. 

Additionally, you’ll find whitepapers that give users access to technical AWS content written by AWS and by individuals from the AWS community, to help further your knowledge of their cloud. These whitepapers include technical guides, reference material, and architecture diagrams.

4. YouTube

So far, we’ve gone straight to the source for 3 out of 7 of our favorite AWS training resources. Amazon really does a great job of providing hands-on training, tutorials, and documentation for users with a range of experience. However, YouTube opens up a whole new world of video training that includes contributions from not only Amazon, but other great resources as well. Besides the obvious Amazon Web Services channel, there are also popular and highly rated videos by Edureka, Simplilearn, Eli the Computer Guy, and more.

5. Bloggers

As cloud technology usage continues to expand and evolve, blogs are a great way to stay up to speed with AWS and the world of cloud computing. Of course, in addition to self-paced labs, a free tier, extensive documentation, and their own YouTube channel, AWS also has their own blog. Since AWS actually has a number of blogs that vary by region and technology, we recommend that you start by following Jeff Barr – Chief Evangelist at Amazon Web Services and primary contributor. Edureka, mentioned among our recommended YouTube channels, also has a blog that covers plenty of AWS topics. The CloudThat blog is an excellent resource for AWS and all things cloud, and was co-founded by Bhavesh Goswami – a former member of the AWS product development team. Additionally, AWS Insider is a great source for all things AWS: here you’ll find blogs, webcasts, how-tos, tips, tricks, news articles, and even more hands-on guidance for working with AWS. If you prefer newsletters straight to your inbox, check out Last Week in AWS and Inside Cloud.

6. Online Learning Platforms

As public cloud computing continues to grow – and AWS continues to dominate the market – people have become increasingly interested in this CSP and what it has to offer. Over the last decade, two massive learning platforms have emerged: Coursera and Udemy. These platforms offer online AWS courses, specializations, training, and degrees. The abundance of courses they provide can help you learn all things AWS and gives you a wide array of resources for training toward different AWS certifications and degrees.

7. GitHub

GitHub is a developer platform where users work together to host and review code, build software, and manage projects. It also hosts a number of materials that can further your AWS training. In fact, here’s a great list of AWS training resources that can help you prepare for an Amazon cloud certification. The great thing about this site is the collaboration among its users: the community brings together people from all different backgrounds who share knowledge about their own specialties and experiences. With access to everything from ebooks and video courses to free lectures and sample tests, posts like these can help you get on the right certification track.


There’s plenty of information out there when it comes to AWS training resources. We picked our 7 favorite resources for their reliability, quality, and range of information. Whether you’re new to AWS or consider yourself an expert, these resources are sure to help you find what you’re looking for.

Will Robotic Process Automation Save Your Company Time and Money in the Cloud?

After hearing a lot of buzz about this concept in AI, we decided to see what’s next for robotic process automation. The promise of the technology is that it can automate processes that employees are doing manually, saving your employees’ time and potentially reducing operational costs. While robotic process automation (RPA) interest has been high for a while, actual adoption is now catching up and will only continue to grow in the future. Organizations are understanding the power of process automation, so in turn, more industries are expected to deploy more RPA bots to eliminate manual repetitive actions performed by employees. 

RPA software is en route to becoming a billion-dollar category in 2020: last year, Gartner projected that spending on RPA software would hit $1.3 billion. There are still growing pains to address, and RPA is not exactly 100 percent perfect, but it fits right in with the current trends in cloud computing toward optimization. And since we’re all about saving time and money, let’s recap this trend to see how it can help you do both.

What is Robotic Process Automation?

To recap, RPA – whether it ends up being called “intelligent automation” or “cognitive automation” in the future – is a way to automate business processes by creating software robots, paired with artificial intelligence (AI) and machine learning capabilities, that perform manual and mundane tasks. Users configure bots within an application to handle a variety of repetitive tasks by processing, generating, and communicating information automatically. For example, you might program RPA bots to do first-level customer support tasks by searching for answers; copy and paste data from one system to another for invoicing or expense management; or issue refunds. This video from IBM shows an example in action.
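To make that copy-data-between-systems pattern concrete, here is a minimal sketch in Python. The two REST endpoints and field names are hypothetical stand-ins for an expense tool and an invoicing tool that don’t talk to each other:

```python
import requests

# Hypothetical endpoints standing in for two disconnected systems.
EXPENSE_API = "https://expenses.example.com/api/v1/approved"
INVOICE_API = "https://invoices.example.com/api/v1/drafts"

def sync_expenses_to_invoices(api_token: str) -> int:
    """Mimic the human copy/paste step: read records from one system,
    re-enter them in another."""
    headers = {"Authorization": f"Bearer {api_token}"}
    expenses = requests.get(EXPENSE_API, headers=headers, timeout=30).json()
    created = 0
    for expense in expenses:
        draft = {
            "customer_id": expense["customer_id"],
            "amount": expense["amount"],
            "memo": f"Expense {expense['id']} auto-copied by bot",
        }
        resp = requests.post(INVOICE_API, json=draft, headers=headers, timeout=30)
        resp.raise_for_status()
        created += 1
    return created
```

Commercial RPA tools wrap this idea in visual designers and screen-level automation, but the underlying job is the same: ferrying data between systems so a person doesn’t have to.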

RPA software is not part of an organization’s IT infrastructure. Instead, it sits on top of it, enabling a company to implement the technology quickly and efficiently. Furthermore, RPA tools can be trained to make judgments about future outputs. Many users appreciate its non-intrusive nature and the ability to integrate within infrastructures without causing disruption to systems already in place.

How can you use Robotic Process Automation?

RPA technology can help organizations on their digital transformation journeys by:

  • Enabling better customer service.
  • Ensuring business operations and processes comply with regulations and standards.
  • Allowing processes to be completed much more rapidly.
  • Providing improved efficiency by digitizing and auditing process data.
  • Creating cost savings for manual and repetitive tasks.
  • Enabling employees to be more productive.

Companies like Walmart, AT&T, and Walgreens are adopting the use of RPA. Clay Johnson, the CIO of Walmart, says they use RPA bots to automate pretty much anything from answering employee questions to retrieving useful information from audit documents. The CIO of American Express Global Business Travel, David Thompson, says they implement the use of RPA to automate the process for canceling an airline ticket and issuing refunds. In addition, Thompson is looking to use RPA to facilitate automatic rebooking recommendations, and to automate certain expense management tasks in the company.

But more specific to cloud computing and IT, one great application for RPA is in automated software testing. If testing involves multiple applications and monotonous work, RPA can replace workers’ time spent testing. Automated tests can run repeatedly at any time of day. This approach fits in with continuous testing as well as continuous integration (CI) and continuous delivery (CD) software development practices. Additionally, RPA can be used to automate processes in monolithic legacy systems that are not worth developers’ time to update, to bring automation while work on newer microservices systems is in progress. 
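As an illustration, here is a minimal sketch of that kind of automated UI test using Selenium. The app URL, element IDs, and expected page title are hypothetical:

```python
from selenium import webdriver
from selenium.webdriver.common.by import By

APP_URL = "https://app.example.com/login"  # hypothetical application

def test_login_flow():
    """A repetitive manual check, replayed by a bot instead of a person."""
    driver = webdriver.Chrome()
    try:
        driver.get(APP_URL)
        driver.find_element(By.ID, "username").send_keys("test-user")
        driver.find_element(By.ID, "password").send_keys("test-pass")
        driver.find_element(By.ID, "submit").click()
        assert "Dashboard" in driver.title  # expected post-login page title
    finally:
        driver.quit()

if __name__ == "__main__":
    test_login_flow()
```

A CI job can replay this on every build, at any hour – exactly the monotonous, multi-application checking that automation is well suited to take off testers’ plates.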

Is Robotic Process Automation the Best Way to Automate Cost Control?

Not all automation is achievable with RPA, though. One study concluded that only three percent of organizations have managed to scale RPA to a high level. Additionally, Gartner placed RPA tools at the “Peak of Inflated Expectations” in its Hype Cycle for artificial intelligence – another vote for more buzz than potential. In reality, RPA is only as efficient as the person configuring the automation flow, and organizations with overly idealized expectations of the technology’s capabilities, or without a solid grasp of their own processes, may find it difficult to pick the right tool and automate the right jobs.

However, RPA is expected to deliver tangible results to organizations that make automation a key component of their digital transformation, as the collaboration between digital workers and human talent becomes more efficiently aligned.

So can it save you time and money? If employees at your company are spending a large percentage of their time on repetitive tasks that require little to no decision making, then yes, it probably can. It’s also important because it will free up developer time that is spent on automatable tasks, like scripting, so they can focus on creating value for your business. 

For complex and long-term automation, though, purpose-built software is a better solution. If there is already a solution to your automation needs on the market, it will probably serve you better than RPA because there won’t be an upfront period needed to program bots, you won’t need to make frequent changes to your processes like many RPA bots will require, and it’s a better solution for the long run. 

How Cloud has affected the Centralization vs Decentralization of IT

Every week, we find ourselves having conversations about cost optimization with a wide variety of enterprises. In larger companies, we often talk to folks in the business unit most people traditionally refer to as Information Technology (IT). These meetings usually include discussions about the centralization vs decentralization of IT – though oftentimes the participants don’t realize it – since we’re discussing cloud and how it’s built, run, and managed in the organization.

Enterprises have traditionally organized their IT team as a single department under the leadership of the CIO. The IT team works across organizational departments and supports the enterprise in meeting the various tooling and project needs of other business units and the executive team. Although there are significant efficiencies in this approach, there are also risks that can affect the entire organization, in particular one that stems from the ‘need for speed’ (agility). Each line of business (LOB) depends on IT to deliver services, hardware, software, and other ‘tools’, but this is not always done quickly and efficiently, mostly due to internal processes.

Benefits of Centralized IT Structures

The benefits of this type of organizational structure were often associated with increased purchasing power, improved information flow between IT team members, skilled hiring efficiencies, and a watchful view of the enterprise’s technical infrastructure from both an operational network and security perspective. Let’s dig into these in a bit more detail.  

  • Lowered expenses and increased purchasing power – the centralized environment will always provide a business with more buying power at a lower cost by combining all of the needs of the business into a centralized buying pool.
  • Improved productivity for IT staff – IT teams are like any other team, they thrive with collaboration and mutual understanding and respect for each other’s skillsets. It also makes installations and technical resolution(s) easier as you’re addressing a centralized resource.
  • Enterprise-wide information dissemination – the centralized organization builds its network from the center out, so LOBs typically share the same networked resources, such as an ERP or CRM. This avoids the danger of siloed information: data that could be critical to another LOB but, without access, is invisible to it.

Despite the benefits stated above, a centralized team has several limitations and challenges. The challenge with the greatest enterprise-wide exposure is how best to prioritize project requests from each of the LOBs – enter decentralization and the cloud: IaaS, PaaS, and SaaS.

Decentralization is a type of organizational structure in which daily operations and decision-making responsibilities are delegated by top management to middle and lower-level managers and their respective business units. This frees up top management to focus more on major decisions. For a small business, growth may create the need to decentralize to continue efficient operations. Decentralization offers several advantages and is a practical approach when different departments or business units in a company have different IT needs and strategies.

Benefits of Decentralized IT Structures

  • The ability to tailor IT selection and configuration. When individual departments have IT decision-making power, they can choose and configure IT resources based on their own specific needs. For example, each department has its own servers optimized to run its required applications.
  • More fail-safes and organizational redundancy. Decentralizing makes servers and applications more resilient—and it can do the same for IT networks, too. If each department maintains its own server, one can function as a backup server in case another server fails. (Of course, this type of redundancy would need to be properly configured in advance.)
  • Respond faster to new IT trends. Since departments in decentralized organizations can make independent decisions, it’s easier for them to take advantage of new technology in the cloud.

One drawback of decentralized IT structures is that this model often leads to information silos – collections of data and information that cannot be easily shared across departments. Centralized IT structures help prevent these silos, leading to better knowledge-sharing and cooperation between departments. For example, using one centrally managed CRM system makes it possible for any employee in a company to access customer information from anywhere — think SalesForce.

The Reality is Hybrid IT

As we see above and in real life, there are many reasons an organization might be tempted to move toward or away from a centralized IT organizational structure, but in practice many companies follow a hybrid model: some IT systems, like your CRM and ChatOps tools, are centralized, while others, like your cloud provider and orchestration tool, may be decentralized (by business unit). The top reasons for this hybrid model are technical agility and the availability of tools through SaaS, IaaS, and PaaS providers – IT no longer needs to build every solution and tool for you. Decentralized IT organizational structures are typically best for companies that rely on technical agility to remain competitive, including newer, smaller companies (e.g., startups) and organizations that need to respond quickly to new IT developments (e.g., software and hardware companies or app developers). And for larger companies that want to bring that mentality and model to their business, a great example is Capital One – a bank that wants to be a technology company.

What are your thoughts on the centralization vs decentralization of IT?