Cloud Computing Green Initiatives on the Rise

Over the past couple of months, we have seen a lot of articles about the Big Three cloud providers and their efforts to be environmentally friendly and make cloud computing green. What are Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP) doing to make their IaaS services as green as possible? Does moving to the cloud help enterprises with their green initiatives and use of renewable energy?

It seems the cloud providers are focused on using renewable energy like solar and wind to power their massive data centers and are very actively touting that fact.

For example, Microsoft recently announced a new renewable energy initiative, the Sunseap project. This project, Microsoft’s first Asian clean energy deal, will install solar panels on hundreds of rooftops in Singapore, which Microsoft says will generate 60 MW to power its Singapore datacenter — the facility behind Microsoft Azure, Office 365 and numerous other cloud services in the region. The deal is Microsoft’s third international clean energy announcement, following two wind deals announced in Ireland and The Netherlands in 2017. That’s pretty cool in my book, so kudos to them.

Google made a similar announcement recently, albeit a more general one: Google is now buying enough renewable energy to match the power used in its data centers and offices. Google said that last year its total purchase of energy from sources including wind and solar exceeded the amount of electricity used by its operations around the world. According to a recent Google blog post by Urs Hölzle, Google’s senior vice president of technical infrastructure, that makes Google the first public cloud provider, and the first company of its size, to achieve the feat. We can’t verify this, but let’s take them at face value given the data in the chart below:

One observation from this chart: where are IBM and Oracle? Once again, the Big Three seem to be several steps ahead.

Speaking of the Big Three: we’ve looked at Microsoft and Google, so what about AWS? By its own reporting, AWS is behind both Google and Microsoft on the path to relying 100% on renewable energy. AWS states a long-term commitment to achieve 100% renewable energy usage for its global infrastructure footprint, and had set a goal to be powered by 50% renewable energy by the end of 2017 (we could not find a 2018 update).

Moving to the cloud has many benefits – time to market, agility, innovation, lower upfront cost, and now a commitment to renewable energy. There’s one other way for cloud computing to be more sustainable – all of us using fewer resources. In our own small way, ParkMyCloud helps: we turn cloud resources off when they’re not being used, kind of like following your kids around the house and shutting off the lights – your at-home green initiative (you know you can automate that with Nest, right?). Saving money in the process? That’s a win-win.

DevFinOps: Why Finance Needs to be Integrated with Development and Operations

The formation of DevOps brought together two distinct worlds, causing a shift in IT culture that can only be made better (and more cost-effective) by the integration of financial strategy – enter DevFinOps. We say this partially in jest… yeah, we know, you’ve had enough of the Dev-blank-blank mashups. But really, this is something we’ve been preaching about since the start of ParkMyCloud. As long as the public cloud remains a utility, everyone should be responsible for controlling the cost of their cloud use, meaning “continuous cost control” should be integrated into the processes of continuous integration and delivery.

What is DevFinOps?

Hear us out — you at least need to start thinking of financial management as an element in the DevOps process. Time and time again, we see DevOps teams overspend and face major organizational challenges when inevitably the Finance team (or the CTO) starts enforcing a stricter budget. Cost control becomes a project, derailing forward development motion by rerouting valuable resources toward implementing spend management processes.  

It doesn’t need to be this way.

Because financial resources are finite, they should be an integrated element from the very beginning when possible – and otherwise, as soon as possible. Our product manager, Andy Richman, recently discussed this concept further in a podcast for The CloudCast.

There are a number of ways that finance can be integrated into DevOps, but one near and dear to our hearts is with automated cloud cost control. A mental disconnect between cloud resources and their costs causes strain on budgets and top-down pressure to get spending under control.

Changing the Mindset: Cloud is a Utility

The reason for this disconnect is that as development and operations have moved to the cloud, the way we assess costs has changed as profoundly as the infrastructure itself. A move to the cloud is a move to pay-as-you-go compute resources.

This is due to the change in pricing structure and mindset that happened with the shift from traditional infrastructure to public cloud. As one of our customers put it:

“It’s been a challenge educating our team on the cloud model. They’re learning that there’s a direct monetary impact for every hour that an idle instance is running. The world of physical servers was all CapEx driven, requiring big up-front costs, and ending in systems running full time. Now the model is OpEx, and getting our people to see the benefits of the new cost-per-hour model has been challenging but rewarding.”

In a world where IT costs already tend to exceed budgets, there’s an added struggle in calculating long-term cost estimates for applications that are developed, built and run on a utility. But wasn’t the public cloud supposed to be more cost effective? Yes, but only if every team and individual is aware of their usage, accountable for it, and empowered with tools that give them insight and control over what they use. The public cloud needs to be thought of like any other utility.

Take your monthly electric bill, for example. If everyone in the office left the lights on 24 hours a day, 7 days a week, those costs would add up quickly. Meanwhile, you’d be wasting money on all those nights and weekends when your beautifully lit office is completely empty. But that doesn’t happen, because in most cases people understand that lights cost money, so offices automate lighting with motion (usage-based) sensors or time-based schedules. Apply that same thinking to the cloud and it’s easy to see why cost-effectiveness goes down the drain when individuals and teams aren’t aware of, or accountable for, the resources they’re using.

Financial decisions regarding IT infrastructure fall into the category of IT asset management (ITAM), an area that merges the financial, contractual and inventory components of an IT project to support lifecycle management and strategic decision-making. That brings us back to DevFinOps: an expansion of ITAM that builds the financial cost and value of IT assets directly into IT infrastructure management, updating calculations in real time and simplifying the budgeting process.

Why This is Important Now That You’re on the Cloud

DevFinOps proposes a more effective way to estimate costs: break them down into smaller estimates over time as parts of the work are completed, integrating financial planning directly into IT and cloud development operations. To do this, the DevOps team needs visibility into how and when resources are being used, and an understanding of the opportunities for saving.

Like we’ve been saying: the public cloud is a utility – you pay for what you use. With that in mind, the easiest way to waste money is to leave your instances or VMs running 24 hours a day, 7 days a week, and the easiest way to save money is just as simple: turn them off when they’re idle. In a future post, we’ll discuss how you can implement this process for your organization using automated cost control – stay tuned.
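In the meantime, here’s a minimal sketch of what “turn them off when they’re idle” can look like in practice, using Google Cloud’s Python client library. The project ID, zone, label, and working hours below are hypothetical placeholders, and a real deployment would run something like this from a scheduler rather than by hand:

```python
# A minimal sketch of "continuous cost control": stop labeled non-production
# VMs outside working hours. Project, zone, and label values are hypothetical.
from datetime import datetime

from google.cloud import compute_v1  # pip install google-cloud-compute

PROJECT = "my-project"     # hypothetical project ID
ZONE = "us-east1-b"        # hypothetical zone
WORK_HOURS = range(8, 18)  # 8am-6pm local time; adjust to your team

def stop_idle_nonprod_instances() -> None:
    """Stop running VMs labeled env=dev whenever it's outside work hours."""
    if datetime.now().hour in WORK_HOURS:
        return  # inside working hours: leave everything running

    client = compute_v1.InstancesClient()
    for instance in client.list(project=PROJECT, zone=ZONE):
        if instance.status == "RUNNING" and instance.labels.get("env") == "dev":
            print(f"Stopping {instance.name} (non-production, off-hours)")
            client.stop(project=PROJECT, zone=ZONE, instance=instance.name)

if __name__ == "__main__":
    stop_idle_nonprod_instances()
```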

Dear Daniel Ek: We Made You a Playlist About Your Google Cloud Spend.

Dear Daniel Ek,

Congrats on Spotify’s IPO! It’s certainly an exciting time for you and the whole company. We’re a startup ourselves, and it’s inspiring to see you shaking up the norms and succeeding on your first day on the stock exchange.

Of course, with big growth comes big operational changes. Makes sense. As cloud enthusiasts ourselves, we were particularly interested to see that you committed to 365 million euros/$447 million in Google Cloud spend over the next three years.

Congrats on choosing an innovative cloud provider that will surely serve your infrastructure needs well.

But we’d like to issue a word of warning. No, not about competing with Google – about something that hits the bottom line more directly, which I’m sure will concern you.

Maybe a playlist on our favorite music streaming service is the best way to say this:

What do we mean when we say not to waste money on Google Cloud resources you don’t need?

In fact, we estimate that up to $90 million of that spend could go to compute hours that no one is actually using – meaning it’s completely wasted.

How did we get there? On average, ⅔ of cloud spend goes to compute. Of that, 44% is on non-production resources such as those used for development, testing, staging, and QA. Typically, those resources are only needed for about 35% of the hours in a week (a 40-hour work week plus a margin of error), meaning they sit idle for the other 65% of hours. More here.
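For the skeptical, here’s that arithmetic spelled out (the dollar figure is Spotify’s committed spend; the percentages are the averages cited above):

```python
# Back-of-the-envelope estimate of wasted spend, from the figures above.
total_spend = 447_000_000   # committed Google Cloud spend over three years, USD
compute_share = 2 / 3       # average share of cloud spend that goes to compute
nonprod_share = 0.44        # share of compute spent on non-production resources
idle_share = 0.65           # share of weekly hours those resources sit idle

waste = total_spend * compute_share * nonprod_share * idle_share
print(f"Estimated waste: ${waste:,.0f}")  # ~$85 million, i.e. "up to $90 million"
```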

That’s not to mention potential waste on oversized resources, orphaned volumes, PaaS services, and more.

Companies like McDonald’s, Unilever, and Sysco have chosen ParkMyCloud to reduce that waste by automatically detecting usage and then turning those resources off when they’re not needed – all while providing simple, governed access to their end users.

Daniel, we know you won’t want your team to waste money on your Google Cloud spend.

We’re here when you’re ready.

Cheers,

Jay Chapel

CEO, ParkMyCloud

Announcing SmartParking for Google Cloud Platform: Automated, Custom On/Off Schedules Based on GCP Metric Data

Today we’re excited to announce the latest cloud provider compatible with ParkMyCloud’s SmartParking™ – Google Cloud Platform! In addition to AWS and Azure, Google users will now benefit from SmartParking’s automatic, custom on/off schedules for cloud resources based on actual usage metrics.

The method is simple: ParkMyCloud imports GCP metric data and looks for usage patterns in your GCP virtual machine instances. From your utilization data, ParkMyCloud creates a recommended schedule for each instance that turns it off when it is typically idle, eliminating potential cloud waste and saving you money on your Google Cloud bill every month. You no longer have to create your own schedules or manually shut your VMs off – unless you want to. SmartParking automates the scheduling for you, minimizing idle time and cutting costs in the process.
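To make the idea concrete, here is a simplified sketch of how an on/off recommendation could be derived from utilization history. This is an illustration, not ParkMyCloud’s actual algorithm; it assumes you already have hourly average CPU readings per instance, and it treats an hour-of-week as parkable only if CPU has stayed below an (arbitrary) idle threshold every week:

```python
# Illustrative only: derive a parking schedule from utilization history by
# marking an hour-of-week as "parkable" if average CPU in that hour has
# never exceeded an idle threshold. Not ParkMyCloud's actual algorithm.
from collections import defaultdict

IDLE_CPU_THRESHOLD = 5.0  # percent; hypothetical cutoff for "idle"

def recommend_schedule(samples):
    """samples: iterable of (hour_of_week, avg_cpu_percent) pairs collected
    over several weeks. Returns the set of hours (0-167) safe to park."""
    peak_by_hour = defaultdict(float)  # hours never sampled default to 0.0
    for hour, cpu in samples:
        peak_by_hour[hour] = max(peak_by_hour[hour], cpu)
    return {h for h in range(168) if peak_by_hour[h] < IDLE_CPU_THRESHOLD}

# An instance busy only on weekday mornings would get weekday afternoons,
# nights, and weekends back as recommended parked hours.
```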

Customized Scheduling – Not “One-Size-Fits-All”

SmartParking’s benefits are not “one-size-fits-all.” The recommended schedules can be customized like an investment portfolio – choose from “conservative”, “balanced”, or “aggressive” based on your preferences.

And as with an investment, bigger risk comes with bigger reward. When you receive recommendations based on your GCP metric data, you’ll have the power to decide which of the custom schedules is best for you. If you’re going for maximum savings, aggressive SmartParking is your best bet, since your instances will be parked most of the time – with a small “risk” of occasionally finding an instance parked when you need it. In the event that does happen – no fear! You can use ParkMyCloud’s “snooze button” to override the schedule and get the instance turned back on — and you can give your team governed access to do the same.

If you’d rather avoid ever having your instances shut off when needed, you can opt for a conservative schedule. Conservative SmartParking only recommends parking during hours in which an instance has never been used, ensuring it will never be off at a time you’ve ever needed it.

If you’re worried about the risk of aggressive parking for maximum savings, but want more opportunities to save than conservative schedules will give you, then a “balanced” SmartParking schedule is a happy medium.
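Continuing the sketch above, you can think of the three styles as progressively looser idle tests over an hour’s historical CPU readings. The thresholds below are hypothetical, purely to illustrate the trade-off:

```python
# Illustrative only: three recommendation styles as different idle tests
# over one hour-of-week's historical CPU readings (hypothetical thresholds).
import statistics

def is_parkable(readings, mode="balanced"):
    """readings: avg CPU % observed for one hour-of-week across past weeks."""
    if mode == "conservative":
        return max(readings) < 1.0                # never any real activity
    if mode == "balanced":
        return statistics.median(readings) < 5.0  # quiet in most weeks
    if mode == "aggressive":
        return statistics.mean(readings) < 10.0   # quiet on average
    raise ValueError(f"unknown mode: {mode}")
```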

What People are Saying: Save More, Easier than Ever

Since ParkMyCloud debuted SmartParking for AWS in January and added Azure in March, customers have given positive feedback on the new functionality:

“ParkMyCloud has helped my team save so much on our AWS bill already, and SmartParking will make it even easier,” said Tosin Ojediran, DevOps Engineer at a FinTech company. “The automatic schedules will save us time and make sure our instances are never running when they don’t need to be.”

ParkMyCloud customer Sysco Foods has more than 500 users across 50 teams using ParkMyCloud to manage their AWS environments. “When I’m asked by a team how they should use the tool, they’re exceedingly happy that they can go in and see when systems are idle,” Kurt Brochu, Sysco Foods’ Senior Manager of the Cloud Enablement Team, said of SmartParking. “To me, the magic is that the platform empowers the end user to make decisions for the betterment of the business.”

Already a ParkMyCloud user? Log in to your account to try out the new SmartParking. Note that you will need to update the permissions that ParkMyCloud has to access your GCP metric data — see the user guide for instructions on that.

Not yet a ParkMyCloud user? Start a free trial here.

Why the NCAA Google Cloud Ads Matter

NCAA, Google Cloud? What does the cloud have to do with March Madness? Actually, the public cloud is increasingly being used and promoted in sports. When you watch the NCAA tournament, Google Cloud ads will show prominently. Plus, the NCAA has chosen to run its infrastructure on Google Cloud Platform (GCP).

(By the way, have you done your bracket yet? I just did mine – I went chalk and picked Villanova. Couldn’t see my WVU Mountaineers winning it all.)

So we will see and hear a lot of Google Cloud in the coming weeks. Google recently announced a multiyear sponsorship deal with the NCAA and will run these ads throughout the upcoming NCAA basketball tournament. Google is hoping to expand its cloud business by taking complex topics such as cloud computing, machine learning and artificial intelligence and making them relatable to a wider audience.

So why does it matter that the NCAA and Google Cloud will appear so prominently together this March Madness?

First of all, Google Cloud is always matching wits with the other major cloud providers — and in this case, they’ve had their hooks in various mainstream sporting leagues and events for several years. For example, did you notice the partnership between AWS and the National Football League (NFL)? Both AWS and NFL promote machine-learning capabilities — software that helps recognize patterns and make predictions — to quickly analyze data captured during games. The data could provide new kinds of statistics for fans and insights that could help coaches.

Second, there’s the infrastructure that supports these huge events. I can tell you as a sports fan that my mates and I will all be live streaming football, basketball, golf and soccer (yes, the English Premier League) on our phones and tablets wherever we are. We do this while watching the kids play sports, working in the office, and even while playing golf – hook the phone up to the cart (a buggy for my UK mates). Many of these content providers are using AWS, Microsoft Azure, GCP, and IBM Cloud to get this content to us in real time, and to analyze it for insights that make for a better user experience.

Or take a look at the Masters golf tournament. IBM and AT&T are usually big sponsors, although the Masters is typically very hush-hush about a lot of this. Last year there was a lot of talk about IBM Watson, the Masters, and the surreal experience they were able to deliver. This is a really good read on what went on behind the scenes and how Watson and IBM’s cloud delivered that experience. IBM used machine learning, visual recognition, speech-to-text, and cognitive computing to build a phenomenal user experience for Masters viewers and visitors.

The NCAA and Google Cloud are not just ad partners – the NCAA is also a GCP customer. The NCAA is migrating 80+ years of historical and play-by-play data from 90 championships and 24 sports to GCP. To start, the NCAA will tap into decades of historical basketball data using BigQuery, Cloud Spanner, Datalab, Cloud Machine Learning and Cloud Dataflow to power the analysis of team and player performance. So Google Cloud not only gets advertising prominence during one of the most-watched events of the year, it gets a high-profile customer and one of the coolest use cases out there.
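To give a flavor of what that analysis looks like, much of this basketball data is already queryable through BigQuery’s public datasets program. Here’s a hedged example using the Python client; the dataset and table names below reflect our best understanding of the public NCAA dataset and should be verified against the current schema:

```python
# A taste of NCAA-on-GCP analysis: count historical tournament games per
# season via BigQuery's public NCAA dataset. Verify the dataset/table names
# against the current schema before relying on this.
from google.cloud import bigquery  # pip install google-cloud-bigquery

QUERY = """
    SELECT season, COUNT(*) AS games
    FROM `bigquery-public-data.ncaa_basketball.mbb_historical_tournament_games`
    GROUP BY season
    ORDER BY season DESC
    LIMIT 5
"""

client = bigquery.Client()  # assumes default GCP credentials are configured
for row in client.query(QUERY).result():
    print(f"{row.season}: {row.games} tournament games")
```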

Enjoy the tournament – let’s go Cats!