Why Use One Cloud, When You Can Use Any Cloud?

No, seriously, why would we just use one cloud?

Let’s stop for a moment and think about what has happened over the course of the last few years in public cloud computing and the hypervisor wars on-premises.  VMware has largely dominated the data center, but we are seeing a strong push from Microsoft on the hypervisor front.  KVM and Xen continue to grow in popularity for certain sectors, and all across the spectrum we see lots of folks running more than one hypervisor.

The cloud is no different.  The reason that we are all seeking the “AWS killer” just like the elusive “iPhone killer” is that there is some bizarre need to locate a winner of the platform war. 

This isn’t a zero-sum game.  The real shift in our industry is the broad acceptance of multiple platforms inside every IT portfolio.  We jumped right past the cloud to the multi-cloud.

Why Run More Than One Cloud?

Technology is not the problem; it’s the solution.  Business challenges are being answered by technology, and that is what really matters.  So, why would we run more than one cloud?  The reason is usually a technological one: certain features, APIs, and architectures may be better supported on one platform than another.  There are raw economics involved as well.  And there are availability concerns that drive businesses to disperse their IT across multiple data centers, so why not do the same in the cloud?

The reason that AWS and OpenStack are often pitted against each other is that OpenStack can expose AWS-compatible APIs, something Randy Bias and many others in the community fought for over the last few years.  This matters because AWS has seen huge adoption, and being able to move the same workloads to OpenStack using the same API calls and interactions would be a massive win for OpenStack as a platform.

If we stick strictly to public cloud providers, we can start with what we would call the big three: AWS, Microsoft Azure, and Google Cloud Platform.  Among those three, we see a lot of parrying, with feature and pricing updates happening regularly (features more so than pricing lately).  The result is an ever-growing set of services that can be easily consumed.  As common orchestration and operational platforms like Mesos and Kubernetes gain in popularity, the commoditization of cloud looks ever more plausible.  (Author’s opinion note: the supposed “race to zero” for cloud costs is over.  The providers have all concluded that pricing isn’t where they win customers anymore.)

Reducing the Complexity of Multi-Cloud

Complexity is the one thing that will slow multi-cloud adoption a bit longer.  Each public cloud platform has different ways to consume resources and to programmatically create and destroy them, especially once you go outside the big three.  That means consumers of the public cloud will have to start with one target and generally build deep comfort there before moving on to embrace a multi-cloud strategy.

Once we remove or reduce complexity from the list of barriers, the door opens for embracing the economic value of a multi-cloud strategy.  This is where we can use spot pricing and on-demand growth to tackle scaling needs, while making workloads truly portable so that price becomes the real win.  Networking stacks differ across clouds for a reason: if every car manufacturer used the exact same parts, they would lower the chances of you coming back to them for up-sell opportunities.  The same goes for the cloud.  Networking and security (they should always be paired) will most likely be the greatest challenge technologists face in architecting a single multi-cloud solution.

Next-generation applications are being built as cloud-native where possible.  This opens the door to something that has been talked about for years: supposed freedom from vendor lock-in.  I’m always rather skeptical when a representative from one cloud company says “come to us and avoid vendor lock-in,” because every vendor, even a public cloud one, has lock-in.

What we do gain by embracing the cloud-native approach to application development and deployment is that we reduce the risk of lock-in.

The more we learn from forward-leaning development teams, the more agility we can give ourselves in a multi-cloud architecture.  While the public cloud pundits representing one faction or another argue over who will be the last to go all-in on public cloud running cloud-native applications, they forget one thing: they opened the door for their competition too.

How to Manage Hybrid & Multi-Cloud Environments with Google Cloud Composer

As we continue to evaluate ways to automate various aspects of software development, today we’ll take a look at Google Cloud Composer. This is a fully managed workflow orchestration service built on Apache Airflow that makes workflow management and creation simple and consistent.

Hybrid and multi-cloud environments continue to grow as enterprises look to take advantage of the cloud’s scalability, flexibility, and global reach. Of the three major providers, Google Cloud has been the most open to supporting this multi-cloud reality. For example, earlier this year, Google launched Anthos, a managed service for hybrid and multi-cloud environments that gives enterprises operational consistency by running on existing hardware, leveraging open APIs, and giving developers the freedom to modernize. But managing these environments can be either an invaluable proposition for your company or a serious challenge to your infrastructure – which brings us to Google’s solution, Cloud Composer.

How does Google Cloud Composer work?

With Cloud Composer, you can monitor, schedule and manage workflows across your hybrid and multi-cloud environment. Here is how:

  • As part of Google Cloud Platform (GCP), Cloud Composer integrates with tools like BigQuery, Dataflow, Dataproc, Datastore, Cloud Storage, Pub/Sub and Cloud ML Engine, giving users the ability to orchestrate end-to-end GCP workloads.
  • You can code directed acyclic graphs (DAGs) using Python to improve workflow readability and pinpoint areas in need of attention (see the sketch just after this list).
  • It has one-click deployment built-in to give you instant and easy access to a range of connectors and graphical representations that show your workflow in action.
  • Cloud Composer allows you to pull workflows together from wherever they live, supporting a fully-functioning and connected cloud environment.
  • Since Cloud Composer is built on Apache Airflow – an open-source technology – it provides freedom from vendor lock-in as well as integration with a wide variety of platforms.  
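
To make the DAG idea concrete, here is a minimal sketch of the kind of Python file you would hand to Cloud Composer. It assumes an Airflow 2.x-style environment; the DAG id and the two placeholder tasks are purely illustrative.

```python
# Minimal, illustrative Airflow DAG of the kind Cloud Composer runs.
# The DAG id and tasks are hypothetical placeholders.
from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator

with DAG(
    dag_id="example_multicloud_pipeline",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@daily",  # run once per day
    catchup=False,               # don't backfill missed past runs
) as dag:
    # Two placeholder steps; a real workflow would use GCP (or other cloud) operators.
    extract = BashOperator(task_id="extract", bash_command="echo 'extracting data'")
    load = BashOperator(task_id="load", bash_command="echo 'loading data'")

    extract >> load  # "load" runs only after "extract" succeeds
```

Dropping a file like this into the environment’s DAG folder is all it takes for Composer to pick it up and start scheduling it.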

Simplifying hybrid and multi-cloud environment management

Cloud Composer is well suited to hybrid and multi-cloud management because it’s built on Apache Airflow and operated with the Python programming language. The open-source foundation, the “no lock-in” approach, and portability give users the flexibility to create and deploy workflows seamlessly across clouds for a unified data environment.

Setting up your environment is quick and simple. Pipelines created with Cloud Composer will be configured as DAGs with easy integration for any required Python libraries, giving users of almost any level the ability to create and schedule their own workflows. With the built-in one-click deployment, you get instant and easy access to a range of connectors and graphical representations that show your workflow in action.
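
As a sketch of what “easy integration for any required Python libraries” looks like in practice, a task can simply call a Python function that imports whatever it needs. Everything below (the DAG id, schedule, and summarize function) is illustrative, again assuming an Airflow 2.x-style environment.

```python
# Illustrative DAG whose task calls ordinary Python code and libraries.
from datetime import datetime
import json  # any library installed in the Composer environment can be imported

from airflow import DAG
from airflow.operators.python import PythonOperator


def summarize(**context):
    # Placeholder logic: a real task might transform results from an earlier step.
    payload = json.dumps({"run_date": context["ds"]})
    print(f"Summary payload: {payload}")


with DAG(
    dag_id="example_python_library_task",
    start_date=datetime(2021, 1, 1),
    schedule_interval="@weekly",
    catchup=False,
) as dag:
    PythonOperator(task_id="summarize", python_callable=summarize)
```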

However, cost can be a drawback to making the most of your cloud environment when using Cloud Composer. Specific costs can be hard to pin down, as Google measures the resources your deployments use and adds the total cost of your Apache Airflow deployments onto your wider GCP bill.

Cloud Composer Pricing 

Pricing for Cloud Composer is based on the size of a Cloud Composer environment and the duration the environment runs, so you pay for what you use, as measured by vCPU/hour, GB/month, and GB transferred/month. Google offers multiple pricing units for Cloud Composer because it uses several GCP products as building blocks. You can also use the Google Cloud Platform pricing calculator to estimate the cost of using Cloud Composer. 
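
As a rough illustration of how those billing dimensions combine, the sketch below multiplies usage by per-unit rates. The rates are made-up placeholders rather than Google’s actual prices; use the pricing calculator for real numbers.

```python
# Back-of-the-envelope Cloud Composer cost model.
# All rates below are HYPOTHETICAL placeholders, not real GCP prices.
VCPU_HOUR_RATE = 0.075        # $ per vCPU-hour (placeholder)
STORAGE_GB_MONTH_RATE = 0.35  # $ per GB-month of environment storage (placeholder)
EGRESS_GB_RATE = 0.12         # $ per GB transferred out (placeholder)


def monthly_estimate(vcpus: int, hours: float, storage_gb: float, egress_gb: float) -> float:
    """Combine the three billed dimensions into one monthly estimate."""
    return (
        vcpus * hours * VCPU_HOUR_RATE
        + storage_gb * STORAGE_GB_MONTH_RATE
        + egress_gb * EGRESS_GB_RATE
    )


# Example: a small always-on environment (3 vCPUs for roughly 730 hours in a month).
print(f"${monthly_estimate(vcpus=3, hours=730, storage_gb=20, egress_gb=50):.2f}")
```

With these placeholder numbers, the vCPU-hour term dominates because the environment runs around the clock, which is why an always-on Composer environment is worth sizing carefully.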

So, should you use Google Cloud Composer? Cloud Composer environments are meant to be long-running compute resources that stay online so you can schedule repeating workflows whenever necessary. Unfortunately, you can’t turn a Cloud Composer environment on and off; you can only create or destroy it. That means it may not be right for every situation and could cost more than the advantages are worth.

How to Create a Business Case to Buy vs. Build Software

When approaching new problems, such as cost optimization or task automation, development and IT teams are faced with the decision to buy vs. build a solution. There are a number of financial and strategic factors to consider when determining the best choice in each case, which can be difficult to parse through. Here are our tips for building a buy vs. build business case, whether for your own use or to present to management.

Reasons to Build Your Own Solution 

1. An off-the-shelf product doesn’t exist to solve your problem. If you can’t buy a product, or hack together several different existing solutions, you are probably going to have to build your own software. There is not too much “blue ocean” left out there, but if you have a need and no product can solve it, then building can make sense. Be wary and make sure you’ve completed your research before determining this is the case: perhaps the solution is called something other than what you’re searching for, or exists as part of a larger suite of offerings. 

2. It will provide you with a significant competitive advantage over your rivals. This typically requires unique IP (some special sauce) that you can build into the product, which other existing products cannot offer and which will help your company succeed.

3. You can see a business opportunity whereby not only can you use the product yourself in-house, but you will also be able to offer it to your customers, thus leveraging your company’s investment.

4. You have a team of engineers sitting on the bench with nothing better to do (i.e. minimal opportunity cost). This does actually happen from time to time, and such a project can make them productive.

5. The specialist knowledge already exists within the company and a natural product owner exists. This is not reason enough to decide to build, but without it, things are likely more difficult.

Reasons to Buy Pre-Built Solutions 

1. Building software is complex and expensive. If this is a software product that you are going to roll out across the enterprise, it will require support and, most likely, a commitment to feature updates and improvements for the life of the product. 

2. Supporting products that your team might build is a significant commitment and typically is where the ‘big bucks are spent’. An MVP style product is unlikely to keep the masses happy for long, and you will need to budget for ongoing updates, improvements, patching and support. This typically multiplies the cost of building v1.0.

3. Commercializing a product built primarily for in-house usage is a great theory but in reality rarely works. Such examples do exist but are few and far between. Building a new product company requires a lot more than just technology, and execution risk is high unless it becomes the #1 priority for your company. 

4. A long time to value for a new product venture means you are often missing out on significant value that would be realized if an existing ‘off the shelf’ product (today that often means a SaaS solution) were selected.

5. Enterprise-grade software comes with the bells and whistles that enterprises need. This typically means lots of points of integration, single sign-on requirements, and security as a given. Home-grown products typically do not include these items, which are considered ‘added extras’ and not core to solving the problem at hand.

Create Your Business Case

If you work in an organization with access to technical resources (which today describes a lot of companies), there is often a desire to build because “they can,” along with a sense that a custom solution will meet the precise needs of the organization. Even if the opportunity cost of diverting resources away from other projects is low, there can be a tendency to overlook the longer-term maintenance, upgrade, and support requirements of enterprise-grade software. Additionally, we often encounter companies who have started on the journey toward building an in-house solution, only to discover additional complexity or to see internal priorities change. In such cases, even when there are significant sunk costs, reappraising alternative paths and third-party solutions can still make sense. 

Ultimately, every case is unique. Weighing the relative pros and cons and building the business case to buy vs. build requires considering both financial and non-financial aspects to help ensure the right decision is made. 

Do Cloud Providers Care About Green Computing?

Is green computing something cloud providers like Amazon, Microsoft, and Google care about? And whether they do or not – how much does it matter? As the data center market continues to grow, it’s making an impact not only on the economy but on the environment as well. 

Public cloud offers enterprises more scalability and flexibility compared to their on-premise infrastructures. One benefit occasionally touted by the major cloud providers is that organizations will be more socially responsible when moving to the cloud by reducing their carbon footprint. But is this true?

Here is one example: Northern Virginia is the east coast’s capital of data centers, where “Data Center Alley” is located (and, as it happens, the ParkMyCloud offices), home to more than 100 data centers and more than 10 million square feet of data center space. Northern Virginia welcomed the data center market because of its positive economic impact. But as demand for cloud services continues to grow, data center expansion also increases dramatically. Earlier this year, the cloud boom in Northern Virginia alone had reached over 4.5 gigawatts of commissioned power, about the same output as nine large (500-megawatt) coal power plants. 

Environmental groups like Greenpeace have accused major cloud providers like Amazon Web Services (AWS) of not doing enough for the environment when operating data centers. According to them, the problem is that cloud providers rely on power commissioned from energy companies focused largely on dirty energy (coal and natural gas), with very little coming from renewable energy initiatives. While those claims put the spotlight on the energy companies as well, we wanted to know what (if anything) the major cloud providers are doing to rely less on these types of energy and to supply their data centers with cleaner energy, making green computing a reality.

Data Center Sustainability Projects from AWS

According to AWS’s sustainability team, AWS is investing in green energy initiatives and striving toward an ambitious goal of 100% renewable energy use by 2040. It is doing this by proposing and supporting smart environmental policies and by applying its technology expertise to sustainable innovation, working with state and local environmental groups and through power purchase agreements (PPAs) with power companies.

AWS’s Environmental Layer, which is dedicated to site selection, construction, operations and the mitigation of environmental risks for data centers, also includes sustainability considerations in those decisions. According to AWS, “When companies move to the AWS Cloud from on-premises infrastructure, they typically reduce carbon emissions by 88%.” This is because their data suggests companies generally use 77% fewer servers, 84% less power, and gain access to a 28% cleaner mix of energy – solar and wind power – compared to using on-premises infrastructure. 

Amazon Solar Farm

So, how much of this commitment has AWS been able to achieve, and is it enough? In 2018, AWS said it had made a lot of progress on its sustainability commitment and had exceeded 50% renewable energy use. Currently, AWS has nine renewable energy farms in the US, including six solar farms in Virginia and three wind farms in North Carolina. AWS plans to add three more renewable energy projects: one more in the US, one in Ireland, and one in Sweden. Once completed, these projects are expected to generate approximately 2.7 gigawatts of renewable energy annually.

Microsoft’s Environmental Initiatives for Data Centers

Microsoft has stated that they are committed to change and make a positive impact on the environment, by “leveraging technology to solve some of the world’s most urgent environmental issues.”

In 2016, they announced they would power their data centers with more renewable energy, setting a target of 50% renewable energy by the end of 2018. According to them, they achieved that goal in 2017, earlier than expected. Looking ahead, they plan to surpass their next milestone of 70% and hope to reach 100% renewable energy by 2023. If they meet these targets, they will be far ahead of AWS.

Beyond renewable energy, Microsoft plans to use IoT, AI and blockchain technology to measure, monitor and streamline the reuse, resale, and recycling of data center assets. Additionally, Microsoft will implement new water replenishment initiatives that will utilize rainfall for non-drinking water applications in their facilities.

Google’s Focus for Efficient Data Centers 

Google claims that making data centers run as efficiently as possible is a very big deal, and that reducing energy usage has been a major focus for them over the past 10 years. 

Google’s innovation in the data center market came from building facilities from the ground up instead of buying existing infrastructure. According to Google, using machine learning to monitor and improve power usage effectiveness (PUE) and to find new ways to save energy in their data centers allowed them to implement new cooling technologies and operational strategies that reduced energy consumption in their buildings by 30%. Additionally, they deployed custom-designed, high-performance servers that use as little energy as possible by stripping out unnecessary components, helping them reduce their footprint and add more load capacity. 

By 2017, Google announced they had reached 100% renewable energy, buying power through power purchase agreements (PPAs) with wind and solar farms and then reselling it back to the wholesale markets where their data centers are located. 

The Environmental Argument

Despite the renewable energy pledges cloud providers have made, cloud services continue to grow beyond those commitments, and operating data centers still depends heavily on “dirty energy.”

Breakthroughs in cloud sustainability, big and small, are taking place: better infrastructure, higher-performance servers, and reduced carbon emissions through greater access to renewable energy resources like wind and solar power. 

Some may argue that time is against us. But if cloud providers continue to improve on their existing commitments so that they keep pace with growth, then data centers – and ultimately the environment – will benefit.

Why Abstraction Layers are Key to IT Success

A recent conversation I had with Turbonomic founder and president Shmuel Kliger highlighted the importance of abstraction layers. Shmuel told me, “there’s only one reason why IT exists,” which quickly led to a discussion of cloud and abstraction.

It’s easy to get so caught up in the whirlwind of ever-evolving technologies that returning to a single, fundamental purpose of IT is actually quite an intriguing idea.

Why Does IT Exist?

So, why does IT exist? As Shmuel put it, the purpose of IT is to get applications the resources they need in order to perform. That’s it!

Others have said the purpose of IT is to “make productivity friction free” or “enable the business to drive new opportunities”, but it all comes down to enabling the performance of the business. 

That key step of “enablement” is where we get to the plethora of technologies – private cloud, public cloud, serverless cloud, containers, managed containers, container orchestration, IoT data, data warehouses, data lakes, the list goes on and on. There’s no lack of solutions to the many productivity and technology-related problems faced in businesses today. Really, the problem is that such a wide and constantly changing array of technologies exists, inadvertently (or perhaps advertently, depending on your view!) creating more complexity in the wake of the problems they solve.

Complexity is no stranger, but it’s no friend, either. Simplification leads to efficiencies across the board, and should be one of the primary goals IT departments seek to achieve.

How Abstraction Provides Simplification

First of all: what do we mean by abstraction? An abstraction layer is something that hides implementation details and replaces them with more easily understandable and usable functions. In other words, it makes complicated things simpler to use. These layers can include hardware, programmable logic, and software. 
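
As a toy illustration of the idea (not tied to any particular product), the sketch below defines a tiny storage abstraction with two interchangeable backends; calling code only ever sees put and get. All of the names here are hypothetical.

```python
# A toy abstraction layer: callers depend on ObjectStore, never on a backend's details.
import os
from abc import ABC, abstractmethod


class ObjectStore(ABC):
    """The abstraction: all that callers are allowed to know about storage."""

    @abstractmethod
    def put(self, key: str, data: bytes) -> None: ...

    @abstractmethod
    def get(self, key: str) -> bytes: ...


class InMemoryStore(ObjectStore):
    """One implementation detail hidden behind the interface."""

    def __init__(self) -> None:
        self._objects: dict[str, bytes] = {}

    def put(self, key: str, data: bytes) -> None:
        self._objects[key] = data

    def get(self, key: str) -> bytes:
        return self._objects[key]


class FileStore(ObjectStore):
    """Another implementation; the calling code does not change at all."""

    def __init__(self, directory: str) -> None:
        os.makedirs(directory, exist_ok=True)
        self._dir = directory

    def put(self, key: str, data: bytes) -> None:
        with open(os.path.join(self._dir, key), "wb") as f:
            f.write(data)

    def get(self, key: str) -> bytes:
        with open(os.path.join(self._dir, key), "rb") as f:
            return f.read()


def save_report(store: ObjectStore) -> None:
    # Application code sees only the abstraction, so swapping backends is trivial.
    store.put("report.txt", b"quarterly numbers")
    print(store.get("report.txt"))


save_report(InMemoryStore())
save_report(FileStore("/tmp/reports"))
```

Swapping FileStore for a cloud-backed implementation would not change save_report at all, and that is exactly the simplification an abstraction layer buys you.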

When you start to think about the layers between hardware and an application end user, you see that the abstraction layers also include on-premises hardware; cloud providers and IaaS; PaaS; FaaS; and containers. These middlemen start to add up, but ultimately, in order for an application to execute its underlying sequence of code, it needs CPU, memory, I/O, network, and storage. 

On this point, Shmuel said: “I always say the artifact of demand can change and the artifact of supply can change, but the problem of matching demand to supply doesn’t go away.” 

By using layers of abstraction to match this demand to supply, you remove the burden of the vast majority of decisions from the developer and the end user – in other words, simplification.

The Full Benefits of Operating Through Abstraction Layers

In addition to simplification, other benefits of abstraction include: 

  • Alleviating Vendor Lock-In – this can occur across the board – for example, by using a layer of multi-cloud management tools, you reduce your reliance on any single cloud provider, which is important for enterprise risk mitigation strategies.
  • Reducing Complexity of Analysis – by bringing data into one place and one format, abstraction makes data analytics simpler and broader reaching.
  • Reducing Required Expertise – by rolling up multiple hardware and software problems into a single management layer, you eliminate much of the heterogeneity that demands diverse skills in your organization’s workforce, and you reduce the limits imposed by the human end user.
  • Optimizing Everything – by eliminating silos and allowing for a single point of analysis, abstraction management opens doors to resource and cost optimization.

IT organizations should attack the problems of complexity in two ways: one, by identifying the messiest and most complex areas of your technology stack and creating a plan of attack to simplify their management. 

Two, by identifying “quick wins” where you can abstract away the problem with automation, achieving a better environment, automatically. We’ve got one for you: try ParkMyCloud to automatically optimize your cloud costs, saving you time, money, and effort.

8 Non-Obvious Benefits of SSO

As an enterprise or organization grows in size, the benefits of SSO grow along with it. Some of these benefits are easy to see, but there are other things that come up as side-effects that might just become your favorite features. If you’re on the fence about going all-in on Single Sign-On, then see if anything here might push you over the edge.

1. Multi-factor Authentication

One of the best ways to secure a user’s account is to make the account not strictly based on a password. Passwords can be hacked, guessed, reused, or written down on a sticky note on the user’s monitor. A huge benefit of SSO is the ease of adding MFA security to the SSO login. By adding a second factor, which is typically a constantly rotating number or token, you vastly increase the security of the account by eliminating the immediate access a hacked password would otherwise grant. Some organizations even choose to add a third factor, typically something you are (like a fingerprint or eye scan), for physical access to a location. Speaking of passwords…
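
To give a sense of how small that rotating second factor really is, here is a sketch of time-based one-time password (TOTP) verification using the pyotp library. In a real SSO deployment the identity provider handles this behind the scenes; the secret and codes below are purely illustrative.

```python
# Minimal TOTP (time-based one-time password) check: the kind of rotating
# six-digit code an MFA-enabled SSO login verifies behind the scenes.
import pyotp

# Normally generated once at enrollment, stored server-side for the user,
# and shared with the user's authenticator app (e.g. via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

# The authenticator app derives the current code from the shared secret and
# the clock; by default the code rotates every 30 seconds.
current_code = totp.now()

# The server checks what the user typed against the expected value.
print(totp.verify(current_code))  # True
print(totp.verify("000000"))      # almost certainly False
```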

2. Increased Password Complexity

Forcing users to go through an SSO login instead of remembering passwords for each individual application or website means they are much more open to using complex passwords that rotate frequently. A big complaint about passwords is having to remember a bunch of them without reusing them, so limiting the number of passwords to remember means that the one password can be much stronger.

3. Easier User Account Deployment

This one might seem obvious to some, but by using an SSO portal for all applications, user provisioning can be greatly accelerated and secured. The IT playbook can be codified within the SSO portal, so a new user in the accounting department can get immediate access to the same applications that the rest of the accounting department has access to. Now, when you get that inevitable surprise hire that no one told you about, you can make it happen and be the hero.

4. Easier User Account Deletion

On the flip side of #3, sometimes the playbook for removing users after they leave the company can be quite convoluted, and there’s always that nagging feeling that you’re forgetting to change a password or remove a login from somewhere. With SSO, you just have one account to disable, which means access is removed quickly and consistently. If your admins were using SSO for administrative access, it also means fewer password changes you have to make on your critical systems.

5. Consistent Audit Logging

Another one of the benefits of SSO is consistent audit logging. Funneling all of a user’s access through the same SSO login means that tracking that user’s activity is easier than ever. In financial and regulated industries, this is a crucial piece of the puzzle, as you can make guarantees about what you are tracking. In the case of a user who is no longer employed by the enterprise, it can make it easier to have your monitoring tools look for such attempts at access (but you know they can’t get in, from point #4!).

6. Quickly Roll Out New Applications

Tell your IT staff that you need to roll out a new application to all users without SSO and you’ll hear groans starting in record time. However, with SSO, rolling out an application is a matter of a few clicks. This means you have plenty of options, ranging from a slow rollout to select groups all the way to a full deployment within a matter of minutes. This flexibility can really help maximize your users’ productivity, and will make your IT staff happy to put services into play.

7. Simplify the User Experience

If you use a lot of SaaS applications or web apps that require remembering a URL, you’re just asking for your users to need reminders of how to get into them. With an SSO portal, you can make all services and websites show up as clickable items, so users don’t need to remember the quirky spelling of that tool you bought yesterday. Users will love having everything in one place, and you’ll love not having to type anything anymore.

8. Empower Your Users

Speaking of SaaS applications, one of the main blockers for deploying an application to a wider audience is the up-front setup time and effort, which leads to IT and Operations shouldering the load of the work (since they have the access). SSO can accelerate that deployment, which means the users have more power and can directly access the tools they need. Take an example of ParkMyCloud, where instead of users asking IT to turn on their virtual machines and databases, the users can log directly into the ParkMyCloud portal (with limited access) and control the costs of their cloud environments. Users feel empowered, and IT feels relieved.

Don’t Wait To Use SSO

Whether you’ve already got something in place that you’re not fully utilizing, or you’re exploring different providers, the benefits of SSO are numerous. Small companies can quickly make use of single sign-on, while large enterprises might consider it a must-have. Either way, know that your staff and user base will love having it as their main access portal!