How Cloud has affected the Centralization vs Decentralization of IT

Every week, we find ourselves having a conversation about cost optimization with a wide variety of enterprises. In larger companies, we often talk to folks in the business unit that most people traditionally refer to as Information Technology (IT). These meetings usually turn into discussions about the centralization vs. decentralization of IT, often without the participants realizing it, because we are discussing cloud and how it's built, run, and managed in the organization.

Enterprises have traditionally organized their IT team as a single department under the leadership of the CIO. The IT team works across organizational departments and supports the enterprise, meeting the tooling and project needs requested by other business units or the executive team. Although there are significant efficiencies in this approach, there are also risks that can affect the entire organization, particularly one that stems from the 'need for speed' (agility). Each line of business (LOB) depends on IT to deliver services, hardware, software, and other 'tools', but this is not always done quickly and efficiently, mostly due to internal processes.

Benefits of Centralized IT Structures

The benefits of this type of organizational structure are often associated with increased purchasing power, improved information flow between IT team members, hiring efficiencies for skilled staff, and a watchful view of the enterprise's technical infrastructure from both a network operations and a security perspective. Let's dig into these in a bit more detail.

  • Lowered expenses and increased purchasing power – a centralized environment typically gives the business more buying power at lower cost by combining the needs of all departments into a single buying pool.
  • Improved productivity for IT staff – IT teams are like any other team: they thrive on collaboration and mutual understanding of and respect for each other's skill sets. Centralization also makes installations and technical resolutions easier because you're addressing a centralized resource.
  • Enterprise-wide information dissemination – a centralized organization builds its network from the center out, so LOBs typically share the same networked resources, such as an ERP or CRM. This avoids the danger of siloed information: data that could be critical to another LOB but, without access, remains invisible to it.

Despite the benefits stated above, a centralized team has several limitations and challenges. The one with the greatest enterprise-wide exposure is how best to prioritize project requests from each of the LOBs – enter decentralization and the cloud: IaaS, PaaS, and SaaS.

Decentralization is a type of organizational structure in which daily operations and decision-making responsibilities are delegated by top management to middle and lower-level managers and their respective business units. This frees up top management to focus more on major decisions. For a small business, growth may create the need to decentralize to continue efficient operations. Decentralization offers several advantages and is a practical approach when different departments or business units in a company have different IT needs and strategies.

Benefits of Decentralized IT Structures

  • The ability to tailor IT selection and configuration. When individual departments have IT decision-making power, they can choose and configure IT resources based on their own specific needs. For example, each department has its own servers optimized to run its required applications.
  • More fail-safes and organizational redundancy. Decentralizing makes servers and applications more resilient—and it can do the same for IT networks, too. If each department maintains its own server, one can function as a backup server in case another server fails. (Of course, this type of redundancy would need to be properly configured in advance.)
  • Faster response to new IT trends. Since departments in decentralized organizations can make independent decisions, it's easier for them to take advantage of new technology in the cloud.

One drawback of decentralized IT structures is that this model often leads to information silos – collections of data and information that cannot be easily shared across departments. Centralized IT structures help prevent these silos, leading to better knowledge-sharing and cooperation between departments. For example, using one centrally managed CRM system makes it possible for any employee in a company to access customer information from anywhere – think Salesforce.

The Reality is Hybrid IT

As we see above and in real life, there are many reasons an organization might be tempted to move toward or away from a centralized IT organizational structure, but in practice many companies run a hybrid model: some IT systems, like your CRM and ChatOps, are centralized, while others, like your cloud provider and orchestration tool, may be decentralized (by business unit). The top reasons for this hybrid model are technical agility and the availability of tools through SaaS, IaaS, and PaaS providers – IT no longer needs to build every solution and tool for you. Decentralized IT organizational structures are typically best for companies that rely on technical agility to remain competitive. These include newer, smaller companies (e.g., startups) and organizations that need to respond quickly to new IT developments (e.g., software and hardware companies or app developers). And for larger companies that want to bring that mentality and model to their business, Capital One is a great example: a bank that wants to be a technology company.

What are your thoughts on the centralization vs decentralization of IT?

The 5 Components of Azure DevOps

In the search to accelerate and simplify the DevOps process, we take a look at Microsoft's Azure DevOps, a hosted service providing development and collaboration tools, formerly known as Visual Studio Team Services (VSTS). Last year, Microsoft split VSTS into five separate Azure-branded services under the Azure DevOps banner, a complete public cloud offering that makes it easier for developers to adopt portions of the platform without requiring them to go "all in" like the former VSTS.

Azure DevOps supports both public and private cloud configurations – the services include:

  • Azure Boards – A work tracking system with Kanban boards, dashboards, and reporting
  • Azure Pipelines – A CI/CD, testing, and deployment system that can connect to any Git repository
  • Azure Repos – A cloud-hosted private Git repository service
  • Azure Test Plans – A solution for tests and capturing data about defects
  • Azure Artifacts – A hosting facility for Maven, npm, and NuGet packages

Each of these Azure DevOps services is open and extensible and can be used with all kinds of applications, regardless of framework, platform, or cloud. Built-in cloud-hosted agents are provided for Windows, macOS, and Linux, and workflows support native containers, Kubernetes deployment options, virtual machines, and serverless environments.
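
To illustrate that openness, here is a minimal sketch (not an official Microsoft sample) of calling the Azure DevOps REST API from Python with the requests library to list the projects in an organization. The organization name and personal access token are placeholders you would supply yourself.

    import requests

    # Hypothetical placeholders - substitute your own organization name and a
    # personal access token (PAT) created under User Settings > Personal Access Tokens.
    ORGANIZATION = "my-org"
    PAT = "<personal-access-token>"

    # Core "Projects" endpoint of the Azure DevOps REST API.
    url = f"https://dev.azure.com/{ORGANIZATION}/_apis/projects?api-version=5.1"

    # Azure DevOps accepts a PAT as the password half of HTTP Basic auth (blank username).
    response = requests.get(url, auth=("", PAT))
    response.raise_for_status()

    for project in response.json().get("value", []):
        print(project["name"])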

With all five services together, users get an integrated suite that provides end-to-end DevOps functionality. But since they are broken into separate components, Azure DevOps gives users the flexibility to pick only the services they need without adopting the full suite. For example, because Kubernetes exposes a standard interface and runs the same way on every cloud provider, Azure Pipelines can be used to deploy to Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), or clusters from any other cloud provider without requiring any of the other Azure DevOps components.

Embracing Azure DevOps

One of the main benefits for teams using Azure DevOps is that developers can work securely from anywhere, in any format, and embrace open-source technology. Azure DevOps also addresses the vendor lock-in concerns of its earlier incarnation by providing extensive integration with industry and community tools.

With the many integrations available, users can log in using SSO tools like Azure AD or communicate with their team via Slack integration while accessing both cloud and on-premises resources.

Azure Pipelines offers free CI/CD with unlimited minutes and 10 parallel jobs for every open-source project, and many of the top open-source projects, such as Atom, CPython, Pipenv, Tox, Visual Studio Code, and TypeScript, already use it for CI/CD.

Benefits of Azure DevOps 

Azure DevOps use cases include:

  • Planning – Azure DevOps makes it easy for DevOps teams to manage their work with full visibility across products and projects, helping them keep development efforts transparent and on schedule. Teams can define, track, and lay out work with Kanban boards, backlogs, custom dashboards, and reporting capabilities using Azure Boards.
  • Developing – Allows teams to share code and collaborate with Visual Studio and Visual Studio Code. Users can create automatic workflows for automated testing and continuous integration in the cloud with Azure Pipelines.
  • Delivery – Helps teams deploy applications to any Azure service automatically and with full control. Users can define and spin up multiple cloud environments with Azure Resource Manager or HashiCorp Terraform, and then create continuous delivery pipelines into these environments using Azure Pipelines or tools such as Jenkins and Spinnaker.
  • Operations –  With Azure Monitor, users can implement full stack monitoring, get actionable alerts, and gain insights from logs and telemetry.

Pricing

As for Azure DevOps pricing, there are plenty of open-source tools that can be combined to deliver similar functionality, but the Basic plan is free for open-source projects and for small teams of up to five users. For larger teams, the cost ranges from $30 per month for 10 users to $90 per month for 20 users, and so on.
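
To make the arithmetic behind those figures explicit, here is a tiny sketch that reproduces them. It assumes the Basic plan's first five users are free and each additional user costs $6 per month, which is the per-user rate implied by the $30 and $90 figures above; check current Azure DevOps pricing before relying on it.

    FREE_USERS = 5        # Basic plan includes the first five users at no cost
    PRICE_PER_USER = 6.0  # USD per additional user per month, implied by the figures above

    def monthly_cost(users: int) -> float:
        """Estimated monthly Basic plan cost for a team of the given size."""
        billable_users = max(0, users - FREE_USERS)
        return billable_users * PRICE_PER_USER

    for team_size in (5, 10, 20, 50):
        print(f"{team_size:>3} users -> ${monthly_cost(team_size):.0f}/month")
    # 10 users -> $30/month and 20 users -> $90/month, matching the figures above.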

In summary, Azure DevOps is an all-in-one project tracking and planning tool combined with developer and DevOps tools for writing, building, and deploying code, and it's relatively quick and easy to use. Maintenance costs are reduced because developers only need an active subscription for constant access to the latest version. Keep in mind, however, that Azure DevOps will indirectly consume Azure Storage and compute services, which will increase usage and affect costs.

Why Use One Cloud, When You Can Use Any Cloud?

No, seriously, why would we just use one cloud?

Let’s stop for a moment and think about what has happened over the course of the last few years in public cloud computing and the hypervisor wars on-premises.  VMware has largely dominated the data center, but we are seeing a strong push from Microsoft on the hypervisor front.  KVM and Xen continue to grow in popularity for certain sectors, and all across the spectrum we see lots of folks running more than one hypervisor.

The cloud is no different.  The reason that we are all seeking the “AWS killer” just like the elusive “iPhone killer” is that there is some bizarre need to locate a winner of the platform war. 

This isn’t a zero-sum game.  The real shift in our industry is the broad acceptance of multiple platforms inside every IT portfolio.  We jumped right past the cloud to the multi-cloud.

Why Run More Than One Cloud?

Technology is not the problem; it's the solution.  Business challenges are being answered by technology, which is what really matters.  So, why would we run more than one cloud?  The reason is usually a technical one: certain features, APIs, and architectures may be better supported on one platform than another.  There are raw economics involved as well.  And there are availability concerns that drive businesses to disperse their IT across multiple data centers, so why not do the same in the cloud?

One reason AWS and OpenStack are often pitted against each other is that OpenStack can expose AWS-compatible APIs, something Randy Bias and many in the community fought for over the last few years.  This matters because AWS adoption is huge, and being able to move the same workloads to OpenStack using the same API calls and interactions would be a massive win for OpenStack as a platform.
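
As a concrete sketch of what that API compatibility buys you, the snippet below uses boto3 to list instances: pointed at AWS it behaves normally, and pointed at an OpenStack cloud that runs the EC2-compatible API service (ec2-api) the very same calls should work. The endpoint URL, region, and credentials are placeholders, and this assumes your OpenStack deployment has the EC2 compatibility layer enabled.

    import boto3

    def list_instances(endpoint_url=None):
        """Print instance IDs and states via the EC2 API.

        With endpoint_url=None, boto3 talks to AWS itself; pass the URL of an
        OpenStack cloud's EC2-compatible endpoint (the ec2-api service) to run
        the same calls against OpenStack instead.
        """
        ec2 = boto3.client(
            "ec2",
            region_name="us-east-1",             # placeholder region
            endpoint_url=endpoint_url,           # None -> AWS; URL -> OpenStack ec2-api
            aws_access_key_id="<access-key>",    # placeholder credentials
            aws_secret_access_key="<secret-key>",
        )
        for reservation in ec2.describe_instances()["Reservations"]:
            for instance in reservation["Instances"]:
                print(instance["InstanceId"], instance["State"]["Name"])

    list_instances()                                       # against AWS
    list_instances("https://openstack.example.com:8788/")  # hypothetical OpenStack ec2-api endpoint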

If we stick strictly to public cloud providers, we can start with what we would call the big three: AWS, Microsoft Azure, and Google Cloud Platform.  Among those three, we see a lot of parrying, with feature and pricing updates happening regularly – features more so than pricing lately.  The result is an ever-growing set of services that can be easily consumed.  As common orchestration and operational platforms like Mesos and Kubernetes gain popularity, the commoditization of cloud gains even more credence.  (Author's opinion note:  The supposed "race to zero" for cloud costs is over.  The providers have all agreed that pricing isn't where they win customers anymore.)

Reducing the Complexity of Multi-Cloud

Complexity is the one thing that will slow multi-cloud adoption a bit longer.  The public cloud platforms offer clearly different ways to consume resources and to programmatically create and destroy them, especially once you go outside of the big three.  That means consumers of the public cloud will have to start with one target and generally work up to a deep comfort level there before embracing a multi-cloud strategy.

Once we remove or reduce complexity from the list of barriers, the door opens to embracing the economic value of a multi-cloud strategy.  This is where we can embrace spot pricing and on-demand growth to tackle scaling needs, while making the workload truly portable and ensuring that price becomes the real win.  Networking stacks across the clouds are rather different for a reason: if every car manufacturer used the same exact parts, they would lower the chances of you coming back to them for up-sell opportunities.  The same goes for the cloud.  Networking and security (they should always be paired) will most likely be the greatest challenge that technologists face in architecting their multi-cloud solutions.

Next-generation applications are being built as cloud-native where possible.  This opens the door for what has been talked about for years: supposed freedom from vendor lock-in.  I'm always rather skeptical when a representative from one cloud company says "come to us and avoid vendor lock-in," because every vendor, even a public cloud one, has lock-in.

What we do gain by embracing the cloud-native approach to application development and deployment is that we reduce the risk of lock-in.

The more we learn from forward-leaning development teams, the more agility we can give ourselves in a multi-cloud architecture.  As the public cloud pundits who represent one faction or another argue over who will be the last one to be all-in on the public cloud running cloud-native applications, they forget one thing: they opened the door for their competition too.

How to Manage Hybrid & Multi-Cloud Environments with Google Cloud Composer

As we continue to evaluate ways to automate various aspects of software development, today we’ll take a look at Google Cloud Composer. This is a fully managed workflow orchestration service built on Apache Airflow that makes workflow management and creation simple and consistent.

Hybrid and multi-cloud environments continue to grow as enterprises look to take advantage of the cloud's scalability, flexibility, and global reach. Of the three major providers, Google Cloud has been the most open to supporting this multi-cloud reality. For example, earlier this year Google launched Anthos, a new managed service for hybrid and multi-cloud environments that gives enterprises operational consistency by running on existing hardware, leveraging open APIs, and giving developers the freedom to modernize. But managing these environments can either be an invaluable proposition for your company or one that completely challenges your infrastructure – which brings us to Google's solution, Cloud Composer.

How does Google Cloud Composer work?

With Cloud Composer, you can monitor, schedule and manage workflows across your hybrid and multi-cloud environment. Here is how:

  • As part of Google Cloud Platform (GCP), Cloud Composer integrates with tools like BigQuery, Dataflow, Dataproc, Datastore, Cloud Storage, Pub/Sub and Cloud ML Engine, giving users the ability to orchestrate end-to-end GCP workloads.
  • You can code directed acyclic graphs (DAGs) using Python to improve workflow readability and pinpoint areas in need of assistance.
  • It has one-click deployment built-in to give you instant and easy access to a range of connectors and graphical representations that show your workflow in action.
  • Cloud Composer allows you to pull workflows together from wherever they live, supporting a fully-functioning and connected cloud environment.
  • Since Cloud Composer is built on Apache Airflow – an open-source technology – it provides freedom from vendor lock-in as well as integration with a wide variety of platforms.  

Simplifying hybrid and multi-cloud environment management

Cloud Composer is well suited to hybrid and multi-cloud management because it's built on Apache Airflow and operated with the Python programming language. The open-source foundation, the "no lock-in" approach, and the resulting portability give users the flexibility to create and deploy workflows seamlessly across clouds for a unified data environment.

Setting up your environment is quick and simple. Pipelines created with Cloud Composer will be configured as DAGs with easy integration for any required Python libraries, giving users of almost any level the ability to create and schedule their own workflows. With the built-in one-click deployment, you get instant and easy access to a range of connectors and graphical representations that show your workflow in action.
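
As a rough sketch of what such a pipeline looks like, here is a minimal Airflow DAG of the kind you would drop into a Cloud Composer environment's DAG folder (a Cloud Storage bucket that Composer watches). The DAG name, schedule, and tasks are illustrative placeholders, and the imports follow the Airflow 1.x layout that Cloud Composer used at the time of writing.

    from datetime import datetime, timedelta

    from airflow import DAG
    from airflow.operators.bash_operator import BashOperator

    default_args = {
        "owner": "data-team",                 # placeholder owner
        "retries": 1,
        "retry_delay": timedelta(minutes=5),
        "start_date": datetime(2019, 1, 1),   # arbitrary example start date
    }

    # The DAG is plain Python: Composer picks it up from the environment's DAG bucket.
    with DAG(
        dag_id="example_composer_workflow",   # hypothetical DAG name
        default_args=default_args,
        schedule_interval="@daily",           # run once per day
        catchup=False,
    ) as dag:

        extract = BashOperator(
            task_id="extract",
            bash_command="echo 'pull data from source systems'",
        )

        load = BashOperator(
            task_id="load",
            bash_command="echo 'load results into BigQuery'",
        )

        extract >> load  # run extract before load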

However, cost can be a drawback when using Cloud Composer. Specific costs can be hard to calculate, as Google measures the resources your deployments use and adds the total cost of your Apache Airflow deployments onto your wider GCP bill.

Cloud Composer Pricing 

Pricing for Cloud Composer is based on the size of a Cloud Composer environment and the duration the environment runs, so you pay for what you use, as measured by vCPU/hour, GB/month, and GB transferred/month. Google offers multiple pricing units for Cloud Composer because it uses several GCP products as building blocks. You can also use the Google Cloud Platform pricing calculator to estimate the cost of using Cloud Composer. 

So, should you use Google Cloud Composer? Cloud Composer environments are meant to be long-running compute resources that are always online so that you can schedule repeating workflows whenever necessary. Unfortunately, you can't turn a Cloud Composer environment on and off – you can only create or destroy it – so it may not be right for every situation and could cost more than the advantages are worth.

How to Create a Business Case to Buy vs. Build Software

When approaching new problems, such as cost optimization or task automation, development and IT teams are faced with the decision to buy vs. build a solution. There are a number of financial and strategic factors to consider when determining the best choice in each case, which can be difficult to parse through. Here are our tips for building a buy vs. build business case, whether for your own use or to present to management.

Reasons to Build Your Own Solution 

1. An off-the-shelf product doesn't exist to solve your problem. If you can't buy a product, or hack together several different existing solutions, you are probably going to have to build your own software. There is not too much "blue ocean" left out there, but if you have a need and no product can solve it, then building can make sense. Be wary and make sure you've completed your research before determining this is the case: perhaps the solution is called something other than what you're searching for, or exists as part of a larger suite of offerings.

2. It will provide you with a significant competitive advantage over your rivals. This typically requires unique IP (some special sauce) that you can build into the product, which other existing products cannot offer and which will help your company succeed.

3. You can see a business opportunity whereby not only can you use the product yourself in-house, but you will also be able to offer it to your customers, thus leveraging your company’s investment.

4. You have a team of engineers sitting on the bench with nothing better to do (i.e. minimal opportunity cost). This does actually happen from time to time, and such a project can make them productive.

5. The specialist knowledge already exists within the company and a natural product owner exists. This is not reason enough to decide to build, but without it, things are likely more difficult.

Reasons to Buy Pre-Built Solutions 

1. Building software is complex and expensive. If this is a software product that you are going to roll out across the enterprise, it will require support and, most likely, a commitment to feature updates and improvements for the life of the product.

2. Supporting products that your team might build is a significant commitment and typically is where the ‘big bucks are spent’. An MVP style product is unlikely to keep the masses happy for long, and you will need to budget for ongoing updates, improvements, patching and support. This typically multiplies the cost of building v1.0.

3. Commercializing a product built primarily for in-house usage is a great theory but in reality rarely works. Such examples do exist but are few and far between. Building a new product company requires a lot more than just technology, and execution risk is high unless it becomes the #1 priority for your company.

4. The long time to value of a new product venture means you are often missing out on significant value that would be realized if an existing 'off the shelf' product (today that often means a SaaS solution) were selected.

5. Enterprise-grade software comes with the bells and whistles that enterprises need. This typically means lots of points of integration, single sign-on requirements, and security as a given. Home-baked products typically do not include these items which are considered ‘added extras’ and not core to solving the problem at hand.

Create Your Business Case

If you work in an organization with access to technical resources (which today includes a lot of companies), there is often a desire to build because "they can" and a sense that a custom solution will meet the precise needs of the organization. Even if the opportunity cost of diverting resources away from other projects is low, there can be a tendency to overlook the longer-term maintenance, upgrade, and support requirements of enterprise-grade software. Additionally, we often encounter companies that have started down the path of building an in-house solution, only to discover additional complexity or to see internal priorities change. In such cases, even when there are significant sunk costs, reappraising alternative paths and third-party solutions can still make sense.
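
To put some structure around that comparison, here is a simple sketch of a multi-year total-cost-of-ownership calculation. Every number in it is a hypothetical placeholder to be replaced with your own estimates, and a real business case will also weigh factors such as opportunity cost, time to value, and execution risk that a model this simple only hints at.

    def build_tco(years, initial_dev_cost, annual_maintenance_pct=0.5):
        """Rough 'build' cost: initial development plus ongoing maintenance,
        estimated here as a fraction of the build cost per year."""
        return initial_dev_cost + years * initial_dev_cost * annual_maintenance_pct

    def buy_tco(years, annual_subscription, onboarding_cost=0):
        """Rough 'buy' cost: one-time onboarding plus the yearly subscription."""
        return onboarding_cost + years * annual_subscription

    # Hypothetical inputs - replace with your own estimates.
    YEARS = 3
    BUILD_COST = 250_000   # engineering cost to ship v1.0
    SUBSCRIPTION = 60_000  # annual SaaS cost
    ONBOARDING = 20_000    # integration and rollout of the purchased product

    print(f"Build TCO over {YEARS} years: ${build_tco(YEARS, BUILD_COST):,.0f}")
    print(f"Buy TCO over {YEARS} years:   ${buy_tco(YEARS, SUBSCRIPTION, ONBOARDING):,.0f}")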

Ultimately, every case is unique, and weighing the relative pros and cons to build the buy vs. build business case will require considering both financial and non-financial aspects to help ensure the right decision is made.

Do Cloud Providers Care About Green Computing?

Is green computing something cloud providers like Amazon, Microsoft, and Google care about? And whether they do or not – how much does it matter? As the data center market continues to grow, it’s making an impact not only on the economy but on the environment as well. 

Public cloud offers enterprises more scalability and flexibility than their on-premises infrastructures. One benefit occasionally touted by the major cloud providers is that organizations become more socially responsible when moving to the cloud by reducing their carbon footprint. But is this true?

Here is one example: Northern Virginia is the east coast’s capital of data centers, where “Data Center Alley” is located (and, as it happens, the ParkMyCloud offices), home to more than 100 data centers and more than 10 million square feet of data center space. Northern Virginia welcomed the data center market because of its positive economic impact. But as the demand for cloud services continues to grow, the expansion of data centers also increases dramatically. Earlier this year, the cloud boom in Northern Virginia alone was reaching over 4.5 gigawatts in commissioned energy, about the same power output needed from nine large (500-megawatt) coal power plants. 

Environmental groups like Greenpeace have accused major cloud providers like Amazon Web Services (AWS) of not doing enough for the environment when operating data centers. According to them, the problem is that cloud providers rely on commissioned energy from utilities focused largely on dirty energy (coal and natural gas), with very little coming from renewable energy initiatives. While these claims put the spotlight on energy companies as well, we wanted to know what (if anything) the major cloud providers are doing to rely less on these energy sources and supply their data centers with cleaner energy to make green computing a reality.

Data Center Sustainability Projects from AWS

According to AWS's sustainability team, they're investing in green energy initiatives and striving toward an ambitious goal of 100% renewable energy use by 2040. They are doing this by proposing and supporting smart environmental policies, leveraging technology expertise that drives sustainable innovation, working with state and local environmental groups, and entering power purchase agreements (PPAs) with power companies.

AWS's Environmental Layer, which is dedicated to site selection, construction, operations, and the mitigation of environmental risks for data centers, also factors sustainability into such decisions. According to AWS, "When companies move to the AWS Cloud from on-premises infrastructure, they typically reduce carbon emissions by 88%." This is because their data suggests companies generally use 77% fewer servers, use 84% less power, and gain access to a 28% cleaner mix of energy – solar and wind power – compared to using on-premises infrastructure.

Amazon Solar Farm

So, how much of this commitment has AWS been able to achieve, and is it enough? In 2018, AWS said it had made a lot of progress on its sustainability commitment and had exceeded 50% renewable energy use. Currently, AWS has nine renewable energy farms in the US: six solar farms in Virginia and three wind farms in North Carolina. AWS plans to add three more renewable energy projects, one more here in the US, one in Ireland, and one in Sweden. Once completed, these projects are expected to create approximately 2.7 gigawatts of renewable energy annually.

Microsoft’s Environmental Initiatives for Data Centers

Microsoft has stated that they are committed to change and make a positive impact on the environment, by “leveraging technology to solve some of the world’s most urgent environmental issues.”

In 2016, they announced they would power their data centers with more renewable energy and set a target of 50% renewable energy by the end of 2018. According to them, they achieved that goal in 2017, earlier than expected. Looking ahead, they plan to surpass their next milestone of 70% and hope to reach 100% renewable energy by 2023. If they meet these targets, they will be far ahead of AWS.

Beyond renewable energy, Microsoft plans to use IoT, AI and blockchain technology to measure, monitor and streamline the reuse, resale, and recycling of data center assets. Additionally, Microsoft will implement new water replenishment initiatives that will utilize rainfall for non-drinking water applications in their facilities.

Google’s Focus for Efficient Data Centers 

Google claims that making data centers run as efficiently as possible is a very big deal, and that reducing energy usage has been a major focus for them over the past 10 years.

Google's innovation in the data center market came from building facilities from the ground up instead of buying existing infrastructure. According to Google, using machine learning to monitor and improve power usage effectiveness (PUE) and find new ways to save energy in their data centers enabled them to implement new cooling technologies and operational strategies that reduced energy consumption in their buildings by 30%. Additionally, they deployed custom-designed, high-performance servers that use as little energy as possible by stripping out unnecessary components, helping them reduce their footprint and add more load capacity.

By 2017, Google announced it was using 100% renewable energy by purchasing power through power purchase agreements (PPAs) with wind and solar farms and then reselling it back to the wholesale markets where its data centers are located.

The Environmental Argument

Despite the renewable energy pledges cloud providers are making, cloud services continue to grow faster than those commitments, and the energy needed to operate data centers still depends heavily on "dirty energy."

Breakthroughs in cloud sustainability, big and small, are taking place, providing the cloud with better infrastructure, more efficient servers, and reduced carbon emissions through greater access to renewable energy resources like wind and solar power.

Some may argue that time is against us, but if cloud providers continue to improve their commitments so that they keep pace with growth, then data centers – and ultimately the environment – will benefit.