Why Abstraction Layers are Key to IT Success

A recent conversation I had with Turbonomic founder and president Shmuel Kliger highlighted the importance of abstraction layers. Shmuel told me, “there’s only one reason why IT exists,” which quickly led to a discussion of cloud and abstraction.

It’s easy enough to get so caught up in the whirlwind of ever-evolving technologies that returning to a single, fundamental purpose of IT becomes quite an intriguing idea.

Why Does IT Exist?

So, why does IT exist? As Shmuel put it, the purpose of IT is to get applications the resources they need in order to perform. That’s it!

Others have said the purpose of IT is to “make productivity friction free” or “enable the business to drive new opportunities”, but it all comes down to enabling the performance of the business. 

That key step of “enablement” is where we get to the plethora of technologies – private cloud, public cloud, serverless cloud, containers, managed containers, container orchestration, IoT data, data warehouses, data lakes, the list goes on and on. There’s no lack of solutions to the many productivity and technology-related problems faced in businesses today. Really, the problem is that such a wide and constantly changing array of technologies exist, inadvertently (or perhaps advertently, depending on your view!) creating more complexity in the wake of the problems they solve.

Complexity is no stranger, but it’s no friend, either. Simplification leads to efficiencies across the board, and should be one of the primary goals IT departments seek to achieve.

How Abstraction Provides Simplification

First of all: what do we mean by abstraction? An abstraction layer is something that hides implementation details and replaces them with more easily understandable and usable functions. In other words, it makes complicated things simpler to use. These layers can include hardware, programmable logic, and software.

When you start to think about the layers between hardware and an application end user, you see that the abstraction layers also include on-premises hardware; cloud providers and IaaS; PaaS; FaaS; and containers. These middlemen start to add up, but ultimately, in order for an application to execute its underlying sequence of code, it needs CPU, memory, I/O, network, and storage. 

On this point, Shmuel said: “I always say the artifact of demand can change and the artifact of supply can change, but the problem of matching demand to supply doesn’t go away.” 

By using layers of abstraction to match this demand to supply, you remove the burden of the vast majority of decisions from the developer and the end user – in other words, simplification.
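To make this concrete, here is a minimal, hypothetical Python sketch of such a layer – not any particular vendor’s API. The application states its demand through one interface, and the details of which supply fulfills it stay hidden behind the layer.

```python
from abc import ABC, abstractmethod

class ComputeProvider(ABC):
    """Abstraction layer: callers request resources without caring where they run."""

    @abstractmethod
    def provision(self, cpus: int, memory_gb: int) -> str:
        """Return an identifier for the provisioned capacity."""

class OnPremProvider(ComputeProvider):
    def provision(self, cpus: int, memory_gb: int) -> str:
        # Implementation detail: carve a VM out of the local virtualization cluster.
        return f"onprem-vm-{cpus}cpu-{memory_gb}gb"

class PublicCloudProvider(ComputeProvider):
    def provision(self, cpus: int, memory_gb: int) -> str:
        # Implementation detail: call a cloud provider's API instead.
        return f"cloud-instance-{cpus}cpu-{memory_gb}gb"

def run_application(provider: ComputeProvider) -> None:
    # The application only states its demand; the layer matches it to supply.
    instance = provider.provision(cpus=4, memory_gb=16)
    print(f"Application deployed on {instance}")

run_application(OnPremProvider())
run_application(PublicCloudProvider())
```

The application code is identical in both cases; only the layer underneath changes.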

The Full Benefits of Operating Through Abstraction Layers

In addition to simplification, other benefits of abstraction include: 

  • Alleviating Vendor Lock-In – lock-in can occur across the board – for example, by using a layer of multi-cloud management tools, you reduce your reliance on any single cloud provider, which is important for enterprise risk mitigation strategies.
  • Reducing Complexity of Analysis – by bringing data into one place and one format, abstraction makes data analytics simpler and broader reaching.
  • Reducing Required Expertise – by rolling up multiple hardware and software problems into a single management layer, you eliminate much of the heterogeneity that demands diverse skills across your organization’s workforce, and you reduce the limits imposed by human expertise.
  • Optimizing Everything – by eliminating silos and allowing for a single point of analysis, abstraction management opens the door to resource and cost optimization.

IT organizations should attack the problems of complexity in two ways: one, by identifying the messiest and most complex areas of the technology stack and creating a plan of attack to simplify their management.

Two, by identifying “quick wins” where you can abstract away the problem with automation, achieving a better environment, automatically. We’ve got one for you: try ParkMyCloud to automatically optimize your cloud costs, saving you time, money, and effort.

8 Non-Obvious Benefits of SSO

As an enterprise or organization grows in size, the benefits of SSO grow along with it. Some of these benefits are easy to see, but there are other things that come up as side-effects that might just become your favorite features. If you’re on the fence about going all-in on Single Sign-On, then see if anything here might push you over the edge.

1. Multi-factor Authentication

One of the best ways to secure a user’s account is to make the account not strictly based on a password. Passwords can be hacked, guessed, reused, or written down on a sticky note on the user’s monitor. A huge benefit of SSO is the ease of adding MFA security to the SSO login. By adding a second factor, which is typically a constantly-rotating number or token, you vastly increase the security of the account by eliminating the immediate access a stolen password would otherwise grant. Some organizations even choose to add a third factor, which is typically something you are (like a fingerprint or eye scan) for physical access to a location.
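As an illustration of how that rotating second factor works, here is a minimal sketch using the third-party pyotp library – an assumption for demonstration purposes, since your SSO provider will ship its own MFA mechanism.

```python
import pyotp  # third-party library: pip install pyotp

# Each user enrolls once with a shared secret (usually delivered via a QR code).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)  # 6-digit codes that rotate every 30 seconds by default

# At login, the SSO service checks the password *and* the current code.
code_from_user = totp.now()  # in reality, typed from the user's authenticator app
print("Second factor accepted:", totp.verify(code_from_user))
```

Speaking of passwords…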

2. Increased Password Complexity

Forcing users to go through an SSO login instead of remembering passwords for each individual application or website means they are much more open to forming complex passwords that rotate frequently. A big complaint about passwords is having to remember a bunch of them without reusing any, so limiting the number of passwords to one means that single password can be much stronger.

3. Easier User Account Deployment

This one might seem obvious to some, but by using an SSO portal for all applications, user provisioning can be greatly accelerated and secured. The IT playbook can be codified within the SSO portal, so a new user in the accounting department can get immediate access to the same applications that the rest of the accounting department has access to. Now, when you get that inevitable surprise hire that no one told you about, you can make it happen and be the hero.
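As a rough sketch of what “codifying the playbook” can look like, here is a hypothetical department-to-application mapping in Python; real SSO portals implement this through group rules or SCIM provisioning, and the names below are illustrative only.

```python
# Hypothetical codified provisioning playbook: each department maps to the
# set of applications its members should receive via the SSO portal.
DEPARTMENT_APPS = {
    "accounting": {"erp", "expense-tracker", "payroll"},
    "engineering": {"git-hosting", "ci-cd", "issue-tracker"},
}

def provision_user(username: str, department: str) -> set:
    """Return the applications to assign when a new hire joins a department."""
    apps = DEPARTMENT_APPS.get(department, set())
    for app in apps:
        print(f"Granting {username} access to {app} via SSO")
    return apps

provision_user("new.hire", "accounting")
```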

4. Easier User Account Deletion

On the flip side of #3, sometimes the playbook for removing users after they leave the company can be quite convoluted, and there’s always that nagging feeling that you’re forgetting to change a password or remove a login from somewhere. With SSO, you just have one account to disable, which means access is removed quickly and consistently. If your admins were using SSO for administrative access, it also means fewer password changes you have to make on your critical systems.

5. Consistent Audit Logging

Another one of the benefits of SSO is consistent audit logging. Funneling all of a user’s access through the same SSO login means that tracking that user’s activity is easier than ever. In financial and regulated industries, this is a crucial piece of the puzzle, as you can make guarantees about what you are tracking. In the case of a user who is no longer employed by the enterprise, it also makes it easier to have your monitoring tools watch for attempted access by that account (though you know they can’t get in, from point #4!).
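For example, with all logins in one consolidated log, a check like the following becomes trivial. This is a hypothetical sketch – the log format and account names are assumptions, not any particular SSO product’s schema.

```python
# Hypothetical sketch: scan a consolidated SSO audit log (format assumed here)
# for login attempts by accounts that have already been disabled.
audit_log = [
    {"user": "alice", "event": "login", "success": True},
    {"user": "former.employee", "event": "login", "success": False},
]
disabled_accounts = {"former.employee"}

suspicious = [e for e in audit_log
              if e["user"] in disabled_accounts and e["event"] == "login"]
for event in suspicious:
    print(f"Alert: attempted access by disabled account {event['user']}")
```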

6. Quickly Roll Out New Applications

Tell your IT staff that you need to roll out a new application to all users without SSO and you’ll hear groans starting in record time. However, with SSO, rolling out an application is a matter of a few clicks. This means you have plenty of options ranging from a slow rollout to select groups all the way to a full deployment within a matter of minutes. This flexibility can really help maximize your users’ productivity, and will make your IT staff happy to put services into play.

7. Simplify the User Experience

If you use a lot of SaaS applications or web apps that require remembering a URL, you’re just asking for your users to need reminders of how to get into them. With an SSO portal, you can make all services and websites show up as clickable items, so users don’t need to remember the quirky spelling of that tool you bought yesterday. Users will love having everything in one place, and you’ll love not having to type anything anymore.

8. Empower Your Users

Speaking of SaaS applications, one of the main blockers for deploying an application to a wider audience is the up-front setup time and effort, which leads to IT and Operations shouldering the load of the work (since they have the access). SSO can accelerate that deployment, which means the users have more power and can directly access the tools they need. Take an example of ParkMyCloud, where instead of users asking IT to turn on their virtual machines and databases, the users can log directly into the ParkMyCloud portal (with limited access) and control the costs of their cloud environments. Users feel empowered, and IT feels relieved.

Don’t Wait To Use SSO

Whether you’ve already got something in place that you’re not fully utilizing, or you’re exploring different providers, the benefits of SSO are numerous. Small companies can quickly make use of single sign-on, while large enterprises might consider it a must-have. Either way, know that your staff and user base will love having it as their main access portal!

Can Azure Dev/Test Pricing Save You Money?

Azure Dev/Test pricing is an option that Azure offers to give developers access to the tools necessary to support ongoing development and testing on Microsoft Azure services. This should give users more control over their applications and environments, reducing waste.

Azure Dev/Test Pricing Options

With Azure Dev/Test pricing, three different options are available to users – Individual, Teams (Enterprise Agreement customers), and a second Teams option for customers without an Enterprise Agreement. These pricing options are offered solely to active Visual Studio subscribers. We’ll dig a little deeper into the pricing options and the benefits associated with each one.

Option 1: Individuals

The individual option is meant to let users explore and get familiar with Azure’s services. As you can imagine, pricing for individuals is a little different than team pricing. Individuals who are subscribed to Visual Studio are given monthly Azure credits. If this pricing option is chosen, the individual receives a separate Azure subscription with a monthly credit balance ranging from $50 to $150.

You get to decide how you use your monthly credit. There are several Azure services that you can put the credit towards. The software included in your Visual Studio subscription can be used on Azure VMs for no additional charge, and you pay a reduced rate for the VMs that you run.

These monthly credits are ideal for personal workloads, but other options are more optimal for team workloads.

Option 2: Teams – Enterprise Agreement Customers

Teams that have an Enterprise Agreement in place have access to low Dev/Test rates for multiple subscriptions. The funds on the customer’s Enterprise Agreement are used – there is no separate payment. Customers at this level receive a discount – Windows and Windows Server Virtual Machines, Cloud Services, and more are discounted off normal Enterprise Agreement rates.

Unlike the option for individuals, the Teams option for Enterprise Agreement customers allows end users to access the applications to provide feedback and run tests, though only Visual Studio subscribers can actually use the Azure resources running in this subscription.

Option 3: Teams – All Other Customers

If a user isn’t an enterprise agreement customer but wants to use Azure for their teams, they would fall under this category. This rate offers a pay-as-you-go Dev/Test pricing option. This pricing option is very appealing because it allows users to quickly get their teams up and running with dev/test environments. Users are only allowed to use these environments for development and testing.  

This is a more flexible and inclusive option: it allows multiple team members to interact with the resources rather than limiting access to just the account owner.

Can Azure Dev/Test Save You Money?

All three options allow users to use the software included in their Visual Studio subscription for dev/test purposes. For VMs run under any of these three options, users pay a discounted rate based on Linux VM pricing.
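To show what that discount means in practice, here is a back-of-the-envelope sketch in Python. The hourly rates below are made-up placeholders, not actual Azure prices – the point is simply that dev/test pricing drops the Windows license charge, so you pay roughly the Linux rate for the same VM.

```python
# Illustrative only - these hourly rates are hypothetical placeholders,
# not actual Azure prices.
payg_windows_rate = 0.30   # $/hour, hypothetical pay-as-you-go Windows VM
devtest_linux_rate = 0.20  # $/hour, hypothetical equivalent Linux-based rate
hours_per_month = 160      # a dev/test VM running only during working hours

monthly_savings = (payg_windows_rate - devtest_linux_rate) * hours_per_month
print(f"Estimated monthly savings per VM: ${monthly_savings:.2f}")
```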

Microsoft Azure users that are looking to save money on their cloud costs may want to use one of these options. These pricing options come with the benefit of no additional Microsoft software charges on Azure Virtual Machines and exclusive dev/test rates on other Azure services. 

Shadow IT: Not a Problem or Worse than Ever?

Shadow IT: you’ve probably heard of it. Also known as Stealth IT, this refers to information technology (IT) systems built and used within organizations without explicit organizational approval or deployed by departments other than the IT department. 

A recent survey of IT decision makers ranked shadow IT as the lowest priority concern for 2019 out of seven possible options. Are these folks right not to worry?  In the age of public cloud, how much of a problem is shadow IT?

What is Shadow IT?

So-called shadow IT includes any system employees are using for work that is not explicitly approved by the IT department. These unapproved systems are common, and chances are you’re using some yourself. One survey found that 86% of cloud applications used by enterprises are not explicitly approved.

A common example of shadow IT is the use of online cloud storage. With the numerous online or cloud-based storage services like Dropbox, Box, and Google Drive, users have quick and easy methods to store files online. These solutions may or may not have been approved and vetted by your IT department as “secure” and/or a “company standard”. 

Another example is personal email accounts. Companies require their employees to conduct business using the corporate email system. However, users frequently turn to their personal email accounts because they want to attach large files, connect from their personal devices, or find the provided email too slow. One in three federal employees has stated they had used personal email for work. Another survey found that 4 in 10 employees overall used personal email for work.

After consumer applications, we come to the issue of public cloud. Companies employ infrastructure standards to make support manageable throughout the organization, manage costs, and protect data security. However, employees can find these limiting. 

In our experience, the spread of technologies without approval comes down to enterprise IT not serving business needs well enough. Typically, the IT group is too slow or not responsive enough to the business users. Technology is too costly and doesn’t align well with the needs of the business. IT focuses on functional costs per unit as the value it delivers; but the business cares more about gaining quick functionality and capability to serve its needs and its customers’ needs. IT is also focused on security and risk management, and vetting of the numerous cloud-based applications takes time – assuming the application provider even makes the information available. Generally, enterprise IT simply doesn’t or cannot operate at the speed of the other business units it supports. So, business users build their own functionalities and capabilities through shadow IT purchases. 

Individuals or even whole departments may turn to public cloud providers like AWS to have testing or even production environments ready to go in less time than their own IT departments, with the flexibility to deploy what they like, on demand.

Is Shadow IT a problem?

With the advent of SaaS, IaaS and PaaS services with ‘freemium’ offerings that anyone can start using (like Slack, GitHub, Google Drive, and even AWS), shadow IT has become an adoption strategy for new technologies. Many of these services count on individuals to use and share their applications so they can grow organically within an organization. One person or department decides a tool makes their job easier and shares it with co-workers; the service grows from there, spreading from department to department and outgrowing the free tier, until IT’s hand is forced into explicit or implicit approval through support. In cases like these, shadow IT could be considered a route to innovation and official IT approval.

On the other hand, shadow IT solutions are often not in line with organizational requirements for control, documentation, security, and reliability. This can open up both security and legal risks for a company. Gartner predicted in 2016 that by 2020, a third of successful attacks experienced by enterprises would be on their shadow IT resources. It’s impossible for enterprises to secure what they’re not aware of.

There is also the issue of budgeting and spend. Research from Everest Group estimates that shadow IT comprises 50% or more of IT spending in large enterprises. While this could reduce the need for chargeback/showback processes by putting spend within individual departments, it makes technology spend far less trackable, and such fragmentation eliminates the possibility of bulk or enterprise discounting when services are purchased for the business as a whole. 

Is it a problem? 

As with many things, the answer is “it depends.” Any given shadow IT project needs to be evaluated from a risk-management perspective. What is the nature of the data exposed in the project? Is it a sales engineer’s cloud sandbox where she is getting familiar with new technology? Or is it a marketing data mining and analysis project using sensitive customer information? Either way, the reaction to a shadow IT “discovery” should not be to try to shame the users, but rather to adapt the IT processes and provide more approved and negotiated options that make users’ jobs easier. If shadow IT is particularly prevalent in your organization, you may want to provide risk management guidance and training on what is acceptable and what is not. In this way, shadow IT can be turned into a strength rather than a weakness, by outsourcing the work to the end users.

But, of course, IT cannot evaluate the risk of systems it does not know about. The hardest part is still finding those in the shadows.

Why Azure Databricks Usage is On the Rise

Have you been hearing a lot about Azure Databricks lately? We have. One of the nice things about talking with ParkMyCloud users is that we get to see trends often before they are more widely recognized within the industry. Whether it is adoption of new instances or databases, or usage of new tools and services, it’s always interesting to see change occur.

What is Databricks?

One such change over the last year or so has been an enormous increase in the use of very short-lived instances, typically running less than 60 minutes, which get spun up as part of clusters. These are, in fact, Databricks clusters being used for data analytics workloads. I had come across Databricks in relation to its unicorn status in the startup world – as of six months ago, the company was valued at close to $4B – so I guess it was only a matter of time before we began to see the fruits of their labor become popular.

The Databricks story is an interesting one, which begins at UC Berkeley with the development of a research project, Apache Spark, in 2009. Apache Spark is described as a unified analytics engine for large-scale data processing; it provides extremely rapid cluster computing technology, designed for fast computation. The team that developed Spark went on to found Databricks in 2013, and the company has since raised $500MM in funding.

The Databricks platform allows enterprises to build their data pipelines across data storage systems and prepare data sets for data scientists and engineers. To do this, Databricks offers a range of tools for building, managing and monitoring data pipelines. It enables the building of machine learning (ML) models, which have grown in parallel with the growth in big data within the enterprise. 

The product also has an interesting approach to pricing, with its own usage-based billing methodology based on DBUs. A Databricks Unit (DBU) is a unit of processing capability per hour, billed on per-second usage. This cost excludes the cost of the underlying instance (VM). The good thing is that the model is very transparent and provides a number of pricing options and tiers. Based on the tier and type of service required, prices range from $0.07/DBU for the Standard product on the Data Engineering Light tier to $0.55/DBU for the Premium product on the Data Analytics tier. Helpfully, Databricks offers online calculators for both Azure and AWS to help estimate cost including underlying infrastructure. The Azure Databricks pricing example can be seen here.
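Here is a rough worked example of how that billing model adds up, using the Premium Data Analytics rate quoted above. The DBU consumption and the underlying VM rate are placeholder assumptions, not actual prices.

```python
# Rough cost sketch using the DBU rate quoted above; the DBU consumption
# and VM rate below are placeholder assumptions, not actual Azure prices.
dbu_rate = 0.55          # $/DBU, Premium product on the Data Analytics tier
dbus_per_hour = 4        # DBUs consumed by the cluster per hour (workload-dependent)
vm_rate_per_hour = 1.00  # hypothetical cost of the underlying VMs per hour
hours = 10               # total runtime of the analytics job

total = (dbu_rate * dbus_per_hour + vm_rate_per_hour) * hours
print(f"Estimated job cost: ${total:.2f}")
```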

Databricks + Microsoft = Azure Databricks

A major breakthrough for the company was a unique partnership with Microsoft whereby its product is not just another item in the Azure Marketplace but is fully integrated into Azure, with the ability to spin up Azure Databricks in the same way you would a virtual machine. Once running, the service can scale automatically as users’ needs change, in the same way cloud scales using autoscaling groups to match supply against demand.
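As a rough sketch of what that autoscaling looks like in practice, here is the sort of cluster definition the Databricks clusters REST API accepts, as we understand it. The workspace URL, token, runtime version, and VM size are placeholders.

```python
import requests  # third-party library: pip install requests

# Sketch of an autoscaling cluster definition for the Databricks clusters
# REST API (URL, token, and values below are placeholders, not real ones).
workspace_url = "https://<your-workspace>.azuredatabricks.net"
cluster_spec = {
    "cluster_name": "analytics-autoscale",
    "spark_version": "<runtime-version>",
    "node_type_id": "<vm-size>",
    "autoscale": {"min_workers": 2, "max_workers": 8},  # supply follows demand
    "autotermination_minutes": 30,  # shut down when idle to avoid waste
}

response = requests.post(
    f"{workspace_url}/api/2.0/clusters/create",
    headers={"Authorization": "Bearer <personal-access-token>"},
    json=cluster_spec,
)
print(response.status_code)
```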

Databricks is also available on other public clouds, most notably AWS (within the Marketplace). However, the level of integration is not the same as on Azure, and the service looks much more like a standard AWS Marketplace offering.

Why More and More Companies are Using Azure Databricks

What is clear is that the use of ML and AI has progressed from experimentation to production workloads, and these workloads now run at massive scale. This has been accompanied by the emergence of a new subset of DevOps called AIOps, which makes a lot of sense given the amount of infrastructure and services that now need to be configured and deployed to run such workloads.

In a forthcoming blog we will dig a little deeper into the usage patterns for such workloads and into how the organizations running them are utilizing the public cloud for these non-production workloads.

VMware Cloud on AWS: A Hybrid Cloud Midpoint

VMware Cloud on AWS is an integrated hybrid cloud offering jointly developed by AWS and VMware. It’s targeted at enterprises looking to migrate on-premises vSphere-based workloads to the public cloud, and it provides access to native AWS services.

Overview of VMware Cloud on AWS 

VMware Cloud on AWS provides an integrated hybrid cloud environment, allowing you to maintain a consistent infrastructure between the vSphere environment in your on-prem data center and the vSphere Software-Defined Data Center (SDDC) on AWS. It also provides a unified view and resource management of your on-prem data center and VMware SDDC on AWS with a single console. 

Digital transformation continues to drive businesses to the cloud to stay competitive. But integrating public cloud with existing private cloud infrastructure requires bridging many technical processes and skill gaps between on-prem and cloud environments before the two can work together. This combined offering makes it easier for those familiar with VMware to move into the public cloud without having to rewrite applications or modify operating models.

One reason this offering is attractive to customers is that it provides optimized access to native AWS services including compute, database, analytics, IoT, AI/ML, security, mobile, resource deployment, and application services.

Another reason is that, with automatic scaling and load balancing, VMware Cloud on AWS can adapt to changing business needs across global regions. VMware also positions the offering as a cost-effective way to reduce upfront investment, with no application re-factoring or re-architecting needed when migrating. We’ll take a look at the pricing it offers for on-demand and subscription models, but first, let’s see what VMware Cloud on AWS can do for the enterprise.

Use Cases for VMware Cloud on AWS 

Accelerated and Simplified Data Center Migration

VMware Cloud on AWS claims to accelerate and simplify the migration process for businesses by reducing migration efforts and complexity between on-prem environments and the cloud. Once in the cloud, users can leverage VMware and AWS services to modernize applications and run mission-critical applications quickly with VMware availability and performance combined with the elastic scale of AWS.

Extend the Data Center to the Cloud with Your Existing Skillset

This offering lets users who are used to VMware keep a consistent and familiar environment on the cloud. Since VMware Cloud on AWS doesn’t require re-tooling or re-educating, IT teams can continue to deliver consistently on vSphere-based infrastructure and operations that are already implemented in existing on-prem data centers. 

Add a Robust Disaster Recovery Service to Your Environment

One offering available is VMware Site Recovery: on-demand disaster recovery as a service, optimized for VMware Cloud on AWS to reduce risk without the need to maintain a secondary on-prem site. You can securely replicate workloads to VMware Cloud on AWS so you can spin them up on-demand if disaster strikes. 

Flexible Dev/Test Environment

You can use VMware SDDC-consistent dev/test environments that can integrate with modern CI/CD automation tools and access native AWS services seamlessly. You can spin up an entire VMware SDDC in under two hours and scale host capacity in a few minutes.

VMware Cloud on AWS Cost Compared

So, how does the pricing shake out? Hosts can be purchased on-demand or as a 1-year or 3-year subscription. If you choose on-demand pricing, you pay for the physical host by the hour that the host is active, with no upfront cost, while a long-term subscription provides up to 50% cost savings over an equivalent period compared to on-demand pricing, though you pay the costs upfront. It’s a similar idea to AWS Reserved Instances, which may or may not be worth the cost.
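Here is a quick back-of-the-envelope comparison in Python. The hourly host rate is a placeholder, not VMware’s actual price; the 50% figure is the “up to” savings mentioned above, so treat the result as an upper bound.

```python
# Illustrative only - the hourly host rate is a hypothetical placeholder.
# This compares running one host on-demand for three years against a
# 3-year subscription at the advertised "up to 50%" savings.
on_demand_rate = 8.00            # $/hour per host, hypothetical
hours_in_3_years = 24 * 365 * 3

on_demand_total = on_demand_rate * hours_in_3_years
subscription_total = on_demand_total * 0.50  # up to 50% savings, paid upfront
print(f"On-demand (3 yrs): ${on_demand_total:,.0f}")
print(f"3-year subscription: ${subscription_total:,.0f}")
```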

Depending on the use case, pricing is similar to standard AWS pricing. See how it compares in price with standard AWS or estimate your costs with the pricing estimator. 

Top Tips for Using VMware Cloud on AWS

VMware Cloud on AWS is a good hybrid cloud option for those who want to stay in the VMware ecosystem while dipping their toe in AWS. Here are our top tips for using this offering:

  • Estimate prices in advance: One of the main reasons you want to estimate your pricing before committing to a subscription is to avoid overspend. Idle and overprovisioned resources you are not actually using result in wasted cloud spend, so make sure you’re not oversizing or spending money on cloud resources that should be turned off. 
  • Educate stakeholders on the fact that this allows you to bridge on-premises infrastructure and public cloud without disruption.
  • Consider whether jumping straight to the cloud is possible for some workloads – many companies start with dev/test. If so, you may be able to skip this intermediary step.