Can Azure Dev/Test Pricing Save You Money?

Azure Dev/Test pricing is an option Azure offers to give developers access to the tools necessary to support ongoing development and testing in Microsoft Azure services. The goal is to give users more control over their applications and environments while reducing waste.

Azure Dev/Test Pricing Options

With Azure Dev/Test pricing, three different options are available to users – Individual, Teams (Enterprise Agreement customers), and a second Teams option for customers who don’t have an Enterprise Agreement. These pricing options are offered solely to active Visual Studio subscribers. We’ll dig a little deeper into each option and the benefits associated with it.

Option 1: Individuals

The Individual option is meant to let users explore and get familiar with Azure’s services. As you can imagine, pricing for individuals is a little different than team pricing. Individual Visual Studio subscribers receive monthly Azure credits. If this pricing option is chosen, the individual gets a separate Azure subscription with a monthly credit balance ranging from $50 to $150, depending on the Visual Studio subscription level.

You get to decide how you use your monthly credit, and there are several Azure services you can put it towards. The software included in your Visual Studio subscription can be used on Azure VMs at no additional charge; you pay only a reduced rate for the VMs that you run.
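
To get a feel for how far the credit stretches, here is a minimal back-of-the-envelope sketch. The $150 figure is the top of the credit range mentioned above; the hourly dev/test VM rate is an assumed placeholder, so substitute the actual rate for your region and VM size from the Azure pricing page.

```python
# Rough estimate of how many VM-hours a monthly Azure Dev/Test credit covers.
# NOTE: the hourly rate below is an assumed placeholder, not an official Azure price.
MONTHLY_CREDIT_USD = 150.00            # top of the individual credit range
ASSUMED_DEVTEST_VM_RATE_USD = 0.05     # hypothetical per-hour dev/test rate for a small VM

vm_hours = MONTHLY_CREDIT_USD / ASSUMED_DEVTEST_VM_RATE_USD
print(f"~{vm_hours:,.0f} VM-hours per month on a ${MONTHLY_CREDIT_USD:.0f} credit")
# For comparison, a single VM running only during working hours uses roughly
# 160 hours per month, so a credit can go a long way for personal experimentation.
```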

These monthly credits are ideal for personal workloads, but the other options are better suited to team workloads.

Option 2: Teams – Enterprise Agreement Customers

Teams that have an Enterprise Agreement in place have access to low Dev/Test rates across multiple subscriptions. The funds on the customer’s Enterprise Agreement are used – there is no separate payment. Customers at this level receive a discount: Windows and Windows Server VMs, Cloud Services, and more are discounted off normal Enterprise Agreement rates.

Unlike the Individual option, the Teams option for Enterprise Agreement customers allows end users to access the application to provide feedback and run tests – though only Visual Studio subscribers can actually use the Azure resources running in this subscription.

Option 3: Teams – All Other Customers

If a user isn’t an Enterprise Agreement customer but wants to use Azure for their teams, they fall under this category, which offers pay-as-you-go Dev/Test pricing. This option is very appealing because it allows users to quickly get their teams up and running with dev/test environments. Users are only allowed to use these environments for development and testing.

This is a more flexible and inclusive option: it allows multiple team members to interact with the resources rather than limiting access to the account owner.

Can Azure Dev/Test Save You Money?

All three options allow users to use the software included in their Visual Studio subscription for dev/testing. For VMs run under any of these three options, users pay a discounted price based on the equivalent Linux VM rate.

Microsoft Azure users who are looking to save money on their cloud costs may want to use one of these options. These pricing options come with the benefit of no additional Microsoft software charges on Azure Virtual Machines and exclusive dev/test rates on other Azure services.

Shadow IT: Not a Problem or Worse than Ever?

Shadow IT: you’ve probably heard of it. Also known as stealth IT, it refers to information technology (IT) systems built and used within organizations without explicit organizational approval, or deployed by departments other than the IT department.

A recent survey of IT decision makers ranked shadow IT as the lowest priority concern for 2019 out of seven possible options. Are these folks right not to worry?  In the age of public cloud, how much of a problem is shadow IT?

What is Shadow IT?

So-called shadow IT includes any system employees are using for work that is not explicitly approved by the IT department. These unapproved systems are common, and chances are you’re using some yourself. One survey found that 86% of cloud applications used by enterprises are not explicitly approved.

A common example of shadow IT is the use of online cloud storage. With the numerous online or cloud-based storage services like Dropbox, Box, and Google Drive, users have quick and easy methods to store files online. These solutions may or may not have been approved and vetted by your IT department as “secure” and/or a “company standard”. 

Another example is personal email accounts. Companies require their employees to conduct business using the corporate email system. However, users frequently turn to their personal email accounts because they want to attach large files, want to connect from their personal devices, or find the provided email too slow. One in three federal employees has said they used personal email for work. Another survey found that 4 in 10 employees overall used personal email for work.

After consumer applications, we come to the issue of public cloud. Companies employ infrastructure standards to make support manageable throughout the organization, manage costs, and protect data security. However, employees can find these limiting. 

In our experience, the spread of technologies without approval comes down to enterprise IT not serving business needs well enough. Typically, the IT group is too slow or not responsive enough to business users. Technology is too costly and doesn’t align well with the needs of the business. IT measures the value it delivers in functional cost per unit, but the business cares more about quickly gaining the functionality and capability to serve its needs and its customers’ needs. IT is also focused on security and risk management, and vetting the numerous cloud-based applications takes time – assuming the application provider even makes the information available. Generally, enterprise IT simply doesn’t or cannot operate at the speed of the other business units it supports. So, business users build their own functionality and capability through shadow IT purchases.

Individuals or even whole departments may turn to public cloud providers like AWS to have testing or even production environments ready to go in less time than their own IT departments, with the flexibility to deploy what they like, on demand.

Is Shadow IT a problem?

With the advent of SaaS, IaaS and PaaS services with ‘freemium’ offerings that anyone can start using (like Slack, GitHub, Google Drive, and even AWS), shadow IT has become an adoption strategy for new technologies. Many of these services count on individuals to use and share their applications so they can grow organically within an organization. One person or department decides a tool makes their job easier and shares it with co-workers; the service grows from there, spreading from department to department and past the free tier, until IT’s hand is forced into explicit or implicit approval through support. In cases like these, shadow IT can be a route to innovation and official IT approval.

On the other hand, shadow IT solutions are often not in line with organizational requirements for control, documentation, security, and reliability. This can open up both security and legal risks for a company. Gartner predicted in 2016 that by 2020, a third of successful attacks experienced by enterprises would be on their shadow IT resources. It’s impossible for enterprises to secure what they’re not aware of.

There is also the issue of budgeting and spend. Research from Everest Group estimates that shadow IT comprises 50% or more of IT spending in large enterprises. While putting spend within individual departments could reduce the need for chargeback/showback processes, it makes technology spend far less trackable, and such fragmentation eliminates the bulk or enterprise discounting available when services are purchased for the business as a whole.

Is it a problem? 

As with many things, the answer is “it depends.” Any given shadow IT project needs to be evaluated from a risk-management perspective. What is the nature of the data exposed in the project? Is it a sales engineer’s cloud sandbox where she is getting familiar with new technology? Or is it a marketing data mining and analysis project using sensitive customer information? Either way, the reaction to a shadow IT “discovery” should not be to try to shame the users, but rather to adapt IT processes and provide more approved/negotiated options that make users’ jobs easier. If shadow IT is particularly prevalent in your organization, you may want to provide risk management guidance and training on what is acceptable and what is not. In this way, shadow IT can be turned from a weakness into a strength by outsourcing some of that evaluation work to the end users.

But, of course, IT cannot evaluate the risk of systems it does not know about. The hardest part is still finding those in the shadows.

Why Azure Databricks Usage is On the Rise

Have you been hearing a lot about Azure Databricks lately? We have. One of the nice things about talking with ParkMyCloud users is that we get to see trends, often before they are more widely recognized within the industry. Whether it is adoption of new instances or databases, or usage of new tools and services, it’s always interesting to see change occur.

What is Databricks?

One such change over the last year or so has been an enormous increase in the use of very short-lived instances, typically running for less than 60 minutes, which get spun up as part of clusters. These are, in fact, Databricks clusters being used to run data analytics workloads. I had come across Databricks in relation to their unicorn status in the startup world – as of six months ago they were valued at close to $4B – so I guess it was only a matter of time before we began to see the fruits of their labor become popular.

The Databricks story is an interesting one which begins at UC Berkeley with the development of a research project, Apache Spark, in 2009. Apache Spark is described as a unified analytics engine for large-scale data processing; it provides an extremely rapid cluster computing technology designed for fast computation. The team that developed Spark went on to found Databricks in 2013, and since then they have raised $500 million in funding.

The Databricks platform allows enterprises to build their data pipelines across data storage systems and prepare data sets for data scientists and engineers. To do this, Databricks offers a range of tools for building, managing and monitoring data pipelines. It also enables the building of machine learning (ML) models, whose use has grown in parallel with the growth of big data within the enterprise.
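
For context on what these clusters actually run: Databricks workloads are typically Apache Spark jobs. The PySpark sketch below shows a small, generic pipeline step; the paths and column names are hypothetical, and on Databricks a `spark` session is already provided for you.

```python
from pyspark.sql import SparkSession, functions as F

# On Databricks a SparkSession named `spark` already exists; getOrCreate() also
# works when running this locally for testing.
spark = SparkSession.builder.appName("example-pipeline").getOrCreate()

# Hypothetical input: raw event data landed in cloud storage.
events = spark.read.csv("/mnt/raw/events.csv", header=True, inferSchema=True)

# A simple preparation step: daily event counts per customer.
daily_counts = (
    events
    .withColumn("event_date", F.to_date("event_timestamp"))
    .groupBy("customer_id", "event_date")
    .count()
)

# Write the prepared data set out for data scientists and engineers to use.
daily_counts.write.mode("overwrite").parquet("/mnt/curated/daily_event_counts")
```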

The product also has an interesting approach to pricing, with its own usage-based billing methodology based on DBUs. A Databricks Unit (DBU) is a unit of processing capability per hour, billed on per-second usage, and this cost excludes the cost of the underlying instance (VM). The good thing is that the model is very transparent and provides a number of pricing options and tiers. Depending on the tier and type of service required, prices range from $0.07/DBU for the Standard product on the Data Engineering Light tier to $0.55/DBU for the Premium product on the Data Analytics tier. Helpfully, Databricks offers online calculators for both Azure and AWS to help estimate cost, including the underlying infrastructure.
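
A quick back-of-the-envelope estimate makes the model concrete. The DBU rate below is the Premium/Data Analytics figure quoted above; the DBU consumption per node and the underlying VM rate are assumptions, so use Databricks’ own calculators for real numbers.

```python
# Rough Databricks cost estimate: DBU charges plus the underlying VM charges.
# DBU rate is from the pricing tiers above; consumption and VM rate are assumptions.
DBU_RATE_USD = 0.55           # $/DBU, Premium product on the Data Analytics tier
ASSUMED_DBUS_PER_NODE_HOUR = 1.0
ASSUMED_VM_RATE_USD = 0.40    # hypothetical $/hour for the underlying instance type

nodes = 4
hours = 100                   # total cluster runtime for the month

databricks_cost = DBU_RATE_USD * ASSUMED_DBUS_PER_NODE_HOUR * nodes * hours
infrastructure_cost = ASSUMED_VM_RATE_USD * nodes * hours
print(f"DBU charges: ${databricks_cost:.2f}, VM charges: ${infrastructure_cost:.2f}, "
      f"total: ${databricks_cost + infrastructure_cost:.2f}")
```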

Databricks + Microsoft = Azure Databricks

A major breakthrough for the company was a unique partnership with Microsoft whereby their product is not just another item in the Azure Marketplace but is fully integrated into Azure, with the ability to spin up Azure Databricks in the same way you would a virtual machine. Once running, the service can scale automatically as user needs change, much as the cloud scales using autoscaling groups to match supply against demand.

Databricks is also available on other public clouds, most notably AWS (via the Marketplace). However, the level of integration is not the same as on Azure, and the service looks much more like a standard AWS Marketplace offering.

Why More and More Companies are Using Azure Databricks

What is clear is that the use of ML and AI has progressed from experimentation to real workloads, and these workloads are now at massive scale. This has also been accompanied by the emergence of a new subset of DevOps called AIOps, which makes a lot of sense given the amount of infrastructure and services that now need to be configured and deployed to run such workloads.

In a forthcoming blog we will dig a little deeper into the usage patterns for such workloads and into how the organizations running them are utilizing the public cloud for these non-production workloads.

VMware Cloud on AWS: A Hybrid Cloud Midpoint

VMware Cloud on AWS is an integrated hybrid cloud offering jointly developed by AWS and VMware. It’s targeted at enterprises looking to migrate on-premises vSphere-based workloads to the public cloud, and it provides access to native AWS services.

Overview of VMware Cloud on AWS 

VMware Cloud on AWS provides an integrated hybrid cloud environment, allowing you to maintain a consistent infrastructure between the vSphere environment in your on-prem data center and the vSphere Software-Defined Data Center (SDDC) on AWS. It also provides a unified view and resource management of your on-prem data center and VMware SDDC on AWS with a single console. 

Digital transformation continues to drive businesses to the cloud to stay competitive. But integrating public cloud with existing private cloud infrastructure means bridging many technical processes and skill differences between on-prem and cloud environments before the two can work together. This combined offering makes it easier for those familiar with VMware to move into the public cloud without having to rewrite applications or modify operating models.

One reason this offering is attractive to customers is that it provides optimized access to native AWS services including compute, database, analytics, IoT, AI/ML, security, mobile, resource deployment, and application services.

Another reason is that, with automatic scaling and load balancing, VMware Cloud on AWS can adapt to changing business needs across global regions. It is also positioned as a cost-effective solution that reduces upfront investment, with no application re-factoring or re-architecting needed when migrating. We’ll take a look at the pricing options for on-demand and subscription models, but first, let’s see what VMware Cloud on AWS can do for the enterprise.

Use Cases for VMware Cloud on AWS 

Accelerated and Simplified Data Center Migration

VMware Cloud on AWS claims to accelerate and simplify the migration process for businesses by reducing migration efforts and complexity between on-prem environments and the cloud. Once in the cloud, users can leverage VMware and AWS services to modernize applications and run mission-critical applications quickly with VMware availability and performance combined with the elastic scale of AWS.

Extend the Data Center to the Cloud with Your Existing Skillset

This offering lets users who are used to VMware keep a consistent and familiar environment on the cloud. Since VMware Cloud on AWS doesn’t require re-tooling or re-educating, IT teams can continue to deliver consistently on vSphere-based infrastructure and operations that are already implemented in existing on-prem data centers. 

Add a Robust Disaster Recovery Service to Your Environment

One offering available is VMware Site Recovery: on-demand disaster recovery as a service, optimized for VMware Cloud on AWS to reduce risk without the need to maintain a secondary on-prem site. You can securely replicate workloads to VMware Cloud on AWS so you can spin them up on-demand if disaster strikes. 

Flexible Dev/Test Environment

You can use VMware SDDC-consistent dev/test environments that can integrate with modern CI/CD automation tools and access native AWS services seamlessly. You can spin up an entire VMware SDDC in under two hours and scale host capacity in a few minutes.

VMware Cloud on AWS Cost Compared

So, how does the pricing shake out? Hosts can be purchased on-demand or as a 1-year or 3-year subscription. With on-demand pricing, you pay for the physical host by the hour that it is active, with no upfront cost. The long-term subscription is set to provide up to 50% savings over an equivalent period compared to on-demand service, but you pay the costs upfront. It’s a similar idea to AWS Reserved Instances, which may or may not be worth the cost.
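
To sanity-check whether a subscription pays off, a comparison like the sketch below helps. The hourly host rate is a hypothetical placeholder and the 50% figure is the "up to" discount mentioned above, so treat the output as illustrative only and use the pricing estimator for real numbers.

```python
# Compare on-demand vs. 1-year subscription cost for a single VMware Cloud on AWS host.
# The hourly rate is an assumed placeholder; 50% is the "up to" discount noted above.
ASSUMED_ON_DEMAND_RATE_USD = 8.00   # hypothetical $/hour per host
HOURS_PER_YEAR = 8760
SUBSCRIPTION_DISCOUNT = 0.50        # up to 50% vs. on-demand over the same period

on_demand_annual = ASSUMED_ON_DEMAND_RATE_USD * HOURS_PER_YEAR
subscription_annual = on_demand_annual * (1 - SUBSCRIPTION_DISCOUNT)

print(f"On-demand, host running 24/7: ${on_demand_annual:,.0f}/year")
print(f"1-year subscription (paid upfront): ${subscription_annual:,.0f}/year")
# If the host does not need to run 24/7, on-demand may still win despite the
# higher hourly rate, since the subscription is paid whether you use it or not.
```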

Depending on the use case, pricing is similar to standard AWS pricing. See how it compares in price with standard AWS or estimate your costs with the pricing estimator. 

Top Tips for Using VMware Cloud on AWS

VMware Cloud on AWS is a good hybrid cloud option for those who want to stay in the VMware ecosystem while dipping their toe in AWS. Here are our top tips for using this offering:

  • Estimate prices in advance: One of the main reasons you want to estimate your pricing before committing to a subscription is to avoid overspend. Idle and overprovisioned resources you are not actually using result in wasted cloud spend, so make sure you’re not oversizing or spending money on cloud resources that should be turned off. 
  • Educate stakeholders on the fact that this allows you to bridge on-premises infrastructure and public cloud without disruption.
  • Consider whether jumping straight to the cloud is possible for some workloads – many companies start with dev/test. If so, you may be able to skip this intermediary step.

AWS Postgres Pricing Comparison

Maybe you’re looking to use PostgreSQL in your AWS environment – if so, you need to make sure to evaluate pricing and compare your options before you decide. A traditional “lift and shift” of your database can cause quite a headache, so your DBA team likely wants to do it right the first time (and who doesn’t?). Let’s take a look at some of your options for running PostgreSQL databases in AWS.

Option 1: Self-Managed Postgres on EC2

If you’re currently running your databases on-premises or in a private cloud, then the simplest conversion to public cloud in AWS is to stand up an EC2 virtual machine and install the Postgres software on that VM. Since PostgreSQL is open-source, there’s no additional charge for running the software, so you’ll just be paying for the VM (along with associated costs like storage and network transfer). AWS doesn’t have custom instance sizes, but they have enough different sizes across instance families that you can find an option to match your existing server.
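
As a rough illustration of this "install it yourself" approach, here is a hedged boto3 sketch that launches an instance and installs PostgreSQL through user data. The AMI ID, key pair, and security group are placeholders, and the install commands vary by distribution, so adjust them for the AMI you actually use.

```python
import boto3

# Launch an EC2 instance and install PostgreSQL via user data.
# The AMI ID, key pair, and security group below are placeholders.
ec2 = boto3.client("ec2", region_name="us-east-1")

# Install commands vary by distro; this follows a typical yum-based flow.
user_data = """#!/bin/bash
yum install -y postgresql-server
postgresql-setup initdb
systemctl enable --now postgresql
"""

response = ec2.run_instances(
    ImageId="ami-xxxxxxxxxxxxxxxxx",      # placeholder AMI ID
    InstanceType="m5.large",              # 2 vCPUs, 8 GB memory
    KeyName="my-key-pair",                # placeholder key pair
    SecurityGroupIds=["sg-xxxxxxxx"],     # placeholder security group
    MinCount=1,
    MaxCount=1,
    UserData=user_data,
    BlockDeviceMappings=[{
        "DeviceName": "/dev/xvda",
        "Ebs": {"VolumeSize": 100, "VolumeType": "gp2"},   # 100 GB of storage
    }],
)
print(response["Instances"][0]["InstanceId"])
```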

As an example, let’s say you’d like to run an EC2 instance with 2 vCPUs, 8 GB of memory, and 100 GB of storage in the us-east-1 region. An m5.large instance would work for this, costing approximately $70 per month for compute plus $10 per month for storage. On the plus side, there are no additional costs for transferring existing data into the system (AWS charges only for outbound data transfer).
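
The arithmetic behind those numbers, assuming roughly $0.096/hour for an on-demand m5.large and $0.10/GB-month for gp2 storage in us-east-1 (check the current pricing pages, since rates change):

```python
# Approximate monthly cost of a self-managed Postgres server on EC2.
# Rates are approximate us-east-1 on-demand prices and may have changed.
M5_LARGE_HOURLY_USD = 0.096    # m5.large on-demand
GP2_PER_GB_MONTH_USD = 0.10    # gp2 EBS storage
HOURS_PER_MONTH = 730

compute = M5_LARGE_HOURLY_USD * HOURS_PER_MONTH     # ~$70
storage = GP2_PER_GB_MONTH_USD * 100                # ~$10 for 100 GB
print(f"Compute: ${compute:.0f}/mo, storage: ${storage:.0f}/mo, total: ${compute + storage:.0f}/mo")
```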

The biggest benefit of running your own EC2 server with Postgres installed is that you can make any configuration changes or run any external software you see fit. Tools like pgbouncer for connection pooling or pg_jobmon for logging within transactions require the self-management provided by this EC2 setup. Additional performance tuning based on direct access to the Postgres configuration files is also possible with this method.

Option 2: AWS Relational Database Service for Hosted Postgres Databases

If your database doesn’t require custom configuration or community projects to run, then using the AWS RDS service may work for you. This hosted service comes with some great options that you may not take the time to implement with your own installation, including:

    • Automated backups
    • Multi-AZ options (for automatic synchronization to a standby in another availability zone)
    • Behind-the-scenes patching to the latest version of Postgres
    • Monitoring via CloudWatch
    • Built-in encryption options

These features are all fantastic, but they do come at a price. The equivalent instance size, a db.m5.large with 2 vCPUs and 8 GB of memory, is approximately $130 per month for a single AZ, or $260 per month for a Multi-AZ setup.
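
For a sense of what provisioning that looks like, here is a minimal boto3 sketch creating a Multi-AZ Postgres instance with automated backups and encryption enabled. The identifier, credentials, and engine version are placeholders; in practice you would also specify networking (subnet group, security groups) for your VPC.

```python
import boto3

# Create a Multi-AZ PostgreSQL RDS instance with backups and encryption enabled.
# Identifier, credentials, and engine version are placeholders.
rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="example-postgres",
    DBInstanceClass="db.m5.large",          # 2 vCPUs, 8 GB memory
    Engine="postgres",
    EngineVersion="15.4",                   # placeholder version
    AllocatedStorage=100,                   # GB
    MasterUsername="postgres_admin",
    MasterUserPassword="replace-with-a-real-secret",
    MultiAZ=True,                           # synchronous standby in another AZ
    BackupRetentionPeriod=7,                # days of automated backups
    StorageEncrypted=True,
)
```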

Option 3: Postgres-Compatible AWS Aurora

One additional option when looking at AWS Postgres pricing is AWS Aurora. This AWS-built database is fully compatible with existing Postgres workloads, but enables auto-scaling and additional performance throughput. The price is also attractive, as a similarly sized db.r5.large in a Multi-AZ configuration would be $211 per month (plus storage and backup costs per GB). This is great if you’re all-in on AWS services, but it might not work if you need to stay on the absolute latest Postgres version (or don’t want to become dependent on AWS).

AWS Postgres Pricing Comparison

Comparing the costs of these three options gives us:

  • Self-managed EC2 – $80/month
  • Hosted RDS running Postgres in a single AZ – $130/month
  • Hosted RDS running Postgres in multiple AZs – $260/month
  • Hosted RDS running Aurora in multiple AZs – $211/month

Running an EC2 instance yourself is clearly the cheapest option from a pure cost perspective, but you had better know how to manage and tune your Postgres settings for this to work. If you want your database to “just work” without worrying about losing data or accessibility, then the Aurora option is the best value, as the additional cost covers many features you’ll wonder how you ever lived without.

What the Five Levels of Vehicle Autonomy Tell us About Adoption of Infrastructure Automation Tools

On our first day as Turbonomic employees, our team had some great discussions with CTO Charles Crouchman about Turbonomic, ParkMyCloud, and the market for infrastructure automation tools. Charles explained his vision of the future of infrastructure automation, which parallels the automation trajectory that cars and other vehicles have been following for decades. It’s a comparison that’s useful in order to understand the goals of fully-automated cloud infrastructure – and the mindset of cloud users adopting this paradigm. (And of course, given our name, we’re all in on driving analogies!) 

The Five Levels of Vehicle Autonomy

The five levels of vehicle autonomy – or six, if you include level 0 – come from the Society of Automotive Engineers.

The levels are as follows:

  • Level 0 – No Automation. The driver performs all driving tasks with no tools or assistance.
  • Level 1 – Driver Assistance. The vehicle is controlled by the driver, but the vehicle may have driver-assist features such as cruise control or an automated emergency brake.
  • Level 2 – Partial Automation or Occasional Self-Driving. The driver must remain in control and engaged in driving and monitoring, but the vehicle has combined automated functions such as acceleration and steering/lane position. 
  • Level 3 – Conditional Automation or Limited Self-Driving. The driver is a necessity, but not required to monitor the environment. The vehicle monitors the road and traffic, and informs the driver when he or she must take control. 
  • Level 4 – High Automation or Full Self-Driving Under Certain Conditions. The vehicle is capable of driving under certain conditions, such as urban ride-sharing, and the driver may have the option to control the vehicle. This is where airplanes are today – for the most part, they can fly themselves, but there’s always a human pilot present.
  • Level 5 – Full Automation or Full Self-Driving Under All Conditions. The vehicle can drive without a human driver or occupants under all conditions. This is an ideal, but right now, neither the technology nor the people are ready for this level of automation.

How These Levels Apply to Infrastructure Automation Tools

Now let’s take a look at how these levels apply to infrastructure automation tools and infrastructure:

  • Level 0 – No Automation. No tools in place.
  • Level 1 – Driver Assistance. Some level of script-based automation with limited applications, such as scripting the installation of an application so it’s just one user command, instead of hand-installing it.
  • Level 2 – Partial Automation or Occasional Self-Driving. In cloud infrastructure, this translates to having a monitoring system in place that can alert you to potential issues, but cannot take action to resolve those issues.
  • Level 3 – Conditional Automation or Limited Self-Driving. Think of this as traditional incident resolution or traditional orchestration. You can build specific automations to handle specific use cases, such as opening a ticket in a service desk, but you have to know what the event trigger is in order to automate a response (see the sketch after this list).
  • Level 4 – High Automation or Full Self-Driving Under Certain Conditions. This is the step where analytics are integrated. A level-4 automated infrastructure system uses analytics to decide what to do. A human can monitor this, but is not needed to take action.
  • Level 5 – Full Automation or Full Self-Driving Under All Conditions. Full automation. Like in the case of vehicles, both the technology and the people are a long way from this nirvana.
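
To make the jump between levels concrete, here is a minimal level-3-style sketch: a human predefines both the event trigger (a CPU threshold) and the response (opening a ticket). The helper functions are hypothetical stand-ins for your monitoring and ticketing APIs; a level-4 system would instead use analytics to decide what action to take.

```python
import time

# Level-3-style automation: both the trigger and the response are predefined.
# get_cpu_utilization() and open_ticket() are hypothetical stand-ins for your
# monitoring and service-desk integrations.
CPU_THRESHOLD_PERCENT = 90.0


def get_cpu_utilization(instance_id: str) -> float:
    """Hypothetical: fetch the latest CPU metric from your monitoring system."""
    raise NotImplementedError


def open_ticket(summary: str) -> None:
    """Hypothetical: create an incident in your service desk."""
    raise NotImplementedError


def watch(instance_id: str, interval_seconds: int = 300) -> None:
    while True:
        cpu = get_cpu_utilization(instance_id)
        if cpu > CPU_THRESHOLD_PERCENT:
            # The response is fixed in advance; it cannot adapt to events
            # nobody anticipated when the automation was written.
            open_ticket(f"{instance_id} CPU at {cpu:.0f}% "
                        f"(threshold {CPU_THRESHOLD_PERCENT:.0f}%)")
        time.sleep(interval_seconds)
```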

So where are most cloud users in the process right now? There are cloud users and organizations all over this spectrum, which makes sense when you think about vehicle automation: there are early adopters who are perfectly willing to buy a Tesla, turn on auto-pilot, and let the car drive them to their destination. But, there are also plenty of laggards who are not ready to take their hands off the wheel, or even turn on cruise control.

Most public cloud users have at least elements of levels 1 and 2 via scripts and monitoring solutions. Many are at level 3, and with the most advanced platforms, organizations reach level 4. However, there is a barrier between levels 4 and 5: you need an integrated hardware/software solution. The companies closest to full automation are the hyperscale cloud companies like Netflix, Facebook, and Google, which have essentially built their own proprietary stacks, including the hardware. This is where Kubernetes came from, as well as tools like Netflix’s Scryer.

In our conversation, Charles said: “The thing getting in the way is heterogeneity, which is to say, most customers buy their hardware from one vendor, application software from another company, storage from another, cloud capacity from another, layer third-party software applications in there, use different development tools –– and none of these things were effectively built to be automated. So right now, automation needs to happen from outside the system, with adaptors into the systems. To get to level 5, the automation needs to be baked in from the system software through the application all the way up the stack.”

What Defines Early Adopters of Infrastructure Automation Tools

While there’s a wide scale of adoption in the market right now, there are a few indicators that can predict whether an organization or an individual will be open to infrastructure automation tools. 

The first is a DevOps approach. If an organization is using DevOps, it has already agreed to let software automate deployments, which means it is accepting of automation in general – and likely to be open to more.

Another is whether resource management is centralized within the organization or not. If it is centralized, the team or department doing the management tends to be more open to automation and software solutions. If ownership is distributed throughout the organization, it’s naturally more difficult to make unified change.

Ultimately, the goal we should all be striving for is to use infrastructure automation tools to step up the levels of automated resource configuration and cost control. Through automation, we can reduce management time and room for human error to achieve optimized environments.