The Multi-Cloud Environment in 2020: Advantages and Disadvantages

Now more than ever, organizations have been implementing multi-cloud environments for their public cloud infrastructure. 

We see this not only in our customers’ environments, where a growing proportion use multiple cloud providers, but also in what industry experts and analysts report. In early June, IDG released its 8th Cloud Computing Survey results, breaking down IT environments, multi-cloud adoption, and IT budgets by the numbers. The report also goes into both the upsides and downsides of using multiple public clouds. Here’s what they found:

  • More than half (55%) of respondents use multiple public clouds: 
    • 34% use two, 10% use three, and 11% use more than three
  • 49% of respondents say they adopted a multi-cloud approach to get best-of-breed platform and service options. 
  • Other goals include:
    • Cost savings/optimization (41%)
    • Improved disaster recovery/business continuity (40%) 
    • Increased platform and service flexibility (39%).

Interestingly, within multi-cloud customers of ParkMyCloud, the majority are users of AWS and Google Cloud, or AWS and Azure; very few are users of Azure and Google Cloud. About 1% of customers have a presence in all three. 

Multi-Cloud Across Organizations

The study found that the likelihood of an organization using a multi-cloud environment depends on its size and industry. For instance, government, financial services, and manufacturing organizations are more likely to stick to one cloud due to the possible security concerns that come with using multiple clouds. IDG concluded that enterprises are more concerned with avoiding vendor lock-in, while SMBs are more likely to make cost savings/optimization a priority (which makes sense: the smaller the company, the more closely it watches its finances).

  • Fewer than half of SMBs (47%) use multiple public clouds
  • Meanwhile, 66% of enterprises use multiple clouds

What are the advantages of multi-cloud?

Since multi-cloud has been a growing trend over the last few years, we thought it’d be interesting to look at why businesses are heading in this direction with their infrastructure. The following are the advantages most commonly cited by users who adopt multi-cloud.

  • Risk Mitigation – create resilient architectures
  • Managing vendor lock-in – get price protection
  • Workload Optimization – place your workloads to optimize for cost and performance
  • Cloud providers’ unique capabilities – take advantage of offerings in AI, IoT, Machine Learning, and more

Taking advantage of features and capabilities from different cloud providers can be a great way to get the most out of what cloud services offer, but if not used well, these strategies can also result in wasted time, money, and computing capacity. The reality is that these are sometimes only perceived advantages that never come to fruition.

What are the negatives?

As companies implement their multi-cloud environments, they are finding downsides. A staggering 94% of respondents – regardless of the number of clouds they use or the size of their organization – find it hard to take full advantage of their public cloud resources. The survey cited controlling cloud costs as the biggest challenge – users think they’ll be saving money but end up spending more. When organizations migrate to multi-cloud they expect to cut costs, but what they typically fail to account for is the growth in cloud services and data, as well as the lack of visibility across providers. For many organizations we talk to, multiple clouds are in use because different groups within the organization picked different cloud providers, which makes centralized control and management a challenge. Addressing these issues introduces yet another cost: the cloud management tools needed to regain control.

Some other challenges companies using multiple public clouds run into are:

  • Data privacy and security issues (38%)
  • Securing and protecting cloud resources (31%)
  • Governance/ compliance concerns (30%)
  • Lack of security skills/expertise (30%)

Configuring and managing different CSPs requires deep expertise, which makes finding employees with the experience and capabilities to manage multiple clouds a pressing need. More staff are needed to manage multi-cloud environments confidently, in a way that is secure and highly available. A lack of skills and expertise in managing multiple clouds can become a major issue for organizations, as their cloud environments won’t be managed efficiently. To address this, organizations are allocating a sizable portion of their IT budget to cloud-specific roles, with the hope that adding specialization in this area will improve efficiency.

Multi-Cloud Statistics: Use is Still Growing

The statistics on cloud computing show that companies not only use multiple clouds today, but they have plans to expand multi-cloud investments:

  • In a survey of 551 IT decision-makers involved in the purchasing process for cloud computing, 55% of organizations currently use multiple public clouds. 
  • Organizations using multiple cloud platforms say they will allocate more (35%) of their IT budget to cloud computing.
  • SMBs plan to allocate slightly more of their budgets to cloud computing (33%) than enterprises do
    • While this seems significant, measured in dollars enterprises plan a much larger cloud spend than SMBs: $158 million compared to $11.5 million.

The Future of Managing Cloud Costs for Multi-Cloud

As cloud costs remain a primary concern, especially for SMBs, it’s important organizations keep up with the latest cloud usage trends to manage spend and prevent waste. To keep costs in check in a multi-cloud environment, you can make things easier for your IT department by implementing an optimization tool that tracks usage and spend across different cloud providers.

For more insight into the rise of multi-cloud and hybrid cloud strategies, and their impact on cloud spend, check out the drain of wasted spend on IT budgets here.

Microsoft Azure VM Types Comparison

There are a wide range of Microsoft Azure VM types that are optimized to meet various needs. Machine types are specialized, and vary by virtual CPU (vCPU), disk capability, and memory size, offering a number of options to match any workload.

With so many options available, finding the right machine type for your workload can be confusing – which is why we’ve created this overview of Azure VM types (as we’ve done with EC2 instance types and Google Cloud machine types). Note that while AWS EC2 instance types have names associated with their purpose, Azure instance type names are simply lettered in a series from A to N. The descriptions below are a brief and easy reference, but remember that finding the right machine type for your workload will always depend on your needs.
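
If you’d like to browse the raw size list yourself before choosing, you can pull it programmatically. Below is a minimal Python sketch using the azure-identity and azure-mgmt-compute packages; the subscription ID and region are placeholders you would replace with your own.

```python
# A minimal sketch: list the VM sizes available in a region so you can
# compare vCPU, memory, and temp disk before picking a type.
# Assumes azure-identity and azure-mgmt-compute; the subscription ID is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<your-subscription-id>"   # placeholder
client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

# List every size offered in East US and print the basic specs.
for size in client.virtual_machine_sizes.list(location="eastus"):
    print(f"{size.name}: {size.number_of_cores} vCPU, "
          f"{size.memory_in_mb / 1024:.1f} GiB RAM, "
          f"{size.resource_disk_size_in_mb / 1024:.0f} GiB temp disk")
```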

General Purpose

General purpose VMs have a balanced CPU and memory, making them a great option for testing and development, smaller to medium databases, and web servers with low to medium traffic:

DCsv2-series

The newest size in the DC-series, the DCsv2 stands out for the confidentiality it provides for data and code while they are being processed in the cloud. These machines are backed by Intel SGX technology and the latest generation Intel Xeon E-2288G processor, and can reach up to 5.0 GHz.

Av2 Series

A-series VMs have a CPU-to-memory ratio that works best for entry-level workloads, like development and testing. Sizes are throttled to deliver consistent processor performance for the instance. The Av2-series can be deployed on a number of hardware types and processors; to figure out which hardware a size is deployed on, query the virtual hardware from within the VM.

Dv2 and Dsv2-series

Dv2 VMs boast powerful CPUs – roughly 35% faster than D-series VMs – and optimized memory, great for production workloads. They have the same memory and disk configurations as the D-series, and are based on 2.1 GHz, 2.3 GHz, or 2.4 GHz Intel Xeon processors with Intel Turbo Boost Technology 2.0.

Dsv2-series sizes run on the same Dv2 processors with Intel Turbo Boost Technology 2.0 and also use premium storage.

Dv3-series 

With expanded memory (from ~3.5 GiB/vCPU to 4 GiB/vCPU) and adjustments to disk and network limits, the Dv3-series Azure VM type offers the most value for general purpose workloads. The sizes in this series offer a combination of memory, temporary storage, and vCPU that is a good fit for enterprise applications, relational databases, in-memory caching, and analytics. It’s important to note that the Dv3-series no longer has the high-memory VM sizes of the D/Dv2-series.

Dsv3-series

This series’ sizes feature premium storage disks and run on 2.1, 2.3, or 2.4 GHz Intel Xeon processors with Intel Turbo Boost Technology 2.0, making the Dsv3-series well suited for most production workloads.

B-series

Similar to the AWS t-series instance family, B-series burstable VMs are ideal for workloads that do not rely on full and continuous CPU performance. Use cases for this series’ VM types include small databases, dev and test environments, low-traffic web servers, microservices, and more. With the B-series, customers can purchase a VM size that builds up credits while it runs below its base performance, and the accumulated credits can then be spent to burst above the base performance when higher CPU performance is needed.

Dav4 and Dasv4-series 

The Dav4-series is one of the new size families; it utilizes a 2.35 GHz AMD EPYC™ 7452 processor that can reach a max frequency of 3.35 GHz. The combination of memory, temporary storage, and vCPU makes these VMs suitable for most production workloads. If you need premium SSD, the Dasv4-series sizes are the best option.

Ddv4 and Ddsv4-series  

Similar to other VMs in the D-series, these sizes utilize a combination of memory, temporary disk storage and vCPU that provides a better value for most general-purpose workloads. These new VM sizes have faster and 50% larger local storage (up to 2,400 GiB) and are designed for applications that benefit from low latency, high-speed local storage. The Ddv4-series processors run in a hyper-threaded configuration making them a great option for enterprise-grade applications, relational databases, in-memory caching, and analytics.

The major difference between the two series is that the Ddsv4-series supports Premium Storage and premium Storage caching, while Ddv4-series does not.

Dv4 and Dsv4-series

Both of these new series are currently in preview. The Dv4-series is optimal for general purpose workloads since they run on processors in a hyper-threaded configuration. It features a sustained all core Turbo clock speed of 3.4 GHz.

The Dsv4-series runs on the same processors as the Dv4-series, and even has the same features. The major difference between the two series is that the Dsv4-series supports Premium Storage and premium Storage caching, while Dv4-series does not.

Compute Optimized

Compute optimized Azure VM types offer a high CPU-to-memory ratio. They’re suitable for medium traffic web servers, network appliances, batch processing, and application servers.

Fsv2-series

With a base core frequency of 3.4 GHz and a maximum single-core turbo frequency of 3.7 GHz, Fsv2 series VM types offer up to twice the performance boost for vector processing workloads. Not only do they offer great speed for any workload, the Fsv2 also offers the best value for its price based on the ratio of Azure Compute Unit (ACU) per vCPU.

Memory Optimized

Memory optimized VM types are higher in memory as opposed to CPU, and best suited for relational database services, analytics, and larger caches.

M-Series 

Enterprise applications and large databases will benefit most from the M-series, which offers very large memory (up to 3.8 TiB) and high vCPU counts (up to 128).

Mv2-series

The VMs in this series offer the highest vCPU count (up to 416 vCPUs) and largest memory (up to 11.4 TiB) of any Azure VM. Because of this, the Mv2-series is ideal for extremely large databases and other applications that benefit from high vCPU counts and large amounts of memory. It runs on Intel® Xeon® processors with an all-core base frequency of 2.5 GHz and a max turbo frequency of 3.8 GHz.

Dv2 and DSv2-series 11-15

The Dv2 and DSv2-series 11-15 follow in the footsteps of the original D-series, the main difference being a more powerful CPU. For enterprise applications that require fast vCPUs, reliable temporary storage, and more memory, the Dv2 and DSv2-series fit the bill. The Dv2-series offers speed and power with a CPU about 35% faster than that of the D-series. Based on 2.1, 2.3, and 2.4 GHz Intel Xeon® processors with Intel Turbo Boost Technology 2.0, they can reach up to 3.1 GHz. The Dv2-series also has the same memory and disk configurations as the D-series.

Ev3-series and Esv3-series

The Ev3-series follows in the footsteps of the high-memory VM sizes originating from the D/Dv2 families. This Azure VM type provides excellent value for general purpose workloads, boasting expanded memory (from 7 GiB/vCPU to 8 GiB/vCPU) with adjustments to disk and network limits on a per-core basis, in alignment with the move to hyper-threading.

The Esv3-series is the optimal choice for memory-intensive enterprise applications. If you want premium storage disks, the Esv3-series sizes are the ones to pick: the key difference between the two series is that the Esv3-series supports Premium Storage and Premium Storage caching, while the Ev3-series does not.

Eav4 and Easv4-series

The Eav4 and Easv4-series run their processors in a multi-threaded configuration, increasing the options for running memory-optimized workloads. The Eav4-series and Easv4-series have the same memory and disk configurations as the Ev3 and Esv3-series, and the Eav4-series sizes are ideal for memory-intensive enterprise applications.

Use the Easv4-series sizes if you need premium SSD; they can achieve a boosted maximum frequency of 3.35 GHz.

Edv4 and Edsv4-series

High vCPU counts and large amounts of memory make the Edv4 and Edsv4-series the ideal option for extremely large databases and other applications that benefit from these features. They feature a sustained all-core turbo clock speed of 3.4 GHz and many new technology features. Compared to the Ev3/Esv3 sizes on Gen2 VMs, these new VM sizes have 50% larger local storage, as well as better local disk IOPS for both reads and writes.

The Edv4 and Edsv4 virtual machine sizes feature up to 504 GiB of RAM, in addition to fast and large local SSD storage (up to 2,400 GiB). These virtual machines are ideal for memory-intensive enterprise applications and applications that benefit from low latency, high-speed local storage. You can attach Standard SSDs and Standard HDDs disk storage to the Edv4 VMs.

Ev4 and Esv4-series

These new sizes are currently in public preview only – you can sign up to access them here.

The Ev4 and Esv4-series are ideal for various memory-intensive enterprise applications. They run in a hyper-threaded configuration on 2nd Generation Intel® Xeon® processors and feature up to 504 GiB of RAM.

Storage Optimized

For big data, data warehousing, large transactional databases, SQL, and NoSQL databases, storage optimized VMs are the best type for their high disk throughput and IO. 

Lsv2-series

Lsv2-series VMs provide high-throughput, low-latency, directly mapped local NVMe storage, making them ideal for NoSQL stores such as Apache Cassandra and MongoDB. The Lsv2-series comes in sizes from 8 to 80 vCPUs, with 8 GiB of memory per vCPU. VMs in this series are optimized to use the local disk on the node attached directly to the VM.

GPU

GPU VM types, specialized with single or multiple NVIDIA GPUs, work best for video editing and heavy graphics rendering – as in compute-intensive, graphics-intensive, and visualization workloads.

NC, NCv2 and NCv3-series 

The sizes in these series are optimized for compute-intensive and network-intensive applications and algorithms. The NCv2-series is powered by NVIDIA Tesla P100 GPUs and provides more than double the computational performance of the NC-series. The NCv3-series is powered by NVIDIA Tesla V100 GPUs and can provide 1.5x the computational performance of the NCv2-series. 

NV and NVv3-series

These sizes were made and optimized for remote visualization, streaming, gaming, encoding, and VDI scenarios. These VMs are targeted for GPU accelerated graphics applications and virtual desktops where customers want to visualize their data, simulate results to view, work on CAD, or render and stream content. 

ND and NDv2-series

These series are focused on training and inference scenarios for deep learning. The ND-series VMs are a new addition to the GPU family and offer excellent performance for training and inference making them ideal for Deep Learning workloads and AI. The ND-series is also enabled to fit much larger neural net models thanks to the much larger GPU memory size (24 GB).

The NDv2-series is another new addition to the GPU family and with its excellent performance, it meets the needs of the most demanding machine learning, GPU-accelerated AI, HPC workloads and simulation.

NVv4-series

The NVv4-series VMs are designed and optimized for remote visualization and VDI. With partitioned GPUs, NVv4 offers right-sized VMs for workloads that require smaller GPU resources.

High Performance Compute

For the fastest and most powerful virtual machines, high performance compute is the best choice with optional high-throughput network interfaces (RDMA).

H-series

The H-series VMs were built for handling batch workloads, analytics, molecular modeling, and fluid dynamics. These 8 or 16 vCPU VMs are built on Intel Haswell E5-2667 v3 processor technology, with up to 14 GB of RAM per CPU core and no hyper-threading.

Besides sizable CPU power, the H-series provides options for low latency RDMA networking with FDR InfiniBand and different memory configurations for supporting memory intensive compute requirements.

HB-series

Applications driven by memory bandwidth, such as explicit finite element analysis, fluid dynamics, and weather modeling are the best fit for HB-series VMs. These VMs feature 4 GB of RAM per CPU core and no simultaneous multithreading. 

HC-series

For applications driven by dense computation, like implicit finite element analysis, molecular dynamics, and computational chemistry HC-series VMs are the best fit. HC VMs feature 8 GB of RAM per CPU core, and no hyperthreading.

HBv2-series

Similar to other VMs in the High Performance compute family, HBv2-series VMs are optimized for applications driven by memory bandwidth, such as fluid dynamics, finite element analysis, and reservoir simulation. HBv2 VMs feature 4 GB of RAM per CPU core, and no simultaneous multithreading. These VMs enhance application performance, scalability, and consistency.

What Azure VM type is right for your workload?

The good news is that with this many VM types, you’re bound to find the right one to meet your computing needs – as long as you know what those needs are. With good insight into your workload, usage trends, and business needs, you’ll be able to find the Azure VM type that’s right for your workloads.

Spot Instances Can Save Money – But Are Cloud Customers Too Scared to Use Them?

Spot instances and similar “spare capacity” models are frequently cited as one of the top ways to save money on public cloud. However, we’ve noticed that fewer cloud customers are taking advantage of this discounted capacity than you might expect.

We say “spot instances” in this article for simplicity, but each cloud provider has their own name for the sale of discounted spare capacity – AWS’s spot instances, Azure’s spot VMs and Google Cloud’s preemptible VMs.

Spot instances are a purchasing option that allows users to take advantage of spare capacity at a low price, with the possibility that it could be reclaimed for other workloads on brief notice. In AWS, for example, the customer makes a Spot Request that essentially includes a “maximum bid” for how much they are willing to pay for a spot instance. If the current spot price is at or below this bid price, then the spot instance is started. When demand for cloud resources increases, the spot price increases, and shortly after it exceeds the customer’s bid price, the instance is terminated. This lets cloud vendors monetize unused capacity and customers consume it at a significantly lower cost, but it requires that workloads be designed to be resilient to interruptions. Could this requirement be driving users away?
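
To make that flow concrete, here is a hedged sketch using Python and boto3: it checks recent spot prices for an instance type and then submits a Spot Request with a maximum price. The region, instance type, AMI ID, and maximum price are illustrative placeholders, not recommendations.

```python
# A minimal sketch of the AWS flow described above: check recent spot
# prices, then submit a Spot Request with the maximum price we're willing to pay.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Look at recent spot prices for an instance type to inform the bid.
history = ec2.describe_spot_price_history(
    InstanceTypes=["m5.large"],
    ProductDescriptions=["Linux/UNIX"],
    MaxResults=5,
)
for price in history["SpotPriceHistory"]:
    print(price["AvailabilityZone"], price["SpotPrice"])

# Request one spot instance; if the spot price stays at or below
# SpotPrice, the instance launches (and may later be interrupted).
response = ec2.request_spot_instances(
    InstanceCount=1,
    LaunchSpecification={
        "ImageId": "ami-0123456789abcdef0",   # placeholder AMI
        "InstanceType": "m5.large",
    },
    SpotPrice="0.04",                          # placeholder maximum price
)
print(response["SpotInstanceRequests"][0]["SpotInstanceRequestId"])
```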

Spot Instances in Each Cloud

Variations of spot instances are offered across different cloud providers. AWS has Spot Instances while Google Cloud offers preemptible VMs and as of March of this year, Microsoft Azure announced an even more direct equivalent to Spot Instances, called Azure Spot Virtual Machines. 

Spot VMs have replaced the preview of Azure’s low-priority VMs on scale sets – all eligible low-priority VMs on scale sets have automatically been transitioned to Spot VMs. Azure Spot VMs provide access to unused Azure compute capacity at deep discounts. Spot VMs can be evicted at any time if Azure needs capacity. 

AWS spot instances have variable pricing. Azure Spot VMs offer the same characteristics as a pay-as-you-go virtual machine, the differences being pricing and evictions. Google Preemptible VMs offer a fixed discounting structure. Google’s offering is a bit more flexible, with no limitations on the instance types. Preemptible VMs are designed to be a low-cost, short-duration option for batch jobs and fault-tolerant workloads.
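
To show how the Azure option is configured, here is a hedged Python sketch of the spot-specific fields you would add to a VM definition when using the azure-mgmt-compute SDK. Everything else about the VM (image, size, networking) is configured as usual and omitted here; the values shown are illustrative.

```python
# A hedged sketch of the spot-specific fields in an Azure VM definition.
# These three settings are what distinguish a Spot VM from a regular
# pay-as-you-go VM; the rest of the VM configuration would be filled in as usual.
spot_settings = {
    "priority": "Spot",              # request spare capacity instead of on-demand
    "eviction_policy": "Deallocate", # or "Delete" when Azure reclaims capacity
    "billing_profile": {
        "max_price": -1,             # -1 = pay up to the on-demand price; never evicted for price
    },
}

# These fields would be merged into the parameters passed to
# ComputeManagementClient.virtual_machines.begin_create_or_update(...).
print(spot_settings)
```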

Adoption of Spot Instances 

Our research indicates that less than 20% of cloud users use spot instances on a regular basis, despite spot being on nearly every list of ways to reduce costs (including our own).

While applications can be built to withstand interruption, specific concerns remain, such as loss of log data, exhausting capacity and fluctuation in the spot market price. 

In AWS, a market issue occurs when the price of a spot instance rises beyond its typical historical price. This can make it difficult for a customer to judge the best bid price to use. If the spot price is the same as the on-demand price, it defeats the purpose of using the spot instance. AWS addresses this problem with the notion of a Spot Fleet, in which you specify a certain capacity of instances you want to maintain. If the spot instances are terminated, the Spot Fleet will automatically backfill the fleet with on-demand instances, allowing you to take advantage of whatever discounts you can while maintaining your operations.

Another potential issue is that capacity for an instance type could be completely exhausted in a given zone, which prevents applications from running if they depend on a specific instance type or zone. Not to turn this into a commercial for Spot Fleet, but it addresses this as well by allowing you to specify a range of instance types that would be acceptable for your workload, as sketched below.
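
Here is a hedged boto3 sketch of a Spot Fleet request that spreads across several acceptable instance types and keeps part of the capacity on-demand as a fallback. The IAM fleet role ARN, AMI ID, instance types, and capacities are placeholders.

```python
# A hedged sketch of a Spot Fleet request that diversifies across instance
# types and keeps a portion of capacity on-demand as a fallback.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

launch_specs = [
    {"ImageId": "ami-0123456789abcdef0", "InstanceType": itype}  # placeholder AMI
    for itype in ["m5.large", "m5a.large", "m4.large"]           # acceptable types
]

response = ec2.request_spot_fleet(
    SpotFleetRequestConfig={
        "IamFleetRole": "arn:aws:iam::123456789012:role/aws-ec2-spot-fleet-tagging-role",  # placeholder
        "AllocationStrategy": "lowestPrice",   # or "capacityOptimized"
        "TargetCapacity": 10,                  # total capacity to maintain
        "OnDemandTargetCapacity": 2,           # portion always kept on-demand
        "LaunchSpecifications": launch_specs,
    }
)
print(response["SpotFleetRequestId"])
```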

Is “Eviction” Driving People Away?

There is one main caveat when it comes to spot instances – they are interruptible. All three major cloud providers have mechanisms in place for these spare capacity resources to be interrupted, related to changes in capacity availability and/or changes in pricing.

This means workloads can be “evicted” from a spot instance or VM. Essentially, if a cloud provider needs the resource at any given time, your workloads can be kicked off. You are notified when an AWS spot instance is going to be evicted: AWS emits an event two minutes prior to the actual interruption. In Azure, you can opt to receive notifications that tell you when your VM is going to be evicted, but you will have only 30 seconds to finish any jobs and perform shutdown tasks prior to the eviction, which makes it very hard to manage. Google Cloud also gives you 30 seconds to shut down your instances when you’re preempted so you can save your work for later, and Google always terminates preemptible instances after 24 hours of running. All of this means your application must be designed to be interruptible, and should expect interruption to happen regularly – difficult for some applications, but less so for those that are largely stateless or normally process work in small chunks.
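
If you want to handle eviction gracefully, each provider exposes the notice through its instance metadata service. Below is a hedged Python sketch of polling those endpoints from inside a VM; how you checkpoint or drain work once a notice appears is up to your application.

```python
# A hedged sketch of how a workload can watch for an interruption notice
# from inside the instance. Each provider exposes a link-local metadata endpoint.
import requests

def aws_interruption_pending() -> bool:
    # Returns 404 until AWS schedules an interruption (~2 minutes ahead).
    r = requests.get(
        "http://169.254.169.254/latest/meta-data/spot/instance-action", timeout=2)
    return r.status_code == 200

def azure_eviction_pending() -> bool:
    # Azure Scheduled Events lists a "Preempt" event shortly before eviction.
    r = requests.get(
        "http://169.254.169.254/metadata/scheduledevents?api-version=2020-07-01",
        headers={"Metadata": "true"}, timeout=2)
    events = r.json().get("Events", [])
    return any(e.get("EventType") == "Preempt" for e in events)

def gcp_preempted() -> bool:
    # The "preempted" flag flips to TRUE when the VM is being preempted.
    r = requests.get(
        "http://metadata.google.internal/computeMetadata/v1/instance/preempted",
        headers={"Metadata-Flavor": "Google"}, timeout=2)
    return r.text.strip() == "TRUE"
```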

Companies such as Spot – recently acquired by NetApp (congrats!) – help in this regard by safely moving the workload to another available spot instance automatically.

Our research has indicated that fewer than one-quarter of users agree that their spot eviction rate was too low to be a concern – which means for most, eviction rate is a concern. Of course, it’s certainly possible to build applications to be resilient to eviction. For instance, applications can make use of many instance types in order to tolerate market fluctuations and make appropriate bids for each type. 

AWS also offers an automatic scaling feature that can increase or decrease the target capacity of your Spot Fleet based on demand. The goal is to let users scale in conservatively in order to protect application availability.

Early Adopters of Spot and Other Innovations May be One and the Same

People who are hesitant to build for spot are more likely to use regular VMs, perhaps with Reserved Instances for savings. Meanwhile, people open to the idea of spot instances are often the same early adopters who embrace other technologies, like serverless, and may no longer have a need for spot.

For the right architecture, spot instances can provide significant savings. It’s a matter of whether you want to bother.

10 Azure Best Practices for 2020

There is a vast range of available resources that give advice on Azure best practices. Based on recent recommendations given by experts in the field, we’ve put together this list of 10 best practices for 2020 to help you fully utilize and optimize your Azure environment.

1. Ensure Your Azure VMs are the Correct Size

  • “There are default VM sizes depending on the image that you choose and the affected Region so be careful and check if the proposed one is really what you need. The majority of the times you can reduce the size to something that fits you better and at a lower cost.”

2.  If you use the Azure Cost Management Tool, Know the Limitations

  • Azure Cost Management can be a useful tool in your arsenal: “Listed as “cost management + billing” in the Azure portal, the Azure Cost Management service’s cost analysis feature offers comprehensive insights into the costs incurred by your Azure resources—starting from the subscription level. This can then be drilled down to specific resource groups and/or resources. The service also provides an overview of current costs as well as a monthly forecast based on the current consumption rate.”
  • However, know that visibility and action are not equivalent: “Even though [cloud efficiency] is a core tenant of Microsoft Azure Cost Management, optimization is one of the weakest features of the product. The essence of the documentation around this is that you should manually eliminate waste, without going into much detail about what is being wasted or how to eliminate it. Plus, this expects manual intervention and review of each resource without giving direct actions to eliminate the waste.”

3.  Approach Role-Based Access Control (RBAC) Systematically

  • “Using Azure RBAC, you can segregate duties within your team and grant only the amount of access to users that they need to perform their jobs. Instead of giving everybody unrestricted permissions in your Azure subscription or resources, you can allow only certain actions at a particular scope.”
  • “Even with these specific pre-defined roles, the principle of least privilege shows that you’re almost always giving more access than is truly needed. For even more granular permissions, you can create Azure custom roles and list specific commands that can be run.”
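
To illustrate the custom-role idea quoted above, here is a hedged sketch of a least-privilege role definition expressed as a Python dict (the same shape as the JSON Azure expects). The actions listed and the role name are examples, and the subscription ID is a placeholder.

```python
# A hedged sketch of a least-privilege custom role definition. This dict
# mirrors the JSON Azure expects; the action list is an example, not a recommendation.
custom_role = {
    "Name": "VM Operator (custom)",
    "Description": "Can start, restart, and deallocate VMs but not create or delete them.",
    "Actions": [
        "Microsoft.Compute/virtualMachines/read",
        "Microsoft.Compute/virtualMachines/start/action",
        "Microsoft.Compute/virtualMachines/restart/action",
        "Microsoft.Compute/virtualMachines/deallocate/action",
    ],
    "NotActions": [],
    "AssignableScopes": ["/subscriptions/<your-subscription-id>"],   # placeholder
}

# The definition can then be registered via the Azure portal, CLI, or the
# authorization management SDK before being assigned to users or groups.
print(custom_role)
```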

4. Ensure you aren’t paying for orphaned disks

  • “When you delete a virtual machine in Azure, by default, in order to protect against data loss, any disks that are attached to the VM aren’t deleted. One thing to remember is that after a VM is deleted, you will continue to pay for these “orphaned” unattached disks. In order to minimise storage costs, make sure that you identify and remove any orphaned disk resource.”
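
Here’s a minimal Python sketch, using the azure-identity and azure-mgmt-compute packages, of how you might find unattached disks that are still incurring charges. The subscription ID is a placeholder, and you should review the results before deleting anything.

```python
# A minimal sketch for finding unattached ("orphaned") managed disks that are still billing.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<your-subscription-id>"   # placeholder
client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

for disk in client.disks.list():
    # A disk with no managed_by reference is not attached to any VM.
    if disk.managed_by is None:
        print(f"Orphaned: {disk.name} ({disk.disk_size_gb} GiB, state={disk.disk_state})")
```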

5. Tag Everything

  • “Centralize tagging across your Azure environments. This enables you to discover, group and consistently tag cloud resources across your cloud providers – manually or through automated tag rules. Maintaining a consistent tagging structure allows you to see resource information from all cloud providers for enhanced governance, cost analytics and chargeback.”
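
As a starting point for enforcing a consistent structure, here’s a minimal Python sketch that audits an Azure subscription for resources missing required tags. The tag keys shown are an assumed convention, not a standard, and the subscription ID is a placeholder.

```python
# A minimal sketch of a tag audit: walk every resource in a subscription
# and flag anything missing the tags your governance model requires.
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

subscription_id = "<your-subscription-id>"                 # placeholder
required_tags = {"owner", "environment", "cost-center"}    # example convention

client = ResourceManagementClient(DefaultAzureCredential(), subscription_id)
for resource in client.resources.list():
    tags = set((resource.tags or {}).keys())
    missing = required_tags - tags
    if missing:
        print(f"{resource.id} is missing tags: {', '.join(sorted(missing))}")
```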

6. Decipher how and when to utilize the Azure logging services

  • “Logs are a major factor when it comes to successful cloud management. Azure users can access a variety of native logging services to maintain reliable and secure operations. These logging options can be broken down into three overarching types, as well as eight log categories. The granular data collected by Azure logs enables enterprises to monitor resources and helps identify potential system breaches.”

7. Know Your Serverless Options 

  • “Serverless computing provides a layer of abstraction that offloads maintenance of the underlying infrastructure to the cloud provider. That’s a form of workload automation in and of itself, but IT teams can take it a step further with the right tools.
  • Developers and admins can use a range of serverless offerings in Azure, but they need to understand how they want their workflow to operate in order to select the right services. To start, determine whether your application has its own logic to direct events and triggers, or whether that orchestration is defined by something else.”

8. API Authentication

  • “APIs handle an immense amount of data, which is why it’s imperative to invest in API security. Think of authentication as an identification card that proves you are who you say you are. Although Azure Database provides a range of security features, end users are required to practice additional security measures. For example, you must manage strong credentials yourself. Active Directory is the authentication solution of choice for enterprises around the world, and the Azure-hosted version only adds to the attraction as companies continue migrating to the cloud.”

9. Ensure the VM you need is available in your location

  • “Have the following 3 things in mind when choosing the location for your virtual machine:
  1. Place your VMs in a region close as possible to your users to improve performance and to meet any legal, compliance, or tax requirements.
  2. Each region has different hardware available and some configurations are not available in all regions, so this can limit your available options.
  3. There are price differences between locations, but if you choose to place your VM in a cheaper region it may impact negatively the performance if the region is far from your users (see point 1).”
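
A quick way to check the second point programmatically: the sketch below (Python, azure-mgmt-compute) tests whether a given size is offered in each candidate region. The size name, regions, and subscription ID are placeholders.

```python
# A hedged sketch for checking whether a specific VM size is offered in a
# candidate region before you commit to it.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

subscription_id = "<your-subscription-id>"   # placeholder
client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

wanted_size = "Standard_D4s_v3"              # example size
for region in ["eastus", "westeurope", "southeastasia"]:
    names = {s.name for s in client.virtual_machine_sizes.list(location=region)}
    print(f"{region}: {'available' if wanted_size in names else 'NOT available'}")
```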

10. Multi-Factor Authentication for all standard users

  • “Businesses that don’t add extra layers of access protection – such as two-step authentication – are more susceptible to credential theft. Credential thefts are usually achieved by phishing or by planting key-logging malware on a user’s device; and it only takes one compromised credential for a cybercriminal to potentially gain access to the whole network.
  • Enforcing multi-factor authentication for all users is one of the easiest – yet most effective – of the seven Azure security best practices, as it can be done via Azure Active Directory within a few minutes.”

 

You can use these best practices as a reference to help you ensure you are fully optimizing all available features in your Azure environment. Have any Azure best practices you’ve learned recently? Let us know in the comments below!

Further Reading:

Google Cloud Best Practices: 2020 Roundup

15 AWS Best Practices for 2019

16 Tips to Manage Cloud Costs

The Three Core Components of Microsoft Azure Cost Management

Google Cloud Best Practices: 2020 Roundup

There is an abundance of great resources that cover Google Cloud best practices. To give a little more insight into the most recent guidance, here’s a list of 17 recent best practices, with tips and tricks to help you fully utilize and optimize your Google Cloud environment.

Data Management

1. Ensure You Have Total Visibility of Data

    • “Without a holistic view of data and its sources, it can be difficult to know what data you have, where data originated from, and what data is in the public domain that shouldn’t be.”

2. Design Data Loss Prevention Policies in G Suite 

    • “Data Loss Prevention in G Suite is a set of policies, processes, and tools that are put in place to ensure your sensitive information won’t be lost during a fire, natural disaster or break in. You never know when tragedy will strike, that’s why you should invest in prevention policies before it’s too late.”

3. Have a Logging Policy in Place

    • “It is important to create a comprehensive logging policy within your cloud platform to help with auditing and compliance. Access logging should be enabled on storage buckets so that you have an easily accessible log of object access. Administrator audit logs are created by default, but you should enable Data Access logs for Data Writes in all services.”

4. Use Display Names in your Dataflow Pipelines

    • “Always use the name field to assign a useful, at-a-glance name to the transform. This field value is reflected in the Cloud Dataflow monitoring UI and can be incredibly useful to anyone looking at the pipeline. It is often possible to identify performance issues without having to look at the code using only the monitoring UI and well-named transforms.”
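
For readers on the Python SDK, here’s a minimal Apache Beam sketch showing how those at-a-glance names are attached to transforms; the bucket paths and pipeline steps are illustrative only.

```python
# A minimal Apache Beam (Python SDK) sketch showing named transforms.
# The "Name" >> transform labels are what appear in the Cloud Dataflow
# monitoring UI, so a reviewer can spot slow steps without reading the code.
import apache_beam as beam

with beam.Pipeline() as p:
    (
        p
        | "ReadAccessLogs" >> beam.io.ReadFromText("gs://my-bucket/logs/*.txt")
        | "ExtractStatusCode" >> beam.Map(lambda line: line.split()[-1])
        | "CountPerStatus" >> beam.combiners.Count.PerElement()
        | "FormatResult" >> beam.Map(lambda kv: f"{kv[0]},{kv[1]}")
        | "WriteCounts" >> beam.io.WriteToText("gs://my-bucket/output/status_counts")
    )
```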

Cost Optimization

5. Automate Cost Optimizations

    • “The one of the best practices for cost optimization is to automate the tasks and reduce manual intervention. Automation is simplified using a label – which is a key-value pair applied to various Google Cloud services. You can attach a label to each resource (such as Compute instances), then filter the resources based on their labels.”
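
As a small example of label-driven automation, the hedged Python sketch below uses the google-cloud-compute library to find every Compute Engine instance carrying an assumed env=dev label across all zones, as a starting point for stopping, reporting on, or scheduling them. The project ID and label values are placeholders.

```python
# A hedged sketch of label-driven automation: find every instance labeled
# env=dev across all zones of a project.
from google.cloud import compute_v1

project = "my-project-id"   # placeholder
instances_client = compute_v1.InstancesClient()

for zone, scoped in instances_client.aggregated_list(project=project):
    for instance in scoped.instances or []:
        if (instance.labels or {}).get("env") == "dev":
            print(f"{zone}: {instance.name} (status={instance.status})")
```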

6. Take Advantage of Committed & Sustained Use Discounts

    • “At a commitment of up to 3 years and no upfront payment, customers can save money up to 57% of the normal price with this purchase. Availing these discounts can be one among GCP best practices as these discounts can be utilized for standard, highcpu, highmem and custom machine types and node groups which are sole-tenant.”
    • “GCP has a plan called “Sustained Use Discounts” which you can avail when you consume certain resources for a better part of a billing month. As these discounts are applicable to a lot of resource like sole-tenant nodes, GPU devices, custom machine, etc. opting for these discounts would be another best practice on GCP.”

7. Use Preemptible VMs

    • “As with most trade-offs, the biggest reason to use a preemptible VM is cost. Preemptible VMs can save you up to 80% compared to a normal on-demand virtual machine. This is a huge savings if the workload you’re trying to run consists of short-lived processes or things that are not urgent and can be done any time.”
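
Creating a preemptible VM is essentially a one-field change to the instance definition. Below is a hedged sketch using the google-cloud-compute Python library; the project, zone, machine type, and image are placeholders.

```python
# A hedged sketch: create a preemptible Compute Engine instance.
# The scheduling.preemptible flag is the only spot-specific setting.
from google.cloud import compute_v1

project, zone = "my-project-id", "us-central1-a"   # placeholders

instance = compute_v1.Instance()
instance.name = "batch-worker-1"
instance.machine_type = f"zones/{zone}/machineTypes/e2-standard-2"
instance.scheduling = compute_v1.Scheduling(preemptible=True)   # mark as preemptible

boot_disk = compute_v1.AttachedDisk()
boot_disk.boot = True
boot_disk.auto_delete = True
boot_disk.initialize_params = compute_v1.AttachedDiskInitializeParams(
    source_image="projects/debian-cloud/global/images/family/debian-11"  # placeholder image
)
instance.disks = [boot_disk]

nic = compute_v1.NetworkInterface()
nic.network = "global/networks/default"
instance.network_interfaces = [nic]

# Returns a zonal operation you can wait on before using the instance.
operation = compute_v1.InstancesClient().insert(
    project=project, zone=zone, instance_resource=instance
)
print("Insert requested:", instance.name)
```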

8. Purchase Commitments

    • “The sustained usage discounts are a major differentiator for GCP. They apply automatically once your instance is online for more than 25% of the monthly billing cycle and can net you a discount of up to 30% depending on instance (“machine”) type. You can combine sustained and committed use discounts but not at the same time. Committed use can get you a discount of up to 57% for most instance types and up to 70% for memory-optimized types.”

9. Apply Compute Engine Rightsizing Recommendations

    • “Compute Engine provides machine type rightsizing recommendations to help you optimize the resource utilization of virtual machine (VM) instances. These recommendations are generated automatically based on system metrics gathered by the Stackdriver Monitoring service over the previous eight days. Use these recommendations to resize your computer instance’s machine type to more efficiently use the instance’s resources.”
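
These recommendations can also be read programmatically through the Recommender API. Here’s a hedged Python sketch using the google-cloud-recommender library; the project and zone are placeholders.

```python
# A hedged sketch of reading Compute Engine machine-type (rightsizing)
# recommendations via the Recommender API.
from google.cloud import recommender_v1

project, zone = "my-project-id", "us-central1-a"   # placeholders
client = recommender_v1.RecommenderClient()

parent = (
    f"projects/{project}/locations/{zone}"
    "/recommenders/google.compute.instance.MachineTypeRecommender"
)
for rec in client.list_recommendations(parent=parent):
    # Each recommendation describes a suggested machine type change.
    print(rec.description)
```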

10. Utilize Cost Management Tools That Take Action

    • “Using third-party tools for cloud optimization help with cost visibility and governance and cost optimization. Make sure you aren’t just focusing on cost visibility and recommendations, but find a tool that takes that extra step and takes those actions for you…This automation reduces the potential for human error and saves organizations time and money by allowing developers to reallocate their time to more beneficial tasks. ”

11. Ensure You’re Only Paying for the Compute Resources You Need

    • When adopting or optimizing your public cloud use, it’s important to eliminate wasted spend from idle resources – which is why you need to include an instance scheduler in your plan. An instance scheduler ensures that non-production resources – those used for development, staging, testing, and QA – are stopped when they’re not being used, so you aren’t charged for compute time you’re not actually using.

12. Optimize Performance and Storage Costs 

    • “In the cloud, where storage is billed as a separate line item, paying attention to storage utilization and configuration can result in substantial cost savings. And storage needs, like compute, are always changing. It’s possible that the storage class you picked when you first set up your environment may no longer be appropriate for a given workload.”

13. Optimize Persistent Disk Performance

    • “When you launch a virtual machine compute engine in GCP, a disk is attached to perform as the local storage for the application. When you terminate this compute engine, the unattached disk can still be running. Google continues to charge for the full price of the disk, even though the disks are not active. This can significantly increase your cloud costs. Make sure that you don’t have any unattached disks that are still running.”
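
Here’s a minimal Python sketch for spotting those unattached disks, using the google-cloud-compute library. The project ID is a placeholder, and you should verify ownership before deleting anything.

```python
# A minimal sketch for finding persistent disks that are not attached to
# any instance (and therefore still billing).
from google.cloud import compute_v1

project = "my-project-id"   # placeholder
disks_client = compute_v1.DisksClient()

for zone, scoped in disks_client.aggregated_list(project=project):
    for disk in scoped.disks or []:
        # "users" lists the instances a disk is attached to; empty means orphaned.
        if not disk.users:
            print(f"{zone}: {disk.name} ({disk.size_gb} GiB) is unattached")
```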

Security 

14. Apply Least Privilege Access Controls /Identity and access management

    • “The principle of least privilege is a critical foundational element in GCP security and security more broadly. The principle is the concept of only providing employees with access to applications and resources they need to properly do their jobs.”

15. Manage Unrestricted Traffic and Firewalls

    • “Limit the IP ranges that you assign to each firewall to only the networks that need access to those resources. GCP’s advanced VPC features allow you to get very granular with traffic by assigning targets by tag and Service Accounts. This allows you to express traffic flows logically in a way that you can identify later, such as allowing a front-end service to communicate to VMs in a back-end service’s Service Account.”

16. Ensure Your Bucket Names are Unique Across the Whole Platform

    • “It is recommended to append random characters to the bucket name and not include the company name in it. An example is “prod-logs-b7b12b36511ac3462d12e62164dfff4e”. This will make it harder for an attacker to locate buckets in a targeted attack.”
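
Following the naming advice above, here’s a minimal Python sketch using the google-cloud-storage library to create a bucket with a random suffix and no company name in it; it assumes default application credentials are configured.

```python
# A minimal sketch of the naming advice quoted above: a globally unique
# bucket name with a random suffix and no company name.
import secrets
from google.cloud import storage

client = storage.Client()
bucket_name = f"prod-logs-{secrets.token_hex(16)}"   # e.g. prod-logs-b7b12b36...
bucket = client.create_bucket(bucket_name)
print(f"Created bucket: {bucket.name}")
```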

17. Set Up a Google Cloud Organizational Structure

    • “When you first log into your Google Admin console, everything will be grouped into a single organizational unit. Any settings you apply to this group will apply to all the users and devices in the organization. Planning out how you want to organize your units and hierarchy before diving in will help you save time and create a more structured security strategy.”

You can use the best practices listed above as a quick reference of things to keep in mind when using Google Cloud. Have any Google Cloud best practices you’ve learned recently? Let us know in the comments below!

Further Reading:

16 Tips to Manage Cloud Costs

15 AWS Best Practices for 2019