AWS re:Invent 2020 Guide – How to Make The Most of The Virtual Event

It’s time to start thinking about AWS re:Invent 2020! Since 2012, AWS re:Invent has been one of the biggest cloud conferences of the year – last year it drew over 60,000 attendees from around the world. Like many other big events, such as Google Cloud Next and Microsoft Ignite, re:Invent is going to look a little different this year. Over the course of three weeks (November 30 – December 18), AWS will host keynotes, launches, and sessions – and the best part is that it’s free to attend this year.

How to Maximize Your Digital Experience

Every year there’s an unofficial listing of AWS and vendor parties going on at re:Invent. While they obviously won’t be happening in person this year, we can expect some watch parties and online events to look out for. Follow @reInventParties on Twitter or check out their website to stay up to date on all the events.

Although planning your schedule will look different this year, here are a few things to keep in mind:

  • Make a schedule – block out time on your calendar in advance so you can allocate enough time for the event. Once dates and times have been announced for keynotes and sessions, we recommend putting them into your calendar for a clean visual of your day and automatic reminders. With three weeks’ worth of events, it will certainly take some time to determine which sessions interest you most. We’ll keep this article updated once there is more information about the schedule and whether tools like this one from Carlos E Silva will be available to help navigate it.
  • Take advantage of the online resources – since the event is virtual this year – and free – you aren’t as limited in what you can attend or do, so make sure you make the most of this unique experience.
  • Attend a watch party – virtual watch parties let you connect with individuals around the world, helping to replace the in-person mingling of the live event. If friends or coworkers are tuning in, create a channel to chat about sessions and announcements. Also, make sure to follow the #reinvent2020 and #reinvent hashtags on Twitter to keep up. Earlier this year, the online AWS Summit had watch parties, so we can expect the same for AWS’s biggest event of the year.
  • Look for swag and other offers from sponsors – just because you can’t visit sponsors’ booths doesn’t mean you can’t get swag or see a product or service. Most sponsors will likely run an online swag or prize giveaway as a creative way to get the audience involved and make up for the lost time at the conference hall. If there are any vendors you’re interested in, sign up for their mailing lists now or make a Twitter list to keep an eye out for fun (Millennium Falcon LEGO set) and useful (free product license) offers.
  • Get engaged now – of course, AWS isn’t waiting for late November to offer new product intros, case studies, and best practice guides. Check out the upcoming Tech Talks.


What Will Sponsor “Booths” Look Like This Year?

On that note, AWS is offering virtual booths to past sponsors, and its materials offer a glimpse into the as-yet-unclear format of the conference, with mentions of virtual meeting rooms and attendee chats. It remains to be seen how much participation we’ll see overall. AWS seems to have a few special events in mind, which we’ll share here once there’s more information.

Look Forward to the Announcements

Last year at re:Invent, AWS announced the launch of a long list of services, plus additional services in preview – look out for this year’s announcements! Although this isn’t your typical re:Invent experience, the virtual platform will be able to engage individuals like never before. AWS is anticipating more than 250,000 attendees during AWS re:Invent 2020.

Keep up with updates as the event gets closer: https://reinvent.awsevents.com/

While we excitedly await more information to be posted about the speakers and events for this year’s conference, you can always watch the recap of re:Invent 2019 in the meantime.

Cloud Financial Management – The New Focus of the AWS Well-Architected Cost Optimization Pillar

In July, AWS updated the cost optimization pillar of its Well-Architected Framework to focus on cloud financial management. This change is a rightful acknowledgment of how important functional ownership and cross-team collaboration are to optimizing public cloud costs.

AWS Well-Architected Framework and the Cost Optimization Pillar

If you use AWS, you are probably familiar with the Well-Architected Framework. This is a guide of best practices to help you understand the impact of the decisions you make while designing and building systems on AWS. AWS Well-Architected allows users to learn best practices for building high-performing, resilient, secure, and efficient infrastructure for their workloads and applications. 

This framework is based on five pillars — operational excellence, security, reliability, performance efficiency, and cost optimization. Overall, AWS has done a great job with these particular resources, making them clear and accessible with links to further detail. 

The Cost Optimization pillar generally covers principles we have been preaching for a long time: expenditure and usage awareness; choosing cost-effective resources; managing demand and supply resources; and regularly reviewing your environments and architectural decisions for cost. 

Now, they have added Cloud Financial Management to this pillar. Cloud Financial Management is a set of activities that enables Finance and Technology organizations to manage, optimize and predict costs as they run workloads on AWS. 

Why Do Businesses Need Cloud Financial Management? 

Incorporating Cloud Financial Management into an organization’s cost optimization plans allows it to accelerate business value realization and optimize cost, usage, and scale to maximize financial success.

This is an important part of the cost optimization pillar as it dedicates resources and time to build capability in specific industries and technology domains. Similar to the other pillars, users need to build capability with different resources, programs, knowledge building, and processes to become a cost-efficient organization.

The first step AWS proposes for CFM is functional ownership. (Further reading: Who Should Manage App Development Costs? and 5 Priorities for the Cloud Center of Excellence.) All of this matters because many organizations are composed of different units with different priorities, so there’s no single set of objectives for everyone to follow. By aligning your organization on a set of financial objectives, and providing it with the means to achieve them, you’ll become more efficient. A more efficient organization can innovate and build faster – not to mention be more agile and better able to adjust to changing conditions.

What You Need to Keep in Mind

When most people think of cost optimization, they think of cutting costs – but that’s not exactly what AWS is getting at by adding cloud financial management to its framework. It’s about assigning responsibility, partnering between finance and technology, and creating a cost-aware culture.

In a survey conducted earlier this year, 451 Research found that adopting Cloud Financial Management practices doesn’t only lower IT costs. Enterprises that adopted these practices also benefited in many other aspects of the organization, such as growing revenue through increased business agility, increasing operational resilience to decrease risk, improving profitability, and potentially increasing staff productivity.

The value of Cloud Financial Management increases with cloud maturity, so it’s important to be patient with the process and remember that small changes can have huge impacts, and that benefits can grow as time goes on.

Amazon provides a few services to manage cloud costs, such as Cost Explorer, AWS Budgets, the AWS Cost and Usage Report (CUR), Reserved Instance recommendations and reporting, and EC2 rightsizing recommendations. But it’s important to note that while many CFM tools are free to use, there can be labor costs in building ongoing use of these tools and continuous organizational processes – it may be in your best interest to look into a tool that can optimize costs on an ongoing basis. Ensure your people and/or tools are able to scale applications to address new demands.
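
To make this concrete, here’s a minimal sketch of programmatic cost reporting against Cost Explorer’s API via boto3; the date range and grouping are illustrative, and the client must be created in us-east-1:

```python
import boto3

# Cost Explorer's API endpoint lives in us-east-1.
ce = boto3.client("ce", region_name="us-east-1")

response = ce.get_cost_and_usage(
    TimePeriod={"Start": "2020-09-01", "End": "2020-10-01"},
    Granularity="MONTHLY",
    Metrics=["UnblendedCost"],
    GroupBy=[{"Type": "DIMENSION", "Key": "SERVICE"}],  # break costs down per service
)

for group in response["ResultsByTime"][0]["Groups"]:
    service = group["Keys"][0]
    amount = float(group["Metrics"]["UnblendedCost"]["Amount"])
    print(f"{service}: ${amount:.2f}")
```

A report like this is the raw material of CFM: once cost per service is visible to each team, ownership conversations become much easier.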

By using the framework to evaluate and implement your cloud financial management practices, you’ll not only achieve cost savings, but more importantly, you’ll see business value increase across operational resilience, staff productivity and business agility.

AWS vs Azure vs Google Free Tier Comparison

Whether you’re new to public cloud altogether or already use one provider and want to try another, a comparison of the AWS vs Azure vs Google free tiers is a useful starting point. The big three cloud providers – AWS, Azure, and Google Cloud – each have a free tier designed to give users the cloud experience without all the costs. They include free trial versions of numerous services so users can test out different products and learn how they work before making a huge commitment. While they may only cover a small environment, they’re a good way to learn more about each cloud provider. For all of the cloud providers, the free trials are available only to new users.

AWS Free Tier Offerings

The AWS free tier includes more than 60 products. There are two different types of free options, depending on the product: always free and 12 months free. To help customers get started on AWS, the 12-months-free services are available to new customers and can be used for free (up to a specific level of usage) for one year from the date the account was created. Keep in mind that once the free 12 months are up, your services will start to be charged at the normal rate. Be prepared and review this checklist of things to do when you outgrow the AWS free tier.

Azure Free Tier Offerings

The Azure equivalent of a free tier is referred to as a free account. As a new user in Azure, you’re given a $200 credit that has to be used in the first 30 days after activating your account. When you’ve used up the credit or the 30 days have passed, you’ll have to upgrade to a paid account if you wish to continue using certain products. Ensure that you have a plan in place to reduce Azure costs. If you don’t need the paid products, there’s also the always-free option.

Some of the ways people choose to use their free account are to gain insights from their data, test and deploy enterprise apps, create custom mobile experiences and more. 

Google Cloud Free Tier Offerings

The Google Cloud Free Tier is essentially an extended free trial that gives you access to free cloud resources so you can learn about Google Cloud services by trying them on your own. 

The Google Cloud Free Tier has two parts – a 90-day free trial with a $300 credit to use on any Google Cloud services, and always free, which provides limited access to many common Google Cloud resources, free of charge. Google Cloud gives you a little more time with your credit than Azure: you get the full 90 days of the free trial to use it. Unlike free trials from the other cloud providers, Google does not automatically charge you once the trial ends – this way you’re guaranteed that the free tier is actually 100% free. Keep in mind that your trial ends after 90 days or once you’ve exhausted the $300 credit. During the trial, any usage beyond the free monthly usage limits is covered by the $300 credit; after that, you must upgrade to a paid account to continue using Google Cloud.

Free Tier Limitations

It’s important to note that the always-free services vary widely between the cloud providers and there are usage limitations. Keep in mind the cloud providers’ motivations: they want you to get attached to the services so you start paying for them. So, be aware of the limits before you spin up any resources, and don’t be surprised by any charges. 
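
One way to avoid surprise charges on AWS is to set up a budget alert programmatically. Below is a rough sketch using the boto3 Budgets API; the account ID, dollar amount, and email address are placeholders you’d replace with your own:

```python
import boto3

# The Budgets API endpoint lives in us-east-1.
budgets = boto3.client("budgets", region_name="us-east-1")

budgets.create_budget(
    AccountId="123456789012",  # placeholder: your AWS account ID
    Budget={
        "BudgetName": "free-tier-guardrail",
        "BudgetLimit": {"Amount": "5.0", "Unit": "USD"},  # alert threshold, not a hard cap
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[
        {
            "Notification": {
                "NotificationType": "ACTUAL",
                "ComparisonOperator": "GREATER_THAN",
                "Threshold": 80.0,  # percent of the budget limit
                "ThresholdType": "PERCENTAGE",
            },
            "Subscribers": [
                {"SubscriptionType": "EMAIL", "Address": "you@example.com"}
            ],
        }
    ],
)
```

Note that a budget like this alerts you – it doesn’t stop the spend itself, so you still need to shut down resources you no longer want.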

In AWS, when your free tier expires or if your application use exceeds the free tier limits, you pay standard pay-as-you-go service rates. Azure and Google both offer credits to new users who start a free trial, which are a handy way to set a spending limit. However, costs can get a little tricky if you aren’t paying attention. Once the credits have been used, you’ll have to upgrade your account if you wish to continue using the products. Essentially, the credit that was acting as a spending limit is automatically removed, so whatever you use beyond the free amounts, you will now have to pay for. In Google Cloud’s trial, there is also a cap on the number of virtual CPUs you can use at once – and you can’t add GPUs or use Windows Server instances.

For 12 months after you upgrade your account, certain amounts of popular products are free. After 12 months, unless decommissioned, any products you may be using will continue to run, and you’ll be billed at the standard pay-as-you-go rates.

Another limitation is that commercial software and operating system licenses typically aren’t available under the free tiers.

These offerings are “use it or lose it” – if you don’t use all of your credits or monthly usage allowances, there is no rollover into future months.

Popular Services, Products, and Tools to Check Out for Free

AWS has 33 products that fall under the one-year free tier – here are some of the most popular: 

  • Amazon EC2 Compute: 750 hours per month of Linux, RHEL, or SLES t2.micro or t3.micro instance usage, plus 750 hours per month of Windows t2.micro or t3.micro instance usage (which instance type is eligible depends on region) – see the launch sketch after this list.
  • Amazon S3 Storage: 5 GB of standard storage
  • Amazon RDS Database: 750 hours per month of db.t2.micro database usage using MySQL, PostgreSQL, MariaDB, Oracle BYOL, or SQL Server, 20 GB of General Purpose (SSD) database storage, and 20 GB of storage for database backups and DB snapshots.
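
If you want to try the EC2 free tier hands-on, the boto3 sketch below launches a single free-tier-eligible instance. The AMI ID is a placeholder – you’d substitute a current free-tier-eligible image for your region:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Placeholder AMI ID: look up a current free-tier-eligible
# Amazon Linux 2 AMI for your region before running this.
response = ec2.run_instances(
    ImageId="ami-0abcdef1234567890",
    InstanceType="t2.micro",  # free-tier eligible: 750 hours/month for 12 months
    MinCount=1,
    MaxCount=1,
)

print("Launched:", response["Instances"][0]["InstanceId"])
```

Remember to terminate the instance when you’re done experimenting – the 750 hours cover one instance running continuously, not several at once.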

For the always-free option, you’ll find a number of products as well; some of these include:

  • AWS Lambda: 1 million free requests per month and up to 3.2 million seconds of compute time per month (a minimal handler sketch follows this list).
  • Amazon DynamoDB: 25 GB of database storage per month, enough to handle up to 200M requests per month.
  • Amazon CloudWatch: 10 custom metrics and 10 alarms, 1,000,000 API requests, 5 GB of log data ingestion and log data archive, and 3 dashboards with up to 50 metrics, per month.
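
To get a feel for Lambda’s always-free tier, a function can be as small as the sketch below; the handler body is illustrative, and each invocation simply counts against the free request and compute allowances:

```python
import json

def handler(event, context):
    """Minimal Lambda handler: echoes back the event it received.

    Each invocation counts toward the 1M free requests; compute time
    is metered by memory size multiplied by duration.
    """
    return {
        "statusCode": 200,
        "body": json.dumps({"received": event}),
    }
```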

Azure has 19 products that are free each month for 12 months – here are some of the most popular:

  • Linux and Windows virtual machines: 750 hours (using a B1S VM) of compute time
  • Managed Disk Storage: 64 GB x 2 (P6 SSD)
  • Blob Storage: 5 GB (LRS hot block) – see the upload sketch after this list
  • File Storage: 5 GB (LRS File Storage)
  • SQL databases: 250 GB
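
As a quick way to exercise the Blob Storage allowance from Python, here’s a hedged sketch assuming the azure-storage-blob package; the connection string, container, and blob names are placeholders:

```python
from azure.storage.blob import BlobServiceClient

# Placeholder connection string, copied from your storage
# account's access keys in the Azure portal.
conn_str = (
    "DefaultEndpointsProtocol=https;AccountName=YOUR_ACCOUNT;"
    "AccountKey=YOUR_KEY;EndpointSuffix=core.windows.net"
)
service = BlobServiceClient.from_connection_string(conn_str)

# Create a container and upload a small blob; staying under the
# 5 GB LRS hot-block allowance keeps this within the free amounts.
container = service.create_container("free-tier-demo")
container.upload_blob(name="hello.txt", data=b"Hello from the Azure free account!")

print("Uploaded hello.txt to container free-tier-demo")
```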

For their always free offerings, you’ll find even more popular products – here are a few:

  • Azure Kubernetes Service: no charge for cluster management; you pay only for the virtual machines and the associated storage and networking resources consumed.
  • Azure DevOps: 5 users for open source projects and small projects (with unlimited private Git repos). For larger teams, the cost ranges from $6-$90 per month.
  • Azure Cosmos DB: 400 RU/s of provisioned throughput

Unlike AWS and Azure, Google Cloud does not have a 12-months-free offering. However, Google Cloud does still have a free tier with a wide range of always-free services – some of the most popular include:

  • Google BigQuery: 1 TB of queries and 10 GB of storage per month.
  • Kubernetes Engine: One zonal cluster per month
  • Google Compute Engine: 1 f1-micro instance per month (U.S. regions only), 30 GB-months of HDD, 5 GB-months of snapshot storage in certain regions, and 1 GB of outbound network data from North America to all region destinations per month.
  • Google Cloud Storage: 5 GB of regional storage per month (US regions only), 5,000 Class A and 50,000 Class B operations, and 1 GB of outbound network data from North America to all region destinations per month – see the upload sketch after this list.
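
To try the Cloud Storage free tier from Python, a sketch like the one below works, assuming the google-cloud-storage client library and application default credentials; the bucket and object names are placeholders:

```python
from google.cloud import storage

# Assumes application default credentials are configured,
# e.g. via `gcloud auth application-default login`.
client = storage.Client()

bucket = client.bucket("my-free-tier-bucket")  # placeholder: an existing US-region bucket
blob = bucket.blob("hello.txt")

# Uploads count as Class A operations; keeping total storage under
# 5 GB of regional US storage stays inside the always-free limits.
blob.upload_from_string("Hello from the Google Cloud free tier!")

print(f"Uploaded gs://{bucket.name}/{blob.name}")
```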


Check out the blog posts on free credits for each cloud provider to see how you can start saving.

The Multi-Cloud Environment in 2020: Advantages and Disadvantages

Now more than ever, organizations have been implementing multi-cloud environments for their public cloud infrastructure. 

We not only see this in our customers’ environments, where a growing proportion use multiple cloud providers – industry experts and analysts report the same. In early June, IDG released its 8th Cloud Computing Survey results, breaking down IT environments, multi-cloud, and IT budgets by the numbers. The report also goes into both the upsides and downsides of using multiple public clouds. Here’s what it found:

  • More than half (55%) of respondents use multiple public clouds: 
    • 34% use two, 10% use three, and 11% use more than three
  • 49% of respondents say they adopted a multi-cloud approach to get best-of-breed platform and service options. 
  • Other goals include:
    • Cost savings/optimization (41%)
    • Improved disaster recovery/business continuity (40%) 
    • Increased platform and service flexibility (39%).

Interestingly, within multi-cloud customers of ParkMyCloud, the majority are users of AWS and Google Cloud, or AWS and Azure; very few are users of Azure and Google Cloud. About 1% of customers have a presence in all three. 

Multi-Cloud Across Organizations

The study found that the likelihood of an organization using a multi-cloud environment depends on its size and industry. For instance, government, financial services, and manufacturing organizations are more likely to stick to one cloud due to the possible security concerns that come with using multiple clouds. IDG concluded that enterprises are more concerned with avoiding vendor lock-in, while SMBs are more likely to make cost savings/optimization a priority (which makes sense – the smaller the company, the more worried it is about finances).

  • Fewer than half of SMBs (47%) use multiple public clouds
  • Meanwhile, 66% of enterprises use multiple clouds

What are the advantages of multi-cloud?

Since multi-cloud has been a growing trend over the last few years, we thought it’d be interesting to look at why businesses are heading in this direction with their infrastructure. More often than not, public cloud users and enterprises have adopted multi-cloud to meet their cloud computing needs. The following are a few advantages, and typically the most common reasons users adopt multi-cloud.

  • Risk Mitigation – create resilient architectures
  • Managing vendor lock-in – get price protection
  • Workload Optimization – place your workloads to optimize for cost and performance
  • Cloud providers’ unique capabilities – take advantage of offerings in AI, IoT, machine learning, and more

While taking advantage of features and capabilities from different cloud providers can be a great way to get the most out of cloud services, these strategies, if not used optimally, can also result in wasted time, money, and computing capacity. The reality is that these are sometimes only perceived advantages that never come to fruition.

What are the negatives?

As companies implement their multi-cloud environments, they are finding downsides. A staggering 94% of respondents – regardless of the number of clouds they use or the size of their organization – find it hard to take full advantage of their public cloud resources. The survey cited controlling cloud costs as the biggest challenge – users think they’ll save money but end up spending more. When organizations migrate to multi-cloud, they expect to cut costs, but they typically fail to account for growing cloud services and data, as well as a lack of visibility. Many organizations we talk to use multiple clouds because different groups within the organization use different providers, which makes centralized control and management challenging. Addressing these issues brings about yet another cost: the need for cloud management tools.

Some other challenges companies using multiple public clouds run into are:

  • Data privacy and security issues (38%)
  • Securing and protecting cloud resources (31%)
  • Governance/ compliance concerns (30%)
  • Lack of security skills/expertise (30%)

Configuring and managing different CSPs requires deep expertise, which makes it a pressing need to find employees with the experience and capabilities to manage multiple clouds. More staff are needed to manage multi-cloud environments confidently, in a way that is secure and highly available. A lack of skills and expertise in managing multiple clouds can become a major issue for organizations, as their cloud environments won’t be managed efficiently. To try to fix this, organizations are allocating a sizable amount of their IT budget to cloud-specific roles, in the hope that adding more specialization in this area can improve efficiency.

Multi-Cloud Statistics: Use is Still Growing

The statistics on cloud computing show that companies not only use multiple clouds today, but they have plans to expand multi-cloud investments:

  • In a survey of 551 IT professionals involved in the purchasing process for cloud computing, 55% of organizations currently use multiple public clouds.
  • Organizations using multiple cloud platforms say they will allocate more of their IT budget (35%) to cloud computing.
  • SMBs plan to dedicate slightly more of their budgets to cloud computing (33%) compared to enterprises
    • While this seems significant, if measured in dollars, enterprises plan a much larger cloud spend than SMBs do: $158 million compared to $11.5 million.

The Future of Managing Cloud Costs for Multi-Cloud

As cloud costs remain a primary concern, especially for SMBs, it’s important that organizations keep up with the latest cloud usage trends to manage spend and prevent waste. To keep costs in check in a multi-cloud environment, you can make things easier for your IT department by implementing an optimization tool that tracks usage and spend across different cloud providers.

For more insight on the rise of multi-cloud and hybrid cloud strategies, and to demonstrate the impact on cloud spend, check out the drain of wasted spend on IT budgets here.

Microsoft Azure VM Types Comparison

There is a wide range of Microsoft Azure VM types optimized to meet various needs. Machine types are specialized and vary by virtual CPU (vCPU), disk capability, and memory size, offering a number of options to match any workload.

With so many options available, finding the right machine type for your workload can be confusing – which is why we’ve created this overview of Azure VM types (as we’ve done with EC2 instance types and Google Cloud machine types). Note that while AWS EC2 instance types have names associated with their purpose, Azure instance type names simply run in a series from A to N. The descriptions below are a brief and easy reference, but remember that finding the right machine type for your workload will always depend on your needs.
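
If you’d rather compare sizes programmatically than scan descriptions, here’s a sketch using the Azure SDK for Python (the azure-identity and azure-mgmt-compute packages); the subscription ID is a placeholder:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

# Placeholder subscription ID; DefaultAzureCredential picks up
# credentials from the environment, Azure CLI login, and so on.
subscription_id = "00000000-0000-0000-0000-000000000000"
client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

# List every VM size offered in a region, with vCPU count and memory,
# to compare the series described below.
for size in client.virtual_machine_sizes.list(location="eastus"):
    print(f"{size.name}: {size.number_of_cores} vCPU, {size.memory_in_mb} MB RAM")
```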

General Purpose

General purpose VMs have a balanced CPU-to-memory ratio, making them a great option for testing and development, small to medium databases, and web servers with low to medium traffic:

DCsv2-series

The newest size in the DC-series, the DCsv2, stands out for the data protection and code confidentiality it provides while data is being processed in the cloud. Intel SGX technology and the latest generation of Intel Xeon E-2288G processors back these machines, which can reach up to 5.0 GHz.

Av2 Series

A-series VMs have a CPU-to-memory ratio that works best for entry-level workloads, like development and testing. Sizes are throttled for consistent processor performance. The Av2-series can be deployed on a number of hardware types and processors; to figure out which hardware a size is deployed on, query the virtual hardware from within the VM.

Dv2 and Dsv2-series

Dv2 VMs boast powerful CPUs – roughly 35% faster than D-series VMs – and optimized memory, great for production workloads. They have the same memory and disk configurations as the D-series, and are based on a 2.1 GHz, 2.3 GHz, or 2.4 GHz processor with Intel Turbo Boost Technology 2.0.

Dsv2-series sizes run on the same Dv2 processors with Intel Turbo Boost Technology 2.0 and also use premium storage.

Dv3-series 

With expanded memory (from ~3.5 GiB/vCPU to 4 GiB/vCPU) and adjustments to disk and network limits, the Dv3-series Azure VM type offers the most value for general purpose workloads. The sizes in this series offer a combination of memory, temporary storage, and vCPU that fits best for enterprise applications, relational databases, in-memory caching, and analytics. It’s important to note that the Dv3-series no longer has the high-memory VM sizes of the D/Dv2-series.

Dsv3-series

This series’ sizes feature premium storage disks and run on 2.1, 2.3, or 2.4 GHz Intel Xeon processors with Intel Turbo Boost Technology 2.0. The Dsv3-series is best suited for most production workloads.

B-series

Similar to the AWS t-series machine type family, B-series burstable VMs are ideal for workloads that do not rely on full and continuous CPU performance. Use cases for this series’ VM types include small databases, dev and test environments, low-traffic web servers, microservices, and more. With the B-series, customers can purchase a VM size that builds up credits when running below its base performance; the accumulated credits can then be spent to burst above the base performance when higher CPU performance is needed (a toy simulation follows below).
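
To illustrate the credit mechanic, here’s a toy Python model – not Azure’s actual accounting, which uses per-size baselines and caps on banked credits – showing credits accruing below the baseline and being spent during a burst:

```python
# Toy model of B-series burstable credits (illustrative only).
def simulate_credits(baseline, usage):
    """Track a credit balance: accrue when CPU use is below the
    baseline, spend when it bursts above."""
    balance = 0.0
    history = []
    for used in usage:
        balance += baseline - used  # below baseline accrues, above spends
        balance = max(balance, 0.0)  # can't burst without banked credits
        history.append(round(balance, 2))
    return history

# A VM idling at 5% against a 20% baseline banks credits,
# then draws them down during a 90% burst.
print(simulate_credits(0.20, [0.05, 0.05, 0.05, 0.90, 0.90]))
# -> [0.15, 0.3, 0.45, 0.0, 0.0]
```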

Dav4 and Dasv4-series 

The Dav4-series is one of the new size families; it utilizes a 2.35 GHz AMD EPYC™ 7452 processor that can reach a max frequency of 3.35 GHz. The combination of memory, temporary storage, and vCPU makes these VMs suitable for most production workloads. For premium SSD, the Dasv4-series sizes are the best option.

Ddv4 and Ddsv4-series  

Similar to other VMs in the D-series, these sizes utilize a combination of memory, temporary disk storage, and vCPU that provides better value for most general-purpose workloads. These new VM sizes have faster and 50% larger local storage (up to 2,400 GiB) and are designed for applications that benefit from low-latency, high-speed local storage. The Ddv4-series processors run in a hyper-threaded configuration, making them a great option for enterprise-grade applications, relational databases, in-memory caching, and analytics.

The major difference between the two series is that the Ddsv4-series supports Premium Storage and Premium Storage caching, while the Ddv4-series does not.

Dv4 and Dsv4-series

Both of these new series are currently in preview. The Dv4-series is optimal for general purpose workloads, since it runs on processors in a hyper-threaded configuration and features a sustained all-core Turbo clock speed of 3.4 GHz.

The Dsv4-series runs on the same processors as the Dv4-series and shares the same features. The major difference between the two series is that the Dsv4-series supports Premium Storage and Premium Storage caching, while the Dv4-series does not.

Compute Optimized

Compute optimized Azure VM types offer a high CPU-to-memory ratio. They’re suitable for medium traffic web servers, network appliances, batch processing, and application servers.

Fsv2-series

With a base core frequency of 3.4 GHz and a maximum single-core turbo frequency of 3.7 GHz, Fsv2-series VM types offer up to twice the performance boost for vector processing workloads. Not only do they offer great speed for any workload, the Fsv2-series also offers the best value for its price, based on the ratio of Azure Compute Units (ACU) per vCPU.

Memory Optimized

Memory optimized VM types have more memory relative to CPU, and are best suited for relational database services, analytics, and larger caches.

M-Series 

Enterprise applications and large databases will benefit most from the M-series, which offers up to 3.8 TiB of memory and up to 128 vCPUs.

Mv2-series

The VMs in this series offer the highest vCPU count (up to 416 vCPUs) and largest memory (up to 11.4 TiB) of any Azure VM, making them ideal for extremely large databases or applications that benefit from high vCPU counts and large amounts of memory. The Mv2-series runs on Intel® Xeon® processors with an all-core base frequency of 2.5 GHz and a max turbo frequency of 3.8 GHz.

Dv2 and DSv2-series 11-15

The Dv2 and DSv2-series 11-15 follow in the footsteps of the original D-series; the main differentiation is a more powerful CPU. For enterprise applications that require fast vCPUs, reliable temporary storage, and more memory, the Dv2 and DSv2-series fit the bill. The Dv2-series offers speed and power with a CPU about 35% faster than that of the D-series. Based on 2.1, 2.3, and 2.4 GHz Intel Xeon® processors with Intel Turbo Boost Technology 2.0, they can reach up to 3.1 GHz. The Dv2-series also has the same memory and disk configurations as the D-series.

Ev3-series and Esv3-series

The Ev3-series follows in the footsteps of the high-memory VM sizes originating from the D/Dv2 families. This Azure VM type provides excellent value for general purpose workloads, boasting expanded memory (from 7 GiB/vCPU to 8 GiB/vCPU) with adjustments to disk and network limits on a per-core basis, in alignment with the move to hyperthreading.

The Esv3-series is the optimal choice for memory-intensive enterprise applications; if you want premium storage disks, the Esv3-series sizes are the perfect fit. The difference between the two series is that the Esv3-series supports Premium Storage and Premium Storage caching, while the Ev3-series does not.

Eav4 and Easv4-series

The Eav4 and Easv4-series run their processors in a multi-threaded configuration, increasing options for running memory optimized workloads. Though the Eav4-series and Easv4-series have the same memory and disk configurations as the Ev3 and Esv3-series, the Eav4-series sizes are ideal for memory-intensive enterprise applications.

Use the Easv4-series sizes when you need premium SSD; they are likewise ideal for memory-intensive enterprise applications and can achieve a boosted maximum frequency of 3.35 GHz.

Edv4 and Edsv4-series

High vCPU counts and large amounts of memory make the Edv4 and Edsv4-series the ideal option for extremely large databases and other applications that benefit from these features. They feature a sustained all-core Turbo clock speed of 3.4 GHz and many new technology features. Unlike the Ev3/Esv3 sizes with Gen2 VMs, these new VM sizes have 50% larger local storage, as well as better local disk IOPS for both read and write.

The Edv4 and Edsv4 virtual machine sizes feature up to 504 GiB of RAM, in addition to fast and large local SSD storage (up to 2,400 GiB). These virtual machines are ideal for memory-intensive enterprise applications and applications that benefit from low-latency, high-speed local storage. You can attach Standard SSD and Standard HDD disk storage to the Edv4 VMs.

Ev4 and Esv4-series

These new sizes are currently in public preview only – you can sign up to access them here.

The Ev4 and Esv4-series are ideal for various memory-intensive enterprise applications. They run in a hyper-threaded configuration on 2nd Generation Intel® Xeon® processors and feature up to 504 GiB of RAM.

Storage Optimized

For big data, data warehousing, large transactional databases, SQL, and NoSQL databases, storage optimized VMs are the best type for their high disk throughput and IO. 

Lsv2-series

Lsv2-series VMs provide high throughput, low latency, and directly mapped local NVMe storage, making these VMs ideal for NoSQL stores such as Apache Cassandra and MongoDB. The Lsv2-series comes in sizes from 8 to 80 vCPUs, with 8 GiB of memory per vCPU. VMs in this series are optimized to use the local disk on the node attached directly to the VM.

GPU

GPU VM types, specialized with single or multiple NVIDIA GPUs, work best for video editing and heavy graphics rendering – as in compute-intensive, graphics-intensive, and visualization workloads.

NC, NCv2 and NCv3-series 

The sizes in these series are optimized for compute-intensive and network-intensive applications and algorithms. The NCv2-series is powered by NVIDIA Tesla P100 GPUs and provides more than double the computational performance of the NC-series. The NCv3-series is powered by NVIDIA Tesla V100 GPUs and can provide 1.5x the computational performance of the NCv2-series. 

NV and NVv3-series

These sizes are optimized for remote visualization, streaming, gaming, encoding, and VDI scenarios. These VMs are targeted at GPU-accelerated graphics applications and virtual desktops where customers want to visualize their data, simulate results to view, work on CAD, or render and stream content.

ND and NDv2-series

These series are focused on training and inference scenarios for deep learning. The ND-series VMs are a newer addition to the GPU family and offer excellent performance for training and inference, making them ideal for deep learning workloads and AI. The ND-series can also fit much larger neural net models thanks to its much larger GPU memory (24 GB).

The NDv2-series is another new addition to the GPU family and with its excellent performance, it meets the needs of the most demanding machine learning, GPU-accelerated AI, HPC workloads and simulation.

NVv4-series

The NVv4-series VMs are optimized and designed for remote visualization and VDI. With partitioned GPUs, the NVv4-series offers right-sized VMs for workloads that require smaller GPU resources.

High Performance Compute

For the fastest and most powerful virtual machines, high performance compute is the best choice with optional high-throughput network interfaces (RDMA).

H-series

The H-series VMs were built for handling batch workloads, analytics, molecular modeling, and fluid dynamics. These 8 or 16 vCPU VMs are built on Intel Haswell E5-2667 v3 processor technology with up to 14 GB of RAM per CPU core, and no hyperthreading.

Besides sizable CPU power, the H-series provides options for low latency RDMA networking with FDR InfiniBand and different memory configurations for supporting memory intensive compute requirements.

HB-series

Applications driven by memory bandwidth, such as explicit finite element analysis, fluid dynamics, and weather modeling are the best fit for HB-series VMs. These VMs feature 4 GB of RAM per CPU core and no simultaneous multithreading. 

HC-series

For applications driven by dense computation, like implicit finite element analysis, molecular dynamics, and computational chemistry, HC-series VMs are the best fit. HC VMs feature 8 GB of RAM per CPU core, and no hyperthreading.

HBv2-series

Similar to other VMs in the High Performance compute family, HBv2-series VMs are optimized for applications driven by memory bandwidth, such as fluid dynamics, finite element analysis, and reservoir simulation. HBv2 VMs feature 4 GB of RAM per CPU core, and no simultaneous multithreading. These VMs enhance application performance, scalability, and consistency.

What Azure VM type is right for your workload?

The good news is that with this many VM types, you’re bound to find the right one to meet your computing needs – as long as you know what those needs are. With good insight into your workload, usage trends, and business needs, you’ll be able to find the Azure VM type that’s right for your workloads.