Anytime you provision infrastructure from Amazon Web Services (AWS), you will need to choose which of the AWS Regions and Availability Zones it will live in. Here are 5 things you need to know about these geographic groupings, including tips on how to choose, and things to watch out for.
1. What are AWS Regions and How Many are There?
AWS Regions are the broadest geographic category, defining the physical locations of AWS data centers. Currently, there are 22 Regions dispersed worldwide across North America, South America, Europe, China, Africa, Asia Pacific, and the Middle East. All Regions are isolated and independent of one another.
Every Region consists of multiple, separate Availability Zones within a geographic area. AWS designs every Region with multiple AZs – unlike some other cloud providers, which treat a region as a single data center.
AWS has a larger footprint around the globe than all the other cloud providers, and to support their customers and ensure they maintain this global footprint, AWS is constantly opening new Regions.
Here’s a look at some of the Regions and their AWS codes.
| Region | Code |
| --- | --- |
| US East (Ohio) | us-east-2 |
| US East (N. Virginia) | us-east-1 |
| US West (N. California) | us-west-1 |
| US West (Oregon) | us-west-2 |
| US GovCloud West | us-gov-west-1 |
| US GovCloud East | us-gov-east-1 |
| Asia Pacific (Hong Kong) | ap-east-1 |
| Asia Pacific (Mumbai) | ap-south-1 |
| Asia Pacific (Seoul) | ap-northeast-2 |
| Asia Pacific (Singapore) | ap-southeast-1 |
| Asia Pacific (Sydney) | ap-southeast-2 |
| Asia Pacific (Tokyo) | ap-northeast-1 |
| Middle East (Bahrain) | me-south-1 |
| South America (São Paulo) | sa-east-1 |
**Note: an AWS GovCloud (US-East) account provides access to the AWS GovCloud (US-East) Region only; the same applies to AWS GovCloud (US-West), which provides access to AWS GovCloud (US-West) only. Additionally, an Amazon AWS (China) account provides access to the Beijing and Ningxia Regions only.**
2. What are AWS Availability Zones and How Many are There?
An Availability Zone (AZ) consists of one or more data centers at a location within an AWS Region. Each AZ has independent cooling, power, and physical security. Additionally, they are connected through redundant, ultra-low-latency networks.
Using multiple AZs, customers can operate production applications and databases that are more fault tolerant, scalable, and highly available than they would be in a single data center.
Every AZ in an AWS Region is interconnected over high-bandwidth, low-latency, fully redundant metro fiber, providing high-throughput networking between AZs. Each AZ is physically separated from the others by a significant distance, although all are within 60 miles of each other.
Around the world, there are currently 69 Availability Zones. Here’s a breakdown of the Availability Zones within a selection of Regions.
| Name | Code | # of Availability Zones | Example AZ |
| --- | --- | --- | --- |
| US East (Ohio) | us-east-2 | 3 | us-east-2a |
| US East (N. Virginia) | us-east-1 | 6 | us-east-1a |
| US West (N. California) | us-west-1 | 3 | us-west-1a |
| US West (Oregon) | us-west-2 | 4 | us-west-2a |
| US GovCloud West | us-gov-west-1 | 3 | us-gov-west-1a |
| US GovCloud East | us-gov-east-1 | 3 | us-gov-east-1a |
| Asia Pacific (Hong Kong) | ap-east-1 | 3 | ap-east-1a |
| Asia Pacific (Mumbai) | ap-south-1 | 3 | ap-south-1a |
| Asia Pacific (Seoul) | ap-northeast-2 | 3 | ap-northeast-2a |
| Asia Pacific (Singapore) | ap-southeast-1 | 3 | ap-southeast-1a |
| Asia Pacific (Sydney) | ap-southeast-2 | 3 | ap-southeast-2a |
| Asia Pacific (Tokyo) | ap-northeast-1 | 4 | ap-northeast-1a |
| Middle East (Bahrain) | me-south-1 | 3 | me-south-1a |
| South America (São Paulo) | sa-east-1 | 3 | sa-east-1a |
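As the table shows, AZ identifiers are simply the Region code with a letter suffix appended. Here's a minimal Python sketch of that naming pattern – note that sequential letters are assumed purely for illustration, since AWS actually maps AZ letters to physical zones independently per account, so the letters you see may differ:

```python
import string

# Number of Availability Zones per Region (from the table above)
az_counts = {
    "us-east-2": 3,
    "us-east-1": 6,
    "us-west-2": 4,
    "ap-northeast-1": 4,
}

def az_names(region, count):
    """Build AZ identifiers by appending letter suffixes to the region code.

    Illustrative only: AWS maps AZ letters independently per account,
    so the actual letters exposed to you may not be sequential.
    """
    return [region + letter for letter in string.ascii_lowercase[:count]]

for region, count in az_counts.items():
    print(region, "->", ", ".join(az_names(region, count)))
```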
3. How to Choose a Region/AZ
So that’s what they are – now how do you choose a Region and Availability Zone for your infrastructure? Consider these factors:
- Distance – choose regions close to you and your customers to keep latency low
- Service availability – as we’ll discuss more below, there are some regions that offer more services than others, and new services will tend to be introduced in these regions first.
- Cost – always check the AWS pricing calculator to compare costs between regions. N. Virginia is usually among the least expensive; São Paulo is typically the most expensive.
- Compliance – GDPR, government contracting, and other regulated industries may require a specific region or multiple regions
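To make those trade-offs concrete, here's a toy Python sketch that ranks candidate Regions by a weighted score of latency and relative price, disqualifying Regions missing required services. All of the numbers, weights, and region entries are hypothetical placeholders – substitute your own latency measurements and figures from the AWS pricing calculator:

```python
# Toy region chooser: rank candidate Regions by a weighted score of
# latency and price. All numbers below are hypothetical placeholders --
# pull real ones from your own latency tests and the AWS pricing pages.
regions = {
    "us-east-1": {"latency_ms": 80,  "relative_price": 1.00, "has_needed_services": True},
    "us-west-2": {"latency_ms": 35,  "relative_price": 1.02, "has_needed_services": True},
    "sa-east-1": {"latency_ms": 140, "relative_price": 1.55, "has_needed_services": False},
}

def score(r):
    # Lower is better; regions missing required services are disqualified.
    if not r["has_needed_services"]:
        return float("inf")
    return r["latency_ms"] * 0.5 + r["relative_price"] * 100 * 0.5

best = min(regions, key=lambda name: score(regions[name]))
print("Best fit:", best)
```

The weighting is arbitrary; the point is that compliance and service availability are hard constraints, while latency and cost are trade-offs you can tune.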
4. What Sorts of Functions are Defined by Region and Availability Zone?
Some services, like AWS IAM, do not support Regions, so their endpoints do not include a Region. Other services, such as Amazon EC2, support Regions, but you can specify an endpoint that does not include one. Additionally, Amazon Simple Storage Service (Amazon S3) supports cross-Region replication.
AWS Regions introduced before March 20, 2019 are enabled by default. You can begin working in these Regions immediately. Regions introduced after March 20, 2019 are disabled by default – you must enable these Regions before you can use them. Administrators for an account can enable and disable Regions and use a policy condition that controls who can have access to AWS services in a particular AWS Region.
Some less widely used services, such as Alexa for Business, Amazon Augmented AI (A2I), Amazon Fraud Detector, and Amazon Mobile Analytics, are only available in the US East (N. Virginia) Region.
Region Differences Across Major Services
Amazon Simple Storage Service
Amazon Simple Storage Service (S3) is storage for the internet. You can use Amazon S3 to store and retrieve any amount of data at any time, from anywhere on the web.
You specify an AWS Region when you create your Amazon S3 bucket. For the S3 Standard, S3 Standard-IA, and S3 Glacier storage classes, your objects are automatically stored across multiple devices spanning a minimum of three Availability Zones within the AWS Region. Objects stored in the S3 One Zone-IA storage class are stored redundantly within a single Availability Zone in the AWS Region you select.
S3 operates in a minimum of three AZs, each separated by miles to protect against local events like fires, floods, etc. S3 is available in all Regions in North America, South America, Europe, Africa, China, Asia Pacific and the Middle East.
Amazon Elastic Compute Cloud
Amazon Elastic Compute Cloud (EC2) provides resizable, scalable computing capacity in the cloud. Each Amazon EC2 Region is designed to be isolated from the other Amazon EC2 Regions. This achieves the greatest possible fault tolerance and stability.
When you view your resources, you see only the resources that are tied to the Region that you specified. Why does this happen? Because Regions are isolated from each other, and resources are not automatically replicated across Regions.
When you launch an EC2 instance, you must select an AMI that’s in the same Region. If the AMI is in another Region, you can copy the AMI to the Region you’re using. When you launch an instance, you can select an Availability Zone or let Amazon choose one for you.
EC2 is available in all Regions in North America, South America, Europe, Africa, China, Asia Pacific and the Middle East.
AWS Lambda
AWS Lambda runs your code in response to triggers and automatically manages the compute resources for you. Lambda maintains compute capacity across multiple AZs in each Region to help protect code against individual machine or data center facility failures.
AWS Lambda is available in all Regions in North America, South America, Europe, Africa, China, Asia Pacific and the Middle East. The only region Lambda is not available in is Osaka, which is a local region. This type of region is new and is made up of an isolated fault-tolerant infrastructure design located in a single data center.
Amazon Simple Notification Service
Amazon SNS is a highly available, durable, secure, fully managed messaging service that allows you to decouple distributed systems, microservices, and serverless applications. SNS uses cross-Availability-Zone message storage to provide high message durability.
Amazon SNS is available in all Regions in North America, South America, Europe, Africa, China, Asia Pacific and the Middle East.
Amazon Elastic Block Store
Amazon Elastic Block Store (EBS) is AWS’s block-level, persistent storage solution for Amazon EC2 that allows you to minimize data loss and recovery time while regularly backing up your data and log files across different geographic regions.
EBS volumes are replicated within an Availability Zone (AZ) and can easily scale to petabytes of data. This replication protects against failures, offering 99.999% availability and an annual failure rate (AFR) of between 0.1% and 0.2%. You can also restore new volumes from snapshots to quickly launch applications in new regions.
EBS Snapshots can be used to quickly restore new volumes across a region’s Availability Zones, enabling rapid scale.
EBS is available in all Regions in North America, South America, Europe, Africa, China, Asia Pacific and the Middle East.
Transferring Data Between Regions Can Matter Too
Transferring data between AWS services within a region is priced differently depending on whether the data stays within one AZ or crosses AZs.
Data transfers are free if you are within the same region, same availability zone, and use a private IP address. Data transfers within the same region, but in different availability zones, have a cost associated with them.
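Here's a rough cost sketch of that rule in Python. The default rate is only illustrative (cross-AZ traffic has commonly been billed around $0.01/GB in each direction); always check the current AWS data transfer pricing:

```python
def transfer_cost_usd(gb, same_az, private_ip, per_gb_each_way=0.01):
    """Estimate intra-region data transfer cost.

    Assumes the common AWS pattern: free within one AZ over private IPs,
    and a per-GB charge in each direction across AZs. The default rate
    is illustrative -- check the current AWS pricing page.
    """
    if same_az and private_ip:
        return 0.0
    # Cross-AZ (or public-IP) traffic is billed both in and out.
    return gb * per_gb_each_way * 2

print(transfer_cost_usd(500, same_az=True, private_ip=True))    # 0.0
print(transfer_cost_usd(500, same_az=False, private_ip=True))   # 10.0
```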
So, to summarize, AWS Regions are separate geographic areas and within these regions are isolated locations that are known as Availability Zones (AZ). It’s important to pay attention to the services offered in each Region and AZ so you can make sure you are getting the most optimal service in your area.
Looking for ways to manage cloud costs? If you use the cloud, the answer should always be yes. Without proper management of your cloud spend, you could end up spending more than you actually need to. We’ve compiled a list of tips and best practices to help you track and rightsize cloud spend and align capacity and performance to actual demand, so your cloud environment stays optimized.
1. Start with the Organizational Problem
It’s easy to find lots of specific ways to reduce and manage public cloud costs – and we have plenty of those to share. But let’s start with the core issue. Public cloud resources are provisioned and used throughout organizations – and governance and budgeting are organizational issues. You need to start at the root of the problem: who is responsible for what cloud costs? And how do you evaluate whether those costs are acceptable – or need to be addressed for wasted spend?
Many organizations solve this problem with a dedicated enterprise cloud manager or cloud center of excellence, a person or department (depending on the size of the organization and extent of cloud deployment) dedicated entirely to the use of cloud by employees, with cost a major focus.
Ultimately, it’s an issue of economics – and you need to think of it that way.
2. Get Familiar with the Cloud-Native Management Tools
The major public cloud providers offer native resource and cost management tools. Since you’re already enmeshed in their infrastructure offerings, it makes sense to evaluate the options within the cloud portals.
For example, on the issue of resource on/off scheduling, AWS, Azure, and Google Cloud each offer a tool. However, they have limitations – ignoring resource types that may benefit from scheduling, not providing actions, and providing data but not recommendations, to name a few. Here is a quick rundown of each of those tools and what they include.
Another example is the AWS Compute Optimizer – a big name in promise, and certainly worth reviewing for AWS users.
3. But, Know that Cloud Providers Won’t Solve All the Problems they Create
Enter the realm of third-party software. Whether because cloud providers don’t actively want you to save money (more likely, they want their services to be “sticky,” so they do promote some cost optimization options) or because it’s simply not a revenue driver for them, cloud cost management is often an afterthought for cloud providers. We’re seeing a change in the winds as providers roll out built-in savings options (for example, Google Cloud’s sustained use discounts), but cloud resource provisioning and optimization are a wild, ever-changing beast that cloud providers aren’t keeping up with.
That’s why it may be time to…
4. Find a Cost Management Tool That Fits Your Needs
As IT infrastructure changes, dedicated tools and processes for cloud cost management and cost control have become a necessity. Third-party cloud optimization tools help with cost visibility, governance, and cost optimization. Make sure you aren’t just focusing on cost visibility and recommendations – find a tool that takes the extra step and takes those actions for you.
It’d be beneficial to find a tool that can work with multiple clouds, multiple accounts within each cloud, and multiple regions within each account, so you can view recommendations across all your accounts in one easy-to-use interface. This added visibility and insight helps simplify managing cloud costs.
By the way – automation is key. By including cost optimization software in their cloud strategy, organizations eliminate the need for developers to write scheduling scripts and deploy them to fit a specific team’s requirements. This automation reduces the potential for human error and saves organizations time and money by allowing developers to reallocate their time to more beneficial tasks.
5. Get Visibility on Your Bill
If you’re going to manage your cloud costs better, you need to understand where your spending is going. Here’s a guide to get a consolidated billing view in AWS.
Relatedly, you’re also going to need to understand what each resource is for – which means you need a robust labeling strategy.
6. Use a Resource Tagging Strategy to Better Manage Cloud Costs
Tags are labels or identifiers that are attached to your instances. This is a way for you to provide custom metadata to accompany the existing metadata, such as instance family and size, region, VPC, IP information, and more. This helps manage your cloud costs by sorting, searching and filtering through your cloud environment.
With the application of tagging best practices in place, you can automate governance, improve your workflows and make sure your costs are controlled. Additionally, there are management and provisioning tools that can automate and maintain your tagging standards.
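As a sketch of how tags enable cost filtering, here's a small Python example that groups monthly cost by an "owner" tag across an inventory, surfacing untagged resources explicitly. The records and tag keys are made up for illustration:

```python
# Sketch: group monthly cost by an "owner" tag across a mixed inventory.
# The instance records and tag keys here are made up for illustration.
instances = [
    {"id": "i-0aaa", "tags": {"owner": "data-team", "env": "dev"},  "monthly_cost": 310.0},
    {"id": "i-0bbb", "tags": {"owner": "web-team",  "env": "prod"}, "monthly_cost": 620.0},
    {"id": "i-0ccc", "tags": {"env": "dev"},                        "monthly_cost": 95.0},
]

def cost_by_tag(resources, key):
    totals = {}
    for r in resources:
        # Surface tagging gaps explicitly instead of silently dropping them.
        bucket = r["tags"].get(key, "(untagged)")
        totals[bucket] = totals.get(bucket, 0.0) + r["monthly_cost"]
    return totals

print(cost_by_tag(instances, "owner"))
```

The "(untagged)" bucket is the useful part in practice: it shows you exactly how much spend your tagging policy has not yet accounted for.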
In ParkMyCloud, our software reads the names and tags assigned to VMs and recommends which are suitable for scheduling (“parking”).
7. Identify Idle/Underutilized Resources
Okay, so that’s how you get to the step of optimizing costs. So what are the ways you can actually manage cloud costs and optimize spending?
The easiest way to quickly and significantly reduce cloud costs is to identify resources that are not actually being used (typically in non-production environments).
Examples of resources that you may leave idle are: On-Demand Instances/VMs, relational databases, load balancers, and containers.
Once you’ve identified them, then you can schedule them to turn off when not needed, or as we like to say, “park” them.
By setting schedules for your instances to turn off when they are typically idle, you eliminate potential cloud waste and save money on your cloud bill. Typically, schedules would turn off instances between the hours of 7:00 pm and 9:00 am on weekdays, and all day on weekends. This way you don’t have to worry about manually turning instances on and off when you aren’t using them. By keeping workloads on just during business hours, you can save around 65% on your cloud bill.
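The savings from a parking schedule are just arithmetic on hours, as this quick Python sketch shows for the schedule described above (off 7:00 pm to 9:00 am on weekdays, off all weekend). Note that this particular schedule works out to roughly 70%; the ~65% figure corresponds to schedules with slightly longer on-hours:

```python
# Estimate savings from "parking" an instance outside business hours.
# Schedule: off 7:00 pm - 9:00 am on weekdays, off all weekend.
HOURS_PER_WEEK = 24 * 7          # 168
on_hours = 5 * (19 - 9)          # weekdays, 9:00 am - 7:00 pm = 50 hours

savings = 1 - on_hours / HOURS_PER_WEEK
print(f"On {on_hours}h/week -> roughly {savings:.0%} off the on-demand cost")
```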
8. Rightsize Your Instances
Another major source of cloud waste is oversized resources. When you rightsize, you match a workload to the best supporting virtual machine size, helping you optimize costs. This is important because many virtual machines in the cloud are sized much larger than necessary for the workloads running on them – a single instance change can save 50% or more of the cost. (Try it free to see how much you can save.)
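Here's a toy rightsizing check in Python: if peak CPU utilization never approaches capacity, step down a size. The size ladder, the 45% threshold, and the assumption that halving the size roughly doubles utilization are all illustrative simplifications, not AWS guidance:

```python
# Toy rightsizing check. Size ladder and threshold are illustrative.
SIZE_LADDER = ["xlarge", "large", "medium", "small"]

def recommend(current_size, peak_cpu_percent, threshold=45):
    i = SIZE_LADDER.index(current_size)
    # Halving the instance size roughly doubles utilization, so only
    # keep stepping down while the projected peak stays comfortably low.
    while peak_cpu_percent < threshold and i < len(SIZE_LADDER) - 1:
        i += 1
        peak_cpu_percent *= 2
    return SIZE_LADDER[i]

print(recommend("xlarge", peak_cpu_percent=20))
```

Real rightsizing tools look at memory, network, and disk as well as CPU, and at sustained history rather than a single peak, but the core logic is the same comparison of observed load against provisioned capacity.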
9. Know Your Purchasing Options & Discounts Offered by Cloud Providers – Starting with Reserved Instances
Each of the ‘big three’ cloud providers offer an assortment of purchasing options to lower costs from the listed On-Demand prices.
For example, AWS Reserved Instances, Azure Reserved Virtual Machine Instances, and Google Committed Use Discounts allow customers to purchase compute capacity in advance in exchange for a discount.
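The break-even logic behind a reservation is simple: on-demand bills only while an instance runs, while a reservation bills regardless. Here's a hedged Python sketch with hypothetical prices (real discounts vary by instance type, term, and payment option, typically in the 30-60% range):

```python
# Break-even sketch for a 1-year reservation vs on-demand.
# Prices are hypothetical placeholders, not real AWS rates.
on_demand_hourly = 0.10
reserved_hourly  = 0.062   # effective hourly rate of the reservation
hours_per_year   = 8760

util = 0.70  # fraction of the year the instance actually runs
on_demand = on_demand_hourly * hours_per_year * util  # billed only while running
reserved  = reserved_hourly * hours_per_year          # billed regardless of use
print(f"on-demand: ${on_demand:.0f}, reserved: ${reserved:.0f}")

# The reservation wins once utilization exceeds the ratio of the rates:
break_even = reserved_hourly / on_demand_hourly
print(f"break-even utilization: {break_even:.0%}")
```

The takeaway: reservations only pay off for steady-state workloads whose utilization exceeds the break-even point; for bursty or schedulable workloads, parking or spot capacity is usually the better lever.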
10. And Spot Instances
Another discounting mechanism is the option that lets you purchase unused capacity for a steep discount – in AWS these are referred to as spot instances, low-priority VMs in Azure, and preemptible VMs in Google.
11. Don’t Miss Sustained Use Discounts
Google also offers a unique cost-savings option that AWS and Azure don’t – Sustained Use Discounts.
12. Use AWS’s New Savings Plans
You’re probably familiar with AWS Reserved Instances. But have you been following along with the Savings Plans announced at re:Invent? If you use AWS compute services, you should be.
According to both our CTO and Corey Quinn, you should run, not walk, to the AWS portal to get your hands on some savings plans to better manage cloud costs.
Plus, you can now use savings plans to save up to 17% on Lambda workloads, per an announcement last week.
13. Review Your Contracts
Another sort of “purchasing option” is related to contract agreements. All three major cloud providers offer enterprise contracts. Typically, these are to encourage large companies to commit to specific levels of usage and spend in exchange for an across-the-board discount – examples of this would be AWS EDPs and Azure Enterprise Agreements.
14. Make Sure You’re Using Lambda Efficiently
It’s easy to get so caught up in building Lambda-based applications that you forget to optimize and plan for the costs Lambda will incur. While it may be cheap and easy to build these applications, if you run heavy workloads without taking costs into account, you’ll end up running up your bill.
Continuously keeping track of spend, monitoring usage and understanding its behavior is essential to keeping Lambda costs controlled and optimized.
15. Review Credit Options
Each of the cloud providers offers ways to get credits you can put toward your bill. By offering these credits, Google Cloud, Azure and AWS are trying to make it easy and in some cases free to get started using their cloud platforms.
16. Keep Your Instance Types Up to Date
Did you ever think that simply modernizing your VMs and databases to make sure they are running on the latest instance family can save you money?
Cloud providers incentivize instance modernization by pricing the newest generations the lowest. Typically, new instance families ship with newer CPUs, but they can also bring networking or memory improvements.
So you get a lower price (often a 10-20% discount) and better performance – modernizing your instances is almost a no-brainer.
…and the list goes on. Managing cloud costs can seem like a daunting task but it doesn’t have to be! Follow these tips and start optimizing your cloud environment.
Got any tips we should add? Let us know in the comments below!
Azure credits are a perk offered by Microsoft that help you save money on your cloud bill. Like a gift card for a retail store, credits are applied to your account to help cover costs until they are exhausted or expire. In a sense, these credits act as a spending limit because any usage of resources or products that are not free will be deducted from the credit amount. We found 7 different ways that you can earn credits and start saving on your Azure bill.
1. Visual Studio Subscription
If you’re a Visual Studio subscriber, you get monthly Azure credits that can be used to explore and test out different Azure services. The amount of Azure credits that you receive will depend on the type of Visual Studio subscription that you have.
With a Visual Studio Enterprise subscription, you get a standard of $150 in monthly credits. For subscriptions through MSDN Platforms you get $100 a month. For Visual Studio Professional and Visual Studio Test Professional, you get $50 a month.
2. Azure for Students
Full-time students at an accredited, two or four-year educational institution in a STEM-related field are eligible for these credits.
When a student signs up with their school email address, Microsoft gives them $100 in credit, along with free access to learning paths, labs, and professional developer tools, to help them further their career and build their skills in Azure.
3. Azure Free Account
With a free account, you get access to a number of popular Azure services for no cost. In addition to access to free services, you’ll also get a $200 credit. It’s important to note that while the free account lasts for 12 months, your credits must be spent in the first 30 days.
Whether you’re just getting started in Azure or are looking to further your knowledge, a free account is always a great way to test the waters without having to make a long term commitment.
4. Microsoft Partner Network
In the Partner Network, members of Microsoft’s Action Pack program receive $100 of Azure credits every month. Based on your computing needs, you can use these credits for any Azure service; examples include Virtual Machines, Web Sites, Cloud Services, Mobile Services, Storage, SQL Database, Content Delivery Network, HDInsight, Media Services, and more.
The great part about this is that there are a handful of usage scenarios that won’t consume all of the $100 credit – you can use this pricing calculator to estimate how much you could use with a $100 credit.
Any of the unused monthly credits can’t be carried over to succeeding months or transferred to other Azure subscriptions, so make sure to use it while you can!
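As a back-of-the-envelope example of what the pricing calculator would tell you, here's a quick Python burn-rate estimate; the VM hourly rate is a hypothetical placeholder, so substitute a real price from the calculator:

```python
# Rough burn-rate estimate: how far does a $100 monthly credit go?
# The VM rate below is a hypothetical placeholder -- look up real
# prices with the Azure pricing calculator.
credit = 100.00
vm_hourly = 0.096  # e.g. a small general-purpose VM (placeholder rate)
hours = credit / vm_hourly
print(f"~{hours:.0f} VM-hours, or about {hours / 24:.0f} days of 24/7 runtime")
```

At that placeholder rate, a single small VM running around the clock would not exhaust the credit within the month – consistent with the point above that some usage scenarios won't consume all of the $100.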
5. Microsoft for Startups
This global program is designed to help startups as they build and scale their organizations. Part of the technical enablement features that are always free and available to all startups is $200 of Azure credits that can be used towards any service for 30 days. This is a great option for startups because it’s free and gives you the ability to explore all the different offerings without having to spend any money.
6. Azure for Education
With Azure for Education, users are given access to the learning resources and developer tools that educators and students need in order to build cloud-based skills. This program is available to students, educators and institutions – once signed up, educators get $200 of Azure credits.
Whether you’re teaching advanced workloads, interested in building cloud-based skills, or just getting started in your Azure learning journey, this program provides guidance and resources for individuals looking to further their knowledge in Azure.
7. Microsoft for Nonprofits
In an effort to make their technology more affordable and accessible for nonprofit and nongovernmental organizations, Microsoft offers donated and discounted products. Each year, approved organizations receive $3,500 in Azure credits which can be used to purchase all Azure workloads created by Microsoft (excluding Azure Active Directory, which is licensed under EM+S).
No matter the industry you’re in or learning level you’re at, there are a wide variety of credits and resources offered that can help make Azure an affordable option for you.
Each of the ‘big three’ cloud providers (AWS, Azure, GCP) offer a number of cloud certification options that individuals can get to validate their cloud knowledge and skill set, while helping them advance in their careers and broaden the scope of their achievements.
Between the various platform-specific, role-based (such as developer or architect), and domain-focused certifications, CSPs have numerous options available to help you bring more value to your organization as you keep up with new business demands and continue to challenge yourself and grow in the field. With these certifications, you are more likely to achieve business goals thanks to your proficiency in specific areas – and you’ll benefit from an extra edge on your resume in your next job search.
Here’s an overview of the certifications offered by AWS, Azure, and GCP and what capabilities an individual validates by completing these certifications.
Amazon Web Services (AWS) Certifications
AWS offers certifications for different learning levels. The four different categories/levels of certifications include:
- Foundational: individuals should have at least six months of basic/foundational industry and AWS knowledge.
- Associate: expected to have one year of experience solving problems and implementing solutions with AWS.
- Professional: aimed at individuals who have two years of comprehensive experience operating, designing, and troubleshooting solutions using AWS.
- Specialty: each certification in this category requires technical AWS experience in its specialty domain. Requirements range from a minimum of 6 months to 5 years of hands-on experience.
AWS certifications offered include:
- AWS Certified Cloud Practitioner
- Individuals are expected to effectively demonstrate a comprehensive understanding of AWS fundamentals and best practices.
- AWS Certified Solutions Architect – Associate
- Individuals in an associate solutions architect role have 1+ years of experience designing available, fault-tolerant, scalable, and most importantly cost-efficient, distributed systems on AWS.
- Can demonstrate how to build and deploy applications on AWS.
- AWS Certified SysOps Administrator – Associate
- This certification is meant for systems administrators that hold a systems operations role and have at least one year of hands-on experience in management, operations and deployments on AWS.
- They must be able to migrate on-premises workloads to AWS
- They can estimate usage costs and identify operational cost control methods.
- Must prove knowledge of deploying, operating and managing highly available, scalable and fault-tolerant systems on AWS.
- AWS Certified Developer – Associate
- This is for individuals who hold a development role and have at least one or more years of experience developing and maintaining AWS-based applications.
- Display a basic understanding of core AWS services, uses, and basic AWS architecture best practices.
- Demonstrate that they are capable of developing, deploying, and debugging cloud-based applications using AWS
- AWS Certified Solutions Architect – Professional
- Individuals in a professional solutions architect role have two or more years of experience operating and managing systems on AWS.
- They must be able to design and deploy scalable, highly available, and fault-tolerant applications on AWS.
- Must demonstrate knowledge of migrating complex, multi-tier applications on AWS
- They are responsible for implementing cost-control strategies.
- AWS Certified DevOps Engineer – Professional
- Intended for individuals who have a DevOps engineer role and two or more years of experience operating, provisioning and managing AWS environments.
- They are able to implement and manage continuous delivery systems and methodologies on AWS.
- Additionally, they must be able to implement and automate security controls, governance processes, and compliance validation.
- Can deploy and define metrics, monitoring and logging systems on AWS.
- Are responsible for designing, managing, and maintaining tools that automate operational processes.
- AWS Certified Advanced Networking – Specialty
- Intended for individuals who perform intricate networking tasks.
- Design, develop, and deploy cloud-based solutions using AWS
- Design and maintain network architecture for all AWS services
- Leverage tools to automate AWS networking tasks
- AWS Certified Big Data – Specialty
- For individuals who perform complex Big Data analyses and have at least two years of experience using AWS.
- Implement core AWS Big Data services according to basic architecture best practices
- Design and maintain Big Data solutions
- Leverage tools to automate data analysis
- AWS Certified Security – Specialty
- Individuals who have a security role and at least two years of hands-on experience securing AWS workloads.
- Exhibit an understanding of specialized data classifications and AWS data protection mechanisms as well as data encryption methods and secure Internet protocols and AWS mechanisms to implement them
- Knowledge of AWS security services and features to provide a secure production environment
- An understanding of security operations and risk
- AWS Certified Machine Learning – Specialty
- Intended for individuals in a development or data science role.
- Ability to design, implement, deploy and maintain machine learning solutions for specific business problems.
- AWS Certified Alexa Skill Builder – Specialty
- Intended for individuals who have a role as an Alexa skill builder.
- Individuals have demonstrated an ability to design, build, test, publish and manage Amazon Alexa skills.
Microsoft Azure Certifications
Following the Azure learning path under Microsoft, there are certifications available that allow you to demonstrate your expertise in Microsoft cloud-related technologies and advance your career by earning one of the new Azure role-based certifications or an Azure-related certification in platform, development, or data.
Azure certifications include:
- Azure Solutions Architect Expert
- Intended for individuals that have an expertise in network, compute, security and storage so that they can design solutions that run on Azure
- Azure Fundamentals
- Individuals will prove their understanding of cloud concepts, Azure pricing and support, core Azure services, as well as the fundamentals of cloud privacy, security, trust and compliance.
- Azure DevOps Engineer Expert
- Individuals will demonstrate an ability to combine people, process, and technologies to continuously deliver valuable products and services that meet business objectives in addition to end user needs.
- Azure Developer Associate
- For individuals that can design, build, test and maintain cloud solutions – such as applications and services – and partner with cloud solutions architects, cloud administrators, cloud DBAs, and clients in order to implement these solutions.
- Azure Data Scientist Associate
- Intended for individuals that apply Azure’s machine learning techniques to train, evaluate, and deploy models that will ultimately help solve business problems.
- Azure Data Engineer Associate
- For individuals that design and implement the management, security, monitoring, and privacy of data – using the full stack of Azure data services – to satisfy business needs.
- Azure AI Engineer Associate
- Intended for individuals that use Machine Learning, Knowledge Mining, and Cognitive Services to architect and implement Microsoft AI solutions – this involves natural language processing, computer vision, speech, agents and bots.
- Azure Administrator Associate
- Individuals must demonstrate their ability to implement, monitor and maintain Azure solutions – this includes major services related to storage, compute, security and network.
- Azure Security Engineer Associate
- Individuals are expected to be able to implement security controls and threat protection, manage identity and access. Additionally, they must be able to protect data, applications, and networks in the cloud as well as hybrid environments as part of end-to-end infrastructure.
- Azure for SAP Workloads Specialty
- In this specialty, architects have extensive experience and knowledge of the SAP Landscape Certification process and industry standards that are specific and critical to the long-term operation of an SAP solution.
- Azure IoT Developer Specialty
- In this specialty, individuals must prove that they understand how to implement the Azure services that form an IoT solution – this includes data analysis, data processing, data storage options, and PaaS options.
- Must be able to recognize Azure IoT service configuration settings within the code portion of an IoT solution.
Google offers three different levels of available certifications:
- Associate certification – focused on the fundamental skills of deploying, monitoring, and maintaining projects on Google Cloud.
- This certification is a good starting point for those new to cloud and can be used as a path to professional level certifications.
- Recommended experience: 6+ months building on Google Cloud
- Professional certification – spans key technical job functions and assesses advanced skills in design, implementation, and management.
- These certifications are recommended for individuals with industry experience and familiarity with Google Cloud products and solutions.
- Recommended experience: 3+ years of industry experience, including 1+ years on Google Cloud
- User certification – intended for individuals with experience using G Suite; assesses an individual’s ability to use core collaboration tools.
- Recommended experience: Completion of Applied Digital Skills training course and G Suite Essentials quest, and 1+ months on G Suite.
Available certifications include:
- Associate Cloud Engineer
- Intended for individuals that can deploy applications, monitor operations, and manage enterprise solutions.
- Individuals display an ability to use the Google Cloud Console and the command-line interface to perform common platform-based tasks to maintain one or more deployed solutions that leverage Google-managed or self-managed services on Google Cloud.
- Individuals display an ability to set up a cloud solution environment, plan and configure a cloud solution, deploy and implement a cloud solution, ensure successful operation of a cloud solution, and configure access and security.
- Professional Cloud Architect
- For individuals that enable organizations to leverage Google Cloud technologies.
- These individuals can design, develop, and manage secure, scalable, and highly available solutions that drive business objectives.
- Individuals display an ability to design and plan a cloud solution architecture, manage and provision the cloud solution infrastructure, design for security and compliance, analyze and optimize technical and business processes, manage implementations of cloud architecture, and ensure solution and operations reliability.
- Professional Cloud Developer
- These individuals build scalable and highly available applications using Google recommended practices and tools that leverage fully managed services.
- Have experience with next generation databases, runtime environments, and developer tools.
- Have proficiency with at least one general purpose programming language and are skilled in using Stackdriver.
- Individuals display an ability to design highly scalable, available, and reliable cloud-native applications, build and test applications, deploy applications, integrate Google Cloud Platform services, and manage application performance monitoring.
- Professional Data Engineer
- Intended for individuals that enable data-driven decision making by collecting, transforming, and publishing data.
- Individuals should be able to design, build, operate, manage, and monitor secure data processing systems.
- Individuals display an ability to design data processing systems, build and operationalize data processing systems, operationalize machine learning models, and ensure solution quality.
- Professional Cloud DevOps Engineer
- Individuals are responsible for efficient development operations that can balance service reliability and delivery speed.
- Individuals are expected to be skilled in using Google Cloud Platform to build software delivery pipelines, deploy and monitor services, and manage and learn from incidents.
- Individuals display an ability to apply site reliability engineering principles to a service, optimize service performance, implement service monitoring strategies, build and implement CI/CD pipelines for a service, and manage service incidents.
- Professional Cloud Security Engineer
- Intended for individuals that enable organizations to design and implement a secure infrastructure on Google Cloud Platform.
- They are expected to have a thorough understanding of security best practices and industry security requirements.
- These individuals design, develop, and manage a secure infrastructure leveraging Google security technologies and should be proficient in all aspects of Cloud Security.
- Individuals display an ability to configure access within a cloud solution environment, configure network security, ensure data protection, manage operations within a cloud solution environment and ensure compliance.
- Professional Cloud Network Engineer
- Intended for individuals who implement and manage network architectures in Google Cloud Platform.
- These individuals ensure successful cloud implementations using the command line interface or the Google Cloud Platform Console.
- Individuals display an ability to design, plan, and prototype a GCP Network, implement a GCP Virtual Private Cloud (VPC), configure network services and implement hybrid interconnectivity.
- Professional Collaboration Engineer
- Intended for individuals that transform business objectives into tangible configurations, policies, and security practices as they relate to users, content, and integrations.
- Individuals use tools, programming languages, and APIs to automate workflows.
- Individuals display an ability to plan and implement G Suite authorization and access; manage user, resource, and Team Drive lifecycles; manage mail; control and configure G Suite services; configure and manage endpoint access; monitor organizational operations; and advance G Suite adoption and collaboration.
- G Suite User – User Certification
- This certification lets employers know that you possess the digital skills to work collaboratively and productively in a professional environment, and that you can complete common workplace activities using cloud-based tools to create and share documents, spreadsheets, presentations, and files.
Where to Start
If you aren’t sure where to start, each cloud provider offers a certification that requires only a basic understanding of the platform – a great way to get the ball rolling on your cloud certification journey. The three certifications for beginners are: AWS Certified Cloud Practitioner, Microsoft Certified Azure Fundamentals, and Google Associate Cloud Engineer. Good luck!
5 Favorite AWS Training Resources
5 Free Azure Training Resources
5 Free Google Cloud Training Resources
Azure Internet of Things (also known as Azure IoT) is a collection of cloud services managed by Microsoft that monitor, connect, and control billions of IoT assets. Essentially, it is a solution that operates in the cloud and is made up of one or more IoT devices and one or more back-end services that communicate with one another. Organizations across all industries use Azure IoT to help them improve their business and achieve their IoT goals.
An IoT solution is made up of three main parts – devices, back-end services, and the communication between the two. In this blog, we’ll dig a little deeper into these components, the different IoT services, and possible challenges.
How to Use Azure IoT
IoT devices are, broadly, objects with sensors attached that can transmit data over the internet to other objects or to people, typically without human intervention. Many of these devices communicate through a built-in Wi-Fi chip. Some examples of IoT devices that work with Azure IoT include:
- Pressure sensors on a remote oil pump
- Temperature and humidity sensors in an air-conditioning unit
- Accelerometers in an elevator
- Presence sensors in a room
Azure IoT Hub lets you connect, manage, and scale your IoT devices, and enables them to communicate securely with back-end services in both directions. Here are some examples of how this communication works:
- Your device may send temperature readings from a mobile refrigeration truck to an IoT hub every 5 minutes.
- The back-end service can ask the device to send telemetry more frequently to help diagnose a problem.
- Your device can send alerts based on the values read from its sensors. For example, if monitoring a batch reactor in a chemical plant, you may want to send an alert when the temperature exceeds a certain value.
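The device-side alerting pattern above can be sketched in a few lines of Python. This is a minimal, self-contained simulation – the threshold, payload shape, and `temperatureAlert` property name are illustrative assumptions, and no real hub connection is made:

```python
import json

# Hypothetical alert threshold for the batch-reactor example.
ALERT_TEMP_C = 60.0

def build_telemetry(reading):
    """Serialize a sensor reading as the JSON body a device would send
    to IoT Hub, plus message properties the back end can route on."""
    body = json.dumps(reading)
    # Flagging the message lets the cloud side route alerts
    # without parsing every payload.
    properties = {
        "temperatureAlert": str(reading["temperature"] > ALERT_TEMP_C).lower()
    }
    return body, properties

body, properties = build_telemetry({"temperature": 72.5, "humidity": 44.1})
print(body)        # {"temperature": 72.5, "humidity": 44.1}
print(properties)  # {'temperatureAlert': 'true'}
```

With the Azure IoT Device SDK for Python, the body would typically be wrapped in a `Message` and sent via `IoTHubDeviceClient.send_message`; the connection details are omitted here to keep the sketch self-contained.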
Here are some of the functions a back-end service can provide:
- Receiving telemetry at scale from your devices, and determining how to process and store that data.
- Analyzing the telemetry to provide insights, either in real-time or after the fact.
- Sending commands from the cloud to a specific device.
- Provisioning devices and controlling which devices can connect to your infrastructure.
- Controlling the state of your devices and monitoring their activities.
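A toy sketch of the first and last of these functions – receiving telemetry, storing it, and reacting to device state – might look like this in Python (the message shape, threshold, and `deviceId` field are assumptions for illustration, not an Azure API):

```python
import json

ALERT_TEMP_C = 60.0  # hypothetical threshold, matching the reactor example

def handle_telemetry(raw_message, store, alerts):
    """Parse one device-to-cloud message, store it, and record an alert
    if the reading crosses the threshold."""
    event = json.loads(raw_message)
    store.append(event)  # decide how to process and store the data
    if event.get("temperature", 0.0) > ALERT_TEMP_C:
        alerts.append(event["deviceId"])

store, alerts = [], []
handle_telemetry('{"deviceId": "truck-7", "temperature": 72.5}', store, alerts)
handle_telemetry('{"deviceId": "truck-2", "temperature": 3.5}', store, alerts)
print(len(store), alerts)  # 2 ['truck-7']
```

In a real deployment, messages would arrive through IoT Hub’s Event Hubs-compatible endpoint or a message routing rule rather than direct function calls.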
Azure IoT Services Offered
Microsoft offers eight IoT services in Azure. With so many options, it can be confusing to figure out which one best fits your needs; how much help and control you want in building your solution will determine which service is the best one for you. Here are the available services and what they can be used for:
- IoT Central: This application platform simplifies the creation of IoT solutions and helps reduce the burden and cost of IoT management, operations, and development. This service is intended for straightforward solutions that don’t require a significant amount of service customization.
- IoT solution accelerators: This is a group of PaaS solutions that can be used to accelerate development of an IoT solution.
- IoT Hub: This service lets you connect, monitor, and control billions of IoT devices by having them connect to an IoT hub. It is especially helpful if you need communication that goes both ways between your devices and back-end, and it is the primary underlying service for IoT Central and the IoT solution accelerators.
- IoT Hub Device Provisioning Service: This is a helper service for IoT Hub that lets you securely provision devices to your IoT hub. Instead of provisioning millions of devices one at a time, this service gives you the ability to provision millions of devices quickly and easily, all at once.
- IoT Edge: This service, built on top of IoT Hub, can be used to analyze data on IoT devices themselves instead of in the cloud.
- Azure Digital Twins: This service enables you to create comprehensive models of the physical environment.
- Time Series Insights: This service allows you to store, visualize, and query large volumes of time series data generated by IoT devices.
- Azure Maps: This service provides geographic data to web and mobile applications.
Things to consider
IoT devices have different characteristics than other clients such as apps and browsers, and Azure’s IoT devices, tools, and data analytics can help you manage those differences as you pursue your IoT goals. But adopting IoT technologies presents its own set of challenges. While reducing application costs and easing development efforts are important considerations when implementing an IoT solution, connecting devices securely and reliably is often the biggest challenge organizations encounter when using IoT services.