Azure Internet of Things (Azure IoT) is a collection of Microsoft-managed cloud services that connect, monitor, and control billions of IoT assets. At its core, an Azure IoT solution runs in the cloud and is made up of one or more IoT devices and one or more back-end services that communicate with one another. Organizations across all industries use Azure IoT to improve their business and achieve their IoT goals.
An IoT solution has three main parts: devices, back-end services, and the communication between the two. In this blog, we'll dig a little deeper into these components, the different IoT services, and some possible challenges.
How to Use Azure IoT
An IoT device is essentially anything with a sensor attached that can transmit data to other objects or to people over the internet, typically without human intervention. It's also worth noting that many of these devices communicate through a built-in Wi-Fi chip. Some examples of IoT devices that work with Azure IoT include:
Pressure sensors on a remote oil pump
Temperature and humidity sensors in an air-conditioning unit
Accelerometers in an elevator
Presence sensors in a room
With Azure IoT Hub you can connect, manage, and scale your IoT devices so they communicate securely with back-end services in both directions. Here are some examples of how this communication works:
Your device may send a temperature reading from a mobile refrigeration truck to an IoT hub every 5 minutes.
The back-end service can ask the device to send telemetry more frequently to help diagnose a problem.
Your device can send alerts based on the values read from its sensors. For example, if you're monitoring a batch reactor in a chemical plant, you may want to send an alert when the temperature exceeds a certain value.
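That last pattern – compare a reading against a threshold, then emit an alert alongside the normal telemetry – is simple enough to sketch in a few lines. Everything below (the message shapes, field names, and the 150°C threshold) is hypothetical and not part of any Azure SDK:

```python
# Illustrative device-side alerting logic; the message shapes, field
# names, and threshold are made up for this sketch, not Azure IoT APIs.
ALERT_THRESHOLD_C = 150.0  # e.g. a batch reactor's maximum safe temperature

def build_messages(readings):
    """Turn raw temperature readings into telemetry plus any alerts."""
    messages = []
    for reading in readings:
        messages.append({"type": "telemetry", "temperature_c": reading})
        if reading > ALERT_THRESHOLD_C:
            # Reading exceeded the safe value: queue an alert as well.
            messages.append({"type": "alert", "temperature_c": reading})
    return messages
```

In a real solution these dictionaries would be serialized and sent to IoT Hub using the device SDK for your platform.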
Here are some of the functions a back-end service can provide:
Receiving telemetry at scale from your devices, and determining how to process and store that data.
Analyzing the telemetry to provide insights, either in real-time or after the fact.
Sending commands from the cloud to a specific device.
Provisioning devices and controlling which devices can connect to your infrastructure.
Controlling the state of your devices and monitoring their activities.
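To make the division of labor concrete, here's a minimal sketch of a back end doing two of those jobs at once – storing every event and sending a command back to a specific device. All names and the 30°C threshold are invented for illustration; a production solution would lean on IoT Hub message routing and a real data store:

```python
# Hypothetical back-end pipeline: store all telemetry, command hot devices.
def process_telemetry(events, hot_threshold_c=30.0):
    """Return (stored_events, cloud_to_device_commands)."""
    store, commands = [], []
    for event in events:
        store.append(event)  # "determining how to process and store that data"
        if event["temperature_c"] > hot_threshold_c:
            # "sending commands from the cloud to a specific device"
            commands.append({"device_id": event["device_id"],
                             "command": "increase_telemetry_rate"})
    return store, commands
```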
Azure IoT Services Offered
Microsoft offers a number of IoT services in Azure. With so many different options it can be confusing to figure out which one best fits your needs. How much help and control you want in building your own solution will determine which service is best for you. Here are the available services and what they can be used for:
IoT Central: This application platform simplifies the creation of IoT solutions and helps reduce the cost and burden of IoT development and management operations. It's intended for straightforward solutions that don't require a significant amount of service customization.
IoT Hub: This service lets you monitor and control billions of IoT devices that connect to an IoT hub. It's especially helpful if you need two-way communication between your devices and your back end. This is also the underlying service for IoT solution accelerators and IoT Central.
IoT Hub Device Provisioning Service: This helper service for IoT Hub lets you securely provision devices to your IoT hub. Instead of enrolling devices one at a time, it gives you the ability to provision millions of devices quickly and easily, with no per-device manual steps.
IoT Edge: This service can be used to analyze data on IoT devices instead of in the cloud. This is a service that builds on top of IoT Hub.
Azure Digital Twins: This service enables you to create comprehensive models of the physical environment.
Time Series Insights: This service allows you to store, visualize, and query extensive amounts of time series data that is generated by an IoT device.
Azure Maps: This service provides geographic data to web and mobile applications.
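To give a feel for what the Device Provisioning Service automates – assigning each device to an IoT hub without per-device manual steps – here's a toy allocation policy. It's purely illustrative: the real service supports several allocation policies, and none of this is the DPS API:

```python
import hashlib

def assign_hub(device_id, hubs):
    """Deterministically spread devices across hubs (illustrative only,
    not the actual Device Provisioning Service API)."""
    digest = hashlib.sha256(device_id.encode("utf-8")).hexdigest()
    return hubs[int(digest, 16) % len(hubs)]
```

Because the assignment is deterministic, a device that re-provisions lands on the same hub every time.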
Things to consider
IoT devices have different characteristics from other clients such as apps and browsers. Azure IoT devices, tools, and data analytics can help you manage these differences and achieve your IoT goals. But adopting IoT technologies presents its own set of challenges. While reducing IoT application costs and easing development effort are important considerations when implementing an IoT solution, connecting devices securely and reliably is often the biggest challenge organizations encounter when using IoT services.
AWS Trusted Advisor is a service that helps you understand whether you are using your AWS services well. It does this by checking 72 best practices across 5 categories: Cost Optimization, Performance, Security, Fault Tolerance, and Service Limits. All AWS users have access to 7 of those best-practice checks, while Business Support and Enterprise Support customers have access to all items in all categories. Let's dive into each category to see what is there and what is missing.
A category that is near and dear to our hearts here at ParkMyCloud, the Cost Optimization category includes items related to the following services:
This list includes many of the services that are often the most expensive line items in an AWS account, but doesn't take into account a large percentage of the AWS services available. Also, these recommendations only provide links to AWS documentation that might help you solve the problem, as opposed to a service like ParkMyCloud that provides both the recommendations and the ability to act on them by shutting down or resizing idle instances for you.
This category caters more towards production instances, as it aims to make sure the performance of your applications is not hindered by overutilization (as opposed to the Cost Optimization category above, which is focused more on underutilization). This includes:
EC2 – highly-utilized VMs, large number of security group rules (per instance or per security group)
EBS – SSD volume configuration, overutilized magnetic volumes, EC2 to EBS throughput
This category is one of the weakest in terms of services supported, so you may want to factor that in if you're trying to make sure production applications running on other AWS services are performing well.
The security checks of AWS Trusted Advisor will look at the following items:
Security Groups – Unrestricted ports, unrestricted access, RDS access risk
IAM – Use of Roles/Users, key rotation, root account MFA, password policy
S3 – Bucket permissions
CloudTrail – logging use
Route 53 – MX and SPF record sets
ELB – Listener security, Security groups
CloudFront – Custom SSL certificates, certificates on the origin server
Access keys – Exposed keys
Snapshots – EBS public snapshots, RDS public snapshots
Security is a tough category to get right, as almost every one of these needs to be reviewed for your business needs. While this isn’t an exhaustive list of security considerations, it certainly helps your organization cover the basics and prevent some “I can’t believe we did that” moments.
One of the main benefits of the cloud that often gets overlooked is the use of distributed resources to increase fault tolerance for your services. These items in the fault tolerance category are focused on increasing the redundancy and availability of your applications. They include:
EBS – Snapshots
EC2 – Availability Zone balance
Load Balancer – optimization
VPN Tunnel – redundancy
Auto Scaling Groups – general ASG usage, health check
RDS – backups, multi-AZ configuration
S3 – bucket logging, bucket versioning
Route 53 – Name server delegations, record sets with high TTL or failover resources, deleted health checks
Direct Connect – Connection / location / virtual interface redundancy
Aurora DB – instance accessibility
EC2 Windows – EC2Config agent age, PV driver versions, ENA driver versions, NVMe driver versions
Overall, this turns out to be a great list of checks that can really help make sure your production applications have minimal downtime and minimal latency. Additionally, features like snapshots and versioning help with recovering from problems in a timely fashion.
One of the hidden limitations AWS puts on each account is a limit on how many resources you can spin up at any given time. This makes sense for AWS, so new users don't unintentionally (or intentionally!) cause a denial of service for other users. These service limits can be increased if you ask nicely, but Trusted Advisor is one of the few places where you can actually see whether you're coming close to them across a number of core services.
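The underlying check is straightforward: compare current usage to the quota and warn when you get close. Here's a sketch of that logic – the 80% threshold and the sample data are our own choices for illustration, though a warn-near-the-limit approach mirrors how Trusted Advisor flags limits:

```python
def limit_warnings(usage, limits, threshold=0.8):
    """Return the services at or above `threshold` of their quota.

    `usage` and `limits` map a service/limit name to current count and
    quota; both dicts here are hypothetical sample data.
    """
    return [svc for svc, used in usage.items()
            if used >= threshold * limits[svc]]
```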
Verdict: Helpful, But Not Game-Changing
While these checks and advice from AWS Trusted Advisor certainly help AWS users see ways to improve their usage of AWS, the lack of one-click-action makes these recommendations just that – recommendations. Someone still has to go verify the recommendations and take the actions, which means that in practice, a lot of this gets left as-is. That said, while I wouldn’t suggest upgrading your support just for Trusted Advisor, it certainly can provide value if you’re already on Business Support or Enterprise Support.
There have been about 1.3 zillion blogs posted this week recapping the announcements from AWS re:Invent 2019, and of course we have our own spin on the topic. Looking primarily at cost optimization and cost visibility, there were a few cool new features announced. None of them were quite as awesome as the Savings Plans announcement last month, but they are still worthy of note.
AWS Compute Optimizer
With AWS jumping feet-first into machine learning, it is no surprise that they turned it loose on instance rightsizing.
The Compute Optimizer is a standalone service in AWS, falling under the Management & Governance heading (yes, it is buried in the gigantic AWS menu). It offers rightsizing for the M, C, R, T, and X instance families and for Auto Scaling groups of a fixed size (with the same values for desired/min/max capacity). To use the service you must first “opt in” in each of your AWS accounts. Navigate to AWS Compute Optimizer and click the “Get Started” button.
Interestingly, they only promise a cost reduction of “up to 25%”. This is probably a realistic yet humble claim, given that the savings for a single downsize within the same instance family is typically 50%. That said, the only way to get that 50% cost reduction is to install the AWS CloudWatch Agent on your instances and configure it to send memory metrics to CloudWatch. If you are not running the agent…then no memory metrics. Like ParkMyCloud rightsizing, in the absence of memory metrics the AWS Compute Optimizer can only make cross-family recommendations that change the CPU or network configuration, leaving memory constant. Hence – a potential 25% cost reduction.
The best part? It is free! All in all, this feature looks an awful lot like ParkMyCloud rightsizing recommendations, though I believe we add a bit more value by making our recommendations a bit more prominent in our Console – not mixed-in with 100+ other menu items… The jury is still out on the quality of the recommendations; watch for another blog soon with a deeper dive.
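The 25%-versus-50% arithmetic is worth spelling out: a one-size downsize within a family roughly halves the hourly price, while a memory-preserving move that only trims CPU/network captures about half of that saving. The prices below are ballpark figures we plugged in for illustration – check current EC2 pricing before relying on them:

```python
def savings_pct(old_hourly, new_hourly):
    """Percent saved moving from one on-demand hourly price to another."""
    return round(100 * (1 - new_hourly / old_hourly), 1)

# Ballpark on-demand prices (assumed, not quoted from AWS):
# m5.xlarge ~$0.192/hr -> m5.large ~$0.096/hr, one full size down.
same_family = savings_pct(0.192, 0.096)  # full downsize: needs memory metrics
# Hypothetical CPU/network-only move when memory must stay constant:
cpu_only = savings_pct(0.192, 0.144)
```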
Amazon EC2 Inf1 Instance Family
Every time you congratulate yourself on how much you have been able to save on your cloud costs, AWS comes up with a new way to help you spend that money you had “left over.” In this case, AWS has created a custom chip, the “Inferentia”, purposely designed to optimize machine learning inference applications.
Inference applications essentially take a machine learning model that has already been trained via some deep-learning framework like TensorFlow, and use that model to make predictions based on new data. Examples of such applications include fraud detection and image or speech recognition.
The Inferentia is combined in the new Inf1 family with Intel® Xeon® CPUs to make a blazingly fast machine for this special-purpose processing. This higher processing speed allows you to do more work in less time than you could do with the previous instance type used for inferencing applications, the EC2 G4 family. The G4 is built around Graphics Processing Unit (GPU) chips, so it is pretty easy to see that a purpose-built machine learning chip can be made a lot faster. AWS claims that the Inf1 family will have a “40% lower cost per inference than Amazon EC2 G4 instances.” This is a huge immediate savings, with only the work of having to recompile your trained model using AWS Neuron, which will optimize it for use with the Inferentia chip.
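“Cost per inference” is just hourly price divided by inference throughput, so a 40% reduction can come from a mix of lower price and higher speed. The numbers below are entirely made up to show how the metric composes:

```python
def cost_per_inference(hourly_price, inferences_per_hour):
    """Dollars spent per single inference on a given instance."""
    return hourly_price / inferences_per_hour

# Hypothetical prices and throughputs, for illustration only:
g4 = cost_per_inference(1.00, 1_000_000)    # baseline GPU instance
inf1 = cost_per_inference(0.90, 1_500_000)  # slightly cheaper, much faster
reduction_pct = round(100 * (1 - inf1 / g4), 1)  # combined effect
```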
Next Generation Graviton2 Instances
The final cool cost-savings item is another new instance type that fits into the more commonly used M, C, and R instance families. These new instance types are built around another custom AWS chip (watch out, Intel and AMD…): the Graviton2. The Graviton chips, in general, are built around the ARM processor design, more commonly found in smartphones and the like. Graviton was first released last year in the A1 instance family, and honestly, we have not seen too many of them pass through the ParkMyCloud system. Since the Graviton2 is built to support M, C, and R, I think we are much more likely to see widespread use.
Looking at how they perform relative to the current M5 family, AWS described the following performance improvements:
HTTPS load balancing with Nginx: +24%
Memcached: +43% performance, at lower latency
x264 video encoding: +26%
EDA simulation with Cadence Xcellium: +54%
Overall, the new instances offer “40% better price performance over comparable current generation instances.”
The new instance types will be the M6g and M6gd (“g”=Graviton, “d”=NVMe local storage), the C6g and C6gd, and the R6g and R6gd. The new family is still in Preview mode, so pricing is not yet posted, but AWS is claiming a net “20% lower cost and up to 40% higher performance over Amazon EC2 M5 instances, based on internal testing of workloads.” We will definitely be trying these new instance types when they release in 2020!
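“Price performance” is performance per dollar, so the two stated ratios compose mechanically. Taking AWS's “20% lower cost” and “up to 40% higher performance” at face value (the headline 40% price-performance figure is presumably measured against different baselines, so treat this as arithmetic, not a benchmark):

```python
def price_performance_gain_pct(cost_ratio, perf_ratio):
    """Improvement in performance-per-dollar vs. a baseline.

    Both arguments are new/old ratios (e.g. 0.80 = 20% cheaper)."""
    return round(100 * (perf_ratio / cost_ratio - 1), 1)

# AWS's stated best-case ratios vs. M5, composed by us:
gain = price_performance_gain_pct(cost_ratio=0.80, perf_ratio=1.40)
```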
All in all, there were no real HUGE announcements that would impact your costs, but baby steps are OK too!
As the end of the year approaches, and we look ahead at what the 2020 tech trends promise to have in store for the cloud, we can’t help but also reflect on what the past years’ trends have foretold and given us thus far. As we enter 2020 we are not only entering a new year, but also a new decade, so it’s doubly interesting (and fun) to sit back and ponder on what the year and decade ahead might hold.
Before summoning the oracle and thinking ahead specifically on the future of cloud management, it’s worthwhile looking back at the big picture over the last decade to give us some sense of what we have to look forward to. Let’s start with the proverbial ‘gorilla’. AWS was founded in 2006 and had reached annual revenues of ~$500MM by 2010. Not bad growth from a standing start, and a growth trend which continued throughout the decade. With Wall St. estimates of some $50B in revenue for 2020, this means 100x growth. That is quite simply incredible, and with growth last year at 35% year-on-year, this AWS growth doesn’t look like it will stop.
2010 Cloud Prediction
Amazon Cloud Revenue Could Exceed $500 Million In 2010, CRN (2010)
Growth among Microsoft Azure and Google Cloud Platform has also not been too shabby but AWS has held (and in many ways strengthened) its dominant position over the last decade.
Due to the wonderful archival powers of the internet, finding white papers on the future of cloud from a decade ago is no more than a few clicks away. Rather than try to summarize them here, they’re worth reviewing yourself – one that’s worth a look is Microsoft’s The Economics of the Cloud (2010). Some parts were right, some wrong, but the key point was that “cloud services will enable IT groups to focus more on innovation while leaving non-differentiating activities to reliable and cost-effective providers”.
On this particular point, it’s hard to argue with the result: new companies and new business models have been realized in ways previously not imaginable. Be it the sharing economy – Uber, Lyft, Airbnb, etc. – or the myriad of other cloud-powered unicorns built over the last decade, cloud infrastructure has been a huge enabler of growth.
As interesting as it would be to speculate on where the cloud industry might be headed by 2030 (AI-Cloud, IoT, Blockchain, Space Cloud Computing, etc.), to keep our feet firmly on the ground, the trends we at ParkMyCloud feel qualified to comment on are somewhat more modest and closer to home.
Cloud Management – What we are seeing here is demand from customers for a more consolidated view across their multiple cloud accounts. In 2019 we saw a lot of our customers going mainstream in the multi-cloud world and trying to integrate a mix of cloud-native and third-party tools to provide actionable insights and, more importantly, actions. The companies building cloud management technologies have grown over the last decade, but in many ways it remains a small market, and no unicorns have yet emerged. We think this will change in the coming decade, as the management of cloud infrastructure in all its aspects (technical, economic, etc.) has reached a scale that requires non-human intervention and coordination.
Multi-cloud – Multi-cloud truly arrived in 2019 and we believe it will grow in 2020. Most organizations now use multiple clouds and among our customer base, there appears to be less concern about vendor lock-in. We also increasingly see specific clouds being used for specific purposes, so, for instance, data analytics workloads utilizing one particular cloud provider, whereas development and production sit on an entirely different cloud.
Automation – Building on what we see happening in the world of cloud management, the demand for automation across the technical and economic management stack is growing. Companies are getting more comfortable with semi-autonomous modes and in some cases are moving to full-blown automation. Many have drawn the comparison with the 1-to-5 scale used in the field of autonomous vehicles, and we like this analogy. With many now operating at level 2 (Partial) or level 3 (Conditional), we see cloud management activities continuing to move toward levels 4 (High) and 5 (Full) automation.
Greater Levels of Abstraction – IT will continue to become more and more abstracted in 2020 and beyond (NoOps). The growth of serverless, containers, software-defined hardware, etc means that engineers / devs are thinking less and less about infrastructure. The focus away from operations and toward outcomes is another clear trend and likely one which will continue for some time.
Containers Become Mainstream – Application containerization is more than just a new buzzword in cloud computing; it is changing the way resources are deployed into the cloud. More and more companies utilized containers in 2019, and we have seen estimates suggesting that one-third of hybrid cloud workloads will utilize containers in 2020 (ESG Research). Over the last couple of years, Kubernetes has established itself as the container orchestration platform of choice. 451 Research projects the market for application container technologies to reach $4.3 billion by 2022 as more businesses come to view containers as a fundamental part of their IT strategy.
We have always enjoyed the quote ‘never make predictions, especially about the future.’ Nevertheless, entering a new year and a new decade it’s hard not to. We think the predictions above are fairly safe bets but equally, we are sure the speed and scale of change will likely be faster than we predicted.
Google Cloud Platform vs AWS: what’s the deal? A while back, we also asked the same question about Azure vs AWS. After the release of the latest earnings reports a few weeks ago from AWS, Azure, and GCP, it’s clear that Microsoft is continuing to see growth, Amazon is maintaining a steady lead, and Google is stepping in. Now that Google Cloud Platform has solidly secured a spot among the “big three” cloud providers, we think it’s time to take a closer look and see how the underdog matches up to the rest of the competition.
Is Google Cloud catching up to AWS?
As they’ve been known to do, Amazon, Google, and Microsoft all released their recent quarterly earnings around the same time. At first glance, the headlines tell it all:
The obvious conclusion is that AWS continues to dominate in the cloud war. With all major cloud providers reporting earnings around the same time, we have an ideal opportunity to examine the numbers and determine if there’s more to the story. Here’s what the quarterly earning reports tell us:
AWS had its slowest growth since it began breaking out its cloud revenue – up just 37% from last year.
Microsoft Azure reported a revenue growth rate of 59%.
Microsoft doesn’t break out specific revenue amounts for Azure, but it did report that its “Intelligent Cloud” business revenue increased 27% to $10.8 billion, with revenue from server products and cloud services increasing 30%.
Google’s revenue has cloud sales lumped together with hardware and revenue from the Google Play app store, summing up to a total of $6.43 billion for the last quarter.
To compare, last year during Q3 their revenue was at $4.64 billion.
During their second-quarter conference call in July, Google said their cloud is on an $8 billion revenue run rate – meaning cloud sales have doubled in less than 18 months.
You can see here that while Google is the smallest of the “big three” providers, it has shown the most growth – from Q1 2018 to Q1 2019, Google Cloud grew 83%. While they still have a ways to go before surpassing AWS and Microsoft, they are moving quickly in the right direction: Canalys reported they were the fastest-growing cloud infrastructure vendor in the last year.
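For the curious, these headline rates are plain year-over-year percentage changes. Applied to the combined segment figures quoted above (which bundle cloud with hardware and Play revenue – one reason the result comes out well below the cloud-only growth figure):

```python
def yoy_growth_pct(prev, curr):
    """Year-over-year percentage change between two revenue figures."""
    return round(100 * (curr - prev) / prev, 1)

# Google's combined cloud + hardware + Play segment, in $ billions:
segment_growth = yoy_growth_pct(4.64, 6.43)  # Q3 2018 -> Q3 2019
```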
It’s also important to note that Google is just getting started. Also making headlines was an increase in new hires, adding 6,450 in the last quarter, and most of them going to positions in their cloud sector. Google’s headcount now stands at over 114,000 employees in total.
The Obvious: Google is not surpassing AWS
When it comes to Google Cloud Platform vs AWS, we have a clear winner. Amazon continues to have the advantage as the biggest and most successful cloud provider in the market. While AWS is now growing at a slower rate than both Google Cloud and Azure, Amazon still holds the largest market share of the three. AWS is the clear competitor to beat, as it is the first and most successful cloud provider to date, with the widest range of services and strong familiarity among developers.
The Less Obvious: Google is actually gaining more ground
While it’s easy to write off Google Cloud Platform, AWS is not untouchable. AWS has already solidified itself in the cloud market, but with the new features and partnerships, Google Cloud is proving to be a force to be reckoned with.
Where is Google actually gaining ground?
We know that AWS is at the forefront of cloud providers today, but that doesn’t mean Google Cloud is very far behind. AWS is now just one of the three major cloud providers – with two more (IBM and Alibaba) gaining more popularity as well. Google Cloud Platform has more in store for its cloud business in 2020.
A big step for Google was announced earlier this year at Google Cloud’s conference, Google Cloud Next: the CEO of Google Cloud announced a retail platform to compete directly with Amazon, called Google Cloud for Retail. What’s different about their product? For starters, they are partnering with companies such as Kohl’s, Target, Bed Bath & Beyond, and Shopify – retailers known for competing directly with Amazon. In addition, this is the first time Google Cloud has had an AI product designed to address a business process for a specific vertical. Google doesn’t appear to be stopping at retail – Thomas Kurian said they are planning to build capabilities to assist companies in specialized industries such as healthcare, manufacturing, media, and more.
Google’s stock continues to rise. With nearly 6,450 new hires added to the headcount, the vast majority of them in cloud-related jobs, it’s clear that Google is serious about expanding its role in the cloud market. In April of this year, Google reported that 103,459 people now work there. Google CFO Ruth Porat said, “Cloud has continued to be the primary driver of headcount.”
Google Cloud’s new CEO, Thomas Kurian, understands that Google is lagging behind the other two cloud giants, and plans to close that gap in the next two years by growing sales headcount.
Deals have been made with major retailer Kohl’s department store, and payments processor giant, PayPal. Google CEO Sundar Pichai lists the cloud platform as one of the top three priorities for the company, confirming that they will continue expanding their cloud sales headcount.
In the past few months, Pichai added his thoughts on why he believes the Google Cloud Platform is on a set path for strong growth. He credits their success to customer confidence in Google’s impressive technology and a leader in machine learning, naming the company’s open-source software TensorFlow as a prime example. Another key component to growth is strategic partnerships, such as the deal with Cisco that is driving co-innovation in the cloud with both products benefiting from each other’s features, as well as teaming up with VMware and Pivotal.
Driving Google’s growth is also the fact that the cloud market itself is growing so rapidly. The move to the cloud has prompted large enterprises to use multiple cloud providers in building their applications. Companies such as Home Depot Inc. and Target Corp. rely on different cloud vendors to manage their multi-cloud environments.
Home Depot, in particular, uses both Azure and Google Cloud Platform, and a spokesman for the home improvement retailer explains why that was intentional: “Our philosophy here is to be cloud-agnostic, as much as we can.” This philosophy goes to show that as long as there is more than one major cloud provider in the mix, enterprises will continue trying, comparing, and adopting more than one cloud at a time – making way for Google Cloud to gain more ground.
Multi-cloud environments have become increasingly popular because companies enjoy the advantage of the cloud’s global reach, scalability, and flexibility. Google Cloud has been the most avid supporter of multi-cloud of the three major providers. Earlier this year at Google Cloud Next, they announced the launch of Anthos, a new managed service offering for hybrid and multi-cloud environments that gives enterprises operational consistency. It does this by running on existing hardware, leveraging open APIs, and giving developers the freedom to modernize. There’s also Google Cloud Composer, a fully managed workflow orchestration service built on Apache Airflow that lets users monitor, schedule, and manage workflows across hybrid and multi-cloud environments.
Google Cloud Platform vs. AWS – Why Does It Matter?
Google Cloud Platform vs AWS is only one of the battles to consider in the ongoing cloud war. The truth is, market performance is only one factor in choosing the best cloud provider. As we always say, the specific needs of your business are what will ultimately drive your decision.
What we do know: the public cloud market is not just growing – it’s booming. Referring back to our Azure vs AWS comparison – the basic questions still remain the same when it comes to choosing the best cloud provider:
Are the public cloud offerings to new customers easily comprehensible?
What is the pricing structure and how much do the products cost?
Are there adequate customer support and growth options?
Will our DevOps processes translate to these offerings?
Can the PaaS offerings speed time-to-value and simplify things sufficiently, to drive stickiness?
Right now AWS is certainly in the lead among major cloud providers, but for how long? We will continue to track and compare cloud providers as earnings are reported, offers are increased, and price options grow and change. To be continued in 2020…