Curious why serverless is so popular – and why it won’t replace traditional servers in the cloud?
The top cloud service providers are investing heavily in serverless as a new approach to cloud solutions that focuses on applications rather than infrastructure. Today we’ll take a look at what serverless computing is good for, and what it can’t replace.
For starters, serverless mostly refers to an application or API that depends on third-party, cloud-hosted services to manage server-side logic and state, with custom code running on Function as a Service (FaaS) platforms.
Even though the name “serverless” suggests that there are no servers involved, there will always be servers in use. Rather, developers simply don’t have to deal with those servers directly – their implementation and management are abstracted away. To power serverless workloads, cloud providers use automated systems that eliminate the need for server administrators, offering developers a way to manage applications and services without having to handle, tweak, or scale the actual server infrastructure.
Top Serverless Providers
It is no surprise that the top cloud providers investing heavily in serverless include AWS, Microsoft Azure, and Google Cloud. In brief, here is how they approach serverless computing.
AWS Lambda is the current leader among serverless compute implementations. Lambda runs your code as it’s triggered and automatically scales your application to match demand.
Microsoft Azure Functions enables you to run code-on-demand without having to explicitly provision or manage infrastructure.
Google Cloud Functions is a compute solution for creating event-driven applications and connects with GCP services by listening for and responding to events without needing to provision or manage servers.
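To make the FaaS model concrete, here is a minimal function in the style of an AWS Lambda Python handler. The `handler(event, context)` signature is Lambda’s documented Python entry-point convention; the event shape and the greeting logic are illustrative placeholders, not a real application.

```python
import json

def handler(event, context):
    # The FaaS platform invokes this entry point once per triggering event;
    # provisioning, scaling, and patching of the underlying servers are the
    # provider's job, not the developer's.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

In Lambda this function would be wired to a trigger such as an API Gateway request or an S3 event; Azure Functions and Google Cloud Functions follow the same event-driven pattern with slightly different signatures.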
Advantages and When to Use Serverless
Let’s look at why serverless is often a good choice. It allows organizations to reduce the operational complications and costs associated with infrastructure, since charges are based on the actual usage or work the serverless platform performs.
When it comes to implementing, maintaining, debugging, and monitoring the infrastructure – and setting up your environment – serverless does the heavy lifting for you. It allows developers to focus on application development rather than complex infrastructure, promoting team efficiency, better serving customers, and keeping attention on business goals.
Since serverless cost models are based on execution only, serverless can reduce your operational costs and cloud spend, making it well suited to short-lived tasks in your environment. However, there are hidden costs to be aware of – what we list here as an advantage can just as easily become a disadvantage. Serverless apps rely on API calls, and heavy API request volume can become very pricey indeed. In addition, networking costs can climb quickly when sending a lot of data, and they are generally more difficult to track under serverless cost models.
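The execution-only cost model is easy to sketch with back-of-the-envelope arithmetic. The rates below are assumptions modeled on AWS Lambda’s published pricing at the time of writing (roughly $0.20 per million requests and about $0.0000166667 per GB-second); check your provider’s current price list, and note the free tier is ignored here.

```python
# Assumed pay-per-execution rates (modeled on published AWS Lambda pricing;
# verify against your provider's current price list).
REQUEST_PRICE = 0.20 / 1_000_000   # dollars per invocation
GB_SECOND_PRICE = 0.0000166667     # dollars per GB-second of compute

def monthly_compute_cost(invocations, avg_duration_ms, memory_mb):
    # Compute is billed by GB-seconds: duration of each run times the
    # memory allocated to the function, summed across all invocations.
    gb_seconds = invocations * (avg_duration_ms / 1000) * (memory_mb / 1024)
    return invocations * REQUEST_PRICE + gb_seconds * GB_SECOND_PRICE

# One million 100 ms invocations at 128 MB comes to roughly $0.41/month,
# which is why short, bursty tasks are such a good fit for serverless.
cost = monthly_compute_cost(1_000_000, 100, 128)
```

The same arithmetic also shows the flip side mentioned above: multiply the invocation count by a few orders of magnitude (as heavy API traffic can), and the bill grows linearly with it.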
Some of the best use cases for serverless are:
Brand new applications that don’t already have an existing workload
Microservices-based architectures, with small chunks of code working together
No doubt, there is an increased interest in serverless, but there are limitations that come with it. Perhaps these trade-offs are the reasons as to why some companies, though interested in serverless, are not ready to make the jump from traditional servers just yet.
Networking on serverless must be done through the provider’s API endpoints rather than direct IP access, which can contribute to vendor lock-in. Execution time limits also make serverless unsuitable for long-running tasks and for applications with highly variable execution times, as well as for services that must wait on information from an external source.
Serverless creates dependency on your cloud provider, which means you cannot easily port your applications between different providers. Cloud providers shoulder the burden of resource provisioning, so they are solely responsible for ensuring that the application instance has the back-end infrastructure it needs to execute when summoned.
By adopting serverless, you forfeit complete control over your infrastructure – scaling, for example. Scaling is done automatically, but the absence of control makes it difficult to diagnose and mitigate errors related to serverless instances. This lack of control also applies to application performance, a metric that developers still need to worry about in a serverless environment. After all, serverless providers still depend on actual servers that need to be accessed and monitored.
Serverless is likely not a good fit for:
Rewriting existing apps
Applications with variable execution times
Why Serverless Won’t Replace Traditional Servers
Though every business has different needs when it comes to cloud infrastructure, serverless won’t completely supplant the current cloud infrastructure of traditional servers. There are too many use cases where serverless is not applicable, or not worth the tradeoff in control (or perhaps the cost – stay tuned for a future post on this). But as cloud service providers continue to invest heavily in serverless, it is fair to say that serverless usage will continue to grow in the years to come.
Amazon EKS is a hosted Kubernetes solution that helps you run your container workloads in AWS without having to manage the Kubernetes control plane for your cluster. This is a great entry point for Kubernetes administrators who are looking to migrate to AWS services but want to continue using the tooling they are already familiar with. Often, users are choosing between Amazon EKS and Amazon ECS (which we recently covered, in addition to a full container services comparison), so in this article, we’ll take a look at some of the basics and features of EKS that make it a compelling option.
Amazon EKS 101
The main selling point of Amazon EKS is that the Kubernetes control plane is managed for you by AWS, so you don’t have to set up and run your own. When you set up a new cluster in EKS, you can specify whether it will be available only to the current VPC, or accessible to outside IP addresses. This flexibility highlights the two main deployment options for EKS:
Fully within an AWS VPC, with complete integration to other AWS services you run in your account while being completely isolated from the outside world.
Open and accessible, which enables hybrid-cloud, multi-cloud, or multi-account Kubernetes deployments.
Both options allow you the flexibility to use your own Kubernetes management tools, like Dashboard and kubectl, as EKS gives you the API Server Endpoint once you provision the cluster. This control plane utilizes multiple availability zones within the region you choose for redundancy.
Managed Container Showdown: EKS vs. ECS
Amazon offers two main container service options in EKS and ECS, and the biggest difference between them lies in the orchestrator. ECS is Amazon’s own container orchestration service: Amazon handles the orchestration for you, and you just decide which tasks to run and when. With EKS, by contrast, you’re running standard Kubernetes and doing the Kubernetes management of your pods yourself.
One consideration when comparing EKS vs. ECS is networking and load balancing. Both services run EC2 servers behind the scenes, but the actual network connection is slightly different. ECS attaches network interfaces to individual tasks on each EC2 instance, while EKS attaches network interfaces shared by multiple pods on each EC2 instance. Similarly, for load balancing, ECS can utilize Application Load Balancers to send traffic to a task, while EKS must use an Elastic Load Balancer to send traffic to an EC2 host (which Kubernetes can then proxy to the right pod). Neither is necessarily better or worse – just a slight difference that may matter for your workload.
Sounds Great… How Much Does It Cost?
For each workload you run in Amazon EKS, there are two main charges that will apply. First, there’s a charge of $0.20/hr (roughly $146/month) for each EKS Control Plane you run in your AWS account. Second, you’re charged for the underlying EC2 resources that are spun up by the Kubernetes controller. This second charge is very similar to how Amazon ECS charges you, and is highly dependent on the size and number of resources you need.
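The fixed control-plane charge is easy to sanity-check. Here is a quick sketch of the arithmetic, using the $0.20/hr rate and an assumed 730-hour average month; EC2 worker-node costs are left out because they depend entirely on your instance sizes and counts.

```python
HOURS_PER_MONTH = 730            # average month: 8,760 hours / 12
CONTROL_PLANE_RATE = 0.20        # dollars per hour, per cluster (rate quoted above)

def eks_control_plane_monthly(clusters):
    # Fixed charge per cluster; the variable EC2 worker-node costs
    # come on top of this and depend entirely on your workload.
    return clusters * CONTROL_PLANE_RATE * HOURS_PER_MONTH

eks_control_plane_monthly(1)  # roughly $146/month per cluster
```

For a single large cluster this is a rounding error, but a team running many small clusters pays the charge once per cluster – which is the pricing concern raised in the best-practices note below.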
Amazon EKS Best Practices
There’s no one-size-fits-all option for Kubernetes deployments, but Amazon EKS certainly has some good things going for it. If you’re already using Kubernetes, this can be a great way to seamlessly migrate to a cloud platform without changing your working processes. Also, if you’re going to be in a hybrid-cloud or multi-cloud deployment, this can make your life a little easier. That being said, for just simple Kubernetes clusters, the price of the control plane for each cluster may be too much to pay, which makes ECS a valid alternative.
We’ve been hearing buzz about a new concept in AI, robotic process automation. The promise of the technology is that it can automate processes that employees are doing manually, saving your employees’ time and potentially reducing operational costs. It fits right in with the current trends in cloud computing toward optimization. We’re all about saving time and money – so let’s take a look at this trend to see if it can help you do either of these things.
What is Robotic Process Automation?
Robotic process automation (RPA) is a way to automate business processes by creating software robots to perform manual and mundane work tasks. Users configure bots within an application to handle a variety of repetitive tasks by processing, generating, and communicating information automatically. For example, you might program RPA bots to handle first-level customer support by searching for answers, copy data from one system to another for invoicing or expense management, or issue refunds. This video from IBM shows an example in action.
Furthermore, RPA tools can be trained to make judgments about future outputs. Many users appreciate its non-intrusive nature and the ability to integrate within infrastructures without causing disruption to systems already in place.
How can you use Robotic Process Automation?
Companies like Walmart, AT&T, and Walgreens are adopting RPA. Clay Johnson, the CIO of Walmart, says they use RPA bots to automate pretty much anything from answering employee questions to retrieving useful information from audit documents. The CIO of American Express Global Business Travel, David Thompson, says they use RPA to automate the process of canceling an airline ticket and issuing refunds. In addition, Thompson is looking to use RPA to facilitate automatic rebooking recommendations, and to automate certain expense management tasks in the company.
More specific to cloud computing and IT, one great application for RPA is in automated software testing. If testing involves multiple applications and monotonous work, RPA can replace workers’ time spent testing. Additionally, RPA can be used to automate processes in monolithic legacy systems that are not worth developers’ time to update, to bring automation while work on newer microservices systems is in progress.
Is Robotic Process Automation the Best Way to Automate Cost Control?
A recent study found that not all automation is achievable with RPA; its authors conclude that only three percent of organizations have managed to scale RPA to a high level. Additionally, Gartner placed RPA tools at the “Peak of Inflated Expectations” in their Hype Cycle guide for artificial intelligence last year – another vote for more buzz than potential.
So can it save you time and money? If employees at your company are spending a large percentage of their time on repetitive tasks that require little to no decision making, then yes, it probably can. It’s also important to free up developer time that is spent on automatable tasks, like scripting, so they can focus on creating value for your business.
For complex and long-term automation, though, purpose-built software is a better solution. If there is already a solution to your automation needs on the market, it will probably serve you better than RPA, because there won’t be an upfront period needed to program bots, you won’t need to make frequent changes to your processes like many RPA bots will require, and it’s a better solution for the long run.
There’s a simple fact for public cloud users today: you need to use cloud agnostic tools. Yes – even if you only use one public cloud. Why? This recommendation comes down to a few drivers that we see time and time again.
You won’t always use just this cloud
There is an enterprise IT trend toward multi-cloud and hybrid cloud – so prevalent that even if you are currently single-cloud, you should plan for the eventuality of using more than one cloud, as the multi-cloud future has arrived. Dave Bartoletti, VP and Principal Analyst at Forrester Research, broke down multi-cloud and hybrid cloud by the numbers:
62 percent of public cloud adopters are using 2+ unique cloud platforms
74 percent of enterprises describe their strategy as hybrid/multi-cloud today
In addition, standardizing on cloud agnostic tools also can alleviate costs associated with policy design, deployment, and enforcement across different cloud environments. Management and monitoring using the same service platform greatly reduces the issue of mismatched security policies and uncertainty in enforcement. Cloud agnostic tools that also operate in the context of the data center — whether in a cloud, virtualized, container, or traditional infrastructure — are a boon for organizations who need to be agile and move quickly. Being able to reuse policies and services across the entire multi-cloud spectrum reduces friction in the deployment process and offers assurances in consistency of performance and security.
How do you decide what tools to adopt?
We talk to enterprises of all sizes using the cloud on a daily basis, and always ask whether they are using cloud native tools or third-party tools that are cloud agnostic. The answer? It’s a mix to be sure – often between cloud-native and third-party tools within the same enterprise.
What we hear is that managing cloud infrastructure is quite a complex job, especially when you have different clouds, technologies, and a diverse and opinionated user community to support. Common themes among the third-party tools we see in use include freemium models, technologies someone used at a previous company, tools recommended by the cloud service provider (CSP) itself, and open-API-driven solutions that allow for maximum automation in cloud operations. It also serves the tool vendors well if deploying the tool takes minimal effort – in other words, SaaS tools that do not require a bunch of services and integration work. Plug and play is a must.
For context, here at ParkMyCloud we support AWS, Azure, Google, and Alibaba clouds, and usually talk to DevOps and IT Ops folks responsible for their cloud infrastructure. Those folks are usually after cloud cost control and governance when speaking with us, so our conversations tend to focus on the tools they use and need for cloud infrastructure management: CI/CD, monitoring, cost control, cost visibility and optimization, and user governance. For user governance and internal communication, Single Sign-On and ChatOps are must-haves.
So we decided to compile a list of the most common clouds and tools we run across here at ParkMyCloud, in order of popularity:
Cloud Service Provider
AWS, Google Cloud, Microsoft Azure, Alibaba Cloud – and we do get requests for IBM and Oracle clouds
Infrastructure Monitoring (not APM)
Cloud Native (AWS CloudWatch, Azure Metrics, Google Stackdriver), DataDog, Nagios, SolarWinds, Microsoft, BMC, Zabbix, IBM
Our suggestion is to use cloud agnostic tools wherever possible. Our experience tells us that a majority of enterprises lean this way anyway. The upfront cost in terms of license fees and/or setup could be more, but we think it comes down to (1) most people will end up hybrid/multi-cloud in the future, even if they aren’t now, and (2) cloud agnostic tools are more likely to meet your needs as a user, as the companies building those tools will stay laser-focused on supporting and improving said functionality across the big CSPs.
Lately, we’ve been thinking about cloud computing jobs and titles we’ve been seeing in the space. One of the great things about talking with ParkMyCloud users is that we get to talk to a variety of different people. That’s right – even though we’re laser-focused on cloud cost optimization, it turns out that can matter to a lot of different people in an organization. (And no wonder, given the size of wasted spend – that hits people’s buttons.)
You know the cloud computing market is growing. You know that means new employment opportunities, and new niches in which to make yourself valuable. So what cloud computing jobs should you check out?
If you are a sysadmin or ops engineer:
Cloud Operations. Cloud operations engineers, managers, and similar are the people we speak with most often at ParkMyCloud, and they are typically the cloud infrastructure experts in the organization. This is a great opportunity for sysadmins looking to work in newer technology.
If you’re interested in cloud operations, definitely work on certifications from AWS, Azure, Google, or your cloud provider of choice. Attend meetups and subscribe to industry blogs – the cloud providers innovate at a rapid pace, and the better you keep up with their products and solutions, the more competitive you’ll be.
See also: DevOps, cloud infrastructure, cloud architecture, and IT Operations.
If you like technology but you also like working with people:
Customer Success, cloud support, or other customer-facing jobs at a managed service provider (MSP). As we recently discussed, there’s a growing market of small IT providers focusing on hybrid cloud in the managed services space. The opportunities at MSPs aren’t limited to customer success, of course – just in the past week we’ve talked to people with the following titles at MSPs: Cloud Analyst, Cloud Engineer, Cloud Champion/Cloud Optimization Engineer, CTO, and Engagement Architect.
Also consider: pre-sales engineering at one of the many software providers in the cloud space.
If you love process:
Site Reliability Engineer. This title, invented by Google, is used for operations specialists who focus on keeping the lights on and the sites running. Job descriptions in this discipline tend to focus on people and processes rather than on the specific infrastructure or tools.
If you have a financial background:
Cloud Financial Analyst. See also: cloud cost analyst, cloud financial administrator, IT billing analyst, and similar. Cloud computing jobs aren’t just for technical people — there is a growing field that allows experts to adapt financial skills to this hot market. As mentioned above, since the cloud cost problem is only going to grow, IT organizations need professionals in financial roles focused on cloud. Certifications from cloud providers can be a great way to stand out.
What cloud computing jobs are coming next?
As the cloud market continues to grow and change, there will be new cloud computing job opportunities – and it can be difficult to predict what’s coming next. Just a few years ago, it was rare to meet someone running an entire cloud enablement team, but that’s becoming the norm at larger, tech-forward organizations. We also see a trend of companies narrowing “DevOps” roles to focus professionals on “CloudOps” specifically – as well as variations such as DevFinOps. And although some people hear “automation” and worry that their jobs will disappear, there will always be a need for someone to keep the automation engines running and optimized. We’ll be here.
In the world of infrastructure as code, the biggest divide seems to come in the war between Hashicorp’s Terraform and CloudFormation in AWS. Both tools can help you deploy new cloud infrastructure in a repeatable way, but they have some pretty big differences that can mean the difference between a smooth rollout and a never-ending battle with your tooling. Let’s look at some of the similarities and some of the differences between the two.
While the tools each have distinctive features, they also share some common aspects. In general, both CloudFormation and Terraform help you provision new AWS resources from a text file. This means you can iterate and manage the entire infrastructure stack the same as you would any other piece of code. Both tools are also declarative, which means you define what you want the end goal to be, rather than saying how to get there (as with tools like Chef or Puppet). This isn’t necessarily a good or bad thing, but it is good to know if you’re used to other config management tools.
Unique Characteristics of CloudFormation
One of the biggest benefits of using CloudFormation is that it is an AWS product, which means it has tighter tie-ins to other AWS services. This can be a huge benefit if you’re all-in on AWS products and services, as this can help you maximize your cost-effectiveness and efficiency within the AWS ecosystem. CloudFormation also makes use of either YAML or JSON as the format for your code, which might be familiar to those with dev experience. Along the same lines, each change to your infrastructure is a changeset from the previous one, so devs will feel right at home.
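To show what that JSON format looks like in practice, here is a sketch that assembles a minimal CloudFormation-style template body with plain Python dictionaries. The top-level keys (`AWSTemplateFormatVersion`, `Resources`, and each resource’s `Type`/`Properties`) follow CloudFormation’s documented template anatomy; the logical name, AMI ID, and instance type are placeholders.

```python
import json

# Minimal CloudFormation-style template assembled as a Python dict.
# "MyInstance" is a made-up logical name, and the ImageId is a placeholder;
# a real template would use a valid AMI ID for your region.
template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Single EC2 instance (illustrative sketch)",
    "Resources": {
        "MyInstance": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-0123456789abcdef0",
                "InstanceType": "t3.micro",
            },
        }
    },
}

# The declarative template is just text, so it can be versioned, reviewed,
# and diffed like any other code before being handed to CloudFormation.
template_body = json.dumps(template, indent=2)
```

Generating the template body programmatically like this is also the idea behind tools such as Troposphere, mentioned below.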
There are some additional tools available around CloudFormation, such as:
Stacker – for handling multiple CloudFormation stacks simultaneously
Troposphere – if you prefer Python for creating your configuration files
Sceptre – for organizing CloudFormation stacks into environments
Unique Characteristics of Terraform
Just as being an AWS product is a benefit of CloudFormation if you’re in AWS, the fact that Terraform isn’t affiliated with any particular cloud makes it much more suited for multi-cloud and hybrid-cloud environments, and of course, for non-AWS clouds. There are Terraform modules for almost any major cloud or hypervisor in the Terraform Registry, and you can even write your own modules if necessary.
Terraform treats all deployed infrastructure as a state, with any subsequent changes to any particular piece being an update to the state (unlike the changesets mentioned above for CloudFormation). This means you can keep the state and share it, so others know what your stack should look like, and also means you can see what would change if you modify part of your configuration before you actually decide to do it. The Terraform configuration files are written in HCL (Hashicorp Configuration Language), which some consider easier to read than JSON or YAML.
The good news is that if you’re trying to decide between Terraform vs. CloudFormation, you can’t really go wrong with either. Both tools have large communities with lots of support and examples, and both can really get the job done in terms of creating stacks of resources in your environments. They are both also free, with CloudFormation having no costs (aside from the infrastructure that gets created) and Terraform being open-source while offering a paid Enterprise version for additional collaboration and governance options. Each has its pros and cons, but using either one will help you scale up your infrastructure and manage it all as code.