AWS Postgres Pricing Comparison

Maybe you’re looking to use PostgreSQL in your AWS environment – if so, you need to make sure to evaluate pricing and compare your options before you decide. A traditional “lift and shift” of your database can cause quite a headache, so your DBA team likely wants to do it right the first time (and who doesn’t?). Let’s take a look at some of your options for running PostgreSQL databases in AWS.

Option 1: Self-Managed Postgres on EC2

If you’re currently running your databases on-premises or in a private cloud, then the simplest conversion to public cloud in AWS is to stand up an EC2 virtual machine and install the Postgres software on that VM. Since PostgreSQL is open-source, there’s no additional charge for running the software, so you’ll just be paying for the VM (along with associated costs like storage and network transfer). AWS doesn’t have custom instance sizes, but they have enough different sizes across instance families that you can find an option to match your existing server.

As an example, let’s say you’d like to run an EC2 instance with 2 vCPUs, 8 GB of memory, and 100 GB of storage in the us-east-1 region. An m5.large instance would work for this, costing approximately $70 per month for compute, plus $10 per month for storage. On the plus side, there are no additional costs for transferring existing data into the system (AWS only charges for outbound data transfer).
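
As a rough sanity check, here’s how that estimate breaks down. This is a minimal sketch; the hourly and per-GB rates are assumptions based on us-east-1 on-demand pricing at the time of writing, so check the AWS pricing pages for current numbers.

```python
# Back-of-the-envelope monthly cost for the example above.
HOURS_PER_MONTH = 730        # AWS's standard monthly-hours figure

m5_large_hourly = 0.096      # $/hr, m5.large on-demand (assumed rate)
gp2_per_gb_month = 0.10      # $/GB-month, gp2 EBS storage (assumed rate)
storage_gb = 100

compute = m5_large_hourly * HOURS_PER_MONTH   # ~$70/month
storage = gp2_per_gb_month * storage_gb       # ~$10/month

print(f"Compute: ${compute:.0f}/mo, storage: ${storage:.0f}/mo, "
      f"total: ~${compute + storage:.0f}/mo")
```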

The biggest benefit of running your own EC2 server with Postgres installed is that you can make any configuration changes and run any external software you see fit. Tools like pgbouncer for connection pooling or pg_jobmon for logging within transactions require the self-management this EC2 setup provides. Additional performance tuning based on direct access to the Postgres configuration files is also possible with this method.
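
For example, here’s a minimal sketch of pointing an application at PgBouncer running alongside Postgres on that EC2 instance. The hostname and credentials are placeholders; 6432 is PgBouncer’s default listening port.

```python
# Connect through PgBouncer instead of hitting Postgres directly.
import psycopg2

conn = psycopg2.connect(
    host="my-ec2-postgres.example.com",  # hypothetical EC2 hostname
    port=6432,       # PgBouncer's default port (Postgres itself is on 5432)
    dbname="appdb",
    user="app_user",
    password="change-me",  # use a secrets manager in practice
)
```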

Option 2: AWS Relational Database Service for Hosted Postgres Databases

If your database doesn’t require custom configuration or community projects to run, then using the AWS RDS service may work for you. This hosted service comes with some great options that you may not take the time to implement with your own installation, including:

  • Automated backups
  • Multi-AZ options (for automatic synchronization to a standby in another availability zone)
  • Behind-the-scenes patching to the latest version of Postgres
  • Monitoring via CloudWatch
  • Built-in encryption options

These features are all fantastic, but they do come at a price. The same instance size as above – a db.m5.large with 2 vCPUs and 8 GB of memory – is approximately $130 per month in a single AZ, or $260 per month for a multi-AZ setup.
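
For illustration, here’s a hedged sketch of provisioning that instance with boto3. The identifier and credentials are placeholders; MultiAZ=True is the flag that roughly doubles the monthly price.

```python
# Provision a multi-AZ Postgres instance on RDS.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="example-postgres",  # hypothetical name
    DBInstanceClass="db.m5.large",            # 2 vCPUs, 8 GB memory
    Engine="postgres",
    AllocatedStorage=100,                     # GB
    MultiAZ=True,                             # standby in a second AZ
    MasterUsername="postgres",
    MasterUserPassword="change-me",           # use Secrets Manager in practice
)
```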

Option 3: Postgres-Compatible AWS Aurora

One additional option when looking at AWS Postgres pricing is AWS Aurora. This AWS-created database option is fully compatible with existing Postgres workloads, but enables auto-scaling and additional performance throughput. The price is also attractive, as a similar size of db.r5.large in a multi-AZ configuration would be $211 per month (plus storage and backup costs per GB). This is great if you’re all-in on AWS services, but might not work if you don’t like staying on the absolute latest Postgres version (or don’t want to become dependent on AWS).

AWS Postgres Pricing Comparison

Comparing the costs of these three options gives us:

  • Self-managed EC2 – $80/month
  • Hosted RDS running Postgres in a single AZ – $130/month
  • Hosted RDS running Postgres in multiple AZs – $260/month
  • Hosted RDS running Aurora in multiple AZs – $211/month

Running an EC2 instance yourself is clearly the cheapest option from a pure cost perspective, but you’d better know how to manage and tune your Postgres settings for this to work. If you want your database to “just work” without worrying about losing data or accessibility, then the Aurora option is the best value, as the additional cost covers many features you’ll wonder how you ever lived without.

What the Five Levels of Vehicle Autonomy Tell us About Adoption of Infrastructure Automation Tools

On our first day as Turbonomic employees, our team had some great discussions with CTO Charles Crouchman about Turbonomic, ParkMyCloud, and the market for infrastructure automation tools. Charles explained his vision of the future of infrastructure automation, which parallels the automation trajectory that cars and other vehicles have been following for decades. It’s a useful comparison for understanding the goals of fully automated cloud infrastructure – and the mindset of cloud users adopting this paradigm. (And of course, given our name, we’re all in on driving analogies!)

The Five Levels of Vehicle Autonomy

The five levels of vehicle autonomy – or six, if you include level 0 – come from the Society of Automotive Engineers.

The levels are as follows:

  • Level 0  – No Automation. The driver performs all driving tasks with no tools or assistance.
  • Level 1 – Driver Assistance. The vehicle is controlled by the driver, but the vehicle may have driver-assist features such as cruise control or an automated emergency brake.
  • Level 2 – Partial Automation or Occasional Self-Driving. The driver must remain in control and engaged in driving and monitoring, but the vehicle has combined automated functions such as acceleration and steering/lane position. 
  • Level 3 – Conditional Automation or Limited Self-Driving. The driver is still necessary, but not required to monitor the environment. The vehicle monitors the road and traffic, and informs the driver when he or she must take control.
  • Level 4 – High Automation or Full Self-Driving Under Certain Conditions. The vehicle is capable of driving under certain conditions, such as urban ride-sharing, and the driver may have the option to control the vehicle. This is where airplanes are today – for the most part, they can fly themselves, but there’s always a human pilot present.
  • Level 5 – Full Automation or Full Self-Driving Under All Conditions. The vehicle can drive without a human driver or occupants under all conditions. This is an ideal, but right now, neither the technology nor the people are ready for this level of automation.

How These Levels Apply to Infrastructure Automation Tools

Now let’s take a look at how these levels apply to infrastructure automation tools and infrastructure:

  • Level 0 – No Automation. No tools in place.
  • Level 1 – Driver Assistance. Some level of script-based automation with limited applications, such as scripting the installation of an application so it’s just one user command, instead of hand-installing it.
  • Level 2 – Partial Automation or Occasional Self-Driving. In cloud infrastructure, this translates to having a monitoring system in place that can alert you to potential issues, but cannot take action to resolve those issues.
  • Level 3 – Conditional Automation or Limited Self-Driving. Think of this as traditional incident resolution or traditional orchestration. You can build specific automations to handle specific use cases, such as opening a ticket in a service desk, but you have to know what the event trigger is in order to automate a response.
  • Level 4 – High Automation or Full Self-Driving Under Certain Conditions. This is the step where analytics are integrated. A level-4 automated infrastructure system uses analytics to decide what to do. A human can monitor this, but is not needed to take action.
  • Level 5 – Full Automation or Full Self-Driving Under All Conditions. Full automation. Like in the case of vehicles, both the technology and the people are a long way from this nirvana.

So where are most cloud users in the process right now? There are cloud users and organizations all over this spectrum, which makes sense when you think about vehicle automation: there are early adopters who are perfectly willing to buy a Tesla, turn on Autopilot, and let the car drive them to their destination. But there are also plenty of laggards who are not ready to take their hands off the wheel, or even turn on cruise control.

Most public cloud users have at least elements of levels 1 and 2 via scripts and monitoring solutions. Many are at level 3, and with the most advanced platforms, organizations reach level 4. However, there is a barrier between levels 4 and 5: you need an integrated hardware/software solution. The companies closest to full automation are hyperscale cloud companies like Netflix, Facebook, and Google, who have essentially built their own proprietary stacks, including the hardware. This is where Kubernetes and tools like Netflix’s Scryer came from.

In our conversation, Charles said: “The thing getting in the way is heterogeneity, which is to say, most customers buy their hardware from one vendor, application software from another company, storage from another, cloud capacity from another, layer third-party software applications in there, use different development tools –– and none of these things were effectively built to be automated. So right now, automation needs to happen from outside the system, with adaptors into the systems. To get to level 5, the automation needs to be baked in from the system software through the application all the way up the stack.”

What Defines Early Adopters of Infrastructure Automation Tools

While there’s a wide scale of adoption in the market right now, there are a few indicators that can predict whether an organization or an individual will be open to infrastructure automation tools. 

The first is a DevOps approach. If an organization is using DevOps, it has already agreed to let software automate deployments, which means it’s accepting of automation in general – and likely to be open to more.

Another is whether resource management is centralized within the organization or not. If it is centralized, the team or department doing the management tends to be more open to automation and software solutions. If ownership is distributed throughout the organization, it’s naturally more difficult to make unified change.

Ultimately, the goal we should all be striving for is to use infrastructure automation tools to step up the levels of automated resource configuration and cost control. Through automation, we can reduce management time and room for human error to achieve optimized environments.

The New Infrastructure Automation: Continuous Cost Control

As applications and systems have evolved from single-host mainframes to distributed microservices architectures, infrastructure automation has become a key part of the toolkit for modern sysadmins and operations teams. This automation has grown from basic operating system installation and setup to full-blown multi-step deployments of production code from a single developer’s commit. By automating these mundane processes and eliminating human error, production systems are more stable than ever before.

But why stop at automating deployments? There are other elements that need to be automated, too –– one of which is cost.

Rolling out new infrastructure over and over again without ever taking a step back to analyze the cost just leads to panic-driven, cloud-bill-based phone calls from your finance department. Instead, applying the same automation principles behind tools like Puppet, Chef, Ansible, Terraform, and Jenkins to your cloud costs can help you incrementally save money, so you never get that giant surprise bill.

Scaling Up Without Ever Spinning Down

Developers and operations teams often use infrastructure automation early in application development and deployment processes to get servers and databases deployed and functioning. Modern automation tools aren’t just powerful, but also quick to deploy and easy to fit into your current workflow. This is fantastic, but the problem is that the automation effort can taper off once the environments are running. Too often, users and teams move on to the next project before figuring out a way to keep costs from getting out of control. By then it’s too late, and they simply accept that money must be dumped into the deployment pipeline to keep everything on track.

Easy-to-use automation is the key to spinning these environments up efficiently, and can also be key to keeping their costs low. Sure, you may need to keep the production systems scaled up for maximum application performance and customer satisfaction, but what about the test lab, sandbox environment, dev systems, UAT servers, QA deployments, staging hosts, and other pre-production workloads? Environments sized to match production can be useful for some testing, but leaving them running around the clock can easily double your cloud costs for each such environment – for systems that are only used a fraction of the time.
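
As a concrete illustration, here’s a minimal sketch of schedule-based “parking” with boto3. The tag names and schedule are assumptions to adapt to your own conventions – and this is the kind of policy a tool like ParkMyCloud applies for you automatically.

```python
# Stop every running instance tagged env=dev outside business hours.
from datetime import datetime

import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def park_dev_instances():
    """Stop dev-tagged instances outside 8am-6pm on weekdays."""
    now = datetime.now()
    off_hours = now.weekday() >= 5 or not (8 <= now.hour < 18)
    if not off_hours:
        return

    reservations = ec2.describe_instances(
        Filters=[
            {"Name": "tag:env", "Values": ["dev"]},  # assumed tag scheme
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]

    instance_ids = [i["InstanceId"]
                    for r in reservations for i in r["Instances"]]
    if instance_ids:
        ec2.stop_instances(InstanceIds=instance_ids)

park_dev_instances()  # run this on a schedule, e.g. hourly
```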

DevSecMonLogScalFinOps

As your infrastructure automation toolkit grows and evolves, there are a few things that you’ll start building into all of your applications and deployments:

  • Security
  • Monitoring
  • Logging
  • Scalability

As this list grows, there’s one more thing you need: Continuous Cost Control.

By building in cost control automation from the very beginning, you can keep your cloud costs low while maintaining the flexibility required to keep up the pace of innovation. Without it, your costs are destined to rise faster than you intended, which will only cause headaches (and endless meetings) for your future self. The money may not be coming out of your bank account directly, but saving money at an enterprise organization is everyone’s job, and automating it is the key.

And that’s exactly what thousands of customers around the world are using ParkMyCloud for today! Get started with continuous cost control now.

Why Serverless Won’t Replace Traditional Servers

Curious why serverless is so popular – and why it won’t replace traditional servers in the cloud?

Top cloud service providers are dedicating a great deal of effort to expanding serverless architecture, a newer approach to cloud solutions that focuses on applications rather than infrastructure. Today we’ll take a look at what serverless computing is good for, and what it can’t replace.

Understanding Serverless

For starters, “serverless” mostly refers to an application or API that depends on third-party, cloud-hosted services to manage server-side logic and state, with custom code running on Function as a Service (FaaS) platforms.

Even though the name “serverless” suggests that there are no servers involved, there will always be servers in use. Rather, the name means developers don’t have to deal with the servers directly – the provider handles their implementation and management. To power serverless workloads, cloud providers use automated systems that eliminate the need for server administrators, offering developers a way to manage applications and services without having to handle, tweak, or scale the actual server infrastructure.

Top Serverless Providers

It is no surprise that the top cloud providers investing in serverless in a major way include AWS, Microsoft Azure, and Google Cloud. In brief, here is how they approach serverless computing.

AWS Lambda is the current leader among serverless compute implementations. Lambda handles everything for you, running your code as it’s triggered and scaling automatically.

Microsoft Azure Functions enables you to run code-on-demand without having to explicitly provision or manage infrastructure.

Google Cloud Functions is a compute solution for creating event-driven applications and connects with GCP services by listening for and responding to events without needing to provision or manage servers.
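
To make the FaaS model concrete, here’s a minimal sketch using AWS Lambda’s Python handler convention as the example. The event field is hypothetical; the platform invokes the function once per event, and there is no server for you to provision or manage.

```python
# A minimal Lambda-style function: invoked per event, billed per execution.
import json

def handler(event, context):
    """Called by the platform whenever a configured trigger fires."""
    name = event.get("name", "world")  # hypothetical event field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```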

Advantages and When to Use Serverless

Let’s look at why serverless is often a good choice. It allows organizations to reduce the operational complications and costs associated with infrastructure, since charges are computed from the actual usage or work the serverless platform performs.

When it comes to implementing, maintaining, debugging, and monitoring infrastructure and setting up your environment, serverless does the heavy lifting for you. It allows developers to focus on application development rather than complex infrastructure, promoting team efficiency and letting them concentrate on serving customers and meeting business goals.

Since serverless cost models are based on execution only, serverless can reduce your operational costs and save you money on cloud spend, making it well suited to short-term tasks in your environment. However, there are hidden costs to be aware of – what counts as an advantage here can just as easily become a disadvantage. Serverless apps rely on API calls, and heavy API request volume can become very pricey indeed. In addition, networking costs can get very expensive when sending a lot of data, and they are generally more difficult to track in serverless cost models.

Some of the best use cases for serverless are:

  • Brand new applications that don’t already have an existing workload
  • Microservices-based architectures, with small chunks of code working together
  • Infrequently-used scripts that don’t need a server running 24/7

Disadvantages and When Not to Use Serverless

No doubt, there is increased interest in serverless, but there are limitations that come with it. Perhaps these trade-offs are the reason some companies, though interested in serverless, are not ready to make the jump from traditional servers just yet.

Networking on serverless must be done through private API endpoints rather than directly addressable IPs, which constrains your architecture. Execution time limits make serverless unsuitable for long-running tasks, for applications with highly variable execution times, and for services that must wait on information from an external source.

Serverless creates dependency on your cloud provider, and because of this you cannot easily port your applications between providers. Cloud providers own the burden of resource provisioning, so they are solely responsible for ensuring that the application instance has the back-end infrastructure it needs to execute when summoned.

By adopting serverless, you forfeit complete control over your infrastructure – scaling, for example. Scaling is done automatically, but the absence of control makes it difficult to address and mitigate errors related to serverless instances. This lack of control also applies to application performance, a metric that developers still need to worry about in a serverless environment. After all, serverless providers depend on actual servers that need to be accessed and monitored.

Serverless is likely not a good fit for:

  • Rewriting existing apps
  • Applications with variable execution times
  • Long-term tasks
  • Monolithic applications

Why Serverless Won’t Replace Traditional Servers

Though every business has different needs when it comes to cloud infrastructure, serverless won’t completely supplant traditional servers. There are too many use cases where serverless is not applicable, or not worth the trade-off in control (or perhaps the cost – stay tuned for a future post on this). But as cloud service providers continue to invest heavily in serverless, it is fair to say that serverless usage will continue to grow in the years to come.

Amazon EKS Overview: AWS’s Managed Kubernetes Service

Amazon EKS is a hosted Kubernetes solution that helps you run your container workloads in AWS without having to manage the Kubernetes control plane for your cluster. This is a great entry point for Kubernetes administrators who are looking to migrate to AWS services but want to continue using the tooling they are already familiar with. Often, users are choosing between Amazon EKS and Amazon ECS (which we recently covered, in addition to a full container services comparison), so in this article, we’ll take a look at some of the basics and features of EKS that make it a compelling option.

Amazon EKS 101

The main selling point of Amazon EKS is that the Kubernetes control plane is managed for you by AWS, so you don’t have to set up and run your own. When you set up a new cluster in EKS, you can specify whether it will be available only within the current VPC or accessible to outside IP addresses. This flexibility highlights the two main deployment options for EKS:

  1. Fully within an AWS VPC, with complete integration to other AWS services you run in your account while being completely isolated from the outside world.
  2. Open and accessible, which enables hybrid-cloud, multi-cloud, or multi-account Kubernetes deployments.

Both options allow you the flexibility to use your own Kubernetes management tools, like Dashboard and kubectl, as EKS gives you the API Server Endpoint once you provision the cluster. This control plane utilizes multiple availability zones within the region you choose for redundancy.
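
For example, here’s a hedged sketch of retrieving that endpoint with boto3 once the cluster is provisioned – the cluster name is a placeholder.

```python
# Fetch the API server endpoint and CA data for kubectl configuration.
import boto3

eks = boto3.client("eks", region_name="us-east-1")

cluster = eks.describe_cluster(name="example-cluster")["cluster"]
print("API server endpoint:", cluster["endpoint"])
print("CA data (base64):", cluster["certificateAuthority"]["data"][:40], "...")
```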

Managed Container Showdown: EKS vs. ECS

Amazon offers two main container service options in EKS and ECS. The biggest difference between the two lies in the orchestrator. With ECS, Amazon runs its own proprietary orchestration engine for you, and you just decide which tasks to run and when. With EKS, you get a managed Kubernetes control plane, and you do the Kubernetes management of your pods yourself.

One factor to consider when comparing EKS vs. ECS is networking and load balancing. Both services run EC2 servers behind the scenes, but the actual network connection is slightly different. ECS attaches network interfaces to individual tasks on each EC2 instance, while EKS attaches network interfaces that serve multiple pods on each EC2 instance. Similarly, for load balancing, ECS can utilize Application Load Balancers to send traffic directly to a task, while EKS must use an Elastic Load Balancer to send traffic to an EC2 host (from which Kubernetes proxies it to the right pod). Neither is necessarily better or worse, just a slight difference that may matter for your workload.

Sounds Great… How Much Does It Cost?

For each workload you run in Amazon EKS, there are two main charges that will apply. First, there’s a charge of $0.20/hr (roughly $146/month) for each EKS control plane you run in your AWS account. Second, you’re charged for the underlying EC2 resources that are spun up by the Kubernetes controller. This second charge is very similar to how Amazon ECS charges you, and is highly dependent on the size and amount of resources you need.

Amazon EKS Best Practices

There’s no one-size-fits-all option for Kubernetes deployments, but Amazon EKS certainly has some good things going for it. If you’re already using Kubernetes, this can be a great way to seamlessly migrate to a cloud platform without changing your working processes. Also, if you’re going to be in a hybrid-cloud or multi-cloud deployment, this can make your life a little easier. That being said, for just simple Kubernetes clusters, the price of the control plane for each cluster may be too much to pay, which makes ECS a valid alternative.

More on container management and container optimization.

Will Robotic Process Automation Save Your Company Time and Money in the Cloud?

We’ve been hearing buzz about a new concept in AI, robotic process automation. The promise of the technology is that it can automate processes that employees are doing manually, saving your employees’ time and potentially reducing operational costs. It fits right in with the current trends in cloud computing toward optimization. We’re all about saving time and money – so let’s take a look at this trend to see if it can help you do either of these things.

What is Robotic Process Automation?

Robotic process automation (RPA) is a way to automate business processes by creating software robots to perform manual, mundane work tasks. It lets users configure software “bots” within an application to handle a variety of repetitive tasks by processing, generating, and communicating information automatically. For example, you might program RPA bots to do first-level customer support tasks by searching for answers; copy and paste data from one system to another for invoicing or expense management; or issue refunds. This video from IBM shows an example in action.

Furthermore, RPA tools can be trained to make judgments about future outputs. Many users appreciate its non-intrusive nature and the ability to integrate within infrastructures without causing disruption to systems already in place.

How can you use Robotic Process Automation?

Companies like Walmart, AT&T, and Walgreens are adopting the use of RPA. Clay Johnson, the CIO of Walmart, says they use RPA bots to automate pretty much anything from answering employee questions to retrieving useful information from audit documents. The CIO of American Express Global Business Travel, David Thompson, says they implement the use of RPA to automate the process for canceling an airline ticket and issuing refunds. In addition, Thompson is looking to use RPA to facilitate automatic rebooking recommendations, and to automate certain expense management tasks in the company.

More specific to cloud computing and IT, one great application for RPA is in automated software testing. If testing involves multiple applications and monotonous work, RPA can replace workers’ time spent testing. Additionally, RPA can be used to automate processes in monolithic legacy systems that are not worth developers’ time to update, to bring automation while work on newer microservices systems is in progress.

Is Robotic Process Automation the Best Way to Automate Cost Control?

A recent study found that not all automation is achievable with RPA: it concluded that only three percent of organizations have managed to scale RPA to a high level. Additionally, Gartner placed RPA tools at the “Peak of Inflated Expectations” in its Hype Cycle guide for artificial intelligence last year – another vote for more buzz than potential.

So can it save you time and money? If employees at your company are spending a large percentage of their time on repetitive tasks that require little to no decision making, then yes, it probably can. It’s also important to free up developer time that is spent on automatable tasks, like scripting, so they can focus on creating value for your business.

For complex and long-term automation, though, purpose-built software is a better solution. If there is already a solution to your automation needs on the market, it will probably serve you better than RPA: there’s no upfront period needed to program bots, you won’t need to change your processes frequently the way many RPA bots require, and it will serve you better in the long run.