How to Evaluate AWS RDS Pricing and Features

AWS RDS pricing – like all Amazon cloud pricing – can be a bit confusing. In this post, we’ll walk through how RDS pricing works and other features of RDS you should know about.

Since its release, Amazon’s Relational Database Service (RDS) has become increasingly popular among organizations looking to simplify setting up, operating, and scaling relational databases in AWS. RDS is an automated service that, once implemented, takes over time-consuming, mundane tasks. The ability to automate relational databases in the cloud has made RDS a cost-efficient option for those looking to control their cloud spend.

Traditional systems administration of servers, applications, and databases used to be a little simpler when it came to choices and costs. For a long time, there was no choice but to hook up a physical server, install your desired OS, and install the database or application software you needed. Eventually, you could choose to install your OS on a physical server or on a virtual machine running on a hypervisor. Then, large companies started running their own hypervisors and allowed you to rent a VM on their servers for as long as you needed it.

In October 2009, Amazon started offering the ability to rent databases directly – without having to worry about the underlying OS – in a platform as a service (PaaS) offering called RDS. This service quickly became one more thing systems administrators should take into consideration when looking at their choices for infrastructure management. Let’s take a look at some of the features that come with AWS RDS and explore RDS pricing to get a better understanding of all the RDS costs.

RDS Basics

AWS RDS gives users the ability to run and manage relational databases in the cloud, changing the way users once interacted with cloud infrastructure. A great thing about RDS is that users don’t have to manage the infrastructure the database runs on: RDS takes over many of the once-tedious tasks necessary to manage a relational database in AWS. This allows system administrators to focus their time on other, more important projects. Another great feature is that you don’t have to worry about patching the database software itself.

The most essential part of Amazon RDS is an AWS DB instance. These instances are isolated database environments in the cloud. The computation and memory capacity of an RDS DB instance depends on its DB instance class. In AWS RDS, you can choose to use on-demand instances or reserved instances. Pricing and features will vary depending on the database engine and instance class you use.

You can currently run RDS on six database engines: MySQL, Aurora (MySQL on steroids), Oracle, Microsoft SQL Server, PostgreSQL, and MariaDB. Instance classes are grouped into families such as General Purpose (t2/t3, m4/m5) and Memory Optimized (r4/r5, x1/x1e). Each family has multiple sizes with varying numbers of vCPUs, GiBs of memory, and levels of network performance, and some sizes can be input/output optimized.

Each RDS instance can be set up to be “multi-AZ”, leveraging replicas of the database in different availability zones within AWS.  This is often used for production databases. If a problem arises in one availability zone, failover to one of the replica databases happens automatically behind the scenes. You don’t have to manage it.  Along with multi-AZ deployments, Amazon offers “Aurora”, which has more fault tolerance and self-healing beyond multi-AZ, as well as additional performance features.
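
To make that concrete, here’s a minimal sketch of launching a Multi-AZ instance with boto3, the AWS SDK for Python. The identifier, credentials, and sizing below are placeholder values, not recommendations:

```python
import boto3

rds = boto3.client('rds', region_name='us-east-1')

# Launch a PostgreSQL instance with a synchronous standby in another AZ.
# MultiAZ=True is all it takes -- failover happens behind the scenes.
rds.create_db_instance(
    DBInstanceIdentifier='prod-orders-db',    # hypothetical name
    DBInstanceClass='db.r4.large',
    Engine='postgres',
    AllocatedStorage=100,                     # GB
    MasterUsername='dbadmin',
    MasterUserPassword='change-me-please',    # use a secrets manager in practice
    MultiAZ=True,
)
```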

It’s important to ensure that you’re matching your workloads to the instance types that best meet their needs, so you have the best and most cost-efficient option for your database. Pricing varies across RDS instance sizes and the database engines they run on. Here’s a breakdown of AWS RDS instance types (a quick way to list what’s actually orderable in your region follows the list):

  • General Purpose
    • T3 instances – the latest burstable, general purpose instance type that provides a baseline level of CPU performance, plus the ability to burst CPU usage. Balance of compute, memory, and network.
    • T2 instances – similar to T3, T2 instances are burstable general-purpose performance instances that provide a baseline level of CPU performance with the ability to burst.
    • M5 instances – the latest general purpose instances with a balance of compute, memory, and network resources.
    • M4 instances – balance of compute, memory, and network resources.
  • Memory Optimized
    • R5 instances – latest generation of memory-optimized instances.
    • R4 instances – previous generation of memory-optimized instances.
    • X1e instances – optimized for high-performance databases, offering one of the lowest prices per GiB of RAM.
    • X1 instances – optimized for large-scale, enterprise-class and in-memory applications.
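
As promised above, here’s a quick way to check which instance classes are actually orderable for a given engine in your region, straight from the RDS API. A sketch using boto3:

```python
import boto3

rds = boto3.client('rds', region_name='us-east-1')

# Page through the orderable options for MySQL and collect the instance classes
paginator = rds.get_paginator('describe_orderable_db_instance_options')
classes = set()
for page in paginator.paginate(Engine='mysql'):
    for option in page['OrderableDBInstanceOptions']:
        classes.add(option['DBInstanceClass'])

print(sorted(classes))   # e.g. ['db.m5.large', 'db.r4.large', 'db.t3.medium', ...]
```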

RDS Pricing

RDS is essentially a service running on top of EC2 instances, but you don’t have access to the underlying instances. Amazon has therefore priced RDS instances in a very similar way to EC2 instances, which will be familiar once you’ve gotten a grasp of the structure already in place for compute. There are multiple components to the price of an instance, including the underlying instance size, data storage, multi-AZ capability, and data transfer out (inbound data transfer is free). To add another layer of complexity, each database engine (MySQL, Oracle, etc.) has different prices for each of these factors. Aurora also charges for I/O on top of the other costs.

When you add all this up, the cost of an RDS instance can go through the roof for a high-volume database.  It also can be hard to predict the usage, storage, and transfer needs of your database, especially for new applications.  Also, the raw performance might be a lot less than what you might expect running on your own hardware or even on your own instances. What makes the price worth it?

What are the Actual Costs?

AWS offers a number of instance types suited to different engines/databases. Once you determine which instance type you want and which engine you’ll be running it on, you can find a more specific price for your instance. With AWS RDS you only pay for what you use, and you can try RDS at no cost with the AWS Free Tier.

Pricing of instance types depends on the RDS database engine you run. As an example of AWS RDS instance pricing, here’s how a memory-optimized R4 Large compares to an R4 Extra Large across engines. This is pricing for the US East (N. Virginia) region:

You can see how the cost of an AWS RDS instance doubles, or more, just by going up one size – this is the same across instance types, sizes, and regions.

To further break down what you would be paying, here’s what Amazon bills you based on (a rough estimator follows the list):

  • DB instance hours
  • Storage (per GB per month)
  • I/O requests per month
  • Provisioned IOPS per month
  • Backup storage
  • Data transfer
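
If you want a rough feel for how these line items add up, here’s a back-of-the-envelope estimator. The rates are placeholders for illustration only, not current AWS prices, so check the pricing pages for real numbers:

```python
def estimate_rds_monthly_cost(instance_hours, storage_gb, io_millions,
                              backup_gb, transfer_out_gb,
                              hourly_rate=0.29,     # placeholder instance rate, $/hr
                              storage_rate=0.115,   # placeholder, $/GB-month
                              io_rate=0.20,         # placeholder, $/million requests
                              backup_rate=0.095,    # placeholder, $/GB-month
                              transfer_rate=0.09):  # placeholder, $/GB out
    """Sum the billable components listed above into a monthly estimate."""
    return (instance_hours * hourly_rate
            + storage_gb * storage_rate
            + io_millions * io_rate
            + backup_gb * backup_rate
            + transfer_out_gb * transfer_rate)

# One instance running 24/7 (~730 hrs), 100 GB storage, 10M I/O requests,
# 50 GB of extra backup storage, 200 GB transferred out:
print(f"${estimate_rds_monthly_cost(730, 100, 10, 50, 200):.2f}")
```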

You can use the AWS Monthly Calculator to help calculate what your costs would be.

RDS vs. Installing a Database on EC2

We often see that the choice comes down to either using RDS for your database backend, or installing your own database on an EC2 instance the “traditional” way. From a purely financial perspective, installing your own database is almost guaranteed to be cheaper if you focus on AWS direct costs alone.  However, there’s more to factor into the decision than just the cost of the services. 

What often gets lost in the use of a service is the time-to-value savings (which includes your time and potentially opportunity cost/benefit for bringing services online, faster).  For example, by using RDS instead of your own database, you avoid the need to install and manage the OS and database software, as well as the ongoing patching of those. You also get automatic backups and recovery through the AWS console or AWS API.  You avoid having to configure storage LUNs and worrying about optimizing striping for better I/O. Resizing instances is much simpler with RDS, both going smaller or bigger if necessary. High-availability (either cold or warm) is available at the click of a button.  All of this means less management for you and faster deployment times, though at a higher price point. If your company competes in a highly competitive market, these faster deployment times can make all the difference in the world to your bottom line.

Keep in mind that, as of fall 2017, you can start and stop RDS instances, which is particularly useful for dev/test environments. With this functionality, businesses can stop RDS instances so they are not running 24/7. However, while a stopped instance is not charged for database hours, you will still pay for provisioned storage, manual snapshots, and automated backup storage.
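
If you’re scripting this yourself, the stop and start calls are available in the AWS CLI and SDKs. A sketch with boto3 (note that AWS automatically restarts a stopped instance after seven days):

```python
import boto3

rds = boto3.client('rds', region_name='us-east-1')

# Park a dev/test database for the night...
rds.stop_db_instance(DBInstanceIdentifier='dev-reports-db')

# ...and bring it back in the morning.
rds.start_db_instance(DBInstanceIdentifier='dev-reports-db')
```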

How to Manage RDS with ParkMyCloud

ParkMyCloud has made “parking” – a.k.a. starting and stopping on a schedule – public cloud compute resources as simple as possible. This includes the ability to park RDS instances, helping you save money on non-production databases.

By using our Logical Groups feature, you can create a simple “stack” containing both compute instances and RDS databases to represent a particular application. A logical group can be used to manage all the constituent parts of an application.

The start/stop times can be sequenced within the group, and a single schedule can be applied to the group for simplified management. If access is needed during scheduled stop times, you can override the schedules as needed through the web app or via commands in your chat provider, like Slack or Microsoft Teams. You can also set start or stop delays within the Logical Group to customize the order, so if databases need to be started first and stopped last, you can set that level of granularity. This helps with cost optimization because you can organize and manage your RDS instances in one place, at one time.

Conclusion

AWS RDS pricing can get a bit tricky and really requires you to know the details of your database in order to accurately predict the bill. However, there are a ton of benefits to using the service, and it can really help streamline your systems administration by handling the management and deployment of your backend database. For companies that are moving to the cloud, or born in the cloud, RDS might be the right choice compared to running on a separate compute instance or on your own hypervisor, as it allows you to focus on your business and application, not on being a database administrator. For larger, established companies with a large team of DBAs and well-established automation, or for I/O-intensive applications, an alternative service may be a better option. By knowing the features, benefits, drawbacks, and cost factors, you can make the most informed decision for your AWS database needs.

AWS vs Google Cloud Pricing – A Comprehensive Look

Since ParkMyCloud provides cost control for Amazon Web Services (AWS) and Google Cloud Platform (GCP) resources, we thought it might be useful to compare AWS vs Google Cloud pricing, along with the terminology and billing differences. There are other services involved in your overall bill, such as networking, storage, and load balancing, but I am going to focus mainly on compute charges in this article.

Note: a version of this post was originally published in 2017. It has been completely rewritten and updated to include the latest AWS pricing and GCP pricing as of October 2019.

AWS and GCP Terminology Differences

In AWS, the compute service is called “Elastic Compute Cloud” (EC2). The virtual servers are called “instances”.

In GCP, the service is referred to as “Google Compute Engine” (GCE). The servers are also called “instances”.

A notable difference in terminology: GCP has “preemptible” and non-preemptible instances. Non-preemptible instances are the equivalent of AWS “on demand” instances.

Preemptible instances are similar to AWS “spot” instances, in that they are a lot less expensive but can be preempted with little or no notice. GCP preemptible instances can be stopped without being terminated; in November 2017, AWS introduced a similar feature with spot instance hibernation. In AWS, flocks of instances spun up from a snapshot according to scaling rules are called “auto scaling groups”.

A similar concept can be created within GCP using “instance groups”. However, instance groups are really more of a “stack”, created using an “instance group template”. As such, they are more closely related to AWS CloudFormation stacks.

AWS vs. GCP Compute Sizing

Both AWS and GCP have a dizzying array of instance sizes to choose from, and doing an apples-to-apples comparison between them can be quite challenging. These predefined instance sizes are based upon number of virtual cores, amount of virtual memory and amount of virtual disk.

Each provider groups its predefined instance types into different categories.

AWS offers:

  • Free tier – inexpensive, burst performance (t3 family)
  • General purpose (m4/m5 family)
  • Compute optimized (c5 family)
  • GPU instances (p3 family)
  • FPGA instances (f1 family)
  • Memory optimized (x1, r5 family)
  • Storage optimized (i3, d2, h1 family)

GCP offers the following predefined types:

  • Free tier – inexpensive, burst performance (f1/g1 family)
  • Standard (n1-standard family)
  • High memory (n1-highmem family)
  • High CPU (n1-highcpu family)

However, GCP also allows you to make your own custom machine types, if none of the predefined ones fit your workload. You pay for uplifts in CPU/Hr and memory GiB/Hr. You can also add GPUs and premium processors as uplifts.
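
Since custom machine types are priced as per-vCPU and per-GiB uplifts, the math is easy to sketch. The rates below are placeholders, not Google’s actual prices:

```python
def custom_machine_hourly(vcpus, memory_gib,
                          cpu_rate=0.033,    # placeholder $/vCPU-hour
                          mem_rate=0.0045):  # placeholder $/GiB-hour
    """Hourly price of a custom machine type: CPU uplift plus memory uplift."""
    return vcpus * cpu_rate + memory_gib * mem_rate

# e.g. a custom 6 vCPU / 20 GiB machine type:
print(f"${custom_machine_hourly(6, 20):.4f}/hr")
```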

With respect to pricing, here is how the two compare when looking at some of the most common “work horses”, focusing on CPU, memory, and cost.

The bottom line:

In general, for most workloads, AWS is less expensive on a CPU/Hr basis. For compute-intensive workloads, GCP instances are generally less expensive.

Also, as you can see from the table, both providers charge uplifts for different operating systems, and those uplifts can be substantial. You really need to pay attention to the fine print. For example, GCP charges a 4-core minimum for all their SQL uplifts (yikes!). And, in the case of Red Hat Enterprise Linux (RHEL) in GCP, they charge you a 1-hour minimum for the uplifts, and in 1-hour increments after that. (We’ll talk more about how the providers charge you in the next section.)

AWS vs. Google Cloud Platform Pricing – Examining the Differences

Cost per hour is only one aspect of the cloud pricing equation, though. To better understand your monthly bill, you must also understand how the cloud providers actually charge you. AWS prices their compute time by the hour, but charges by the second, with a 1 minute minimum.

Google Compute Engine pricing is also listed by the hour for each instance, but they charge you by the minute, rounded up to the nearest minute, with a 10 minute minimum charge. So, if you run for 1 minute, you get charged for 10 minutes. However, if you run for 61 minutes, you get charged for 61 minutes. 
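
Here’s a small sketch that applies the two billing rules described above, so you can see how the same runtime bills differently on each provider:

```python
import math

def aws_billed_seconds(runtime_seconds):
    # Per-second billing with a 1-minute minimum
    return max(runtime_seconds, 60)

def gcp_billed_seconds(runtime_seconds):
    # Per-minute billing, rounded up, with a 10-minute minimum
    return max(math.ceil(runtime_seconds / 60), 10) * 60

for runtime in (60, 300, 3660):          # 1 minute, 5 minutes, 61 minutes
    print(f"ran {runtime}s -> AWS bills {aws_billed_seconds(runtime)}s, "
          f"GCP bills {gcp_billed_seconds(runtime)}s")
```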

AWS Reserved Instances vs GCP Committed Use

Both providers offer deeper discounts off their normal pricing, for “predictable” workloads that need to run for sustained periods of time, if you are willing to commit to capacity consumption upfront. AWS offers Reserved Instances. Google offers Committed Use Discounts.  Both involve agreeing to pay for the life of the reservation or commitment, though some have you pay up-front versus paying per month. This model of payment can get you some significant discounts over on-demand workloads, but can limit your flexibility as a trade-off. Check out our other posts on AWS Reserved Instances and Google Committed Use Discounts.

GCP Sustained Use Discounts

In addition to the Committed Use Discounts, GCP also has a unique offering with no direct parallel in AWS: Sustained Use Discounts. These provide you with an automatic discount if you run a workload for more than 25% of the month, with bigger discounts for more usage. These discounts can save you up to 30%, depending on your usage and instance size. The Google Cloud Pricing Calculator can help figure out how much this will affect your GCP costs.
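
To see how the tiers work out, here’s a sketch of the discount math. The multipliers below reflect the schedule Google has published for N1 machine types (each successive quarter of the month billed at 100%, 80%, 60%, and 40% of the base rate) – treat them as an assumption and verify against GCP’s documentation:

```python
def sustained_use_cost(hours_used, hourly_rate, hours_in_month=730):
    """Apply tiered multipliers to each quarter of the month's usage."""
    tier_multipliers = [1.00, 0.80, 0.60, 0.40]   # assumed N1 schedule
    quarter = hours_in_month / 4
    cost, remaining = 0.0, hours_used
    for multiplier in tier_multipliers:
        tier_hours = min(remaining, quarter)
        cost += tier_hours * hourly_rate * multiplier
        remaining -= tier_hours
        if remaining <= 0:
            break
    return cost

# A full month at a $0.10/hr list rate works out to roughly a 30% discount:
full_month = sustained_use_cost(730, 0.10)
print(f"${full_month:.2f} vs ${730 * 0.10:.2f} list")
```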

Conclusion

If you are new to the public cloud, once you get past all the confusing jargon, the creative approaches to pricing, and the different ways providers charge for usage, the actual cloud services themselves are much easier to use than legacy on-premises services.

The public cloud services do provide much better flexibility and faster time-to-value. 

When comparing AWS vs. Google Cloud pricing, AWS EC2 on-demand pricing may on the surface appear to be more competitive than GCP pricing for comparable compute engines. However, when you examine specific workloads and factor in Google’s approach to charging for CPU/Hr time and their use of Sustained Use Discounts, GCP may actually be less expensive. 

In the meantime, ParkMyCloud will continue to help you turn off non-production cloud resources when you don’t need them, saving you money on your monthly cloud bills regardless of which public cloud provider you use.

Like this post? See also: How much do the differences between cloud providers actually matter? 

5 Tips To Improve Terraform on AWS

Many DevOps folks are inclined to use Hashicorp’s Terraform on AWS to manage infrastructure as code. This can have some Schroedinger-esque qualities to it, in that it can simplify your cloud management while also adding a layer of complexity to your toolset. If you’re already using Terraform, or are planning to start implementing infrastructure as code, then use these tips for better management of AWS deployments.

1. Use Terraform AND CloudFormation

This might come as a surprise, but not everyone wants to be involved in the holy war between Terraform and CloudFormation – and many have team members who use both tools. If this rings true for you, there’s no need to give up on CloudFormation. You can make use of the aws_cloudformation_stack resource type in Terraform. By using this, you can incorporate existing CloudFormation templates in your Terraform configurations, or you can make use of any potential features that aren’t available in Terraform. The “parameters” and “outputs” of the resource let you have a bidirectional communication between the two tools as well. 

2. Run “terraform plan” early and often

One of the best things about infrastructure as code is that you can know what’s going to happen before it happens. Using the “terraform plan” command to get an output of any AWS infrastructure changes that would result from your existing Terraform configuration can help you plan, communicate, and document everything that happens during your change windows. Making a habit of getting the plan after even minor code changes can save you a bunch of headaches from errors or misunderstood edits.
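
One way to bake this habit into your workflow is to run the plan in CI and key off Terraform’s exit codes. A sketch in Python – the `-detailed-exitcode` flag makes `terraform plan` return 0 for no changes, 2 for pending changes, and 1 for errors; adjust paths and flags to your setup:

```python
import subprocess
import sys

# Run a plan against the current working directory's configuration
result = subprocess.run(
    ['terraform', 'plan', '-detailed-exitcode', '-no-color'],
    capture_output=True, text=True,
)

if result.returncode == 0:
    print('No infrastructure changes pending.')
elif result.returncode == 2:
    print('Changes detected -- review before your change window:')
    print(result.stdout)
else:
    sys.exit(f'terraform plan failed:\n{result.stderr}')
```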

3. Be careful using Autoscaling Groups with Terraform

AWS Autoscaling Groups are a great idea on paper, but in practice can cause headaches (often at the worst possible times) by scaling too much or not enough. The key is that you need to not only build out your ASG in Terraform, but also define the scaling policies and scaling boundaries, and know how to handle graceful termination and data storage. You also need to know what it looks like if Terraform expects one setup, but you’ve parked your resources for cost savings or suspended any ASG processes. This can be a challenge when you try to test your scaling policies, or if you don’t use ASGs the same way in development environments.

4. Adopt a “microservices” architecture with your configurations

Splitting your Terraform configuration into multiple files might seem like you’re just adding complexity, but this “microservices” approach can really help with management in large organizations. By making heavy use of output variables, you can coordinate configurations as needed, or you can keep things isolated for risk mitigation in case a configuration doesn’t do what you intended. Separate files also help you keep track of what’s changing and where, for audit logging purposes.

5. Create reusable modules specific to your organization

Along a similar line to number 4 above, creating smaller configurations can help you reuse code for different projects or teams. Reusable modules that are specific to what you do as a company can help accelerate deployment in the future, and can help with non-production workloads when developers want an AWS environment that is as close to production as possible.

Conclusion

Using Terraform on AWS can be a fantastic tool for defining your cloud infrastructure in an easily-deployable way, but can be daunting if you are trying to just get started or scale out to fit a larger enterprise. Use these tips to help save some frustration while making full use of the power of AWS!

8 Non-Obvious Benefits of SSO

As an enterprise or organization grows in size, the benefits of SSO grow along with it. Some of these benefits are easy to see, but there are other things that come up as side-effects that might just become your favorite features. If you’re on the fence about going all-in on Single Sign-On, then see if anything here might push you over the edge.

1. Multi-factor Authentication

One of the best ways to secure a user’s account is to make the account not strictly based on a password. Passwords can be hacked, guessed, reused, or written down on a sticky note on the user’s monitor. A huge benefit of SSO is the ease of adding MFA security to the SSO login. By adding a second factor, which is typically a constantly-rotating number or token, you vastly increase the security of the account by eliminating the immediate access of a hacked password. Some organizations even choose to add a third factor, which is typically something you are (like a fingerprint or eye scan) for physical access to a location.
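
For the curious, that “constantly-rotating number” is usually a TOTP code, derived from a shared secret plus the current time. A minimal sketch using the pyotp library:

```python
import pyotp

# The secret is provisioned once, typically via a QR code in an authenticator app
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

code = totp.now()               # six digits, rotates every 30 seconds
print(code, totp.verify(code))  # the server-side check -> True
```

Speaking of passwords…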

2. Increased Password Complexity

Forcing users to go through an SSO login instead of remembering passwords for each individual application or website means they are much more open to forming complex passwords that rotate frequently. A big complaint about passwords is having to remember a bunch of them without reusing them, so a limitation on the number of passwords means that one password can be much stronger.

3. Easier User Account Deployment

This one might seem obvious to some, but by using an SSO portal for all applications, user provisioning can be greatly accelerated and secured. The IT playbook can be codified within the SSO portal, so a new user in the accounting department can get immediate access to the same applications that the rest of the accounting department has access to. Now, when you get that inevitable surprise hire that no one told you about, you can make it happen and be the hero.

4. Easier User Account Deletion

On the flip side of #3, sometimes the playbook for removing users after they leave the company can be quite convoluted, and there’s always that nagging feeling that you’re forgetting to change a password or remove a login from somewhere. With SSO, you just have one account to disable, which means access is removed quickly and consistently. If your admins were using SSO for administrative access, it also means fewer password changes you have to make on your critical systems.

5. Consistent Audit Logging

Another one of the benefits of SSO is consistent audit logging. Funneling all of a user’s access through the same SSO login means that tracking that user’s activity is easier than ever. In financial and regulated industries, this is a crucial piece of the puzzle, as you can make guarantees about what you are tracking. In the case of a user who is no longer employed by the enterprise, it can make it easier to have your monitoring tools watch for access attempts by former employees (though you know they can’t get in, from point #4!).

6. Quickly Roll Out New Applications

Tell your IT staff that you need to roll out a new application to all users without SSO and you’ll hear groans starting in record time. However, with SSO, rolling out an application is a matter of a few clicks. This means you have plenty of options, ranging from a slow rollout to select groups all the way to a full deployment within a matter of minutes. This flexibility can really help maximize your users’ productivity, and will make your IT staff happy to put services into play.

7. Simplify the User Experience

If you use a lot of SaaS applications or web apps that require remembering a URL, you’re just asking for your users to need reminders of how to get into them. With an SSO portal, you can make all services and websites show up as clickable items, so users don’t need to remember the quirky spelling of that tool you bought yesterday. Users will love having everything in one place, and you’ll love not having to type anything anymore.

8. Empower Your Users

Speaking of SaaS applications, one of the main blockers for deploying an application to a wider audience is the up-front setup time and effort, which leads to IT and Operations shouldering the load of the work (since they have the access). SSO can accelerate that deployment, which means the users have more power and can directly access the tools they need. Take an example of ParkMyCloud, where instead of users asking IT to turn on their virtual machines and databases, the users can log directly into the ParkMyCloud portal (with limited access) and control the costs of their cloud environments. Users feel empowered, and IT feels relieved.

Don’t Wait To Use SSO

Whether you’ve already got something in place that you’re not fully utilizing, or you’re exploring different providers, the benefits of SSO are numerous. Small companies can quickly make use of single sign-on, while large enterprises might consider it a must-have. Either way, know that your staff and user base will love having it as their main access portal!

AWS Neptune Overview – Amazon’s Graph Database Service

AWS Neptune is AWS’s managed graph database service, offered to give customers an option to easily build and run applications that work with highly connected datasets. It was first announced at AWS re:Invent 2017, and made generally available in May 2018.

Graph databases like AWS Neptune were created to address the limitations of relational databases, and offer an efficient way to work with complex data. 

What is a graph database?

A graph database is a database optimized to store and process highly connected data – in short, it’s about relationships. The data structure for these databases consists of vertices, or nodes, and directed links called edges.

Use cases for such highly-connected data include social networking, restaurant recommendations, retail fraud detection, knowledge graphs, life sciences, and network & IT ops. For a restaurant recommendations use case, for example, you may be interested in the relationships between various users, where those users live, what types of restaurants those users like, where the restaurants are located, what sort of cuisine they serve, and more. With a graph database, you can use the relationships between these data points to provide contextual restaurant recommendations to users.

Details on the AWS Neptune Offering

AWS Neptune Pricing 

The AWS Neptune cost calculation depends on a few factors:

  • On-Demand instance pricing – you’ll need to pay for the compute instances needed for read-write workloads as well as Amazon Neptune replicas. These follow the general pricing for AWS On Demand instances.
  • Database Storage & I/Os – storage is also paid per usage with no upfront commitments. Storage is billed in per GB-month increments and I/Os are billed in per million request increments. 
  • Backup storage – you are charged for the storage associated with your automated database backups and database cluster snapshots. As per usual, increasing the retention period will cost more. 
  • Data transfer – you are charged per GB for data transferred in and out of AWS Neptune.

For this, as with most AWS services, pricing is confusing and difficult to predict. 

AWS Neptune Use Cases

Use cases for the AWS graph database and other similar offerings include:

  • Machine learning, such as intelligent image recognition, speech recognition, intelligent chatbots, and recommendation engines.
  • Social networking
  • Fraud detection – flexibility at scale makes graph databases useful to work with the huge amount of transactional data needed to detect fraud. 
  • Regulatory compliance – ever more important as HIPAA, GDPR, and other regulations place strict requirements on the way organizations use customer data.
  • Knowledge graphs – such as advanced results for keyword searches and complex content searches.
  • Life sciences – graph databases are uniquely suited to store models of disease and gene interactions, protein patterns, chemical compounds, and more.
  • Network/IT Operations to keep networks secure, including identity and access management, detection of malicious file paths, and more. 
  • Supply chain transparency – graph databases are great for modeling complex supply chains that span the globe. 

Tired of SQL?

If you’re tired of SQL, AWS Neptune may be for you. A graph database is fundamentally different from SQL. There are no tables, columns, or rows – it feels like a NoSQL database. There are only two data types: vertices and edges, both of which have properties stored as key-value pairs.

AWS Neptune is fully managed, which means that database management tasks like hardware provisioning, software patching, setup, configuration, and backups are taken care of for you.

It’s also highly available, with replicas in multiple availability zones. This is very similar to Aurora, Amazon’s relational database, in its architecture and availability.

Neptune supports Property Graph and W3C’s RDF. You can use these to build your own web of data sets that you care about, and build networks across the data sets in the way that makes sense for your data, not with arbitrary presets. You can do this using the graph models’ query languages: Apache TinkerPop Gremlin and SPARQL.
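
As a taste of what Gremlin looks like against Neptune, here’s a minimal sketch using the gremlinpython client, continuing the restaurant-recommendation example from earlier. The endpoint is a placeholder for your own cluster’s:

```python
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.graph_traversal import __
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

conn = DriverRemoteConnection(
    'wss://your-cluster.cluster-abc123.us-east-1.neptune.amazonaws.com:8182/gremlin',
    'g')
g = traversal().withRemote(conn)

# Two vertices and a "likes" edge -- the relationship is the data
g.addV('user').property('name', 'Alice').iterate()
g.addV('restaurant').property('name', 'Sushi Ko').iterate()
g.V().has('user', 'name', 'Alice').addE('likes') \
    .to(__.V().has('restaurant', 'name', 'Sushi Ko')).iterate()

# Which restaurants does Alice like?
print(g.V().has('user', 'name', 'Alice').out('likes').values('name').toList())
conn.close()
```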

AWS Neptune Visualization is not built in natively. However, data can be visualized with Amazon SageMaker Jupyter notebooks, or third-party options like Metaphactory, Tom Sawyer Software, Cambridge Intelligence/Keylines, and Arcade. 

Other Graph Database Options

There’s certainly competition in the market for other graph database solutions. Here are a few that are frequently mentioned. 

AWS Neptune vs. Neo4j

Neo4j is a graph database that has been rated most popular by mindshare and adoption. Version 1.0 was released in February 2010. Unlike AWS Neptune, Neo4j is open source. Neo4j uses the language Cypher, which it originally developed. While there are several languages available in the graph database market, Cypher is widely known by now. 

Neo4j, unlike AWS Neptune, does actually come with graph visualization, which is a huge plus for working with this kind of data, though as mentioned above, there are several ways to visualize your Neptune data.

Other

Other graph databases include: AllegroGraph, AnzoGraph, ArangoDB, DataStax Enterprise Graph, InfiniteGraph, JanusGraph, MarkLogic, Microsoft SQL Server 2017, OpenLink Virtuoso, Oracle Spatial and Graph (part of Oracle Database), OrientDB, RedisGraph, SAP HANA, Sparksee, Sqrrl Enterprise, and Teradata Aster.

AWS Neptune – Getting Started

If you’re interested in the service, you can check out more about AWS Neptune. As you get started, the AWS Neptune docs are a great resource. Or, check out some AWS Neptune tutorials on YouTube.

Once you’re on board, make sure you have cost control as a priority. ParkMyCloud can now park Neptune databases to ensure you’re only paying for what you’re actually using. Try it out for yourself!