How to Create Your 2019 AWS re:Invent Schedule

It’s time to plan your 2019 AWS re:Invent schedule! This will be our team’s fifth trek to the AWS conference in Las Vegas, so we’ve put together some tips for planning out your conference experience.

First up, if you have not yet registered for re:Invent, do that now! Tickets have sold out in the past, so don’t wait. 

Choose Your Sessions in Advance

The key to a great AWS re:Invent schedule is to plan in advance. The essential part of this planning is to register for sessions in advance. Reserved seating goes live in the fall (typically mid-October). Once the date is announced, mark your calendar and plan to select a few key sessions immediately – seating can be competitive and fill up quickly!

What you can get started with today is reading through the re:Invent agenda and, especially, the immense event catalog. Note the sessions you’re interested in. Here are some tips to keep in mind:

  • Focus – what do you most hope to gain at re:Invent? You can sort sessions based on subject areas and industries – would a “focus path” help you gain more out of your experience?
  • Value of In-Person vs. Session Videos – Many sessions will be online afterward, so prioritize sessions with an element that is more valuable in person – chalk talks, workshops, and others with interactive components. You can catch up on anything you miss with the session videos, which takes some pressure off your schedule and leaves room for a little fun in Vegas.
  • Sessions will be repeated at the 2019 event – no doubt in response to complaints about full sessions and long wait times in 2018, AWS will repeat its most popular sessions several times across venues this year. There will also be overflow rooms for popular sessions and more late-night session options than in previous years.
  • Travel time – This won’t be the first or the last time you hear this, but it’s worth saying again: the re:Invent campus is big. HUGE. Plan your schedule accordingly, with as few travel periods up and down The Strip as possible. If there are multiple sessions you’re interested in at the same time, prioritize ones with the least travel time. You should also plan to arrive at sessions early.

Once dates, times, and locations have been announced for sessions, we recommend putting them into your calendar for a clean visual of your day, and reminders. Once it’s available, you’ll be able to view your schedule in the AWS re:Invent app, along with maps and more.

Scroll down to the bottom of this post for a list of sessions that looked particularly interesting to our team. 

Set Aside Time for the Expo Hall

Make sure you set aside time to visit the expo hall! Actually, there are two expos – the main one at The Venetian and another at the Aria.

The Welcome Reception from 4-7 PM on Monday is a great time to visit the expo and kick off your re:Invent experience with food, drinks, and giveaways. However, it will be crowded. You’ll want to come back again later in the week to check out vendor products and services, chat with vendors whose products you already use, get swag, and enter drawings. The expo is open from 10:30 AM – 6 PM Tuesday, 8 AM – 6 PM Wednesday, and 10:30 AM – 4 PM Thursday.

You won’t be disappointed by the swag. Just search #reinventswag for examples — sponsors go all out. By the way, if you’re aiming to maximize swag, definitely stop by after lunch on Thursday. Sponsors will practically beg you to take stuff off their hands so they don’t have to ship it home. You can grab toys, stickers, and keychains for your kids, or build an entire wardrobe of t-shirts and socks for yourself.

And of course, stop by and visit ParkMyCloud and Turbonomic at the Venetian expo! We’ll post our booth number here once it’s announced. 

We’d love to see you there!  Sign up for a meeting with us now.

Activities and Parties

Round out your Vegas experience with some partying! The great thing about a conference like this is that you can often drink your way through for free. Outside of the pub crawl, many parties require you to register ahead of time, so keep an eye on your email for invitations. You’ll want to bookmark this list of 2019 re:Invent parties. As of this writing, it’s a bit sparse, but check out last year’s party list for an idea of the multitude of options to come.

Obviously, you don’t want to miss re:Play, the AWS re:Invent party and centerpiece of the conference (you know, besides the keynotes). Expect more free food and drink, an EDM concert, a retro arcade, a laser escape room, a drone obstacle course, a climbing wall, dodgeball, a bounce castle, archery tag, and/or whatever else they come up with for this year. And new for 2019 is the Intersect Music Festival put on by AWS on Friday and Saturday – note that tickets are separate from re:Invent.

Or venture out beyond the conference hall walls and try your luck or catch a show – it’s hard to be bored in Vegas.

Frequently Asked Questions, Answered

Here are some answers to the most common questions about AWS re:Invent 2019:

How much does AWS re:Invent 2019 cost? 

A full pass to the conference costs $1799. There are a handful of discounts available (for example, if your company is also sponsoring the event), but if you don’t already have access to one of those, we’d recommend going ahead and buying a full pass. 

What are the AWS re:Invent 2019 dates? 

This year’s AWS Las Vegas extravaganza will run December 2-6. If you fly in on Sunday, December 1, you can participate in Midnight Madness. And don’t forget that AWS’s new Intersect Music Festival is on the 6th and 7th.

Will there be AWS re:Invent video recordings? 

Yes – AWS records many of the sessions and posts to their YouTube channel.

How does AWS re:Invent registration work when I arrive?

There will be registration areas available in multiple venues throughout the conference. There may also be one in the arrival terminal at the airport, but if the line is long, skip it – you’ll wait less to check in at the Venetian or the other venues.

AWS re:Invent Sessions That Caught Our Eyes

See more details and (once registration opens) register through the official AWS re:Invent agenda.

  • AIM302-R – Create a Q&A bot with Amazon Lex and Amazon Alexa – A recent poll showed that 44 percent of customers would rather talk to a chatbot than to a human for customer support. In this workshop, we show you how to deploy a question-and-answer bot using two open-source projects: QnABot and Lex-Web-UI. You get started quickly using Amazon Lex, Amazon Alexa, and Amazon Elasticsearch Service (Amazon ES) to provide a conversational chatbot interface. You enhance this solution using AWS Lambda and integrate it with Amazon Connect.
  • AIM303-R –  Stop guessing: Use AI to understand customer conversations – You don’t need to be a data scientist to build an AI application. In this workshop, we show you how to use AWS AI services to build a serverless application that can help you understand your customers. Analyze call-center recordings with the help of automatic speech recognition (ASR), translation, and natural language processing (NLP). Get hands-on by producing your own call recordings using Amazon Connect. In the last step, you set up a processing pipeline to automate transcription and NLP analysis, and run analytics and visualizations on the results.
  • ARC201 – Comparing serverless and containers – Microservices are a great way to segment your application into well-defined, self-contained units of functionality. Come join us in this chalk talk as we discuss two common architectures for deploying microservices: containers and serverless.
  • ARC204-R – Cost optimizing a workload – In this hands-on session, we guide you through the AWS tools, services, and design decisions for architecting cost-optimized applications. Whether you run cloud-native applications or legacy monolithic applications, we show you tricks and techniques to run your workload at the lowest cost possible. Once the workload is optimized, we set up an enterprise cost-optimization dashboard to measure and report on workload efficiency utilizing Amazon Athena and Amazon QuickSight.
  • ARC303-R – Failing successfully: The AWS approach to resilient design – AWS global infrastructure provides the tools customers need to design resilient and reliable services. In this session, we explore how to get the most out of these tools. We discuss achieving continued stability and availability in the face of impaired dependencies. We also cover AWS tools and best practices you can use to design applications and services that avoid overload.
  • ARC406-R1 – Building multi-region microservices – In this session, participate in a hands-on exercise where you create, verify, and test a serverless solution across multiple regions using AWS Lambda and Amazon DynamoDB global tables.
  • CON210-S – How to run like a startup with enterprise Kubernetes on AWS – Scholastic Corporation reinvented itself with the adoption of a startup mindset and a move to microservices on AWS running in Red Hat OpenShift Container Platform, an enterprise Kubernetes distribution. In this session, you hear about the journey that this large publishing enterprise went on during its transformation. From our discussion of breaking up monolithic applications into microservices, you learn about some of the pitfalls along the road to Kubernetes, containers, and microservices adoption. You also hear about the resulting demonstrable benefits—faster time to market, lower infrastructure costs, happier developers, and improved business performance.
  • DOP204-S – Transforming IT pros to DevOps gurus: How to secure your new tech stacks – Large enterprises are limited by legacy systems. With existing tools, traditional platforms, outdated requirements, and more, IT and engineering teams have difficulty building in a modern way. Hear how Pivvot, a US enterprise, used tools in the AWS Cloud to escape this traditional trap. The Pivvot team learned to scale by adopting a DevOps philosophy to support hundreds of organizations and thousands of customers inside a commercial software company. Learn how building a pipeline-driven cloud-native process with built-in security helps modernize an organization. Culture change is challenging, but with the right approach and a strong tech stack, you can build securely and ship quickly in the AWS Cloud. 
  • DOP206-S – Breaking the monolith with style and speed – Microservices are here to stay, but nearly all of the most successful architectures originate from the classic monolith. The promised land of microservices is filled with treasures like decoupled deploys, scalability, resilience, development velocity, and more. However, the journey there can involve prolonged seasons of pain, suffering, and even regret. This talk is the story of how Stitch Fix used all three pillars of observability to build confidence, accelerate its migration, and collaborate with other teams. Learn about the strategies that Stitch Fix used and how it incorporated logs, metrics, and traces into these strategies. 
  • IOT301-R – Race to generate IoT connected wind energy – In this workshop, learn how to connect a simple wind turbine to AWS IoT Core and work in teams to see which one can generate the most aggregate power in a five-minute race at the end of the workshop. A large fan will be the single source of wind, and energy levels from each turbine will be aggregated between the two teams with a continuously updating dashboard. Participants will learn how to set up an IoT device connection with their own account, enrich the data with a pipeline, and store the data for a dashboard.
  • MGT304 – Automate everything: Options and best practices – You can use an expanding set of services to automate many common management tasks in your AWS environment, including patching, configuration updates, software stack deployments, and more. In this session, we explore how you can use AWS management tools for automation, including the use of self-service runbooks. We discuss the many options available, including AWS CloudFormation, AWS Service Catalog, and AWS Systems Manager.
  • MGT310-R – High-velocity service delivery: Infrastructure as code – Customers today are looking to the cloud to help them evolve, adapt, and innovate faster than ever. In this chalk talk, learn how to use AWS native services to increase your organization’s ability to deliver at high velocity using services like AWS CloudFormation, AWS OpsWorks, and AWS Systems Manager. We talk about best practices to help you provision and manage infrastructure, deploy code, and automate your software-release processes.
  • SVS203-R1 – Build a serverless ride-sharing web application – In this workshop, you deploy a simple web application that lets users request unicorn rides from the Wild Rydes fleet. The application presents users with an HTML-based user interface for indicating the location where they would like to be picked up and interfaces on the backend with a RESTful web service to submit the request and dispatch a nearby unicorn. The application also provides facilities for users to register with the service and log in before requesting rides.
  • SVS309 – Development life cycle for serverless backends – With serverless applications, you can easily get started, experiment, and build prototypes. However, as serverless usage grows and more developers adopt serverless, it can be hard to maintain a good workflow from ideas to production. In this talk, we share inspirations for how to build a good development workflow for serverless applications that allow for fast experimentation while sharing common standards across teams and organizations.

See You There!

Do you have any other tips for planning the perfect AWS re:Invent schedule? Sessions you’re looking forward to? Questions, tips, or swag suggestions? Let us know in the comments. Cheers, and see you there!

More on re:Invent: 2018 recap.

Are AWS Reserved Instances Better Than On-Demand?

AWS Reserved Instances vs. Scheduling On-Demand Instances

We are frequently asked about the impact of instance scheduling on AWS Reserved Instances for EC2 and RDS.  Scheduling On-Demand instances and buying Reserved Instances (RIs) are both useful cost optimization techniques, but they are polar opposites in terms of goals.  RIs are all about getting a better price for RDS or EC2 instances that run all the time. Scheduling is all about reducing costs by turning off instances when they are not in use.

How should a customer choose between buying an AWS Reserved Instance and applying scheduling to the instance?  This is an important question, as usually the savings from scheduling can exceed the savings available from a Reserved Instance.  Before we dive into that though, let’s get familiar with some critical RI nuances that can help you make the decision…and make the most of your RIs.

Note: versions of this article were originally published in 2015 and 2017. It has been completely re-analyzed, rewritten, and updated! 

Where are my Reserved Instances?

First of all, it’s worth clarifying that there is no functional difference between Amazon Reserved Instances and On Demand. It’s all in the billing. <rant> I wish I had a dollar for each time someone asked me (while looking at the AWS Console) “Which ones of these are the Reserved Instances?” </rant>. You cannot be sure which instances are using a reservation until you get your bill.  You can make a good guess, based on what account you are looking at and what reservations you have purchased, but if you are using instance size flexibility with region-based RIs, it would be a pure guess.  An instance reservation is not like a hotel reservation, where you end up with a specific room for the duration of your stay. That would be a Dedicated Host Reservation – a whole other beast with its own set of limitations.

3600 Seconds per Hour

Yep – there are 3600 seconds in an hour.  I did the math. You may ask: why does that matter?  It matters because of two things: per-second billing for EC2, and the fact that AWS RIs are billed in one-hour chunks.  More precisely, RIs are billed in “clock-hour” chunks, where a clock-hour runs from the top of the hour through the final second of that hour – like 04:00:00 to 04:59:59.

If you are running a number of instances that use per-second billing, and they go up or down during a (clock) hour, then multiple instances may be able to leverage the Reserved Instance pricing.

As paraphrased shamelessly from here, if you purchase one m5.large Reserved Instance and run four matching m5.large instances for 15 minutes each (900 seconds each) in the same hour, the total running time is one hour, which consumes the entire Reserved Instance allocation.  This usage pattern is illustrated below.

That said, if multiple matching instances are running at the same time, the Reserved Instance billing benefit is applied to all of the instances, up to a maximum of 3600 seconds in a clock-hour.  On-demand rates are used for any usage after that. This is illustrated below.
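To make the clock-hour accounting concrete, here is a toy Python sketch of the aggregate math (the exact order in which AWS spreads the benefit across concurrently running instances is up to AWS; the point is simply that one RI covers at most 3,600 seconds per clock-hour, with the overflow billed on-demand):

```python
# Toy illustration of the clock-hour accounting described above (not an AWS API
# call): one m5.large RI contributes at most 3,600 covered seconds per clock-hour,
# and any matching per-second-billed usage beyond that is billed on-demand.
RI_SECONDS_PER_CLOCK_HOUR = 3600

# Seconds each matching m5.large instance ran within the same clock-hour --
# e.g., four instances at 15 minutes each, plus one extra 10-minute run.
run_seconds = [900, 900, 900, 900, 600]

remaining_ri = RI_SECONDS_PER_CLOCK_HOUR
on_demand_seconds = 0

for seconds in run_seconds:
    covered = min(seconds, remaining_ri)    # portion covered by the reservation
    remaining_ri -= covered
    on_demand_seconds += seconds - covered  # overflow billed at on-demand rates

print(f"RI-covered seconds: {RI_SECONDS_PER_CLOCK_HOUR - remaining_ri}")  # 3600
print(f"On-demand seconds:  {on_demand_seconds}")                        # 600
```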

By the way, remember that per-second billing only applies to instances running an open-license OS like Amazon Linux or Ubuntu.  

If your instance is running any flavor of Windows, Red Hat Linux, or any pay-by-the-hour image from the AWS Marketplace, that instance is billed by the hour, and any matching reservation will be consumed by the hour, even if the instance is not.  Any overlapping instances will each need their own RI to get RI pricing.

Elastic Reserved Instances

Wait…what?  Isn’t that kind of an oxymoron?  Elastic means it should be able to change with my usage…and aren’t reservations a commitment to usage?  Well, maybe…but some RIs are definitely more elastic (or at least more flexible) than others.  Regional Reserved Instances can leverage “instance size flexibility.”  This is the ability of a reservation to apply up or down to other instance types in the same family.  Regional RIs are applied in priority order from the smallest instance in the region to the largest instance in the region.  This makes these AWS reservations somewhat elastic in that if you buy reservations for smaller instance types than you normally use, those reservations are more likely to be consumed, even if you rightsize an instance or replace one server with another of a different size.

The math of how an RI of one instance type can be applied to another instance type is governed by a “normalization factor”, described here.  Putting the table in that link another way, each instance size maps to a number of normalization units that scales linearly with size: nano = 0.25, micro = 0.5, small = 1, medium = 2, large = 4, xlarge = 8, 2xlarge = 16, 4xlarge = 32, 8xlarge = 64, 12xlarge = 96, 16xlarge = 128, and so on.

Some examples of how we can use this table (the arithmetic is sketched in code after the list):

  • To fully cover one t3.medium with RIs, you would buy eight t3.nano RIs.  If you stop using the t3.medium instance, the same eight t3.nano RIs could also be used to cover:
      • Two t3.small instances
      • One t3.small and two t3.micro
      • Half of a t3.large
  • To fully cover an m5.12xlarge would require 24 m5.large RIs.  Those 24 m5.large RIs could also be used for:
      • One m5.8xlarge and one m5.4xlarge
      • One m5.8xlarge and two m5.2xlarge
      • One m5.8xlarge, one m5.2xlarge, one m5.xlarge, and two m5.large
      • Three quarters of an m5.16xlarge
      • Etc.
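Here is a minimal sketch of that normalization arithmetic. The factors in the dictionary are the per-size values documented by AWS; the helper function and the example instance types are just for illustration:

```python
# Sketch of the normalization-factor arithmetic behind the examples above.
# Factors are the documented per-size values (nano = 0.25 is the base unit).
NORMALIZATION = {
    "nano": 0.25, "micro": 0.5, "small": 1, "medium": 2,
    "large": 4, "xlarge": 8, "2xlarge": 16, "4xlarge": 32,
    "8xlarge": 64, "12xlarge": 96, "16xlarge": 128, "24xlarge": 192,
}

def units(instance_type: str, count: float = 1) -> float:
    """Normalized units consumed by `count` instances of e.g. 'm5.large'."""
    size = instance_type.split(".")[1]
    return NORMALIZATION[size] * count

# One m5.12xlarge needs 96 units -- exactly 24 m5.large RIs.
assert units("m5.12xlarge") == units("m5.large", 24) == 96

# The same 24 m5.large RIs also cover one m5.8xlarge plus two m5.2xlarge.
assert units("m5.8xlarge") + units("m5.2xlarge", 2) == 96

# Eight t3.nano RIs fully cover one t3.medium (or two t3.small, etc.).
assert units("t3.nano", 8) == units("t3.medium") == 2
```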

The lesson here is that it is easiest to manage reserved instances if you buy the smallest instance size available within an instance family.  In fact, as shown below, this is the exact type of recommendation you are likely to receive from the AWS Cost Explorer.

So why do we mention AWS flexible reserved instances in relation to schedules?  Recall that the allocation of RIs to instances is done by AWS each second within a clock-hour.  Each reservation is consumed by instances in its family until all 3600 seconds have been allocated.  You do not necessarily have to keep an instance up for the whole hour to use the reservation. You can still consume the whole RI, even if you are starting and stopping instances by schedule or within an auto-scaling group over the course of an hour.

Something so cool cannot be without limitations and gotchas. Here are a few we’ve noticed:

  • Instance size flexibility is ONLY available for certain instance platforms.
    • For AWS EC2 reserved instances, only open-license Linux instances support flexible RIs.  It is NOT APPLICABLE to any flavor of Windows or other licensed software and OS.
    • For AWS RDS reserved instances, instance size flexibility is available for the MySQL, MariaDB, PostgreSQL, and Amazon Aurora database engines, as well as the “Bring your own license” (BYOL) edition of Oracle. [That said: it is not clear as of this writing if the recent RDS per-second billing change also applies to RDS instance size flexibility. The billing pages still state that RDS RIs are allocated to instances on a per-hour basis.]
  • Flexibility is only available within the same instance family.
  • Flexibility is only available for Regional RIs.

How does AWS Reserved Instance Pricing Work?

As usual, there are a number of different decisions that need to be made before this question can be answered.  AWS RIs are available with two different levels of flexibility and several different terms of purchase. 

In terms of flexibility, RIs can either be purchased as “standard” or “convertible.” 

  • AWS Convertible Reserved Instances – allow changing the assigned availability zone, instance size/families, operating system, tenancy, and networking type. There is no fee for changing the reservation attributes, but you can only convert the RI into a new set of attributes with equal or greater cost than the original.  If the cost is greater, you will have to pay the difference.
  • Standard Reserved Instances – once purchased, cannot be changed. They can be sold on the AWS Reserved Instance Marketplace.

Both AWS Convertible RIs and Standard RIs offer:

  • Regional vs. Availability Zone assignment scope.  Regional scope offers a lot more flexibility for where your instances can reside while still leveraging RI pricing. Availability Zone scope has the bonus of a guaranteed capacity reservation (though, like RI pricing,  capacity reservation cannot be assigned to a specific instance).
  • For open-license Linux, instance size flexibility, as described earlier.

There are a few options for terms of purchase that affect AWS RI pricing. RIs can be purchased for 1 or 3-year terms, and with no upfront, partial upfront, or all upfront payment available for each. 

To get a better idea of the difference in the amount you’ll end up paying with each purchasing option, let’s take a look at some pricing scenarios for the general purpose m5.large instance in us-east-1 (US Virginia).  When you look at these on the AWS RI price list, they are usually grouped by year terms and standard vs. convertible.  Since you can already see it that way on AWS, and because time is the greatest unknown, I am going to give a bit of a different view, keeping the years together on the same comparison chart.

One-Year Term RIs

To  benefit from the discount provided by a  reservation while keeping contract terms to a minimum, look at the 1-year RIs:

You can see from this that within the Convertible and Standard tiers, there is not a lot of difference between the upfront payment options.  Your upfront cash decision here can depend on your accounting process, perhaps accounting for the difference between CapEx and OpEx (Capital Expenditures and Operating Expenditures), or on your current and prospective cash flow.  The pricing for the Convertible and Standard options is so similar that you should seriously consider whether the flexibility of the AWS Convertible Reserved Instance option outweighs its minor extra cost compared to Standard.

Three-Year Term RIs

If you are confident in your three-year usage plans, look at the 3-year RIs, shown here as the cost per year:

There is a bit more visible progression between the options here, and the cost savings difference between the 1-year graph and the 3-year graph is clearly visible.  Still, the 1-year vs. 3-year decision may again come down to accounting practices.

Note that these graphs are just for one instance type in one location in about the middle of the overall performance pack.  They do not express a true profile of all instance types/families in all locations, and you should do your own comparison before buying any RIs.  That said….

AWS Reserved Instances vs. On Demand

Scheduling Break-even

FINALLY getting into the core question.  Is it better to shut an instance down or buy a reserved instance?  Let’s do the math and compare AWS On Demand vs. Reserved, with on/off scheduling applied to the On Demand option.  Given all the different pricing options it can be a bit difficult to decide how to compare these. Let’s go with the no-upfront RI options and see where the RIs break even with scheduling.

AWS On Demand vs. Reserved Instance Purchase Option Comparison

Since the hours may be hard to judge at this scale, here are the break-even hours in a table (hours rounded up):

So what do these hours mean?  

By way of comparison, here are a few common schedules and their hours in ParkMyCloud (with weeks converted to months on the basis of 52 weeks/12 months = 4.33 weeks/month):

  • Running 8 AM – 5 PM, Weekdays = 195 hours/month (nominally a 73% savings)
  • Running 7 AM – 7 PM, Weekdays = 260 hours/month (64% savings)
  • Running 5 AM – 9 PM, Weekdays = 347 hours/month (53% savings)
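Here is a quick sketch of the arithmetic behind those figures: monthly running hours for a weekdays-only schedule and the implied savings versus leaving an on-demand instance running 24×7 (730 hours/month):

```python
# Quick check of the schedule math above: running hours per month for a
# weekdays-only schedule, and the implied savings vs. running 24x7 on-demand.
WEEKS_PER_MONTH = 52 / 12     # ~4.33, same convention as above
HOURS_PER_MONTH = 730         # AWS's usual monthly-hours convention

def weekday_schedule(start_hour, stop_hour):
    """Return (running hours/month, % saved vs. always-on on-demand)."""
    hours_per_month = (stop_hour - start_hour) * 5 * WEEKS_PER_MONTH
    savings_pct = (1 - hours_per_month / HOURS_PER_MONTH) * 100
    return hours_per_month, savings_pct

for start, stop in [(8, 17), (7, 19), (5, 21)]:
    hours, saved = weekday_schedule(start, stop)
    print(f"{start:02d}:00-{stop:02d}:00 weekdays: {hours:.0f} h/month, ~{saved:.0f}% saved")
```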

Of the above, 7 AM – 7 PM Weekdays is the second most popular schedule in our entire system, and at 260 hours, is easily more cost-effective than the least expensive RI.

In that final schedule above, you need to run 16 hours per day, Monday-Friday before a 3-year RI starts to look like a better deal.  If you have other matching instances on another schedule in that same region/AZ, you could then buy the RI anyway, and save even more money.

Scheduled Reserved Instances

“But wait!” you say…  “There are these things called Scheduled Reserved Instances – would they not offer me the best of both worlds?”  Maybe, but note that to support Scheduled RIs, AWS has essentially set aside hardware for use as Scheduled RIs – this is a finite set of dedicated resources.  On paper, a Scheduled RI can definitely save you a bit more money, but let’s look at the limitations.  

  • Instances intended to consume a Scheduled Reservation must be launched during their scheduled time periods using a launch configuration that matches the reservation. Let’s break that down further:
      • They are launched with “a launch configuration” and terminated by EC2 three minutes before the end of the schedule window.  You cannot stop/start/suspend them, and so they cannot maintain state internally unless you do some EBS volume tweaks after launch.
      • You cannot launch a scheduled RI outside of its scheduled window.  If you try, that instance will not use the Reservation, as you would not expect it to magically jump over onto the dedicated hardware.
  • Only three regions and a few older-generation instance families are supported.
  • Limited quantities are available (as of this writing, a search for an m4.large in us-east-1 with a Monday-Friday 7 AM – 7 PM scheduled listed only two reservations being available).
  • There is a minimum reservation size of 1,200 hours/year, 100 hours/month, 24 hours/week, or 4 hours/day.  (So you are not going to be able to use a Scheduled RI to run your database backup server for just a couple hours on Sunday mornings [I just tried…dang it.])
  • Scheduled RI purchases cannot be cancelled, modified or sold on the RI Marketplace. No buyer’s remorse allowed here…
  • And finally…  AWS describes the cost savings of Scheduled RIs over on-demand as being in the 5-10% range.  Again – you have to be really sure this workload is going to be needed on this schedule, and the workload is fully utilized.  If you find this server is ever idle or under-utilized, you may have lost an opportunity to save 50% or more by simply scheduling it with ParkMyCloud and then rightsizing it.

When launched in 2016, Scheduled RIs were only available in “US East (N. Virginia), US West (Oregon), and Europe (Ireland) regions, with support for the C3, C4, M4, and R3 instance types.”  Given that this has not changed, I am guessing the feature has not taken off as AWS had hoped.  That said, if you have something that needs to run for 4 hours per day, daily, and can use an older instance family in a supported region…go for it!  This may also be a good place to leverage AWS Spot instance types.

Should You Buy Reserved Instances?

Deciding between different RI options starts with knowledge of how stable your usage is going to be in the coming years.  Most organizations have a good understanding of at least a baseline level of usage, and some actually construct portfolios of reserved instance purchases, much like how you might balance a stock portfolio.  For example, 3-year Standard Reserved Instances are for those applications that are stable and unlikely to change. Balance those with some 3-year Convertible Reserved Instances to help save money in the longer term but allow for some flexibility in use.  Use 1-year RIs to account for shorter-term usage volatility, with again maybe a balance between Standard and Convertible where needed for those loads that might shift over the short term.  

Just like with stocks, such portfolios require active management and awareness of what is being used and what is coming, in both the short and long term.  Companies with this type of RI management typically have dedicated cloud cost management staff, though RI management itself does not need to be a full-time job.  One of our larger customers tells us they do a quarterly review of their RI portfolio, starting with using ParkMyCloud to schedule any instances they can, and then rebalancing their RIs and investing in more as needed.  This is a best practice for any size company.

For production workloads that run 24×7 long-term, you REALLY should be buying RIs. For production workloads with an unknown future, or for non-production workloads (dev, test, QA, training, etc.), the answer is “probably not”.  Be careful though, as often RIs are seen as a quick fix to save 30-50% on the cloud bill, but then other ways are found to save even more, and you end up with underutilized RIs. Before you buy a Standard RI, make sure you have evaluated your resources for underutilization and overprovisioning.  Also consider the following cost-saving options:

  • Choose smaller instance sizes – each size down can save 50%. Use ParkMyCloud Rightsizing to first make sure your resources are appropriately sized for their load before committing to an RI purchase. 
  • For open-license Linux instances, buy RIs for smaller instance types in the same instance family in order to leverage instance size flexibility (this allows the RIs to be allocated more easily against other resources, even if those resources run for only short periods of time).
  • Use Spot Instances and Autoscaling when appropriate for short-term/volatile workloads.
  • Schedule on/off times for on-demand instances.  Use ParkMyCloud SmartParking to automatically create schedules for when the resources are not being used, while still giving your staff the flexibility to easily restart them if needed outside the normal hours. Even a generous 7 AM – 7 PM workday schedule will immediately save about 64%!

The worst thing one can do is…nothing.  Set aside some time each quarter to review EC2 and RDS instance utilization.  Rightsize and schedule what you can – try a free trial of ParkMyCloud to see how we can help you with that.  After that, anything left running 24×7 is a candidate for an RI.  If you are risk-averse or time-crunched, stick with 1-year convertible RIs to save at least 36% on at least some of your resources.  Then take all that money you saved and put it into other things that will bring greater value to your business.

AWS Neptune Overview – Amazon’s Graph Database Service

AWS Neptune is AWS’s managed graph database service, offered to give customers an option to easily build and run applications that work with highly connected datasets. It was first announced at AWS re:Invent 2017, and made generally available in May 2018.

Graph databases like AWS Neptune were created to address the limitations of relational databases, and offer an efficient way to work with complex data. 

What is a graph database?

A graph database is a database optimized to store and process highly connected data – in short, it’s about relationships. The data structure for these databases consists of vertices (also called nodes) and directed links between them called edges.

Use cases for such highly-connected data include social networking, restaurant recommendations, retail fraud detection, knowledge graphs, life sciences, and network & IT ops. For a restaurant recommendations use case, for example, you may be interested in the relationships between various users, where those users live, what types of restaurants those users like, where the restaurants are located, what sort of cuisine they serve, and more. With a graph database, you can use the relationships between these data points to provide contextual restaurant recommendations to users.

Details on the AWS Neptune Offering

AWS Neptune Pricing 

The AWS Neptune cost calculation depends on a few factors:

  • On-Demand instance pricing – you’ll need to pay for the compute instances needed for read-write workloads as well as Amazon Neptune replicas. These follow the general pricing for AWS On Demand instances.
  • Database Storage & I/Os – storage is also paid per usage with no upfront commitments. Storage is billed in per GB-month increments and I/Os are billed in per million request increments. 
  • Backup storage – you are charged for the storage associated with your automated database backups and database cluster snapshots. As per usual, increasing the retention period will cost more. 
  • Data transfer – you are charged per GB for data transferred in and out of AWS Neptune.

For this, as with most AWS services, pricing is confusing and difficult to predict. 
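To get a rough feel for how those line items combine, here is a back-of-the-envelope sketch. The rates below are placeholders (roughly in line with published us-east-1 pricing at the time of writing); check the current Neptune pricing page before relying on them:

```python
# Back-of-the-envelope Neptune monthly estimate. Rates are placeholders --
# substitute current values from the AWS Neptune pricing page.
rates = {
    "instance_hour":   0.348,  # $/hr per db.r5.large-class instance (placeholder)
    "storage_gb":      0.10,   # $/GB-month of consumed storage (placeholder)
    "io_per_million":  0.20,   # $/million I/O requests (placeholder)
    "backup_gb":       0.021,  # $/GB-month of backup storage (placeholder)
    "transfer_out_gb": 0.09,   # $/GB transferred out to the internet (placeholder)
}

usage = {
    "instance_hours":  730 * 2,  # a primary plus one read replica, all month
    "storage_gb":      200,
    "io_millions":     50,
    "backup_gb":       100,
    "transfer_out_gb": 25,
}

monthly = (
    usage["instance_hours"] * rates["instance_hour"]
    + usage["storage_gb"] * rates["storage_gb"]
    + usage["io_millions"] * rates["io_per_million"]
    + usage["backup_gb"] * rates["backup_gb"]
    + usage["transfer_out_gb"] * rates["transfer_out_gb"]
)
print(f"Estimated monthly cost: ${monthly:,.2f}")  # ~$542 with these placeholders
```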

AWS Neptune Use Cases

Use cases for the AWS graph database and other similar offerings include:

  • Machine learning, such as intelligent image recognition, speech recognition, intelligent chatbots, and recommendation engines.
  • Social networking
  • Fraud detection – flexibility at scale makes graph databases useful to work with the huge amount of transactional data needed to detect fraud. 
  • Regulatory compliance – ever more important as HIPAA, GDPR, and other regulations place strict requirements on the way organizations use data about customers.
  • Knowledge graphs – such as advanced results for keyword searches and complex content searches.
  • Life sciences – graph databases are uniquely suited to store models of disease and gene interactions, protein patterns, chemical compounds, and more.
  • Network/IT Operations to keep networks secure, including identity and access management, detection of malicious file paths, and more. 
  • Supply chain transparency – graph databases are great for modeling complex supply chains that span the globe. 

Tired of SQL?

If you’re tired of SQL, AWS Neptune may be for you. A graph database is fundamentally different from a relational SQL database. There are no tables, columns, or rows – it feels more like a NoSQL database. There are only two data types: vertices and edges, both of which have properties stored as key-value pairs.

AWS Neptune is fully managed, which means that database management tasks like hardware provisioning, software patching, setup, configuration, and backups are taken care of for you.

It’s also highly available, replicating data across multiple availability zones. In its architecture and availability, it is very similar to Aurora, the relational database from Amazon.

Neptune supports the Property Graph model and W3C’s RDF. You can use these to build your own web of the data sets you care about, and build networks across the data sets in the way that makes sense for your data, not with arbitrary presets. You can do this using each graph model’s query language: Apache TinkerPop Gremlin for Property Graph and SPARQL for RDF.
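To give a flavor of what querying Neptune looks like, here is a minimal sketch using the open-source gremlinpython driver. The endpoint, labels, and property names are purely illustrative assumptions, not from an actual cluster:

```python
# Minimal Gremlin traversal against a (placeholder) Neptune endpoint using the
# open-source gremlinpython driver. Labels and properties are illustrative only.
from gremlin_python.driver import client

gremlin = client.Client(
    "wss://your-neptune-endpoint:8182/gremlin",  # placeholder cluster endpoint
    "g",
)

# Restaurants liked by friends of a given user -- the relationship-driven
# recommendation pattern described above.
query = (
    "g.V().has('user', 'name', 'alice')"
    ".out('friend').out('likes').hasLabel('restaurant')"
    ".values('name').dedup().limit(10)"
)

try:
    results = gremlin.submit(query).all().result()
    print(results)
finally:
    gremlin.close()
```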

AWS Neptune Visualization is not built in natively. However, data can be visualized with Amazon SageMaker Jupyter notebooks, or third-party options like Metaphactory, Tom Sawyer Software, Cambridge Intelligence/Keylines, and Arcade. 

Other Graph Database Options

There’s certainly competition in the market for other graph database solutions. Here are a few that are frequently mentioned. 

AWS Neptune vs. Neo4j

Neo4j is a graph database that has been rated most popular by mindshare and adoption. Version 1.0 was released in February 2010. Unlike AWS Neptune, Neo4j is open source. Neo4j uses the language Cypher, which it originally developed. While there are several languages available in the graph database market, Cypher is widely known by now. 

Neo4j, unlike AWS Neptune, does actually come with graph visualization, which is a huge plus for working with this kind of data, though as mentioned above, there are several ways to visualize your Neptune data.

Other

Other graph databases include: AllegroGraph, AnzoGraph, ArangoDB, DataStax Enterprise Graph, InfiniteGraph, JanusGraph, MarkLogic, Microsoft SQL Server 2017, OpenLink Virtuoso, Oracle Spatial and Graph (part of Oracle Database), OrientDB, RedisGraph, SAP HANA, Sparksee, Sqrrl Enterprise, and Teradata Aster.

AWS Neptune – Getting Started

If you’re interested in the new service, you can check out more about AWS Neptune. As you get started, the AWS Neptune docs are a great resource. Or, check out some AWS Neptune tutorials on YouTube.

Once you’re on board, make sure you have cost control as a priority. ParkMyCloud can now park Neptune databases to ensure you’re only paying for what you’re actually using. Try it out for yourself!

New: RightSizing Now Generally Available in ParkMyCloud for Data-Driven Cloud Optimization

Exciting news: RightSizing is now generally available in ParkMyCloud! You can now use this method for automated cost optimization alongside scheduling to achieve an optimized cloud bill in AWS, Azure, and Google Cloud. 

How it Works

When you RightSize an instance, you find the optimal virtual machine size and type for its workload. 

Why is this necessary? Cloud providers offer a myriad of instance type options, which can make it difficult to select the right option for the needs of each and every one of your instances. Additionally, users often select the largest size and compute power available, whether it’s because they don’t know their workload needs in advance, don’t see cost as their problem, or “just in case”.

In fact, our analysis of instances being managed in ParkMyCloud showed that 95% of instances were operating at less than 50% average CPU, which means they are oversized and wasting money.

Now with ParkMyCloud’s RightSizing capability, you can quickly and easily – even automatically – resolve these sizing discrepancies to save money. ParkMyCloud uses your actual usage data to make these recommendations, and provides three recommendation options, which can include size changes, family/type changes, and modernization changes. Users can choose to accept these recommendations manually or schedule the changes to occur at a later date.

How Much You Can Save

A single instance change can save 50% or more of the cost. In the example shown here, ParkMyCloud recommends three possible changes for this instance, which would save 40-68% of the cost. 

At scale, the savings potential can be dramatic. For example, one enterprise customer who beta-tested RightSizing found that their RightSizing recommendations added up to $82,775.60 in savings – an average of more than $90 per month, or more than $1,000 per year, for every instance in their environment.

How to Get Started

Are you already using ParkMyCloud? If not, go ahead and register for a free trial. You’ll have full access for 14 days to try out ParkMyCloud in your own environment – including RightSizing.

If you already use ParkMyCloud, you’ll need to make sure you’re subscribed to the Pro or Enterprise tier to have access to this advanced feature. 

Now it’s time to RightSize! Watch this video to see how you can get started in just 90 seconds: 

Happy savings!

AWS Resource Optimization Recommendations: Good Enough or Not Quite There?

Earlier this week, AWS announced the launch of AWS resource optimization recommendations within their cost management portal. AWS claims that this will “identify opportunities for cost efficiency and act on them by terminating idle instances and rightsizing under-used instances.” Here’s what that actually means, and what functionality AWS still does not provide that users need in order to automate cost control.

AWS Recommendations Overview

AWS Recommendations are an enhancement to the existing cost optimization functionality covered by AWS Cost Explorer and AWS Trusted Advisor. Cost Explorer allows users to examine usage patterns over time. Trusted Advisor alerts users about resources with low utilization. These new recommendations actually suggest instances that may be a better fit. 

AWS Resource Optimization provides two types of recommendations for EC2 instances:

    • Terminate idle instances
    • Rightsize underutilized instances

These recommendations are generated based on 14 days of usage data. The tool considers “idle” instances to be those with peak CPU utilization below 1%, and “underutilized” instances to be those with maximum CPU utilization between 1% and 40%.
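For context, here is a rough sketch of how you could check an instance against those same thresholds yourself using CloudWatch via boto3 (the instance ID is a placeholder, and the 1%/40% cutoffs simply mirror the ones described above):

```python
# Rough reproduction of the 1% / 40% peak-CPU thresholds above using CloudWatch.
# The instance ID is a placeholder; requires boto3 and CloudWatch read access.
import datetime
import boto3

cloudwatch = boto3.client("cloudwatch")
instance_id = "i-0123456789abcdef0"  # placeholder

end = datetime.datetime.utcnow()
start = end - datetime.timedelta(days=14)

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
    StartTime=start,
    EndTime=end,
    Period=3600,                # hourly datapoints over the 14-day window
    Statistics=["Maximum"],
)

peak_cpu = max((dp["Maximum"] for dp in stats["Datapoints"]), default=0.0)

if peak_cpu < 1:
    print(f"{instance_id}: peak CPU {peak_cpu:.2f}% -> 'idle' by this definition")
elif peak_cpu <= 40:
    print(f"{instance_id}: peak CPU {peak_cpu:.2f}% -> 'underutilized'")
else:
    print(f"{instance_id}: peak CPU {peak_cpu:.2f}% -> above the rightsizing threshold")
```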

While any native functionality to control costs is certainly an improvement, users often express that they wish AWS would just have less complex billing in the first place. 

AWS Resource Optimization Tool vs. ParkMyCloud

ParkMyCloud offers cloud cost optimization through RightSizing for AWS, as well as Azure and Google Cloud, in addition to our automated scheduling to shut down resources when they are idle. Note that AWS’s new functionality does not include on/off schedule recommendations.

Here’s how the new AWS resource optimization tool stacks up against ParkMyCloud.

Types of Recommendations Generated

The AWS Resource Optimization tool will provide up to three recommendations for size changes within the same instance family, with the most conservative recommendation listed as the primary recommendation. Putting it another way, the top recommendation will be one size down from the current instance, the second recommendation will be two sizes down, etc. ParkMyCloud recommends the optimal instance type and size for the workload, regardless of the existing instance’s configuration. This includes instance modernization recommendations, which AWS does not offer.

The AWS tool generates recommendations for EC2 instances only, while ParkMyCloud recommends scheduling and RightSizing recommendations for EC2 and RDS. AWS also does not support GPU-based instances in its recommendations, while ParkMyCloud does. 

AWS customers must explicitly enable generation of recommendations in the AWS Cost Management tools. In ParkMyCloud, recommendations are generated automatically (with some access limitations based on subscription tier).  

ParkMyCloud allows you to manage resources across a multi-cloud environment including AWS, Azure, Google Cloud, and Alibaba Cloud. AWS’s tool, of course, only allows you to manage AWS resources.

Recommendation Quality

When you start to dig in, you’ll notice several limitations of the recommendations provided by AWS. The recommendations are based on utilization data from the last 14 days, a range that is not configurable. ParkMyCloud’s recommendations, on the other hand, can be based on a range of 1-24 weeks of data, configurable by the customer per team, cloud provider, and resource type.

Another important aspect of “optimization” that AWS does not allow the user to configure is the utilization thresholds. AWS assumes that any instance at less than 1% CPU utilization is idle, and that any instance between 1-40% CPU utilization is undersized. While these are reasonable rules of thumb, users need the ability to customize such thresholds to best suit their own environment and use cases. AWS also takes an “all or nothing” approach – it recommends that any instance detected as idle simply be terminated. ParkMyCloud does not assume that low utilization means the instance should be terminated, but suggests sizing and/or scheduling solutions with specificity to the utilization patterns.  ParkMyCloud allows users to select between Conservative, Balanced, or Aggressive schedule recommendations with customizable thresholds.

AWS also only evaluates “maximum CPU utilization” to determine idle resources. However, for resource schedule recommendations, ParkMyCloud uses both Peak and Average CPU plus network utilization for all instances, and memory utilization for instances with the CloudWatch agent installed. For sizing recommendations, ParkMyCloud uses maximum Average CPU plus memory utilization data if available. 

Perhaps the most dangerous aspect of the AWS recommendations is that they will recommend an instance size change based on CPU alone, even when no memory metrics are available. Without cross-family recommendations, this means each size down typically cuts the memory in half.  ParkMyCloud Rightsizing Recommendations do not assume this is OK. In the absence of memory metrics, we make downsizing recommendations that keep memory constant. For a concrete example of this, here is an AWS recommendation to downsize from m5.xlarge to m5.large, cutting both CPU and memory, and resulting in a net savings of $60 per month.

In contrast, here is the ParkMyCloud Rightsizing Recommendation for the same instance:

You can see that while the AWS recommendation can save $60 per month by downsizing from m5.xlarge to m5.large, the top ParkMyCloud recommendation saves a very similar $57.67 by recommending a transition from m5.xlarge to r5a.large, keeping memory constant. While the savings differ by $2.33, this is a far less risky transition and probably worth the difference. In both cases, of course, memory data from the CloudWatch Agent would likely result in better recommendations.
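For reference, here is a small sketch comparing the published vCPU and memory specs of the three instance types involved; it shows why the cross-family move keeps memory constant while the same-family downsize halves it:

```python
# Published vCPU / memory (GiB) specs for the instance types discussed above.
SPECS = {
    "m5.xlarge": (4, 16),  # current instance
    "m5.large":  (2, 8),   # AWS same-family downsize: memory is halved
    "r5a.large": (2, 16),  # cross-family alternative: memory stays constant
}

current_vcpu, current_mem = SPECS["m5.xlarge"]
for target in ("m5.large", "r5a.large"):
    vcpu, mem = SPECS[target]
    print(f"m5.xlarge -> {target}: vCPU {current_vcpu} -> {vcpu}, "
          f"memory {current_mem} GiB -> {mem} GiB")
```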

As shown in the AWS recommendation above, AWS provides the “RI hours” for the preceding 14 days, giving better visibility into the impact of resizing on your reserved instance usage, and uses this data for the cost and savings calculations. ParkMyCloud does not yet provide correlation of the size to RI usage, though that is planned for a future release.  That said, the AWS documentation also states “Rightsizing recommendations don’t capture second-order effects of rightsizing, such as the resulting RI hour’s availability and how they will apply to other instances. Potential savings based on reallocation of the RI hours aren’t included in the calculation.”  So the RI visibility on the AWS side has minimal impact on the quality of their recommendations.  

If the user is viewing the AWS recommendation from within the same account as the target EC2 instance, a “Go to the Amazon EC2 Console” button appears on the recommendation details, but it leads to the EC2 console for whatever your last-used region was, without an automatic filter for the specific instance ID. This means you need to navigate to the right region yourself (perhaps also requiring a new console login if the recommendation is for a different account in the Organization), and then find the instance to see the details. ParkMyCloud provides better ease-of-use in that you can jump directly from the recommendation into the instance details, regardless of your AWS Organization account structure.  ParkMyCloud: 1 click. AWS: at least five, plus copy/paste of the instance ID, and possibly a login.

ParkMyCloud also shows utilization data for the recommendation below the recommendation text, giving excellent context. AWS again requires navigating to the right account, then to EC2 and the right region, or to CloudWatch and the right metrics using the instance ID.

AWS Resource Optimization also ignores instances that have not been run for the past three days. ParkMyCloud takes this lack of utilization into consideration and does not discard these instances from recommendations. 

AWS regenerates recommendations every 24 hours. ParkMyCloud regenerates recommendations based on the calculation window set by the customer. 

Automation & Ease of Use

While AWS’s new recommendations are generated automatically, they all must be applied manually. ParkMyCloud allows users to accept and apply scheduling recommendations automatically, via a Policy Engine based on resource tagging and other criteria. RightSizing changes can be “applied now”, or scheduled to occur in the future, such as during a designated maintenance window. 

There is also the question of visibility and access across team members. In AWS, users will need access to billing, which most users will not have. In ParkMyCloud, Team Leads have access to view and execute recommendations for resources assigned to their respective teams. Additionally, recommendations can be easily exported, so business units or teams can share and review recommendations before they’re accepted if required by their management process. 

AWS’s management console and user interface are often cited as confusing and difficult to use, a trend that has unfortunately carried forward to this feature. On the other hand, ParkMyCloud makes resource management straightforward with a user-friendly UI. 

Get Started

Want to see what ParkMyCloud will recommend for your environment? Try it out with a free trial, which gives you 14-day access to our entire feature set, and you can see what cost optimization recommendations we have for you.

VMware Cloud on AWS: A Hybrid Cloud Midpoint

VMware Cloud on AWS is an integrated hybrid cloud offering jointly developed by AWS and VMware. It’s targeted at enterprises looking to migrate on-premises vSphere-based workloads to the public cloud, and it provides access to native AWS services.

Overview of VMware Cloud on AWS 

VMware Cloud on AWS provides an integrated hybrid cloud environment, allowing you to maintain a consistent infrastructure between the vSphere environment in your on-prem data center and the vSphere Software-Defined Data Center (SDDC) on AWS. It also provides a unified view and resource management of your on-prem data center and VMware SDDC on AWS with a single console. 

Digital transformation continues to drive businesses to the cloud to stay competitive. But integrating public cloud with existing private cloud infrastructure means reconciling many technical processes and skill differences between on-prem and cloud environments before the two can work together. This combined offering makes it easier for those familiar with VMware to move into the public cloud without having to rewrite applications or modify operating models.

One reason this offering is attractive to customers is that it provides optimized access to native AWS services including compute, database, analytics, IoT, AI/ML, security, mobile, resource deployment, and application services.

Another reason is that, with automatic scaling and load balancing, VMware Cloud on AWS can adapt to changing business needs across global regions. It is also positioned as a cost-effective solution that reduces upfront investment, with no application re-factoring or re-architecting needed when migrating. We’ll take a look at the pricing it offers for on-demand and subscription models, but first, let’s see what VMware Cloud on AWS can do for the enterprise.

Use Cases for VMware Cloud on AWS 

Accelerated and Simplified Data Center Migration

VMware Cloud on AWS claims to accelerate and simplify the migration process for businesses by reducing migration efforts and complexity between on-prem environments and the cloud. Once in the cloud, users can leverage VMware and AWS services to modernize applications and run mission-critical applications quickly with VMware availability and performance combined with the elastic scale of AWS.

Extend the Data Center to the Cloud with Your Existing Skillset

This offering lets users who are used to VMware keep a consistent and familiar environment on the cloud. Since VMware Cloud on AWS doesn’t require re-tooling or re-educating, IT teams can continue to deliver consistently on vSphere-based infrastructure and operations that are already implemented in existing on-prem data centers. 

Add a Robust Disaster Recovery Service to Your Environment

One offering available is VMware Site Recovery: on-demand disaster recovery as a service, optimized for VMware Cloud on AWS to reduce risk without the need to maintain a secondary on-prem site. You can securely replicate workloads to VMware Cloud on AWS so you can spin them up on-demand if disaster strikes. 

Flexible Dev/Test Environment

You can use VMware SDDC-consistent dev/test environments that can integrate with modern CI/CD automation tools and access native AWS services seamlessly. You can spin up an entire VMware SDDC in under two hours and scale host capacity in a few minutes.

VMware Cloud on AWS Cost Compared

So, how does the pricing shake out?  Hosts can be purchased on-demand or as a 1-year or 3-year subscription. If you choose on-demand pricing, you’ll pay for the physical host by the hour that the host is active, with no upfront cost. A long-term subscription is set to provide up to 50% cost savings over an equivalent period compared to on-demand service, but you pay the costs upfront. It’s a similar idea to AWS Reserved Instances, which may or may not be worth the cost.

Depending on the use case, pricing is similar to standard AWS pricing. See how it compares in price with standard AWS or estimate your costs with the pricing estimator. 

Top Tips for Using VMware Cloud on AWS

VMware Cloud on AWS is a good hybrid cloud option for those who want to stay in the VMware ecosystem while dipping their toe in AWS. Here are our top tips for using this offering:

  • Estimate prices in advance: One of the main reasons you want to estimate your pricing before committing to a subscription is to avoid overspend. Idle and overprovisioned resources you are not actually using result in wasted cloud spend, so make sure you’re not oversizing or spending money on cloud resources that should be turned off. 
  • Educate stakeholders on the fact that this allows you to bridge on-premises infrastructure and public cloud without disruption.
  • Consider whether jumping straight to the cloud is possible for some workloads – many companies start with dev/test. If so, you may be able to skip this intermediary step.