How to Create Your 2019 AWS re:Invent Schedule

It’s time to plan your 2019 AWS re:Invent schedule! This will be our team’s fifth trek to the AWS conference in Las Vegas, so we’ve put together some tips for planning out your conference experience.

First up, if you have not yet registered for re:Invent, do that now! Tickets have sold out in the past, so don’t wait. 

Choose Your Sessions in Advance

The key to a great AWS re:Invent schedule is to plan ahead, and the essential part of that planning is registering for sessions in advance. Reserved seating goes live on October 15 at 10:00 AM PDT. Mark your calendar and plan to select a few key sessions immediately – seating is competitive and fills up quickly!

What you can get started with today is reading through the re:Invent agenda and, especially, the immense event catalog. Note the sessions you’re interested in. You may want to use one of these schedule planning tools for a calendar view, better filters, and email notifications of new sessions.

When planning, here are some tips to keep in mind:

  • Focus – what do you most hope to gain at re:Invent? You can sort sessions based on subject areas and industries – would a “focus path” help you gain more out of your experience?
  • Value of In-Person vs. Session Videos – Many sessions will be posted online afterward, so prioritize sessions with an element that is more valuable in person – chalk talks, workshops, and other interactive formats. You can catch up on anything you miss via the videos. This can put you more at ease and let you have some fun while in Vegas.
  • Sessions will be repeated at the 2019 event – no doubt in response to complaints about full sessions and long wait times in 2018, AWS will be repeating its most popular sessions several times across venues this year. There will also be overflow rooms for popular sessions and more late-night session options than in previous years.
  • Travel time – This won’t be the first or the last time you hear this, but it’s worth saying again: the re:Invent campus is big. HUGE. Plan your schedule accordingly, with as few trips up and down The Strip as possible. If multiple sessions you’re interested in run at the same time, prioritize the one with the least travel time. You should also plan to arrive at sessions early.

Once dates, times, and locations have been announced for sessions, we recommend putting them into your calendar for a clean visual of your day and automatic reminders. Once it’s available, you’ll be able to view your schedule in the AWS re:Invent app, along with maps and more.

Scroll down to the bottom of this post for a list of sessions that looked particularly interesting to our team. 

Set Aside Time for the Expo Hall

Make sure you set aside time to visit the expo hall! Actually, there are two expos – the main one at The Venetian and another at the Aria.

The Welcome Reception from 4-7 PM on Monday is a great time to visit the expo and kick off your re:Invent experience with food, drinks, and giveaways. However, it will be crowded. You’ll want to come back again later in the week to check out vendor products and services, chat with vendors whose products you already use, get swag, and enter drawings. The expo is open from 10:30 AM – 6 PM Tuesday, 8 AM – 6 PM Wednesday, and 10:30 AM – 4 PM Thursday.

You won’t be disappointed by the swag. Just search #reinventswag for examples — sponsors go all out. By the way, if you’re aiming to maximize swag, definitely stop by after lunch on Thursday. Sponsors will practically beg you to take stuff off their hands so they don’t have to ship it home. You can grab toys, stickers, and keychains for your kids, or build an entire wardrobe of t-shirts and socks for yourself.

And of course, stop by and visit ParkMyCloud and Turbonomic at the Venetian expo! We’ll be at booth #1029.

We’d love to see you there!  Sign up for a meeting with us now.

Activities and Parties

Round out your Vegas experience with some partying! The great thing about a conference like this is that you can often drink your way through for free. Outside of the pub crawl, many parties require you to register ahead of time, so keep an eye on your email for invitations. You’ll want to bookmark this list of 2019 re:Invent parties. As of this writing, it’s a bit sparse, but check out last year’s party list for an idea of the multitude of options to come.

Obviously, you don’t want to miss re:Play, the AWS re:Invent party and centerpiece of the conference (you know, besides the keynotes). Expect more free food and drink, an EDM concert, a retro arcade, a laser escape room, a drone obstacle course, a climbing wall, dodgeball, a bounce castle, archery tag, and whatever else they come up with for this year. And new for 2019 is the Intersect Music Festival put on by AWS on Friday and Saturday, although note that tickets are separate from re:Invent.

Or venture out beyond the conference hall walls and try your luck or catch a show – it’s hard to be bored in Vegas.

Frequently Asked Questions, Answered

Here are some answers to the most common questions about AWS re:Invent 2019:

How much does AWS re:Invent 2019 cost? 

A full pass to the conference costs $1799. There are a handful of discounts available (for example, if your company is also sponsoring the event), but if you don’t already have access to one of those, we’d recommend going ahead and buying a full pass. 

What are the AWS re:Invent 2019 dates? 

This year’s AWS Las Vegas extravaganza will run December 2-6. If you fly in on Sunday, December 1, you can participate in Midnight Madness. And don’t forget that AWS’s new Intersect Music Festival runs December 6-7.

Will there be AWS re:Invent video recordings? 

Yes – AWS records many of the sessions and posts to their YouTube channel.

How does AWS re:Invent registration work when I arrive?

There will be registration areas available in multiple venues throughout the conference. There may also be one in the arrival terminal at the airport, but if the line is long, skip it – you’ll wait less to check in at the Venetian or other venues.

AWS re:Invent Sessions That Caught Our Eyes

See more details and (once open) register through the official AWS re:Invent agenda.

  • AIM302-R – Create a Q&A bot with Amazon Lex and Amazon Alexa – A recent poll showed that 44 percent of customers would rather talk to a chatbot than to a human for customer support. In this workshop, we show you how to deploy a question-and-answer bot using two open-source projects: QnABot and Lex-Web-UI. You get started quickly using Amazon Lex, Amazon Alexa, and Amazon Elasticsearch Service (Amazon ES) to provide a conversational chatbot interface. You enhance this solution using AWS Lambda and integrate it with Amazon Connect.
  • AIM303-R –  Stop guessing: Use AI to understand customer conversations – You don’t need to be a data scientist to build an AI application. In this workshop, we show you how to use AWS AI services to build a serverless application that can help you understand your customers. Analyze call-center recordings with the help of automatic speech recognition (ASR), translation, and natural language processing (NLP). Get hands-on by producing your own call recordings using Amazon Connect. In the last step, you set up a processing pipeline to automate transcription and NLP analysis, and run analytics and visualizations on the results.
  • ARC201 – Comparing serverless and containers – Microservices are a great way to segment your application into well-defined, self-contained units of functionality. Come join us in this chalk talk as we discuss two common architectures for deploying microservices: containers and serverless.
  • ARC204-R – Cost optimizing a workload – In this hands-on session, we guide you through the AWS tools, services, and design decisions for architecting cost-optimized applications. Whether you run cloud-native applications or legacy monolithic applications, we show you tricks and techniques to run your workload at the lowest cost possible. Once the workload is optimized, we set up an enterprise cost-optimization dashboard to measure and report on workload efficiency utilizing Amazon Athena and Amazon QuickSight.
  • ARC303-R – Failing successfully: The AWS approach to resilient design – AWS global infrastructure provides the tools customers need to design resilient and reliable services. In this session, we explore how to get the most out of these tools. We discuss achieving continued stability and availability in the face of impaired dependencies. We also cover AWS tools and best practices you can use to design applications and services that avoid overload.
  • ARC406-R1 – Building multi-region microservices – In this session, participate in a hands-on exercise where you create, verify, and test a serverless solution across multiple regions using AWS Lambda and Amazon DynamoDB global tables.
  • CON210-S – How to run like a startup with enterprise Kubernetes on AWS – Scholastic Corporation reinvented itself with the adoption of a startup mindset and a move to microservices on AWS running in Red Hat OpenShift Container Platform, an enterprise Kubernetes distribution. In this session, you hear about the journey that this large publishing enterprise went on during its transformation. From our discussion of breaking up monolithic applications into microservices, you learn about some of the pitfalls along the road to Kubernetes, containers, and microservices adoption. You also hear about the resulting demonstrable benefits—faster time to market, lower infrastructure costs, happier developers, and improved business performance.
  • DOP204-S – Transforming IT pros to DevOps gurus: How to secure your new tech stacks – Large enterprises are limited by legacy systems. With existing tools, traditional platforms, outdated requirements, and more, IT and engineering teams have difficulty building in a modern way. Hear how Pivvot, a US enterprise, used tools in the AWS Cloud to escape this traditional trap. The Pivvot team learned to scale by adopting a DevOps philosophy to support hundreds of organizations and thousands of customers inside a commercial software company. Learn how building a pipeline-driven cloud-native process with built-in security helps modernize an organization. Culture change is challenging, but with the right approach and a strong tech stack, you can build securely and ship quickly in the AWS Cloud. 
  • DOP206-S – Breaking the monolith with style and speed – Microservices are here to stay, but nearly all of the most successful architectures originate from the classic monolith. The promised land of microservices is filled with treasures like decoupled deploys, scalability, resilience, development velocity, and more. However, the journey there can involve prolonged seasons of pain, suffering, and even regret. This talk is the story of how Stitch Fix used all three pillars of observability to build confidence, accelerate its migration, and collaborate with other teams. Learn about the strategies that Stitch Fix used and how it incorporated logs, metrics, and traces into these strategies. 
  • IOT301-R – Race to generate IoT connected wind energy – In this workshop, learn how to connect a simple wind turbine to AWS IoT Core and work in teams to see which one can generate the most aggregate power in a five-minute race at the end of the workshop. A large fan will be the single source of wind, and energy levels from each turbine will be aggregated between the two teams with a continuously updating dashboard. Participants will learn how to set up an IoT device connection with their own account, enrich the data with a pipeline, and store the data for a dashboard.
  • MGT304 – Automate everything: Options and best practices – You can use an expanding set of services to automate many common management tasks in your AWS environment, including patching, configuration updates, software stack deployments, and more. In this session, we explore how you can use AWS management tools for automation, including the use of self-service runbooks. We discuss the many options available, including AWS CloudFormation, AWS Service Catalog, and AWS Systems Manager.
  • MGT310-R – High-velocity service delivery: Infrastructure as code – Customers today are looking to the cloud to help them evolve, adapt, and innovate faster than ever. In this chalk talk, learn how to use AWS native services to increase your organization’s ability to deliver at high velocity using services like AWS CloudFormation, AWS OpsWorks, and AWS Systems Manager. We talk about best practices to help you provision and manage infrastructure, deploy code, and automate your software-release processes.
  • SVS203-R1 – Build a serverless ride-sharing web application – In this workshop, you deploy a simple web application that lets users request unicorn rides from the Wild Rydes fleet. The application presents users with an HTML-based user interface for indicating the location where they would like to be picked up and interfaces on the backend with a RESTful web service to submit the request and dispatch a nearby unicorn. The application also provides facilities for users to register with the service and log in before requesting rides.
  • SVS309 – Development life cycle for serverless backends – With serverless applications, you can easily get started, experiment, and build prototypes. However, as serverless usage grows and more developers adopt serverless, it can be hard to maintain a good workflow from ideas to production. In this talk, we share inspirations for how to build a good development workflow for serverless applications that allow for fast experimentation while sharing common standards across teams and organizations.

See You There!

Do you have any other tips for planning the perfect AWS re:Invent schedule? Sessions you’re looking forward to? Any other questions, tips, or swag suggestions? Let us know in the comments. Cheers, and see you there!

More on re:Invent: 2018 recap.

Do Cloud Providers Care About Green Computing?

Is green computing something cloud providers like Amazon, Microsoft, and Google care about? And whether they do or not – how much does it matter? As the data center market continues to grow, it’s making an impact not only on the economy but on the environment as well. 

Public cloud offers enterprises more scalability and flexibility than on-premises infrastructure. One benefit occasionally touted by the major cloud providers is that organizations become more socially responsible when moving to the cloud by reducing their carbon footprint. But is this true?

Here is one example: Northern Virginia is the East Coast’s data center capital, home to “Data Center Alley” (and, as it happens, the ParkMyCloud offices) – more than 100 data centers and more than 10 million square feet of data center space. Northern Virginia welcomed the data center market because of its positive economic impact. But as demand for cloud services grows, data center expansion accelerates with it. Earlier this year, the cloud boom in Northern Virginia alone had commissioned over 4.5 gigawatts of energy – about the same output as nine large (500-megawatt) coal power plants.

Environmental groups like Greenpeace have accused major cloud providers like Amazon Web Services (AWS) of not doing enough for the environment when operating data centers. According to them, the problem is that cloud providers commission energy from utilities focused mostly on dirty energy (coal and natural gas), with very little coming from renewable initiatives. While these claims put the spotlight on energy companies as well, we wanted to know what (if anything) the major cloud providers are doing to rely less on these energy sources and supply their data centers with cleaner energy, making green computing a reality.

Data Center Sustainability Projects from AWS

According to AWS’s sustainability team, they’re investing in green energy initiatives and have committed to an ambitious goal of 100% renewable energy use by 2040. They are pursuing this by proposing and supporting smart environmental policies, working with state and local environmental groups, and buying renewable energy through power purchase agreements (PPAs) with power companies.

AWS’s Environmental Layer, which is dedicated to site selection, construction, operations, and the mitigation of environmental risks for data centers, also includes sustainability considerations in those decisions. According to them, “When companies move to the AWS Cloud from on-premises infrastructure, they typically reduce carbon emissions by 88%.” This is because their data suggests companies generally use 77% fewer servers, 84% less power, and gain access to a 28% cleaner mix of energy – solar and wind power – compared to using on-premises infrastructure.

Amazon Solar Farm

So, how much of this commitment has AWS been able to achieve, and is it enough? In 2018, AWS said it had made significant progress on its sustainability commitment and had exceeded 50% renewable energy use. Currently, AWS has nine renewable energy farms in the US: six solar farms in Virginia and three wind farms in North Carolina. AWS plans to add three more renewable energy projects – one more here in the US, one in Ireland, and one in Sweden. Once completed, these projects are expected to bring approximately 2.7 gigawatts of renewable energy capacity online.

Microsoft’s Environmental Initiatives for Data Centers

Microsoft has stated that it is committed to change and to making a positive impact on the environment by “leveraging technology to solve some of the world’s most urgent environmental issues.”

In 2016, they announced they would power their data centers with more renewable energy, and set a target of 50% renewable energy by the end of 2018. According to them, they hit that goal in 2017, earlier than expected. Looking ahead, they plan to surpass their next milestone of 70% and hope to reach 100% renewable energy by 2023. If they meet these targets, they will be far ahead of AWS.

Beyond renewable energy, Microsoft plans to use IoT, AI and blockchain technology to measure, monitor and streamline the reuse, resale, and recycling of data center assets. Additionally, Microsoft will implement new water replenishment initiatives that will utilize rainfall for non-drinking water applications in their facilities.

Google’s Focus for Efficient Data Centers 

Google claims that making data centers run as efficiently as possible is a very big deal, and that reducing energy usage has been a major focus for them for over 10 years.

Google’s innovation in the data center market came from building facilities from the ground up instead of buying existing ones. According to Google, using machine learning to monitor and improve power usage effectiveness (PUE) and find new ways to save energy gave them the ability to implement cooling technologies and operational strategies that reduced energy consumption in their buildings by 30%. Additionally, they deployed custom-designed, high-performance servers that use as little energy as possible by stripping out unnecessary components, helping them reduce their footprint and add more load capacity.
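
(A quick aside, since PUE comes up a lot in these discussions: power usage effectiveness is total facility energy divided by the energy delivered to the computing equipment itself, so a PUE of 1.0 would mean zero overhead spent on cooling, power distribution, and other non-IT loads.)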

By 2017, Google announced it had reached 100% renewable energy by purchasing power from wind and solar farms through power purchase agreements (PPAs) and reselling it into the wholesale markets where its data centers are located.

The Environmental Argument

Despite the renewable energy pledges cloud providers are committing to, cloud services continue to grow beyond those commitments, and the energy needed to operate data centers still depends heavily on “dirty energy.”

Breakthroughs in cloud sustainability are taking place, big and small, giving the cloud better infrastructure, higher-performance servers, and lower carbon emissions through greater access to renewable energy resources like wind and solar power.

Some may argue that time is against us, but if cloud providers continue to strengthen their commitments to keep pace with growth, data centers – and ultimately the environment – will benefit.

Are AWS Reserved Instances Better Than On-Demand?

AWS Reserved Instances vs. Scheduling On-Demand Instances

We are frequently asked about the impact of instance scheduling on AWS Reserved Instances for EC2 and RDS.  Scheduling On-Demand instances and buying Reserved Instances (RIs) are both useful techniques for cost optimization, but they are polar opposites in terms of goals.  RIs are all about getting a better price for RDS or EC2 instances that run all the time. Scheduling is all about reducing costs by turning off instances when they are not in use.

How should a customer choose between buying an AWS Reserved Instance and applying scheduling to the instance?  This is an important question, as the savings from scheduling can often exceed the savings available from a Reserved Instance.  Before we dive into that though, let’s get familiar with some critical RI nuances that can help you make the decision…and make the most of your RIs.

Note: versions of this article were originally published in 2015 and 2017. It has been completely re-analyzed, rewritten, and updated! 

Where are my Reserved Instances?

First of all, it’s worth clarifying that there is no functional difference between Amazon Reserved Instances and On Demand. It’s all in the billing. <rant> I wish I had a dollar for each time someone asked me (while looking at the AWS Console) “Which ones of these are the Reserved Instances?” </rant>. You cannot be sure which instances are using a reservation until you get your bill.  You can make a good guess, based on what account you are looking at and what reservations you have purchased, but if you are using instance size flexibility with region-based RIs, it would be a pure guess.  An instance reservation is not like a hotel reservation, where you end up with a specific room for the duration of your stay. That would be a Dedicated Host Reservation – a whole other beast with its own set of limitations.

3600 Seconds per Hour

Yep – there are 3600 seconds in an hour.  I did the math. You may ask: why does that matter?  It matters because of two things: per-second billing for EC2, and the fact that AWS RIs are billed in one-hour chunks.  More precisely, RIs are billed in one “clock-hour” chunks, where a clock-hour runs from the top of the hour to the final second of that hour – like 04:00:00 to 04:59:59.

If you are running a number of instances that use per-second billing, and they go up or down during a (clock) hour, then multiple instances may be able to leverage the Reserved Instance pricing.

As paraphrased shamelessly from here, if you purchase one m5.large Reserved Instance and run four matching m5.large instances for 15 minutes each (900 seconds each) in the same hour, the total running time is one hour, which consumes the entire Reserved Instance allocation.  This usage pattern is illustrated below.

That said, if multiple matching instances are running at the same time, the Reserved Instance billing benefit is applied to all of the instances, up to a maximum of 3600 seconds in a clock-hour.  On-demand rates are used for any usage after that. This is illustrated below.
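
To make the clock-hour accounting concrete, here’s a minimal Python sketch of the behavior described above – a simplified model of the billing rules, not an official AWS algorithm:

```python
# Simplified model of one Reserved Instance's benefit within a clock-hour.
# Each tuple is (start_second, end_second) within the hour (0..3600) for a
# matching, per-second-billed instance.

def ri_split(runs):
    """Return (seconds billed at the RI rate, seconds billed On-Demand)."""
    total = sum(end - start for start, end in runs)
    covered = min(total, 3600)  # one RI covers at most 3,600 seconds per hour
    return covered, total - covered

# Four m5.large instances run 15 minutes each, back to back: the single
# matching RI absorbs the whole hour.
print(ri_split([(0, 900), (900, 1800), (1800, 2700), (2700, 3600)]))  # (3600, 0)

# Five matching instances run the same 15 minutes concurrently: 900 of the
# 4,500 instance-seconds spill over to On-Demand rates.
print(ri_split([(0, 900)] * 5))  # (3600, 900)
```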

By the way, remember that per-second billing only applies to instances running an open-license OS like Amazon Linux or Ubuntu.  

If your instance is running any flavor of Windows, Red Hat Linux, or any pay-by-the-hour image from the AWS Marketplace, that instance is billed by the hour, and any matching reservation will be consumed by the hour, even if the instance is not.  Any overlapping instances will each need their own RI to get RI pricing.

Elastic Reserved Instances

Wait…what?  Isn’t that kind-of an oxymoron?  Elastic means it should be able to change with my usage…and aren’t reservations a commitment to usage?  Well, maybe…but some RIs are definitely more elastic (or at least more flexible) than others.  Regional Reserved Instances can leverage “instance size flexibility.”  This is the ability of a reservation to apply up or down to other instance types in the same family.  Regional RIs are applied in priority order from the smallest instance in the region to the largest.  This makes these AWS reservations somewhat elastic: if you buy reservations for smaller instance types than you normally use, those reservations are more likely to be consumed, even if you rightsize an instance or replace one server with another of a different size.

The math of how an RI of one instance type can be applied to another instance type is governed by a “normalization factor”, described here.  Putting the table in that link another way:
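
Each instance size carries a factor – nano = 0.25, micro = 0.5, small = 1, medium = 2, large = 4, xlarge = 8 – and from there the factor scales linearly with the xlarge multiple (2xlarge = 16, 4xlarge = 32, 12xlarge = 96, 24xlarge = 192, and so on). An RI supplies that many “units” of coverage, which can be split across or combined to cover other sizes in the same family. (The sketch after the examples below shows the same arithmetic in code.)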

Some examples of how we can use this table:

  • To fully cover one t3.medium with RIs, you would buy eight t3.nano RIs.  If you stop using the t3.medium instance, the same eight t3.nano RIs could also be used to cover:
      • Two t3.small instances
      • One t3.small and two t3.micro
      • Half of a t3.large
  • To fully cover an m5.12xlarge would require 24 m5.large RIs.  Those 24 m5.large RIs could also be used for:
      • One m5.8xlarge and one m5.4xlarge
      • One m5.8xlarge and two m5.2xlarge
      • One m5.8xlarge, one m5.2xlarge, one m5.xlarge, and two m5.large
      • Three quarters of an m5.16xlarge
      • Etc.
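
Here’s a minimal Python sketch of that arithmetic, using a subset of the published normalization factors:

```python
# Normalization factors for instance size flexibility (a subset of the
# published AWS table; factors scale linearly with instance size).
FACTORS = {
    "nano": 0.25, "micro": 0.5, "small": 1, "medium": 2,
    "large": 4, "xlarge": 8, "2xlarge": 16, "4xlarge": 32,
    "8xlarge": 64, "12xlarge": 96, "16xlarge": 128, "24xlarge": 192,
}

def ris_needed(target_size, ri_size):
    """How many RIs of ri_size fully cover one instance of target_size.

    Only valid within one instance family, for regional-scope RIs on
    open-license Linux, per the limitations listed below.
    """
    return FACTORS[target_size] / FACTORS[ri_size]

print(ris_needed("medium", "nano"))     # 8.0  -> eight t3.nano per t3.medium
print(ris_needed("12xlarge", "large"))  # 24.0 -> 24 m5.large per m5.12xlarge
```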

The lesson here is that it is easiest to manage reserved instances if you buy the smallest instance size available within an instance family.  In fact, as shown below, this is the exact type of recommendation you are likely to receive from the AWS Cost Explorer.

So why do we mention AWS flexible reserved instances with relation to schedules?  Recall that the allocation of RIs to instances is done by AWS each second within a clock-hour.  Each reservation is consumed by instances in its family until all 3600 seconds have been allocated.  You do not necessarily have to keep an instance up for the whole hour to use the reservation. You can still consume the whole RI, even if you are starting and stopping instances by schedule or within an auto-scaling group over the course of an hour.

Something so cool cannot be without limitations and gotchas. Here are a few we’ve noticed:

  • Instance size flexibility is ONLY available for certain instance platforms.
    • For AWS EC2 reserved instances, only open-license Linux instances support flexible RIs.  It is NOT APPLICABLE to any flavor of Windows or other licensed software and OS.
    • For AWS RDS reserved instances, instance size flexibility is available for the MySQL, MariaDB, PostgreSQL, and Amazon Aurora database engines, as well as the “Bring your own license” (BYOL) edition of Oracle. [That said: it is not clear as of this writing if the recent RDS per-second billing change also applies to RDS instance size flexibility. The billing pages still state that RDS RIs are allocated to instances on a per-hour basis.]
  • Flexibility is only available within the same instance family
  • Flexibility is only available for Regional RIs 

How does AWS Reserved Instance Pricing Work?

As usual, there are a number of different decisions that need to be made before this question can be answered.  AWS RIs are available with two different levels of flexibility and several different terms of purchase. 

In terms of flexibility, RIs can either be purchased as “standard” or “convertible.” 

  • AWS Convertible Reserved Instances – allow changing the assigned availability zone, instance size/families, operating system, tenancy, and networking type. There is no fee for changing the reservation attributes, but you can only convert the RI into a new set of attributes with equal or greater cost than the original.  If the cost is greater, you will have to pay the difference.
  • Standard Reserved Instances – once purchased, cannot be changed. They can be sold on the AWS Reserved Instance Marketplace.

Both AWS Convertible RIs and Standard RIs offer:

  • Regional vs. Availability Zone assignment scope.  Regional scope offers a lot more flexibility for where your instances can reside while still leveraging RI pricing. Availability Zone scope has the bonus of a guaranteed capacity reservation (though, like RI pricing, the capacity reservation cannot be assigned to a specific instance).
  • For open-license Linux, instance size flexibility, as described earlier.

There are a few options for terms of purchase that affect AWS RI pricing. RIs can be purchased for 1 or 3-year terms, and with no upfront, partial upfront, or all upfront payment available for each. 

To get a better idea of the difference in the amount you’ll end up paying with each purchasing option, let’s take a look at some pricing scenarios for the general purpose m5.large instance in us-east-1 (US Virginia).  When you look at these on the AWS RI price list, they are usually grouped by year terms and standard vs. convertible.  Since you can already see it that way on AWS, and because time is the greatest unknown, I am going to give a bit of a different view, keeping the years together on the same comparison chart.

One-Year Term RIs

To benefit from the discount provided by a reservation while keeping contract terms to a minimum, look at the 1-year RIs:

You can see from this that within the Convertible vs. Standard tiers, there is not a lot of difference between the upfront payment options.  Your upfront cash decision here can depend on your accounting process, perhaps accounting for the difference between CapEx and OpEx (Capital Expenditures and Operating Expenditures), or on your current and prospective cash flow.  The pricing between the Convertible and Standard options is so similar as to make you seriously consider whether the benefit of flexibility in the AWS Reserved Instances convertible option outweighs the minor cost savings.

Three-Year Term RIs

If you are confident of your three-year usage plans, look at the 3-year RIs, shown here as the cost per year:

There is a bit more visible progression between the options here, and the cost savings difference between the 1-year graph and the 3-year graph is clearly visible.  Still, the 1-year vs. 3-year decision may again come down to accounting practices.

Note that these graphs are just for one instance type in one location in about the middle of the overall performance pack.  They do not express a true profile of all instance types/families in all locations, and you should do your own comparison before buying any RIs.  That said….

AWS Reserved Instances vs. On Demand

Scheduling Break-even

FINALLY getting into the core question.  Is it better to shut an instance down or buy a reserved instance?  Let’s do the math and compare AWS On Demand vs. Reserved, with on/off scheduling applied to the On Demand option.  Given all the different pricing options it can be a bit difficult to decide how to compare these. Let’s go with the no-upfront RI options and see where the RIs break even with scheduling.

AWS On Demand vs. Reserved Instance Purchase Option Comparison

Since the hours might be hard to judge at this scale, here are the break-even hours in a table (hours rounded up):

So what do these hours mean?  

By way of comparison, here are a few common schedules and their hours in ParkMyCloud (with weeks converted to months on the basis of 52 weeks/12 months = 4.33 weeks/month):

  • Running 8 AM – 5 PM, Weekdays = 195 hours/month (nominally a 73% savings)
  • Running 7 AM – 7 PM, Weekdays = 260 hours/month (64% savings)
  • Running 5 AM – 9 PM, Weekdays = 347 hours/month (53% savings)

Of the above, 7 AM – 7 PM Weekdays is the second most popular schedule in our entire system, and at 260 hours, it is easily more cost-effective than the least expensive RI.

In that final schedule above, you need to run 16 hours per day, Monday-Friday before a 3-year RI starts to look like a better deal.  If you have other matching instances on another schedule in that same region/AZ, you could then buy the RI anyway, and save even more money.
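
If you want to run this break-even comparison against current prices for your own instance types, the arithmetic is easy to sketch. The rates below are illustrative placeholders – the On-Demand figure is roughly an m5.large in us-east-1, and you should substitute the real effective rate for whichever RI term and payment option you’re considering:

```python
# Break-even between scheduled On-Demand usage and a no-upfront RI.
HOURS_PER_MONTH = 730  # AWS's nominal month (8,760 hours / 12)

on_demand_rate = 0.096     # $/hour -- roughly an m5.large in us-east-1
ri_effective_rate = 0.060  # $/hour, placeholder -- a no-upfront RI is
                           # billed every hour of the term, running or not

def scheduled_cost(hours_on):
    """Scheduled On-Demand: pay only for the hours the instance runs."""
    return on_demand_rate * hours_on

ri_monthly = ri_effective_rate * HOURS_PER_MONTH
break_even = ri_monthly / on_demand_rate
print(f"The RI wins above {break_even:.0f} hours/month")  # ~456 here

# The 7 AM - 7 PM weekday schedule from the list above (~260 hours/month):
weekday_hours = 12 * 5 * (52 / 12)
print(f"Scheduled: ${scheduled_cost(weekday_hours):.2f} "
      f"vs. RI: ${ri_monthly:.2f} per month")
```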

Scheduled Reserved Instances

“But wait!” you say…  “There are these things called Scheduled Reserved Instances – would they not offer me the best of both worlds?”  Maybe, but note that to support Scheduled RIs, AWS has essentially set aside hardware for use as Scheduled RIs – this is a finite set of dedicated resources.  On paper, a Scheduled RI can definitely save you a bit more money, but let’s look at the limitations.  

  • Instances intended to consume a Scheduled Reservation must be launched during their scheduled time periods using a launch configuration that matches the reservation. Let’s break that down further:
      • They are launched with “a launch configuration” and terminated by EC2 three minutes before the end of the schedule window.  You cannot stop/start/suspend them, and so they cannot maintain state internally unless you do some EBS volume tweaks after launch.
      • You cannot launch a scheduled RI outside of its scheduled window.  If you try, that instance will not use the Reservation, as you would not expect it to magically jump over onto the dedicated hardware.
  • Only three regions and only a few older-generation instance families are supported.
  • Limited quantities are available (as of this writing, a search for an m4.large in us-east-1 with a Monday-Friday 7 AM – 7 PM scheduled listed only two reservations being available).
  • There is a minimum reservation size of 1,200 hours/year, 100 hours/month, 24 hours/week, or 4 hours/day.  (So you are not going to be able to use a Scheduled RI to run your database backup server for just a couple hours on Sunday mornings [I just tried…dang it.])
  • Scheduled RI purchases cannot be cancelled, modified or sold on the RI Marketplace. No buyer’s remorse allowed here…
  • And finally…  AWS describes the cost savings of Scheduled RIs over on-demand as being in the 5-10% range.  Again – you have to be really sure this workload is going to be needed on this schedule, and the workload is fully utilized.  If you find this server is ever idle or under-utilized, you may have lost an opportunity to save 50% or more by simply scheduling it with ParkMyCloud and then rightsizing it.

When launched in 2016, Scheduled RIs were only available in “US East (N. Virginia), US West (Oregon), and Europe (Ireland) regions, with support for the C3, C4, M4, and R3 instance types.”  Given that this has not changed, I am guessing the feature has not taken off as AWS had hoped.  That said, if you have something that needs to run for 4 hours per day, daily, and can use an older instance family in a supported region…go for it!  This may also be a good place to leverage AWS Spot instance types.

Should You Buy Reserved Instances?

Deciding between different RI options starts with knowledge of how stable your usage is going to be in the coming years.  Most organizations have a good understanding of at least a baseline level of usage, and some actually construct portfolios of reserved instance purchases, much like how you might balance a stock portfolio.  For example, 3-year Standard Reserved Instances are for those applications that are stable and unlikely to change. Balance those with some 3-year Convertible Reserved Instances to help save money in the longer term but allow for some flexibility in use.  Use 1-year RIs to account for shorter-term usage volatility, with again maybe a balance between Standard and Convertible where needed for those loads that might shift over the short term.  

Just like with stocks, such portfolios require active management, awareness of what is being used and what is coming in both the short and long term.  Companies with these types of RI management typically have dedicated cloud cost management staff, though RI management itself does not need to be a full-time job.  One of our larger customers tells us they do a quarterly review of their RI portfolio, starting with using ParkMyCloud to schedule any instances they can, and then rebalancing their RIs and investing in more as needed.  This is a best practice for any size company.

For production workloads that run 24×7 long-term, you REALLY should be buying RIs. For production workloads with an unknown future, or for non-production workloads (dev, test, QA, training, etc.), the answer is “probably not”.  Be careful though, as often RIs are seen as a quick fix to save 30-50% on the cloud bill, but then other ways are found to save even more, and you end up with underutilized RIs. Before you buy a Standard RI, make sure you have evaluated your resources for underutilization and overprovisioning.  Also consider the following cost-saving options:

  • Choose smaller instance sizes – each size down can save 50%. Use ParkMyCloud Rightsizing to first make sure your resources are appropriately sized for their load before committing to an RI purchase. 
  • For open-license Linux instances, buy RIs for smaller instance types in the same instance family in order to leverage instance size flexibility (allows the RIs to more easily be allocated against other resources, even if they are running for short periods of time)
  • Use Spot Instances and Autoscaling when appropriate for short-term/volatile workloads.
  • Schedule on/off times for on-demand instances.  Use ParkMyCloud SmartParking to automatically create schedules for when the resources are not being used, while still giving your staff the flexibility to easily restart them if needed outside the normal hours. Even a generous 7 AM – 7 PM weekday schedule will immediately save 64%!

The worst thing one can do is…nothing.  Set aside some time each quarter to review EC2 and RDS instance utilization.  Rightsize and schedule what you can – try a free trial of ParkMyCloud to see how we can help you with that.  After that, anything left running 24×7 is a candidate for an RI.  If you are risk-averse or time-crunched, stick with 1-year convertible RIs to save at least 36% on at least some of your resources.  Then take all that money you saved and put it into other things that will bring greater value to your business.

AWS Neptune Overview – Amazon’s Graph Database Service

AWS Neptune is AWS’s managed graph database service, offered to give customers an option to easily build and run applications that work with highly connected datasets. It was first announced at AWS re:Invent 2017, and made generally available in May 2018.

Graph databases like AWS Neptune were created to address the limitations of relational databases, and offer an efficient way to work with complex data. 

What is a graph database?

A graph database is a database optimized to store and process highly connected data – in short, it’s about relationships. The data structure for these databases consists of vertices, or nodes, and directed links called edges.

Use cases for such highly-connected data include social networking, restaurant recommendations, retail fraud detection, knowledge graphs, life sciences, and network & IT ops. For a restaurant recommendations use case, for example, you may be interested in the relationships between various users, where those users live, what types of restaurants those users like, where the restaurants are located, what sort of cuisine they serve, and more. With a graph database, you can use the relationships between these data points to provide contextual restaurant recommendations to users.

Details on the AWS Neptune Offering

AWS Neptune Pricing 

The AWS Neptune cost calculation depends on a few factors:

  • On-Demand instance pricing – you’ll need to pay for the compute instances needed for read-write workloads as well as Amazon Neptune replicas. These follow the general pricing for AWS On Demand instances.
  • Database Storage & I/Os – storage is also paid per usage with no upfront commitments. Storage is billed in per GB-month increments and I/Os are billed in per million request increments. 
  • Backup storage – you are charged for the storage associated with your automated database backups and database cluster snapshots. As per usual, increasing the retention period will cost more. 
  • Data transfer – you are charged per GB for data transferred in and out of AWS Neptune.

For this, as with most AWS services, pricing is confusing and difficult to predict. 
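
One practical way to cope is to script a rough estimate and plug in current numbers from the Neptune pricing page. Here’s a minimal sketch – every rate is an input, and none of the numbers come from AWS:

```python
# Rough monthly Neptune estimate. Every rate is an input -- pull current
# figures from the Neptune pricing page for your region before trusting it.
HOURS_PER_MONTH = 730

def neptune_monthly_estimate(instance_rate, instance_count,
                             storage_gb, storage_rate,
                             io_millions, io_rate,
                             backup_gb, backup_rate,
                             transfer_gb, transfer_rate):
    """Sum the cost dimensions described in the list above."""
    return (instance_rate * instance_count * HOURS_PER_MONTH  # compute
            + storage_gb * storage_rate                       # GB-month
            + io_millions * io_rate                           # per million I/Os
            + backup_gb * backup_rate                         # backup storage
            + transfer_gb * transfer_rate)                    # data transfer
```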

AWS Neptune Use Cases

Use cases for the AWS graph database and other similar offerings include:

  • Machine learning, such as intelligent image recognition, speech recognition, intelligent chatbots, and recommendation engines.
  • Social networking
  • Fraud detection – flexibility at scale makes graph databases useful to work with the huge amount of transactional data needed to detect fraud. 
  • Regulatory compliance – ever more important as HIPAA, GDPR, and other regulations place strict requirements on the way organizations use data about customers.
  • Knowledge graphs – such as advanced results for keyword searches and complex content searches.
  • Life sciences – graph databases are uniquely suited to store models of disease and gene interactions, protein patterns, chemical compounds, and more.
  • Network/IT Operations to keep networks secure, including identity and access management, detection of malicious file paths, and more. 
  • Supply chain transparency – graph databases are great for modeling complex supply chains that span the globe. 

Tired of SQL?

If you’re tired of SQL, AWS Neptune may be for you. A graph database is fundamentally different from SQL. There are no tables, columns, or rows – it feels like a NoSQL database. There are only two data types: vertices and edges, both of which have properties stored as key-value pairs.

AWS Neptune is fully managed, which means that database management tasks like hardware provisioning, software patching, setup, configuration, and backups are taken care of for you.

It’s also highly available, replicated across multiple availability zones. This is very similar to Aurora, Amazon’s relational database, in both its architecture and availability.

Neptune supports Property Graph and W3C’s RDF. You can use these to build your own web of data sets that you care about, and build networks across the data sets in the way that makes sense for your data, not with arbitrary presets. You can do this using the graph models’ query languages: Apache TinkerPop Gremlin and SPARQL.
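
To get a feel for what Gremlin looks like from application code, here’s a minimal sketch using the gremlinpython client and the restaurant recommendation example from earlier. The endpoint is a placeholder, and a real cluster also needs network access (and optionally IAM auth) configured:

```python
# Minimal Gremlin traversal against a Neptune endpoint via gremlinpython.
# pip install gremlinpython; the endpoint below is a placeholder.
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection

conn = DriverRemoteConnection("wss://your-neptune-endpoint:8182/gremlin", "g")
g = traversal().withRemote(conn)

# Vertices and edges with key-value properties, as described above.
alice = g.addV("person").property("name", "Alice").next()
bob = g.addV("person").property("name", "Bob").next()
sushi = g.addV("restaurant").property("name", "Sushi Hana").next()
g.addE("knows").from_(alice).to(bob).iterate()
g.addE("likes").from_(bob).to(sushi).iterate()

# Contextual recommendation: restaurants liked by people Alice knows.
recs = (g.V().has("person", "name", "Alice")
         .out("knows").out("likes").values("name").toList())
print(recs)  # ['Sushi Hana']
conn.close()
```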

Visualization is not built into AWS Neptune natively. However, data can be visualized with Amazon SageMaker Jupyter notebooks, or with third-party options like Metaphactory, Tom Sawyer Software, Cambridge Intelligence/Keylines, and Arcade.

Other Graph Database Options

There’s certainly competition in the market for other graph database solutions. Here are a few that are frequently mentioned. 

AWS Neptune vs. Neo4j

Neo4j is a graph database that has been rated most popular by mindshare and adoption. Version 1.0 was released in February 2010. Unlike AWS Neptune, Neo4j is open source. Neo4j uses the language Cypher, which it originally developed. While there are several languages available in the graph database market, Cypher is widely known by now. 

Neo4j, unlike AWS Neptune, does actually come with graph visualization, which is a huge plus for working with this kind of data, though as mentioned above, there are several ways to visualize your Neptune data.

Other

Other graph databases include: AllegroGraph, AnzoGraph, ArangoDB, DataStax Enterprise Graph, InfiniteGraph, JanusGraph, MarkLogic, Microsoft SQL Server 2017, OpenLink Virtuoso, Oracle Spatial and Graph (part of Oracle Database), OrientDB, RedisGraph, SAP HANA, Sparksee, Sqrrl Enterprise, and Teradata Aster.

AWS Neptune – Getting Started

If you’re interested in the service, you can check out more about AWS Neptune. As you get started, the AWS Neptune docs are a great resource. Or, check out some AWS Neptune tutorials on YouTube.

Once you’re on board, make sure you have cost control as a priority. ParkMyCloud can now park Neptune databases to ensure you’re only paying for what you’re actually using. Try it out for yourself!

Microsoft’s Start/Stop VM Solution vs. ParkMyCloud

Users looking to save money on public cloud may be in the market for a start/stop VM solution. While it sounds simple, there is huge savings potential in simply stopping VMs, typically on a schedule. The basic idea is that non-production instances don’t need to run 24×7, so by turning VMs off when they’re not needed, you can save money.

If you use Microsoft Azure, perhaps you’ve seen the Start/Stop VM solution in the Azure Marketplace. You may want this tool if you want to configure Azure to start/stop VMs for the weekend or on weekday nights. It may also serve as a way to avoid writing your own stop-VM PowerShell scripts.

Users of Azure have taken advantage of this option to start/stop VMs during off-hours, but have found that it is lacking some key functionality that they require for their business. Let’s take a look at what this Start/Stop tool offers and what it lacks, then compare it to ParkMyCloud’s comprehensive offering.
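
For context on what the DIY alternative involves, here’s a bare-bones sketch using a recent Azure SDK for Python. The subscription, resource group, and VM names are placeholders, and a real scheduler would still need credentials, error handling, and something to trigger it on schedule:

```python
# Bare-bones DIY stop/start using the Azure SDK for Python.
# pip install azure-identity azure-mgmt-compute
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"  # placeholder
RESOURCE_GROUP = "dev-rg"                   # placeholder
VM_NAME = "dev-vm-01"                       # placeholder

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Deallocate (not just power off) so you stop paying for the compute.
client.virtual_machines.begin_deallocate(RESOURCE_GROUP, VM_NAME).result()

# ...and later, to bring it back up:
client.virtual_machines.begin_start(RESOURCE_GROUP, VM_NAME).result()
```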

Azure Start/Stop VM Solution

Let’s take a look at Azure’s start/stop VM solution. The crux of this solution is a handful of Azure services – specifically, Automation and Log Analytics to schedule the VMs, plus Azure Monitor emails to let you know when a system was shut down or started. Both scheduling and keeping track of those schedules are important.

As far as the backbone of Azure services, the use of native tools within Azure can be useful if you’re already baked into the Azure ecosystem, but can be prohibitive to exploring other cloud options. You may only use Azure at the moment, but having the flexibility to use other public clouds in the future is a strong reason to use cloud-agnostic tools today.

Next, this solution costs money, and it’s not very easy to estimate the cost (does that surprise you?). The total cost is based on the underlying services (Automation, Log Analytics, and Azure Monitor), which means it could be very cheap or very expensive depending on what else you use and how often you’re scheduling resources.

The schedules themselves can be based on time, but only with a single start and stop time each – which is not practical for typical applications. The page claims scheduling can also be based on utilization, but in the initial setup there is no place to configure that. The solution also needs to run for 4 hours after setup before it can show you any log or monitoring information.

The interface for setting up schedules and automation is not very user-friendly. It requires creating automation scripts that are either for stopping or starting only, and only have one time attached. This is tedious, and the single-time configuration makes it difficult to maximize off time and therefore savings. 

To create new schedules, you have to create new scripts, which makes the interface confusing for those who aren’t used to the Azure portal. At the end of the setup, you’ll have at least a dozen new objects in your Azure subscription, which only grows if you have any significant number of VMs.

Users have noted numerous complaints in the solution’s reviews:

  • “Great idea – painful to use – I don’t know why it couldn’t work like the auto shutdown built into the VM config with maybe a few more options (on/off weekdays vs. weekends). Feels like a painful set of scripts with no config options once it’s deployed (or I don’t understand how to use it).”
  • “Tried to boil the ocean – This solution is complex and bloated. It still supports classic VMs. The autostop solution only supports stop not start. Why bother using this?”
  • “Start/Stop VM Azure – Difficult to do and harder to modify/change components. I’ll have difficulty to repeat to create another schedule for different VM.”

Luckily, there’s an easier option.

How it stacks up to ParkMyCloud

So if the Start/Stop VM Solution from Microsoft can start and stop Azure VMs, what more do you need? Well, we at ParkMyCloud have heard from customers (ranging from day-1 startups to Fortune 100 companies) that a cloud cost optimization tool needs certain features if it is going to get widespread adoption.

That’s why we created ParkMyCloud: to provide simple, straightforward cost optimization that provides rapid ROI while being easy to use. You can use ParkMyCloud to save money through Azure start/stop VM schedules for non-production resources that are not needed evenings and weekends, as well as RightSizing overprovisioned resources.

Here are some of the features ParkMyCloud has that are missing from the Microsoft tool:

  • Single Pane of Glass – ParkMyCloud can work with multiple clouds, multiple accounts within each cloud, and multiple regions within each account, all in one easy-to-use interface.
  • Easy to change or override schedules – Users can change schedules or temporarily override them through the UI, our API, our Slackbot, or through our iOS app. 
  • Schedule recommendations – the Azure tool requires users to determine their own schedules. ParkMyCloud recommends on/off schedules based on keywords found in tags and names, and on resource utilization history.
  • Policy engine – ParkMyCloud can assign schedules automatically based on rules you create based on teams, names, or other criteria.
  • RightSizing – in addition to on/off schedules,  you can also save money with RightSizing. Our data shows that more than 95% of VMs are operating at less than 50% average CPU, which means they are oversized and wasting money.  Changing the VM size or family, or modernizing instance types, saves 50-75% of the cost of the instance.
  • User Management – Admins can delegate access to users and assign Team Leads to manage sub-groups within the organization, providing user governance over schedules and VMs. Admin, Team Lead, and Team Member roles are able to be modified to fit your organization’s needs.
  • No Azure-specific knowledge needed – Users don’t need to know details about setting up Automation Scripts or Log Analytics to get their servers up and running. Many ParkMyCloud administrators provide access to users throughout their organizations via the ParkMyCloud RBAC. This is useful for users who may need to, say, start and stop a demo environment on demand, but who do not have the knowledge necessary to do this through the Azure console.
  • Enterprise features – Single sign-on, savings reports, notifications straight to your email or chat group, and full support access helps your large organization save money quickly.
  • Integrations – use ParkMyCloud with your favorite SSO tools such as Ping and Okta. Get notifications and send commands back to ParkMyCloud through tools like Slack and Microsoft Teams.
  • Straightforward setup – it usually takes new users 15 minutes or less to set up a ParkMyCloud account, connect to Azure, and get started saving money. 
  • Reporting – with ParkMyCloud, users can view, download, and email savings reports covering costs, actions, and savings by team, credential, provider, resource, and more.
  • Notifications – users can get configurable notifications of ParkMyCloud updates & activities via email, webhook or ChatOps.
  • Huge cost savings and ROI – here are just a few examples from some of our customers:
    • A global fast food chain is managing 3,500+ resources in ParkMyCloud and saving more than $200,000 per month on their cloud spend
    • A global registry software company has saved more than $2.2 million on their cloud spend since signing up for ParkMyCloud – an ROI of 6173%
    • A global consumer goods company with 200+ ParkMyCloud users saves more than $100,000 per month on their cloud spend.

As you can tell, the Start/Stop VM solution from Microsoft can be useful for very specific cases, but most customers will find it lacking the features they really need to make cloud cost savings a priority. ParkMyCloud offers these features at a low cost, so try out the free trial now to see how quickly you can cut your Azure cloud bill.

9 Key Takeaways from our AWS Webinar on Automated Cost Control

We recently held our first AWS webinar, featuring speakers from AWS and Sysco alongside our CTO, Bill Supernor. If you missed “How to Turn AWS Utilization Data into Automated Cost Control,” not to worry! You can watch a replay here.

Here are 9 takeaways from this AWS webinar – and more resources to learn about them:

    • Cost Optimization is one of five key pillars in the AWS Well-Architected Framework, and we’re glad to see AWS prioritizing controlled costs so highly. If you’re not already familiar with the Well-Architected Framework, learn more on the AWS site. The other pillars, by the way, include operational excellence, security, reliability, and performance efficiency. 
    • Choose the right pricing model for your workload needs. Make sure to evaluate whether Reserved Instances are a good choice before committing, and don’t forget about Spot Instances either. 
    • Tagging resources according to cost allocation was emphasized by AWS as important for decision making – and of course it is! You have to be able to categorize your resources to make decisions about them. Here’s more on how to improve cloud automation through tagging.
    • Use AWS CloudWatch – similarly, use your CloudWatch data to optimize your environment. AWS is collecting data about your usage whether you’re looking at it or not – so put it to work! (One way to pull that data is sketched just after this list.)
    • Bagels work – Sysco Foods’ Kurt Brochu shared that he could motivate his team to show up for cost optimization trainings by providing bagels. Sometimes it takes a bit of prodding to get team members not directly responsible for budget to care about cost, so don’t be afraid to get creative. 
    • Use gamification as a motivator – similarly, by turning cost savings into a race or other competition, you can spark interest that might otherwise be hard to find.
    • There are plenty more AWS webinars – AWS partners frequently hold webinars in conjunction with the cloud provider. One of the best places to learn about them is the @AWS_Partners Twitter channel.
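
As an example of the CloudWatch point above, here’s a minimal boto3 sketch that pulls two weeks of hourly CPU utilization for one instance – the instance ID and the idle threshold are placeholders:

```python
# Pull the CPU utilization data CloudWatch is already collecting, via boto3.
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUUtilization",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(days=14),
    EndTime=datetime.utcnow(),
    Period=3600,            # hourly datapoints
    Statistics=["Average"],
)

hourly = [dp["Average"] for dp in stats["Datapoints"]]
if hourly and max(hourly) < 5.0:  # placeholder idle threshold
    print("Idle for two weeks -- a candidate for scheduling or rightsizing.")
```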

Watch the replay of our AWS webinar for the full story – and let us know in the comments below what else you’d like to learn about in future webinars!