4 Cloud Computing Jobs to Check Out if You Want to Break Into the Space

Lately, we’ve been thinking about cloud computing jobs and titles we’ve been seeing in the space. One of the great things about talking with ParkMyCloud users is that we get to talk to a variety of different people. That’s right – even though we’re laser-focused on cloud cost optimization, it turns out that can matter to a lot of different people in an organization. (And no wonder, given the size of wasted spend – that pushes people’s buttons.)

You know the cloud computing market is growing. You know that means new employment opportunities, and new niches in which to make yourself valuable. So what cloud computing jobs should you check out?

If you are a sysadmin or ops engineer:

Cloud Operations. Cloud operations engineers, managers, and those in similar roles are the people we speak with most often at ParkMyCloud, and they are typically the cloud infrastructure experts in the organization. This is a great opportunity for sysadmins looking to work with newer technology.

If you’re interested in cloud operations, definitely work on certifications from AWS, Azure, Google, or your cloud provider of choice. Attend meetups and subscribe to industry blogs – the cloud providers innovate at a rapid pace, and the better you keep up with their products and solutions, the more competitive you’ll be.

See also: DevOps, cloud infrastructure, cloud architecture, and IT Operations.

If you like technology but you also like working with people:

Customer Success, cloud support, or other customer-facing job at a managed service provider (MSP). As we recently discussed, there’s a growing market of small IT providers focusing on hybrid cloud in the managed services space. The opportunities at MSPs aren’t limited to customer success, of course – just in the past week we’ve talked to people with the following titles at MSPs: Cloud Analyst, Cloud Engineer, Cloud Champion/Cloud Optimization Engineer, CTO, and Engagement Architect.

Also consider: pre-sales engineering at one of the many software providers in the cloud space.

If you love process:

Site Reliability Engineer. This title, invented by Google, is used for operations specialists who focus on keeping the lights on and the sites running. Job descriptions in this discipline tend to focus on people and processes rather than on specific infrastructure or tools.

If you have a financial background:

Cloud Financial Analyst. See also: cloud cost analyst, cloud financial administrator, IT billing analyst, and similar. Cloud computing jobs aren’t just for technical people — there is a growing field that allows experts to adapt financial skills to this hot market. As mentioned above, since the cloud cost problem is only going to grow, IT organizations need professionals in financial roles focused on cloud. Certifications from cloud providers can be a great way to stand out.

What cloud computing jobs are coming next?

As the cloud market continues to grow and change, there will be new cloud computing job opportunities – and it can be difficult to predict what’s coming next. Just a few years ago, it was rare to meet someone running an entire cloud enablement team, but that’s becoming the norm at larger, tech-forward organizations. We also see a trend of companies narrowing “DevOps” roles to have professionals focused on “CloudOps” specifically — as well as variations such as DevFinOps. And although some people hear “automation” and worry that their jobs will disappear, there will always be a need for someone to keep the automation engines running and optimized. We’ll be here.


Multi-Cloud, Hybrid Cloud, and Cloud Spend – Statistics on Cloud Computing

The latest statistics on cloud computing all point to multi-cloud and hybrid cloud as the reality for most companies. This is confirmed by what we see in our customers’ environments, as well as by what industry experts and analysts report. At last week’s CloudHealth Connect18 in Boston we heard from Dave Bartoletti, VP and Principal Analyst at Forrester Research, who broke down multi-cloud and hybrid cloud by the numbers:

  • 62% of public cloud adopters are using 2+ unique cloud environments/platforms
  • 74% of enterprises describe their strategy as hybrid/multi-cloud today
  • But only:
    • 42% regularly optimize cloud spending
    • 41% maintain an approved service catalog
    • 37% enforce capacity limits or expirations

More often than not, public cloud users and enterprises have adopted a multi-cloud or hybrid cloud strategy to meet their cloud computing needs. Taking advantage of features and capabilities from different cloud providers can be a great way to get the most out of the benefits that cloud services can offer, but if not used optimally, these strategies can also result in wasted time, money, and computing capacity.

The data is telling – but we won’t stop there. For more insight on the rise of multi-cloud and hybrid cloud strategies, and to demonstrate the impact on cloud spend (and waste) – we have compiled a few more statistics on cloud computing.

Multi-Cloud and Hybrid Cloud Adoption Statistics

The statistics on cloud computing show that companies not only use multiple clouds today, but they have plans to expand multi- and hybrid cloud use in the future:

  • According to a 451 Research survey, 69% of organizations plan to run a multi-cloud environment by 2019. As they said, “the future of IT is multi-cloud and hybrid” – but with this rise, cloud spending optimization also becomes more of a challenge.
  • In a survey of nearly 1,000 tech executives and cloud practitioners, over 80% of companies were utilizing a multi-cloud strategy, commonly including a hybrid cloud model consisting of both public and private clouds.
  • And by multi-cloud, we don’t mean just two. On average, the number of private and public clouds used by companies to run applications and test out new services is 4.8.
  • On hybrid cloud strategy:
    • 83% of workloads are virtualized today (IDC)
    • 60% of large enterprises run VMs in the public cloud (IDC)
    • 65% of organizations have a hybrid cloud strategy today (IDC)

Cloud Spend Statistics

As enterprises’ cloud footprints expand, so too does their spending:

  • It’s not just public – the rise in cloud spend is happening on all fronts. According to IDC, 62.3 percent of private cloud spending went to on-premise private clouds in 2017.
  • The increase in cloud use, along with the rise of multi-cloud and hybrid cloud strategies, also correlates with increased investment in cloud services. In a survey of nearly 1,000 tech executives and cloud practitioners, 20% of enterprises plan to more than double their cloud spend, and another 17% plan to increase it by 50-100%.
  • 75% of participants said that one of their primary concerns was the challenge of managing cloud spend. Cloud cost optimization was a priority for the majority of participants, and average cloud waste was reported at 35%.
  • In another study from 451 Research, 38.8% of CIOs said that “cost savings” was their biggest motivator in migrating to the cloud, but post-migration, cloud costs were the biggest challenge they faced. Here’s what else they had to say:

“Cloud is an inexpensive and easily accessible technology. People consume more, thereby spending more, and forget to control or limit their consumption. With ease of access, inevitably some resources get orphaned with no ownership; these continue to incur costs. Some resources are overprovisioned to provide extra capacity as a ‘just in case’ solution. Unexpected line items, such as bandwidth, are consumed. The IT department has limited visibility or control of these items.”

What Does ParkMyCloud User Data Tell Us?

We’ve noticed some interesting patterns in the cloud platforms adopted by ParkMyCloud users as well, which highlight the multi-cloud trends discussed above as well as correlations between the types of companies that are attracted to each of the major public clouds. We observed:

  • A high rate of growth in the number of Google Cloud Platform (GCP) customers over the past several months. While Amazon Web Services still holds the lion’s share among organizations using ParkMyCloud, the rate of growth is much higher for GCP. We believe that as more and larger organizations become enmeshed in GCP’s infrastructure, they are finding a greater need for cost optimization.
  • Among our customers using a multi-cloud strategy, the majority use AWS in combination with Azure, while the rest are using AWS with Google Cloud Platform.
  • The company profiles of AWS and GCP users are similar – we find these to be tech-forward small/medium businesses, whereas Azure attracts a larger proportion of big enterprises.

What These Statistics on Cloud Computing Mean for Cloud Management  

Upon examining these statistics on cloud computing, it’s clear that multi-cloud and hybrid cloud approaches are not just the future, they’re the current state of affairs. While this offers plenty of advantages to organizations looking to benefit from different cloud capabilities, using more than one CSP complicates governance, cost optimization, and cloud management further as native CSP tools are not multi-cloud. As cloud costs remain a primary concern, it’s crucial for organizations to stay ahead with insight into cloud usage trends to manage spend (and prevent waste). To keep costs in check for a multi-cloud or hybrid cloud environment, optimization tools that can track usage and spend across different cloud providers are a CIO’s best friend.


The Cloud Managed Services Market is Growing – and That’s Good for MSPs

Lately, we have been talking to quite a few providers of cloud managed services that play in both the private and public cloud spaces. These conversations have centered around how cloud management needs are evolving as enterprises’ hybrid and multi-cloud needs have accelerated.

Most refer to this market as cloud managed services (for once, no acronym associated), and many of these managed service providers (MSPs) also sell migration services to bring customers from private to public cloud, and cloud services between Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform (GCP). So these MSPs can help you move your applications to the cloud, sell you the cloud services you’re using, and manage and optimize your cloud services. It’s a rapidly growing market with a lot of M&A activity as MSPs race to provide differentiated cloud managed services that enable them to help enterprises get to market faster, better, and cheaper.

The global cloud managed services market is expected to reach USD 82.51 billion by 2025, according to a study conducted by Grand View Research, Inc. Enterprises are focusing on their primary business operations, which results in higher cloud managed services adoption. Business services, security services, network services, data center services, and mobility services are major categories in the cloud managed services market. Implementation of these services helps enterprises reduce IT and operations costs while enhancing productivity.

Taking a step back, I had a look at Wikipedia to make sure we’re all aligned on what a managed services provider is and what cloud management is (together: cloud managed services):

  • A managed services provider is most often an information technology (IT) services provider that manages and assumes responsibility for providing a defined set of services to its clients either proactively or as the MSP (not the client) determines that services are needed.
  • Cloud management means the software and technologies designed for operating and monitoring applications, data and services residing in the cloud. Cloud management tools help ensure cloud computing-based resources are working optimally and properly interacting with users and other services.

Cloud managed services enable organizations to augment competencies that they lack, or to replace functions or processes that incur huge recurring costs. These services optimize recurring in-house IT costs, transform IT systems, and automate business processes, allowing enterprises to achieve their business objectives.

The “net net” is that MSPs providing managed cloud services enable enterprises to adopt and manage their cloud services more efficiently.

In March 2018, Gartner published a Magic Quadrant for Public Cloud Infrastructure Managed Service Providers, if you’re interested to see who they rank as the best of the best when implementing and operating solutions on AWS, Azure, and GCP (note this includes multi-cloud but not hybrid cloud). Several large SIs are on the list, like Accenture, Capgemini, and Deloitte, along with newer born-in-the-cloud, pure-play MSPs like 2ndWatch, Cloudreach, and REANcloud.

What’s interesting to us about this list is the recent M&A activity we have seen with many of these companies – here are a few we were able to remember over a beer (shout out to Crooked Run Brewery in Sterling, VA):

As you can see, there is a clear bias towards buying “born in the cloud,” public cloud focused MSPs, as that’s where enterprises lack expertise – and, of course, where the hyper growth is occurring as companies migrate from private to public cloud. Many of these providers started off supporting just AWS, and now have begun (or need to begin) supporting Azure and Google as well, covering the “big 3” cloud service providers in this new and emerging multi-cloud world.

MSPs that want to get into the cloud managed services game need to realize the pains are different in the public cloud: their focus needs to be on helping enterprises with security and governance, cloud spend management, the lack of resources and expertise, and the ability to manage multi-cloud.


3 Things Candy Crush Can Do To Make Cloud Migration Sweeter

Candy Crush is migrating to Google Cloud – the first major cloud migration decided on by its maker, online gaming company King. Starting in early 2019, Candy Crush will be hauling a substantial amount of big data from on-premise to Google Cloud Platform.

A cloud migration is no easy feat, and for a company that provides online gaming to over 270 million people globally, choosing the right cloud provider to navigate the challenges of such a move is crucial. Aside from “even richer online gaming experiences,” Sunil Rayan, managing director of gaming at Google Cloud, makes a good case for why Google was the best choice for Candy Crush:

“It will continue to innovate and demonstrate its leadership position as a global innovator by utilising our big data, AI and machine learning capabilities to give its engineers the next generation of tools to build great experiences.”

But with the potential for better gaming, higher speed, and scalability, a cloud migration also comes with a few big risks. Here are 3 things Candy Crush can do to make their cloud migration sweeter:

1. Don’t rush data transfer

Transferring data from on-premise to the cloud is a huge undertaking, especially for a company that claims to have the largest Hadoop cluster in Europe. Moving massive amounts of data all at once is not recommended because it slows transfer speeds, so it would be best for Candy Crush to make the move in parts, over time, and with the anticipation of potentially massive transfer costs associated with moving data out of or into a cloud.

2. Prepare for potential downtime

Downtime is a huge risk for any application, let alone a game played by millions across the world. Candy Crush can’t afford downtime on a game users say is downright addictive, so it’s important to account for inconsistencies in data, examine network connections, and prepare for the real possibility of applications going down during the cloud migration process.

3. Adapt to the technologies of the new cloud

Since choosing a cloud provider means committing a heavy amount of time to reconfiguring an application for the move, it’s important to evaluate whether the technology is the best fit. Technology is a big reason for Candy Crush moving their monolithic, on-premise environment to Google Cloud. Asa Bresin, FVP of technology at King, listed innovations in machine learning, query processing, and speed as drivers for cloud migration, and with technology known for speed and scalability, Google has met their requirements.

Bonus: Keep costs in check. Whether it’s heavy transfer costs, losing money during downtime periods, or the time and manpower needed to reconfigure an application for the cloud – cloud migrations come with costs. The time and costs of a cloud migration are easily misunderstood or drastically understated. To keep costs in check throughout and after the migration process, it’s important to have an understanding of cloud service offerings, pricing models, and the complexity of a cloud adoption budget. Evaluate all of these costs and look into options that will help you save post-migration, like optimization tools.

With a gradual shift, planning for risks of downtime, and the patience and flexibility to reconfigure for Google Cloud, Candy Crush can win at cloud migration.   


New Integration with CloudHealth: Now Even Easier to Automate and Optimize Cloud Cost Governance

Today we have news to improve cloud cost governance that both Finance and DevOps folks will appreciate: ParkMyCloud and CloudHealth have taken our partnership a step further with a first-of-its-kind technical integration. Our products now work together to give you a seamless cloud management experience, with a single place to go for multi-cloud cost management, reporting, and governance. Our goal is to save you time and money, and to improve financial accountability and management processes.

Customers in software, biotechnology, and education have tried it out and are saving an average of $25,000 per month on their cloud bills – and the feedback has been great. They say it’s rare to find integrations between the major platforms they use throughout the day, and this setup is unique.

Melanie Metcalfe, Director of Project Support at Foster Moore, said, “What we need to manage and optimize our cloud environments is cost control, user governance, and detailed reporting. It makes our cloud operations simpler and easier when solutions from different vendors are integrated out of the box, and we’re glad to see CloudHealth and ParkMyCloud making this a reality.”

Here’s what a typical use case might look like if you’re a user of both products:

CloudHealth and ParkMyCloud now integrated for better cloud cost governance

  • You log in to your CloudHealth account and take a quick look at your AWS dashboard.
  • You navigate to Pulse -> HealthCheck to find all possible optimizations in your environment.
  • On the list, you see ParkMyCloud, indicating that you have savings potential.
  • You click that to check out your list of EC2 instances, and find a few with a ParkMyCloud icon to show they’re recommended to park.
    • What does this mean? ParkMyCloud has analyzed your resource utilization patterns and automatically created an optimized on/off schedule that can save you money. You just need to apply it.
  • You click the ParkMyCloud icon, which takes you to your ParkMyCloud recommendations screen to take action. You can click to accept the parking schedule as is, or modify it (including the option to be more conservative or more aggressive).
  • You go back to check out your CloudHealth reports, which include the data from your ParkMyCloud savings – all of which can be broken down by environment, app, team, and more, for better visibility and cloud cost governance.

The integration is especially exciting as it continues the momentum in the multi-cloud management space kicked off by last week’s news that VMware will acquire CloudHealth to provide multi-cloud operations at a global scale — congrats to the whole team.

Learn more about the ParkMyCloud/CloudHealth integration and partnership on this page. Interested in seeing a demo of this cloud cost governance solution? Schedule a demo here.


Press Release: ParkMyCloud and CloudHealth Technologies Announce Integration for Optimized Cloud Cost Management

ParkMyCloud, Leader in Automated Cost Optimization, and CloudHealth Technologies, Leader in Hybrid Cloud Governance, Integrate to Provide End-to-End Multi-Cloud Cost Control

September 6, 2018 (Dulles, VA) – ParkMyCloud, the leading enterprise platform for continuous cost control in public cloud, and CloudHealth Technologies, trusted cloud management platform provider, today announced that they have furthered their partnership with a technology integration that provides customers with end-to-end multi-cloud cost control and visibility.

More than $12.9 billion will be wasted on unused cloud resources this year – which means public cloud users have an immediate need for automated savings and governance. Customers leveraging ParkMyCloud and CloudHealth’s integrated solution are empowered to reduce this cloud waste – in fact, customers already using both platforms currently save an average of more than $25,000 monthly on their cloud bills.

Our joint customers experience a seamless, integrated solution, including:

  • ParkMyCloud’s SmartParking™ recommendations to optimize resource on/off time, which can be actioned for savings in a matter of clicks, now manageable through the CloudHealth platform
  • CloudHealth’s recommendations to optimize public and private cloud resources with complete security
  • Analytics and reporting on spending and savings by environment, department, application, resource and more, providing visibility to make smarter business decisions.

“What we need to manage and optimize our cloud environments is cost control, user governance, and detailed reporting,” said Melanie Metcalfe, Director of Project Support at Foster Moore. “It makes our cloud operations simpler and easier when solutions from different vendors are integrated out of the box, and we’re glad to see CloudHealth and ParkMyCloud making this a reality.”

“We’re seeing a trend of more and more companies using multiple clouds,” said ParkMyCloud CEO Jay Chapel. “What single- and multi-cloud customers have in common are skyrocketing bills, and they don’t have the time or resources to control these costs. We solve this problem by providing an automated cost control solution. By combining this with CloudHealth’s visibility and governance, customers can achieve end-to-end cloud cost optimization and governance.”

This announcement continues the momentum in the multi-cloud management space kicked off by last week’s news that VMware will acquire CloudHealth to provide multi-cloud operations at a global scale.

ParkMyCloud and CloudHealth first announced a business partnership earlier this year. To learn more about how the two companies are empowering customers, together, visit https://www.cloudhealthtech.com/partners/integration-partners/parkmycloud.

About ParkMyCloud

ParkMyCloud is a SaaS platform that automatically identifies and eliminates public cloud resource waste, reducing spending by 65% or more — think “Nest for the cloud.” AWS, Azure, Google Cloud, and Alibaba Cloud users such as McDonald’s, Sysco, Unilever, Fox, and Sage Software have used ParkMyCloud to cut their cloud spending by millions of dollars annually. ParkMyCloud helps companies like these optimize and govern cloud usage by integrating cost control into their DevOps processes. For more information, visit https://www.parkmycloud.com.

Contact

Katy Stalcup

kstalcup@parkmycloud.com

(571) 334-3291

About CloudHealth Technologies

CloudHealth Technologies provides the world’s most trusted software platform for accelerating business transformation in the cloud. More than 3,500 organizations globally rely on CloudHealth to manage over $5B in combined cloud spend, based on the platform’s ability to easily manage cost, ensure security compliance, improve governance and automate actions across multi-cloud environments. Known for offering the highest levels of data integrity throughout an organization’s entire cloud journey, CloudHealth is the platform of choice for leading enterprises and service providers, such as Pinterest, Yelp, Dow Jones, Zendesk, Skyscanner and SHI. With offices around the globe, the company is backed by Kleiner Perkins, Meritech, Sapphire Ventures, Scale Venture Partners, .406 Ventures and Sigma Prime Ventures.

For more information, visit us at www.cloudhealthtech.com or follow us @cloudhealthtech.


The Cloud Sizing Epidemic: Average Usage Only 2%

The next plain on the cost optimization frontier for ParkMyCloud is cloud sizing. We have been working on product features around resource sizing that will deliver greater automation in the management of cloud infrastructure. A key part of this effort has involved analysis of cloud usage patterns across our entire user base. We’ve identified some interesting patterns and correlations in cloud sizing and usage.

vCPU Utilization Patterns: Lower than Expected

One data point that caught our attention was vCPU metric data, specifically the very low average (and peak) utilization we see in our users’ infrastructure. We know anecdotally that a large proportion of what users manage in our platform consists of non-production instances used for development, staging, testing, and data analytics workloads, many of which do not need to run 24/7/365. But even bearing this in mind, we see surprisingly low vCPU utilization. Based on our most recent analysis of instances from across the four public cloud providers we support, some 50% of instances had an average vCPU utilization of only 2% and a peak of 55%. Even at the 75th percentile, average utilization was only 7%, albeit with a peak of 98%.
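
If you’d like to run a similar check against your own fleet, here’s a minimal sketch (not our actual analysis pipeline) that pulls two weeks of CPUUtilization data for a single instance from CloudWatch using boto3 – the instance ID and region are placeholders:

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

def cpu_stats(instance_id):
    """Return (average, peak) CPU utilization over the last 14 days."""
    end = datetime.now(timezone.utc)
    resp = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=end - timedelta(days=14),
        EndTime=end,
        Period=3600,                      # one datapoint per hour
        Statistics=["Average", "Maximum"],
    )
    points = resp["Datapoints"]
    if not points:
        return None
    avg = sum(p["Average"] for p in points) / len(points)
    peak = max(p["Maximum"] for p in points)
    return avg, peak

print(cpu_stats("i-0123456789abcdef0"))  # placeholder instance ID
```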

What leads to these cloud sizing decisions?

Of course, when selecting instance sizes and types, vCPU is not the only consideration. To make an accurate assessment of the match between workload and instance type, there are several data points to consider, including memory, network, disk, etc. We have no visibility into the specific workloads on these instances and why they were chosen, but we can make some educated guesses about why this systematic overprovisioning of instances is occurring.

A few potential reasons include:

  • A need to provision instances with larger vCPUs in order to access instances with the required memory
  • A need to provision larger storage-optimized instances where the focus is high data IOPS
  • Using some other ‘rule of thumb’ when provisioning, such as the not-so-tried-and-tested ‘determine what I think I need, then double it’ rule.

Clearly, there are a number of options which drive the performance and cost of cloud instances (VMs), including the number of processor cores, the amount of RAM, storage capacity, and storage performance. Focusing on just one of these factors is of limited use on its own – but the extreme underutilization we observe in this one key component is telling.

How much do cloud sizing choices matter?

Given the sheer volume of workloads moving to public cloud — some 80% of enterprises reported moving workloads to cloud in 2017 — it is critical to accurately determine, monitor, and then optimize your compute resources. If you think there’s a problem with improper cloud sizing in your environment, you may want to check out our recently published cloud waste checklist to identify other problem areas and take action to reduce costs.

There are many reasons why this “supersize me” approach to cloud sizing is occurring. We would be interested to get your take. How does your team determine compute requirements for cloud workloads? Are there other reasons why you might deliberately choose to oversize a resource? Comment below to let us know.


How to Create Your 2018 AWS re:Invent Schedule

It’s time to plan your 2018 AWS re:Invent schedule! This will be our team’s fourth re:Invent, so we’ve put together some tips for planning out your conference experience.

First up, if you have not yet registered for re:Invent, do that now! Tickets sold out last year, so don’t wait.

Choose Your Sessions in Advance

The key to a great AWS re:Invent schedule is to plan in advance. The essential part of this planning is to register for sessions in advance. There will be a session registration open date, which has not yet been announced for 2018. When that date is released, though, put it on your calendar and reserve some time for registration – it can be competitive and sessions fill up quickly. Last year, session registration opened on October 19, so expect a similar date this year.

What you can get started with today is reading through the re:Invent agenda and, especially, the immense event catalog. Note the sessions you’re interested in. Here are some tips to keep in mind:

  • Focus – what do you most hope to gain at re:Invent? You can sort sessions based on subject areas and industries – would a “focus path” help you gain more out of your experience?
  • Value of In-Person vs. Session Videos – Many sessions will be online afterward, so prioritize sessions with an element that is more valuable in person – that may be chalk talks, workshops, and others with interactive elements. You’ll be able to watch any sessions you missed and catch up on the information on others with videos. This can put you more at ease and let you have some fun while in Vegas.
  • Travel time – This won’t be the first or the last time you hear this, but it’s worth saying again: the re:Invent campus is big. HUGE. Plan your schedule accordingly, with as few travel periods up and down The Strip as possible. If there are multiple sessions you’re interested in at the same time, prioritize ones with the least travel time. You should also plan to arrive to sessions early.

Once dates, times, and locations have been announced for sessions, we recommend putting them into your calendar for a clean visual of your day, and reminders. Once it’s available, you’ll be able to view your AWS re:Invent schedule in the mobile app, along with maps and more.

Set Aside Time for the Expo Hall

Make sure you plan on time to visit the expo hall! Actually, there are now two expos – the main one at The Venetian and another at the Aria.

The Welcome Reception from 4-7 PM on Monday is a great time to visit the expo and kick off your re:Invent experience with food, drinks, and giveaways. However, it will be crowded. You’ll want to come back again later in the week to check out vendor products and services, chat with vendors whose products you already use, get swag, and enter drawings. The expo is open from 8 AM – 6 PM Tuesday, 10 AM – 6 PM Wednesday, and 10 AM – 4 PM Thursday.

You won’t be disappointed by the swag. Just search #reinventswag for examples — sponsors go all out. By the way, if you’re aiming to maximize swag, definitely stop by after lunch on Thursday. Sponsors will practically beg you to take stuff off their hands so they don’t have to ship it home. You can grab toys, stickers, and keychains for your kids, or build an entire wardrobe of t-shirts and socks for yourself.

And of course, stop by and visit ParkMyCloud at the Venetian expo, booth #1709! Mention this post and we’ll hook you up with some secret bonus swag.

(Also, what secret bonus swag would you want? Asking for a friend…)

Activities and Parties

Round out your Vegas experience with some partying! The great thing about a conference like this is that you can often drink your way through for free, courtesy of vendors with bigger marketing budgets than mine. Outside of Tuesday’s pub crawl, many parties require you to register ahead of time, so keep an eye on your email for invitations. You’ll want to bookmark this list of 2018 re:Invent parties. As of this writing, it’s a bit sparse, but check out last year’s party list for an idea of the multitude of options to come.

Obviously, you don’t want to miss re:Play, the centerpiece of the conference (you know, besides the keynotes). More free food, drink, an EDM concert, retro arcade, laser escape room, drone obstacle course, climbing wall, dodgeball, bounce castle, archery tag, and/or whatever else they come up with for this year.

Or venture out beyond the conference hall walls and try your luck or catch a show – it’s hard to be bored in Vegas.

 

Do you have any other tips for planning the perfect AWS re:Invent schedule? Let us know in the comments. Cheers, and see you there!

 

More on re:Invent: 2017 recap.


Terraform vs. CloudFormation – Infrastructure Deployment Comparison

In the world of infrastructure as code, the biggest divide seems to come in the war between HashiCorp’s Terraform and AWS’s CloudFormation. Both tools can help you deploy new cloud infrastructure in a repeatable way, but they have some pretty big differences that can mean the difference between a smooth rollout and a never-ending battle with your tooling. Let’s look at some of the similarities and some of the differences between the two.

Common Traits

While the tools have some unique features, they also share some common aspects. In general, both CloudFormation and Terraform help you provision new AWS resources from a text file. This means you can iterate and manage the entire infrastructure stack the same as you would any other piece of code. Both tools are also declarative, which means you define what you want the end goal to be, rather than saying how to get there (as you would with tools like Chef or Puppet). This isn’t necessarily a good or bad thing, but is good to know if you’re used to other config management tools.

Unique Characteristics of CloudFormation

One of the biggest benefits of using CloudFormation is that it is an AWS product, which means it has tighter tie-ins to other AWS services. This can be a huge benefit if you’re all-in on AWS products and services, as this can help you maximize your cost-effectiveness and efficiency within the AWS ecosystem. CloudFormation also makes use of either YAML or JSON as the format for your code, which might be familiar to those with dev experience. Along the same lines, each change to your infrastructure is a changeset from the previous one, so devs will feel right at home.

There are some additional tools available around CloudFormation, such as:

  • Stacker – for handling multiple CloudFormation stacks simultaneously
  • Troposphere – if you prefer Python for creating your configuration files (see the sketch after this list)
  • StackMaster – if you prefer Ruby
  • Sceptre – for organizing CloudFormation stacks into environments
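
As a quick taste of that Python option, here’s a minimal Troposphere sketch that renders a one-instance CloudFormation template – the AMI ID is a placeholder you’d substitute with an image valid in your region:

```python
from troposphere import Output, Ref, Tags, Template
from troposphere.ec2 import Instance

template = Template()
template.set_description("Minimal example stack: one EC2 instance")

# Placeholder AMI ID -- substitute an image valid in your region.
instance = template.add_resource(
    Instance(
        "DevServer",
        ImageId="ami-0123456789abcdef0",
        InstanceType="t3.micro",
        Tags=Tags(environment="dev"),
    )
)

template.add_output(Output("InstanceId", Value=Ref(instance)))

# Emit JSON ready for the CloudFormation console, CLI, or Stacker.
print(template.to_json())
```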

Unique Characteristics of Terraform

Just as being an AWS product is a benefit of CloudFormation if you’re in AWS, the fact that Terraform isn’t affiliated with any particular cloud makes it much more suited for multi-cloud and hybrid-cloud environments, and of course, for non-AWS clouds. There are Terraform modules for almost any major cloud or hypervisor in the Terraform Registry, and you can even write your own modules if necessary.

Terraform treats all deployed infrastructure as a state, with any subsequent changes to any particular piece being an update to the state (unlike the changesets mentioned above for CloudFormation). This means you can keep the state and share it, so others know what your stack should look like, and also means you can see what would change if you modify part of your configuration before you actually decide to do it. The Terraform configuration files are written in HCL (Hashicorp Configuration Language), which some consider easier to read than JSON or YAML.

More on Terraform: How to Use Terraform Provisioning and ParkMyCloud to Manage AWS

Terraform vs. CloudFormation: Which to choose?

The good news is that if you’re trying to decide between Terraform vs. CloudFormation, you can’t really go wrong with either. Both tools have large communities with lots of support and examples, and both can really get the job done in terms of creating stacks of resources in your environments. They are both also free, with CloudFormation having no costs (aside from the infrastructure that gets created) and Terraform being open-source while offering a paid Enterprise version for additional collaboration and governance options. Each has their pros and cons, but using either one will help you scale up your infrastructure and manage it all as code.


5 Things to Do When You Outgrow the AWS Free Tier

The AWS free tier is a great way to get started using Amazon Web Services — it can be a great boost to individuals, startups, and small businesses. In fact, the AWS free tier was essential to getting ParkMyCloud off the ground when we launched. But of course, this program has limits on what you can use without being charged.

The AWS free tier is designed to give you the AWS experience without the cost, but that also comes with limitations on instance types, storage, hours, and how often you can call operations each month. Of course, all good things must come to an end. If you’ve outgrown the free tier option and are ready to experience the full benefits of AWS, there are a few things you can do to make sure you’re getting the most out of being a paying AWS customer.

#1 Set spending limits

The first thing to consider when your 12 months on the AWS free tier expire is the most obvious difference – cost versus no cost. You’re paying for cloud services now, so ensure that you don’t pay more than you intend to.

Use AWS Budgets to create custom cost and usage budgets that notify you when you exceed (or are about to exceed) your budgeted amount. Track budgets by the month, quarter, or year, with custom start and end dates. You can also track costs by services, account, tags, and more, receiving alerts directly to your email or through the Simple Notification Service.

With AWS Budgets, you can also set custom utilization targets for reserved instances including Amazon EC2 instances, Amazon RDS, Amazon Redshift, and Amazon ElastiCache, receiving alerts whenever your usage drops below your set utilization target. To get started with creating and tracking budgets, start from the AWS Budgets dashboard or the Budgets API.
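
To make that concrete, here’s a rough sketch of creating a monthly cost budget with an 80% email alert through the Budgets API via boto3 – the account ID, amount, and email address are placeholders:

```python
import boto3

budgets = boto3.client("budgets")

# Placeholder account ID, budget amount, and notification email.
budgets.create_budget(
    AccountId="123456789012",
    Budget={
        "BudgetName": "monthly-cloud-budget",
        "BudgetLimit": {"Amount": "1000", "Unit": "USD"},
        "TimeUnit": "MONTHLY",
        "BudgetType": "COST",
    },
    NotificationsWithSubscribers=[{
        "Notification": {
            "NotificationType": "ACTUAL",
            "ComparisonOperator": "GREATER_THAN",
            "Threshold": 80.0,  # alert at 80% of the budgeted amount
            "ThresholdType": "PERCENTAGE",
        },
        "Subscribers": [
            {"SubscriptionType": "EMAIL", "Address": "you@example.com"}
        ],
    }],
)
```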

#2 Optimize resource usage

Next, you need to ensure that budget only goes toward resources you actually need – so cost optimization should be a top priority. You might be overpaying by leaving instances running during non-production times, when you don’t need them. Scheduling stop/start times with automation is an easy way to integrate cost control outside of the AWS free tier.
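
As a minimal sketch of that idea, the script below (which could run on a cron schedule or as a Lambda function) stops or starts instances carrying an assumed environment=dev tag:

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

def set_dev_instances(action):
    """Stop or start all instances tagged environment=dev."""
    state = "running" if action == "stop" else "stopped"
    resp = ec2.describe_instances(Filters=[
        {"Name": "tag:environment", "Values": ["dev"]},
        {"Name": "instance-state-name", "Values": [state]},
    ])
    ids = [i["InstanceId"]
           for r in resp["Reservations"] for i in r["Instances"]]
    if not ids:
        return
    if action == "stop":
        ec2.stop_instances(InstanceIds=ids)   # run this at 7:00 PM
    else:
        ec2.start_instances(InstanceIds=ids)  # run this at 7:00 AM

set_dev_instances("stop")
```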

#3 Set sizing limits

Another facet of cost optimization is right sizing. Besides making sure your instances are turned off when not in use, you should also make a practice of only using as much as you need at a given time, and that’s where right sizing comes into play. Size your workloads according to performance and capacity requirements, both initially and on an ongoing basis, to ensure that resources do not end up underused or idle. AWS suggests that you use CloudWatch metrics to get a full view of your environment and make a habit of right sizing once per month, monitoring costs and keeping track of your billing and usage over time.
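
When metrics show an instance is consistently oversized, the resize itself on EC2 is a stop/modify/start cycle. Here’s a rough boto3 sketch – the instance ID and target type are placeholders, and this assumes an EBS-backed instance:

```python
import boto3

ec2 = boto3.client("ec2")
instance_id = "i-0123456789abcdef0"  # placeholder

# Resizing an EBS-backed instance: stop it, change the type, start it.
ec2.stop_instances(InstanceIds=[instance_id])
ec2.get_waiter("instance_stopped").wait(InstanceIds=[instance_id])

ec2.modify_instance_attribute(
    InstanceId=instance_id,
    InstanceType={"Value": "t3.small"},  # downsize target, chosen from metrics
)

ec2.start_instances(InstanceIds=[instance_id])
```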

See a full list of cost traps to avoid in The Cloud Waste Checklist.  

#4 Plan your tagging structure

As your infrastructure grows, it’s important to manage your AWS resources with an effective tagging strategy. Tagging gives you the ability to attach custom metadata to instances, images, and more. Resources can be categorized by owner, purpose, or environment, helping you stay organized, improve visibility, and keep costs in check.

A good tagging strategy gives you a more accurate model for chargeback and showback and better insight into your usage and spend, but it’s up to you to enforce tagging quality. Soft enforcement notifies users when policies are not followed, while hard enforcement automatically removes resources that are not tagged to company standards. According to AWS, organizations that use hard enforcement have an easier time maintaining tagging quality. A minimal sketch of what that enforcement might look like follows.
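
This sketch flags EC2 instances missing a required tag and, under hard enforcement, stops them (stopping rather than terminating, as a gentler variant) – the required key is an example:

```python
import boto3

REQUIRED_TAG = "cost-center"  # example required key
HARD_ENFORCEMENT = False      # True = stop non-compliant instances

ec2 = boto3.client("ec2")
resp = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}])

noncompliant = []
for reservation in resp["Reservations"]:
    for inst in reservation["Instances"]:
        tags = {t["Key"] for t in inst.get("Tags", [])}
        if REQUIRED_TAG not in tags:
            noncompliant.append(inst["InstanceId"])

if noncompliant:
    # Soft enforcement: report the offenders.
    print(f"Missing '{REQUIRED_TAG}' tag: {noncompliant}")
    if HARD_ENFORCEMENT:
        ec2.stop_instances(InstanceIds=noncompliant)
```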

Learn more about tagging best practices.

#5 Establish governance

Scheduling, right sizing, budget limits, and tagging are all methods of keeping costs optimized after you switch from the AWS free tier to a paid, full-service option. But what do all of these practices have in common? Governance. Clear policies and processes to keep usage, capacity requirements, and billing in check are all part of cloud and cost management, and should remain an ongoing priority as you continue using AWS or any cloud service provider.

For more information on how to plan governance after outgrowing the AWS free tier option, learn about how one software company automates governance.


How to Run Alibaba Instances in China from Another Country

Alibaba Cloud is growing at an amazing rate, recently claiming to have overtaken both Google and IBM as the #3 public cloud provider globally, and certainly the #1 provider in China. Many sites and services hosted outside China are accessible from within China, but can suffer high latency and potentially lost functionality if their web interface requires interaction with blocked social media systems. As such, it is no surprise that a number of our (non-Chinese) customers have expressed interest in actually running virtual machine Alibaba instances in China. In this blog we are going to outline the process…and give an alternate plan.

General Process to Run Alibaba Instances in China

The steps to roll out a deployment on Alibaba in mainland China are relatively clear:

  1. Establish a “legal commercial entity” in Mainland China.
  2. Select what services you want to run on Alibaba Cloud.
  3. Apply for Internet Content Provider (ICP) certification.
  4. Launch.

The first three steps are described in more detail below.

Establish a Legal Commercial Entity

Or putting it another way – you need to have an office in China. This can range from an actual office with your own employees, to a Joint Venture, which is a legal LLC between your organization and an established Chinese company. If your service is more informational in nature and is not actually selling anything via the service, then this can be relatively easy, taking only a couple weeks (at least for the legal side), though you will still need to find a Joint Venture partner and make the deal worth their while financially. For commerce or trade-related services, the complexity, time requirements, and costs start going up significantly.

What to run on Alibaba Cloud

There is a decision-point here, as there is one set of rules for Alibaba-hosted web/app servers, and additional rules for everything else. Base virtual machines, databases, and other such core IT building blocks require the ICP registration described below, plus “real-name registration,” where a passport is needed to confirm the identity of whoever is purchasing the resource. If all you need is a web server, then you can skip this step. In either case, some of the filing requirements involve having a server and/or DNS record prepared in order to complete the later steps. A web site does not need to be completely finished until launch, but a placeholder may be needed.

Internet Content Provider (ICP) certification

There are two flavors of ICP certification:

  • A “simple” ICP Filing – which is the bare minimum needed for informational websites that are not directly generating revenue.
  • ICP Commercial Filing – This starts with getting an approved ICP Filing, and then also includes a Commercial License that must be obtained from a province/municipality in China. In some cases, this appears to be related to which Alibaba region you are using, and even the physical location of your public IP address.

Many references recommend finding an experienced consultant to guide you through these processes, and it is easy to see why!

OK…WAY too much work. What is Plan B?

The other way to run Alibaba instances in China is to host your site or services in Hong Kong. All of the rules described above apply to “Mainland China”, which does not include Hong Kong. Taiwan is also not included in Mainland China, but Hong Kong has the advantage of being better connected to the rest of China. If the main problem you are trying to solve is to reduce latency to your site for China-based customers, Hong Kong is the closest you can get without actually being there, and Alibaba appears to do a pretty good job optimizing the Hong Kong experience. No local office or legal filings required!

Once you are all set up: Optimize your Costs!

After your instances are set up, make sure you’re optimizing Alibaba costs. Our Mainland China-based customers using Alibaba have confirmed that ParkMyCloud is able to access the Alibaba APIs from our US-based servers – so you can go ahead and try it out.


Tagging Best Practices for Automated Cloud Governance


Earlier this week we discussed ways to improve cloud automation through tagging. Today, I want to extend the conversation to look at how one ParkMyCloud user is applying tagging best practices to improve their cloud governance.

The company we talked to — they’re in media, so let’s call them MediaCorp — has about 10,000 employees, which means the Cloud Engineering team has several hundred cloud users to manage, with a combined 100+ AWS accounts and more than 5,000 active AWS resources. The only way they can maintain security and cost control in a cloud environment of this magnitude is through automated governance. Here’s how they do it.

Tagging Best Practices #1: Always Tag

MediaCorp has a strict policy: every AWS resource must have the same set of five tags attached to it:

  • team — essential to establishing ownership of the resource, both for maintenance and for billing
  • environment — knowing whether the resource is for production, staging, or QA has implications for on/off schedules
  • application — MediaCorp uses this as a trigger for Chef Cookbooks, but can also apply to billing
  • expiration date — Any non-production resource has a stated expiration date to prevent orphaned resources
  • cost center — The finance department has internal billing codes for all IT resources

How does MediaCorp ensure that all resources are tagged?

Tagging Best Practices #2: Automated Compliance

The key is to use automated rules to enforce that every resource has the five required tags — this is where ParkMyCloud’s policy engine comes into play. MediaCorp has a set of policies set up to check for the five tags. If a resource is missing any, the resource is immediately put on an “always parked” schedule and moved to a team (a way to group instances in ParkMyCloud) specifically for mistagged resources.

When this happens, the Cloud Engineering team gets an email and a Slack notification, so they can track down the creator of the offending resource and correct the process that created it.
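
MediaCorp gets these notifications out of the box, but if you wanted to wire up something similar yourself, a Slack incoming webhook is all it takes. Here’s a minimal sketch, with a placeholder webhook URL and resource ID:

```python
import json
import urllib.request

# Placeholder webhook URL -- create one in your Slack workspace.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/T000/B000/XXXX"

def notify_mistagged(resource_id, missing_tags):
    """Post an alert to the team channel for a mistagged resource."""
    message = {
        "text": (f":warning: {resource_id} is missing required tags "
                 f"{missing_tags} and has been parked. Please fix its tags.")
    }
    req = urllib.request.Request(
        SLACK_WEBHOOK_URL,
        data=json.dumps(message).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)

notify_mistagged("i-0123456789abcdef0", ["team", "expiration date"])
```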

Tagging Best Practices #3: Optimize Workflows

Now the tags themselves come into play. MediaCorp uses their five-tag system for three main purposes:

Configuration management: as mentioned above, they use tags as the trigger for Chef cookbooks – and of course the same approach applies to Puppet modules or Ansible playbooks.

CI/CD: MediaCorp uses Jenkins to provision cloud resources, so they use tags to associate build and deployment servers with their corresponding repository and build number, for both automated and manual development tasks.

Cost control: the “environment” tag determines what parking schedule is applied to each resource. Production resources run 24×7, of course, while “dev” or “test” resources are put on a schedule to park 7:00 PM – 7:00 AM and on weekends. (Users can always log in to override these schedules if needed.)

Conclusion: Tagging is Worth the Effort

It may at first seem unnecessarily harsh to automatically park any resource that doesn’t have proper tags applied, but this process is what allows MediaCorp to keep a well-governed, cost-controlled infrastructure. You can always adapt their use case to your own needs, for example by simply moving resources to another team and sending a notification that action is needed, without changing the state or schedule of the resource.

Either way, with a rigorous application of tagging best practices in place, you can automate governance and improve your workflows.


8 Ways to Improve Cloud Automation Through Tagging

Since the beginning of public cloud, users have been attempting to improve cloud automation. This can be driven by laziness, scale, organizational mandate, or some combination of those. Since the rise of DevOps practices and principles, this “automate everything” approach has become even more popular, as it’s one of the main pillars of DevOps. One of the ways you can help sort, filter, and automate your cloud environment is to utilize tags on your cloud resources.

Tagging Methodologies

In the cloud infrastructure world, tags are labels or identifiers that are attached to your instances. This is a way for you to provide custom metadata to accompany the existing metadata, such as instance family and size, region, VPC, IP information, and more. Tags are created as key/value pairs, although the value is optional if you just want to use the key. For instance, your key could be “Department” with a value of “Finance”, or you could have a key of just “Finance”.
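
On AWS, for instance, attaching the “Department: Finance” pair from the example above to an existing instance takes a few lines of boto3 (the instance ID is a placeholder):

```python
import boto3

ec2 = boto3.client("ec2")

# Attach key/value tags to an existing instance (placeholder ID).
ec2.create_tags(
    Resources=["i-0123456789abcdef0"],
    Tags=[
        {"Key": "Department", "Value": "Finance"},
        {"Key": "environment", "Value": "staging"},
    ],
)
```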

There are 4 general tag categories, as laid out in the best practices from AWS:

  1. Technical – This often includes things like the application that is running on the resource, what cluster it belongs to, or which environment it’s running in (such as “dev” or “staging”).
  2. Automation – These tags are read by automated software, and can include things like dates for when to decommission the resource, a flag for opting in or out of a service, or what version of a script or package to install.
  3. Business and billing – Companies with lots of resources need to track which department or user owns a resource for billing purposes, which customer an instance is serving, or some sort of tracking ID or internal asset management tag.
  4. Security – Tags can help with compliance and information security, as well as with access controls for users and roles who may be listing and accessing resources.

In general, more tags are better, even if you aren’t actively using those tags just yet. Planning ahead for ways you might search through or group instances and resources can help save headaches down the line. You should also ensure that you standardize your tags by being consistent with the capitalization/spelling and limiting the scope of both the keys and the values for those keys. Using management and provisioning tools like Terraform or Ansible can automate and maintain your tagging standards.

Automation Methodologies

Once you’ve got your tagging system implemented and your resources labelled properly, you can really dive into your cloud automation strategy. Many different automation tools can read these tags and utilize them, but here are a few ideas to help make your life better:

  1. Configuration Management – Tools like Chef, Puppet, Ansible, and Salt are often used for installing and configuring systems once they are provisioned. This can determine which settings to change or configuration bundles to run on the instances.
  2. Cost Control – this is the automation area we focus on at ParkMyCloud – our platform’s automated policies can read the tags on servers, scale groups, and databases to determine which schedule to apply and which team to assign the resource to, among other actions.
  3. CI/CD – If your build tool (like Jenkins or Bamboo) is set to provision or utilize cloud resources for the build or deployment, you can use tags for the build number or code repository to help with the continuous integration or continuous delivery.
  4. Cloud Account Clean-up – Scripts and tools that help keep your account tidy can use tags that set an end date for the resource as a way to ensure that only necessary systems are around long-term (see the sketch after this list). You can also take steps to automatically shut down or terminate instances that aren’t properly tagged, so you know your resources won’t be orphaned.
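
As an example of that last idea, here’s a hedged sketch of a clean-up job that stops EC2 instances past their end date – the tag key and ISO date format are assumptions:

```python
import boto3
from datetime import date

EXPIRATION_TAG = "expiration"  # assumed key, with ISO dates like 2018-12-31

ec2 = boto3.client("ec2")
resp = ec2.describe_instances(
    Filters=[{"Name": "tag-key", "Values": [EXPIRATION_TAG]}])

expired = []
for reservation in resp["Reservations"]:
    for inst in reservation["Instances"]:
        tags = {t["Key"]: t["Value"] for t in inst.get("Tags", [])}
        # Collect instances whose expiration date has already passed.
        if date.fromisoformat(tags[EXPIRATION_TAG]) < date.today():
            expired.append(inst["InstanceId"])

if expired:
    ec2.stop_instances(InstanceIds=expired)  # or terminate, per your policy
```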

Conclusion: Tagging Will Improve Your Cloud Automation

As your cloud use grows, implementing cloud automation will be a crucial piece of your infrastructure management. Utilizing tags not only helps with human sorting and searching, but also with automated tasks and scripts. If you’re not already tagging your systems, having a strategy on the tagging and the automation can save you both time and money.


Google Cloud Machine Types Comparison

Google Cloud Platform offers a range of machine types optimized to meet various needs. Machine types provide virtual hardware resources that vary by virtual CPU (vCPU), disk capability, and memory size, giving you a breadth of options. But with so much to choose from, finding the right Google Cloud machine type for your workload can get complicated.

In the spirit of our recent blog on EC2 instance types, we’re doing an overview of each Google Cloud machine type. This image shows the basics of what we will cover, but remember that you’ll want to investigate further to find the right machine type for your particular needs.

Predefined Machine Types

Predefined machine types are a fixed pool of resources managed by Google Compute Engine. They come in five “classes” or categories:

Standard (n1-standard)

Standard machine types work well for workloads that require a balance of CPU and memory. The n1-standard family of machine types comes with 3.75 GB of memory per vCPU. There are 8 total in the series, ranging from 3.75 to 360 GB of memory, corresponding with 1 to 96 vCPUs.

High-Memory (n1-highmem)

High-memory machine types work for just what you’d think they would – tasks that require more system memory as opposed to vCPUs. The n1-highmem family comes with 6.50 GB of memory per vCPU, offering 7 total varieties ranging from 13 to 624 GB of memory, corresponding with 2 to 96 vCPUs.

High-CPU (n1-highcpu)

If you’re looking for the most compute power, the n1-highcpu series is the way to go, offering 0.90 GB of memory per vCPU. There are 7 options within the high-CPU machine type family, ranging from 1.80 to 86.4 GB of memory and 2 to 96 vCPUs.

Shared-Core (f1-micro)

Shared-core machine types are cost-effective and work well with small or batch workloads that only need to run for a short time. They provide a single vCPU that runs on one hyper-thread of the host CPU running your instance.

The f1-micro machine type family provides bursts of physical CPU for brief periods of time in moments of need. They’re like spikes in compute power that can only happen in the event that your workload requires more CPU than you had allocated. These bursts are only possible periodically and are not permanent.

Memory Optimized (n1-ultramem or n1-megamem)

For more intense workloads that require high memory but also more vCPUs than you’d get with the high-memory machine types, memory-optimized machine types are ideal. With more than 14 GB of memory per vCPU, Google suggests that you choose memory-optimized machine types for in-memory databases and analytics, genomics analysis, SQL analysis services, and more. These machine types are available based on zone and region.
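
If you want to compare these families side by side for your own zone, here’s a quick sketch using the google-cloud-compute Python client – the project and zone are placeholders:

```python
from google.cloud import compute_v1

# Placeholder project and zone.
client = compute_v1.MachineTypesClient()
machine_types = client.list(project="my-project", zone="us-central1-a")

# Print each n1-family type with its memory-per-vCPU ratio.
for mt in machine_types:
    if mt.name.startswith("n1-"):
        gb = mt.memory_mb / 1024
        print(f"{mt.name}: {mt.guest_cpus} vCPUs, "
              f"{gb:.2f} GB ({gb / mt.guest_cpus:.2f} GB/vCPU)")
```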

Custom Machine Types

Predefined machine types vary to meet needs based on high memory, high vCPU, a balance of both, or both high memory and high vCPU. If that’s not enough to meet your needs, Google has one more option for you – custom machine types. With custom machine types, you can define exactly how many vCPUs you need and what amount of system memory for the instance. They’re a great fit if your workloads don’t quite match up with any of the available predefined types, or if you need more compute power or more memory, but don’t want to get bogged down by upgrades you don’t need that come with predefined types.

About GPUs and machine types

On top of your virtual machine instances, Google also offers graphics processing units (GPUs) that can be used to boost workloads for processes like machine learning and data processing. GPUs typically can only be attached to predefined machine types, but in some cases can also be paired with custom machine types, depending on zone availability. In general, the more GPUs attached to your instance, the more vCPUs and system memory are available to you.

What Google Cloud machine type should you use?

Between the predefined options and the ability to create custom Google Cloud machine types, Google offers enough variety for almost any application. Cost matters, but with the new resource-based pricing structure, the actual machine you choose matters less when it comes to pricing.

With good insight into your workload, usage trends, and business needs, you have the resources available to find the machine type that’s right for you.


Should Your Company Adopt Google’s Site Reliability Engineering Approach?

Over the past year or so, we have spoken with quite a few prospective users who have defined their responsibilities as site reliability engineering (SRE). If, like me, you’re not familiar with the term, I’ll save you the Google search. SRE is a discipline that incorporates aspects of software engineering and applies them to IT operations problems. Practitioners aim to create ultra-scalable and highly reliable software systems. According to Ben Treynor, founder of Google’s Site Reliability Team, SRE is “what happens when a software engineer is tasked with what used to be called operations.” Its origins trace back to Google in 2003, when Treynor was hired to lead software engineers in running a production environment.

The site reliability engineering footprint at Google is now more than 1,500 engineers. Many products have small to medium-sized SRE teams supporting them, though not all products do. The SRE processes honed at Google over the years are now being adopted by other, mainly large-scale, companies, including ServiceNow, Microsoft, Apple, Twitter, Facebook, Dropbox, Amazon, Target, IBM, Xero, Oracle, Zalando, Acquia, and GitHub.

The people we talk to on a daily basis are typically charged with operational management of their company’s cloud infrastructure, and thus governing and controlling costs (that’s where we come in). I got to wondering: how is this approach different for, say, a site reliability engineer vs. someone who labels himself as “DevOps”?

How Does Site Reliability Engineering Compare to DevOps?

In simple terms, the difference between SREs and DevOps seems clear based on our conversations. SREs are engineers focused on production environments, while DevOps is a philosophy as well as a role. DevOps folks are definitely less concerned with production vs. non-production, and more concerned with overall cloud management and operations. Side note: DevOps was coined around 2008, so the SRE actually predates the DevOps engineer.

A site reliability engineer (SRE) will spend up to 50% of their time doing “ops”-related work such as handling issues, being on-call, and manual intervention. Since the software systems an SRE oversees are expected to be highly automated and self-healing, the SRE should spend the other 50% of their time on development tasks such as new features, scaling, or automation. The ideal SRE candidate is a highly skilled system administrator with knowledge of code and automation.

When I first encountered it, site reliability engineering just seemed like another buzzword to replace “IT” or “Ops”. As I read more, I came to understand that it’s more about the people and the process and less about the technology. There is rarely a mention of the underlying infrastructure or tools; the main requirement seems to be the desire to improve. With that, you can align your development and operations (funny, right – DevOps) around the discipline of SRE.

Should Your Company Implement a Site Reliability Engineering Approach?

So while all the hype is around implementing DevOps in your organization, should you really be adopting the idea of site reliability engineering? It certainly makes sense based on the name alone, as “site reliability” is synonymous with “business availability” in our modern internet-connected culture. Any downtime for your service or application means lost revenue and dissatisfied customers, which means the business takes a hit. Using site reliability engineering to keep things running smoothly, while employing DevOps principles to improve those smooth-running processes, seems to be the best combination to really empower your company.


Google Cloud Discounts Just Got Better For You with Resource-Based Pricing

Among the many announcements made at Google Cloud Next last week was a new option for Google Cloud discounts: resource-based pricing.

This new option, which Google will roll out in the fall, expands their idea of “pay per use pricing”. For resource types n1-standard, n1-highmem, and n1-highcpu, Google will no longer charge based on machine types. Instead, they will now aggregate across resources and charge based on the quantity of vCPU and GB of memory you use.

This new addition to the family of Google Cloud discounts will have its biggest effect on Sustained Use discounts.

Sustained Use Discounts are Even Better

We recently asked the question, do sustained use discounts really save you money? The answer was yes, although depending on the situation, perhaps not the maximum amount of money possible.

With the resource-based pricing change, Sustained Use Discounts will be calculated per region instead of per zone, so you can rack up “percentage of the month” usage – and therefore discounts – more quickly and easily. For example, if you have a single busy week in the month during which you run several VMs with varying amounts of vCPU, all of that vCPU usage is counted together before the sustained use amount is calculated, giving you the potential for a better-optimized discount.
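
To make the mechanics concrete, here’s a rough sketch of how sustained use pricing accrues, assuming the long-standing n1 tier structure (each successive quarter of the month billed at 100%, 80%, 60%, then 40% of the base rate) – now applied to usage aggregated across the region rather than a single machine:

```python
# Rough sketch of sustained use discount accrual for aggregated usage.
HOURS_IN_MONTH = 730  # GCP bills against a ~730-hour month

def sustained_use_cost(hours: float, hourly_rate: float) -> float:
    quarter = HOURS_IN_MONTH / 4
    cost = 0.0
    for multiplier in (1.0, 0.8, 0.6, 0.4):  # per-quarter billing multipliers
        in_tier = max(0.0, min(hours, quarter))
        cost += in_tier * hourly_rate * multiplier
        hours -= in_tier
    return cost

# A full month of aggregated vCPU usage nets the maximum ~30% discount:
print(sustained_use_cost(730, 0.38))  # ~194.18, vs. 277.40 at the base rate
```

The point of resource-based pricing is that those hours can now come from many short-lived VMs in the same region, not just one long-running machine.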

For some customers, the biggest impact of this change will be in Autoscaling Managed Instance Groups. In the old system, if a group of instances scaled up and down over time (especially daily), the new instances that were created and then shut down a short time later never had an opportunity to accumulate enough hours to reach a sustained use discount tier. In the new system, the aggregated use of these systems counts toward the sustained use, giving a much higher likelihood of getting the Sustained Use Discount.

Billing Simplicity (…Hopefully)

While this should make your bill lower, it may not make your bill “easier to understand” as Google claims. Since discounts will apply at a regional level, and there’s now yet another step happening behind the scenes to calculate your bill, some users may find it harder to predict their monthly bills. You will no longer see the machine types you’re using on your invoice, although you can still obtain them via the billing export to BigQuery. Keep this in mind, and be sure to dig into your first few invoices after the change to see how it affects your particular environment.

It’s All About Automation

One thing we appreciate about the change is that Google Cloud customers do not need to take any action to receive these discounts – it’s all done automatically. The same has always been true for Sustained Use Discounts, something that makes Google Cloud stand out from its immediate competition – neither AWS nor Azure offers something directly equivalent.

Google Cloud Discounts are Good for the Customer

Here’s what people are saying about the update: it shows flexibility as a priority, it’s pro-customer, and AWS would be wise to do the same.

We’re glad to see another addition to the Google Cloud discounts that goes directly toward improving the customer experience. It’s clear that GCP is focusing on a customer-first experience – which is good news for all of us.


AWS Reserved Instance Marketplace – Seller’s FAQ

As we continue to dive into AWS Reserved Instances, today we want to take a look at the AWS Reserved Instance Marketplace.

Reserved Instances are a great way to save money – unless they don’t get used, and you won’t really know until you get the bill. But just because you’re locked into that contract doesn’t mean your unused RIs have to be a total waste of money. AWS has given users a place to sell them – the Reserved Instance Marketplace.

Using the Reserved Instance Marketplace, you can list your reservation for other users to purchase. Of course, like any online marketplace, there’s no guarantee that you’ll actually sell, but at least you have a shot at getting some of your money back.

AWS has some solid documentation covering the ins and outs of buying and selling in the Reserved Instance Marketplace, but we decided to highlight answers to some of the questions we most commonly see about how to get started with selling unused RIs. Read our FAQ below.

Selling on the Reserved Instance Marketplace

AWS customers and third parties are free to use the marketplace to sell unused Standard RIs, regardless of term length or original pricing option.

When is it a good idea to sell unused RIs?

If you’re changing instance types (perhaps for rightsizing, or to better match the instance type to its load or application), moving regions, your business or capacity needs have changed, or you just don’t need that instance type anymore – use the marketplace.

How do I become a seller?

To register as a seller, you’ll need to provide bank account and tax information. Once you’ve completed registration, you’ll receive a confirmation email.

Are there any restrictions or limitations to what I can sell?

  • Once you’ve registered as a seller, you’re free to sell any EC2 Standard Reserved Instances as long as your term length has at least one month remaining.
  • Convertible instances cannot be sold in the marketplace.
  • You can sell Standard RIs regardless of the purchasing plan (No Upfront, Partial Upfront, or All Upfront), but in the case of All Upfront – you must have made the full payment before you can sell, and the reservation must be active for at least 30 days before listing. AWS also charges a 12% service fee for upfront pricing.
  • Pricing is flexible – the minimum sale price is $0.00.
  • You can’t modify or change a listing once it’s been made, but you can cancel it and create a new one.

What information does AWS share with buyers?

Per US regulations, buyers will be able to see your legal name on the buyer’s statement. If AWS Support is contacted regarding invoices or for tax purposes, the buyer may also receive your email address, ZIP code, and country so they can communicate with you directly.

How does selling work?

Once you list the RIs you want to sell in the marketplace, buyers will be able to see them. Instances are grouped by remainder of term length and hourly rate. The cheapest reservations are sold first, followed by the next cheapest, and so on until the buyer’s order is fulfilled. AWS handles the transaction and transfer of ownership. The instances are yours until they’re sold, and once you make a sale, you’ll go back to paying the on-demand rate whenever you use that instance type moving forward.

How do I list my RIs in the marketplace?

There are a few ways you can list your unused RIs in the AWS Reserved Instance Marketplace. You can sell them all at once, in parts, or by instance type, platform, and scope. You can also cancel a listing, but you won’t get anything back on portions that have already been sold. There are several routes for where and how to list your RIs: the AWS Management Console, the AWS CLI or Amazon EC2 API, and the Listing State in the My Listings tab of the Reserved Instances page.

How do I price my RIs in the marketplace?

When selling an RI, the only fee that you can decide on is the upfront fee – the one-time fee that the buyer is charged for purchasing your instance. Usage and recurring fees cannot be specified – the buyer will pay what was charged for the original purchase. The minimum sales price allowed is $0.00 and the maximum you can sell per year is $50,000 (although AWS can grant you permission to sell more on a case-by-case basis).

AWS also sets a default pricing schedule for your listed RIs: pricing decreases incrementally month to month, to account for the value of the RI decreasing over time. What you can do, however, is set upfront prices based on the point of sale for your RI (one price if it’s sold with 5 months remaining in the term, another with 3 months remaining, etc.).
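
For the API-inclined, here’s a hypothetical sketch of creating a listing with a step-down price schedule using boto3 – the reservation ID and prices are placeholders:

```python
# Sketch: list one Standard RI for sale with upfront prices that step
# down as fewer months remain on the term.
import uuid
import boto3

ec2 = boto3.client("ec2")
ec2.create_reserved_instances_listing(
    ReservedInstancesId="11111111-2222-3333-4444-555555555555",  # placeholder
    InstanceCount=1,
    PriceSchedules=[
        # Each entry sets the upfront price while <= Term months remain.
        {"Term": 5, "Price": 120.0, "CurrencyCode": "USD"},
        {"Term": 3, "Price": 70.0, "CurrencyCode": "USD"},
        {"Term": 1, "Price": 20.0, "CurrencyCode": "USD"},
    ],
    ClientToken=str(uuid.uuid4()),  # idempotency token
)
```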

What happens after I make a sale?

You’ll get an email notification whenever an RI sells, and a daily notification on any day there’s activity on your account, such as creating or selling a listing. Once the buyer pays AWS for your RIs, you’ll get an email about the sold reservation. AWS sends a wire transfer to the bank account you provided, typically 1-3 days from the date of sale, though you won’t receive funds until AWS has verified the account with your bank, which can take up to 2 weeks. You can see your sales in the Reserved Instance disbursement report, where you can check the status of everything you’ve listed, or track the status of your RI listings in the console (Reserved Instances > My Listings > Listing State) for a full breakdown of available, pending, sold, and canceled listings.

Conclusion

Reserved Instances can save money on your AWS bill, but can just as easily waste money by going unused. Luckily, the AWS Reserved Instances Marketplace can help by giving you a place to sell your unused RIs. Did we miss any of your questions in this AWS Reserved Instances Marketplace FAQ? Let us know!


How to Analyze Google Cloud Committed Use Discounts

In this blog we will look at the Google Cloud Committed Use discount program for customers that are willing to “commit” to a certain level of usage of the GCP Compute Engine.

The Committed Use purchasing option is particularly useful if you are certain that you will be continually operating instances in a region and project over the course of a year or more. If your instance usage does not add up to a full month, you may instead want to look at the Google Cloud Sustained Use discounts, which we discussed in a previous blog.

The Google Cloud Committed Use discount program has many similarities to the AWS and Azure Reserved Instances (RI) programs, and a couple unique aspects as well. A comparison of the various cloud providers’ reservation programs is probably worth a blog in itself, so for now, let’s focus on the Google Cloud Committed Use discounts, and the best times and places to use them.

Critical Facts about Google Cloud Committed Use

  • The Committed Use discount is best for a stable and predictable workload (you are committed to pay – regardless of whether you use the resources or not!)
  • Commitment periods are for either 1 or 3 years.
  • Commitments are for a specific region and a specific project. Zone assignment within a region is not a factor.
  • Discounts apply to the total number of vCPUs and amount of memory – not to a specific machine or machine type.
  • No pre-payment – the commitment cost is billed monthly.
  • GCP Committed Use discounts are available for all of the GCP instance families except the shared-core machine types, such as f1-micro and g1-small.
  • Committed Use discounts do not apply to the premium charges for sole-tenants, nor can they be used for Preemptible instances.
  • The commitments for General Purpose instances are distinct from those for Memory Optimized instances. If you have some of both types, you must buy two different types of Commitment. These types are:
General Purpose:
  • Standard – n1-standard
  • High Memory – n1-highmem
  • High CPU – n1-highcpu
  • Custom
  • General purpose sole-tenant

Memory Optimized:
  • n1-ultramem

How much does it cost?

Each Committed Use purchase must specify a number of vCPUs and an amount of memory. Needing to commit to both can make the purchase a bit more complicated if you use a variety of machine types in your environment. The following table illustrates some GCP machine types and the amount of memory automatically provided per vCPU:

Machine Type Memory per vCPU
n1-standard 3.75 GB
n1-highmem 6.50 GB
n1-highcpu 0.90 GB
n1-ultramem 14-24 GB
custom 0.9 – 6.5 GB

While the vCPU commitment is fairly straightforward, the memory commitment requires a bit of thought. Since it is not based on a specific machine type (unlike AWS and Azure), you must decide just how much memory to sign up for. If your set of machine types is homogeneous, this is easy – just match the vCPU/memory ratio to what you run. The good news is that you are just buying a big blob of memory – you are not restricted to rigidly holding to some vCPU/memory ratio. The billing system will “use up” a chunk of memory for one instance and then move on to the next.
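
As a quick sketch of what that means in practice, sizing a commitment for a mixed fleet is just a matter of totaling vCPUs and memory. The machine counts below are made up for illustration; the per-type memory figures follow from the table above:

```python
# Total up the vCPUs and memory you expect to keep running continuously.
fleet = {
    #                 (count, vCPUs each, GB memory each)
    "n1-standard-4": (3, 4, 15.0),   # 3.75 GB/vCPU * 4
    "n1-highmem-2":  (2, 2, 13.0),   # 6.50 GB/vCPU * 2
}
vcpus = sum(n * cpu for n, cpu, _ in fleet.values())
memory_gb = sum(n * gb for n, _, gb in fleet.values())
print(f"Commit to {vcpus} vCPUs and {memory_gb} GB of memory")
# Commit to 16 vCPUs and 71.0 GB of memory
```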

Looking at a specific example, the n1-standard-8 in the Oregon region that we discussed in the Sustained Usage Blog, we can see that the Committed Use discount does amount to some savings, but one must maintain a usage level throughout the month to fully consume the commitment.

Google Cloud Committed Use Discount vs Sustained Use Discount Break Even Point

Recall from the earlier blog that the base price of this instance type in the GCP price list already assumes a Sustained Use discount over a full month: the actual “list price” of the instance type is $277.40, and Sustained Use provides up to a maximum 30% discount. With that as a basis, we can see that the net savings for the Committed Use discount is 37% over 1 year, rising to 55% over 3 years. This is close to the advertised discount of 57% in the GCP pricing information, which varies by region.

The break-even points in this graph are about 365 hours/month for a 3-year commitment and 603 hours/month for a 1-year commitment. In other words, if you are sure you will be using a resource less than 365 hours/month over the course of a year, you probably want to avoid purchasing a 3-year Commitment.
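
If you’d like to reproduce the break-even math, here’s a minimal sketch using the numbers from this example ($277.40/month list price, the quarter-of-the-month sustained use tiers that top out at 30% off, and the 37%/55% Committed Use savings):

```python
# Find the monthly usage at which a commitment beats pay-as-you-go with
# sustained use discounts, for the n1-standard-8 example above.
LIST_MONTHLY = 277.40
HOURLY_RATE = LIST_MONTHLY / 730  # ~730-hour billing month

def sustained_use_cost(hours: float) -> float:
    quarter, cost = 730 / 4, 0.0
    for multiplier in (1.0, 0.8, 0.6, 0.4):  # n1 tier multipliers
        in_tier = max(0.0, min(hours, quarter))
        cost += in_tier * HOURLY_RATE * multiplier
        hours -= in_tier
    return cost

for plan, discount in (("1-year", 0.37), ("3-year", 0.55)):
    committed_monthly = LIST_MONTHLY * (1 - discount)
    breakeven = next(h for h in range(731)
                     if sustained_use_cost(h) >= committed_monthly)
    print(f"{plan}: break-even at ~{breakeven} hours/month")
# Prints roughly 603 (1-year) and 365 (3-year), matching the graph.
```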

Allocation of Commitments

Because Commitments are assigned on a vCPU/RAM basis, you cannot simply point at a specific instance, and say THAT instance is assigned to my Committed Use discount. Allocation of commitments is handled when your bill is generated, and your discount is applied in a very specific order:

  1. Custom machine types
  2. Sole-tenant node groups
  3. Predefined machine types

This sequence is generally good for the customer, in that it applies the Commitment to the more expensive instances first. For example, an n1-standard-4 instance in Northern Virginia normally costs $109.35. If an equivalent server was constructed as a Custom instance, it would cost $114.76.

For sole-tenant node groups, you are typically paying for an entire physical machine, and the Committed Use discount serves to offset the normal cost for that node. For a sole-tenant node group that is expected to be operating 7x24x365, it makes the most sense to buy Committed Use for the entire system, as you will be paying for the entire machine, regardless of how many instances are running on it.

Commitments are allocated over the course of each hour in a month, distributing the vCPUs and RAM to all of the instances that are operating in that hour. This means you cannot buy a Commitment for 1 vCPU and 3.75 GB of RAM, and run two n1-standard-1 instances for the first half of the month, and then nothing for the second half of the month, expecting it all to be covered by the Commitment. In this scenario, you would be charged for one month at the committed rate, and two weeks at the regular rate (subject to whatever Sustained Usage discount you might accumulate for the second instance).

Thank you for… not… sharing?

Unlike AWS, where Reserved Instances are automatically shared across multiple linked accounts within an organization, GCP Commitments cannot be shared across projects within a billing account. For some companies, this can be a major decision point as to whether or not they commit to Commitments. Within the ParkMyCloud platform, we see customers with as many as 60 linked AWS accounts, all of which share a pool of Reserved Instances. GCP customers do not have this flexibility: Commitments are locked in to the Project in which they were purchased. A number of our customers use AWS accounts as a mechanism to track resources for teams and projects; GCP has Projects and Quotas for this purpose, but they are not as flexible for sharing committed resources. For a larger organization, this lack of sharing means each project needs to be much more careful about how it purchases Commitments.

Conclusions

Google Cloud Committed Use discounts definitely offer great savings for organizations that expect to maintain a certain level of usage of GCP and that expect to keep those resources within a stable set of regions and projects. Since GCP Commitments are assigned at the vCPU/Memory level, they provide excellent flexibility over machine-type-based assignments. With the right GCP usage profile over a year or more, purchase of Google Cloud Committed Use discounts is a no-brainer, especially since there are no up-front costs!


EC2 Instance Types Comparison (and how to remember them)


AWS offers a range of EC2 instance types optimized for various purposes. It’s great that they provide so much variety, but of course, it means one more thing that you have to learn.

We broke this down in a new video, which also compares EC2 purchasing options. Check it out here:

Or, read on for a look at each instance type. Remember that within each type, you’ll still need to choose an instance size for your specific needs. Additionally, older generations within each instance type are available for purchase – for example, c5 is the latest “c” instance, but c4 and c3 are still available – but as the newer types tend to perform better at a cheaper price, you’ll only want to use the older types if you have an AMI or other dependency.

This image shows a quick summary of what we’ll cover:

ec2 instance types comparison chart with mnemonics

General Purpose

These general purpose EC2 instance types are a good place to start, particularly if you’re not sure what type to use. There are two general purpose types.

t2 instance type

The t2 family is a burstable instance type. If you have an application that needs to run with some basic CPU and memory usage, you can choose t2. It also works well for applications that get used intermittently. While the resource is idle, you accumulate CPU credits, which are spent when the resource is busy. It’s a cheaper option that’s useful for things that come and go a lot, such as websites or development environments.

We’ll also add a mnemonic to help you remember the purpose of each instance type.

Mnemonic: t is for tiny or turbo.
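
Since t2 performance hinges on those CPU credits, it’s worth watching the balance. Here’s a hypothetical check via CloudWatch with boto3 – the instance ID is a placeholder:

```python
# Sketch: read a t2 instance's recent CPUCreditBalance from CloudWatch.
from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EC2",
    MetricName="CPUCreditBalance",
    Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,            # 5-minute buckets
    Statistics=["Average"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Average"])
```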

m5 instance type

The m5 instance type is similar, but for more consistent workloads. It has a nice balance of CPU, memory, and disk. It’s not hard to see why almost half of EC2 workloads are on “m” instances.

There’s also an m5d option, which uses solid state drives (SSD) for the instance storage.

Mnemonic: m is for main choice or happy medium.

Compute Optimized

c5 instance type

The c5 instance type has a high ratio of compute/CPU versus memory. If you have a compute-intensive application – maybe scientific modelling, intensive machine learning, or multiplayer gaming – these instances are a good choice. There is also the c5d option, which is SSD-backed.

Mnemonic: c is for compute (at least that one’s easy!)

Memory Optimized

r4 instance family

The r4 instance family is memory-optimized, which you might use for in-memory databases, real-time processing of unstructured big data, or Hadoop/Spark clusters. You can think of it as a kind of midpoint between the m5 and the x1e.

Mnemonic: r is for RAM.

x1e instance family

The x1e family has a much higher ratio of memory, so this is a good choice if you have a full in-memory application or a big data processing engine like Apache Spark or Presto.

Mnemonic: x is for xtreme, as in “xtreme RAM” – that seems to be the generally accepted reading, but we think it’s a bit weak. If you have any suggestions, comment below.

Accelerated Computing

p3 instance type

If you need GPUs on your instances, p3 instances are a good choice. They are useful for video editing, and AWS also lists use cases of “computational fluid dynamics, computational finance, seismic analysis, speech recognition, autonomous vehicles” – so it’s fairly specialized.

Mnemonic: p is for pictures (graphics).

Storage Optimized

h1 instance type

The h1 type is HDD backed, with a balance of compute and memory. You might use it for distributed file systems, network file systems, or data processing applications.

Mnemonic: h is for HDD.

i3 instance type

The i3 instance type is similar to h1, but it is SSD backed, so if you need an NVMe drive, choose this type. Use it for NoSQL databases, in-memory databases, Elasticsearch, and more.

Mnemonic: i is for IOPS.

d2 instance type

d2 instances have an even higher ratio of disk to CPU and memory, which makes them a good fit for Massively Parallel Processing (MPP), MapReduce and Hadoop distributed computing, and similar applications.

Mnemonic: d is for dense.

What EC2 instance types should you use?

As AWS has continued to add options to EC2, there are now EC2 instance types for almost any application. If you have questions comparing pricing, run them through the AWS monthly calculator. And if you’re still not sure, starting with t2 or m5 is generally the way to go.


Don’t Waste Money on Unused Reserved Instances

Among the various ways to lose money on idle or orphaned resources in the cloud, here’s another for AWS users to add to the list: unused Reserved Instances. At first, the idea of wasting money on AWS Reserved Instances seems counterintuitive. After all, aren’t RIs meant to save money? The short answer is yes – but only if you use them efficiently.

How Unused Reserved Instances Occur

To understand how unused Reserved Instances contribute to cloud waste, consider how they work. With AWS Reserved Instances, you commit to renting instances for a fixed amount of time in exchange for a lower rate (per-hour or per-second) than on-demand. You’re still free to use the same families, OS types, and instance sizes as with on-demand, except that with RIs, the instance types you can use are constrained by the purchasing plan you choose.

The only real difference between an AWS On-Demand Instance and an AWS Reserved Instance is how you’re billed for them on the backend – and this is where it gets tricky. You don’t know whether your Reserved Instances have been used until you get the bill. You run your instances as you always would, with no insight into what will be billed as reserved. Only when your bill is created the following month does AWS review your reservations alongside your usage and apply the Reserved Instances that match your workload. This leaves you with little visibility into what your costs will be, forcing you to track usage on your own – and running the risk of unused reservations that result in, you guessed it, wasted money.
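
One small way to stay ahead of the bill is to pull your active reservations programmatically and compare them against what’s actually running. A minimal sketch with boto3 (default credentials assumed):

```python
# Sketch: list active reservations so unused ones don't go unnoticed.
import boto3

ec2 = boto3.client("ec2")
response = ec2.describe_reserved_instances(
    Filters=[{"Name": "state", "Values": ["active"]}]
)
for ri in response["ReservedInstances"]:
    print(ri["ReservedInstancesId"], ri["InstanceType"],
          f'count={ri["InstanceCount"]}', f'expires={ri["End"]}')
```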

Ways to Avoid Losing Money on Unused Reserved Instances

Reserved Instances require a commitment of usage and ongoing awareness of your future needs, and they carry the risk of going unused if AWS can’t apply them. But that doesn’t mean you should shy away from using them. Reserved Instances can be cost-effective if used with a few things in mind:

Pick the RI type that suits your usage and workload. The best plan is one of prevention. Before you purchase reservations, get a detailed look at your usage and the most optimal instance types for your workload (something you should already be doing as part of your cost control measures). By design, Reserved Instances work best with steady-state workloads and consistent usage. Once you confirm that your usage makes you a good candidate, choose the RI type that best fits your needs:

  • Standard RIs – recommended for steady-state usage; they provide the most savings.
  • Convertible RIs – a smaller discount from On-Demand, but in return they provide the flexibility to change families, OS types, and tenancies.
  • Scheduled RIs – similar to Standard RIs, but they only apply to instances launched within the time windows you select, which can recur on a daily, weekly, or monthly schedule.

Sell unused Reserved Instances on the Reserved Instance Marketplace. The marketplace lets you list your reservation for purchase by other users. The cheapest reservations are sold first, and once someone purchases yours, you’ll be charged the on-demand rate whenever you use that instance type moving forward.

Purchase Convertible reservations. With Convertible reservations, you have the option to exchange your Reserved Instances for other types, so long as the new reservation is of equal or greater value. You won’t get as deep a discount, but the flexibility and additional options for use make up for the smaller savings.

The Lesson to be Learned

Just like any other idle or unused cloud resource, unused reserved instances can only do one thing – waste your money. Cloud services were meant to help you keep infrastructure costs in check, but only if you use them smartly. Optimize your cloud spend with awareness of your usage, ongoing insight into your infrastructure needs, and running instances only when you need them.

More questions answered in our recent blog, AWS Reserved Instance FAQs.
