One of the activities we are engaged in is the cloud marketplaces run by the large public cloud providers. These marketplaces provide an alternative channel for software and services companies beyond the more typical direct sales or reseller/distributor models. Many customers ask about options to buy our product via one of these marketplaces, which has prompted us to share some tips for others interested in purchasing this way.
Given the “app store model” that has been so widely embraced by consumers (the App Store, Google Play, etc.), it’s not surprising that the cloud providers see an opportunity to leverage their customer footprint and derive a revenue share when customers choose to purchase via their marketplaces. For customers, it can be a way to consolidate bills, get discounts, and simplify administration.
How Cloud Marketplaces Work
The business model is simple. The Cloud Service Providers (CSPs) charge a percentage of the purchase price paid by the customer. Companies list their products to reach buyers they would not otherwise reach, or to offer a purchasing method that better suits the customer’s needs: the cost of the purchased product is added to the monthly cloud bill, avoiding complex new procurement and purchasing arrangements. The CSPs obviously hope the value proposition is strong enough to warrant sellers giving up some margin in exchange for net additional sales, or for sales that would otherwise be overly complex and bureaucratically burdensome to close.
Currently, we only participate in the AWS Marketplace, but similar options are available in the Azure Marketplace and the Google Cloud Marketplace. The largest and seemingly best stocked is AWS’s, with close to 10,000 listings from a broad range of software vendors. In fact, Gartner estimates that some 25% of Global 1,000 organizations are using such online marketplaces. Forrester reports that as much as 40% of all B2B transactions go through digital sales channels. And a survey by The 2112 Group reported that 11% of channel executives believe marketplaces will drive the majority of their indirect revenue as soon as 2023.
These organizations claim the benefits include shorter sales/buying cycles; ease of use; increased buyer choice; and ease of provisioning and deployment. There is also the promise of leveraging the CSPs’ own account managers to support co-selling on specific opportunities, and their potential to act as lead sources, albeit we imagine these need to be larger deals and part of a broader relationship between the CSPs and their most valuable ISV customers. Still, it pays to find and align with CSP sales reps, who get to retire quota by selling your product via the marketplace, especially if it means those same reps get to sell more of their core cloud services.
The marketplace can also open up alternate sales models. For example, charging on a metered basis, where the customer only pays for what is used and has the cost added to their bill (rather than a fixed monthly fee), or longer-term contracts secured over two or three years at discounted rates.
Companies that have managed to optimize their offerings in partnership with CSPs and have developed co-developed or co-branded products have the potential for a lot of upside. Databricks’ partnership with Azure, and Snowflake’s and Datadog’s with AWS, have driven enormous growth and helped them build unicorn-sized businesses within a few years.
One area which has been somewhat frustrating is the ability for customers to discover appropriate software products to meet their needs within the marketplaces. In part this is a similar challenge to the one faced in consumer-facing app marketplaces, where there is an overabundance of products and the categorization and search algorithms are often weak. This leaves the sellers (particularly the lesser-known ones) frustrated and customers unable to determine which software best meets their needs. Our own cost optimization space has many different dimensions and lots of overlapping offerings, which often makes comparison difficult.
Tips for Purchasing on the Marketplace
So what do buyers need to know about these marketplaces and making them work to their advantage? To help answer this, we have included a short checklist of tips and considerations.
Always check carefully whether any products you wish to research or purchase are listed in the marketplace. Despite the likes of Amazon and Google running these, listings can often be hidden or categorized in unusual ways, so if you do not find a product listed, contact the vendor and ask.
Marketplace pricing can often differ from buying directly from the vendor. Products might be bundled in certain ways or for different time periods (e.g. multi-year) which are not offered via a direct purchase. Additionally, all three of the large CSPs allow for a concept called Private Offers. These are uniquely negotiated between buyer and seller and allow for custom agreements such as additional discounts, different payment schemes, etc.
The vendor’s pricing model can sometimes differ from buying directly, given the availability of metering options, i.e. paying only for what you use. If this is available, it will typically require some analysis to determine which model might deliver the greatest ROI.
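As a sketch of the kind of analysis involved, the comparison often reduces to a break-even calculation between a flat subscription and a metered rate. The prices below are invented for illustration, not any vendor’s actual rates.

```python
# Hypothetical break-even analysis between a fixed monthly fee and a
# metered (pay-per-use) marketplace listing. All numbers are
# illustrative assumptions, not real marketplace prices.

FIXED_MONTHLY_FEE = 1000.00   # assumed flat subscription price per month
METERED_RATE = 0.05           # assumed price per unit of usage

def cheaper_model(expected_monthly_usage: float) -> str:
    """Return which pricing model costs less for a given usage level."""
    metered_cost = expected_monthly_usage * METERED_RATE
    return "metered" if metered_cost < FIXED_MONTHLY_FEE else "fixed"

def break_even_usage() -> float:
    """Usage level at which the two models cost the same."""
    return FIXED_MONTHLY_FEE / METERED_RATE

print(cheaper_model(5_000))    # light usage favors metered
print(cheaper_model(50_000))   # heavy usage favors the fixed fee
print(break_even_usage())
```

In practice you would plug in your own usage forecast and the actual listed rates; the point is simply that the cheaper model depends entirely on expected consumption.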
If you have an existing relationship with your account manager at the CSP it might be worth seeing what additional discretionary incentives might be available for use of the marketplace.
Determining the potential reduction in administrative burden from adding the product cost to your monthly bill can be a worthwhile exercise. Minimizing purchasing and procurement team involvement, as well as monthly invoice processing by the finance team, can be advantageous on its own even if there is no significant cost saving from buying in the marketplace.
Depending on your situation, there may be other considerations but what is for sure is that managing multiple marketplaces requires time and resources. If you have not already investigated these, either as a buyer or a seller, now might be the time to have a look.
In July, AWS updated the cost optimization pillar of their Well-Architected Framework to focus on cloud financial management. This change is a rightful acknowledgment of the importance of functional ownership and cross-team collaboration in order to optimize public cloud costs.
AWS Well-Architected Framework and the Cost Optimization Pillar
If you use AWS, you are probably familiar with the Well-Architected Framework. This is a guide of best practices to help you understand the impact of the decisions you make while designing and building systems on AWS. AWS Well-Architected allows users to learn best practices for building high-performing, resilient, secure, and efficient infrastructure for their workloads and applications.
This framework is based on five pillars — operational excellence, security, reliability, performance efficiency, and cost optimization. Overall, AWS has done a great job with these particular resources, making them clear and accessible with links to further detail.
The Cost Optimization pillar generally covers principles we have been preaching for a long time: expenditure and usage awareness; choosing cost-effective resources; managing demand and supply resources; and regularly reviewing your environments and architectural decisions for cost.
Now, they have added Cloud Financial Management to this pillar. Cloud Financial Management is a set of activities that enables Finance and Technology organizations to manage, optimize and predict costs as they run workloads on AWS.
Why Do Businesses Need Cloud Financial Management?
Incorporating Cloud Financial Management into an organization’s cost optimization plans allows them to accelerate business value realization and optimize cost, usage and scale to maximize financial success.
This is an important part of the cost optimization pillar as it dedicates resources and time to build capability in specific industries and technology domains. Similar to the other pillars, users need to build capability with different resources, programs, knowledge building, and processes to become a cost-efficient organization.
The first step AWS proposes for CFM is functional ownership. (Further reading: Who Should Manage App Development Costs? and 5 Priorities for the Cloud Center of Excellence.) All of this matters because many organizations are composed of different units with different priorities, so there is no single standard set of objectives for everyone to follow. By aligning your organization on a set of financial objectives, and providing teams with the means to achieve them, organizations become more efficient. Running more efficiently in turn leads to more innovation and the ability to build faster, not to mention greater agility and the means to adjust to changing conditions.
What You Need to Keep in Mind
When most people think of cost optimization they think of cutting costs – but that’s not exactly what AWS is getting at by adding cloud financial management to their framework. It’s about assigning responsibility; partnering between finance and technology; and creating a cost-aware culture.
In a survey conducted earlier this year, 451 Research found that adopting Cloud Financial Management practices doesn’t only lower IT costs. Enterprises that adopted these practices also benefited in many other ways, such as growing revenue through increased business agility, decreasing risk through increased operational resilience, improving profitability, and potentially increasing staff productivity.
The value of Cloud Financial Management increases with cloud maturity, so it’s important to be patient with the process and remember that small changes can have huge impacts, and the benefits grow as time goes on.
Amazon provides a few services to manage cloud costs, such as Cost Explorer, AWS Budgets, the AWS Cost and Usage Report (CUR), Reserved Instance recommendations and reporting, and EC2 rightsizing recommendations. But it’s important to note that while many CFM tools are free to use, there can be labor costs associated with building ongoing use of these tools and continuous organizational processes; it may be in your best interest to look into a tool that can optimize costs on an ongoing basis. Ensure your people and/or tools are able to scale applications to address new demands.
By using the framework to evaluate and implement your cloud financial management practices, you’ll not only achieve cost savings, but more importantly, you’ll see business value increase across operational resilience, staff productivity and business agility.
During its virtual Google Cloud Next ’20 “On Air” series, Google announced the introduction of BigQuery Omni. This is an extension of its existing BigQuery data analytics solution to now analyze data in multiple public clouds, currently including Google Cloud and Amazon Web Services (AWS), with Microsoft Azure coming soon. Powered by Google Cloud’s Anthos, and using a unified interface, BigQuery Omni allows developers to analyze data locally without having to move data sets between the platforms.
BigQuery Engine to Analyze Multi-Cloud Data
Google Cloud’s general manager and VP of engineering, Debanjan Saha, says “BigQuery Omni is an extension of Google Cloud’s continued innovation and commitment to multi-cloud that brings the best analytics and data warehouse technology, no matter where the data is stored.” And that, “BigQuery Omni represents a new way of analyzing data stored in multiple public clouds, which is made possible by BigQuery’s separation of compute and storage.”
According to Google Cloud, this provides scalable storage that can reside in Google Cloud or other public clouds, and stateless, resilient compute that executes standard SQL queries.
Google Cloud reports that BigQuery Omni will:
Break down silos and gain insights on data with a flexible, multi-cloud analytics solution that doesn’t require moving or copying data from other public clouds into Google Cloud for analysis.
Get consistent data experience across clouds and datasets with a unified analytics experience across datasets, in Google Cloud, AWS, and Azure (coming soon) using standard SQL and BigQuery’s familiar interface. BigQuery Omni supports Avro, CSV, JSON, ORC, and Parquet.
Securely run analytics in another public cloud with a fully managed infrastructure, powered by Anthos, so you can query data without worrying about the underlying infrastructure. Users can choose the public cloud region where their data is located, and run the query.
Why Is Google Aiming at Multi-Cloud?
Many organizations leveraging public cloud are doing so with multiple clouds: 55% of organizations are multi-cloud according to a recent survey from IDG, and 80% according to a recent Gartner survey. (Is this actually necessary? Maybe.)
Google Cloud has been the most open to supporting this multi-cloud reality, and perhaps implicit in releases like Anthos and BigQuery Omni is Google’s recognition that it’s #3 in the market, and many of its customers have a presence in AWS or Azure.
So, BigQuery Omni actually involves physically running BigQuery clusters in the cloud on which the remote data resides. This is something that, in the past, could only be done if your data was stored in Google Cloud. Now, with Kubernetes-powered Anthos, as well as the visualization tool gained in Google’s acquisition of Looker, Google is moving toward a middleware strategy: offering services that bridge data silos as a way to gain market share from its bigger competitors. Expect to see more similar service offerings from Google as it looks to break AWS’s lead on public cloud.
Spot instances and similar “spare capacity” models are frequently cited as one of the top ways to save money on public cloud. However, we’ve noticed that fewer cloud customers are taking advantage of this discounted capacity than you might expect.
We say “spot instances” in this article for simplicity, but each cloud provider has their own name for the sale of discounted spare capacity – AWS’s spot instances, Azure’s spot VMs and Google Cloud’s preemptible VMs.
Spot instances are a type of purchasing option that allows users to take advantage of spare capacity at a low price, with the possibility that it could be reclaimed for other workloads with just brief notice.
In the past, AWS’s model required users to bid on Spot capacity. However, the model has since been simplified so users don’t actually have to bid for Spot Instances anymore. Instead, they pay the Spot price that’s in effect for the current hour for the instances that they launch. The prices are now more predictable with much less volatility. Customers still have the option to control costs by providing a maximum price that they’re willing to pay in the console when they request Spot Instances.
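The maximum-price control described above can be illustrated with a small sketch. The prices here are made-up assumptions, not real AWS rates: the key behavior is that you pay the current spot price, and your capacity only persists while that price stays at or below your configured maximum.

```python
# Illustrative model of spot pricing with a maximum-price control.
# MAX_PRICE and ON_DEMAND_PRICE are invented example figures; actual
# spot and on-demand prices vary by instance type and region.

MAX_PRICE = 0.10          # the most you are willing to pay per hour
ON_DEMAND_PRICE = 0.20    # assumed on-demand rate for the same instance

def request_active(current_spot_price: float) -> bool:
    """A spot request stays fulfilled only while spot price <= max price."""
    return current_spot_price <= MAX_PRICE

def hourly_savings(current_spot_price: float) -> float:
    """Savings versus on-demand; you pay the spot price, not your max."""
    return ON_DEMAND_PRICE - current_spot_price

print(request_active(0.07))               # price below the cap: capacity runs
print(request_active(0.12))               # price above the cap: capacity reclaimed
print(round(hourly_savings(0.07), 2))     # per-instance-hour saving vs on-demand
```

Note that setting a maximum price is optional today; with no maximum, the request simply tracks the (now far less volatile) spot price.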
Spot Instances in Each Cloud
Variations of spot instances are offered across different cloud providers. AWS has Spot Instances while Google Cloud offers preemptible VMs and as of March of this year, Microsoft Azure announced an even more direct equivalent to Spot Instances, called Azure Spot Virtual Machines.
Spot VMs have replaced the preview of Azure’s low-priority VMs on scale sets – all eligible low-priority VMs on scale sets have automatically been transitioned to Spot VMs. Azure Spot VMs provide access to unused Azure compute capacity at deep discounts. Spot VMs can be evicted at any time if Azure needs capacity.
AWS spot instances have variable pricing. Azure Spot VMs offer the same characteristics as a pay-as-you-go virtual machine, the differences being pricing and evictions. Google Preemptible VMs offer a fixed discounting structure. Google’s offering is a bit more flexible, with no limitations on the instance types. Preemptible VMs are designed to be a low-cost, short-duration option for batch jobs and fault-tolerant workloads.
Adoption of Spot Instances
Our research indicates that less than 20% of cloud users use spot instances on a regular basis, despite spot being on nearly every list of ways to reduce costs (including our own).
While applications can be built to withstand interruption, specific concerns remain, such as loss of log data, exhausting capacity and fluctuation in the spot market price.
In AWS, it’s important to note that while spot prices can reach the on-demand price, they normally don’t, since they are driven by long-term supply and demand.
A Spot Fleet is a collection of Spot Instances that can also include On-Demand Instances; you specify a target capacity of instances you want to maintain, and AWS attempts to meet it by launching the number of Spot Instances and On-Demand Instances specified in the Spot Fleet request.
To help reduce the impact of interruptions, you can set up Spot Fleets to respond to interruption notices by hibernating or stopping instances instead of terminating when capacity is no longer available. Spot Fleets will not launch on-demand capacity if Spot capacity is not available on all the capacity pools specified.
AWS also has a capability that allows you to use Amazon EC2 Auto Scaling to scale Spot Instances – this feature also combines different EC2 instance types and pricing models. You are in control of the instance types used to build your group – groups are always looking for the lowest cost while meeting other requirements you’ve set. This option may be a popular choice for some as ASGs are more familiar to customers compared to Fleet, and more suitable for many different workload types. If you switch part or all of your ASGs over to Spot Instances, you may be able to save up to 90% when compared to On-Demand Instances.
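A rough cost model shows why mixing spot into an Auto Scaling group is attractive. The on-demand price and the average spot discount below are assumptions for illustration; the advertised 90% figure is an upper bound, and real discounts vary by instance type, region, and time.

```python
# Rough model of the savings from shifting part of an Auto Scaling
# group to spot capacity. ON_DEMAND_HOURLY and SPOT_DISCOUNT are
# assumed example values, not actual AWS prices.

ON_DEMAND_HOURLY = 0.20   # assumed on-demand price per instance-hour
SPOT_DISCOUNT = 0.70      # assumed average spot discount (70%)

def blended_hourly_cost(total_instances: int, spot_fraction: float) -> float:
    """Hourly cost of a group with the given fraction running on spot."""
    spot_count = total_instances * spot_fraction
    on_demand_count = total_instances - spot_count
    spot_price = ON_DEMAND_HOURLY * (1 - SPOT_DISCOUNT)
    return on_demand_count * ON_DEMAND_HOURLY + spot_count * spot_price

all_on_demand = blended_hourly_cost(10, 0.0)
half_spot = blended_hourly_cost(10, 0.5)
print(round(all_on_demand, 2))   # baseline cost, everything on-demand
print(round(half_spot, 2))       # same capacity, half on spot
```

Even at a conservative 70% discount, moving half the group to spot cuts the hourly bill by about a third in this toy example, which is why the savings compound quickly for spot-tolerant workloads.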
Another interesting feature worth noting is Amazon’s capacity-optimized spot instance allocation strategy. When customers diversify their Fleet or Auto Scaling group, the system will launch capacity from the most available capacity pools, effectively decreasing interruptions. In fact, by switching to capacity-optimized allocation users are able to reduce their overall interruption rate by about 75%.
Is “Eviction” Driving People Away?
There is one main caveat when it comes to spot instances – they are interruptible. All three major cloud providers have mechanisms in place for these spare capacity resources to be interrupted, related to changes in capacity availability and/or changes in pricing.
This means workloads can be “evicted” from a spot instance or VM: if the cloud provider needs the resource at any given time, your workloads can be kicked off. You are notified when an AWS spot instance is going to be evicted: AWS emits an event two minutes prior to the actual interruption. In Azure, you can opt to receive notifications that tell you when your VM is going to be evicted; however, you will have only 30 seconds to finish any jobs and perform shutdown tasks prior to the eviction, which makes it very hard to manage. Google Cloud also gives you 30 seconds to shut down your instances when you’re preempted so you can save your work for later, and Google always terminates preemptible instances after 24 hours of running. All of this means your application must be designed to be interruptible, and should expect interruption to happen regularly: difficult for some applications, but not so much for others that are largely stateless or normally process work in small chunks.
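The usual pattern for surviving eviction is to process work in small chunks and checkpoint when a notice arrives. The sketch below is provider-agnostic: on AWS the notice would come from the instance metadata service, but here the lookup is injected as a function (the real endpoints and payloads differ per cloud and are not shown), so the run can be simulated.

```python
# Sketch of a graceful-shutdown loop for an interruptible instance.
# check_notice, do_work_chunk, and checkpoint are injected callables;
# in production check_notice would poll the provider's metadata
# endpoint for an interruption/eviction notice.

from typing import Callable, Optional

def run_until_evicted(check_notice: Callable[[], Optional[str]],
                      do_work_chunk: Callable[[], bool],
                      checkpoint: Callable[[], None]) -> str:
    """Process work in small chunks, checkpointing when eviction is near."""
    while True:
        if check_notice() is not None:
            checkpoint()              # save progress before the deadline
            return "checkpointed"
        if not do_work_chunk():       # returns False when work is done
            return "finished"

# Simulated run: the eviction notice arrives after two chunks of work.
state = {"chunks": 0, "saved": False}

def fake_notice():
    return "terminating soon" if state["chunks"] >= 2 else None

def fake_chunk():
    state["chunks"] += 1
    return True  # pretend there is always more work queued

def fake_checkpoint():
    state["saved"] = True

print(run_until_evicted(fake_notice, fake_chunk, fake_checkpoint))
print(state["saved"])
```

The shorter the work chunks, the less progress is lost per eviction, which is why batch and queue-driven workloads fit spot so naturally.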
Companies such as Spot – recently acquired by NetApp (congrats!) – help in this regard by safely moving the workload to another available spot instance automatically.
Our research has indicated that fewer than one-quarter of users agree that their spot eviction rate is too low to be a concern, which means that for most, the eviction rate is a concern. Of course, it’s certainly possible to build applications to be resilient to eviction. For instance, applications can make use of many instance types in order to tolerate market fluctuations and set an appropriate maximum price for each type.
AWS also offers an automatic scaling feature that has the ability to increase or decrease the target capacity of your Spot Fleet automatically based on demand. The goal of this is to allow users to scale in conservatively in order to protect your application’s availability.
Early Adopters of Spot and Other Innovations May be One and the Same
People who are hesitant to build for spot are more likely to use regular VMs, perhaps with Reserved Instances for savings. And people open to the idea of spot instances are likely the same early adopters of other technologies, like serverless, who may no longer have a need for spot.
For the right architecture, spot instances can provide significant savings. It’s a matter of whether you want to bother.
Google Sustainability is an effort that ranges across their business, from Global Fishing Watch to environmental consciousness in the supply chain. Cloud computing has been a major draw on global energy in recent years: the amount of computing done in data centers more than quintupled between 2010 and 2018, yet the energy consumed by the world’s data centers grew only six percent during that period, thanks to improvements in energy efficiency. Still, that’s a lot of power, which is why Google’s sustainability efforts for data centers and cloud computing are especially important.
Google Cloud Sustainability Efforts – As Old as Their Data Centers
Reducing energy usage has been an initiative for Google for more than 10 years. Google has been carbon neutral since 2007, and 2019 marked the third year in a row that they’ve matched their energy usage with 100 percent renewable energy purchases. Google’s innovation in the data center market also comes from the process of building facilities from the ground up instead of buying existing infrastructures and using machine learning technology to monitor and improve power-usage-effectiveness (PUE) and find new ways to save energy in their data centers.
When comparing the big three cloud providers on sustainability efforts, AWS is by far the largest source of carbon emissions from the cloud globally, due to its dominance. However, AWS’s sustainability team is investing in green energy initiatives and has committed to an ambitious goal of 100% renewable energy use by 2040 to become as carbon-neutral as Google has been. Microsoft Azure, on the other hand, has run on 100 percent renewable energy since 2014, though it would be considered a low-carbon electricity consumer in part because it runs less of the world’s workloads than Amazon or Google.
Nonetheless, data centers from the big three cloud providers, wherever they are, all run on electricity. How the electricity is generated is the important factor in whether they are more or less favorable for the environment. For Google, reaching 100% renewable energy purchasing on a global and annual basis was just the beginning. In addition to continuing their aggressive move forward with renewable energy technologies like wind and solar, they wanted to achieve the much more challenging long-term goal of powering operations on a region-specific, 24-7 basis with clean, zero-carbon energy.
Why Renewable Energy Needs to Be the Norm for Cloud Computing
It’s no secret that cloud computing is a drain on resources, consuming roughly three percent of all electricity generated on the planet. That’s why it’s important for Google and other cloud providers to be part of the solution to global climate change. Renewable energy is an important element, as is matching operational energy use with clean energy purchases and helping to create pathways for others to purchase clean energy. However, it’s not just about fighting climate change. Purchasing energy from renewable resources also makes good business sense, for two key reasons:
Renewables are cost-effective – The cost to produce renewable energy from technologies like wind and solar has come down precipitously in recent years. By 2016, the levelized cost of wind had come down 60% and the levelized cost of solar had come down 80%. In fact, in some areas, renewable energy is the cheapest form of energy available on the grid. Reducing the cost to run servers reduces the cost for public cloud customers – and we’re in favor of anything that does that.
Renewable energy inputs like wind and sunlight are essentially free – Having no fuel input for most renewables allows Google to eliminate exposure to fuel-price volatility, which is especially helpful when managing a global portfolio of operations in a wide variety of markets.
Google Sustainability in the Cloud Goes “Carbon Intelligent”
In keeping with their goal of having data centers consume more energy from renewable resources, Google recently announced that it will also time-shift workloads to take advantage of these resources, making data centers run harder when the sun shines and the wind blows.
“We designed and deployed this first-of-its-kind system for our hyperscale (meaning very large) data centers to shift the timing of many compute tasks to when low-carbon power sources, like wind and solar, are most plentiful,” Google announced.
Google’s latest advancement in sustainability is a newly developed carbon-intelligent computing platform. It works by using two forecasts – one of the future carbon intensity of the local electrical grid near a data center, and another of the data center’s own capacity requirements – and uses that data to “align compute tasks with times of low-carbon electricity supply.” The result is that workloads run when Google believes it can do so while generating the lowest possible CO2 emissions.
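The two-forecast idea can be sketched in a few lines: given an hourly forecast of grid carbon intensity and an amount of deferrable compute, push that work into the lowest-carbon hours. The numbers below are invented, and Google has not published its actual scheduling algorithm; this only illustrates the shape of the decision.

```python
# Toy carbon-aware scheduler: place deferrable compute hours into the
# hours with the lowest forecast grid carbon intensity. The forecast
# values (gCO2/kWh) are made-up illustrative figures.

carbon_forecast = {0: 450, 6: 300, 12: 120, 18: 380}  # hour -> gCO2/kWh
flexible_task_hours = 2   # hours of deferrable compute to place

def pick_low_carbon_hours(forecast: dict, hours_needed: int) -> list:
    """Choose the hours with the lowest forecast carbon intensity."""
    ranked = sorted(forecast, key=forecast.get)   # hours, cleanest first
    return sorted(ranked[:hours_needed])          # chosen hours, in order

print(pick_low_carbon_hours(carbon_forecast, flexible_task_hours))  # [6, 12]
```

The real system must also respect its capacity forecast (it cannot defer everything), but the core trade is the same: flexible work moves toward hours when wind and solar are plentiful.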
The carbon-intelligent computing platform’s first version focuses on shifting tasks to different times of the day within the same data center. But Google already has plans to expand its capability: in addition to shifting tasks in time, the platform will also move flexible compute tasks between different data centers, so that more work is completed when and where doing so is more environmentally friendly. As the platform continues to generate data, Google will document its research and share it with other organizations in hopes they can develop similar tools and follow suit.
Leveraging forecasting with artificial intelligence and machine learning is a natural next step, and Google is utilizing this powerful combination in its platform to anticipate workloads and improve the overall health and performance of its data centers. Combined with efforts to use cloud resources efficiently, by only running VMs when needed and not oversizing them, resource utilization can be improved to reduce your carbon footprint and save money.
As you accelerate your organization’s containerization in the cloud, key stakeholders may worry about putting all your eggs in one cloud provider’s basket. This combination of fears – both a fear of converting your existing (or new) workloads into containers, plus a fear of being too dependent on a single cloud provider like Amazon AWS, Microsoft Azure, or Google Cloud – can lead to hasty decisions to use less-than-best-fit technologies. But what if using more of your chosen cloud provider’s features meant you were less reliant on that cloud provider?
The Core Benefit of Containers
Something that can get lost in the debate about whether containerization is good or worthwhile is the feature of portability. When Docker containers were first being discussed, one of the main use cases was the ability to run the container on any hardware in any datacenter without worrying if it would be compatible. This seemed to be a logical progression from virtual machines, which had provided the ability to run a machine image on different hardware, or even multiple machines on the same hardware. Most container advocates seem to latch on to this from the perspective of container density and maximizing hardware resources, which makes much more sense in the on-prem datacenter world.
In the cloud, however, hardware resource utilization is now someone else’s problem. You choose your VM or container size and pay just for that size, instead of having to buy a whole physical server and pay for the entirety of it up-front. Workload density still matters, but is much more flexible than on-prem datacenters and hardware. With a shift to containers as the base unit instead of Virtual Machines, your deployment options in the cloud are numerous. This is where container portability comes into play.
The Dreaded “Vendor Lock-in”
Picking a cloud provider is a daunting task, and choosing one and later migrating away from it can cost enormous amounts of money and time. But do you need to worry about vendor lock-in? What if, in fact, you could pivot to another provider down the road with minimal disruption and no application refactoring?
Implementing containerization in the cloud means that if you ever choose to move your workloads to a different cloud provider, you’ll only need to focus on pointing your tooling to the new provider’s APIs, instead of having to test and tinker with the packaged application container. You also have the option of running the same workload on-prem, so you could choose to move out of the cloud as well. That’s not to say that there would be no effort involved, but the major challenge of “will my application work in this environment” is already solved for you. This can help your Operations team and your Finance team to worry less about the initial choice of cloud, since your containers should work anywhere. Your environment will be more agile, and you can focus on other factors (like cost) when considering your infrastructure options.