Blog - ParkMyCloud

Cloud Nine, Ten or Eleven: What Do All Those Cloud Computing Growth Statistics Really Mean?

Photo by Abigail Keenan on Unsplash

Growth in the various cloud platforms has become a dinner party conversation staple for those in the tech industry, in much the same way that house price appreciation was in the mid-2000s. It’s an interesting topic: everyone has an opinion about cloud computing growth statistics, and it’s not entirely clear how it all ends.

So let’s start with some industry projections. According to Gartner, the global infrastructure as a service (IaaS) market will grow by 39% in 2017 to reach $35 billion by the end of the year. IaaS growth shows no sign of slowing down and is expected to reach $72 billion by 2021, a CAGR of 30%. This revenue is principally split among the big four players in public cloud: Amazon Web Services (AWS), Microsoft Azure (Azure), Google Cloud Platform (GCP) and IBM.

The approximate market share of these four public cloud platforms at the end of the first quarter of 2017 can be seen in the Canalys chart below. The reason these numbers are only approximate is that each of these vendors includes (or excludes) different facets of its cloud business, and each seeks to keep its growth opaque to the investor community.

However, Amazon reported its earnings in April 2017 and showed revenue growing 43 percent in the quarter to $3.66 billion, an annualized run rate of some $14.6 billion. Meanwhile, Microsoft reported its cloud earnings in July 2017, noting that its annualized revenue run rate was just under $19 billion. However, this includes a lot more than just IaaS; once non-IaaS is removed, analysts suggest that revenue is likely at a $6 billion run rate. Google’s cloud business is even harder to separate, but its cloud revenue was estimated at some $1 billion at the end of 2015, and although Google seems to have hit its stride in the last year or so, it clearly has a lot of ground to make up. Current estimates are for approximately $2.5 billion in 2017. Lastly, IBM is estimated to be of a similar size to Google but appears to have a lot less momentum than the others – and certainly, based on the requests we hear from our customer base, IBM is not often, if ever, referenced.

OK, so other than guessing on the winners and losers, why does this matter? In our humble opinion, it matters because this scenario creates increased competition, and competition is good for consumers. It’s also relevant because companies have a choice, and many are looking at more than one cloud platform, even if they have not yet done anything about it. But what is really interesting, and what keeps us awake at night, is how much of this consumption is being wasted. We think and talk about this waste in terms of three buckets:

1) Always on means always paying – 44% of workloads are classified as non-production (i.e., test, development, etc.) and don’t need to run 24×7

2) Over-provisioning – 55% of all public cloud resources are not correctly sized for their workloads

3) Inventory waste – 15% of spend goes to paying for resources which are no longer used.

Combine these three buckets and, by our reckoning, you are looking at an estimated $6 billion in wasted cloud spend in 2016, growing to $20 billion by 2020. Now that is something to really care about.
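
To make those percentages concrete, here is a back-of-envelope sketch of the model. The $100,000 monthly bill and the per-bucket savings assumptions are ours, for illustration only – and since the buckets overlap in practice, treat the result as an upper bound:

monthly_spend = 100_000.0  # hypothetical monthly cloud bill

# Bucket 1: 44% of workloads are non-production; running them roughly
# 65 of 168 hours a week instead of 24x7 cuts their cost by ~60%.
always_on_waste = monthly_spend * 0.44 * 0.60

# Bucket 2: 55% of resources are oversized; assume dropping one
# instance size roughly halves each one's cost.
over_provisioning_waste = monthly_spend * 0.55 * 0.50

# Bucket 3: 15% of spend pays for resources no longer used at all.
inventory_waste = monthly_spend * 0.15

total_waste = always_on_waste + over_provisioning_waste + inventory_waste
print("Estimated waste: ${:,.0f}/month".format(total_waste))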

Few tools exist to actively monitor and manage this waste. Today there is not a cloud waste management industry per se, and tech analysts currently tend to lump everything under ‘cloud management’. We think that will change in the near future as cloud cost control becomes top-of-mind and the industry is able to leverage cloud computing growth statistics to calculate the scale of this problem. If you are in the cloud, this is definitely something you should know about. Maybe you should consider optimizing your cloud spend now (before your CTO, CIO or CFO asks you to do so).

Read more ›

Real Time Cases – How AWS Free Credits Are Helping This Startup

We sat down with Brian Park from Real Time Cases to talk to him about his company, how he uses AWS, and AWS free credits. We found out that the AWS startup package is a crucial part of making his business run.

Can you start by telling us about Real Time Cases and what you guys do?

So Real Time Cases is an education tech startup with a new-generation experiential learning platform. The new form of learning for today’s student is “learning by doing”, not just learning by reading antiquated textbooks. So Real Time Cases, through our partners, approaches high-level executives and asks: “If you could hire 70-80 students to solve any problem in your department, what would it be?” This forms the foundation for a “Real Time Case”. We film and document the issue, and professors can use that to drive the concepts, theories and frameworks they are trying to teach in the classroom, using current, real-life examples. Our cases are ongoing and happen “in real time”, so they are like mini projects. This also opens the door for students to pitch some of these ideas to local business executives, which is exciting.

What is your role in the company?

I am the Director of Product. We have a platform that hosts the cases; video is the primary content, since most students would prefer to watch rather than read – think YouTube and Netflix. I am responsible for overseeing the technical team, both developers and designers. Amazon Web Services (AWS) is our cloud provider of choice and our entire infrastructure is hosted there.

Why AWS over others?

We chose AWS because of the startup package – we get $10,000 of AWS free credits to use as we wish: compute, databases, and storage, all for free! As with any startup, we have to bootstrap operations by keeping costs as low as possible, and in addition, AWS services are easy to use and access. If we had launched this company 10 years ago, we couldn’t have operated at this cost point. So the credits and service offerings were very important to getting us off the ground successfully and to market quickly and cost-effectively. We have both domestic and international customers, and we can host and publish content for any university in the cloud at negligible cost, which translates into affordable price points for students – and at our current cloud burn we can sustain our operations for many months to come.

What technologies do you use in AWS?

We don’t have an official DevOps team, but we use GitHub for our code repository, Jira for agile processes, and Slack for communication. These low-cost SaaS tools plus AWS have been very productive for us. We are able to push code out in either 1 or 2 week cycles depending on the size of our stories. Our output used to be a 2 week sprint, but is now a 1 week sprint due to improved tools and processes. We follow agile development practices, participate in scrums and try to utilize the latest DevOps tools. As we have a distributed development and QA team, it’s best to use a tool like Jira to coordinate across time zones and accomplish the harder logistical tasks. We don’t have an overly complex architecture in AWS and use EC2, RDS and S3. S3 is used to store and host the video content we create for the professors and students.

Do you have any cost control measures in place for AWS?

Right now, no. When our AWS free credits expire we don’t expect our costs to be very high, but as a startup, being able to leverage cost control tools like ParkMyCloud to save 20-30% will be important – every dollar counts in a startup. We have been using AWS since our inception and haven’t had to move into the paid tier yet – Bezos has created a truly disruptive business model that enables the startup community to rapidly prototype and test their theses by quickly and inexpensively getting to market.

Read more ›

Implementing a Cloud Cost Management Tool

What do people say when they evaluate and implement a cloud cost management tool? Are they concerned with automation? Projected savings? Or are they interested in the ease of access of the product? An experience that I had when I started with ParkMyCloud shed some light on these questions for me.

One of the first tasks that I was assigned as an intern this summer at ParkMyCloud was to go through our Capterra reviews and pull out some compelling customer quotes that helped answer those questions. What I found interesting in reading the quotes is that what’s important to you depends on the size and type of company, your role in that company, and the outcome you’re looking for. We went back and talked to several of the people who left reviews:

One customer was excited to see how easy it was to start saving with ParkMyCloud:

“Try it out as soon as you can if you’re running on AWS and watch the savings add up!”

-John K. Manager of Solutions Analytics

He even followed up with the long-term savings he was able to get:

“ParkMyCloud is an excellent service that allows us to easily manage our AWS instances so that we’re only paying for our AWS instances when we’re actively using them. We were able to save almost 50% off of our monthly bill after about only 20 minutes of setup!”

-John K. Manager of Solutions Analytics

Other customers were excited about the usability of ParkMyCloud. They viewed it as incredibly important that just about anyone can use the product for cloud cost management – you don’t have to be an IT pro. In fact, it usually takes only around 15 minutes to get going with ParkMyCloud!

“As a tool that you can give to ANYONE in your organization, and have them be responsible for their own AWS costs, it is certainly unmatched. I’ve given it to execs who had no technical ability at all, and told them “here you go – you can only control your specific servers, design a power schedule that works for you”, and they’ve done it with zero assistance.”

-Reed S.

Our role-based access controls, which allow multiple members of a team – or different teams – to dictate their own schedules, were worth mentioning for some of our reviewers:

“The ability to distribute rights to groups has made the ability for our teams to take advantage of individual application sleep schedules.”

-Edward P. Software Engineer

So what do people say when they are implementing a cloud cost management tool? Every CFO says that it needs to happen today, because those cloud bills aren’t getting any smaller. Every manager says that the tool needs to make it easy to implement governance on a per-team basis. Every developer says they need something that works right out of the box without getting in their way. Whatever your role might be, ParkMyCloud will have you saying “It’s about time!” Try it out for free today!

Read more ›

AWS vs Google Cloud Pricing – A Comprehensive Look

Back in May 2017 I wrote a very popular blog about Cutting through the AWS and Azure Cloud Pricing Confusion.

Since ParkMyCloud also provides cost control for Google Cloud Platform (GCP) resources, I thought it might be useful to compare AWS vs Google Cloud pricing. In addition, I will take a look at the terminology and billing differences. NOTE: there are other “services” involved when looking at your overall bill, such as networking, storage and load balancing. I am going to focus mainly on compute charges in this article.

AWS and GCP Terminology Differences

As mentioned before, AWS’s compute service is called “Elastic Compute Cloud” (EC2), and the virtual servers are called “instances”.

In GCP, the service is referred to as “Google Compute Engine” (GCE), and the servers are also called “instances”. However, in GCP there are “preemptible” and non-preemptible instances. Non-preemptible instances are the same as AWS “on demand” instances.

Preemptible instances are similar to AWS “spot” instances, in that they are a lot less expensive, but can be preempted with little or no notice. The difference is that GCP preemptible instances can actually be stopped without being terminated. That is not true for AWS spot instances.

Flocks of these instances spun up from a snapshot according to scaling rules are called “auto scaling groups” in AWS.

A similar concept can be created within GCP using “instance groups”. However, instance groups are really more of a “stack”, created using an “instance group template”. As such, they are more closely related to AWS CloudFormation stacks.

AWS and GCP Compute Sizing

Both AWS and GCP have a dizzying array of instance sizes to choose from, and doing an apples-to-apples comparison between them can be quite challenging. These predefined instance sizes are based upon number of virtual cores, amount of virtual memory and amount of virtual disk.

Each provider groups these predefined sizes into different categories.

AWS offers:

  • Free tier – inexpensive, burst performance (t2 family)
  • General purpose (m3/m4 family)
  • Compute optimized (c4 family)
  • GPU instances (p2 family)
  • FPGA instances (f1 family)
  • Memory optimized (x1, r3/r4 family)
  • Storage optimized (i3, d2 family)

GCP offers the following predefined types:

  • Free tier – inexpensive, burst performance (f1/g1 family)
  • Standard (n1-standard family)
  • High memory (n1-highmem family)
  • High CPU (n1-highcpu family)

However, GCP also allows you to make your own custom machine types, if none of the predefined ones fit your workload. You pay for uplifts in CPU/Hr and memory GiB/Hr. You can also add GPUs and premium processors as uplifts.
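
As a sketch of how that pricing composes, consider the function below. The per-unit rates are placeholders, not Google’s actual prices (check the GCP pricing page for current figures):

VCPU_RATE_PER_HOUR = 0.033     # illustrative $/vCPU/hr
MEMORY_RATE_PER_HOUR = 0.0045  # illustrative $/GiB/hr

def custom_machine_hourly_price(vcpus, memory_gib):
    # A custom machine type is priced as the sum of its CPU and memory
    # uplifts; GPUs and premium processors would add further terms.
    return vcpus * VCPU_RATE_PER_HOUR + memory_gib * MEMORY_RATE_PER_HOUR

# For example, a 4 vCPU / 12 GiB custom machine:
print("${:.4f}/hr".format(custom_machine_hourly_price(4, 12)))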

Both providers take marketing liberties with things like memory and disk sizes. For example, AWS lists its memory size in GiB (base 2) and disk size in GB (base 10).

GCP reports its memory size and disk size as GB. However, to make things really confusing, this is what they say on their pricing page: “Disk size, machine type memory, and network usage are calculated in gigabytes (GB), where 1 GB is 2³⁰ bytes. This unit of measurement is also known as a gibibyte (GiB).”

This, of course, is pure nonsense. A gigabyte (GB) is 10⁹ bytes. A gibibyte (GiB) is 2³⁰ bytes. The two are definitely NOT equal. It was probably just a typo.

If you look at what is actually delivered, neither seems to match what is shown on their pricing pages. For example, an AWS t2.micro is advertised as having 1 GiB of memory. In reality, it is 0.969 GiB (using “top”).

For GCP, their f1-micro is advertised as “0.6 GB”. Assuming they simply have their units mixed up and “GB” should really be “GiB”, they actually deliver 0.580 GiB. So, both round up, as marketing/sales people are apt to do.

With respect to pricing, this is how the two seem to compare, looking at some of the most common “workhorses” and focusing on CPU, memory and cost. (One would have to run actual benchmarks to compare more accurately):

aws vs. google cloud pricing

The bottom line:

In general, for most workloads, AWS is less expensive on a CPU/Hr basis. For compute-intensive workloads, GCP instances are less expensive.

Also, as you can see from the table, both providers charge uplifts for different operating systems, and those uplifts can be substantial! You really need to pay attention to the fine print. For example, GCP charges a 4 core minimum for all their SQL uplifts (yikes!). And, in the case of Red Hat Enterprise Linux (RHEL) in GCP, they charge you a 1 hour minimum for the uplifts and in 1 hour increments after that. (We’ll talk more about how the providers charge you in the next section.)

AWS vs. Google Cloud Pricing – Examining the Differences

Cost/Hr is only one aspect of the equation, though. To better understand your monthly bill, you must also understand how the cloud providers actually charge you. AWS prices its compute time by the hour, with a 1 hour minimum. If you start an instance and run it for 61 minutes then shut it down, you get charged for 2 hours of compute time.

Google Compute Engine pricing is also listed by the hour for each instance, but they charge you by the minute, rounded up to the nearest minute, with a 10 minute minimum charge. So, if you run for 1 minute, you get charged for 10 minutes. However, if you run for 61 minutes, you get charged for 61 minutes. On the surface, this sounds very appealing (and makes me want to wag my finger at AWS and say, “shame on you, AWS”).
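
Here is that difference expressed as a quick sketch, assuming the 2017-era billing rules described above:

import math

def aws_billed_hours(runtime_minutes):
    # AWS rounds each run up to the next full hour, with a 1-hour minimum.
    return max(1, math.ceil(runtime_minutes / 60.0))

def gcp_billed_minutes(runtime_minutes):
    # GCP rounds up to the next minute, with a 10-minute minimum.
    return max(10, math.ceil(runtime_minutes))

print(aws_billed_hours(61))    # 2 (hours billed)
print(gcp_billed_minutes(61))  # 61 (minutes billed)
print(gcp_billed_minutes(1))   # 10 (minutes billed)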

You also really need to pay attention to the use case and the comparable instance prices. Let me give you a concrete example: here is a graph of 6 months’ worth of data from an m4.large instance. Remember that our goal at ParkMyCloud is to help you “park” non-production instances automatically, when they are not being used, to save you money.

This instance is on a ParkMyCloud parking schedule, where it is RUNNING from 8:00 a.m. to 7:00 p.m. on weekdays and PARKED evenings and weekends. This instance, assuming Linux pricing, costs $0.10 per hour in AWS. From November 6, 2016 until May 9, 2017, this instance ran for 111,690 minutes. This is actually about 1,862 hours, but AWS charged for 1,922 hours and it cost $192.20 in compute time.

aws vs. google cloud pricing

Why the difference? ParkMyCloud has a very fast and accurate orchestration engine, but when you start and stop instances, the cloud provider and network response can vary from hour-to-hour and day-to-day, depending on their load, so occasionally things will run that extra minute. And, even though this instance is on a parking schedule, when you look at the graph, you can see that the user took manual control a few times, perhaps to do maintenance. Stuff happens!

What would it have cost to run the similar instance in GCP? If you look at the comparable GCP instance (the n1-standard-2), it costs $0.1070/hour. So, this workload running in GCP would have cost $199.18 (not including Sustained Use Discounts). Since this instance really only ran 42.6% of the time (111,690 minutes out of 262,140 minutes), it would qualify for a partial Sustained Use Discount. With those discounts, the actual cost would have been about $182.72. This is about $10 cheaper than AWS, even though the per-hour cost for AWS was lower. That may not seem like much, but if you have hundreds or thousands of instances, it adds up.
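
For the curious, here is a rough reconstruction of that math. The tier multipliers reflect GCP’s sustained use discount model at the time (each successive quarter of the billing period is charged at 100%, 80%, 60%, then 40% of the base rate), and treating the whole six-month window as a single discount period is a simplification for illustration:

AWS_RATE = 0.10    # m4.large, Linux, $/hr
GCP_RATE = 0.107   # n1-standard-2, $/hr

run_minutes = 111690
period_minutes = 262140

print("AWS: ${:.2f}".format(1922 * AWS_RATE))  # 1,922 billed hours = $192.20

gcp_hours = run_minutes / 60.0
print("GCP list: ${:.2f}".format(gcp_hours * GCP_RATE))  # ~$199.18

# Apply the tiered sustained use discount block by block.
block_hours = (period_minutes / 60.0) * 0.25
remaining, cost = gcp_hours, 0.0
for multiplier in (1.0, 0.8, 0.6, 0.4):
    used = min(remaining, block_hours)
    cost += used * GCP_RATE * multiplier
    remaining -= used
print("GCP with SUD: ${:.2f}".format(cost))  # ~$182.72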

AWS Reserved Instances vs GCP Committed Use

Both providers offer deeper discounts off their normal pricing, for “predictable” workloads that need to run for sustained periods of time, if you are willing to commit to capacity consumption upfront. AWS offers Reserved Instances. Google offers Committed Use Discounts (currently in beta). An in-depth comparison of these is beyond the intent of this blog (and you have already been very patient, if you made it this far). Therefore, I’ll reserve that discussion for a future blog.

Conclusion

If you are new to public cloud, once you get past all the confusing jargon, the creative approaches to pricing and the different ways providers charge for usage, the actual cloud services themselves are much easier to use than legacy on-premise services.

The public cloud services do provide much better flexibility and faster time-to-value. The cloud providers simply need to get out of their own way. Pricing is but one example where AWS and GCP could stand to make things a lot simpler, so that newcomers can make informed decisions.

When comparing AWS vs. Google Cloud pricing, AWS EC2 on-demand pricing may on the surface appear to be more competitive than GCP pricing for comparable compute engines. However, when you examine specific workloads and factor in Google’s more enlightened approach to charging for CPU/Hr time and their use of Sustained Use Discounts, GCP may actually be less expensive. AWS really needs to get in line with both Azure and Google, who charge by the minute and have much smaller minimums. Nobody likes being charged extra for something they don’t use.

In the meantime, ParkMyCloud will continue to help you turn off non-production cloud resources when you don’t need them, and help save you a lot of money on your monthly cloud bills, regardless of which public cloud provider you use.

Read more ›

Was the Acquisition of Cloudyn About the Need to Manage Microsoft Azure? Sort of.

Perhaps you heard that Microsoft recently acquired Cloudyn in order to manage Microsoft Azure cloud resources – along with, of course, Amazon Web Services (AWS), Google Cloud Platform (GCP), and others. Why? Well, the IT landscape is becoming more and more a multi-cloud landscape. Originally this multi-cloud (or hybrid cloud) approach was about private and public cloud, but as we recently wrote here, the strategy we hear about from large enterprises is increasingly to leverage multiple public clouds for a variety of reasons – risk management, vendor lock-in, and workload optimization seem to be the three main ones.

That said, according to TechCrunch and quotes from Microsoft executives, the acquisition is meant to provide Microsoft with a cloud billing and management solution that gives it an advantage over competitors (particularly AWS and GCP) as companies continue to pursue, drum roll please … a multi-cloud strategy. Additional benefits for Microsoft include visibility into usage patterns, adoption rates, and other cloud-related data points that it can leverage in the ‘great cloud war’ to come … a Game of Thrones reference, of course.

Why are we writing about this? A couple of reasons. One, of course, is that this is a relevant event in the cloud management platform (CMP) space, as it is really the first big cloud visibility and governance acquisition to date. The other acquisitions by Dell (Enstratius), Cisco (CliQr), and CSC (ServiceMesh), for example, were more orchestration and infrastructure platforms than reporting tools. Second, this points to the focus enterprises have on cost visibility, cost management and governance as they look to optimize their spend and usage, as one does with any utility. And third, this confirms that a common pushback from enterprises asked to adopt Azure more widely has been, “I am already using AWS, I don’t want to manage through yet another screen / console” – and that multi-cloud visibility and governance helps solve that problem.

Now, taking this one step further: the visibility, recommendations, and reporting are all well and good, but what about the actions that must be taken on those reports, and integration into enterprise DevOps processes for automation and continuous cost control? That’s where something like Cloudyn falls short, and where a platform like ParkMyCloud kicks in:

  • Multi-cloud Visibility and Governance – check
  • Single-Sign On (SSO) – check
  • REST API for DevOps Automation – check
  • Policy Engine for Automated Actions (parking) – check
  • Real-time Usage and Savings data – check
  • Manage Microsoft Azure (AWS + GCP) – check

The next step in cloud cost control is automation and action, not just visibility and reporting. Let technology automate these tasks for you instead of just telling you about them.

Read more ›

AWS Slack Integration for Interactive Cost Control

Today we’re happy to announce a new chatbot for AWS Slack integration that allows you to fully interact with ParkMyCloud without having to access the GUI.  Combined with the recent addition of Notifications in ParkMyCloud, you can manage your continuous cost control from the Slack channels you live in every day!

Developers and operations engineers are increasingly utilizing ChatOps to manipulate their environments and help users self-manage the servers and databases they require for their work. There are a few different chat systems and bot platforms available, but the most commonly used today is Slack. By setting up the SlackBot to interact with your ParkMyCloud account, you can allow users to assign schedules, temporarily override parked instances, or toggle instances off or on as needed.

Combine this with notifications from ParkMyCloud, and you can have full visibility into your cost control initiatives right from your standard Slack chat channels.  Notifications allow you to have ParkMyCloud post messages for things like schedule changes or instances that are being turned off automatically.  Now, with the new ParkMyCloud Slackbot, you can reply back to those notifications to snooze the schedule, turn a system back on temporarily, or assign a new schedule.

The chatbot is open-source, so you can feel free to modify the bot as necessary to fit your environment or use cases.  It’s written in Python using the slackclient library, but even if you’re not a Python expert, you’ll find it easy to modify to suit your needs.  We’d love to have you send your ideas and modifications back to us for rapid improvement.
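
To give a flavor of how little code a bot like this needs, here is a minimal skeleton in the same spirit, using the slackclient library. The “park” command and the comment where the ParkMyCloud call would go are placeholders – for the real thing, see the open-source repository:

import time
from slackclient import SlackClient

sc = SlackClient("xoxb-your-bot-token")  # placeholder token

if sc.rtm_connect():
    while True:
        for event in sc.rtm_read():
            # React only to ordinary chat messages.
            if event.get("type") != "message" or "text" not in event:
                continue
            if event["text"].startswith("park "):
                instance = event["text"].split(" ", 1)[1]
                # ... call the ParkMyCloud REST API here to park it ...
                sc.api_call(
                    "chat.postMessage",
                    channel=event["channel"],
                    text="Parked {}".format(instance),
                )
        time.sleep(1)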

 

If you haven’t already signed up for ParkMyCloud, then start a free trial and get the Slackbot hooked up for easy AWS Slack integration.  You’ll find that ParkMyCloud can make continuous cost control easy and help reduce your cloud spend, all while integrating with your favorite DevOps tools!

Read more ›

New on ParkMyCloud: Notifications via Slack and Email

New on ParkMyCloud: you can now receive notifications about your environment and ParkMyCloud account via email as well as Slack and other webhooks. We’re happy to deliver this user-requested feature, and look forward to an improved user experience.

The notifications are divided into system-level notifications and user-level notifications, as outlined below.

Administrators: Configure Notifications of Account-Level Actions via Slack/Webhooks

Administrators can now set up shared account-level notifications for parking actions and/or system errors. You can choose to receive these actions via Slack or a custom webhook.

These notifications include information about:

  • Parking Actions
    • Resource stop/start as a result of a schedule
    • Manual resource start/stop via toggles
    • Manual schedule snoozes
    • Attach/detach of schedules to resources
    • Manual changes to schedules
  • System Errors
    • Permissions issues, such as a lack of permissions on an instance or credential that prevents parking actions
    • Errors related to your cloud service provider, for example, errors due to service outages.

For instructions on how to configure these notifications, please see this article on our support portal.

All Users: Get Notified via Email

While system-level notifications must be configured by an administrator, individual ParkMyCloud users can choose to set up email notifications as well. These notifications include the same information listed above for the teams you choose.

Email notifications will be sent as a rollup every 15 minutes. If no actions occur, you will not receive an email. For instructions on how to configure these notifications, please see this article on our support portal.

Let Us Know What You Think

To our current users: we look forward to your feedback on the notifications, and welcome any suggestions you have to improve the functionality and usability of ParkMyCloud.

If you aren’t yet using ParkMyCloud, you can get started here with a free trial.

Read more ›

Optimizing Dev Instance Costs on Side Projects in Public Clouds

While ParkMyCloud was exhibiting at DeveloperWeek New York 2017, we had a chance to talk to a lot of developers about how to optimize dev instance costs on their personal projects outside of their normal day job. When asked where they host these development instances, I heard time and time again how the major public cloud providers (like AWS, Azure, and Google Compute) can be overly expensive to run for such small projects. Many choose to use something like Heroku or spin up local VMs just to save some money, even though it might be less convenient – cost trumps convenience when it’s your own money.

While chatting about these side projects, I got the chance to talk about how some ParkMyCloud customers use the “always parked” schedule within ParkMyCloud to have development instances ready to go whenever they want to work on them. By clicking a single button in the ParkMyCloud interface, you can temporarily disable the parking schedule for the specific length of time that you need the dev instance powered on, and ParkMyCloud will automatically turn the instance back off when you’re done, so you don’t have to remember to shut everything down.

Many developers I talked to were shocked that they hadn’t thought of an approach like this for their development instances! This method allows someone to run their services on standard public cloud platforms without worrying about wasted cloud spend, which lets them develop anywhere and anytime. A couple of folks said they like that the cloud lets them connect from their desktop, laptop, or even a thinner client like a tablet or chromebook.

Here’s the kicker that really pushed these developers over the edge — ParkMyCloud has a fully free tier, no credit card required.  This option is perfect for personal projects or small teams, as it allows unlimited instances to be under management at no cost and with no installation.  Setup is simple, and the power of the platform is evident when you first log in.

The simplicity of the parking and scheduling features of ParkMyCloud allow the freedom to use AWS, Azure, or Google Compute for side projects along with day-to-day work tasks.  By freeing you from worrying about dev instance costs or wasted money, ParkMyCloud can help enable you to make great products and tools while doing what you love.  So take that idea you’ve always wanted to work on, fire up some development instances in your favorite public cloud, and let ParkMyCloud manage your environment for free today!

Read more ›

Top Cloud Computing Trends: Cloud Cost Control

Enterprise Management Associates (EMA) just released a new report on the top cloud computing trends for hybrid cloud, containers, and DevOps in 2017. With this guide, they aim to provide recommendations to enterprises on how you can implement products and processes in your business to meet the top priority trends.

First Priority Among Cloud Computing Trends: Cost Control

Of the 260 companies interviewed in EMA’s study, 42% named “cost control” as their number one priority. Here at ParkMyCloud, we weren’t surprised to hear that. As companies mature in their use of the cloud, cost control moves to the top of the list as their number one cloud-related priority.

EMA has identified a few key problems that contribute to the need for cloud cost control:

  • Waste – inefficient use of cloud resources
  • Unpredictable bills – cloud bills are higher than expected
  • Vendor lock-in – inability to move away from a cloud provider due to contractual or technological dependencies

Related to this is another item on EMA’s list of cloud computing trends: the demand for a single pane of glass for monitoring the cloud. This goes hand-in-hand with the need for cost control, as well as concerns about governance: if you can’t see it, you don’t know there’s a problem. However, it’s important to keep in mind that a pane of glass is only one step toward reaching a solution. You need to actually take action on your cloud environment to keep costs in control.

How to Implement Changes to Control Costs

To actually implement changes in your environment and control costs, EMA has provided a starting recommendation:

Consider simple tools with large impact: Evaluate tools that are quick to implement and help harvest “low-hanging fruit.”

In fact, EMA provided a list of the top 3 vendors it recommends as a Rapid ROI Utility – among which it included ParkMyCloud.

Cost Control among top cloud computing trends

EMA recommends these top tools, particularly the “rapid ROI tools,” as a good starting point for controlling cloud costs, as each of the tools can easily be tried out and the results verified in a brief period of time. (If you’re interested in trying out ParkMyCloud in your environment, we offer a 14-day free trial, during which you get to pocket the savings and try out a variety of enterprise-grade features like SSO, a Policy Engine, and API automation for continuous cost control.)

Download the report here to check out the full results from EMA.

Read more ›

New: Park AWS RDS Instances with ParkMyCloud

Now You Can Park AWS RDS Instances with ParkMyCloud

We’re happy to share that you can now park AWS RDS instances with ParkMyCloud!

AWS just recently released the ability to start and stop RDS instances. Now with ParkMyCloud, you can automate RDS start/stop on a schedule, so your databases used for development, testing, and other non-production purposes are only running when you actually need them – and you only pay for the hours you use. This is the first parking feature on the market that’s fully integrated with AWS’s new RDS start/stop capability.

You can also use ParkMyCloud’s policy engine to create rules that automatically assign your RDS instances to parking schedules and to teams, so they’re only accessible to the users who need them.

Why it Matters

Our customers who use AWS have long asked for the ability to park RDS instances.

In fact, RDS is the biggest area of cloud spend after compute, accounting for about 15-20% of an average user’s bill. The savings users can enjoy from parking RDS will be significant. On average, ParkMyCloud users save $140 per parked instance per month on compute – and as RDS instances cost significantly more per hour, the savings will be proportionally higher.

“We’ve used ParkMyCloud for over a year to reduce our EC2 spend, enjoying a 13X return on our yearly license fee – it’s literally saved us thousands of dollars on our AWS bill. We look forward to saving even more now that ParkMyCloud has added support for RDS start/stop!” – Anthony Suda, Release Manager/Senior Network Manager, Sundog.

How to Get Started

It’s easy to get started and park AWS RDS instances with ParkMyCloud.

If you don’t yet use ParkMyCloud, you can try it now for free. We offer a 14-day free trial of all ParkMyCloud features, after which you can choose to subscribe to a premium plan or continue parking your instances using ParkMyCloud’s free tier.

If you already use ParkMyCloud, you’ll need to review your AWS permissions and ParkMyCloud policies, and then turn on the RDS feature via your settings page. Please see more information about this on our support page.

As always, we welcome your feedback about this new addition to ParkMyCloud, and anything else you’d like to see in the future.

Happy parking!

Read more ›

ParkMyCloud Releases Parking to Automate AWS RDS Cost Savings

Amazon Web Services Customers Can Now Schedule Stop/Start for AWS RDS Instances with ParkMyCloud’s Automated Cost Savings Platform

June 20, 2017 (Dulles, VA) – ParkMyCloud, the leading enterprise platform for continuous cost control in public cloud, today announced that it now offers “parking” for Amazon Web Services (AWS) Relational Database Service (RDS) instances. With parking, users can automatically put resources on on/off schedules, so they only pay for the hours they’re actually using. This is the first parking feature on the market that’s fully integrated with AWS’s recently launched RDS start/stop capability.

RDS is the biggest area of cloud spend after compute, accounting for about 15-20% of an average user’s bill. In fact, the savings users can enjoy from parking RDS will be significant. On average, ParkMyCloud users save $140 per parked instance per month on compute – and as RDS instances cost significantly more per hour, the savings will be proportionally higher.

“Adding the ability to ‘park’ RDS is a great enhancement to ParkMyCloud’s platform which already helps us reduce our monthly AWS EC2 spend. It’s actually a feature we’ve asked for, so we appreciate how quickly they were able to get this out the door,” said Greg Austin, Global IT DevOps Automation Manager at RateMyAgent.

“Our customers made it clear they wanted RDS parking, and we’re happy to deliver it after working with AWS on the integration,” said ParkMyCloud CEO Jay Chapel. “We’re focused on continuing to build the best cloud cost control platform in the market, so when our customers speak, we listen.”

In addition to AWS, ParkMyCloud also supports Microsoft Azure and Google Cloud Platform. As with the RDS parking enhancement, ParkMyCloud will be adding automated cost-savings functionality for services beyond compute across all three providers.

About ParkMyCloud

ParkMyCloud is a SaaS platform that helps enterprises optimize their public cloud spend by automatically reducing resource waste — think “Nest for the cloud”. ParkMyCloud has helped customers such as McDonald’s, Capital One, Unilever, Fox and Sage Software dramatically cut their cloud bills by up to 65%, delivering millions of dollars in savings. For more information, visit http://www.parkmycloud.com.

Press Contact

Katy Stalcup

kstalcup@parkmycloud.com

(571) 334-3291

Read more ›

Cloud Access Control Policy – How to Balance Security and Access

Cloud access control policy can be a tricky balance. On the one hand, cloud security is a top concern among many cloud users we talk to. On the other, the ease, flexibility, and speed of the cloud can be sacrificed when users aren’t given the access they need to the resources they use.

Cloud Access Control Policy & Cloud Management Platforms

Internal cloud access control policy is a matter that can be determined within each organization – but what about when an organization wants to use an external cloud management platform? As mentioned, we constantly hear that cloud security ranks #1 or close to it in terms of enterprise priorities, yet when we look around we see a lot of divergence in what different cloud management products require.

Some require literally the keys to the kingdom when you wish to use their system’s capabilities. You might just want to run some simple analytical reports, but the vendor starts from the perspective of requiring broad-ranging policy access, way beyond what’s required to do that job.

We have begun a survey of policy requirements across cloud management platforms, and from our research so far, it seems that the “principle of least privilege” is not as widely adopted in the market as it should be.

The Principle of Least Privilege

In the world of cyber security there is a widely-known cloud access control policy concept called “the principle of least privilege.”  In essence, this concept means that users of any system should only be provided with the privileges that they need to do their job. In the world of on-demand cloud computing where resources are spun up and access shared within seconds, this principle is often stretched beyond its limit.

When designing ParkMyCloud, this concept was top-of-mind. We understood the need to assure clients that controlling their infrastructure with our product made their environments safer, not more vulnerable. What this means in practice is minimizing the number of policy permissions any user of the system needs to have to optimize and control their public cloud.

Each public cloud provider (AWS, Azure, Google Cloud Platform) has a unique set of policy controls used to manage how people access and utilize their company’s cloud infrastructure. These range from, at the low end, just allowing people to view things (and not create, change or terminate) to, in essence, giving users the keys to the kingdom.
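
As a concrete illustration of the low end done right, a least-privilege policy for a parking tool on AWS might grant only the ability to list, stop and start instances. This is a simplified sketch for illustration, not ParkMyCloud’s actual published policy (see our support documentation for that):

import json

least_privilege_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "ec2:DescribeInstances",  # see what exists
                "ec2:StartInstances",     # start parked instances
                "ec2:StopInstances",      # park running instances
            ],
            "Resource": "*",
        }
    ],
}

print(json.dumps(least_privilege_policy, indent=2))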

When evaluating and subscribing to cloud tools, you should demand that access controls are tightly enforced. ParkMyCloud requests the bare minimum of permissions needed to save you money in the cloud, so you can be sure that your infrastructure remains secure and optimized for cost control. Keep your environment secure while still providing users with the limited access they need to do their jobs efficiently and cost-effectively.

Read more ›

Is a multi-cloud strategy really just a risk mitigation decision?

Now that ParkMyCloud supports AWS, Azure, and Google, we’re starting to see more businesses who utilize a multi-cloud strategy. The question this raises is: why is a multi-cloud strategy important from a functional standpoint, and why are enterprises deploying this strategy?

To answer this, let’s define “multi-cloud”, as this means different things to different people. I appreciated this description from TechTarget, which describes multi-cloud as:

the concomitant use of two or more cloud services to minimize the risk of widespread data loss or downtime due to a localized component failure in a cloud computing environment. …. A multi-cloud strategy can also improve overall enterprise performance by avoiding “vendor lock-in” and using different infrastructures to meet the needs of diverse partners and customers.

From our conversations with some cloud gurus and our customers, a multi-cloud strategy boils down to:

  • Risk Mitigation – low priority
  • Managing vendor lock-in (price protection) – medium priority
  • Optimizing where you place your workloads – high priority

Risk Mitigation 

Looking at our own infrastructure at ParkMyCloud, we use AWS, including services like RDS, Route 53, SNS and SES. In a risk mitigation exercise, would we look for like services in Azure, and try to go through the technical work of mapping a 1:1 fit and building a hot failover in Azure? Or would we simply use a different AWS region – which takes fewer resources and less time?

You don’t actually need multi-cloud to do hot failovers, as you can instead use different regions within a single cloud provider – but that’s of course betting on the fact that those regions won’t go down simultaneously. In our case we would have major problems if multiple AWS regions went down simultaneously, but if that happens we certainly won’t be the only one in that boat!

Furthermore, to do a hot failover from one cloud provider to another (say, between AWS and Google), would require a degree of working between the cloud providers and infrastructure and application integration that is not widely available today.

Ultimately, risk mitigation just isn’t the most significant driver for multi-cloud.

Vendor Lock-in

What happens when your cloud provider changes its pricing? Or your CIO says you will never be beholden to one IT infrastructure vendor, like Cisco on the network or HP in the data center? If you are tied to a single provider, you lose your negotiating leverage on price and support.

On the other hand, look at Salesforce. How many enterprises use multiple CRMs?

Do you then have to design and build your applications to undertake a multi-cloud strategy from the get-go, so that transitioning everything to a different cloud provider will be a relatively simple undertaking? The complexity of moving your applications across clouds over a couple of months is nothing compared to the complexity of doing a real-time hot failover when your service is down. For enterprises this might be doable, given enough resources and time. Frankly, we don’t see much of this.

Instead, we see customers using a multi-cloud strategy to design and build applications in the clouds best suited to those applications. By the way — you can then use this leverage to help prevent vendor lock-in.

Workload Optimization

Hot failovers may come to mind first when considering why you would want to go multi-cloud, but what about normal operations, when your infrastructure is running smoothly? Having access to multiple cloud providers lets your engineers pick the one that is most appropriate for the workload they want to deploy. By avoiding the “all or nothing” approach, IT leaders gain greater control over their different cloud services. They can pick and choose the product, service or platform that best fits their requirements, in terms of time-to-market or cost effectiveness, then integrate those services. This approach may also help avoid problems that arise when a single provider runs into trouble!

A multi-cloud strategy addresses several inter-related problems. It’s not just a technical avenue for hot failover. It includes vendor relationship management and the ability to optimize your workloads based on the strengths of your teams and each CSP’s infrastructure.

By the way — when you actually deploy your multi-cloud strategy, make sure you have a management plan in place upfront. Too often, we hear from companies who deploy on multiple clouds but don’t have a way to see or compare them in one place. Make sure you have a multi-cloud dashboard that provides visibility across cloud providers, their locations and your resources, for proper governance and control, so you can get the most benefit out of a multi-cloud infrastructure.

Read more ›

Announcing Google Cloud Platform Cost Control with ParkMyCloud

Now Supporting Google Cloud Platform Cost Control

Today, we’re excited to announce that ParkMyCloud now supports Google Cloud Platform!

Amazon Web Services (AWS) customers have been using ParkMyCloud for automated cost control since the product launch in 2015, and Azure customers have enjoyed the same capabilities since earlier this year. With ParkMyCloud, you can automate on/off scheduling to ensure your resources are only running when you actually need them. Customers such as McDonald’s, Fox, Capital One, Sage Software, and Wolters Kluwer have already saved millions.

If you use multiple public cloud providers, you can manage them together on a single dashboard.

Why it Matters

With the addition of Google Cloud Platform, ParkMyCloud now provides continuous cost control for the three largest cloud providers in the $23 billion public cloud market. This means ParkMyCloud enables enterprises to eliminate wasted cloud spend – a $6 billion problem in 2017. See more in our official press release.

How Does ParkMyCloud Work on Google Cloud Platform?

It’s simple to get started using ParkMyCloud to manage your Google compute resources:

  1. Connect – Create a ParkMyCloud account – no credit card required – and connect to your Google Cloud Platform account
  2. Manage – Discover and manage all your cloud resources in a single view
  3. Park – Just click the schedule to automatically “Park” (stop) and start resources based on your needs.

If you’re new to ParkMyCloud, please see these additional resources:

  • ParkMyCloud Single Sign-On Integrations – integrate with Active Directory, Centrify, Google G-Suite, Okta, OneLogin, or Ping Identity for single sign-on to ParkMyCloud
  • Zero-Touch Parking – how to use the ParkMyCloud policy engine to create rules for schedules to be automatically applied
  • Resource Group Parking – create “logical groups” for your resources for sequenced startup and shutdown

See it In Action

We’re happy to schedule a demo for you to see ParkMyCloud in action – if you’re interested, please contact us.

Try Now for Free

You can get started now with a free 14-day trial of ParkMyCloud, with full access to premium features.

After your trial expires, you can choose to continue using the core parking functionality for free (forever!), or upgrade to use premium features such as the API, advanced reporting and SSO. Happy parking!

Read more ›

ParkMyCloud Adds Google Cloud Platform to Automated Cost Savings Tool

Customers to Realize Billions of Dollars in Reduced Cloud Spend and Waste Through Continuous Cost Control

June 6, 2017 (Dulles, VA) – ParkMyCloud, the leading enterprise app for optimizing and reducing cloud spend, today announced that it now supports auto-scheduling for Google Cloud Platform in addition to Amazon Web Services (AWS) and Microsoft Azure. ParkMyCloud launched its “Nest for the cloud” platform in September 2015 to enable public cloud users to automatically turn off idle instances, saving 20-60% on their cloud bills every month. The company has seen rapid customer growth, and customers include companies such as McDonald’s, Fox, Capital One, Sage Software, and Wolters Kluwer.

With the addition of Google Cloud Platform, ParkMyCloud now provides continuous cost control for the three largest cloud providers in the $23 billion public cloud market. This means ParkMyCloud enables enterprises to eliminate wasted cloud spend – a $6 billion problem in 2017 (with the growing public cloud market, that’s a $17 billion problem by 2020).

451 Research Vice President William Fellows said of the announcement, “ParkMyCloud is the first vendor in the single-purpose cost control space to support all three major public cloud providers, which is great news for Azure and Google Cloud users. As companies mature in their use of public cloud and Google’s cloud becomes more relevant, cost control becomes even more of a priority.”

“Google has made enterprises a priority target for their cloud offering this year, so we knew it was imperative that we support these users as quickly as possible,” said ParkMyCloud CEO Jay Chapel. “We’ve helped Fortune 500 and multinational corporations achieve millions in savings on AWS and Azure, and we’re looking forward to doing the same for users of Google Cloud Platform.”

Tosin Ojediran, a DevOps engineer at a financial technology company, explains how the savings and immediate usability of ParkMyCloud made a difference for his organization, saying, “We sent the savings numbers to our CFO, and he said, ‘wow, this is awesome.’ ParkMyCloud was up and running in 5-10 minutes, it was easy to integrate, easy to use, and delivers what it promises.”

With the major milestone of Google Cloud Platform support achieved, ParkMyCloud plans to broaden its offerings within each cloud provider by adding cost-savings functionality for services beyond compute, while continuing to provide innovative features that allow customers to integrate continuous cost control seamlessly into their DevOps processes.

About ParkMyCloud

ParkMyCloud is a SaaS platform that helps enterprises optimize their public cloud spend by automatically reducing resource waste — think “Nest for the cloud”. ParkMyCloud has helped customers such as McDonald’s, Capital One, Unilever, Fox and Sage Software dramatically cut their cloud bills by up to 65%, delivering millions of dollars in savings. For more information, visit http://www.parkmycloud.com.

Read more ›

Start and Stop RDS Instances on AWS – and Schedule with ParkMyCloud

Amazon Web Services shared today that users can now start and stop RDS instances – check out the full announcement on their blog.

This is good news for cost-conscious engineering teams. Until now, databases were generally left running 24×7, even if they were only used during working hours for testing and staging purposes. Now, they can be turned off, so you’re not charged for time you’re not using. Nice!

Keep in mind that stopping the RDS instances will not bring the cost to zero – you will still be charged for provisioned storage, manual snapshots and automated backup storage.
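
If you want to try the new capability by hand, here is a minimal sketch using boto3, whose RDS client gained stop_db_instance and start_db_instance calls alongside this announcement (the instance identifier is a placeholder):

import boto3

rds = boto3.client("rds")

# Stop a non-production database at the end of the day...
rds.stop_db_instance(DBInstanceIdentifier="dev-database")

# ...and start it again when it's needed.
rds.start_db_instance(DBInstanceIdentifier="dev-database")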

Now, what if you want to start and stop RDS instances on an automated schedule to ensure they’re not left running when they’re not needed? Coming soon, you’ll be able to with ParkMyCloud!

Start and Stop RDS Instances on a Schedule with ParkMyCloud

Since ParkMyCloud was first released, customers have been asking us for the ability to park their RDS instances in the same way that they can park EC2 instances and auto scaling groups.

The logic to start/stop RDS instances using schedules is already in the production code for ParkMyCloud. We have been patiently waiting for AWS to officially announce this capability, so that we could turn the feature ON and release it to the public. That day is finally here!

Our development team has some final end-to-end testing to complete, just to make sure everything works as expected. Expect RDS parking to be released within a couple of weeks! Let us know if you’d like to be notified when this is released, or if you’re interested in beta testing the new functionality.

We’re excited about this opportunity to give ParkMyCloud users what they’re asking for. What else would you like to see for optimal cost control? Comment below to let us know.

Read more ›

Continuous Integration and Delivery Require Continuous Cost Control

Today, we propose a new concept to add to the DevOps mindset: Continuous Cost Control.

In DevOps, speed and continuity are king. Continuous Operations, Continuous Delivery, Continuous Integration. Keep everything running and get new features in the hands of users quickly.

For some organizations, this approach leads to a mindset of “speed at any cost”. Especially in the era of easily consumable public cloud, this results in a habit of wasted spend and blown budgets – which may, of course, meet the goals for delivery. But remember that a goal of Continuous Delivery is sustainability. This applies to the coding and backend of the application, but also to the business side.

With that in mind, we get to the cost of development and operations. At some point in every organization’s lifecycle comes the need to control costs. Perhaps it’s when your system or product reaches a certain level of predictability or maturity – i.e. maintenance mode – or perhaps earlier, depending on your organization.

We all know that agility has helped companies create competitive advantage; but customers and others tell us it can’t be “agility at any cost.” That’s why we believe the next challenge is cost-effective agility. That’s what Continuous Cost Control is all about.

What is Continuous Cost Control?

Think of it as the ability to see and automatically take action on development and operations resources, so that the amount spent is a controlled factor and not merely a result. This should occur with no impact to delivery.

Think of the spend your department manages. It likely includes software license costs and true-ups and perhaps various service costs. If you’re using private cloud/on-premise infrastructure, you’ve got equipment purchases and depreciations, plus everything to support that equipment, down to the fuel costs for backup generators, to consider.

However, the second biggest line item (after personnel) for many agile teams is public cloud. Within this bucket, consider the compute costs, bandwidth costs, database costs, storage, transactions… and the list goes on.

While private cloud/on-premise infrastructure requires continuous monitoring and cost control, the problem becomes acute when you change to the utility model of the public cloud. Now, more and more people in your organization have the ability to spin up virtual servers. It can be easy to forget that every hour (or minute, depending on the cloud provider) of this compute time costs money – not to mention all the surrounding costs.

Continually controlling these costs means automating your cost savings at all points in the development pipeline.  Early in the process, development and test systems should only be run while actually in use.  Later, during testing and staging, systems should be automatically turned on for specific tests, then shut down once the tests are complete.  During maintenance and production support, make sure your metrics and logs keep you updated on what is being used – and when.
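
As a minimal sketch of that first point, assuming your non-production instances carry an env=dev tag (the tag name is our own illustrative convention), a nightly job could stop whatever is still running:

import boto3

ec2 = boto3.client("ec2")

# Find running instances tagged as development resources.
reservations = ec2.describe_instances(
    Filters=[
        {"Name": "tag:env", "Values": ["dev"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)["Reservations"]

instance_ids = [
    instance["InstanceId"]
    for reservation in reservations
    for instance in reservation["Instances"]
]

if instance_ids:
    # Stop (not terminate) them; they can be started again in the morning.
    ec2.stop_instances(InstanceIds=instance_ids)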

How to get started with Continuous Cost Control

While Continuous Cost Control is an idea that you should apply to your development and operations practices throughout all project phases, there are a few things you can do to start a cultural behavior of controlled costs.

  • Create a mindset. Apply principles of DevOps to cloud cost control.
  • Take a few “easy wins” to automate cost control on your public cloud resources.
    • Schedule your non-production resources to turn off when not needed
    • Build in a process to “right size” your instances, so you’re not paying for more capacity than you need
    • Use alternate services besides the basic compute services where applicable. In AWS, for example, this includes Auto Scaling groups, Spot Instances, and Reserved Instances
  • Integrate cost control into your continuous delivery process. The public cloud is a utility which needs to be optimized from day one – or if not then, as soon as possible.
  • Analyze usage patterns of your development team to apply rational schedules to your systems to increase adoption rates
  • Allow deviations from the normal schedules, but make sure your systems revert back to the schedule when possible
  • Be honest about what is being used, and don’t just leave it up for convenience

We hope this concept of Continuous Cost Control is useful to you and your organization – and we welcome your feedback.

Top 3 Ways to Save Money on Azure

Perhaps your CFO or CTO came to you with a directive to save money on Azure. Perhaps you received the bill on your own and realized it needs to be reduced. Or maybe you’re just migrating to the cloud and want to make sure you’re set up for cost control in advance (if so, props to you for being proactive!).

Whatever the reason you want to reduce your bill, there are a lot of little tips and tricks out there. But to get started, here are the top 3 ways to save money on Azure.

1. Set a spending limit on your Azure account

Our first recommendation to save money on Azure is to set a spending limit on your Azure account. We especially recommend this for accounts used for non-production workloads, because once your limit is reached, your VMs will be stopped and deallocated. You will get an email alert and an alert in the Azure portal, and you can turn the VMs back on, but this is of course not ideal for any production system.

Additionally, keep in mind that there are still services you will be charged for even after your spending limit has been reached, including Visual Studio licenses, Azure Active Directory Premium, and support plans.

Here are full instructions on how to use the Azure spending limit on the Azure website.

2. Right size your VMs

One easy way to spend too much on your Azure compute resources is to use VMs that are not properly sized for the workload you are running on them. Use Azure Advisor to ensure that you’re not overpaying for processor cores, memory, disk storage, disk I/O, or network bandwidth. More on right-sizing from TechTarget.
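
To show the kind of data a right-sizing decision rests on, here’s a minimal sketch that pulls a VM’s average CPU from Azure Monitor. It assumes the azure-identity and azure-mgmt-monitor Python packages; the subscription, resource ID, and 20% threshold are placeholders:

```python
# Sketch: flag a VM as a right-sizing candidate if its average CPU is low.
# Subscription, resource ID, and the 20% threshold are placeholders.
from datetime import datetime, timedelta
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

SUBSCRIPTION_ID = "<your-subscription-id>"
VM_RESOURCE_ID = ("/subscriptions/<sub>/resourceGroups/<rg>"
                  "/providers/Microsoft.Compute/virtualMachines/<vm-name>")

client = MonitorManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

end = datetime.utcnow()
start = end - timedelta(days=14)  # look at two weeks of usage
metrics = client.metrics.list(
    VM_RESOURCE_ID,
    timespan=f"{start.isoformat()}/{end.isoformat()}",
    interval="PT1H",
    metricnames="Percentage CPU",
    aggregation="Average",
)

# Average the hourly samples across the window
samples = [p.average for m in metrics.value
           for ts in m.timeseries for p in ts.data if p.average is not None]
avg_cpu = sum(samples) / len(samples) if samples else 0.0
if avg_cpu < 20:  # arbitrary threshold for illustration
    print(f"Average CPU {avg_cpu:.1f}% -- consider a smaller VM size")
```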

While you’re at it, check to see if there’s a less-expensive region you could choose for the VM for additional cost savings.

3. Turn non-production VMs off when they’re not being used

Our third recommendation to save money on Azure is to turn non-production VMs off when they’re not being used – otherwise, you’re paying for time you don’t need. It’s a quick fix, and one that can save about 65% of the cost of a VM: if, for example, it was running 24×7 but is only needed 12 hours per day, Monday through Friday, that’s 60 of the 168 hours in a week, or roughly 36% of the time.

One basic approach is to ask developers and testers to turn their VMs off when they are done using them. If you do this, ensure that your users are using the Azure portal to put these VMs in the “stopped (deallocated)” state. If you shut down from within a VM, it will only reach the “stopped” state and you will continue to be charged.
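
If a script is used anywhere in this process (keeping in mind the maintenance caveat below), the key is to deallocate rather than merely power off. A minimal sketch with Microsoft’s Python SDK, assuming the azure-identity and azure-mgmt-compute packages, with subscription, resource group, and VM name as placeholders:

```python
# Sketch: deallocate (not just stop) an Azure VM so compute billing stops.
# Subscription, resource group, and VM names are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

client = ComputeManagementClient(DefaultAzureCredential(),
                                 "<your-subscription-id>")

# begin_deallocate releases the compute resources ("stopped deallocated");
# begin_power_off merely stops the OS ("stopped") and keeps billing the VM.
client.virtual_machines.begin_deallocate("<resource-group>",
                                         "<vm-name>").result()
```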

However, relying on human memory is not a best practice, so you’ll want your non-production VMs to shut down automatically on a schedule. You could attempt to script this yourself, but that is counterproductive and wastes valuable development resources on writing and maintaining the scripts.

Instead, it’s best to use software like ParkMyCloud’s to automate on/off schedules – including automating schedule and team assignment for access control – and keep your Azure non-production costs in check.


These three methods should get you started on your goal to reduce costs. Have any other preferred methods to save money on Azure? Leave a comment below to let us know.

DevOps Cloud Cost Control: How DevOps Can Solve the Problem of Cloud Waste

DevOps cloud cost control: an oxymoron? If you’re in DevOps, you may not think that cloud cost is your concern. When asked what your primary concern is, you might say speed of delivery, or integrations, or automation. However, if you’re using public cloud, cost should be on your list of problems to control.

The Cloud Waste Problem

If DevOps is the biggest change in IT process in decades, then renting infrastructure on demand is the most disruptive change in IT operations. With the switch from traditional datacenters to public cloud, infrastructure is now used like a utility. Like any utility, there is waste. (Think: leaving the lights on or your air conditioner running when you’re not home.)  

How big is the problem? In 2016, enterprises spent $23B on public cloud IaaS services, and we estimate that about $6B of that was wasted on unneeded resources. This excess expense, known as “cloud waste,” comprises several interrelated problems: services running when they don’t need to be, improperly sized infrastructure, orphaned resources, and shadow IT.

Everyone who uses AWS, Azure, or Google Cloud Platform is either already feeling the pressure to rein in this waste – or soon will be. As DevOps teams are the primary cloud users in many companies, DevOps cloud cost control processes become a priority.

4 Principles of DevOps Cloud Cost Control

Let’s put this idea of cloud waste in the framework of some of the core principles of DevOps. Here are four key DevOps principles, applied to cloud cost control:

1. Holistic Thinking

In DevOps, you cannot simply focus on your own favorite corner of the world, or any one piece of a project in a vacuum. You must think about your environment as a whole.

For one thing, this means that, as mentioned above, cost does become your concern. Businesses have budgets. Technology teams have budgets. And, whether you care or not, that means DevOps has a budget it needs to stay within. Whether it’s a concern upfront or doesn’t become one until you’re approached by your CTO or CFO, at some point, infrastructure cost is going to be under scrutiny – and if you go too far out of budget, under direct mandates for reduction.

Solving problems not only speedily and elegantly, but also cost-efficiently, becomes a necessity. You can’t just be concerned about Dev and Ops – you need to think about BizDevOps.

Holistic thinking also means that you need to think about ways to solve problems outside of code… more on this below.

2. No Silos

The principle of “no silos” means not only no communication silos, but also, no silos of access. This applies to the problem of cloud cost control when it comes to issues like leaving compute instances running when they’re not needed. If only one person in your organization has the ability to turn instances on and off, then all responsibility to turn those instances off falls on his or her shoulders.

It also means that if you want to use an instance that is scheduled to be turned off… well, too bad. You either call the person with the keys to log in and turn your instance on, or you wait until it’s scheduled to come on.  Or if you really need a test environment now, you spin up new instances – completely defeating the purpose of turning the original instances off.

The solution is eliminating the control silo by allowing users to access their own instances to turn them on when they need them and off when they don’t — of course, using governance via user roles and policies to ensure that cost control tactics remain uninhibited.

(In this case, we’re thinking of providing access to outside management tools like the one we provide, but this can apply to your public cloud accounts and other development infrastructure management portals as well.)
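
As one concrete illustration of “access with governance” on AWS, you could grant a team start/stop rights only over instances carrying their own team tag. A minimal sketch, assuming a hypothetical team tag and policy name:

```python
# Sketch: an IAM policy letting developers start/stop only instances tagged
# with their team, so self-service access stays inside governance boundaries.
# The "team" tag, its value, and the policy name are hypothetical.
import json
import boto3

policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["ec2:StartInstances", "ec2:StopInstances"],
        "Resource": "arn:aws:ec2:*:*:instance/*",
        "Condition": {
            # Only instances tagged team=web-dev (hypothetical convention)
            "StringEquals": {"ec2:ResourceTag/team": "web-dev"}
        },
    }],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="TeamWebDevStartStop",  # hypothetical policy name
    PolicyDocument=json.dumps(policy_document),
)
```

The design point: everyone on the team can turn their own instances on and off, but nobody can touch another team’s resources or disable the controls themselves.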

3. Rapid, Useful Feedback

In the case of eliminating cloud waste, the feedback you need is where, in fact, waste is occurring. Are your instances sized properly? Are they running when they don’t need to be? Are there orphaned resources chugging away, eating at your budget?

Useful feedback can also come in the form of total cost savings, the percentage of time your instances were shut down over the past month, and the overall coverage of your cost optimization efforts. Reporting on what is working in your environment helps you decide which problem to address next.

You need monitoring tools in place in order to discover the answers to these questions. Preferably, you should be able to see all of your resources in a single dashboard, to ensure that none of these budget-eaters slip through the cracks. Multi-cloud and multi-region environments make this even more important.
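
As an example of the kind of feedback we mean, here’s a minimal sketch that surfaces one common budget-eater – unattached EBS volumes – using boto3 (the region is a placeholder):

```python
# Sketch: list unattached ("available") EBS volumes -- a common orphaned
# resource that quietly accrues storage charges. Region is a placeholder.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

paginator = ec2.get_paginator("describe_volumes")
for page in paginator.paginate(
        Filters=[{"Name": "status", "Values": ["available"]}]):
    for vol in page["Volumes"]:
        print(f"Orphaned volume {vol['VolumeId']}: {vol['Size']} GiB, "
              f"created {vol['CreateTime']:%Y-%m-%d}")
```

The same pattern extends to unattached IPs, idle load balancers, and stale snapshots – each one a small leak until someone reports on it.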

4. Automation

The principle of Automation means that you should not waste time creating solutions when you don’t have to. This relates back to the point about solving problems outside of code mentioned above.

Also, when “whipping up a quick script”, always remember the time cost to maintain such a solution. More about why scripting isn’t always the answer.

So when automating, keep your eyes open and do your research. If there’s already an existing tool that does what you’re trying to code, it could be a time-saver and process-simplifier.

Take Action

So take a look at your DevOps processes today, and see how you can incorporate a DevOps cloud cost control – or perhaps, “continuous cost control”  – mindset to help with your continuous integration and continuous delivery pipelines. Automate cost control to reduce your cloud expenses and make your life easier.

New: ParkMyCloud Supports Centrify for Single Sign-On

Announcing: ParkMyCloud now integrates with Centrify for Single Sign-On (SSO). What, did you think we were finished with SSO integrations?

That brings the list of SSO providers you can use with your ParkMyCloud account to:

  • Active Directory Federation Services (ADFS) – Microsoft
  • Azure Active Directory – Microsoft
  • Centrify
  • Google G-Suite
  • Okta (in Okta App Network)
  • OneLogin (in App Catalog)
  • Ping Identity (in App Catalog)

Stay tuned: ParkMyCloud will be listed in the Centrify marketplace shortly.

We have integrated with Centrify for Single Sign-On, as with the other SSO providers, to make things simpler:

  1. For account administrators, who can use just-in-time provisioning to automatically add their organization’s members to ParkMyCloud as they are authenticated in Centrify – all you need to do as an administrator is share your organization’s unique ParkMyCloud login link with your users. You can find this link in the ParkMyCloud management console.
  2. For users, who will not need separate login information and a password for ParkMyCloud.

For a step-by-step guide to setting up Centrify as a SAML IdP server for ParkMyCloud, please see this article on our support site. Note that you will need to have your ParkMyCloud account created already – though there’s no need to add additional users until you’ve connected with Centrify, at which point you can add them directly from the SSO provider.

If we still don’t support your SSO provider of choice, please leave a comment below or contact us – we’re all about meeting user needs, here!
