Software Development in the Cloud Archives - ParkMyCloud

Historically, the primary benefit of software development in the cloud has been the opportunity to access a massive computing infrastructure without the capital costs of procurement. This opportunity has driven the growth of cloud computing services over the past decade into an industry now worth more than $200 billion.

As cloud computing services have developed, other factors – such as high-speed processing, advances in security and smart computing architectures – have influenced organizations to adopt a digital business strategy and make the change from legacy IT systems to cloud-based services. The opportunity to work from anywhere has also been a driving factor.

However, cloud computing services come at a price. They are not always as scalable as they are implied to be, and the management of cloud-based applications can be complex. For this reason, organizations evaluating the benefits of software development in the cloud should also evaluate the benefits of cloud management software.

Cloud management software overcomes many of the issues associated with software development in the cloud. Organizations can reduce cloud computing costs by temporarily stopping non-production instances and VMs when not required. Administrators can also obtain a single view of all their cloud-based applications, data and services to facilitate budget and capacity planning.

If your organization would like to enjoy the benefits of software development in the cloud without experiencing cost, scalability and management issues, you are invited to take advantage of a free thirty-day trial of ParkMyCloud – a versatile Software-as-a-Service app that can reduce cloud compute costs by up to 60% and save organizations valuable time in cloud administration and management.

For further details of our free trial offer, contact us today.

5 Favorite AWS Training Resources

When it comes to AWS training resources, there’s no shortage of information out there. Considering the wide range of videos, tutorials, blogs, and more, it’s hard to know where to look or how to begin. The best resource depends on your learning style, your needs for AWS, and how current the information is. With this in mind, we came up with our 5 favorite AWS training resources, sure to give you the tools you need to learn AWS:

1. AWS Self-Paced Labs

What better way to learn than at your own pace? AWS self-paced labs give you hands-on learning in a live AWS environment, with AWS services, and actual scenarios you would encounter in the cloud. Among the recommended labs you’ll find an Introduction to Amazon Elastic Compute Cloud (EC2), and for more advanced users, a lab on Creating Amazon EC2 Instances with Microsoft Windows. If you’re up for an adventure, enroll in a learning quest and immerse yourself in a collection of labs that will help you master any AWS scenario at your own pace. Once completed, you will earn a badge that you can boast on your resume, LinkedIn, website, etc.

2. AWS Free Tier

Sometimes the best way to learn something is by jumping right in. With the AWS Free Tier, you can try AWS services for free. This is a great way to test out AWS for your business, or for the developers out there, to try services like AWS CodePipeline, AWS Data Pipeline, and more. While you still get a hands-on opportunity to learn a number of AWS services, the only downside is that there are certain usage limits. You can track your usage with a billing alarm to avoid unwanted charges, or you can try ParkMyCloud and park your instances when they’re not in use to get the most out of your free tier experience. In fact, ParkMyCloud started its journey by using AWS’ free tier – we eat our own dog food!
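As a rough sketch of what such a billing alarm involves, here is how the CloudWatch alarm parameters could be assembled with Python’s boto3. The threshold, alarm name, and region below are hypothetical examples, not prescriptions:

```python
# Sketch: parameters for a CloudWatch billing alarm that fires when
# estimated AWS charges exceed a dollar threshold. Billing metrics
# live in the AWS/Billing namespace and only in us-east-1.
def billing_alarm_params(threshold_usd, alarm_name="free-tier-budget-alarm"):
    return {
        "AlarmName": alarm_name,               # hypothetical example name
        "Namespace": "AWS/Billing",
        "MetricName": "EstimatedCharges",
        "Dimensions": [{"Name": "Currency", "Value": "USD"}],
        "Statistic": "Maximum",
        "Period": 21600,                       # billing updates a few times a day
        "EvaluationPeriods": 1,
        "Threshold": threshold_usd,
        "ComparisonOperator": "GreaterThanThreshold",
    }

# With AWS credentials configured, the alarm could be created like:
#   import boto3
#   cw = boto3.client("cloudwatch", region_name="us-east-1")
#   cw.put_metric_alarm(**billing_alarm_params(10.0))
```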

3. AWS Documentation

AWS Documentation is like a virtual encyclopedia of tools, terms, training, and everything AWS. You’ll find white papers, case studies, tutorials, cloud computing basics, and so much more. This resource is a one-stop-shop for all of your AWS documentation needs, whether you’re a beginner or advanced user. No matter where you are in your AWS training journey, AWS documentation is always a useful reference and certainly deserves a spot in your bookmarks.

4. YouTube

So far, we’ve gone straight to the source for 3 out of 5 of our favorite AWS training resources. Amazon really does a great job of providing hands-on training, tutorials, and documentation for users with a range of experience. However, YouTube opens up a whole new world of video training that includes contributions from not only Amazon, but other great resources as well. Besides the obvious Amazon Web Services channel, there are also popular and highly rated videos by Edureka, Simplilearn, Eli the Computer Guy, and more.

5. Bloggers

As cloud technology usage continues to expand and evolve, blogs are a great way to stay up to speed with AWS and the world of cloud computing. Of course, in addition to labs, a free tier, extensive documentation, and their own YouTube channel, AWS also has their own blog. Since AWS actually has a number of blogs that vary by region and technology, we recommend that you start by following Jeff Barr – Chief Evangelist at Amazon Web Services, and primary contributor. Edureka, mentioned among our recommended YouTube channels, also has a blog that covers plenty of AWS topics. In addition, the CloudThat blog is an excellent resource for AWS and all things cloud, and was co-founded by Bhaves Goswami – a former member of the AWS product development team.

There’s plenty of information out there when it comes to AWS training resources. We picked our 5 favorite resources for their reliability, quality, and range of information. Whether you’re new to AWS or consider yourself an expert, these resources are sure to help you find what you’re looking for.


Continuous Integration and Delivery Require Continuous Cost Control

Today, we propose a new concept to add to the DevOps mindset: Continuous Cost Control.

In DevOps, speed and continuity are king. Continuous Operations, Continuous Delivery, Continuous Integration. Keep everything running and get new features in the hands of users quickly.

For some organizations, this approach leads to a mindset of “speed at any cost”. Especially in the era of easily consumable public cloud, this results in a habit of wasted spend and blown budgets – which may, of course, meet the goals for delivery. But remember that a goal of Continuous Delivery is sustainability. This applies to the coding and backend of the application, but also to the business side.

With that in mind, we get to the cost of development and operations. At some point in every organization’s lifecycle comes the need to control costs. Perhaps it’s when your system or product reaches a certain level of predictability or maturity – i.e. maintenance mode – or perhaps earlier, depending on your organization.

We all know that agility has helped companies create competitive advantage; but customers and others tell us it can’t be “agility at any cost.” That’s why we believe the next challenge is cost-effective agility. That’s what Continuous Cost Control is all about.

What is Continuous Cost Control?

Think of it as the ability to see and automatically take action on development and operations resources, so that the amount spent is a controlled factor and not merely a result. This should occur with no impact to delivery.

Think of the spend your department manages. It likely includes software license costs and true-ups and perhaps various service costs. If you’re using private cloud/on-premise infrastructure, you’ve got equipment purchases and depreciation, plus everything needed to support that equipment, down to the fuel costs for backup generators, to consider.

However, the second biggest line item (after personnel) for many agile teams is public cloud. Within this bucket, consider the compute costs, bandwidth costs, database costs, storage, transactions… and the list goes on.

While private cloud/on-premise infrastructure requires continuous monitoring and cost control, the problem becomes acute when you change to the utility model of the public cloud. Now, more and more people in your organization have the ability to spin up virtual servers. It can be easy to forget that every hour (or minute, depending on the cloud provider) of this compute time costs money – not to mention all the surrounding costs.

Continually controlling these costs means automating your cost savings at all points in the development pipeline.  Early in the process, development and test systems should only be run while actually in use.  Later, during testing and staging, systems should be automatically turned on for specific tests, then shut down once the tests are complete.  During maintenance and production support, make sure your metrics and logs keep you updated on what is being used – and when.
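A minimal sketch of the kind of schedule check such automation relies on – run development and test systems only during working hours, and stop them otherwise. The business-hours window below is an illustrative assumption, not a recommendation:

```python
from datetime import datetime

# Sketch: decide whether a non-production instance should be running,
# given a simple weekday/business-hours schedule. The 8:00-18:00,
# Monday-Friday window here is an illustrative assumption.
def should_be_running(now: datetime, start_hour=8, stop_hour=18) -> bool:
    is_weekday = now.weekday() < 5          # Monday=0 .. Friday=4
    in_hours = start_hour <= now.hour < stop_hour
    return is_weekday and in_hours

# A pipeline step could consult this before acting, e.g.:
#   if not should_be_running(datetime.now()):
#       stop_instance(instance_id)   # stop_instance is a hypothetical helper
```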

How to get started with Continuous Cost Control

While Continuous Cost Control is an idea that you should apply to your development and operations practices throughout all project phases, there are a few things you can do to start a cultural behavior of controlled costs.

  • Create a mindset. Apply principles of DevOps to cloud cost control.
  • Take a few “easy wins” to automate cost control on your public cloud resources.
    • Schedule your non-production resources to turn off when not needed
    • Build in a process to “right size” your instances, so you’re not paying for more capacity than you need
    • Use alternate services besides the basic compute services where applicable. In AWS, for example, this includes Auto Scaling groups, Spot Instances, and Reserved Instances
  • Integrate cost control into your continuous delivery process. The public cloud is a utility which needs to be optimized from day one – or if not then, as soon as possible.
  • Analyze usage patterns of your development team to apply rational schedules to your systems to increase adoption rates
  • Allow deviations from the normal schedules, but make sure your systems revert back to the schedule when possible
  • Be honest about what is being used, and don’t just leave it up for convenience
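The “right size” step above can be sketched as a naive utilization check: if average CPU stays low over a lookback window, recommend stepping down a size. The size ladder and the 40% threshold below are illustrative assumptions, not tuned values:

```python
# Sketch: a naive right-sizing check. The size ladder and threshold
# are illustrative; real sizing should also weigh memory, IOPS, etc.
SIZES = ["xlarge", "large", "medium", "small"]  # largest to smallest

def rightsize(current_size: str, avg_cpu_percent: float,
              threshold: float = 40.0) -> str:
    if avg_cpu_percent >= threshold:
        return current_size                      # utilization justifies the size
    i = SIZES.index(current_size)
    return SIZES[min(i + 1, len(SIZES) - 1)]     # step down one size
```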

We hope this concept of Continuous Cost Control is useful to you and your organization – and we welcome your feedback.


DevOps Cloud Cost Control: How DevOps Can Solve the Problem of Cloud Waste

DevOps cloud cost control: an oxymoron? If you’re in DevOps, you may not think that cloud cost is your concern. When asked what your primary concern is, you might say speed of delivery, or integrations, or automation. However, if you’re using public cloud, cost should be on your list of problems to control.

The Cloud Waste Problem

If DevOps is the biggest change in IT process in decades, then renting infrastructure on demand is the most disruptive change in IT operations. With the switch from traditional datacenters to public cloud, infrastructure is now used like a utility. Like any utility, there is waste. (Think: leaving the lights on or your air conditioner running when you’re not home.)  

How big is the problem? In 2016, enterprises spent $23B on public cloud IaaS services. We estimate that about $6B of that was wasted on unneeded resources. The excess expense known as “cloud waste” comprises several interrelated problems: services running when they don’t need to be, improperly sized infrastructure, orphaned resources, and shadow IT.

Everyone who uses AWS, Azure, and Google Cloud Platform is either already feeling the pressure — or soon will be — to reel in this waste. As DevOps teams are primary cloud users in many companies, DevOps cloud cost control processes become a priority.

4 Principles of DevOps Cloud Cost Control

Let’s put this idea of cloud waste in the framework of some of the core principles of DevOps. Here are four key DevOps principles, applied to cloud cost control:

1. Holistic Thinking

In DevOps, you cannot simply focus on your own favorite corner of the world, or any one piece of a project in a vacuum. You must think about your environment as a whole.

For one thing, this means that, as mentioned above, cost does become your concern. Businesses have budgets. Technology teams have budgets. And, whether you care or not, that means DevOps has a budget it needs to stay within. Whether it’s a concern upfront or doesn’t become one until you’re approached by your CTO or CFO, at some point, infrastructure cost is going to be under scrutiny – and if you go too far out of budget, under direct mandates for reduction.

Solving problems not only speedily and elegantly, but also cost-efficiently, becomes a necessity. You can’t just be concerned about Dev and Ops; you need to think about BizDevOps.

Holistic thinking also means that you need to think about ways to solve problems outside of code… more on this below.

2. No Silos

The principle of “no silos” means not only no communication silos, but also, no silos of access. This applies to the problem of cloud cost control when it comes to issues like leaving compute instances running when they’re not needed. If only one person in your organization has the ability to turn instances on and off, then all responsibility to turn those instances off falls on his or her shoulders.

It also means that if you want to use an instance that is scheduled to be turned off… well, too bad. You either call the person with the keys to log in and turn your instance on, or you wait until it’s scheduled to come on.  Or if you really need a test environment now, you spin up new instances – completely defeating the purpose of turning the original instances off.

The solution is eliminating the control silo by allowing users to access their own instances to turn them on when they need them and off when they don’t — of course, using governance via user roles and policies to ensure that cost control tactics remain uninhibited.

(In this case, we’re thinking of providing access to outside management tools like the one we provide, but this can apply to your public cloud accounts and other development infrastructure management portals as well.)

3. Rapid, Useful Feedback

In the case of eliminating cloud waste, the feedback you need is where, in fact, waste is occurring. Are your instances sized properly? Are they running when they don’t need to be? Are there orphaned resources chugging away, eating at your budget?

Useful feedback can also come in the form of total cost savings, percentages of time your instances were shut down over the past month, and overall coverage of your cost optimization efforts.  Reporting on what is working for your environment helps you decide how to continually address the problem that you are working on next.

You need monitoring tools in place in order to discover the answers to these questions. Preferably, you should be able to see all of your resources in a single dashboard, to ensure that none of these budget-eaters slip through the cracks. Multi-cloud and multi-region environments make this even more important.

4. Automation

The principle of Automation means that you should not waste time creating solutions when you don’t have to. This relates back to the idea of solving problems outside of code, mentioned above.

Also, when “whipping up a quick script”, always remember the time cost to maintain such a solution. More about why scripting isn’t always the answer.

So when automating, keep your eyes open and do your research. If there’s already an existing tool that does what you’re trying to code, it could be a potential time-saver and process-simplifier.

Take Action

So take a look at your DevOps processes today, and see how you can incorporate a DevOps cloud cost control – or perhaps, “continuous cost control”  – mindset to help with your continuous integration and continuous delivery pipelines. Automate cost control to reduce your cloud expenses and make your life easier.


“Is that old cloud instance running?” How visibility saves money in the cloud

“Is that old cloud instance running?”

Perhaps you’ve heard this around the office. It shouldn’t be too surprising: anyone who’s ever tried to load the Amazon EC2 console has quickly found how difficult it is to keep a handle on everything that is running.  Only one region gets displayed at a time, which makes it common for admins to be surprised when the bill comes at the end of the month.  In today’s distributed world, it not only makes sense for different instances to be running in different geographical regions, but it’s encouraged from an availability perspective.

On top of this multi-region setup, many organizations are moving to a multi-cloud strategy as well.  Many executives are stressing to their operations teams that it’s important to run systems in both Azure and AWS.  This provides extreme levels of reliability, but also complicates the day-to-day management of cloud instances.

So is that old cloud instance running?

You may get a chuckle out of the idea that IT administrators can lose servers, but it happens more frequently than we like to admit.  If you only ever log in to US-East1, then you might forget that your dev team that lives in San Francisco was using US-West2 as their main development environment. Or perhaps you set up a second cloud environment to make sure your apps all work properly, but forgot to shut them down prior to going back to your main cloud.
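The idea behind a single-view dashboard can be sketched as collecting per-region listings into one flat inventory, so nothing hides in a region you never open. The region names and instance IDs in the comments are hypothetical:

```python
# Sketch: flatten per-region instance listings into one sorted
# inventory of (region, instance_id) pairs.
def flatten_inventory(by_region: dict) -> list:
    return [(region, instance_id)
            for region, ids in sorted(by_region.items())
            for instance_id in ids]

# With boto3 and credentials configured, by_region could be built like:
#   import boto3
#   ec2 = boto3.client("ec2", region_name="us-east-1")
#   regions = [r["RegionName"] for r in ec2.describe_regions()["Regions"]]
#   ...then call describe_instances in each region and collect the IDs.
```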

That’s where a single-view dashboard (like the view you get with ParkMyCloud) can provide administrators with unprecedented visibility into their cloud accounts. This is a huge benefit that leads to cost savings right off the bat, as cloud servers that you forgot about, or thought you had turned off, can be seen in a single pane of glass. Knowledge is power: now that you know it exists, you can turn it off. You also get an easy view into how your environment changes over time, so you’ll be aware if instances get spun up in various regions.

This level of visibility also has a freeing effect, as it can lead you to utilizing more regions without fear of losing instances.  Many folks know they should be distributed geographically, but don’t want to deal with the headache of keeping track of the sprawl.  By tracking all of your regions and accounts in one easy-to-use view, you can start to fully benefit from cloud computing without wasting money on unused resources.

Now with ParkMyCloud’s core functionality available for free, it’s easy to get this single view of your AWS and Azure environments.  We think you’ll get a new perspective on your existing cloud infrastructure – and maybe you’ll find a few lost servers! Get started with the free version of ParkMyCloud.


The Cloud Waste Problem That’s Killing Your Business (and What To Do About It)

Waste not, want not. That was one of the best-known quips of one of the United States’ Founding Fathers, Benjamin Franklin. It couldn’t be more timely advice in today’s cloud computing world – the world of cloud waste. (When he was experimenting with static electricity and lightning, I wonder if he saw the future of Cloud? :^) )

Organizations are moving to the Cloud in droves. And why not? The shift from CapEx to monthly OpEx, the elasticity, the reduced deployment times and faster time-to-market: what’s not to love?

The good news: the public cloud providers have made it easy to deploy their services. The bad news: the public cloud providers have made it easy to deploy their services…really easy.  

And, experience over the past decade has shown that leads to cloud waste. What is “cloud waste” and where does it come from? What are the consequences? What can you do to reduce it?

What is Cloud Waste?

“Cloud waste” occurs when you consume more cloud resources than you actually need to run your business.

It takes several forms:

  • Resources left running 24×7 in development, test, demo, and training environments where they don’t need to be running 24×7. (Thoughts of parents yelling at children to “turn the lights out” if they are the last one in a room.) I believe this is a bad habit reinforced by the previous era of on-premise data centers. The thinking: it’s a sunk cost anyway, why bother turning it off? Of course, it’s not a sunk cost anymore.

This manifests itself in various ways:

    • Instances or VMs which are left running, chewing up $/CPU-Hr costs and network charges
    • Orphaned volumes (volumes not attached to any servers), which are not being used and incurring monthly $/GB charges
    • Old snapshots of those or other volumes
    • Old, out-of-date machine images

However, cloud consumers are not the only ones to blame. The public cloud providers are also responsible when it comes to their PaaS (platform as a service) offerings for which there is no OFF switch (e.g., AWS’ RDS, Redshift, DynamoDB and others). If you deliver a PaaS offering, make sure it has an OFF switch.

  • Resources that are larger than needed to do the job. Many developers don’t know what size instance to spin up to do their development work, so they will often spin up larger ones. (Hey, if 1 core and 4 GB of RAM is good, then 16 cores and 64 GB of RAM must be even better, right?) I think this habit also arose in the previous era of on-premise data centers: “We already paid for all this capacity anyway, so why not use it?” (Wrong again.)

This, too, rears its ugly head in several ways:

    • Instances or VMs which are much larger than they need to be
    • Block volumes which are larger than they need to be
    • Databases which are way over-provisioned relative to their actual IOPS or sequential throughput requirements
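As a sketch of how the orphaned volumes mentioned above can be picked out programmatically: in the EC2 API, an unattached volume reports the state "available", so filtering a describe_volumes-style response is enough. The helper below is illustrative, not part of any product:

```python
# Sketch: pick out orphaned (unattached) volumes from an EC2
# describe_volumes-style response. Unattached volumes have the
# state "available" and still incur monthly $/GB charges.
def orphaned_volumes(volumes: list) -> list:
    return [v["VolumeId"] for v in volumes if v.get("State") == "available"]

# With boto3 and credentials configured:
#   import boto3
#   vols = boto3.client("ec2").describe_volumes()["Volumes"]
#   print(orphaned_volumes(vols))
```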

Who is Affected by Cloud Waste?

The consequences of cloud waste are quite apparent. It is killing everyone’s business bottom line. For consumers, it erodes their return on assets, return on equity and net revenue.  All of these ultimately impact earnings per share for their investors as well.

Believe it or not, it also hurts the public cloud providers and their bottom line. Public cloud providers are most profitable when they can oversubscribe their data centers. Cloud waste forces them to build more very expensive data centers than they need, killing their oversubscription rates and hurting their profitability as well. This is why you see cloud providers offering certain types of cost cutting solutions. For example, AWS offers Reserved Instances, where you can pay up front for a break on on-demand pricing. They also offer Spot Instances, Auto Scaling Groups and Lambda. Azure also offers price breaks to their ELA customers, as well as Scale Sets (the equivalent of ASGs).

How to Prevent Cloud Waste

So, what can you do to address this? Ultimately, the solution to this problem exists between your ears. Most of it is common sense: It requires rethinking… rewiring your brain to look at cloud computing in a different way. We all need to become honorary Scotsmen (short arms and deep pockets… with apologies to my Scottish friends).

  • When you turn on resources in non-production environments, turn on the minimum size needed to get the job done and only grudgingly move up to the next size.
  • Turn stuff off in non-production environments, when you are not using it. And for Pete’s sake, when it comes to compute time, don’t waste your time and money writing your own scripts…that just exacerbates the waste. Those DevOps people should spend that time on your bread and butter applications. Use ParkMyCloud instead! (Okay, yes, that was a shameless plug, but it is true.)
  • Clean up old volumes, snapshots and machine images.
  • Buy Reserved Instances for your production environments, but make sure you manage them closely, so that they actually match what your users are provisioning, otherwise you could be double paying.
  • Investigate Spot fleets for your production batch workloads that run at night. It could save you a bundle.
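To illustrate the “manage them closely” point on Reserved Instances, here is a back-of-the-envelope break-even helper. The prices in the usage note are hypothetical, not real AWS rates:

```python
# Sketch: break-even usage for a Reserved Instance versus on-demand.
# Total RI cost = upfront + ri_hourly * hours; on-demand = od_hourly * hours.
# They are equal at upfront / (od_hourly - ri_hourly) hours of usage.
def breakeven_hours(upfront: float, ri_hourly: float,
                    on_demand_hourly: float) -> float:
    return upfront / (on_demand_hourly - ri_hourly)
```

For example, with a hypothetical $500 upfront fee, a $0.05/hr RI rate, and a $0.10/hr on-demand rate, the break-even point is 10,000 hours – if the instance will actually run much less than that over the term, the RI becomes the double-pay trap the list above warns about.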

These good habits, over time, can benefit everyone economically: Cloud consumers and cloud producers alike.  


Where the Traditional IT Companies Will Never Catch Up to Those Born in the Cloud

Traditional IT companies may dominate in a few fields, but in others, they will never catch up to those companies “born in the cloud.”

I actually have a unique perspective on these two worlds, as prior to this adventure at ParkMyCloud, I worked at IBM for many years. I was originally with Micromuse, where we had a fault and service assurance solution (Netcool) to manage and optimize Network and IT Operations. Micromuse was acquired by IBM in 2006 and placed in the Tivoli Software Group business unit (later to be named Smarter Cloud). IBM was great – I learned a lot and met a lot of very smart, bright people. I was in Worldwide Sales Management so I had visibility across the globe into IT trends.

In the 2012/2013 timeframe, I noticed we were losing a lot of IT management, monitoring and assurance deals to companies like ServiceNow, New Relic, Splunk, Microsoft, and the like – all these “born in cloud” companies offering SaaS-based solutions to solve complex enterprise problems (that is, “born in the cloud” other than Microsoft – I’ll come back to them).

At first these SaaS-based IT infrastructure management companies were managing traditional on-premise servers and networks, but as more and more companies moved their infrastructure into the cloud, the SaaS companies were positioned to manage that as well – but at IBM, we were not. All of a sudden we were trying to sell complex, expensive IT management solutions for stuff running in this “cloud” called Amazon Web Services (AWS) – a mere 5 years ago. And then Softlayer, Rackspace, and Microsoft Azure popped up. I started thinking: there must be something here, but what is it, and who’s going to manage and optimize this infrastructure?

After a few years sitting on the SaaS side of the table, now I know. Many meetings and discussions with very large Fortune 100 enterprises have taught me several very salient points about the cloud:

  1. Public cloud is here to stay – see Capital One or McDonald’s at recent AWS re:Invent Keynotes (both customers of ParkMyCloud, by the way)
  2. Enterprises are NOT using “traditional” IT tools to build, test, run and manage infrastructure and applications in the cloud
  3. What’s different about the cloud is that it’s a YUGE utility, which means companies now focus on cost control. Since it’s an OpEx model rather than a CapEx model they want to continually optimize their spend

Agility and innovation drive public cloud adoption but as cloud maturity grows so does the need for optimization – governance, cost control, and analytics.

So where does this leave the traditional companies like Oracle, HPE, and IBM? How are they involved in the migration to and lifecycle management of cloud-based applications? Well, from what I have seen, they are on the outside looking in – which is why, when my good friend sent this to me the other day, I was shocked – I guess Oracle decided to spot AWS a $13B lead – pretty smart, I am sure they will make this gap up by, oh, let’s say 2052… brilliant strategy.

That said, one company that “gets it” seems to be Microsoft, both in terms of providing cloud infrastructure (Azure) but also being progressive enough to license their technologies for even the smallest of companies to adopt and grow using their applications.

To put a bow on this point, I was at a recent meeting where a Fortune 25 company was talking to us about their migration into the cloud, and the tools they are using:

  • Clouds – AWS / Azure
  • Migration – service partner
  • Monitoring – DataDog
  • Service Desk and CMDB – ServiceNow
  • Application Management – NewRelic
  • Log analytics – Splunk
  • Pipeline automation – Jenkins
  • Cost control (yes, that’s a category now) – ParkMyCloud

Now that’s some pretty good company! And not a single “traditional” IT tool on the list. I guess it takes one born in the cloud to manage it.


How to Save Money in DevOps: Interview with FinTech Company Using ParkMyCloud

We spoke to Tosin Ojediran, a DevOps Engineer at a FinTech company, about how he’s using ParkMyCloud as part of his approach to save money in DevOps.

Hi Tosin. So you work in FinTech. Can you tell us about what your team does within the company?

I’m on the DevOps team. We’re in charge of the cloud infrastructure, which ranges from servers to clusters and beyond. We have the task of maintaining the integrations between all the different services we use. Our main goal is to make sure our infrastructure is up and running and to maintain it. Our team just grew from two to three people.

What drove you to search for a cost optimization tool?

Last year, we were scaling our business, and with all the new development and testing, we kept needing to launch new clusters, databases, and instances. We did monitor the costs, but it was the Finance team that came to us and said, “hey, what’s going on with AWS? The costs keep going up, can you guys find a way to reduce this bill or move to a cheaper provider?”

So we looked into different options. We could move to Google for example, or we could move on prem, but at the time we were a team of two running a new project, trying to get things up and running, so we didn’t have the time. We had to find out how we could save money in DevOps without spending all our time to move to a new infrastructure. We went online to do research and came across ParkMyCloud, and started a trial.

What challenges did you experience in using AWS prior to using ParkMyCloud?

Like I mentioned, we were trying to cut costs. To do that, we were brainstorming about how we could write scripts to shut down machines during certain hours and spin them up. The problem was that this would require our time to write, integrate, and maintain.

We have different automation tools and containers – Chef, Docker machines, and Auto Scaling. Each of these takes time to script up. This all takes away from the limited time we have. With ParkMyCloud, we didn’t need to spend time on this automation – it was fast and simple. It allowed me to have all teams, including Analysts and others outside of the DevOps team, park their own resources. If you have a script that you run, and a two-man DevOps team, then every time someone wants to park their machine, or start it outside of hours, they have to call me and ask me to start their machines for them. But now with ParkMyCloud, I can assign machines to individual teams, and they can start their machines whenever they want – and it’s easy to use; you don’t have to know programming to use it.

It frees up my time, because now everyone can control their own resources, when they used to have to ask me to do it for them.

Can you describe your experience so far using ParkMyCloud?

It’s been great for us to reduce AWS costs. We’re better at staying within budget now. ParkMyCloud actually really exceeded my expectations. We sent the savings numbers to our CTO, and he said, “wow, this is awesome.” It’s easy to use, and it does what it’s supposed to do. We’re reducing our bill by about 25-30%.

One other thing I love about ParkMyCloud. So, I work with a lot of vendors. A lot of times, they promise you one thing, and you get something else. There’s different terms and conditions, or you have to pay extra to actually qualify for different features. But with ParkMyCloud, it was up and running in 5-10 minutes, it was easy to integrate, easy to use, and you all deliver what you promise.


In 2017, I will… Not “build” when I should “buy”. (When to buy vs. build software.)

Buy vs. build software: The eternal question

The question of whether to buy vs. build software may be an old one, but it’s still relevant. Particularly as companies face rising IT costs, it’s important to consider the most cost-effective options for your business.

When you have an internal development team, it’s tempting to believe that “just having them whip something up” is cheaper than purchasing an off-the-shelf software solution. However, this ignores the opportunity cost of having your skilled developers focus their efforts on non-core activities that typically deliver less value to the business.

To put a number on it, the national average salary for a software developer is $85,000. Including benefits, that’s about $110,000 per year. So a back-of-the-napkin estimate puts an hour of a developer’s time at $55. Then, consider the number of developers involved, and that you may not be as stringent in budgeting their time for “side projects” as you might be for your core work.
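The back-of-the-napkin arithmetic above can be sanity-checked in a couple of lines (the figure of roughly 2,000 billable hours per year is our assumption):

```python
# Back-of-the-napkin developer hourly rate, as in the article.
annual_cost = 110_000       # salary plus benefits, per the article
billable_hours = 2_000      # ~50 weeks x 40 hours (assumption)
hourly_rate = annual_cost / billable_hours
print(hourly_rate)  # 55.0
```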

So it’s expensive to build. Isn’t the outcome the same?

Actually, probably not. Though internally developed solutions may in theory have the same functionality as purchased software – for example, “it turns instances off when you don’t need them” – they will require additional work to integrate with team structures and to cover a broad variety of use cases. In that example, what about the reporting and savings information? After all, isn’t that the point of turning the instances off in the first place? And then there are advanced features, and the cost of maintaining homegrown solutions over time as new requirements creep in.

For one look at how an off-the-shelf solution may compare in functionality to homegrown scripted solutions, here’s a simple side-by-side comparison we put together, showing ParkMyCloud vs. an in-house developed solution.

Multi-User / Multi-Team

In-house developed scripting:

•  In small environments, it may be difficult to meet demand for skilled DevOps personnel with knowledge of scripting & automation

•  Significant risk if knowledge of infrastructure and scripting is held by a single individual (knowledge transfer)

•  In large environments, unless highly centralized, it is difficult to ensure consistency and standardization of the automation approach across the entire organization

•  DevOps support for all AWS environments across multiple teams / business units becomes complex and resource intensive

•  DevOps resources are distracted from core business activities – a significant opportunity cost

•  Supporting existing team structures and ensuring appropriate controls is difficult to achieve without building out a complete custom solution

ParkMyCloud:

•  Ability to devolve management of AWS instances to non-technical teams for scheduling on/off (no scripting required)

•  API for integration into DevOps processes

•  Role-based access controls (RBAC) and access-based enumeration (ABE) for enhanced security

•  Unlimited teams and unlimited users

•  Laser development focus on EC2 cost optimization

•  One way to automate on/off times with enterprise-wide visibility

•  Options for centralizing or decentralizing control to departments, teams & individuals

•  Designed to support global operations

•  Single view of all resources across locations, accounts and cloud service providers (CSPs)

•  Reporting

•  $3.00 or less per instance per month

•  Configures in 15 minutes or less

Multiple Credentials / Multiple CSPs (coming soon)

In-house developed scripting:

•  Must develop a means to securely handle and manage credentials and other sensitive account information

•  Must keep up to date with changes and updates to the public cloud, which is constantly evolving, adding and changing services

•  Must develop an approach to assign access to different credentials to different teams

•  Must develop an approach and interface across multiple CSPs

ParkMyCloud:

•  Unlimited number of credentials / accounts

•  IAM Role and IAM User support (for AWS)

•  Secure credential management (AES-256 encryption)

•  Assignment of credentials to teams via RBAC

•  Multiple public CSPs (coming soon) – ability to manage AWS, Azure and Google from a single platform

Platform Coverage

In-house developed scripting:

•  Must develop a means to create a single view and the ability to manage and start/stop Auto Scaling Groups

•  Must develop a means to create, manage and start/stop logical groups

ParkMyCloud:

•  Ability to manage & park Auto Scaling Groups

•  Ability to create, manage and park Logical Groups

•  Global view of ALL AWS Regions and Availability Zones in a single pane of glass

Always ‘off’ Scheduling

In-house developed scripting:

•  Must develop a process to enable on-demand access to stopped instances in off hours

•  Must be able to re-apply the schedule when off-hours work is done

•  Must do this across multiple accounts and CSPs

ParkMyCloud:

•  Ability to temporarily suspend parking schedules during off-hours to enable ad hoc instance control

Cost Visibility

In-house developed scripting:

•  Need to develop a custom application to determine cost savings based upon the application or removal of schedules (to date we have not encountered anyone who has developed such an application)

•  Would need the ability to run ad hoc reports over arbitrary date ranges

ParkMyCloud:

•  Forecasts & displays future savings based upon selected schedules

•  Displays real-time actual month-to-date savings

•  Generates & distributes ad hoc detailed cost and savings reports

Policy Engine

In-house developed scripting:

•  Hard to enforce consistent and standardized policies within decentralized structures where different automation tools are being used

•  This would need to be done across all CSP accounts and across CSPs

•  Difficult to build features like Never Park or Snooze Only

ParkMyCloud:

•  Enterprise-wide policies based on tags to automatically enforce actions (automate parking schedule assignment, Never Park for production instances, & assignment of instances to teams)

 

Resolution

As you can see, there is a technical advantage to purchasing software that’s been purpose-built by a dedicated development team over a long period of time. You’ll get more functionality for less money.

This year, we resolve not to “build” when we should “buy.”

Do you?

 


Cloud applications in 2017: How long until full cloud takes over?

We were recently asked about our vision for cloud applications in 2017: are we still seeing ported versions of legacy on-premises Software-as-a-Service (SaaS) applications? Or are most applications – even outside of pure-play startups – being built and hosted in the cloud? In other words, how long until full cloud takes over?

Actually, it already has.

Native cloud applications like ours – an 18-month-old startup – that have been built, tested, and run in the cloud are no longer the fringe innovators, but the norm. In fact, outside of a printer, we have no infrastructure at all – we are BYOD, and every application we use for development, marketing, sales and finance is a SaaS-based, cloud-hosted solution that we either use for free or rent and pay month-to-month or year-to-year.

This reliance on 100% cloud solutions has allowed us to rapidly scale our entire business – the cloud, and cloud-based SaaS solutions, have provided ParkMyCloud with the agility, speed, and cost control needed to manage to an OpEx model rather than a CapEx model.

We were able to rapidly prototype our technology, test it, iterate, and leverage “beta” communities in the cloud in a matter of months. We even outsource our development efforts, and seamlessly run agile remotely using the cloud and cloud-based tools. For a peek into the process, here’s a sampling of software development tools we use in a cloud-shrouded nutshell:

  • Amazon Web Service (AWS) for development, test, QA and production
  • VersionOne for agile management
  • Skype for scrum and video communication
  • GitHub for version control
  • Zoho for customer support
  • LogEntries for log integration
  • Confluence for documentation
  • Swagger for API management

And I could repeat the same for our Marketing, Sales, and Finance process and tools – the cloud has truly taken over.

We don’t know if these applications are built and run in the public cloud or the private cloud – that’s irrelevant to us; what’s important is that they solve a problem, are easily accessible, and meet our price point. We do know that these are all cloud-based SaaS offerings – we don’t use any on-premises, traditional software.

The net net is that many companies are just like ParkMyCloud. The question is no longer about how us newbies will enter the world – the question is, how fast will legacy enterprises migrate ALL their applications to cloud? And where will they strike the balance between public and private cloud?


Why ParkMyCloud Uses ParkMyCloud: A Story of the Importance of Drinking Your Own Champagne

I think most people can agree that eating your own dog food – or drinking your own champagne, to the glass-half-full crowd – is a hallmark of a business that has created a successful product. The opposite is clearly true: when Alan Mulally was brought in to Ford, he knew there was a problem when he was picked up from the airport in a Land Rover rather than a Ford car – and when he couldn’t find a single Ford vehicle in the executive parking garage.

For those of us in the software world, there’s another piece to that picture. To tell you how we discovered this for ourselves, I’m going to tell you a story.

It was six weeks after ParkMyCloud’s founding. We had the very first beta version of the product at our fingertips – but before sending it out to beta testers, we gathered the ParkMyCloud team in a conference room to do a bit of usability testing for ourselves. I created a ParkMyCloud user account and hooked up our AWS account so there would be instances to display.

“Now try it out, and let me know if you see any problems,” I told the group.

Heads down, focused on laptops, everyone diligently began to click around, playing with the first generation dashboard and parking schedule interface. For a moment, the room was quiet. Then a chorus went around.

“Hey, what happened?”

“Is anyone else getting this error?”

All at once, everyone around the table lost access to the application. It was gone. For a minute, we were left scratching our heads.

“Okay, what was everyone doing just before it shut down? Did anyone park anything?”

Finally, a sheepish marketing contractor spoke up. “I may have parked an instance.”

As it turned out, he had parked a production server. In particular, the production server running the ParkMyCloud application. D’oh!

Apparently, we needed governance. And we needed it fast. We got to work, and soon after, we released a version of ParkMyCloud that allowed for multiple users and teams for each ParkMyCloud account, all governed with role-based access control (RBAC).

We still use those roles today (incidentally, the “demo” team does not have access to production servers).

The lesson here is that using your application for yourself uncovers important usability issues. Some of these can’t be discovered as quickly as the one above, but only over time – like awkward flows, and reports that skip over meaningful data.

But of course, we also get the same benefits that the product gives to our customers – like saving money. In fact, after the approach was suggested to us by one of our customers, we adopted an “always off” schedule for ourselves. All of our non-production servers are parked 24×7. When our developers need to use them, they log in to ParkMyCloud and “snooze” the schedules for the length of time they need to use them.

This eliminates the need for central schedules, which works especially well for our multi-time-zone development team. Using this schedule, we save about 81% on our non-production servers.
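As a rough sketch of the arithmetic, savings under a parked-by-default schedule are simply one minus the fraction of hours an instance actually runs. (The 32 hours per week of usage below is an illustrative assumption that happens to land near our figure, not our actual measurement.)

```python
def parking_savings(hours_used_per_week: float, hours_per_week: float = 168) -> float:
    """Fraction of compute cost avoided when instances run only while in use."""
    return 1 - hours_used_per_week / hours_per_week

# e.g. developers snoozing instances for ~32 hours of work per week
print(round(parking_savings(32), 2))  # 0.81
```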

I would encourage anyone who creates products to lead by example and use your product internally — and I assure potential ParkMyCloud customers that we drink our own champagne every day.


How one startup used AWS tools to build an MVP in 7 sprints

Below is the transcript of an interview with our friend Jonathan Chashper of Product Savvy about his experience rapidly building an app, Wolfpack, using various AWS tools. From getting his team in a room and unpacking laptops to releasing a minimum viable product (MVP) for beta testing took just 14 weeks, which Jonathan attributes not only to the skill of his team but to the ease of use and agility they gained from AWS.

Thanks for speaking with us, Jonathan! First of all, can you tell us a little bit about Wolfpack? What is it, and why did you decide to start it?

wolfpackI am a motorcycle rider. A few years ago, I went on a group ride, and very quickly, the group broke apart. Some people missed a turn, some people got stuck at a red light, and a group of six suddenly became three groups of two. It took us about half an hour to figure out where everyone was, since you need to pull over, call everyone, and then – since everyone is riding their motorcycles – wait for them to pull over and call you back. It’s one big mess.

So I thought, there has to be a technical solution to this. I decided we should build a system that would allow me to track everyone I’m riding with, so I could see where the people riding with me are at any given time. If I got disconnected from the group, I could see where they are and pull over to gather back together. This was Eureka #1.

Eureka #2 was understanding that communication is the second big problem for moving in groups. When you ride in a group on motorcycles, you’re usually riding in a column. Let’s say you’re rider #4 and you need gas. You cannot just pull over into a gas station, because you will get separated from the group. So usually what happens is that you speed up, try to signal to the guy at the head of the column, point to the gas tank, and hope he understands and actually pulls into a gas station. It’s dangerous. So this is the second problem people have when they move in packs, and these are the two problems Wolfpack is solving: keeping the group together and allowing for communication during the ride.

Wolfpack is a system for moving in groups. It doesn’t have to be motorcycles, but that’s the first niche we’re releasing it for. It’s also relevant for a group of cars, or even ten people walking on foot – people get separated, and so on.

So we built a system that allows you as a user to install an app on a mobile device (both iOS and Android), that will allow you to manage the groups you want to travel with. Then, once you have the groups defined, you can define a trip with a starting point and an ending point. Everyone in the group then gets a map, and everyone can hop on it and start traveling together.

Here’s WolfPack’s About video, if you’re interested:

What AWS tools did you leverage when building Wolfpack?

Wolfpack is built on AWS, and we’re using CloudFront, we’re using SNS, we’re using S3 buckets, we’re using RDS, and of course EC2 instances, load balancing, Auto Scaling Groups, all the pretty buzzwords. We use them all – even AWS IoT, actually.

Have you had any interaction with AWS?

No, we’ve done it 100% ourselves. We’ve never talked to any solutions architects or anyone at AWS. It’s that easy to use.

What Amazon is doing is unbelievable. Things that used to take months or years to accomplish, you can now accomplish in days by clicking a couple of buttons and writing a little bit of code.

Why did you choose to develop on AWS?

The ecosystem they’ve created. This is why I think AWS is awesome: they’ve identified the pain points for people who want to build software.

The basic problem they identified is the need to buy servers. That’s the very basic solution they’ve given you: you can stand up a server in two minutes, you don’t need to buy hardware or pay ten thousand dollars out of pocket, and so on and so forth – these are the good old EC2 instances.

Then they went step by step and they said, okay, the next problem is managing databases. Before RDS, I had to have my own database from Oracle, and you’d have to buy a solution for load balancing, a solution for failover, back-up, recovery, etc., and this would cost tens of thousands, if not hundreds of thousands of dollars. AWS took that pain away by providing RDS.

The next step was message queues. Again, in the past, we would go to IBM, we would go to Oracle, back in the day, and you would use their message queues. It was complex, one message queue didn’t work with the other, and it was a mess. So AWS created SNS to solve that.

And so on and so forth, like a domino. They have the buckets to solve the storage issue. Now the newest thing is IoT, where they understand that there’s billions of devices out there trying to send messages to each other, and very quickly, you clog the system. So AWS said, “okay, we’ll solve that problem now.” And they created the AWS IoT system which allows you to connect any device you want, very quickly, and support, I don’t know, probably billions and billions of messages. Almost for free, it doesn’t really cost anything. It’s a great system.

Have you had any challenges with AWS so far?

No, actually, no technological challenges so far. What they offer is really easy to use and understand. The one thing we do want to do is pay as little as we can for the EC2 servers, which is where we’re using ParkMyCloud to schedule on/off times for our non-production servers.

Are you using any other tools for automation and DevOps?

Yes, we are using Jenkins – we have a continuous integration machine. Our testing is still manual, unfortunately.

Continuous integration is the idea that every time someone completes a piece of code, they submit it to a repository. Jenkins has a script that takes it out of the repository, compiles everything, and deploys it. So at any given time, every time someone submits something, it’s immediately ready for my QA guy to test. The need for “Integration Sessions” went down drastically.

How long has the development taken?

From the minute we put the team together until we had an MVP, we had seven sprints, which is just 14 weeks. And when I say “putting the team together,” I mean they went into a room and unpacked their laptops on March 1st. Fourteen weeks later, we had our MVP, which we’re now using for beta testing.

And did your team have deep AWS experience, or were some of them beginners?

Some of them had a little bit of AWS experience, but most of it came from us as on-the-job training. If you’re a software engineer, it’s really easy to get it.

On your non-production servers, where you’re using ParkMyCloud, do you know what percent of savings you’re getting?

We’re running those instances 12 hours a day, 5 days a week. So we’re running them 60 hours a week, so, let’s see, we’re getting about 65% savings. That’s pretty awesome.

Thanks so much for speaking with us, Jonathan.

Thank you!


Stop Using Your DevOps People to Clean Toilets | A Rant Against Unnecessary AWS Scripting

I have nothing but the utmost respect for DevOps (development operations) people. They are unsung heroes in my opinion. Living in that precarious place between the developers, IT operations, and the business people, their job is to streamline and stabilize operations related to the rollout of new applications and code updates to support the business.

When everything is working well, most people forget they are there. Much like offensive linemen in football, the only time people seem to notice them is on those rare occasions when something goes wrong. It doesn’t seem fair, but such is the life of DevOps.

To achieve near continuous deployment for applications, a high degree of automation is essential from the time new code changes hit the source code repository until they are pushed through test, QA, staging and into production. To accomplish that, DevOps teams require a working knowledge of their applications at a system level, as well as a deep understanding of the IT infrastructure (servers, storage, databases and network), to properly marry the two.

Inherent in this process is constant optimization to streamline the process and keep costs low. They are constantly evaluating build vs. buy for the tools they use in their trade. The preference is to use commercial off-the-shelf products if they are more cost-effective. This frees up their team to focus on keeping the “main thing the main thing”.

/* Begin Shameless Plug */

The whole idea of ParkMyCloud is to help out the part of the DevOps community who run their environments in Amazon Web Services (AWS).

With ParkMyCloud, you can schedule on/off times for development, testing, QA and staging environments without AWS scripting for as little as $1-$2 per instance per month.

A number of our larger customers have walked away from their own scripted solutions to do this in favor of ParkMyCloud for a few reasons:

  1. It was costing their team more to maintain their AWS scripts
  2. The time spent working on those scripts was time that could have been spent on mainline business applications (a huge opportunity cost)
  3. Their scripts provided no reporting on cost savings, so they had no idea whether they were getting a return on their investment. (With ParkMyCloud, the payback is usually within 2-3 months.)
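For context, the homegrown alternative these customers walked away from is typically a cron-driven script along these lines. This is a minimal sketch using boto3; the `schedule` tag name and its value are illustrative assumptions, not a prescribed convention:

```python
def instance_ids(reservations):
    """Flatten an EC2 DescribeInstances response into a list of instance ids."""
    return [i["InstanceId"] for r in reservations for i in r["Instances"]]

def stop_tagged_instances(tag_key="schedule", tag_value="office-hours", region="us-east-1"):
    """Stop running instances carrying the given tag. Requires AWS credentials."""
    import boto3  # deferred so the sketch can be read without AWS access
    ec2 = boto3.client("ec2", region_name=region)
    reservations = ec2.describe_instances(
        Filters=[
            {"Name": f"tag:{tag_key}", "Values": [tag_value]},
            {"Name": "instance-state-name", "Values": ["running"]},
        ]
    )["Reservations"]
    ids = instance_ids(reservations)
    if ids:
        ec2.stop_instances(InstanceIds=ids)
    return ids
```

A matching start script, cron entries for each team’s hours, credential handling, and any savings reporting all have to be built and maintained on top of this – which is exactly the opportunity cost described above.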


/* End Shameless Plug*/

 

/* Begin Rant */

So, I told you all of that to air a real pet peeve that I have.

Imagine my surprise when I still talk to potential customers, bent on writing their own AWS scripts to turn instances on & off. It just doesn’t make sense.

When they tell me, “Well, we can do that.” Then my response is, “Does your DevOps team also clean toilets?”

Then they give me this weird look (kind of like the look on your face right now), and respond, “Well, no.”

“Why not?” I ask. “Are they not smart enough to clean toilets?”

“Well of course they are smart enough, but it is not worth their time. We hire a janitorial service to clean our restrooms.”

“So, let me get this straight: You are enlightened enough to realize that cleaning toilets would be a waste of your team’s time, so you hired a janitorial service. Why on earth would you waste your precious DevOps resources to do the moral equivalent of this in IT, by having them waste time writing scripts to schedule on/off times for EC2 instances?”

“They should be spending that time on your main business applications. Leave that to us!”

Increasingly, they get the point.

/* End Rant */

In closing, please remember: Friends don’t let DevOps friends waste time on AWS scripting for things not related to application delivery (especially when there are more cost-effective commercial products available to help save time and money).  Friends do tell their DevOps friends about ParkMyCloud.


Mr. Bobvious Realizes: Developers are like teenagers and idle AWS instances are like light bulbs

Our hero, Mr. Bobvious, the IT Ops guy who automatically turns off idle AWS instances using ParkMyCloud, was texting with his teenage son not long ago. Afterwards he realized that the challenge of getting his company’s developers to remember to turn off their AWS instances was the same as…well read on and you’ll see:


Mr. Bobvious: Jake? Are you home?

Teen: Sup.

Mr. Bobvious: What’s sup?

Teen: Not much, howboutchoo?

Mr. Bobvious: No I mean what does sup mean?

Teen: What’s up?

Mr. Bobvious: Can you just give me a straight answer pls?

Teen: sup means what is up

Mr. Bobvious: Oh, sorry. Are you home?

Teen: Why?

Mr. Bobvious: Just make sure you turn the lights off in your room, the bathroom and the hall before you leave.

Mr. Bobvious: And the kitchen, mudroom and any other room you were in today

Teen: Oh. I’m not home. Sorry.

Mr. Bobvious: Did you turn any lights off before you left?

Teen: Um.

Teen: No.

Teen: Sir.

Mr. Bobvious: How many times do we have to discuss this? Electricity is not free.

Mr. Bobvious: What if I left your iPad on all day and the battery was drained when you got home?

Teen: I’d plug it in. I guess.

Mr. Bobvious: Anyway.

Teen: Dad, I’m just a teen. Teens aren’t wired to turn stuff off.

Mr. Bobvious: You know our software developers leave our computer servers on all night.

Mr. Bobvious: Do you know what my boss would do to me if I let that happen?

Teen: Fire you?

Mr. Bobvious: No, no. He’d just be mad that I’m wasting electricity and money.

Teen: So what’d ya do about the computers?

Mr. Bobvious: I bought software that turns off the computers automatically. We’re saving a fortune.

Teen: Lit!

Mr. Bobvious: Thanks!

Teen: Are you on the way home?

Mr. Bobvious: Why?

Teen: Just thinkin about how good some Chipotle would taste right about now.

 

Copyright © ParkMyCloud 2016. All rights reserved|Privacy Policy