Cloud Cost Management Tool Comparison

Not only has it become apparent that public cloud is here to stay, it’s also growing faster as time goes on (by 2020, it is estimated that more than 40% of enterprise workloads will be in the cloud). IT infrastructure has changed permanently, and enterprise organizations are coming to terms with some of the side effects of this shift. One of those side effects is the need for tools and processes (and even dedicated teams in larger organizations) for cloud cost management and cost control. As enterprises shift resources to the cloud, executives from all teams within an organization want to see costs, projections, usage, savings, and quantifiable efforts to save the company money while maximizing IT throughput.

There’s a variety of tools available to solve these problems, so let’s take a look at a few of the major ones. All of the tools mentioned below support Amazon Web Services (AWS), Microsoft Azure, and Google Cloud Platform.

CloudHealth

CloudHealth provides detailed analytics and reporting on your overall cloud spend, with the ability to slice and dice that data in a variety of ways. Recommendations about your instances are based on a score driven by instance utilization and cloud provider best practices. This data is collected from agents installed on the instances, along with cloud-level information. Analysis and business intelligence tools for cloud spend and infrastructure utilization feature prominently in the dashboard, with governance provided through team-driven policies for alerts and thresholds. Some actions can be scripted, such as deleting elastic IPs and snapshots or managing EC2 instances, but reporting and dashboards are the main focus.

Overall, the platform seems to be a popular choice for large enterprises wanting cost and governance visibility across their cloud infrastructure.  Pricing is based on a percentage of your monthly cloud spend.

CloudCheckr

CloudCheckr provides visibility into governance, security, compliance, and cost problems by running analytics and checks against logic built into its platform. It relies on non-native tools and integrations, such as Spotinst, Ansible, or Chef, to take action on its recommendations. CloudCheckr’s reports cover a wide range of topics, including inventory, utilization, security, costs, and overall best practices. The UI is simple and likely equally well regarded by technical and non-technical users.

The platform seems to be a popular choice with small and medium-sized enterprises looking for greater overall visibility and recommendations to help optimize their use of cloud. Given this SMB focus, customers are often provided the service through MSPs. Pricing is based on your cloud spend, but a free tier is also available.

Cloudyn

Cloudyn (recently acquired by Microsoft) is focused on providing advice and recommendations, along with chargeback and showback capabilities, for enterprise organizations. Cloud resources and costs can be managed through its hierarchical team structure. Visibility, alerting, and recommendations are provided in real time to assist in right-sizing instances and identifying outlying resources. Like CloudCheckr, it relies on external tools or people to act upon recommendations, and it lacks automation.

Their platform options include supporting MSPs in the management of their end customers’ cloud environments, as well as an interesting cloud benchmarking service called Cloudyndex. Pricing for Cloudyn is also based on your monthly cloud spend. Much of the focus seems to be on current Microsoft Azure customers and users.

ParkMyCloud

Unlike the other tools mentioned, ParkMyCloud focuses on actions and automated scheduling of resources to provide optimization and immediate ROI. Reports and dashboards show the cost savings provided by these schedules, along with recommendations on which instances to park. Schedules can be attached to instances manually, or assigned automatically based on tags or naming schemes through its Policy Engine. It pairs well with the recommendation-based tools mentioned above to provide total cost control through both actions and reporting.

ParkMyCloud is widely used by DevOps and IT Ops teams in organizations from small startups to global multinationals, all of whom are keen to automate cost control by leveraging ParkMyCloud’s native API and pre-built integrations with tools like Slack, Atlassian, and Jenkins. Pricing is per instance, with a free tier available.

Conclusion

Cloud cost management isn’t just a “should think about” item; it’s a “must have in place” item, regardless of the size of a company’s cloud bill. Specialized tools can help you view, manage, and project your cloud costs no matter which provider you choose. The right toolkit can supercharge your IT infrastructure, so consider a combination of the tools above to really get the most out of your AWS, Azure, or Google environment.

Cloud Webhooks – Notification Options for System Level Alerts to Improve your Cloud Operations

Webhooks are user-defined HTTP POST callbacks. They provide a lightweight mechanism for letting remote applications receive push notifications from a service or application, without requiring polling. In today’s IT infrastructure, which includes monitoring tools, cloud providers, DevOps processes, and internally developed applications, webhooks are a crucial way for individual systems to communicate for cohesive service delivery. Now, in ParkMyCloud, webhooks are available for even more powerful cost control.

For example, you may want to let a monitoring solution like Datadog or New Relic know that ParkMyCloud is stopping a server for some period of time, and therefore suppress alerts in that monitoring system for the period the server will be parked; conversely, you may want to re-enable monitoring once the server is unparked (turned on). Another example would be to have ParkMyCloud post to a chatroom or dashboard when schedules have been overridden by users. We do this by enabling system notifications to our cloud webhooks.
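To picture what happens on the receiving end, here is a minimal sketch of a webhook receiver, written in Python with Flask. The endpoint path and payload fields are illustrative assumptions rather than ParkMyCloud’s documented schema, and the monitoring call is left as a comment since it depends on your monitoring tool:

    # Minimal sketch of a receiver for ParkMyCloud-style webhook notifications.
    # The route and payload fields are illustrative assumptions, not the
    # documented ParkMyCloud schema.
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/parkmycloud/webhook", methods=["POST"])
    def handle_notification():
        event = request.get_json(force=True)
        if event.get("type") == "parking_action":
            if event.get("action") == "stop":
                # e.g., mute this host in Datadog/New Relic before it is parked
                print("Parked %s: suppressing monitoring alerts" % event.get("resource"))
            elif event.get("action") == "start":
                # e.g., unmute the host once it is running again
                print("Unparked %s: re-enabling monitoring alerts" % event.get("resource"))
        return "", 204

    if __name__ == "__main__":
        app.run(port=8080)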

Previously, only two options were provided when configuring system-level and user notifications in ParkMyCloud: System Errors and Parking Actions. We have added three new notification options for both system-level and user notifications. Descriptions of all five options are provided below:

  • System Errors – These are errors occurring within the system itself such as discovery errors, parking errors, invalid credential permissions, etc.
  • System Maintenance and Updates – These are the notifications provided via the banner at the top of the dashboard.
  • User Actions – These are actions performed by users in ParkMyCloud such as manual resource state toggles, attachment or detachment of schedules, credential updates, etc.
  • Parking Actions – These are actions specifically related to parking such as automatic starting or stopping of resources based on defined parking schedules.
  • Policy Actions – These are actions specifically related to configured policies in ParkMyCloud such as automatic schedule attachments based on a set rule.

We have made the options more granular to give you better control over which events you do and don’t see.

These options can be seen when adding or modifying a channel for system level notifications (Settings > System Level Notifications). In the image shown below, a channel is being added.

Note: For additional information regarding these options, click on the Info Icon to the right of Notify About.

The new notification options are also viewable by users who want to set up their own notifications (Username > My Profile).  These personal notifications are sent via email to the address associated with your user.  Personal notifications can be set up by any user, while Webhooks must be set up by a ParkMyCloud Admin.

After clicking on Notifications, you will see the above options and may use the checkboxes to select the notifications you want to receive. You can also set each webhook to handle a specific ParkMyCloud team, then set up multiple webhooks to handle different parts of your organization. This offers maximum flexibility based on each team’s tools, processes, and procedures. Once finished, click on Save Changes. Any of these notifications can then be sent to your cloud webhook, and even to Slack, to ensure ParkMyCloud is integrated into your cloud management operations.

 

Saving Money on Batch Workloads in Public Cloud

Large companies have traditionally had an impressive list of batch workloads that run at night, when people have gone home for the day. These include application and database backup jobs; extraction, transformation, and load (ETL) jobs; disaster recovery (DR) environment checks and updates; online analytical processing (OLAP) jobs; and monthly or quarterly billing updates or financial “close,” to name a few.

Traditionally, with on-premises data centers, these workloads have run at night so that the same hardware infrastructure that supports daytime interactive workloads can be repurposed, if you will, to run batch workloads overnight. This served a couple of purposes:

  • It avoided network contention between the two workloads (as both are important), allowing the interactive workloads to remain responsive.
  • It avoided data center sprawl by using the same infrastructure to run both, rather than having dedicated infrastructure for interactive and batch.

Things Are Different with Public Cloud

As companies move to the public cloud, they are no longer constrained by having to repurpose the same infrastructure. In fact, they can spin up and spin down new resources on demand in AWS, Azure or Google Cloud Platform (GCP), running both interactive and batch workloads whenever they want.

Network contention is also less of a concern, since the public cloud providers typically have plenty of bandwidth. The exception, of course, is where batch workloads use the same application interfaces or APIs to read/write data.

So, moving to public cloud offers a spectrum of possibilities, and you can use one or any combination of them:

  • You can run batch nightly using processes similar to those in your own data centers, but on separately provisioned instances/virtual machines. This probably requires the least effort to move batch to the public cloud and the least change to your DevOps processes, and it may save you some money by having instances sized specifically for the workloads and by leveraging cloud cost savings options (e.g., reserved instances);
  • You can run batch on separately provisioned instances/virtual machines, but concurrently with existing interactive workloads. This will likely require some additional work to change your DevOps processes, but it offers more freedom and benefits similar to those mentioned above. You will still need to pay attention to any application interfaces/APIs the workloads have in common; or
  • At the extreme end of the cloud adoption spectrum, you could use cloud provider platform-as-a-service (PaaS) offerings, such as AWS Batch, Microsoft Azure Batch, or GCP Cloud Dataflow, where batch is essentially treated as a “black box.” A detailed comparison of these services is beyond the scope of this blog, but in summary, these are fully managed services: you queue up input data in an S3 bucket, object blob, or volume, along with a job definition, appropriate environment variables, and a schedule, and you’re off to the races. These services employ containers and autoscaling/resource groups/instance groups where appropriate, with options to use less expensive compute in some cases. (For example, with AWS Batch, you have the option of using spot instances.)

The advantage of this approach is potentially faster time to implement and (maybe) lower monthly cloud costs, because the compute services run only at the times you specify. The disadvantages may be the degree of operational/configuration control you give up; the fact that these services may be totally foreign to your existing DevOps folks and processes (i.e., there is a steep learning curve); and that they may tie you to that specific cloud provider.
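To make the “black box” route more concrete, here is a hedged sketch of submitting work to AWS Batch with boto3; the job name, queue, and job definition are placeholders that would have to be registered in your account beforehand:

    # Sketch: submitting a nightly job to AWS Batch via boto3.
    # The queue and job definition names below are placeholders; both must
    # already be registered in your account for this call to succeed.
    import boto3

    batch = boto3.client("batch", region_name="us-east-1")

    response = batch.submit_job(
        jobName="nightly-etl",                 # placeholder job name
        jobQueue="batch-processing-queue",     # placeholder queue
        jobDefinition="etl-job-def:1",         # placeholder job definition
        containerOverrides={
            "environment": [{"name": "RUN_DATE", "value": "2017-08-01"}],
        },
    )
    print("Submitted job:", response["jobId"])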

A Simple Alternative

If you are looking to minimize impact to your DevOps processes (that is, the first two approaches mentioned above), but still save money, then ParkMyCloud can help.

Normally, with the first two options, cron jobs are scheduled to kick off batch jobs at the appropriate times throughout the day, but the underlying instances must be running for cron to do its thing. You could use ParkMyCloud to put parking schedules on these resources, such that they are turned OFF for most of the day but turned ON just in time to allow the cron jobs to execute.

We have been successfully using this approach in our own infrastructure for some time now, to control a batch server used to do database backups. This would, in fact, provide more savings than AWS reserved instances.

Let’s look at a specific example in AWS. Suppose you have an m4.large server you use to run batch jobs. Assuming Linux pricing in us-east-1, this server costs $0.10 per hour, or about $73 per month. Suppose you have configured cron to start batch jobs at midnight UTC, and that they normally complete 1 to 1½ hours later.

You could purchase a Reserved Instance for that server, paying either nothing upfront or all upfront, and your savings would be 38%-42%.

Or, you could put a ParkMyCloud schedule on the instance so that it is ON only from 11 p.m. to 1 a.m. UTC, allowing enough time for the cron jobs to start and run. The savings in that case would be 87.6% (including the cost of ParkMyCloud), without the need for a one-year commitment. Depending on how many batch servers you run in your environment and their sizes, that could be some hefty savings.
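The arithmetic behind that figure works out roughly as follows; the per-instance ParkMyCloud fee below is an assumed value of about $3/month, used only to illustrate how the 87.6% is reached:

    # Rough arithmetic behind the parking savings figure above.
    # The ParkMyCloud per-instance fee is an assumed ~$3/month (illustrative).
    hourly_rate = 0.10                       # m4.large, Linux, us-east-1
    hours_per_month = 730
    on_hours = 2 * hours_per_month / 24.0    # ON only 2 hours/day (~61 hours)

    full_cost = hourly_rate * hours_per_month    # ~$73.00
    parked_cost = hourly_rate * on_hours         # ~$6.08
    pmc_fee = 2.95                               # assumed per-instance fee

    savings = (full_cost - parked_cost - pmc_fee) / full_cost
    print("Monthly savings: %.1f%%" % (savings * 100))   # ~87.6%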

Conclusion

Public cloud offers you a lot of freedom and some potentially attractive cost savings as you move batch workloads from on premises. You are no longer constrained by having the same infrastructure serve two vastly different types of workloads, interactive and batch. The savings you can achieve by moving to public cloud will vary, depending on the approach you take and the provider/service you use.

The approach you take depends on the amount of change you’re willing to absorb in your DevOps processes. If you are willing to throw caution to the wind, the cloud provider PaaS offerings for batch can be quite compelling.

If you wish to take a more cautious approach, we engineered ParkMyCloud to park servers without the need for scripting or the need to be a DevOps expert. This approach lets you achieve decent savings with minimal change to your DevOps batch processes and without the need for Reserved Instances.

New: Cloud Savings Dashboard Now Available in ParkMyCloud

We’re happy to introduce ParkMyCloud’s new reporting dashboard! There are now easy-to-access reports that provide greater insight into cloud costs, team rosters, and more. Details on this update can be found in our support portal.

Dashboard Details

Now, when you click Reports in the left navigation panel, instead of getting the option to download a full savings report, you’ll see your ParkMyCloud reporting dashboard. This provides a quick view of cloud provider, team, and resource costs, and information regarding your ParkMyCloud savings. At the top of the reporting dashboard, two drop-down menus are provided for selecting the report type and the time period. The default selections are Dashboard and Trailing 30 Days, which is what you see when you first open Reports. Click on a drop-down menu to choose other available options.

Underneath the Report Type drop-down menu, you will see several options broken down into sections (Financial, Resource, Administrative, etc.). Click an option in the menu to view that specific report within the dashboard. These reports can also be shown for a variety of time periods. Reports may be exported as a CSV or Excel file by clicking the desired option to the right of the Report and Time Period drop-down menus.

Click on Legacy if you would prefer to use the previous reporting functionality rather than the new reporting dashboard in ParkMyCloud. A pop-up window will appear for selecting the start and end dates, along with the type of legacy report. As part of this change, we have also moved Audit Logs underneath Reports. To access this option, select Reports in the left navigation panel and then Audit Log.

Check It Out

If you don’t yet use ParkMyCloud, you can try it now for free. We offer a 14-day free trial of all ParkMyCloud features, after which you can choose to subscribe to a premium plan or continue parking your instances using ParkMyCloud’s free tier forever.

If you already use ParkMyCloud, you’ll instantly see a visual representation of your cloud savings just by logging in to the platform. We challenge you to use this as a scoreboard, and try to drive your monthly savings as high as you can!

Exploring AWS RDS Pricing and Features

Traditional systems administration of servers, applications, and databases used to be a little simpler when it came to choices and costs. For a long time, there was no choice other than to hook up a physical server, install your desired OS, and install the database or application software you needed. Eventually, you could choose to install your OS on a physical server or on a virtual machine running on a hypervisor. Then large companies started running their own hypervisors and allowed you to rent a VM on their servers for as long as you needed it. In 2009, Amazon started offering the ability to rent databases directly, without having to worry about the underlying OS, in a platform-as-a-service (PaaS) offering called Relational Database Service (RDS). This added another layer of complexity to your choices when managing your infrastructure. Let’s explore AWS RDS pricing a bit and examine some of the features that come with it.

RDS Basics

AWS RDS offers the ability to run and manage a relational database directly, without managing the infrastructure the database runs on or having to worry about patching the database software itself. Amazon currently offers RDS in the form of MySQL, Aurora (MySQL on steroids), Oracle, Microsoft SQL Server, PostgreSQL, and MariaDB. The instance sizes are grouped into three categories: Standard (m4), Memory Optimized (r3), and Micro (t2). Each family has multiple sizes with varying numbers of vCPUs, GiBs of memory, and levels of network performance, and can be input/output optimized.

Each RDS instance can be set up to be “multi-AZ,” leveraging replicas of the database in different availability zones within AWS. This is often used for production databases: if a problem arises in one availability zone, failover to one of the replica databases happens automatically behind the scenes, and you don’t have to manage it. Along with multi-AZ deployments, Amazon offers Aurora, which provides more fault tolerance and self-healing beyond multi-AZ, as well as additional performance features.

RDS Pricing

RDS is essentially a service running on top of EC2 instances, but you don’t have access to the underlying instances. Amazon has therefore priced RDS instances in a way very similar to EC2 instances, which will be familiar once you’ve gotten a grasp on the structure already in place for compute. There are multiple components to the price of an instance: the underlying instance size, storage of data, multi-AZ capability, and sending data out (data transfer in is free). To add another layer of complexity, each database type (MySQL, Oracle, etc.) has different prices for each of these factors. Aurora also charges for I/O on top of the other costs.
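As a rough sketch of how those components combine for a single instance (all rates below are illustrative ballpark figures, not current AWS list prices):

    # Sketch: how the RDS pricing components stack up for one instance.
    # All rates are illustrative ballpark figures, NOT current AWS list prices.
    instance_hourly = 0.175    # e.g., a db.m4.large running MySQL
    hours_per_month = 730
    multi_az_factor = 2.0      # multi-AZ roughly doubles the instance cost
    storage_gb = 100
    storage_rate = 0.115       # $/GB-month of general-purpose SSD storage
    data_out_gb = 50
    transfer_rate = 0.09       # $/GB transferred out (inbound is free)

    monthly_cost = (instance_hourly * hours_per_month * multi_az_factor
                    + storage_gb * storage_rate
                    + data_out_gb * transfer_rate)
    print("Estimated monthly cost: $%.2f" % monthly_cost)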

When you add all this up, the cost of an RDS instance can go through the roof for a high-volume database. It can also be hard to predict the usage, storage, and transfer needs of your database, especially for new applications. And the raw performance might be a lot less than what you would expect running on your own hardware, or even on your own instances. What makes the price worth it?

RDS vs. Installing a Database on EC2

Frequently, the choice comes down to using RDS for your database backend or installing your own database on an EC2 instance the “traditional” way. From a purely financial perspective, installing your own database is almost guaranteed to be cheaper if you focus on AWS direct costs alone. However, there’s more to the decision than just the cost of the services.

What often gets lost in the use of a service is the time-to-value savings (which include your time, and potentially the opportunity cost/benefit of bringing services online faster). For example, by using RDS instead of your own database, you avoid the need to install and manage the OS and database software, as well as their ongoing patching. You also get automatic backups and recovery through the AWS console or AWS API. You avoid having to configure storage LUNs and worry about optimizing striping for better I/O. Resizing instances is much simpler with RDS, whether going smaller or bigger. High availability (either cold or warm) is available at the click of a button. All of this means less management for you and faster deployment times, though at a higher price point. If your company competes in a highly competitive market, those faster deployment times can make all the difference in the world to your bottom line.

One downside of just about every PaaS offering (and RDS was no exception) is that there typically is no “OFF” switch, meaning that in non-production environments you are paying for the service whether your DevOps folks are using it or not. For RDS, AWS changed that recently: RDS instances in dev/test environments can now be stopped.
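With the AWS SDK, the stop and start calls look like the minimal sketch below; the instance identifier is a placeholder, and note that stopping is not supported for every RDS configuration:

    # Sketch: stopping and starting a non-production RDS instance with boto3.
    # "dev-db" is a placeholder identifier; stop/start is not supported for
    # every RDS configuration.
    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    # Stop the instance at the end of the workday...
    rds.stop_db_instance(DBInstanceIdentifier="dev-db")

    # ...and start it again when the team needs it.
    rds.start_db_instance(DBInstanceIdentifier="dev-db")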

ParkMyCloud has made “parking” public cloud compute resources as simple as possible, and we natively support parking RDS instances as well, helping you save money on non-production databases.

By using our Logical Groups feature, you can create a simple “stack” containing both compute instances and RDS databases to represent a particular application. The start/stop times can be sequenced within the group, and a single schedule can be used on the group for simplified management.

Conclusion

AWS RDS pricing can get a bit tricky, and it really requires you to know the details of your database in order to accurately predict the bill. However, the service brings a ton of benefits and can really help streamline your systems administration by handling the management and deployment of your backend database. For companies moving to the cloud (or born in the cloud), RDS might be your choice over running a database on a separate compute instance or on your own hypervisor, as it allows you to focus on your business and application rather than on being a database administrator. For larger, established companies with a big team of DBAs and well-established automation, or for I/O-intensive applications, RDS might not be the right fit. By knowing the features, benefits, drawbacks, and cost factors, you can make the most informed decision for your database needs.

Interview: DevOps in AWS – How to Automate Cloud Cost Savings

We chatted with Ryan Alexander, DevOps Engineer at Decision Resources Group (DRG), about his company’s use of AWS and how they automate cloud cost savings. Below is a transcript of our conversation.

Hi Ryan, thanks for speaking with us. To start out, can you please describe what your company does?

Decision Resources Group offers market information and data for the medtech industry. For example, let’s say a medical graduate student is doing a thesis on Viagra use in the Boston area. They can use our tool to see information such as age groups, ethnicities, number of hospitals, and number of people who were issued Viagra in the city of Boston.

What does your team do within the company? What is your role?

I’m a DevOps engineer on a team of two. We provide infrastructure automation to the other teams in the organization. We report to senior tech management, which makes us somewhat of an island within the organization.

Can you describe how you are using AWS?

We have an infrastructure team internally. Once a server or infrastructure is built, we take over to build clusters and environments for what’s required. We utilize pretty much every tool AWS offers — EBS, ELB, RDS, Aurora, CloudFormation, etc.

What prompted you to look for a cost control solution?

When I joined DRG in December, there was a new cost saving initiative developing within the organization. It came from our CTO, who knew we could be doing better and wanted to see where we might be leaving money on the table.

How did you hear about ParkMyCloud?

One of my colleagues actually spoke with your CTO, Dale, at AWS re:Invent, and I had also heard about ParkMyCloud at DevOpsDays Toronto 2016. We realized it could help solve some of our cloud cost control problems and decided to take a look.

What challenges were contributing to the high costs? How has ParkMyCloud helped you solve them?

We knew we had a problem where development, staging, and QA environments were only used for 8 hours a day – but they were running for 24 hours a day. We wanted to shut them down and save money on the off hours, which ParkMyCloud helps us do automatically.

We also have “worker” machines that are used a few times a month, but they need to be there. It was tedious to go in and shut them down individually. Now with ParkMyCloud, I put those in a group and shut them down with one click. It is really just that easy to automate cloud cost savings with ParkMyCloud.

We also have security measures in place where not everyone has the ability to sign in to AWS and shut down instances. If a team needed servers started on demand, but they’re in another country and I’m sleeping, they had to wait until I woke up the next morning, or I had to get up at 2 AM. Now that we’ve set up Single Sign-On, I can give the people who use those servers the rights to start them up and shut them down. This has been more efficient for everyone: I no longer have to babysit and turn those servers on and off as needed, which saves time for all of us.

With ParkMyCloud, we set up teams and users so they can only see their own instances, so they can’t cause a cascading failure because they can only see the servers they need.

Were there any unexpected benefits of ParkMyCloud?

When I started, I deleted 3 servers that had been sitting there doing nothing for a year, costing the company lots of money. With ParkMyCloud, that kind of thing won’t happen, because everything gets sorted into teams. We can see the costs by team and ask the right questions, like, “Why is your team’s spend so high right now? Why are you ignoring these recommendations from ParkMyCloud to park these instances?”

We rely on tagging to do all of this. Tagging is life in DevOps.

Interview: Atlassian Bamboo Automation + ParkMyCloud for AWS Cost Savings

We talked with Travis Rehl, Director of Application and Engineering at Siteworx, about how his team uses ParkMyCloud in conjunction with Atlassian Bamboo automation to improve governance and optimize their AWS cloud infrastructure. Below is a transcript of our conversation.

Can you start by telling us about Siteworx and what you guys do?

Sure, so Siteworx is a company that does digital transformations for clients, and my particular piece of it is Managed Services Hosting. We host ecommerce and content management systems for clients, generally Fortune 500 Companies or larger. We host specific products in AWS, and we’re moving into Azure as well.

What is your role in the company?

I am the Director of Application and Engineering here at Siteworx. I run the Siteworx services group which includes our hosting department as well as our application development team which supports our “run” phase of an engagement with a client.

Who in your organization is using ParkMyCloud?

We are currently using it for our Siteworx internal infrastructure, both EC2 and RDS, but I have some ideas to add it as a part of our managed services offering.

In the app we have maybe 5 or 6 users. They are team leads or engineering managers who have identified the scheduling that is appropriate for those particular instances and AWS accounts. This gives them the ability to group different servers together by environment levels for different clients.  One person from our finance team has access to it for billing and reporting.

My team in particular that uses ParkMyCloud is our engineering and operations group. There are two main teams of ParkMyCloud users: our Operations team, which is 24/7, and our Engineering team, which is generally 9-5 Eastern. They use ParkMyCloud to reduce costs, and have implemented it in such a way that our Development teams can turn servers back on as needed. If they have a project or demo occurring at an off hour, they are able to hit a button through our automation system — we’re using Atlassian Bamboo automation — to turn on the servers and utilize them.

Can you tell us more about that Atlassian Bamboo automation system?

If a team member wants to deploy code to a server during off hours, they have a button in Bamboo to press to turn the server on via the ParkMyCloud API. Then they can hit a second set of buttons to send their code changes out to it. We utilize the calendar “snooze” function that PMC offers.
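A script task behind such a button might look something like the sketch below. The base URL, endpoint path, and token handling are hypothetical placeholders standing in for ParkMyCloud’s actual REST API, which we are not reproducing here:

    # Hypothetical sketch of a Bamboo script task that wakes a parked server
    # via a REST call. The base URL, path, and token are placeholders, not
    # ParkMyCloud's documented API.
    import os
    import requests

    BASE_URL = "https://api.parkmycloud.example"    # placeholder
    TOKEN = os.environ["PMC_API_TOKEN"]             # injected by the Bamboo plan

    def snooze(resource_id, hours):
        # Override the parking schedule so the server stays on for `hours`.
        resp = requests.post(
            "%s/resources/%s/snooze" % (BASE_URL, resource_id),  # placeholder path
            headers={"Authorization": "Bearer %s" % TOKEN},
            json={"hours": hours},
        )
        resp.raise_for_status()

    if __name__ == "__main__":
        snooze("i-0123456789abcdef0", hours=2)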

What were you looking for when you found ParkMyCloud?

I was looking for a technology that would allow us to optimize and automate our AWS cloud management. Internally, we have an agenda of trying to branch out to as many cloud platforms as necessary. So I was looking into many different services that manage your cloud-based servers and are compatible with different providers. That is when ParkMyCloud was suggested to me by a friend. We started a free trial, and got in touch with you all.

I am all in on ParkMyCloud, and I think we have a lot of use for it; down the road, we plan to work with our clients to incorporate it into our service offering.

Do you have any other cost control measures in place for AWS?

We evaluate server performance using Trusted Advisor in AWS or other services that say you could scale down. The issue with those other services is that they are sometimes inaccurate, because they use average CPU usage that does not take server downtime into account. We try to evaluate and scale down as necessary based on CPU usage while the server is active.

How did the evaluation with ParkMyCloud go?

After we did some initial research on ParkMyCloud and other tools, we got in touch with PMC, started a free trial, did a demo, and clarified a few questions – the entire process took just a couple of weeks. The platform is entirely self-service, and the ROI is immediate and verifiable.

How X-Mode Deals with Rising AWS Costs

We sat down with Josh Anton, CEO of X-Mode, a technology company whose app has been experiencing rapid growth and rising AWS costs. We asked him about his company, what cloud services he uses, and how he goes about mitigating those costs.

Can you start by telling us about X-Mode and what you guys do?

X-Mode is a location platform that currently maps out 5-10% of the U.S. population on a monthly basis and 1-2% of the U.S. population daily, which is about 3-6 million daily active users and 15M to 20M monthly users. X-Mode collects location-based data from applications and platforms used by these consumers, and then develops consumer segments or attribution, where our customers basically use the data to determine if their advertising is effective and to develop target profiles. For example, based on the number and types of coffee shops a person has visited, we can assume they are a certain type of coffee drinker. Or a company like McDonald’s will determine that their advertising is effective if they see that an ad was run in a certain area and a person visited that restaurant in the next few days. The data has many applications.

How did you get this idea, Josh?

We started off as an app called Drunk Mode, which was founded and built while I was at the University of Virginia studying Marketing and IT. After about a year and a half, our app grew to about 1.5 million users by leveraging influencer marketing via Trend Pie and a student campus rep program at 50+ universities. In September of 2016, we realized that if we developed a location-based technology platform, we could monetize and capitalize on the location data we collected from the Drunk Mode app. Along with input from our advisors, we developed a strategy to help out other small apps by aggregating their data, crunching it, and packaging it up in real time to sell to ad agencies and retailers, acting almost as a data wholesaler and helping these small app players monetize their data as a secondary source of income.

Whose cloud services are you using, and how does X-Mode work?

We use Amazon Web Services (AWS) for all of our cloud infrastructure and primarily use their EC2, RDS, and Elastic Beanstalk services. Our technology works by collecting and aggregating location data based on when and where people go on a daily basis. It is collected locally by iOS and Android devices, and passed to AWS’s cloud using their API gateway function. The cool thing is that we are able to pinpoint a person’s location within feet of a retail location. The location data is batched and sent to our servers every 12 hours, and we package it up and license the data out to our vendors. We are processing around 10 to 12 billion location-based records per month, and we have some proprietary algorithms that make our processing very fast with almost no burn on the phone’s battery. Our customers are sent the data daily, and we use services like Lambda, RDS, and Elastic Beanstalk to make this as efficient as possible. We are now developing the functionality to better triangulate beacons so that we can pinpoint locations even more precisely and send location data within the hour, rather than within the day.

Why did you pick AWS?

We chose AWS because when X-Mode joined Fishbowl Labs (a startup accelerator run and sponsored by AOL in Northern Virginia), we were given $15,000 in free AWS credits. The free credits have made me very loyal to Amazon’s service, and now the switching costs would be fairly high in terms of effort and dollars to move away from Amazon. So even though it’s expensive, we are here to stay, and we are adopting more of AWS’s advanced services to improve our platform performance and take advantage of their technology advances. Another reason we stay with AWS is that we know it is going to be there. We previously used a service called Parse.com that was acquired by Facebook, and a few years later they shut it down. For us, performance and stability (the service still existing 10 years from now) are very important.

Are you still using AWS credits?

No, we used those up many months ago. We have gone from spending a few hundred dollars a month to spending $25,000 or more a month. While that is a cost, it’s also a blessing, in that X-Mode is rapidly growing and scaling. Outside of the cost of people, this is our biggest monthly expense. ParkMyCloud was an easy choice, given that 75% or more of our AWS spend is on EC2 and RDS services, and given ParkMyCloud’s ability to “park” each service and its flexible governance model for our remote engineering team. So we are very excited about the savings ParkMyCloud will produce for us, along with some of the new design work we will be doing to make our platform even more efficient.

Are there other ways you are looking to optimize your AWS spend?  

We believe that we have to re-architect the system. We have actually done that three times given our rapid platform growth, but it is all about making sure that we are optimizing our import/export process. We are running our servers at maximum capacity to help get information to people, and are continually looking to make our operation more efficient. Along with using ParkMyCloud, we are focusing on general platform optimization to make sure we keep costs down, improve performance and innovate at a rapid pace.

What other tools do you use as part of your DevOps process?

Let’s keep in mind we are a startup, but we are getting more and more organized in terms of development cycles, and we have a solid delivery process. And yes, we use tools like Slack, Jira, BaseCamp, Bitbucket, and Google Drive. Everything is SaaS-based, everything is in the cloud, and we follow an agile development process. On the Sales and Marketing side we are solely a millennial workforce and work in the office, but our development team is basically stay-at-home dads distributed around the country, so planning and communication are keys to our success. That’s where Slack and Jira come into play. In terms of processes, we are trying to implement a better QA process so we deliver well-vetted code to our end users. We do a lot of development planning and mapping each quarter, so all of this is incredibly important to the growth of the X-Mode platform and to the success of our organization.

Trends in Cloud Computing – ParkMyCloud Turns Two, What’s New?

It’s not hard to start a company, but it’s definitely hard to grow and scale one, so two years later we thought we would discuss the trends in cloud computing that shape our growth and vision – what we see and hear as we talk to enterprises, MSPs, and industry pundits on a daily basis. First and foremost, we need to thank our customers, both free and paid, who use ParkMyCloud, save millions a year, actively engage with us in defining our roadmap, and have helped us develop the best damn cloud cost control solution in the market. We also thank the bloggers, analysts, and writers who share our story; given that we have customers on every continent (except Antarctica), this has been extremely beneficial to us.

Observation Number One: the public cloud is here to stay. Given the CapEx investment needed to build and operate data centers all over the world, only cash-rich companies will succeed at scale, so you need to figure out whether you want to be a single-cloud/multi-region or multi-cloud user. We discussed this in detail recently in this blog, and it really boils down to risk mitigation. Most companies we talk to are single cloud BUT do ask if we support multi-cloud in case they diversify (we do: we support AWS, Azure, and Google).

Observation Number Two: AWS is king – duh. Well, they are, and they continue to innovate and grow at a record-setting pace. AWS just hit $4bn in quarterly revenue – that’s a $16bn run rate. It’s like the new IBM: what CIO or CTO is going to get fired for moving their infrastructure to AWS’s cloud to improve agility, attract millennial developers who want to innovate in the cloud, leverage the cloud ecosystem, and lower costs (we will address this one in a bit)? We released support for Azure and Google in 2017, and yet 75% or more of the new trials and customers we get use AWS, and their environments are almost always larger than those on Azure and Google. There is a reason Microsoft and Google do not release IaaS statistics. As for IBM and Oracle, they are the way-back IaaS time machine.

Observation Number Three: cloud cost control is a real thing. It’s something enterprises really care about, and optimizing cloud spend as bills grow is becoming increasingly important to the CFO and CIO. The focus is mainly on buying capacity in advance (which kind of defeats the purpose of the pay-as-you-go model), rightsizing servers (developers have a tendency to over-provision for their needs), turning stuff off when it’s not being used, and finding orphaned resources that are ‘lost’ in the cloud. As 65% of a bill is spent on compute (servers/instances), the focus is usually directed there first and foremost, since a reduction there has the largest impact on the bill.

Observation Number Four: DevOps and IT Ops are responsible for cloud cost control, not Finance. Now, Finance (or the CFO) might provide a directive to IT or Engineering that cloud costs must be brought under control and that they need to look at ways to optimize, but at the end of the day, DevOps and IT Ops are responsible for evaluating and selecting tools to help their companies immediately reduce their cloud costs. When we talk to the technical teams during a demo, they have been told they need to reduce their cloud spend, or there is a cost control initiative in place, and they then research technologies to help them solve the problem (SEO is key here). Here’s a great example of a FinTech customer of ours and how their cost control decision went down.

Observation Number Five: it’s all about automation, DevOps, and self-service. As mentioned, the technical folks are responsible for implementing a cost control platform to optimize their cloud spend, and as such it’s all about “show me,” not pretty reports and graphs. What we mean here is that, as an action-oriented platform, they want us to integrate easily into their continuous integration and delivery processes through a fully functional API, while also providing a simple UI for the non-techies to ensure self-service. At the infrastructure layer, it’s about what you can do with and through DevOps tools like Slack, Atlassian, and Jenkins, and at the enterprise level with SSO providers such as Ping, Okta, and Microsoft – repeating themes over and over again, regardless of the cloud provider.

Observation Number Six: looking ahead, it’s about stacks. As the idea of microservices continues to take hold, more developers are utilizing multiple instances or services to deploy a single application or environment. In years past, the bottleneck for implementing such groups of servers or databases was deployment time, but modern configuration management tools (like Chef, Puppet, and Ansible) have made this a common strategy by turning infrastructure into code. However, managing these environments can remain challenging for humans. ParkMyCloud already allows logical groupings of instances for one-click scheduling, but we’re planning on taking this a step further by integrating with deployment solutions to really tie it all together.

Obviously, the trends in cloud computing we touch on are a mix of macro and micro, and are generally viewed through a cost control lens, but they do provide insights into the day-to-day of what we see and hear from the folks who operate and use cloud, from multinational enterprises to startups. By tracking these trends over time, we can help you keep on top of cloud best practices to optimize your IT budget, and we look forward to what the next two years of cloud computing will bring us.
