
5 Favorite AWS Training Resources

When it comes to AWS training resources, there’s no shortage of information out there. Considering the wide range of videos, tutorials, blogs, and more, it’s hard to know where to look or how to begin. Finding the best resource depends on your learning style, your needs for AWS, and getting the most up-to-date information available. With this in mind, we came up with our 5 favorite AWS training resources, sure to give you the tools you need to learn AWS:

1. AWS Self-Paced Labs

What better way to learn than at your own pace? AWS self-paced labs give you hands-on learning in a live AWS environment, with AWS services and actual scenarios you would encounter in the cloud. Among the recommended labs you’ll find an Introduction to Amazon Elastic Compute Cloud (EC2) and, for more advanced users, a lab on Creating Amazon EC2 Instances with Microsoft Windows. If you’re up for an adventure, enroll in a learning quest and immerse yourself in a collection of labs that will help you master any AWS scenario at your own pace. Once completed, you will earn a badge that you can boast on your resume, LinkedIn, website, etc.

2. AWS Free Tier

Sometimes the best way to learn something is by jumping right in. With the AWS Free Tier, you can try AWS services for free. This is a great way to test out AWS for your business, or, for the developers out there, to try services like AWS CodePipeline, AWS Data Pipeline, and more. While you still get a hands-on opportunity to learn a number of AWS services, the only downside is that there are certain usage limits. You can track your usage with a billing alarm to avoid unwanted charges, or you can try ParkMyCloud and park your instances when they’re not in use to get the most out of your free tier experience. In fact, ParkMyCloud started its journey by using AWS’ free tier – we eat our own dog food!
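
For example, here’s a minimal sketch of creating such a billing alarm with Python and boto3 (it assumes billing alerts are enabled on the account; the $10 threshold and SNS topic ARN are illustrative placeholders):

```python
import boto3

# Billing metrics live in us-east-1 regardless of where your resources run.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="free-tier-spend-alert",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,               # evaluate every 6 hours
    EvaluationPeriods=1,
    Threshold=10.0,             # illustrative: alert once charges exceed $10
    ComparisonOperator="GreaterThanThreshold",
    # Hypothetical SNS topic that emails you the alert.
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],
)
```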

3. AWS Documentation

AWS Documentation is like a virtual encyclopedia of tools, terms, training, and everything AWS. You’ll find white papers, case studies, tutorials, cloud computing basics, and so much more. This resource is a one-stop-shop for all of your AWS documentation needs, whether you’re a beginner or advanced user. No matter where you are in your AWS training journey, AWS documentation is always a useful reference and certainly deserves a spot in your bookmarks.

4. YouTube

So far, we’ve gone straight to the source for 3 out of 5 of our favorite AWS training resources. Amazon really does a great job of providing hands-on training, tutorials, and documentation for users with a range of experience. However, YouTube opens up a whole new world of video training that includes contributions from not only Amazon, but other great resources as well. Besides the obvious Amazon Web Services channel, there are also popular and highly rated videos by Edureka, Simplilearn, Eli the Computer Guy, and more.

5. Bloggers

As cloud technology usage continues to expand and evolve, blogs are a great way to stay up to speed with AWS and the world of cloud computing. Of course, in addition to labs, a free tier, extensive documentation, and their own YouTube channel, AWS also has their own blog. Since AWS actually has a number of blogs that vary by region and technology, we recommend that you start by following Jeff Barr – Chief Evangelist at Amazon Web Services and primary contributor. Edureka, mentioned above among our recommended YouTube channels, also has a blog that covers plenty of AWS topics. In addition, the CloudThat blog is an excellent resource for AWS and all things cloud, and was co-founded by Bhavesh Goswami – a former member of the AWS product development team.

There’s plenty of information out there when it comes to AWS training resources. We picked our 5 favorite resources for their reliability, quality, and range of information. Whether you’re new to AWS or consider yourself an expert, these resources are sure to help you find what you’re looking for.


ParkMyCloud Appoints Veteran Tech Leader Bill Supernor as CTO

Cloud Cost Optimization Platform Vendor Gears Up for Rapid Expansion with New Hire

October 16, 2017 (Dulles, VA) – ParkMyCloud, the leading enterprise platform for continuous cost control in public cloud, announced today that Bill Supernor has joined the team as Chief Technology Officer (CTO). His more than 20 years of leadership experience in engineering and management have included scaling teams and managing enterprise-grade software products such as KoolSpan’s TrustCall secure call and messaging system.

At ParkMyCloud, Supernor will be responsible for product development and software engineering as ParkMyCloud expands its platform – which currently helps enterprises like McDonald’s, Unilever, and Fox control costs on Amazon Web Services (AWS), Microsoft Azure, and Google Cloud – to more clouds, and continues to add more services and integrations.

“Bill’s experience in the software industry will be a boon to us as we scale and grow the business,” said ParkMyCloud CEO Jay Chapel. “His years in the software and IT space will be a huge advantage as we grow our engineering team and continue to innovate upon the cost control platform that cloud users need.”

“This is a fast-moving company in a really hot space,” said Supernor. “I’m excited to be working with great people who have passion about what they do.”

Prior to joining ParkMyCloud, Supernor was the CTO of KoolSpan, where he led the development of a globally deployed secure voice communication system for smartphones. He has also served in engineering leadership positions at Trust Digital, Cognio, Symantec, and McAfee/Network Associates, and as an officer in the United States Navy.

About ParkMyCloud

ParkMyCloud is a SaaS platform that helps enterprises optimize their public cloud spend by automatically reducing resource waste — think “Nest for the cloud”. ParkMyCloud has helped customers such as McDonald’s, Capital One, Unilever, Fox, and Sage Software dramatically cut their cloud bills by up to 65%, delivering millions of dollars in savings on Amazon Web Services, Microsoft Azure, and Google Cloud Platform. For more information, visit http://www.parkmycloud.com.

Contact

Katy Stalcup

kstalcup@parkmycloud.com

(571) 334-3291


3 Enterprise Cloud Management Challenges You Should Be Thinking About

Enterprise cloud management is a top priority. As the shift towards multi-cloud environments continues, so does the need to consider the potential challenges. Whether you already use the public cloud or are considering making the switch, you probably want to know what the risks are. Here are three you should be thinking about.

1. Multi-Cloud Environments

As the ParkMyCloud platform supports AWS, Azure, and Google, we’ve noticed that multi-cloud strategies are becoming increasingly common among enterprises. There are a number of reasons why it would be beneficial to utilize more than one cloud provider. We have discussed risk mitigation as a common reason, along with price protection and workload optimization. As multi-cloud strategies become more popular, the advantages are clear. However, every strategy comes with its challenges, and it’s important for CIOs to be aware of the associated risks.

Without the use of cloud management tools, multi-cloud management is complex and sometimes difficult to navigate. Different cloud providers have different price models, product features, APIs, and terminology. Compliance requirements are also a factor that must be considered when dealing with multiple providers. Meeting and maintaining requirements for one cloud provider is complicated enough, let alone multiple. And don’t forget you need a single pane to view your multi-cloud infrastructure.

2. Cost Control

Cost control is a top priority among cloud computing trends. Enterprise Management Associates (EMA) conducted a research study and identified key reasons why there is a need for cloud cost control; among them were inefficient use of cloud resources, unpredictable billing, and contractual obligations or technological dependencies.

Managing your cloud environment and controlling costs requires a great deal of time and strategy, taking away from the initiatives your enterprise really needs to focus on. The good news is that we offer a solution to cost control that will save 65% or more on your monthly cloud bills – simply by parking your idle cloud resources. ParkMyCloud was one of the top three vendors recommended by EMA as a Rapid ROI Utility. If you’re interested in seeing why, we offer a 14-day free trial.

3. Security & Governance

In discussing a multi-cloud strategy and its challenges, the bigger picture also includes security and governance. As we have mentioned, a multi-cloud environment is complex, and maintaining vigilance over it requires native or third-party tools. Aside from legal compliance requirements specific to your company’s industry, the cloud also comes with standard security issues and, of course, the possibility of breaches. In this vein, the customers we talk to often worry about too many users being granted console access to create and terminate cloud resources, which can lead to waste. A key here is limiting user access based on roles, or Role-Based Access Control (RBAC). At ParkMyCloud we recognize that visibility and control are important in today’s complex cloud world. That’s why, in designing our platform, we give the sysadmin the ability to delegate access based on a user’s role and to authenticate users via SSO using SAML integration. This approach brings security benefits without losing the appeal of a multi-cloud strategy.

Our Solution

Enterprise cloud management is an inevitable priority as the shift towards multi-cloud environments continues. Multiple cloud services add complexity to the challenges of IT and cloud management. Cost control is time consuming and needs to be automated and monitored constantly. Security and governance are a must, and it’s necessary to ensure that users and resources are optimally governed. As the need for cloud management continues to grow, cloud automation tools like ParkMyCloud provide a means to effectively manage cloud resources, minimize challenges, and save you money.


How to Get the Cheapest Cloud Computing

Are you looking for the cheapest cloud computing available? Depending on your current situation, there are a few ways you might find the least expensive cloud offering that fits your needs.

If you don’t currently use the public cloud, or if you’re willing to have infrastructure in multiple clouds, you’re probably looking for the cheapest cloud provider. If you have existing infrastructure, there are a few approaches you can take to minimize costs and ensure they don’t spiral out of control.

Find the Cloud Provider that Offers the Cheapest Cloud Computing

There are a variety of small cloud providers that attempt to compete by dropping their prices. If you work for a small business and prefer a no-frills experience, perhaps one of these is right for you.

However, there’s a reason that the “big three” cloud providers – Amazon Web Services (AWS), Microsoft Azure, and Google Cloud – dominate the market. They offer a wide range of product lines and are continually innovating. They have a low frequency of outages, and their scale has produced straightforward onboarding processes and plenty of documentation.

Whatever provider you decide on, ensure that you’ll have access to all the services you need – does it offer the compute, storage, and database services you require? How good is the customer support?

For more information about the three major providers’ pricing, please see this whitepaper on AWS vs. Google Cloud Pricing and this article comparing AWS vs. Azure pricing.

Locked In? How to Get the Cheapest Cloud Computing from Your Current Provider

Of course, if your organization is already locked into a cloud computing provider, comparing providers won’t do you much good. Here’s a short checklist of things you should do to ensure you’re getting the cheapest cloud computing possible from your current provider:

  • Use Reserved Instances for production – Reserved instances can save money – as long as you use them the right way. More here. (This article is about AWS RIs, but similar principles apply to Azure’s RIs and Google’s Committed Use discounts.)
  • Only pay for what you actually need – there are a few common ways that users inadvertently waste money, such as using larger instances than they need, and running development/testing instances 24/7 rather than only when they’re needed. (Here at ParkMyCloud, we’re all about reducing this waste – try it out. A sketch of how you might spot such instances follows this list.)
  • Ask – it never hurts to contact your provider and ask if there’s anything you could be doing to get a cheaper price. If you use Microsoft Azure, you may want to sign up for an Enterprise License Agreement. Or maybe you qualify for AWS startup credits.
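
To illustrate the second point above, here’s a short Python/boto3 sketch that lists running instances tagged as development or test – prime candidates for an off-hours schedule. The `Environment` tag key and its values are an assumed convention; substitute whatever your team uses:

```python
import boto3

ec2 = boto3.client("ec2")

# Find running instances tagged as dev/test -- likely candidates for
# being stopped outside working hours. The tag convention here is an
# assumption, not a standard.
pages = ec2.get_paginator("describe_instances").paginate(
    Filters=[
        {"Name": "instance-state-name", "Values": ["running"]},
        {"Name": "tag:Environment", "Values": ["dev", "test"]},
    ]
)

for page in pages:
    for reservation in page["Reservations"]:
        for instance in reservation["Instances"]:
            print(instance["InstanceId"], instance["InstanceType"])
```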

Get Credit for Your Efforts

While finding the cheapest cloud computing is, of course, beneficial to your organization, there’s no need to let your work on spending reduction go unnoticed. Make sure that you track your organization’s spending and show your team where you are reducing spend.

We’ve recently made this task easier than ever for ParkMyCloud users. Now, you can not only create and customize reports of your cloud spending and savings, but you can also schedule these reports to be emailed out. Users are already putting this to work by having savings reports automatically emailed to their bosses and department heads, to ensure that leadership is aware of the cost savings gained… and so users can get credit for their efforts.

Why We Love the AWS IoT Button

The AWS IoT button is a simple wi-fi device with endless possibilities. If you’re an Amazon Prime member, you’re probably familiar with the hardware that inspired the IoT button – the Amazon Dash button. The wi-fi connected Dash Button can be used to reorder your favorite Amazon products automatically, making impulse buys with the click of a button. The Dash Button makes ordering fast and easy, products are readily accessible, and you’ll never run out of toilet paper again. The AWS IoT button can do that and so much more. A lot more.  

Beyond the singular function of making Amazon Prime purchases, the IoT button can be used to control just about anything that uses the internet. Based on the Amazon Dash Button hardware, the IoT button is programmable, easy to configure, and can be integrated with virtually any internet-connected device. It was designed for developers to help them get acquainted with Amazon Web Services like AWS IoT, AWS Lambda, Amazon DynamoDB, and more, without the need to write device-specific code.

How to Use the AWS IoT button

  • Configure the button to connect to your wi-fi network
  • Provision the button with an AWS IoT certificate and private key
  • From there, the button connects to AWS IoT and publishes a message on a topic when clicked
  • Use the rules engine to set up a rule – configure single-click, double-click, or long-press events to be routed to any AWS service
  • Configure the button to send notifications through Amazon SNS, store clicks in an Amazon DynamoDB table, or code custom logic in an AWS Lambda function
  • Configure the function to connect to third-party services or AWS IoT-powered devices
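
To make that flow concrete, here’s a minimal sketch of an AWS Lambda handler (Python) for button clicks. The serialNumber and clickType fields follow the button’s documented default payload; the SNS topic ARN and the actions mapped to each click type are hypothetical examples:

```python
import json
import boto3

sns = boto3.client("sns")

# Hypothetical SNS topic that fans out button-press notifications.
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:iot-button-clicks"

def lambda_handler(event, context):
    # The button publishes its serial number, battery voltage, and the
    # click type (SINGLE, DOUBLE, or LONG) with every press.
    click = event.get("clickType", "SINGLE")
    actions = {
        "SINGLE": "Count or track an item",
        "DOUBLE": "Alert someone",
        "LONG": "Start or stop something",
    }
    sns.publish(
        TopicArn=TOPIC_ARN,
        Subject=f"IoT button {event.get('serialNumber', 'unknown')}",
        Message=f"{click} press received: {actions.get(click, 'unknown')}",
    )
    return {"statusCode": 200, "body": json.dumps({"handled": click})}
```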

 

 

What You Can Do with It

The AWS IoT button can be programmed to trigger a variety of actions. With incredible potential for what you can do, it’s hard to know where to begin. Rest assured, Amazon has a few suggestions:

  • Count or track items
  • Call or alert someone
  • Start or stop something
  • Order devices
  • Remotely control home appliances

With this in mind, creative developers have been putting the AWS IoT button to work in all sorts of inventive ways – from tracking items to remotely controlling home appliances.

A Challenge

The AWS IoT button opens the door for developers to create an unlimited number of functions. You can use it to do just about anything on the internet – including parking your instances.

So here’s our challenge: create a function to park your instances (or perhaps to snooze your parking schedules) using the AWS IoT button in combination with ParkMyCloud. If you do, tell us about it and we’ll send you some ParkMyCloud swag.


Managing Microsoft Azure VMs with ParkMyCloud

Microsoft has made it easy for companies to get started using Microsoft Azure VMs for development and beyond. However, as an organization’s usage grows past a few servers, managing both costs and users becomes necessary – and can become complex quickly. ParkMyCloud simplifies management of Microsoft Azure VMs by giving you options to create teams of users, group instances together, and schedule resources easily.

Consider the case of a large Australian financial institution that uses Microsoft Azure as its sole cloud provider. They currently have 125 VMs, costing them over $100k on their monthly cloud bill with Microsoft. Compute accounts for about 95% of their total Azure spend.

Using one Azure account for the entire organization, they chose to split it into multiple divisions, such as DEV, UAT, Prod, and DR. These divisions are then split further into multiple applications that run within each division. In order for them to use ParkMyCloud to best optimize their cloud costs, they created teams of users (one per division). They gave each team permissions in order to allow shutdown and startup of individual applications/VMs. A few select admin users have the ability to control all VMs, regardless of where the applications are placed.

The organization also required specific startup/shutdown ordering for their servers. How would ParkMyCloud handle this need? This looks like a perfect use case for logical groups in ParkMyCloud.

For detailed instructions on how to manage logical groups with ParkMyCloud, see our user guide.

Putting this into context, let’s say that you have a DB and a web server grouped together. You want the DB to start first and stop last, therefore you would need to set the DB to have a start delay of 0 and a stop delay of 5. For the web server, you would set a start delay of 5 and stop delay of 0.

Of course, you could also manage logical groups of Microsoft Azure VMs with tags, scripts, and Azure automation. However, we know firsthand that the alternative solution involves complexities and requires constant upkeep – and who wants that?

ParkMyCloud offers the advantage of not only cutting your cloud costs, but also making cloud management simpler, easier, and more effective. To experience all the great benefits of our platform, start a free trial today!


7 AWS Security Best Practices with ParkMyCloud

Besides cost control, one of the biggest concerns from IT administrators is utilizing AWS security best practices to keep their infrastructure safe.  While there are some great tools that specialize in cloud and information security, there are some security benefits of ParkMyCloud that are not often considered when hardening a cloud infrastructure.

1. Keep Instances Off When Not In Use

Scheduling your instances to be turned off on nights and weekends when you aren’t using them saves you a ton of money on your cloud bill, but  also provides security and protection.  Leaving servers and databases on 24/7 is just asking for someone to try to break in and connect to servers within your infrastructure, especially during off-hours when you don’t have as many IT staff keeping an eye on things.  By aggressively scheduling your resources to be off as much as possible, you minimize the opportunity for outside attacks on those servers.

2. User Governance

Your users are trustworthy and need to access lots of servers to do their job, but why give them more access than necessary?  Limiting what servers, databases, and auto scaling groups everyone can see to only what they need keeps accidents from happening and limits mistakes.  ParkMyCloud lets you separate users into teams, with designated Team Leads to manage the individual Team Members and limits their control to just start / stop.

3. Single Sign On

In addition to governing user access to resources, ParkMyCloud integrates with all major SSO providers for SAML authentication for your users.  This includes Okta, Ping Identity, OneLogin, Centrify, Azure AD, ADFS, and Google Apps.  By using one of these providers, you can keep identity management centralized and offer multi-factor authentication through those SAML connections.

4. Audit Logs and Notifications

Every user action in ParkMyCloud is tracked in an Audit Log that is available to super admins.  These audit logs can also be downloaded as a CSV if you want to import them into something like Splunk or Logstash for log management.  Audit logs can help you see when schedules are snoozed or changed, policies are updated, or teams are created or changed.

In addition, those audit log entries can be sent as notifications to Slack channels, email addresses, or through webhooks to other tools.  This lets you keep an eye on either specific teams or the entire organization within ParkMyCloud.

5. Minimal Connection Permissions

ParkMyCloud connects to AWS through an IAM Role (preferred) or an IAM User.  The AWS policy that is required uses the bare minimum of necessary actions, which boils down to Describe, Start, and Stop for each resource type (EC2, ASG, and RDS). This means you don’t have to worry about ParkMyCloud doing something to your AWS account that you don’t intend.  For Azure connections, ParkMyCloud requires a similarly-limited Limited Access Role, and the connection to Google Cloud requires a limited Service Account.
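
For illustration, here’s roughly what such a least-privilege policy looks like, created with Python and boto3. The action list below is an approximation based on the description above – the authoritative list is in ParkMyCloud’s setup documentation:

```python
import json
import boto3

# Approximate least-privilege policy: describe, start, and stop only,
# for EC2 instances, Auto Scaling Groups, and RDS databases.
POLICY = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": [
            "ec2:DescribeInstances",
            "ec2:StartInstances",
            "ec2:StopInstances",
            "autoscaling:DescribeAutoScalingGroups",
            "autoscaling:UpdateAutoScalingGroup",
            "rds:DescribeDBInstances",
            "rds:StartDBInstance",
            "rds:StopDBInstance",
        ],
        "Resource": "*",
    }],
}

iam = boto3.client("iam")
iam.create_policy(
    PolicyName="parking-least-privilege",   # illustrative name
    PolicyDocument=json.dumps(POLICY),
)
```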

6. Restrict Scheduling Based on Names or Tags

The ParkMyCloud policy engine is a powerful way to automate your resource scheduling and team management, but it can also be used to prevent schedules from being applied to certain systems. For instance, if you have a prod database that you want to keep up 24/7, you can use a policy to never let any user apply a schedule (even if they wanted to).  These policies can be applied based on tags, naming conventions, AWS regions, or account names.

7. Full Cloud Visibility

One great benefit of ParkMyCloud is the ability to see across all of your cloud providers (AWS, Microsoft Azure, and Google Cloud), cloud accounts, and regions within a cloud. This visibility not only provides management benefits, but helps with security by keeping all resources in one list. It surfaces rogue instances running in regions you don’t normally look at, and can help you identify resources that don’t need to be running – or that could be terminated entirely.

Conclusion

As you continue to strive to follow AWS security best practices, consider adding ParkMyCloud to your security toolkit.  While you’re saving money for your team, you can also get these 7 benefits to help secure your infrastructure and sleep better at night.  Start a free trial of ParkMyCloud today to start reaping the benefits!


Reduce RDS Costs with ParkMyCloud

Thanks to the ability to shut down instances on a start/stop schedule, users of Amazon’s database service can finally save time and reduce RDS costs. Until June 2017, the only way to accomplish this feat was by copying and deleting instances, running the risk of losing transaction logs and automatic backups. While Amazon’s start/stop capability is useful and provides a level of cost savings, it also comes with issues of its own.

For one, the start/stop capability is not foolproof. The process for stopping and starting non-production RDS instances is manual, relying on the user to create and consistently manage the schedule. Being able to switch instances off when they’re not in use and restart them when access is needed again is a helpful advantage, but doing it by hand leaves room for human error. Complicating things further, RDS instances that have been shut down will automatically be restarted after seven days, again relying on the user to switch those instances back off if they’re not needed at the time.

Why Scripting is not the Best Answer

One way of minimizing the potential for error is by automating the stop/start schedule yourself by writing your own scripts. While that could work, you would need to consider the number of non-production instances deployed on AWS RDS, and plan for a schedule that would allow developers to have access when needed – which could very well be at varying times throughout the day. All factors considered, writing and maintaining scheduling scripts takes extra time and costs money as well. Ultimately, setting up and maintaining your own schedule could increase your cloud spend more than it reduces RDS costs.
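
For a sense of what that DIY route involves, here’s a bare-bones Python/boto3 sketch of the kind of script you’d end up maintaining. The “dev-” naming convention is an assumption, and note everything it doesn’t handle: the seven-day automatic restart, per-team schedules, holidays, and access control:

```python
import boto3

rds = boto3.client("rds")

# Naive nightly shutdown of non-production databases. The naming
# convention ("dev-" prefix) is an assumption; every exception to it
# becomes another case this script has to special-case and maintain.
for db in rds.describe_db_instances()["DBInstances"]:
    name = db["DBInstanceIdentifier"]
    if name.startswith("dev-") and db["DBInstanceStatus"] == "available":
        print(f"Stopping {name}")
        rds.stop_db_instance(DBInstanceIdentifier=name)
```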

When you start thinking about the cost of paying developers, the number of scripts that would have to be written, and the ongoing maintenance required, buying into an automated scheduling process is a no-brainer.

How ParkMyCloud Reduces RDS Costs

Automated Scheduling

ParkMyCloud saves you time and money by automating the scheduling process of stopping and starting AWS RDS instances (in addition to Microsoft Azure VMs and Google Cloud Compute instances, but that’s another post). At the same time, you get total visibility and full autonomy over your account.

The process is simple. With you as the account manager, ParkMyCloud conducts a discovery of all the company accounts, and determines which instances are most suitable for parking. From there, you have the option of implementing company-wide schedules for non-production instances, or giving each development team the ability to create schedules of their own.

Flexible Parking

ParkMyCloud takes saving on RDS costs to a whole new level with parking schedules. Different schedules can be applied to different instances, or they can be parked permanently and put on “snooze” when access is needed. Amazon’s seven-day automatic restart of switched off instances is a non-issue with our platform, and snoozed instances can be re-parked when access is no longer needed, so there’s no more relying on the user to do it manually.

For the most part, we find that companies will want to park their non-production instances outside the normal working hours of Monday to Friday, let’s say from 8:00am to 8:00pm. By parking your instances outside of those days and hours, ParkMyCloud can reduce your cloud spend by 65% – even more if you implement a parking schedule and use the snooze option.
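
The math is straightforward: an instance parked outside an 8:00am–8:00pm, Monday-to-Friday schedule runs only 60 of the week’s 168 hours – about 36% of the time – which works out to roughly 64% off that instance’s always-on cost.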

Valuable Insight

Because you have total visibility into the account, you can reduce RDS costs even further by having a bird’s eye view of your company’s cloud use. You’ll be able to tell which of your instances are underused, terminate them, and possibly benefit further from selecting a cheaper plan (really soon). You’ll be able to see all RDS instances across all regions and AWS accounts in one simple view. You can also view the parking schedules for each instance and see how much each schedule is saving, potentially reducing costs even further. This visibility into your account and access to information provides a great resource for budgeting and planning.

Conclusion

Amazon’s RDS start/stop capability is useful, but it’s manual. Writing your own scripts sounds helpful, but it’s actually time consuming and not fully cost-effective. ParkMyCloud automates the process while still putting you in control, reducing RDS costs and saving you time and money.

See the benefits of ParkMyCloud for yourself by taking advantage of our two-week free trial. Test our cloud cost control platform in your own environment, without any need for a credit card or signed contract, and see why our simple, cost-effective tool is the key to reducing RDS costs. We offer a variety of competitive pricing plans to choose from, or a limited-function version that you can continue to use for free after the trial ends.

To start your free trial today, sign up here.


Cloud Optimization Tools = Cloud Cost Control (Part II)

A couple of weeks ago in Part 1 of this blog topic we discussed the need for cloud optimization tools to help enterprises with the problem of cloud cost control. Amazon Web Services (AWS) even goes as far as suggesting the following simple steps to control their costs (which can also be applied  to Microsoft Azure and Google Cloud Platform, but of course with slightly different terminology):

    1. Right-size your services to meet capacity needs at the lowest cost;
    2. Save money when you reserve;
    3. Use the spot market;
    4. Monitor and track service usage;
    5. Use Cost Explorer to optimize savings; and
    6. Turn off idle instances (we added this one).

A variety of third-party tools and services have popped up in the market over the past few years to help with cloud cost optimization – why? Because upwards of $23B was spent on public cloud infrastructure in 2016, and spending continues to grow at a rate of 40% per year. Furthermore, depending on who you talk to, roughly 25% of public cloud spend is wasted or not optimized — that’s a huge market! If left unchecked, this waste problem is projected to triple to over $20B by 2020 – enter the vultures (full disclosure, we are also a vulture, but the nice kind). Most of these tools are lumped under the Cloud Management category, which includes subcategories like Cost Visibility and Governance, Cost Optimization, and Cost Control vendors – we are a cost control vendor, to be sure.

Why do you, an enterprise, care? Because there are real and subtle differences between the tools that fit into these categories, so your use case should dictate where you go for what – and that’s what I am trying to help you with. So, why am I a credible source to write about this (and not just because ParkMyCloud is the best thing since sliced bread)?

Well, yesterday we had a demo with a FinTech company in California that was interested in Cost Control, or thought they were. It turns out that what they were actually interested in was Cost Visibility and Reporting; the folks we talked to were in Engineering Finance, so their concerns were primarily with billing metrics, business unit chargeback for cloud usage, RI management, and dials and widgets to view all stuff AWS and GCP billing related. Instead of trying to force a square peg into a round hole, we passed them on to a company in this space that’s better suited to solve their immediate needs. In return, the Finance folks are going to put us in touch with the FinTech Cloud Ops folks who care about automating their cloud cost control as part of their DevOps processes.

This type of situation happens more often than not. We have a lot of enterprise customers using ParkMyCloud along with CloudHealth, CloudCheckr, Cloudability, and Cloudyn because, in general, they provide Cost Visibility and Governance, and we provide actionable, automated Cost Control.

As this is our blog, and my view from the street – we have 200+ customers now using ParkMyCloud, and we demo to 5-10 enterprises per week. Based on a couple of generic customer use cases where we have strong familiarity, here’s what you need to know to stay ahead of the game:

  • Cost Visibility and Governance: CloudHealth, CloudCheckr, Cloudability and Cloudyn (now owned by Microsoft)
  • Reserved Instance (RI) management – all of the above
  • Spot Instance management – SpotInst
  • Monitor and Track Usage: CloudHealth, CloudCheckr, Cloudability and Cloudyn
  • Turn off (park) Idle Resources – ParkMyCloud, Skeddly, GorillaStack, BotMetric
  • Automate Cost Control as part of your DevOps Process: ParkMyCloud
  • Govern User Access to Cloud Console for Start/Stop: ParkMyCloud
  • Integrate with Single Sign-On (SSO) for Federated User Access: ParkMyCloud

To summarize, cloud cost control is important, and there are many cloud optimization tools available to assist with visibility, governance, management, and control of your single or multi-cloud environments. However, there are very few tools which allow you to set up automated actions leveraging your existing enterprise tools like Ping, Okta, Atlassian, Jenkins, and Slack.  Make sure you are not only focusing on cost visibility and recommendations, but also on action-oriented platforms to really get the best bang for your buck.


How to Optimize Cloud Spend with ParkMyCloud

The focus on how to optimize cloud spend is now as relentless as the initial surge to migrate workloads from ‘on-prem’ to public cloud. A lot of this focus, and the resulting discussion, concerns options related to the use of Reserved Instances (RIs), Spot Instances, or other pre-pay options. The pay-up-front discount plan makes sense when you have some degree of visibility into future needs, and when there is no ‘turn-it-off’ option – which we here at ParkMyCloud call “parking”.

When it comes to the ability to ‘park’ instances, we like to divide the world into two halves. There are Production Systems, which typically need to be running 24/7/365, and then there are Non-Production Systems, which at least in theory have the potential to be parked when not in use. The former are typically your end-customer or enterprise-facing systems, which need to be online and available at all times; in this case, RIs typically make sense. When it comes to those non-production systems, that’s where a tool such as ParkMyCloud comes into play. Here you have an opportunity to review the usage patterns and needs of your organization and optimize cloud spend accordingly. For example, you may well discover that your QA team never works on weekends, so you can turn their EC2 instances off on a Friday night and turn them back on first thing Monday morning. Elsewhere, you might find workloads that can be turned off in the small hours, or even workloads that can be left off for extended periods.

Our customers typically like to view both their production and non-production systems in our simple dashboard. Here they can view all their public cloud infrastructure and  simply lock those production systems which cannot be touched. Once within the dashboard the different non-production workloads can then be reviewed and either centrally managed by an admin or have their management delegated to individual business units or teams.

Based on the customer usage we track, we see these non-production systems typically accounting for about 50% of what companies spend on compute (i.e., instances/VMs). We then see those who aggressively manage these non-production instances saving up to 65% of their cost, which makes a large dent in their overall cloud bill.
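
Put those two numbers together and the overall impact is clear: if non-production systems are 50% of your compute bill and you cut their cost by 65%, that’s roughly a 32% reduction in total compute spend.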

So, when you are thinking about how to optimize cloud spend, there are a lot more opportunities than just committing to purchase in advance, especially for your non-production workloads.


Shutting Down RDS Instances in AWS – Introducing the Start/Stop Scheduler

Users of Amazon’s database service have been clamoring for a solution to shutting down RDS instances with an automatic schedule ever since 2009, when the PaaS service was first released.  Once Amazon announced the ability to power off and on RDS instances earlier this year, AWS users started planning out ways to schedule these instances using scripts or home-grown tools.  However, users of ParkMyCloud were happy to find out that support for RDS scheduling was immediately available in the platform.  If you were planning on writing your own scripts for RDS parking, let’s take a look at some of the additional features that ParkMyCloud could provide for you.

Schedule EC2 and ASG in addition to RDS

Very few AWS users are utilizing RDS databases without simultaneously running EC2 instances as compute resources.  This means that writing your own scheduling scripts for shutting down RDS instances would involve scheduling EC2 instances as well.

ParkMyCloud has support for parking EC2 resources, RDS databases, and Auto Scaling Groups all from the same interface, so it’s easy to apply on/off schedules to all of your cloud resources.

Logical Groups to tie instances together

Let’s say you have a QA environment with a couple of RDS databases and multiple EC2 instances running a specific version of your software. With custom scripts, you have to implement logic that will shut down and start up all of those instances together, and potentially in a specific order.  ParkMyCloud allows users to create Logical Groups, which show up as one logical entity in the interface but schedule multiple instances behind it.  You can also set start or stop delays within the Logical Group to customize the order, so if databases need to be started first and stopped last, you can set that level of granularity.

Govern user access to databases

If your AWS account includes RDS databases that relate to dev, QA, staging, production, test, and UAT, then you’ll want to allow different users to access different databases based on their role or current project.  Implementing user governance in your own scripts can be a huge hassle, but ParkMyCloud makes it easy to split your user base into teams.  Users can be part of multiple teams if necessary, but by default they will only see the RDS databases that are in the teams they have access to.

High visibility into all AWS accounts and regions

Scripting your own schedules can be a challenge with a single region or account, but once you’re using RDS databases from around the world or across AWS accounts, you’re in for a challenge.  ParkMyCloud pulls all resources from all accounts and all AWS regions into one pane of glass, so it’s easy to apply schedules and keep an eye on all your RDS databases.

RDS DevOps automation

It can be a challenge to integrate your own custom scripts with your DevOps processes.  With ParkMyCloud, you have multiple options for automation.  With the Policy Engine, RDS instances can have schedules applied automatically based on tags, names, or locations.  Also, the ParkMyCloud API makes it easy to override schedules and toggle instances from your Slack channels, CI/CD tools, load-testing apps, and any other automated processes that might need a database instance powered on for a brief time.

Conclusion

Shutting down RDS instances is a huge money-saver.  Anyone who is looking to implement their own enterprise-grade AWS RDS start/stop scheduler is going to run into many challenges along the way.  Luckily, ParkMyCloud is on top of things and has implemented RDS parking alongside the other robust feature set that you already used for cost savings.  Sign up for a free trial today to supercharge your RDS database scheduling!


Interview: Hybrid Events Group + ParkMyCloud to Automate EC2 Instance Scheduling and Optimize AWS Infrastructure

We talked with Jedidiah Hurt, DevOps and technical lead at Hybrid Events Group, about how his company is using ParkMyCloud to automate EC2 instance scheduling, saving hours of development work. Below is a transcript of our conversation.

Appreciate you taking the time to speak with us today. Can you start off by giving us some background on your role, what Hybrid Events Group does, and why you got into doing what you do?

I do freelance work for Hybrid Events Group and am now moving into the role of technical lead. We had a big client we were working with this spring and we needed to fire up several EC2 instances. We were doing live broadcasting events across the U.S., which is what the company specializes in – events A/V services. So we do live webcasting, and we can do CapturePro, another service we offer where we basically just show up to any event that someone would want to record, which usually is workshops and keynotes at tech conferences, and we record on video and also capture the presenter’s presentation in video in real time.

ParkMyCloud, what we used it for, was just to automate EC2 instances for doing live broadcasts.

Was there any reason you chose AWS over others like Azure or Google Cloud, out of curiosity?

I just had the most experience with AWS; I was using AWS before Azure and Google Cloud existed. So I haven’t, or I can’t say that I’ve actually really given much of a trial to Azure or Google Cloud. I might have to give them a look here sometime in the future.

Do you use any PaaS services in AWS, or do you focus on compute databases and storage?

Yeah, not a whole lot right now. Just your basic S3, EC2, and I think we are probably going to move into elastic load balancing and auto scaling groups within the next few months or so as we build out our platform.

Do you use Agile development process to build out your platform and provide continuous delivery?

So, I am an agile practitioner, but we are just kind of brown fielding the platform. We are in the architecture stage right now, so we will be doing all of that, as far as continuous deployment, and hopefully continuous integration where we actually have some automated testing.

As far as tools, I’m the only developer on the team right now, so we won’t really have a full Agile or be fully into Agile. We haven’t got boards and sprints and planning, weekly meetings, and all those things, because it’s just me. But we integrate portions of it, as far as having stakeholders kind of figuring out what our minimum viable product is.

What drove you to look for something like ParkMyCloud, and how did you come across it?

ParkMyCloud enabled us to automate a process that we were going to do manually, or that I was going to have to write scripts for and maintain. I think initially I was looking into just using the AWS CLI, and some other kind of test scheduler, to bring up the instances and then turn them off after our daily broadcast session was over. I did a little bit of googling to see if there were any time-based solutions available and found ParkMyCloud, and this platform does exactly what’s needed and more.

And you are using the free tier ParkMyCloud, correct?

Yes. I don’t remember what the higher tiers offered, but this was all we really needed. We just had three or four large EC2 instances that we wanted to bring up for four to five hours a day, Monday through Friday, so it had all the core features that we currently need.

Anything that stood out for you in terms of using the product?

I’d say on the plus side I was a little bit concerned at the beginning as far as the reliability of the tool, because we would have been in big trouble with our client if ParkMyCloud failed to bring up an instance at a scheduled start time. We used it, or I guess I would say we relied on it, every day for 2 months solid, and never saw any issues as far as instances not coming up when they were supposed to, or shutting down when they were not supposed to. I was really pleased with, what I would say, the reliability of the tool – that definitely stuck out to me.

From an ROI standpoint, are you satisfied with savings and the way the information is presented to you?

Yeah, absolutely. And I think for us, the ROI wasn’t so much the big difference between having the instances running all the time, or having the instances on a schedule. The ROI was more from the fact that I didn’t have to build the utility to accomplish that because you guys already did that. So in that sense, it probably saved me many hours of development work.

Also, that kind of uneasy feeling you get when you hack up a little script and put it into production versus having a well-tested, fully-automated platform. I’m really happy that we found ParkMyCloud; it has definitely become an important part of our infrastructure management over the last few months.

As our final question, how much overhead or time did you have to spend in getting ParkMyCloud set up to manage your environment, and did you have to do anything on a daily or weekly basis to maintain it?

So, as I said, our particular use case was very basic, so it ended up being three instances that we needed to bring up for three or four hours a day and then shut them down. I’d say it took me ten to fifteen minutes to get rolling with ParkMyCloud and automate EC2 instance scheduling. And now we save thousands of dollars per month on our AWS bill.


Cloud Optimization Tools = Cloud Cost Control

Over the past couple of years we have had a lot of conversations with large and small enterprises regarding cloud management and cloud optimization tools, all of whom were looking for cost control. They wanted to reduce their bills, just like any utility you might run at home — why spend more than you need to? Amazon Web Services (AWS) actively promotes optimizing cloud infrastructure, and where they lead, others follow. AWS even goes so far as to suggest the following simple steps to control AWS costs:

  1. Right-size your services to meet capacity needs at the lowest cost;
  2. Save money when you reserve;
  3. Use the spot market;
  4. Monitor and track service usage;
  5. Use Cost Explorer to optimize savings; and
  6. Turn off idle instances (we added this one).

It’s interesting to note the use of the word ‘control’ even though the section is labeled Cost Optimization.

So where is all of this headed? It’s great that AWS offers their own solutions, but what if you want automation built into your DevOps processes, multi-cloud support (or plan to be multi-cloud), real-time reporting on these savings, and the ability to turn stuff off when you are not using it? Well, then you likely need a third-party tool to help with these tasks.

Let’s take a quick look at a description of each AWS recommendation above, and get a better understanding of each offering. Following this we will then explore if these cost optimization options can be automated as part of a continuous cost control process:

  1. Right-sizing – Both the EC2 Right Sizing solution and AWS Trusted Advisor analyze utilization of EC2 instances running during the prior two weeks. The EC2 Right Sizing solution analyzes all instances with a max CPU utilization less than 50% and determines a more cost-effective instance type for that workload, if available. (A sketch of this kind of utilization check follows this list.)
  2. Reserved Instances (RIs) – For certain services like Amazon EC2 and Amazon RDS, you can invest in reserved capacity. With RIs, you can save up to 75% over equivalent ‘on-demand’ capacity. RIs are available in three payment options: (1) all up-front, (2) partial up-front, or (3) no up-front payment.
  3. Spot – Amazon EC2 Spot instances allow you to bid on spare Amazon EC2 computing capacity. Since Spot instances are often available at a discount compared to On-Demand pricing, you can significantly reduce the cost of running your applications, grow your application’s compute capacity and throughput for the same budget, and enable new types of cloud computing applications.
  4. Monitor and Track Usage – You can use Amazon CloudWatch to collect and track metrics, monitor log files, set alarms, and automatically react to changes in your AWS resources. You can also use Amazon CloudWatch to gain system-wide visibility into resource utilization, application performance, and operational health.
  5. Cost Explorer – AWS Cost Explorer gives you the ability to analyze your costs and usage. Using a set of default reports, you can quickly get started with identifying your underlying cost drivers and usage trends. From there, you can slice and dice your data along numerous dimensions to dive deeper into your costs.
  6. Turn off Idle Instances – “Park” your cloud resources by assigning them schedules of operating hours during which they will run or be temporarily stopped – i.e., parked. Most non-production resources (dev, test, staging, and QA) can be parked at nights and on weekends, when they are not being used. On the flip side, some batch processing or load-testing applications can only run during non-business hours, so they can be shut down during the day.
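
For a taste of the analysis behind step 1, here’s a short Python/boto3 sketch that checks an instance’s peak CPU over the prior two weeks, mirroring the 50% threshold mentioned above (the instance ID is a placeholder):

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")

def max_cpu_last_two_weeks(instance_id: str) -> float:
    """Return the peak CPU utilization (%) over the prior two weeks."""
    now = datetime.now(timezone.utc)
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EC2",
        MetricName="CPUUtilization",
        Dimensions=[{"Name": "InstanceId", "Value": instance_id}],
        StartTime=now - timedelta(days=14),
        EndTime=now,
        Period=3600,            # hourly datapoints
        Statistics=["Maximum"],
    )
    points = stats["Datapoints"]
    return max(p["Maximum"] for p in points) if points else 0.0

# Instances that never exceeded 50% CPU are right-sizing candidates.
if max_cpu_last_two_weeks("i-0123456789abcdef0") < 50.0:
    print("Consider a smaller instance type")
```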

Many of these AWS solutions offer recommendations, but they do require manual effort to gain the benefits. This is why third-party solutions have seen widespread adoption; they include cloud management, cloud governance and visibility, and cloud optimization tools. In part two of this blog we will have a look at some of those tools, the benefits and approach of each, and the level of automation to be gained.


Cloud Cost Management Tool Comparison

Not only has it become apparent that public cloud is here to stay, it’s also growing faster as time goes on (by 2020, it is estimated that more than 40% of enterprise workloads will be in the cloud). IT infrastructure has changed permanently, and enterprise organizations are coming to terms with some of the side effects of this shift.  One of those side effects is the need for tools and processes (and even teams in larger organizations) dedicated to cloud cost management and cost control.  Executives from all teams within an organization want to see costs, projections, usage, savings, and quantifiable efforts to save the company money while maximizing IT throughput as enterprises shift resources to the cloud.

There’s a variety of tools to solve some of these problems, so let’s take a look at a few of the major ones.  All of the tools mentioned below support Amazon AWS, Microsoft Azure, and Google Cloud Platform.

CloudHealth

CloudHealth provides detailed analytics and reporting on your overall cloud spend, with the ability to slice-and-dice that data in a variety of ways.  Recommendations about your instances are made based on a score driven by instance utilization and cloud provider best practices. This data is collected from agents that are installed on the instances, along with cloud-level information.  Analysis and business intelligence tools for cloud spend and infrastructure utilization are featured prominently in the dashboard, with governance provided through policies driven by teams for alerts and thresholds.  Some actions can be scripted, such as deleting elastic IPs/snapshots and managing EC2 instances, but reporting and dashboards are the main focus.

Overall, the platform seems to be a popular choice for large enterprises wanting cost and governance visibility across their cloud infrastructure.  Pricing is based on a percentage of your monthly cloud spend.

CloudCheckr

CloudCheckr provides visibility into governance, security, compliance, and cost problems by running analytics and checks against logic built into the platform. It relies on non-native tools and integrations to take action on the recommendations, such as Spotinst, Ansible, or Chef.  CloudCheckr’s reports cover a wide range of topics, including inventory, utilization, security, costs, and overall best practices. The UI is simple and is likely equally well regarded by technical and non-technical users.

The platform seems to be a popular choice with small and medium-sized enterprises looking for greater overall visibility and recommendations to help optimize their use of cloud.  Given their SMB focus, customers are often provided this service through MSPs. Pricing is based on your cloud spend, but a free tier is also available.

Cloudyn

Cloudyn (recently acquired by Microsoft) is focused on providing advice and recommendations along with chargeback and showback capabilities for enterprise organizations. Cloud resources and costs can be managed through their hierarchical team structure.  Visibility, alerting, and recommendations are made in real time to assist in right-sizing instances and identifying outlying resources.  Like CloudCheckr, it relies on external tools or people to act upon recommendations, and it lacks automation.

Their platform options include supporting MSPs in the management of their end customer’s cloud environments as well as an interesting cloud benchmarking service called Cloudyndex.  Pricing for Cloudyn is also based on your monthly cloud spend.  Much of the focus seems to be on current Microsoft Azure customers and users.

ParkMyCloud

Unlike the other tools mentioned, ParkMyCloud focuses on actions and automated scheduling of resources to provide optimization and immediate ROI.  Reports and dashboards are available to show the cost savings provided by these schedules and recommendations on which instances to park.  The schedules can be manually attached to instances, or automatically assigned based on tags or naming schemes through its Policy Engine.  It pairs well with the other previously mentioned recommendation-based tools in this space to provide total cost control through both actions and reporting.

ParkMyCloud is widely used by DevOps and IT Ops in organizations from small startups to global multinationals, all of whom are keen to automate cost control by leveraging ParkMyCloud’s native API and pre-built integration with tools like Slack, Atlassian, and Jenkins.  Pricing is based on a cost per instance, with a free tier available.

Conclusion

Cloud cost management isn’t just a “should think about” item, it’s a “must have in place” item, regardless of the size of a company’s cloud bill.  Specialized tools can help you view, manage, and project your cloud costs no matter which provider you choose.  The right toolkit can supercharge your IT infrastructure, so consider a combination of some of the tools above to really get the most out of your AWS, Azure, or Google environment.


Cloud Webhooks – Notification Options for System Level Alerts to Improve your Cloud Operations

Webhooks are user-defined HTTP POST callbacks. They provide a lightweight mechanism for letting remote applications receive push notifications from a service or application, without requiring polling. In today’s IT infrastructure that includes monitoring tools, cloud providers, DevOps processes, and internally-developed applications, webhooks are a crucial way to communicate between individual systems for a cohesive service delivery. Now, in ParkMyCloud, webhooks are available for even more powerful cost control.

For example, you may want to let a monitoring solution like Datadog or New Relic know that ParkMyCloud is stopping a server for some period of time, and therefore suppress alerts to that monitoring system for the period the server will be parked – and, vice versa, re-enable the monitoring once the server is unparked (turned on). Another example would be to have ParkMyCloud post to a chatroom or dashboard when schedules have been overridden by users. We do this by sending system notifications to our cloud webhooks.
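
As a minimal sketch of the receiving end, here’s a small Flask app (Python) that could consume such a webhook and suppress or resume monitoring accordingly. The payload field names here are illustrative assumptions, not ParkMyCloud’s actual schema, and the mute/unmute functions are stand-ins for calls to your monitoring tool’s API:

```python
from flask import Flask, request

app = Flask(__name__)

def mute_monitoring(resource: str) -> None:
    # Stand-in for a monitoring API call (e.g., muting a Datadog monitor).
    print(f"Muting alerts for {resource}")

def unmute_monitoring(resource: str) -> None:
    print(f"Resuming alerts for {resource}")

@app.route("/parkmycloud-webhook", methods=["POST"])
def handle_notification():
    event = request.get_json(force=True)
    # Field names are illustrative, not ParkMyCloud's actual schema.
    action = event.get("action", "")
    resource = event.get("resource", "unknown")
    if action == "stopped":
        mute_monitoring(resource)      # suppress alerts while parked
    elif action == "started":
        unmute_monitoring(resource)    # resume alerts once unparked
    return "", 204

if __name__ == "__main__":
    app.run(port=8080)
```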

Previously, only two options were provided when configuring system-level and user notifications in ParkMyCloud: System Errors and Parking Actions. We have added three new notification options for both system-level and user notifications. Descriptions of all five options are provided below:

  • System Errors – These are errors occurring within the system itself such as discovery errors, parking errors, invalid credential permissions, etc.
  • System Maintenance and Updates – These are the notifications provided via the banner at the top of the dashboard.
  • User Actions – These are actions performed by users in ParkMyCloud such as manual resource state toggles, attachment or detachment of schedules, credential updates, etc.
  • Parking Actions – These are actions specifically related to parking such as automatic starting or stopping of resources based on defined parking schedules.
  • Policy Actions – These are actions specifically related to configured policies in ParkMyCloud such as automatic schedule attachments based on a set rule.

We have made the options more granular to give you better control over which events you see.

These options can be seen when adding or modifying a channel for system level notifications (Settings > System Level Notifications). In the image shown below, a channel is being added.

Note: For additional information regarding these options, click on the Info Icon to the right of Notify About.

The new notification options are also viewable by users who want to set up their own notifications (Username > My Profile).  These personal notifications are sent via email to the address associated with your user.  Personal notifications can be set up by any user, while Webhooks must be set up by a ParkMyCloud Admin.

After clicking on Notifications, you will see the above options and may use the checkboxes to select the notifications you want to receive. You can also set each webhook to handle a specific ParkMyCloud team, then set up multiple webhooks to handle different parts of your organization.  This offers maximum flexibility based on each team’s tools, processes, and procedures. Once finished, click on Save Changes. Any of these notifications can be sent then to your cloud webhook and even Slack to ensure ParkMyCloud is integrated into your cloud management operations.

 


Saving Money on Batch Workloads in Public Cloud


Large companies have traditionally had an impressive list of batch workloads, which run at night when people have gone home for the day. These include such things as application and database backup jobs; extraction, transform, and load (ETL) jobs; disaster recovery (DR) environment checks and updates; online analytical processing (OLAP) jobs; and monthly/quarterly billing updates or financial “close”, to name a few.

Traditionally, with on-premises data centers, these workloads have run at night so that the same hardware infrastructure that supports daytime interactive workloads can be repurposed, if you will, to run batch. This served a couple of purposes:

  • It avoided network contention between the two workloads (as both are important), allowing the interactive workloads to remain responsive.
  • It avoided data center sprawl by using the same infrastructure to run both, rather than having dedicated infrastructure for interactive and batch.

Things Are Different with Public Cloud

As companies move to the public cloud, they are no longer constrained by having to repurpose the same infrastructure. In fact, they can spin up and spin down new resources on demand in AWS, Azure or Google Cloud Platform (GCP), running both interactive and batch workloads whenever they want.

Network contention is also less of a concern, since the public cloud providers typically have plenty of bandwidth. The exception, of course, is where batch workloads use the same application interfaces or APIs to read/write data.

So, moving to public cloud offers a spectrum of possibilities, and you can use one or any combination of them:

  • You can run batch nightly using similar processes as you do in your own data centers, but on separately provisioned instances/virtual machines. This is probably the least effort for moving batch to the public cloud and the least change to your DevOps processes, and it may save you some money by having instances sized specifically for the workloads and by leveraging cloud cost savings options (e.g., reserved instances);
  • You can run batch on separately provisioned instances/virtual machines, but concurrently with existing interactive workloads. This will likely require some additional work to change your DevOps processes, but offers more freedom and benefits similar to those mentioned above. You will still need to pay attention to any application interfaces/APIs the workloads have in common; or
  • At the extreme end of the cloud adoption spectrum, you could use cloud provider platform as a service (PaaS) offerings, such as AWS Batch, Microsoft Azure Batch, or GCP Cloud Dataflow, where batch is essentially treated as a “black box”. A detailed comparison of these services is beyond the scope of this blog. In summary, though, these are fully managed services: you queue up input data in an S3 bucket, object blob, or volume, along with a job definition, appropriate environment variables, and a schedule, and you’re off to the races. These services employ containers and autoscaling/resource groups/instance groups where appropriate, with options to use less expensive compute in some cases. (For example, with AWS Batch, you have the option of using spot instances.)

The advantage of this approach is potentially faster time to implement and (maybe) lower monthly cloud costs, because the compute services run only at the times you specify. The disadvantages may include the limited degree of operational/configuration control you have; the fact that these services may be totally foreign to your existing DevOps folks/processes (i.e., there is a steep learning curve); and the risk of tying you to that specific cloud provider.
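
For a flavor of that PaaS route, here is a minimal boto3 sketch that submits a job to AWS Batch. The queue and job definition names are placeholders, and both (plus a compute environment) must already exist in your account:

```python
# Sketch: submitting a nightly batch job to AWS Batch with boto3.
# "nightly-etl-queue" and "etl-jobdef:3" are placeholder names; the compute
# environment, job queue, and job definition must already be configured.
import boto3

batch = boto3.client("batch", region_name="us-east-1")

response = batch.submit_job(
    jobName="nightly-etl",
    jobQueue="nightly-etl-queue",
    jobDefinition="etl-jobdef:3",
    containerOverrides={
        "environment": [{"name": "INPUT_BUCKET", "value": "my-input-data"}],
    },
)
print("submitted job:", response["jobId"])
```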

A Simple Alternative

If you are looking to minimize impact to your DevOps processes (that is, the first two approaches mentioned above), but still save money, then ParkMyCloud can help.

Normally, with the first two options, there are cron jobs scheduled to kick off batch jobs at the appropriate times throughout the day, but the underlying instances must be running for cron to do its thing. You could use ParkMyCloud to put parking schedules on these resources, such that they are turned OFF for most of the day, but are turned ON just in time to allow the cron jobs to execute.

We have been successfully using this approach in our own infrastructure for some time now, to control a batch server used to do database backups. This would, in fact, provide more savings than AWS reserved instances.

Let’s look at a specific example in AWS. Suppose you have an m4.large server you use to run batch jobs. Assuming Linux pricing in us-east-1, this server costs $0.10 per hour, or about $73 per month. Suppose you have configured cron to start batch jobs at midnight UTC, and that they normally complete 1 to 1½ hours later.

You could purchase a Reserved Instance for that server, paying either nothing upfront or all upfront, and your savings would be 38%-42%.

Or, you could put a ParkMyCloud schedule on the instance so that it is only ON from 11 pm to 1 am UTC, allowing enough time for the cron jobs to start and run. The savings in that case would be 87.6% (including the cost of ParkMyCloud), without the need for a one-year commitment. Depending on how many batch servers you run in your environment and their sizes, that could be some hefty savings.
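
Here is the back-of-the-envelope arithmetic behind those numbers. The roughly $3 per instance per month ParkMyCloud fee is our assumption, chosen because it reproduces the 87.6% figure:

```python
# Back-of-the-envelope check of the m4.large example (Linux, us-east-1).
HOURLY_RATE = 0.10        # $/hour for m4.large
HOURS_PER_MONTH = 730     # AWS's standard billing month
PMC_FEE = 3.00            # assumed ParkMyCloud cost per instance per month

always_on = HOURLY_RATE * HOURS_PER_MONTH             # ~$73.00/month
parked = HOURLY_RATE * 2 * (HOURS_PER_MONTH / 24)     # ON 2 hours/day -> ~$6.08
savings = (always_on - parked - PMC_FEE) / always_on
print(f"parked cost: ${parked + PMC_FEE:.2f}/month, savings: {savings:.1%}")  # ~87.6%
```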

Conclusion

Public cloud offers you a lot of freedom and some potentially attractive cost savings as you move batch workloads off premises. You are no longer constrained by having the same infrastructure serve two vastly different types of workloads: interactive and batch. The savings you can achieve by moving to public cloud vary, depending on the approach you take and the provider/service you use.

The approach you take depends on the amount of change you’re willing to absorb in your DevOps processes. If you are willing to throw caution to the wind, the cloud provider PaaS offerings for batch can be quite compelling.

If you wish to take a more cautious approach, we engineered ParkMyCloud to park servers without the need for scripting or for you to be a DevOps expert. This approach lets you achieve decent savings with minimal change to your DevOps batch processes, and without the need for Reserved Instances.


New: Cloud Savings Dashboard Now Available in ParkMyCloud

We’re happy to introduce ParkMyCloud’s new reporting dashboard! There are now easy-to-access reports that provide greater insight into cloud costs, team rosters, and more. Details on this update can be found in our support portal.


Dashboard Details

Now, when you click Reports in the left navigational panel, instead of getting the option to download a full savings report, you’ll see your ParkMyCloud reporting dashboard. This provides a quick view of cloud provider, team, and resource costs, along with information regarding your ParkMyCloud savings. At the top of the reporting dashboard, two drop-down menus are provided for selecting the report type and the time period. The default selections are Dashboard and Trailing 30 Days, which is what you see after clicking on Reports in the left navigational panel. Click on a drop-down menu to choose other available options.

Underneath the Report Type drop-down menu, you will see several options that are broken down into additional sections (Financial, Resource, Administrative, etc.). Click on an option in the menu to view that specific report within the dashboard. These reports can also be shown using a variety of time periods. Reports may be exported as a CSV or Excel file by clicking on the desired option to the right of the Report and Time Period drop-down menus.

Click on Legacy if you would prefer to use the previous reporting functionality rather than the new reporting dashboard. A pop-up window will appear for selecting the start and end dates along with the type of legacy report. As part of this change, we have also moved Audit Logs underneath Reports. To access this option, select Reports in the left navigational panel and then Audit Log.

Check It Out

If you don’t yet use ParkMyCloud, you can try it now for free. We offer a 14-day free trial of all ParkMyCloud features, after which you can choose to subscribe to a premium plan or continue parking your instances using ParkMyCloud’s free tier forever.

If you already use ParkMyCloud, you’ll instantly see a visual representation of your cloud savings just by logging in to the platform. We challenge you to use this as a scoreboard, and try to drive your monthly savings as high as you can!


Exploring AWS RDS Pricing and Features


Traditional systems administration of servers, applications, and databases used to be a little simpler when it came to choices and costs. For a long time, there was no choice other than to hook up a physical server, put on your desired OS, and install the database or application software that you needed. Eventually, you could choose to install your OS on a physical server or on a virtual machine running on a hypervisor. Then, large companies started running their own hypervisors and allowed you to rent a VM for as long as you needed it on their servers. In 2009, Amazon started offering the ability to rent databases directly, without having to worry about the underlying OS, in a platform as a service (PaaS) offering called Relational Database Service (RDS). This added another layer of complexity to your choices when managing your infrastructure. Let’s explore AWS RDS pricing a bit, and examine some of the features that come with it.

RDS Basics

AWS RDS offers the ability to directly run and manage a relational database without managing the infrastructure that the database is running on, or having to worry about patching the database software itself. Amazon currently offers RDS in the form of MySQL, Aurora (MySQL on steroids), Oracle, Microsoft SQL Server, PostgreSQL, and MariaDB. The database sizes are grouped into three categories: Standard (m4), Memory Optimized (r3), and Micro (t2). Each family has multiple sizes with varying numbers of vCPUs, amounts of memory, and levels of network performance, and some can be input/output optimized.

Each RDS instance can be set up as “multi-AZ”, leveraging replicas of the database in different availability zones within AWS. This is often used for production databases. If a problem arises in one availability zone, failover to one of the replica databases happens automatically behind the scenes; you don’t have to manage it. Along with multi-AZ deployments, Amazon offers Aurora, which has more fault tolerance and self-healing beyond multi-AZ, as well as additional performance features.
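
As an illustration, here is a hedged boto3 sketch that provisions a Multi-AZ MySQL instance; the identifier, credentials, and sizes are placeholders:

```python
# Sketch: provisioning a Multi-AZ MySQL RDS instance with boto3.
# The identifier, credentials, and sizes below are placeholders.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_instance(
    DBInstanceIdentifier="prod-mysql",
    DBInstanceClass="db.m4.large",      # Standard (m4) family
    Engine="mysql",
    AllocatedStorage=100,               # GiB
    MasterUsername="admin",
    MasterUserPassword="change-me-now",
    MultiAZ=True,                       # maintain a standby replica in another AZ
)
```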

RDS Pricing

RDS is essentially a service running on top of EC2 instances, but you don’t have access to the underlying instances. Amazon has therefore priced RDS instances in a very similar way to EC2 instances, which will be familiar once you’ve gotten a grasp of the structure already in place for compute. There are multiple components to the price of an instance, including the underlying instance size, data storage, multi-AZ capability, and sending data out (data transfer in is free). To add another layer of complexity, each database type (MySQL, Oracle, etc.) has different prices for each of these factors. Aurora also charges for I/O on top of the other costs.
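
To make those components concrete, here is a toy cost model. The rates are illustrative placeholders rather than current AWS prices, and Aurora’s per-I/O charge is omitted:

```python
# Toy RDS cost model -- the rates are illustrative placeholders, not
# current AWS prices; each engine (MySQL, Oracle, ...) prices differently.
def rds_monthly_cost(instance_rate, storage_gb, egress_gb, multi_az=False,
                     hours=730, storage_rate=0.115, egress_rate=0.09):
    compute = instance_rate * hours * (2 if multi_az else 1)  # Multi-AZ ~doubles compute
    storage = storage_gb * storage_rate                       # $ per GB-month
    egress = egress_gb * egress_rate                          # data out; data in is free
    return compute + storage + egress

# e.g., a $0.175/hr instance, 100 GB storage, 50 GB egress, Multi-AZ:
print(f"${rds_monthly_cost(0.175, 100, 50, multi_az=True):.2f}/month")
```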

When you add all this up, the cost of an RDS instance can go through the roof for a high-volume database. It can also be hard to predict the usage, storage, and transfer needs of your database, especially for new applications. And the raw performance might be a lot less than what you would expect running on your own hardware, or even on your own instances. What makes the price worth it?

RDS vs. Installing a Database on EC2

Frequently, the choice comes down to using RDS for your database backend or installing your own database on an EC2 instance the “traditional” way. From a purely financial perspective, installing your own database is almost guaranteed to be cheaper if you focus on AWS direct costs alone. However, there’s more to the decision than just the cost of the services.

What often gets lost in the use of a service is the time-to-value savings (which include your time, and potentially the opportunity cost/benefit of bringing services online faster). For example, by using RDS instead of your own database, you avoid the need to install and manage the OS and database software, as well as the ongoing patching of both. You also get automatic backups and recovery through the AWS console or AWS API. You avoid having to configure storage LUNs and worrying about optimizing striping for better I/O. Resizing instances is much simpler with RDS, whether going smaller or bigger. High availability (either cold or warm) is available at the click of a button. All of this means less management for you and faster deployment times, though at a higher price point. If your company competes in a highly competitive market, those faster deployment times can make all the difference in the world to your bottom line.

One downside of just about every PaaS offering (and RDS was no exception) is that there typically is no “OFF” switch. This means that in non-production environments you are paying for the service whether your DevOps folks are using it or not. For RDS, that changed recently: AWS now allows RDS instances in dev/test environments to be stopped.
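
For example, stopping and restarting a dev/test database with boto3 looks like this; the instance identifier is a placeholder, and note that AWS automatically restarts a stopped RDS instance after seven days:

```python
# Sketch: parking a non-production RDS instance with boto3.
# "dev-mysql" is a placeholder identifier. AWS automatically restarts a
# stopped RDS instance after seven days, so this isn't a permanent OFF.
import boto3

rds = boto3.client("rds", region_name="us-east-1")

rds.stop_db_instance(DBInstanceIdentifier="dev-mysql")    # park it for the night
# ... later, before the dev team needs it again:
rds.start_db_instance(DBInstanceIdentifier="dev-mysql")
```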

ParkMyCloud has made “parking” public cloud compute resources as simple as possible, and we natively support parking RDS instances as well, helping you save money on non-production databases.

By using our Logical Groups feature, you can create a simple “stack” containing both compute instances and RDS databases to represent a particular application. The start/stop times can be sequenced within the group, and a single schedule can be used on the group for simplified management.

Conclusion

AWS RDS pricing can get a bit tricky, and it really requires you to know the details of your database in order to accurately predict the bill. However, there are a ton of benefits to using the service, and it can really help streamline your systems administration by handling the management and deployment of your backend database. For companies that are moving to the cloud (or born in the cloud), RDS might be your choice when compared to running on a separate compute instance or on your own hypervisor, as it allows you to focus on your business and application, not on being a database administrator. For larger, established companies with a large team of DBAs and well-established automation, or for I/O-intensive applications, RDS might not be the right fit. By knowing the features, benefits, drawbacks, and cost factors, you can make the most informed decision for your database needs.


Interview: DevOps in AWS – How to Automate Cloud Cost Savings


We chatted with Ryan Alexander, DevOps Engineer at Decision Resources Group (DRG), about his company’s use of AWS and how they automate cloud cost savings. Below is a transcript of our conversation.

Hi Ryan, thanks for speaking with us. To start out, can you please describe what your company does?

Decision Resources Group offers market information and data for the medtech industry. For example, let’s say a medical graduate student is doing a thesis on Viagra use in the Boston area. They can use our tool to see information such as age groups, ethnicities, number of hospitals, and number of people who were issued Viagra in the city of Boston.

What does your team do within the company? What is your role?

I’m a DevOps engineer on a team of two. We provide infrastructure automation to the other teams in the organization. We report to senior tech management, which makes us somewhat of an island within the organization.

Can you describe how you are using AWS?

We have an infrastructure team internally. Once a server or infrastructure is built, we take over to build clusters and environments for what’s required. We utilize pretty much every tool AWS offers — EBS, ELB, RDS, Aurora, CloudFormation, etc.

What prompted you to look for a cost control solution?

When I joined DRG in December, there was a new cost saving initiative developing within the organization. It came from our CTO, who knew we could be doing better and wanted to see where we might be leaving money on the table.

How did you hear about ParkMyCloud?

One of my colleagues actually spoke with your CTO, Dale, at AWS re:Invent, and I had also heard about ParkMyCloud at DevOpsDays Toronto 2016. We realized it could help solve some of our cloud cost control problems and decided to take a look.

What challenges were contributing to the high costs? How has ParkMyCloud helped you solve them?

We knew we had a problem where development, staging, and QA environments were only used for 8 hours a day – but they were running for 24 hours a day. We wanted to shut them down and save money on the off hours, which ParkMyCloud helps us do automatically.

We also have “worker” machines that are used a few times a month, but they need to be there. It was tedious to go in and shut them down individually. Now with ParkMyCloud, I put those in a group and shut them down with one click. It is really just that easy to automate cloud cost savings with ParkMyCloud.

We also have security measures in place, where not everyone has the ability to sign in to AWS and shut down instances. If a team in another country needed servers started on demand while I was sleeping, they had to wait until I woke up the next morning, or I had to get up at 2 AM. Now that we’ve set up Single Sign-On, I can give the guys who use those servers the rights to start up and shut down those servers. I no longer have to babysit and turn them on and off as needed, which is more efficient and saves time for all of us.

With ParkMyCloud, we set up teams and users so they can only see their own instances, so they can’t cause a cascading failure because they can only see the servers they need.

Were there any unexpected benefits of ParkMyCloud?

When I started, I deleted 3 servers that were sitting there doing nothing for a year and costing the company lots of money. With ParkMyCloud, that kind of stuff won’t happen, because everything gets sorted into teams. We can see the costs by team and ask the right questions, like, “why is your team’s cost so expensive right now? Why are you ignoring these recommendations from ParkMyCloud to park these instances?”

 

We rely on tagging to do all of this. Tagging is life in DevOps.


Interview: Atlassian Bamboo Automation + ParkMyCloud for AWS Cost Savings


We talked with Travis Rehl, Director of Application and Engineering at Siteworx, about how his team is using ParkMyCloud in conjunction with Atlassian Bamboo automation in order to improve governance and optimize their AWS cloud infrastructure. Below is a transcript of our conversation.

Can you start by telling us about Siteworx and what you guys do?

Sure, so Siteworx is a company that does digital transformations for clients, and my particular piece of it is Managed Services Hosting. We host ecommerce and content management systems for clients, generally Fortune 500 companies or larger. We host specific products in AWS, and we’re moving into Azure as well.

What is your role in the company?

I am the Director of Application and Engineering here at Siteworx. I run the Siteworx services group which includes our hosting department as well as our application development team which supports our “run” phase of an engagement with a client.

Who in your organization is using ParkMyCloud?

We are currently using it for our Siteworx internal infrastructure, both EC2 and RDS, but I have some ideas to add it as a part of our managed services offering.

In the app we have maybe 5 or 6 users. They are team leads or engineering managers who have identified the scheduling that is appropriate for those particular instances and AWS accounts. This gives them the ability to group different servers together by environment levels for different clients.  One person from our finance team has access to it for billing and reporting.

My team in particular that uses ParkMyCloud is our engineering and operations group. There are two main teams of ParkMyCloud users: our Operations team, which is 24/7, and our Engineering team, which is generally 9-5 Eastern. They use ParkMyCloud to reduce costs, and have implemented it in such a way that our Development teams can turn servers back on as needed. If they have a project or demo occurring at an off hour, they can hit a button through our automation system (we’re using Atlassian Bamboo automation) to turn on the servers and utilize them.

Can you tell us more about that Atlassian Bamboo automation system?

If a team member wants to deploy code to a server during off hours, they will have a button within Bamboo to press to turn the server on via the ParkMyCloud API. Then they can hit a second set of buttons to send their code changes out to it. We utilize the calendar “snooze” function that PMC offers.

What were you looking for when you found ParkMyCloud?

I was looking for a technology that would allow us to optimize and automate our AWS cloud management. Internally, we have an agenda of trying to branch out to as many cloud platforms as necessary. So I was looking into many different services that manage your cloud-based servers and are compatible with different providers. That is when ParkMyCloud was suggested to me by a friend. We started a free trial, and got in touch with you all.

I am all in on ParkMyCloud, and I think we have a lot of use for it; down the road, we plan to work with our clients to incorporate it into our service offering.

Do you have any other cost control measures in place for AWS?

We evaluate server performance using Trusted Advisor in AWS or other services that say that you could scale down. The issue with those other services is that they are sometimes inaccurate because they use average CPU usage that does not take into account server down time. We try to evaluate and scale down as necessary based on the CPU usage when it is active.

How did the evaluation with ParkMyCloud go?

After we did some initial research on ParkMyCloud and other tools, we got in touch with PMC, started a free trial, did a demo, and got a few questions clarified; the entire process took just a couple of weeks. The platform is entirely self-service, and the ROI is immediate and verifiable.
