Google Hangouts & Microsoft Teams Integrations for Cloud Server Monitoring

New in ParkMyCloud: we’ve released integrations with chat clients Google Hangouts and Microsoft Teams to make cloud server monitoring easier and integrated into your day. Now, ParkMyCloud users can get notifications when their resources are about to turn on or off, when a user overrides a schedule, and more.

We created these integrations based on popular demand! ParkMyCloud has had a Slack integration since last summer. Now, we’re encountering more and more teams that set themselves up as pure Google or pure Microsoft shops, hence the need. If your team only uses Google tools – Google Cloud Platform for cloud, Google OAuth for SSO, and Google Hangouts for chat — you can use ParkMyCloud with all of these. Same with Microsoft: ParkMyCloud integrates with Microsoft Azure, ADFS, and Microsoft Teams.  

ParkMyCloud notifications in Google Hangouts – note the “view resource” link will take you straight to the resource in ParkMyCloud

Here are the actions ParkMyCloud admins can be notified about through a chat client for better cloud server monitoring:

  • Resource Shutdown Warning – Provides a 15-minute warning before an instance is scheduled to be parked due to a schedule or expiring schedule override.
  • User Actions – These are actions performed by users in ParkMyCloud such as manual resource state toggles, attachment or detachment of schedules, credential updates, etc.
  • Parking Actions – These are actions specifically related to parking such as automatic starting or stopping of resources based on defined parking schedules.
  • Policy Actions – These are actions specifically related to configured policies in ParkMyCloud such as automatic schedule attachments based on a set rule.
  • System Errors – These are errors occurring within the system itself such as discovery errors, parking errors, invalid credential permissions, etc.
  • System Maintenance and Updates – These are the notifications provided via the banner at the top of the dashboard.

There are a few ways these can be useful. If you’re an IT administrator and you see your users toggling resource states frequently, the notifications may help you determine the best parking schedule for the users’ needs.

Or let’s say you’re a developer deep in a project and you get a notification that your instance is about to be shut down — but you still need that instance while you finish your work. Right in your Microsoft Teams window, you can send an override command to ParkMyCloud to keep the instance running for a couple more hours.

ParkMyCloud notifications in Microsoft Teams

These integrations give ParkMyCloud users a better perspective into cloud server monitoring, right in the same workspaces they’re using every day. Feedback? Comment below or shoot us an email – we are happy to hear from you!

P.S. We also just created a user community on Slack! Feel free to join here for cloud cost, automation, and DevOps discussions.

Cloud User Management Comparison: AWS vs. Azure vs. GCP vs. Alibaba Cloud

When companies move from on-prem workloads to the cloud, common concerns arise around costs, security, and cloud user management. Each cloud provider handles user permissions in a slightly different way, with varying terminology and roles available to assign to each of your end users. Let’s explore a few of the differences in users and roles within Amazon Web Services (AWS), Microsoft Azure, Google Cloud Platform, and Alibaba Cloud.

AWS IAM Users and Roles

AWS captures all user and role management within IAM, which stands for “Identity and Access Management”. Through IAM, you can manage your users and roles, along with all the permissions and visibility those users and service accounts have within your AWS account. There are a couple of different IAM entities:

  • Users – used when an actual human will be logging in
  • Roles – used when service accounts or scripts will be interacting with resources

Both users and roles can have IAM policies attached, which give specific permissions to operate or view any of the other AWS services.
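
For a concrete picture of how these pieces fit together, here’s a minimal sketch using boto3 (the AWS SDK for Python); the user name and policy are purely illustrative:

```python
# Minimal sketch with boto3: create an IAM user and attach a read-only EC2 policy.
# "demo-user" and "demo-ec2-read-only" are illustrative names, not prescriptions.
import json
import boto3

iam = boto3.client("iam")

# A user is for a human who will log in
iam.create_user(UserName="demo-user")

# An IAM policy document granting read-only visibility into EC2
policy_doc = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow", "Action": ["ec2:Describe*"], "Resource": "*"}
    ],
}

policy = iam.create_policy(
    PolicyName="demo-ec2-read-only",
    PolicyDocument=json.dumps(policy_doc),
)

# Attach the policy to the user; the same policy could instead be attached to a
# role for a script or service account
iam.attach_user_policy(
    UserName="demo-user",
    PolicyArn=policy["Policy"]["Arn"],
)
```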

Azure RBAC

Azure handles user permissions through the RBAC (“Role-Based Access Control”) system within Resource Manager. Granting access to Azure resources starts with creating a Security Principal, which can be one of three types:

  • User – a person who exists in Azure Active Directory
  • Group – a collection of users in Azure Active Directory
  • Service Principal – an application or service that needs to access a resource

Each Security Principal can be assigned a Role Definition, which is a collection of permissions it can use to view or access resources in Azure. There are a few built-in Role Definitions, such as Owner, Contributor, Reader, and User Access Administrator, but you can also create custom role definitions depending on your cloud user management needs. Roles may be assigned on a subscription-by-subscription basis.
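
To see what this looks like in practice, here’s a rough sketch that lists role definitions and role assignments at a subscription scope using the azure-mgmt-authorization Python SDK (field and method names can differ slightly between SDK versions, so treat this as illustrative):

```python
# Rough sketch: inspect Azure RBAC at subscription scope.
# Assumes azure-identity and azure-mgmt-authorization; the subscription ID is a placeholder.
from azure.identity import DefaultAzureCredential
from azure.mgmt.authorization import AuthorizationManagementClient

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
scope = f"/subscriptions/{subscription_id}"

client = AuthorizationManagementClient(DefaultAzureCredential(), subscription_id)

# Built-in and custom Role Definitions (Owner, Contributor, Reader, ...)
for role_def in client.role_definitions.list(scope):
    print(role_def.role_name, role_def.role_type)

# Which Security Principals hold which roles at this scope
for assignment in client.role_assignments.list_for_scope(scope):
    print(assignment.principal_id, assignment.role_definition_id)
```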

Google Cloud Platform IAM

Google Cloud Platform also uses the term IAM for their user permissions. The general workflow is to grant each “identity” a role that applies to each resource within a project. An identity can be any of the following:

  • Google account – any user with an email that is associated with a Google account
  • Service account – an application that logs in through the Google Cloud API
  • Google group – a collection of Google accounts and service accounts
  • G Suite domain – all Google accounts under a domain in G Suite
  • Cloud Identity domain – all Google accounts in a non-G-Suite organization

Roles in Google Cloud IAM are a collection of permissions. There are some primitive roles (Owner, Editor, and Viewer), some predefined roles, and the ability to create custom roles with specific permissions through an IAM policy.
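
As a sketch of what granting a role looks like in practice, the usual pattern is a read-modify-write of the project’s IAM policy. This example assumes the google-api-python-client library and uses placeholder identifiers:

```python
# Rough sketch: grant an identity a role on a GCP project via the
# Cloud Resource Manager v1 API. Project ID, member, and role are placeholders.
from googleapiclient import discovery

project_id = "my-project-id"         # placeholder
member = "user:jane@example.com"     # could also be serviceAccount:, group:, or domain:
role = "roles/viewer"                # primitive, predefined, or custom role

crm = discovery.build("cloudresourcemanager", "v1")

# Read the current policy, append a binding, and write the policy back
policy = crm.projects().getIamPolicy(resource=project_id, body={}).execute()
policy.setdefault("bindings", []).append({"role": role, "members": [member]})
crm.projects().setIamPolicy(resource=project_id, body={"policy": policy}).execute()
```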

Alibaba Cloud RAM

Alibaba Cloud has a service called RAM (Resource Access Management) for managing user identities. These identities work slightly differently than their counterparts at the other cloud service providers, though they have similar names:

  • RAM-User – a single real identity, usually a person but can also be a service account
  • RAM-Role – a virtual identity that can be assigned to multiple real identities

RAM users and roles can have one or more authorization policies attached to them, and each policy can in turn contain multiple permissions. These permissions then work much like they do at the other CSPs: a User or Role is granted access to view or act upon a given resource.

Cloud User Management – Principles to Follow, No Matter the Provider

As you can see, each cloud service provider has a way to enable users to access the resources they need in a limited scope, though each method is slightly different. Your organization will need to come up with the policies and roles you want your users to have, which is a balancing act between allowing users to do their jobs and not letting them break the bank (or your infrastructure). The good news is that you will certainly have the tools available to provide granular access control for your cloud user management, regardless of the cloud (or clouds) you’re using.

6 Types of Overprovisioned Resources Wasting Money on Your Cloud Bill

In our ongoing discussion on cloud waste, we recently talked about orphaned resources eating away at your cloud budget, but there’s another type of resource that’s costing you money needlessly and this one is hidden in plain sight – overprovisioned resources. When you looked at your initial budget and made your selection of cloud services, you probably had some idea of what resources you needed and in what sizes. Now that you’re well into your usage, have you taken the time to look at those metrics and analyze whether or not you’ve overprovisioned?

One of the easiest ways to waste money is by paying for more than you need and not realizing it. Here are 6 types of overprovisioned resources that contribute to cloud waste.  

Unattached/Underutilized Volumes

As a rule of thumb, it’s a good idea to delete volumes that are not attached to instances or VMs. Take the example of AWS EBS volumes unattached to EC2 instances – if you’re not using them, then all they’re doing is needlessly accruing charges on your monthly bill. And even if your volume is attached to an instance, it’s billed separately, so you should also make a practice of deleting volumes you no longer need (after you backup the data, of course).
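
If you’d like to hunt these down yourself, a minimal boto3 sketch like the one below lists unattached EBS volumes; the actual deletion call is left commented out as a safeguard:

```python
# Minimal sketch with boto3: find EBS volumes in the "available" state,
# i.e. not attached to any instance, and print them for review.
import boto3

ec2 = boto3.client("ec2")

paginator = ec2.get_paginator("describe_volumes")
for page in paginator.paginate(Filters=[{"Name": "status", "Values": ["available"]}]):
    for vol in page["Volumes"]:
        print(f"Unattached volume {vol['VolumeId']} ({vol['Size']} GiB)")
        # After backing up anything you need (e.g. ec2.create_snapshot(VolumeId=...)),
        # the volume can be removed with:
        # ec2.delete_volume(VolumeId=vol["VolumeId"])
```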

Underutilized database warehouses

Data warehouses like Amazon Redshift, Google BigQuery, and Microsoft Azure SQL Data Warehouse were designed as a simple and cost-effective way to analyze data using standard SQL and your existing Business Intelligence (BI) tools. But to get the most cost-savings benefit, you’ll want to identify any clusters that appear to be underutilized and rightsize them to lower the cost on your monthly bill.

Underutilized relational databases

Relational databases such as Amazon RDS, Azure SQL, and Google Cloud SQL offer the ability to directly run and manage a relational database without managing the infrastructure that the database is running on or having to worry about patching of the database software itself.

As a best practice, Amazon recommends that you check the configuration of your RDS for any idle DB instances. You should consider a DB instance idle if it has not had a connection for a prolonged period of time, and proceed by deleting the instance to avoid unnecessary charges. If you need to keep storage for data on the instance, there are other cost-effective alternatives to deleting altogether, like taking snapshots. But remember – manual snapshots are retained, taking up storage and costing you money until you delete them.
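
One way to spot idle instances is to check the DatabaseConnections metric in CloudWatch. Here’s a hedged boto3 sketch; the seven-day window is an arbitrary assumption you should tune to your own definition of “idle”:

```python
# Hedged sketch with boto3: flag RDS instances that have had zero database
# connections over the past week as candidates for snapshot-and-delete.
from datetime import datetime, timedelta, timezone
import boto3

rds = boto3.client("rds")
cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

for db in rds.describe_db_instances()["DBInstances"]:
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/RDS",
        MetricName="DatabaseConnections",
        Dimensions=[{"Name": "DBInstanceIdentifier", "Value": db["DBInstanceIdentifier"]}],
        StartTime=now - timedelta(days=7),
        EndTime=now,
        Period=3600,            # hourly datapoints
        Statistics=["Maximum"],
    )
    datapoints = stats["Datapoints"]
    if datapoints and all(p["Maximum"] == 0 for p in datapoints):
        print(f"{db['DBInstanceIdentifier']} appears idle: no connections in 7 days")
```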

Underutilized Instances/VMs

We often preach about idle instances and how they waste money, but sizing your instances incorrectly is just as detrimental to your monthly bill. It’s easy to overspend on large instances or VMs that you don’t need. With any cloud service, whether it’s AWS, Azure, or GCP, you should always “rightsize” your instances and VMs by picking the instance size that is optimized for the size of your workload – be it compute optimized, memory optimized, GPU optimized, or storage optimized.

Once your instance has been running for some time, you’ll have a better idea of whether or not the chosen size is optimal. Review your usage and make cost estimates with the AWS Management Console, Amazon CloudWatch, and AWS Trusted Advisor if you’re using AWS. Azure users can review their metrics from Azure Monitor data, and Google users can import GCP metrics data for GCP virtual machines. Use this information to find underutilized resources that can be resized to better optimize costs.
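
For example, a small boto3 script can pull CloudWatch CPU utilization for your running instances and flag likely rightsizing candidates; the 14-day window and 10% threshold below are arbitrary assumptions to adjust for your workloads:

```python
# Hedged sketch with boto3: flag running EC2 instances whose average CPU
# utilization never exceeded 10% over the past two weeks.
from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2")
cloudwatch = boto3.client("cloudwatch")
now = datetime.now(timezone.utc)

reservations = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)["Reservations"]

for reservation in reservations:
    for instance in reservation["Instances"]:
        stats = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",
            Dimensions=[{"Name": "InstanceId", "Value": instance["InstanceId"]}],
            StartTime=now - timedelta(days=14),
            EndTime=now,
            Period=86400,            # one datapoint per day
            Statistics=["Average"],
        )
        averages = [p["Average"] for p in stats["Datapoints"]]
        if averages and max(averages) < 10:
            print(f"{instance['InstanceId']} ({instance['InstanceType']}) may be oversized")
```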

Inefficient Containerization

Application containerization allows multiple applications to be distributed across a single host operating system without requiring their own VM, which can lead to significant cost savings. It’s possible that developers will launch multiple containers and fail to terminate them when they are no longer required, wasting money. Due to the number of containers being launched compared to VMs, it will not take long for container-related cloud waste to match that of VM-related cloud waste.

The problem with controlling cloud spend using cloud management software is that many solutions fail to identify unused containers because the solutions are host-centric rather than role-centric.  

Idle hosted caching tools (Redis)

Hosted caching tools like Amazon ElastiCache offer high performance, scalable, and cost-effective caching. ElastiCache also supports Redis, an open source (BSD licensed) in-memory data structure store used as a database, cache, and message broker. While caching tools are highly useful and can save money, it’s important to identify idle cluster nodes and delete them from your account to avoid accruing charges on your monthly bill. Keep an eye on average CPU utilization, and get into the practice of deleting a node when its average utilization falls below a minimum threshold you set.

How to Combat Overprovisioned Resources (and lower your cloud costs)

Now that you have a good idea of ways you could be overprovisioning your cloud resources and needlessly running up your cloud bill – what can you do about it? The end-all-be-all answer is “be vigilant.” The only way to be sure that your resources are cost-optimal is with constant monitoring of your resources and usage metrics. Luckily, optimization tools can help you identify and automate some of these best practices and do a lot of the work for you, saving time and money.

New in ParkMyCloud: Park Azure Scale Sets

Today, we are happy to announce that you can now park Azure scale sets – allowing you to optimize costs for these groups of Microsoft Azure virtual machines.

Use other public clouds? You can park those scale groups, too. A few weeks ago, we announced GCP Managed Instance Group support, and we have supported AWS auto scaling groups for some time.

Back to Azure – let’s take a look at the new functionality.

How You Can Park Azure Scale Sets

In ParkMyCloud, you can now manage and park Azure scale sets, both with and without autoscaling, turning them off or down to a “low” state when they’re not needed in order to save money. When you set a parking schedule on a scale set, you can use a straightforward “on/off” schedule, in which the parked state sets the maximum number of resources to 0 so the group is fully parked. Or, if you prefer, set your own preferred number of resources for a “low” rather than “off” state.
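
Under the hood, parking a scale set comes down to reducing its capacity. As a rough sketch of the idea (not ParkMyCloud’s implementation), here is what scaling a set down to zero might look like with the Azure SDK for Python, assuming the azure-mgmt-compute package and placeholder resource names:

```python
# Rough sketch: scale an Azure scale set to 0 instances ("off").
# Use a small non-zero capacity instead for a "low" state.
# Model names can vary slightly between azure-mgmt-compute versions.
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient
from azure.mgmt.compute.models import VirtualMachineScaleSetUpdate, Sku

subscription_id = "00000000-0000-0000-0000-000000000000"  # placeholder
client = ComputeManagementClient(DefaultAzureCredential(), subscription_id)

poller = client.virtual_machine_scale_sets.begin_update(
    "my-resource-group",      # placeholder
    "my-scale-set",           # placeholder
    VirtualMachineScaleSetUpdate(sku=Sku(capacity=0)),
)
poller.result()   # wait for the scale operation to complete
```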

While we’re talking to our Microsoft fans — don’t miss the Microsoft Teams bot we made so you can control ParkMyCloud right from your chat window! ChatOps is fun, and this bot can streamline your workday by saving you a trip to the ParkMyCloud console.

ParkMyCloud Users: Enable Scale Sets and Get Parking

Existing users: in order to use Azure scale sets, you must update Azure Service Account permissions, as detailed in the ParkMyCloud User Guide.

Once you’ve done that, you can start parking scale sets. You can filter your dashboard to show only scale groups: on the left menu under “Resources,” click “Auto Scaling Groups” to see just that type of resource. You can select a group and put a parking schedule on it, just like an individual instance.

As mentioned above, you can customize the number of resources in the group in the high and low states. For the selected group, click the arrow on the far right to open the resource detail screen. There you can set a “desired” number of resources for the group at start and at stop.

Note that if your scale sets have multiple scaling profiles, they won’t be parkable and will be denoted with the “unparkable” icon. The number of “Autoscale Profiles” assigned to an Azure scale set is listed on the resource details screen.

New Users: Get Started

If you don’t use ParkMyCloud yet, it’s easy to get started and start saving 65% or more on your cloud costs. We recently upgraded our 14-day free trial to provide Enterprise tier access, so you’ll get to try out everything from the user import/export feature to database parking to SmartParking, with unlimited users, teams, and cloud credentials. Get started now.

New Microsoft Teams Bot to Control Cloud Costs

Today we’d like to announce a new Microsoft Teams bot that allows you to fully interact with ParkMyCloud directly through your chat window, without having to access the web GUI. By combining this chatbot with a direct notifications feed of any ParkMyCloud activities through our webhook integration, you can manage your continuous cost control from the Microsoft Teams channels you live in every day — making it easy to save 65% or more on your instance costs.

Organizations that have adopted DevOps principles are increasingly using ChatOps to manipulate their environments and provide a self-service platform for accessing the servers and databases they need for their work. There are a few different chat systems and bot platforms available – we also have a chat bot for Slack – but one that is growing rapidly in popularity is Microsoft Teams.

By setting up the Microsoft Teams bot to interact with your ParkMyCloud account, you can allow users to:

  • Assign schedules
  • Temporarily override schedules on parked instances
  • Toggle instances to turn off or on as needed

Combine this with notifications from ParkMyCloud, and you can have full visibility into your cost control initiatives right from your standard Microsoft Teams chat channels. Notifications allow you to have ParkMyCloud post messages for things like schedule changes or instances that are being turned off automatically.

Now, with the new ParkMyCloud Teams bot, you can reply back to those notifications to:

  • Snooze the schedule
  • Turn a system back on temporarily
  • Assign a new schedule.

The chatbot is open-source, so you can feel free to modify the bot as necessary to fit your environment or use cases. It’s written in NodeJS using the botbuilder library from Microsoft, but even if you’re not a NodeJS expert, we tried to make it easy to edit the commands and responses. We’d love to have you send your ideas and modifications back to us for rapid improvement.

If you haven’t already signed up for ParkMyCloud to help save you 65% on your cloud bills, then start a free trial and get the Microsoft Teams bot hooked up for easy ChatOps control. You’ll find that ParkMyCloud can make continuous cost control easy and help reduce your cloud spend, all while integrating with your favorite DevOps tools.

 

Why Your Spring Cleaning Should Include Unused Cloud Resources

Given that spring is very much in the air – at least it is here in Northern Virginia – our attention has turned to tidying up the yard and getting things in good shape for summer. While things are not so seasonally-focused in the world of cloud, the metaphor of taking time out to clean things up applies to unused cloud resources as well. We have even seen some call this ‘cloud pruning’ (not to be confused with the Japanese gardening method).

Cloud pruning is important for improving both the cost and performance of your infrastructure. So what are some of the ways you can go about cleaning up, optimizing, and ensuring that your cloud environments are in great shape?

Delete Old Snapshots

Let’s start by focusing on items we no longer need. One of the most common types of unused cloud resources is old snapshots. These are point-in-time copies of your EBS volumes on AWS, your storage disks (blobs) on Azure, and your persistent disks on GCP. If you have any kind of backup strategy, you likely already understand the need to manage the number of snapshots you keep for a particular volume and to delete older, unneeded ones. Cleaning these up immediately helps save on your storage costs, and there are documented best practices for streamlining the process, as well as a number of free and paid tools to help support it.
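
If you want to script part of this cleanup, a minimal boto3 sketch can list your own snapshots older than whatever retention window you choose (90 days below is just an example):

```python
# Minimal sketch with boto3: list your EBS snapshots older than a cutoff date
# so they can be reviewed and, if no longer needed, deleted.
from datetime import datetime, timedelta, timezone
import boto3

ec2 = boto3.client("ec2")
cutoff = datetime.now(timezone.utc) - timedelta(days=90)  # example retention window

paginator = ec2.get_paginator("describe_snapshots")
for page in paginator.paginate(OwnerIds=["self"]):
    for snap in page["Snapshots"]:
        if snap["StartTime"] < cutoff:
            print(f"Candidate for deletion: {snap['SnapshotId']} from {snap['StartTime']}")
            # ec2.delete_snapshot(SnapshotId=snap["SnapshotId"])
```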

Delete Old Machine Images

A Machine Image provides the information required to launch an instance, which is a virtual server in the cloud. In AWS these are called AMIs, in Azure they’re called Managed Images, and in GCP Custom Images. When these images are no longer needed, you can deregister them. However, deregistering alone doesn’t necessarily stop the charges: the snapshot that was created alongside the image typically remains and continues to incur storage costs. So when you’re finished with an AMI, be sure to delete its accompanying snapshot as well. Managing your old AMIs does require work, but both the cloud providers and third-party vendors offer methods to streamline the process of managing this type of unused cloud resource.
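
Here’s a minimal boto3 sketch of that cleanup on AWS: deregister an AMI and then delete the snapshots behind it. The image ID is a placeholder, and you should confirm nothing still launches from the AMI first:

```python
# Minimal sketch with boto3: deregister an AMI and delete its backing snapshots
# so the image stops incurring storage charges. The image ID is a placeholder.
import boto3

ec2 = boto3.client("ec2")
image_id = "ami-0123456789abcdef0"  # placeholder

# Look up the snapshots behind the image before deregistering it
image = ec2.describe_images(ImageIds=[image_id])["Images"][0]
snapshot_ids = [
    bdm["Ebs"]["SnapshotId"]
    for bdm in image.get("BlockDeviceMappings", [])
    if "Ebs" in bdm
]

ec2.deregister_image(ImageId=image_id)
for snapshot_id in snapshot_ids:
    ec2.delete_snapshot(SnapshotId=snapshot_id)
```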

Optimize Containers

With the widespread adoption of containers in the last few years, and most of the attention going to their specific benefits, few have paid attention to ensuring those containers are optimized for performance and cost. One of the most effective ways to maximize the benefits of containers is to host multiple containerized application workloads within a single larger instance (typically a large or x-large VM) rather than on a number of smaller, separate VMs. This is particularly useful in your dev and test environments, rather than in production, where you may only have one machine available to deploy to. As containerization continues to evolve, services such as AWS Fargate are enabling much more control over the resources required to run your containers than is available today with traditional VMs. In particular, you can specify the exact CPU and memory your code requires, so the amount you pay scales exactly with how many containers you are running.

So alongside pruning your trees or sweeping your deck and taking care of your outside spaces this spring, remember to take a look around your cloud environment and look for opportunities to remove unused cloud resources to optimize not only for cost, but also performance.

How to Turn AWS Utilization Data into Automated Cost Control

Learn how your AWS utilization data in CloudWatch can be harnessed to optimize your cloud costs. Register now for a chance to win a $100 Amazon.com gift card!

June 26th | 2 PM ET