SysAdmin vs. DevOps: 4 Ways That the Cloud is Redefining IT

SysAdmin vs. DevOps? IT Operations Management vs. Cloud Operations Management? Unless your head has been under a rock, you’re probably aware that the cloud has been rapidly reshaping and redefining IT as we know it — from the language we use to describe it to the management models and infrastructure itself.

Cloud providers like Amazon Web Services, Google Cloud Platform, and Microsoft Azure have transformed cloud computing, giving businesses access to IT resources anytime, anywhere. At the same time, this rapid migration to off-premise cloud has been reshaping the needs and roles in the IT department.

Here are 4 ways that the cloud is redefining IT roles and operations:  

Sysadmin vs. DevOps

When you compare sysadmin vs. DevOps, you'll find that the roles are related but distinct. A System Administrator, or sysadmin, is the person responsible for configuring, operating, and maintaining computer systems – servers in particular. This jack-of-all-trades IT role handles everything from installations and upgrades to security, troubleshooting, technical support, and more.

And then we have the evolution of DevOps, which may well be the biggest game changer in the IT process. Under the DevOps umbrella, a team of software developers, IT operations staff, and product management people combine strengths to streamline and stabilize operations for rolling out new apps and updating code to support and improve the whole business.

With the cloud taking over and removing the need for physical, on-prem servers, a large portion of the sysadmin role has been lost to automation. As this change occurred, sysadmins stayed relevant by shifting toward supporting developers, and that combination of efforts gave birth to the term DevOps. So can you truly compare sysadmin vs. DevOps? Well, the roles are similar in the sense that DevOps engineers can do a lot of what sysadmins do, but not the other way around, making DevOps the newer, bigger jack of all trades.

IT Operations Management vs. Cloud Operations Management

IT Operations Management is responsible for the efficiency and performance of IT processes, which can include anything from administrative processes to hardware and software support, and for both internal and external clients. IT management sets the standard policies and procedures for how service and support is carried out and how issues are resolved.

Thanks to the cloud, IT management has also given way to automation and outsourcing. Cloud operational processes are now a more efficient way of using resources, providing services, and meeting compliance requirements. In the same way that ITOM manages IT processes, Cloud Operations Management does so in a cloud environment, with resource capacity planning and cloud analytics that provide vital intelligence into how to control resources and run them cost-effectively (speaking of, check out our recent partnership aimed at making this easier for you).

IT Service Management vs. Cloud Service Management

Traditional IT service management (ITSM) deals with the design, delivery, management, and improvement of the way an organization uses IT. This involves developing, implementing, and monitoring IT governance and management through frameworks such as COBIT, Microsoft Operations Framework, Six Sigma, and ITIL.

As the cloud became a better option for operational management, companies turned to cloud computing to transform their business models, outsourcing IT to service providers like Amazon Web Services, Google Cloud Platform, and Microsoft Azure for more efficient, scalable cloud services.

Since cloud computing resources are hosted as off-site VMs and managed externally, ITSM has grown more complex, giving rise to Cloud Service Management (CSM) as an extension of ITSM focused on three core areas: automated service provisioning, DevOps, and asset management. As ITSM shifts toward CSM, the main concerns are cloud adoption strategy and the approach to designing, deploying, and running cloud services.

Finance and Operations vs. DevFinOps

In a world where IT projects are known to exceed budgets and cost estimates are no easy feat, how can businesses arrive at a reasonable overall estimate for projects that develop, build, and run applications on utility-priced infrastructure? The answer is to estimate little by little as parts of the work are completed, integrating financial planning directly into IT development and operations. In other words: DevFinOps.

IT asset management (ITAM) merges the financial, contractual, and inventory components of an IT project to support life cycle management and strategic decision making. The strategy covers both software and hardware inventory and the decision-making process for purchases and redistribution. DevFinOps expands on ITAM by embedding the financial cost and value of IT assets directly into the IT infrastructure, updating calculations in real time and simplifying the budgeting process.

What This Means For You

Cell phones, self-driving cars, DevOps: cloud computing is yet another evolution in technology, albeit a huge one, and IT is simply going through a metamorphosis. The best way of looking at it is that the cloud is not killing IT, it's redefining IT, and enterprises are following suit as they shift toward the cloud and update traditional IT roles. As IT evolves, the cloud is paving the way for opportunities for those who adapt and evolve their roles with it.

 

Cloud Operations Management: Is the cloud really making operations easier?

As cloud becomes more mature, the need for cloud operations management becomes more pervasive. In my world, it seems pretty much like IT Operations Management (ITOM) from decades ago. In the way-back machine I used to work at Micromuse, the Netcool company, which was acquired by IBM Tivoli, the Smarter Planet company, which then turned Netcool into Smarter Cloud … well you get the drift. Here we are 10+ years later, and IT = Cloud (and maybe chuck in some Watson).

Cloud operations management is the process concerned with designing, overseeing, controlling, and subsequently redesigning cloud operational processes.  This involves management of both hardware and software as well as network infrastructures to promote an efficient and lean cloud.

Analytics is heavily involved in cloud operations management and is used to maximize visibility into the cloud environment, giving the organization the intelligence required to control resources and run services confidently and cost-effectively.

Cloud operations management can:

  • Improve efficiency and minimize the risk of disruption
  • Deliver the speed and quality that users expect and demand
  • Reduce the cost of delivering cloud services and justify your investments

Since ParkMyCloud helps enterprises control cloud costs, we mostly talk to customers about the part of cloud operations concerned with running and managing resources. We are all about that third bullet – reducing the cost of delivering cloud services and justifying investments. We strive to accomplish that while also helping with the first two bullets to really maximize the value the cloud brings to an enterprise.

So what's really cool is when we get to ask people what tools they are using to deploy, secure, govern, automate, and manage their public cloud infrastructure. Those are the tools they want us to integrate with as part of their cost optimization efforts, and they help us understand the roles operations folks now play in public cloud (CloudOps).

And no, it's not easier to manage the cloud. In fact, I would say it's harder. The cloud provides numerous benefits – agility, time to market, OpEx vs. CapEx, etc. – but you still have to automate, manage, and optimize all those resources. The pace of change is mind-boggling – AWS advertises 150+ services now, from basic compute to AI, and everything in between.

So who are these people responsible for cloud operations management? Their titles tend to be DevOps, CloudOps, IT Ops, and Infrastructure-focused, and they are tasked with operationalizing their cloud infrastructure while teams of developers, testers, stagers, and the like are constantly building apps in the cloud and leveraging a bottom-up tools approach. Ten years ago, people could not just stand up a stack in their office and have at it, but they sure as hell can now.

So what does this look like in the cloud? I think KPMG did a pretty good job with this graphic; it generally hits on the functional buckets we see people stick tools into for cloud operations management.

So how should you approach your cloud operations management journey? Let’s revisit the goals from above.

  1. Efficiency – Automation is the name of the game. Narrow in on the tools that provide automation to free up your team's development time.
  2. Deliverability – See the point above. When your team has time, they can focus on delivering the best possible product to your customers.
  3. Cost control – Think of "continuous cost control" as a companion to continuous integration and continuous delivery. This area, too, can benefit from automated tools – learn more about continuous cost control, and see the scheduling sketch after this list.
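
To make the automation point concrete, here is a minimal sketch (assuming Python and the AWS boto3 SDK) of the kind of scheduled job an operations team might run to stop non-production instances outside business hours. The region and the tag key/values are assumptions you would adapt to your own environment and tagging scheme.

```python
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Find running instances tagged as non-production.
# The tag key "environment" and its values are assumptions for illustration.
response = ec2.describe_instances(
    Filters=[
        {"Name": "tag:environment", "Values": ["dev", "test", "staging"]},
        {"Name": "instance-state-name", "Values": ["running"]},
    ]
)

instance_ids = [
    instance["InstanceId"]
    for reservation in response["Reservations"]
    for instance in reservation["Instances"]
]

if instance_ids:
    # Stop (not terminate) the instances; a cron job or EventBridge rule
    # would invoke this script in the evening, with a mirror-image script
    # calling start_instances in the morning.
    ec2.stop_instances(InstanceIds=instance_ids)
    print(f"Stopped {len(instance_ids)} non-production instances")
```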

 

Webinar Recap: ParkMyCloud + CloudHealth for Hybrid Cloud Governance and Cost Optimization

We recently announced that ParkMyCloud and CloudHealth Technologies have joined forces, merging cloud cost optimization with hybrid cloud governance and bringing you the best of both worlds. To demonstrate the value of this partnership, we held a joint webinar last week to discuss the customer case study of Connotate – a business that specializes in providing web data extraction solutions, and was able to optimize their cloud environment and maximize their ROI thanks to ParkMyCloud and CloudHealth, together.  

The webinar panel included Chris Parlette – Director of Cloud Solutions at ParkMyCloud, JP Nahmias – Director of Product Development at CloudHealth, and Andrew Dawson – Solutions Architect at CloudHealth, who worked directly with Reed Savory – Director of IT at Connotate – to help them adopt the two platforms. You can replay the entire webinar, or if you prefer, check out the recap below:

CloudHealth Technologies: Leader in Cloud Management & Hybrid Cloud Governance

CloudHealth Technologies manages $4.1 billion worth of cloud spend – just above a quarter of Amazon's total in AWS. CloudHealth boasts a robust infrastructure, handling about $100 million in monthly RI purchases from its clients.

CloudHealth works by collecting data from various cloud providers (AWS, Azure, Google), consolidating it into tables, and evaluating your cloud environment based on the metrics you're interested in, such as cost, usage, and optimization. CloudHealth reports those metrics directly to you, but also goes a step further by optimizing your infrastructure, making recommendations, and providing hybrid cloud governance. As the customer takes the recommended actions, CloudHealth automates the environment for them.

ParkMyCloud: Leader in Cloud Cost Optimization

ParkMyCloud was created to help cloud users realize cost savings in a world of tools that provide visibility into their environments without helping them take action.

ParkMyCloud automates cloud cost savings by integrating into DevOps processes. The need for such a tool became clear after enterprises migrated to the cloud thinking it was supposed to be cheaper, only to notice when the bill came at the end of the month that something wasn't right. That something is called cloud waste.

The Cloud Waste Problem

  1. Always on means always paying. Cloud services are like any other public utility: you pay for what you use. If you leave them running, you continue paying whether you're actually using them or not (see the rough savings math after this list).
    • 44% of workloads are classified as non-production (i.e. test, development, staging, etc.) and don't need to run 24/7
  2. Over-provisioning. Are you using more than you need with oversized resources?
    • 55% of all public cloud resources are not correctly sized for their workloads.
  3. Inventory waste.
    • 15% of spend goes to resources that are no longer used.

Read more about cloud waste in this recent blog post.
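
As a rough illustration of the first point above, here is a back-of-the-envelope calculation of what parking a single non-production instance can save. The $0.10/hour rate is an assumed example price, not a quote for any particular instance type.

```python
# Monthly cost of one instance, always-on vs. parked to a weekday schedule.
hourly_rate = 0.10                # assumed USD per hour, for illustration only
always_on_hours = 730             # ~hours in a month
parked_hours = 12 * 5 * 4.33      # 12 h/day, weekdays only ~= 260 h/month

always_on_cost = hourly_rate * always_on_hours   # ~= $73.00
parked_cost = hourly_rate * parked_hours         # ~= $26.00
savings = 1 - parked_cost / always_on_cost

print(f"Parking saves roughly {savings:.0%} on this instance")  # ~64%
```

Multiply that across the 44% of workloads that are non-production and the waste adds up quickly.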

ParkMyCloud Automates Continuous Cost Control

Not only does ParkMyCloud cut your cloud costs and eliminate waste, we make it easy through automation. Some of the ways we automate the process include:

  • Visibility and control across multiple clouds (AWS, Azure, and Google), accounts, and regions in a single UI
  • User governance – RBAC and SSO for multi-tenant user control and enterprise security
  • DevOps Integration – Policy engine, REST API, and Slack integration to automate continued cost control
  • Actionable cost control – policy driven, automated cost control for compute and database resources

ParkMyCloud Integrates Cost Control into CloudOps

ParkMyCloud also integrates with DevOps tools and into various DevOps tool kits, including:

    • Single sign-on – including Okta, Ping, ADFS, Centrify, and more; as organizations go deeper into identity management, SSO becomes a requirement for quite a few platforms.
    • DevOps & CI/CD tools – such as Chef, Puppet, and Atlassian Bamboo
    • Chat & notification – notifications through chat services like Slack and HipChat, or via email
    • IT Service Management – integrates with ITSM tools to provide a one-stop shop for cost and savings information
    • Monitoring & Logging – pushing to monitoring tools like Splunk or Datadog, and also reading information from those tools

Connotate: A ParkMyCloud & CloudHealth Success Story

Connotate is a provider of web data extraction solutions. They make the internet a database for customers to use and ingest, harvesting data for things like price comparisons and financial analysis. Connotate differentiates from competitors because of its ease of use; anyone can easily go in and highlight a web page in a browser to extract data, and they’re able to extract from both static and dynamic web content.

Connotate had a legacy deployment in AWS along with a ton of data centers all over the U.S. Last year they decided to shut down all those data centers and go from a small AWS footprint to moving everything to Amazon. With thousands of VMs and physical servers to migrate to the cloud, Connotate quickly realized that planning the migration themselves would take extensive work.

Connotate needed something to smooth their migration to the cloud – a tool that would give them the financial visibility and predictability to model their cloud costs. They used CloudHealth's migration assessment to run analysis against all their data center workloads, which suggested what AWS instance to use and predicted the cost to run it, giving them transparency into the migration before it actually happened.

Migrating to the cloud with confidence

A few months after moving their infrastructure to the cloud, Connotate found that CloudHealth’s predicted numbers for cloud costs were within $400 of their actual spend after the migration. After seeing the results, they continued doing all of their Amazon cloud monitoring through CloudHealth.  

Saving money automatically in AWS

To help control cloud costs, Connotate turned to ParkMyCloud. One of their business models involves data harvesting for customers that don't want to do it themselves, and to do this they use hundreds of virtual machines in Amazon to retrieve data and send it back to customers. Essentially, they were spinning up servers, harvesting the data, and then spinning them back down. Virtual machines were also being used by the sales team and other non-technical people to run demos. Those servers were left running 24/7, resulting in waste for the organization. Connotate needed a tool for scheduling servers that was user-friendly enough for non-technical people, but capable enough to also be used efficiently by the organization's DevOps team.

ParkMyCloud checked all the boxes they needed for turning off servers after they were spun up, and the simple web UI helped the non-technical people know how and when to use it – and not be afraid to use it. ParkMyCloud is now a big part of Connotate's cost control practice, and one of the major methods they use today for containing cloud sprawl.

ParkMyCloud & CloudHealth: Better Together for Efficient Cloud Management

Now that Connotate has fully migrated to Amazon, they regularly use CloudHealth's rightsizing reports to understand how to better utilize their servers, and ParkMyCloud to park unnecessary test servers that shouldn't be left on 24/7. Using both tools in tandem results in more efficiency, cost control, and transparency into the entire cloud and on-premise environment – a win for all!

A technical integration is in the works, which will encompass:

  • Importing cost and savings data from ParkMyCloud into CloudHealth, so you can see what you’ve been doing in ParkMyCloud within CloudHealth
  • Using CloudHealth recommendations to trigger parking actions in ParkMyCloud
  • And more, based on customer feedback

With ParkMyCloud and CloudHealth together, users can maximize hybrid cloud governance, cost visibility, cost savings, and make their CFOs happy. For a live demo of ParkMyCloud and more information about the integration – watch the entire webinar.   

Microsoft’s Start/Stop VM Solution vs. ParkMyCloud

Microsoft recently released a preview of their Start/Stop VM solution in the Azure Marketplace. Users of Azure took notice and started looking into it, only to find that it was lacking some key functionality that they required for their business. Let’s take a look at what this Start/Stop tool offers and what it lacks, then compare it to ParkMyCloud’s comprehensive offering.

Azure Start/Stop VM Solution

The crux of this solution is the use of a few Azure services: Automation and Log Analytics to schedule the VMs, and SendGrid to email you when a system is shut down or started. This use of native tools within Azure can be useful if you're already baked into the Azure ecosystem, but it can get in the way of exploring other cloud options.

This solution does cost money, but it's not very easy to estimate how much (does that surprise you?). The total cost is based on the underlying services (Automation, Log Analytics, and SendGrid), which means it could be very cheap or very expensive depending on what else you use and how often you're scheduling resources. The schedules can be time-based, but only with a single start and stop time. The page claims schedules can be based on utilization, but in the initial setup there is no place to configure that. The solution also needs to run for 4 hours before it can show you any log or monitoring information.

The interface for setting up schedules and automation is not very user-friendly. It requires creating automation scripts that handle either stopping or starting (not both), each with only one time attached. To create new schedules, you have to create new scripts, which makes the interface confusing for those who aren't used to the Azure portal. At the end of the setup, you'll have at least a dozen new objects in your Azure subscription, and that number only grows if you have any significant number of VMs.
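
For context, the underlying operation those runbooks perform is essentially a deallocate or start call against each VM. Here is a minimal sketch of that call using recent versions of the Azure Python SDK (the azure-identity and azure-mgmt-compute packages); the subscription ID, resource group, and VM name are placeholders.

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.compute import ComputeManagementClient

SUBSCRIPTION_ID = "<subscription-id>"   # placeholder
RESOURCE_GROUP = "<resource-group>"     # placeholder
VM_NAME = "<vm-name>"                   # placeholder

client = ComputeManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

# Deallocate stops the VM and releases its compute hardware, so you stop
# paying for the instance (a plain power-off keeps the billing meter running).
client.virtual_machines.begin_deallocate(RESOURCE_GROUP, VM_NAME).result()

# Later, to bring the VM back up:
client.virtual_machines.begin_start(RESOURCE_GROUP, VM_NAME).result()
```

The call itself is the easy part; the scheduling, overrides, user governance, and multi-cloud visibility are what a tool layered on top has to provide, which is where the comparison below comes in.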

How it stacks up to ParkMyCloud

So if the Start/Stop VM Solution from Microsoft can start and stop VMs, what more do you need? Well, we at ParkMyCloud have heard from customers (ranging from day-1 startups to Fortune 100 companies) that there are features a cloud cost optimization tool needs if it is going to see widespread adoption. Here are some of the features ParkMyCloud has that are missing from the Microsoft tool:

  • Single Pane of Glass – ParkMyCloud can work with multiple clouds, multiple accounts within each cloud, and multiple regions within each account, all in one easy-to-use interface.
  • Easy to change or override schedules – Users can change schedules or temporarily “snooze” them through the UI, our API, our Slackbot, or through our iOS app.
  • User Management – Admins can delegate access to users and assign Team Leads to manage sub-groups within the organization, providing user governance over schedules and VMs.
  • No Azure-specific knowledge needed – Users don’t need to know details about setting up Automation Scripts or Log Analytics to get their servers up and running. Many ParkMyCloud administrators provide access to users throughout their organizations via the ParkMyCloud RBAC. This is useful for users who may need to, say, start and stop a demo environment on demand, but who do not have the knowledge necessary to do this through the Azure console.
  • Enterprise features – Single sign-on, savings reports, notifications straight to your email or chat group, and full support access help your large organization save money quickly.

As you can tell, the Start/Stop VM solution from Microsoft can be useful for very specific cases, but most customers will find it lacking the features they really need to make cloud cost savings a priority. ParkMyCloud offers these features at a low cost, so try out the free trial now to see how quickly you can cut your Azure cloud bill.

AWS Neptune Preview – Amazon’s Graph Database Service

At the AWS DC Meetup we organized last week, we got a preview of AWS Neptune, Amazon’s new managed graph database service. It was announced at AWS re:Invent 2017, is currently in preview and will launch for general availability this summer.

What is a graph database?

A graph database is a database optimized to store and process highly connected data – in short, it's about relationships. The data structure consists of vertices and directed links between them called edges.

Use cases for such highly-connected data include social networking, restaurant recommendations, retail fraud detection, knowledge graphs, life sciences, and network & IT ops. For a restaurant recommendations use case, for example, you may be interested in the relationships between various users, where those users live, what types of restaurants those users like, where the restaurants are located, what sort of cuisine they serve, and more. With a graph database, you can use the relationships between these data points to provide contextual restaurant recommendations to users.

Tired of SQL?

If you’re tired of SQL, AWS Neptune may be for you. A graph database is fundamentally different from SQL. There are no tables, columns, or rows – it feels like a NoSQL database. There are only two data types: vertices and edges, both of which have properties stored as key-value pairs.
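
To make the data model concrete, here is a plain-Python sketch of the restaurant example from earlier, showing vertices and edges as identifiers plus key-value properties. This only illustrates the shape of the data; Neptune stores, indexes, and traverses it for you.

```python
# Vertices: each has an id, a label, and arbitrary key-value properties.
vertices = {
    "u1": {"label": "person", "name": "Alice", "city": "Washington"},
    "r1": {"label": "restaurant", "name": "Luigi's", "cuisine": "Italian"},
}

# Edges: each connects two vertices, has a label, and can carry its own properties.
edges = [
    {"label": "likes", "from": "u1", "to": "r1", "rating": 5},
]
```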

AWS Neptune is fully managed, which means that database management tasks like hardware provisioning, software patching, setup, configuration, and backups are taken care of for you.

It's also highly available, spanning multiple availability zones. In its architecture and availability it is very similar to Aurora, Amazon's relational database.

Neptune supports Property Graph and W3C’s RDF. You can use these to build your own web of data sets that you care about, and build networks across the data sets in the way that makes sense for your data, not with arbitrary presets. You can do this using the graph models’ query languages: Apache TinkerPop Gremlin and SPARQL.
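
Here is a minimal sketch of working with Neptune's Gremlin endpoint from Python using the gremlinpython package. The endpoint is a placeholder for your own Neptune cluster endpoint, and in practice you would also need the appropriate VPC and authentication setup.

```python
from gremlin_python.driver.driver_remote_connection import DriverRemoteConnection
from gremlin_python.process.anonymous_traversal import traversal
from gremlin_python.process.graph_traversal import __

# Placeholder endpoint -- replace with your Neptune cluster endpoint.
conn = DriverRemoteConnection("wss://<your-neptune-endpoint>:8182/gremlin", "g")
g = traversal().withRemote(conn)

# Add a person, a restaurant, and a "likes" edge between them.
alice = g.addV("person").property("name", "Alice").next()
luigis = (
    g.addV("restaurant")
     .property("name", "Luigi's")
     .property("cuisine", "Italian")
     .next()
)
g.V(alice).addE("likes").to(__.V(luigis)).property("rating", 5).iterate()

# Which restaurants does Alice like?
liked = g.V().has("person", "name", "Alice").out("likes").values("name").toList()
print(liked)  # ["Luigi's"]

conn.close()
```

If you prefer the RDF model instead, the same kind of data can be stored as triples and queried with SPARQL.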

There is no cost to use Neptune during the preview period. Once it's generally available, pricing will be based on On-Demand EC2 instances – which means ParkMyCloud will be looking into ways to help Neptune users with cost control.

If you’re interested in the new service, you can check out more about AWS Neptune and sign up for the preview.
