7 Favorite AWS Training Resources

When it comes to AWS training resources, there’s no shortage of information out there. With such a wide range of videos, tutorials, blogs, and more, it’s hard to know where to look or how to begin. The best resource for you depends on your learning style, what you need from AWS, and how current the information is. Whether you’re just getting started in AWS or consider yourself an expert, there’s an abundance of resources for every learning level. With this in mind, we came up with our 7 favorite AWS training resources, sure to give you the tools you need to learn AWS:

1. AWS Self-Paced Labs

What better way to learn than at your own pace? AWS self-paced labs give you hands-on learning in a live AWS environment, with real AWS cloud services and scenarios you would actually encounter in the cloud. There are two different ways to learn with these labs: you can either take an individual lab or follow a learning quest. Individual labs are intended to help users get familiar with an AWS service in as little as 15 minutes. Learning quests guide you through a series of labs so you can master any AWS scenario at your own pace. Once completed, you will earn a badge that you can showcase on your resume, LinkedIn, website, etc.

Whatever your experience level may be, there are plenty of different options offered. Among the recommended labs you’ll find an Introduction to Amazon Elastic Compute Cloud (EC2), and for more advanced users, a lab on Maintaining High Availability with Auto Scaling (for Linux).

2. AWS Free Tier

Sometimes the best way to learn something is by jumping right in. With the AWS Free Tier, you can try AWS services for free. This is a great way to test out AWS for your business or, for the developers out there, to try services like AWS CodePipeline, AWS Data Pipeline, and more. You still get a hands-on opportunity to learn a number of AWS services; the only downside is that there are certain usage limits. You can track your usage with a billing alarm to avoid unwanted charges, or you can try ParkMyCloud and park your instances when they’re not in use so you get the most out of your free tier experience. In fact, ParkMyCloud started its journey by using AWS’s free tier!
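For illustration, here’s a minimal sketch of that billing-alarm idea using boto3. The SNS topic ARN and the $10 threshold are placeholders, and billing metrics are only published in us-east-1 after you enable billing alerts in your account:

```python
import boto3

# Billing metrics only exist in us-east-1, and "Receive Billing Alerts"
# must be enabled in the account's billing preferences first.
cloudwatch = boto3.client("cloudwatch", region_name="us-east-1")

cloudwatch.put_metric_alarm(
    AlarmName="free-tier-spend-alert",
    AlarmDescription="Alert when estimated charges exceed $10",
    Namespace="AWS/Billing",
    MetricName="EstimatedCharges",
    Dimensions=[{"Name": "Currency", "Value": "USD"}],
    Statistic="Maximum",
    Period=21600,              # billing data updates a few times a day
    EvaluationPeriods=1,
    Threshold=10.0,            # placeholder dollar amount
    ComparisonOperator="GreaterThanThreshold",
    AlarmActions=["arn:aws:sns:us-east-1:123456789012:billing-alerts"],  # placeholder SNS topic
)
```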

3. AWS Documentation and Whitepapers

AWS Documentation is like a virtual encyclopedia of tools, terms, training, and everything AWS. You’ll find case studies, tutorials, cloud computing basics, and so much more. This resource is a one-stop-shop for all of your AWS documentation needs, whether you’re a beginner or advanced user. No matter where you are in your AWS training journey, AWS documentation is always a useful reference and certainly deserves a spot in your bookmarks. 

Additionally, you’ll find whitepapers with technical AWS content written by AWS and members of the AWS community to help further your knowledge of the cloud. These whitepapers include technical guides, reference material, and architecture diagrams.

4. YouTube

So far, we’ve gone straight to the source for 3 out of 7 of our favorite AWS training resources. Amazon really does a great job of providing hands-on training, tutorials, and documentation for users with a range of experience. However, YouTube opens up a whole new world of video training that includes contributions from not only Amazon, but other great resources as well. Besides the obvious Amazon Web Services channel, there are also popular and highly rated videos by Edureka, Simplilearn, Eli the Computer Guy, and more.

5. Bloggers

As cloud technology usage continues to expand and evolve, blogs are a great way to stay up to speed with AWS and the world of cloud computing. Of course, in addition to AWS labs, a free tier, extensive documentation, and their own YouTube channel, AWS also has their own blog. Since AWS actually has a number of blogs that vary by region and technology, we recommend that you start by following Jeff Barr – Chief Evangelist at Amazon Web Services and primary contributor. Edureka, mentioned among our recommended YouTube channels, also has a blog that covers plenty of AWS topics. The CloudThat blog is an excellent resource for AWS and all things cloud, and was co-founded by Bhavesh Goswami – a former member of the AWS product development team. Additionally, AWS Insider is a great source for all things AWS. Here you’ll find blogs, webcasts, how-tos, tips, tricks, news articles, and even more hands-on guidance for working with AWS. If you prefer newsletters straight to your inbox, check out Last Week in AWS and Inside Cloud.

6. Online Learning Platforms

As public cloud computing continues to grow – and AWS continues to dominate the market – people have become increasingly interested in this cloud service provider and what it has to offer. Over the last decade, two massive learning platforms have emerged: Coursera and Udemy. These platforms offer online AWS courses, specializations, training, and degrees. The abundance of courses that these platforms provide can help you learn all things AWS and give you a wide array of resources to help you train for different AWS certifications and degrees.

7. GitHub

GitHub is a developer platform where users host and review code, build software, and manage projects together. The platform also offers a number of materials that can help further your AWS training. In fact, here’s a great list of AWS training resources that can help you prepare for an Amazon cloud certification. The great thing about this site is the collaboration among its users: the community brings together people from all different backgrounds, each able to share knowledge about their own specialties and experiences. With access to everything from ebooks, video courses, free lectures, and sample tests, posts like these can help you get on the right certification track.


There’s plenty of information out there when it comes to AWS training resources. We picked our 7 favorite resources for their reliability, quality, and range of information. Whether you’re new to AWS or consider yourself an expert, these resources are sure to help you find what you’re looking for.

Will Robotic Process Automation Save Your Company Time and Money in the Cloud?

After hearing a lot of buzz about this concept in AI, we decided to see what’s next for robotic process automation. The promise of the technology is that it can automate processes that employees are doing manually, saving your employees’ time and potentially reducing operational costs. While interest in robotic process automation (RPA) has been high for a while, actual adoption is now catching up and will only continue to grow. As organizations come to understand the power of process automation, more industries are expected to deploy RPA bots to eliminate the manual, repetitive actions performed by employees.

RPA software is en route to becoming a billion-dollar category in 2020. Last year, Gartner projected that spending on RPA software would hit $1.3 billion. There are still some growing pains to address, and RPA is not exactly 100 percent perfect, but it fits right in with the current trends in cloud computing toward optimization. And since we’re all about saving time and money, let’s take a closer look at this trend to see how it can help you do both.

What is Robotic Process Automation?

To recap: RPA, whether it’s called “intelligent automation” or “cognitive automation” in the future, is a way to automate business processes by creating software robots, paired with artificial intelligence (AI) and machine learning capabilities, to perform manual and mundane work tasks. Users configure bots within an application to handle a variety of repetitive tasks by processing, generating, and communicating information automatically. For example, you might program RPA bots to do first-level customer support tasks by searching for answers; copy and paste data from one system to another for invoicing or expense management; or issue refunds. This video from IBM shows an example in action.
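To make that concrete, here’s a minimal, hypothetical sketch of the kind of copy-and-paste bot described above: a script that reads invoice rows exported from one system and posts them to another. The CSV columns and the expenses.example.com endpoint are assumptions for illustration, not any particular RPA product’s API:

```python
import csv
import requests

# Hypothetical endpoint of the target expense-management system.
EXPENSE_API = "https://expenses.example.com/api/invoices"

def copy_invoices(csv_path: str) -> None:
    """Read invoice rows exported from one system and post them to another,
    replacing a manual copy-and-paste workflow."""
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            payload = {
                "invoice_id": row["invoice_id"],
                "amount": float(row["amount"]),
                "vendor": row["vendor"],
            }
            resp = requests.post(EXPENSE_API, json=payload, timeout=10)
            resp.raise_for_status()  # stop if the target system rejects a record

if __name__ == "__main__":
    copy_invoices("exported_invoices.csv")
```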

RPA software is not part of an organization’s IT infrastructure. Instead, it sits on top of it, enabling a company to implement the technology quickly and efficiently. Furthermore, RPA tools can be trained to make judgments about future outputs. Many users appreciate its non-intrusive nature and the ability to integrate within infrastructures without causing disruption to systems already in place.

How can you use Robotic Process Automation?

RPA technology can help organizations on their digital transformation journeys by:

  • Enabling better customer service.
  • Ensuring business operations and processes comply with regulations and standards.
  • Allowing processes to be completed much more rapidly.
  • Providing improved efficiency by digitizing and auditing process data.
  • Creating cost savings for manual and repetitive tasks.
  • Enabling employees to be more productive.

Companies like Walmart, AT&T, and Walgreens are adopting the use of RPA. Clay Johnson, the CIO of Walmart, says they use RPA bots to automate pretty much anything from answering employee questions to retrieving useful information from audit documents. David Thompson, the CIO of American Express Global Business Travel, says they use RPA to automate the process of canceling an airline ticket and issuing refunds. In addition, Thompson is looking to use RPA to facilitate automatic rebooking recommendations and to automate certain expense management tasks in the company.

But more specific to cloud computing and IT, one great application for RPA is automated software testing. If testing involves multiple applications and monotonous work, RPA can replace the time workers spend testing. Automated tests can run repeatedly at any time of day. This approach fits in with continuous testing as well as continuous integration (CI) and continuous delivery (CD) software development practices. Additionally, RPA can be used to automate processes in monolithic legacy systems that are not worth developers’ time to update, bringing automation to those systems while work on newer microservices systems is in progress.
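As a small illustration of the automated-testing idea, here’s a sketch of a repeatable check that could run on every CI build. The refund rule itself is a toy example, not any particular company’s logic:

```python
# test_refund_flow.py -- a repetitive check that might otherwise be run by hand.
import pytest

def calculate_refund(ticket_price, cancellation_fee):
    """Toy business rule used only for illustration."""
    return max(ticket_price - cancellation_fee, 0)

@pytest.mark.parametrize("price,fee,expected", [
    (300.0, 50.0, 250.0),
    (40.0, 50.0, 0.0),   # fee larger than price: never refund a negative amount
    (0.0, 0.0, 0.0),
])
def test_calculate_refund(price, fee, expected):
    assert calculate_refund(price, fee) == expected
```

Running `pytest` on every commit is what lets a check like this repeat at any time of day without a person re-doing the work.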

Is Robotic Process Automation the Best Way to Automate Cost Control?

Not all automation is achievable with RPA, however. One study concluded that only three percent of organizations have managed to scale RPA to a high level. Additionally, Gartner placed RPA tools at the “Peak of Inflated Expectations” in its Hype Cycle guide for artificial intelligence – another vote for more buzz than potential. In reality, RPA is only as efficient as the person configuring the automation flow, and organizations with overly idealized expectations of the technology’s capabilities – or without a solid grasp of their own processes – may find it difficult to choose the right tool and automate the right jobs.

However, RPA is expected to deliver tangible results to organizations that make automation a key component of their digital transformation, as the collaboration between digital workers and human talent becomes more efficiently aligned in the future.

So can it save you time and money? If employees at your company are spending a large percentage of their time on repetitive tasks that require little to no decision making, then yes, it probably can. It’s also important because it will free up developer time that is spent on automatable tasks, like scripting, so they can focus on creating value for your business. 

For complex and long-term automation, though, purpose-built software is a better solution. If there is already a solution to your automation needs on the market, it will probably serve you better than RPA because there won’t be an upfront period needed to program bots, you won’t need to make frequent changes to your processes like many RPA bots will require, and it’s a better solution for the long run. 

How Cloud has affected the Centralization vs Decentralization of IT

Every week, we find ourselves having a conversation about cost optimization with a wide variety of enterprises. In larger companies, we often talk to folks in the business unit that most people traditionally refer to as Information Technology (IT). These meetings usually include discussions about the centralization vs. decentralization of IT – oftentimes without the participants realizing it – as we discuss cloud and how it’s built, run, and managed in the organization.

Enterprises traditionally organized their IT team as a single department under the leadership of the CIO. The IT team works across organizational departments and supports the enterprise in meeting the various tooling and project needs requested by other business units or the executive team. Although there are significant efficiencies from this type of approach, there are some risks that can affect the entire organization – in particular, one that seems to stem from the ‘need for speed’ (agility). Each line of business (LOB) depends on IT to deliver services, hardware, software, and other ‘tools’, but this is not always done quickly and efficiently, mostly due to internal processes.

Benefits of Centralized IT Structures

The benefits of this type of organizational structure are often associated with increased purchasing power, improved information flow between IT team members, skilled hiring efficiencies, and a watchful view of the enterprise’s technical infrastructure from both an operational network and security perspective. Let’s dig into these in a bit more detail.

  • Lowered expenses and increased purchasing power – the centralized environment will always provide a business with more buying power at a lower cost by combining all of the needs of the business into a centralized buying pool.
  • Improved productivity for IT staff – IT teams are like any other team, they thrive with collaboration and mutual understanding and respect for each other’s skillsets. It also makes installations and technical resolution(s) easier as you’re addressing a centralized resource.
  • Enterprise-wide information dissemination – the centralized organization will build its network from the center out, and LOBs will typically share the same networked resources, such as an ERP or CRM. This avoids the danger of siloing information that could be critical to another LOB – without access, there’s no visibility into the information that is available.

Despite the benefits stated above, a centralized team has several limitations and challenges – the one with the greatest enterprise-wide exposure is how best to prioritize project requests from each of the LOBs. Enter decentralization and cloud – IaaS, PaaS, and SaaS.

Decentralization is a type of organizational structure in which daily operations and decision-making responsibilities are delegated by top management to middle and lower-level managers and their respective business units. This frees up top management to focus more on major decisions. For a small business, growth may create the need to decentralize to continue efficient operations. Decentralization offers several advantages and is a practical approach when different departments or business units in a company have different IT needs and strategies.

Benefits of Decentralized IT Structures

  • The ability to tailor IT selection and configuration. When individual departments have IT decision-making power, they can choose and configure IT resources based on their own specific needs. For example, each department has its own servers optimized to run its required applications.
  • More fail-safes and organizational redundancy. Decentralizing makes servers and applications more resilient—and it can do the same for IT networks, too. If each department maintains its own server, one can function as a backup server in case another server fails. (Of course, this type of redundancy would need to be properly configured in advance.)
  • Faster response to new IT trends. Since departments in decentralized organizations can make independent decisions, it’s easier for them to take advantage of new technology in the cloud.

One drawback of decentralized IT structures is that this model often leads to information silos – collections of data and information that cannot be easily shared across departments. Centralized IT structures help prevent these silos, leading to better knowledge-sharing and cooperation between departments. For example, using one centrally managed CRM system makes it possible for any employee in a company to access customer information from anywhere – think Salesforce.

The Reality is Hybrid IT

As we see above and in real life, there are many reasons an organization might be tempted to move toward or away from a centralized IT organizational structure, but in practice many companies use a hybrid model – some IT systems, like your CRM and ChatOps, are centralized, while others, like your cloud provider and orchestration tool, may be decentralized (by business unit). The top reasons for this hybrid model are technical agility and the availability of tools through SaaS, IaaS, and PaaS providers – IT no longer needs to build every solution and tool for you. Decentralized IT organizational structures are typically best for companies that rely on technical agility to remain competitive. These include newer, smaller companies (e.g., startups) and organizations that need to respond quickly to new IT developments (e.g., software and hardware companies or app developers). And for larger companies that want to bring that mentality and model to their business, Capital One – a bank that wants to be a technology company – is a great example.

What are your thoughts on the centralization vs decentralization of IT?

The 5 Components of Azure DevOps

In the search to accelerate and simplify the DevOps process, we take a look at Microsoft’s Azure DevOps, a hosted service providing development and collaboration tools, formerly known as Visual Studio Team Services (VSTS). Last year, Microsoft split VSTS into five separate Azure-branded services under the Azure DevOps banner – a complete offering in the public cloud that makes it easier for developers to adopt portions of the platform without requiring them to go “all in” as the former VSTS did.

Azure DevOps supports both public and private cloud configurations – the services include:

  • Azure Boards – A work tracking system with Kanban boards, dashboards, and reporting
  • Azure Pipelines – A CI/CD, testing, and deployment system that can connect to any Git repository
  • Azure Repos – A cloud-hosted private Git repository service
  • Azure Test Plans – A solution for tests and capturing data about defects
  • Azure Artifacts – A hosting facility for Maven, npm, and NuGet packages

Each of these Azure DevOps services is open and extensible and can be used with all varieties of applications, regardless of framework, platform, or cloud. Built-in cloud-hosted agents are provided for Windows, macOS, and Linux, and workflows support native containers and Kubernetes deployment options, virtual machines, and serverless environments.

Used together, all five services form an integrated suite that provides end-to-end DevOps functionality. But since they are broken up into separate components, Azure DevOps gives users the flexibility to pick only the services they need, without having to use the full suite. For example, because Kubernetes has a standard interface and runs the same way on all cloud providers, Azure Pipelines can be used for deploying to Azure Kubernetes Service (AKS), Google Kubernetes Engine (GKE), Amazon Elastic Kubernetes Service (EKS), or clusters from any other cloud provider without requiring the use of any of the other Azure DevOps components.

Embracing Azure DevOps

One of the main benefits for teams using Azure DevOps is that developers can work securely from anywhere, in any format, and embrace open-source technology. Azure DevOps addresses the vendor lock-in concerns of its earlier incarnation by providing extensive integration with industry and community tools.

With the many integrations available, users can log in using SSO tools like Azure AD or communicate with their team via Slack integration while accessing both cloud and on-premises resources.

Azure Pipelines offers free CI/CD with unlimited minutes and 10 parallel jobs for every open-source project, and many of the top open-source projects – such as Atom, CPython, Pipenv, Tox, Visual Studio Code, and TypeScript – already use Azure Pipelines for CI/CD.

Benefits of Azure DevOps 

Azure DevOps use cases include:

  • Planning – Azure DevOps makes it easy for DevOps teams to manage their work with full visibility across products and projects, helping them keep development efforts transparent and on schedule. Teams can define, track, and lay out work with Kanban boards, backlogs, custom dashboards, and reporting capabilities using Azure Boards.
  • Developing – Allows teams to share code and collaborate with Visual Studio and Visual Studio Code. Users can create automatic workflows for automated testing and continuous integration in the cloud with Azure Pipelines.
  • Delivery – Helps teams deploy applications to any Azure service automatically and with full control. Users can define and spin up multiple cloud environments with Azure Resource Manager or HashiCorp Terraform, and then create continuous delivery pipelines into these environments using Azure Pipelines or tools such as Jenkins and Spinnaker.
  • Operations – With Azure Monitor, users can implement full-stack monitoring, get actionable alerts, and gain insights from logs and telemetry.

Pricing

As for Azure DevOps pricing, plenty of open-source tools can be combined to deliver the functionality that Azure DevOps promises to provide, but the Basic plan is free for open-source projects and for small teams of up to five users. For larger teams, the cost ranges from $30 per month for 10 users to $90 per month for 20 users, and so on.
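To make the math explicit, here’s a tiny sketch of that pricing model, assuming the per-user rate implied by the figures above (the first five users free, then roughly $6 per additional user per month):

```python
def basic_plan_monthly_cost(users, free_users=5, price_per_user=6.0):
    """Estimate the monthly cost of the Azure DevOps Basic plan.

    Assumes the first five users are free and each additional user is billed
    at roughly $6/month, consistent with the figures quoted above.
    """
    return max(users - free_users, 0) * price_per_user

print(basic_plan_monthly_cost(10))  # 30.0
print(basic_plan_monthly_cost(20))  # 90.0
```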

In summary, Azure DevOps is an all-in-one project tracking and planning tool combined with developer and DevOps tools for writing, building, and deploying code, and it’s relatively quick and easy to use. Maintenance costs are reduced, since developers only need an active subscription to have constant access to the latest version. Keep in mind, though, that Azure DevOps will indirectly utilize Azure Storage and compute services, which will increase usage and impact costs.

Why Use One Cloud, When You Can Use Any Cloud?

No, seriously, why would we just use one cloud?

Let’s stop for a moment and think about what has happened over the course of the last few years in public cloud computing and the hypervisor wars on-premises.  VMware has largely dominated the data center, but we are seeing a strong push from Microsoft on the hypervisor front.  KVM and Xen continue to grow in popularity for certain sectors, and all across the spectrum we see lots of folks running more than one hypervisor.

The cloud is no different.  The reason that we are all seeking the “AWS killer” just like the elusive “iPhone killer” is that there is some bizarre need to locate a winner of the platform war. 

This isn’t a zero-sum game.  The real shift in our industry is the broad acceptance of multiple platforms inside every IT portfolio.  We jumped right past the cloud to the multi-cloud.

Why Run More Than One Cloud?

Technology is not the problem; it’s the solution.  Business challenges are being answered by technology, which is what really matters.  So, why would we run more than one cloud?  The reason is usually a technological one.  Certain features, APIs, and architectures may be better supported on one platform than another.  There are raw economics involved as well.  There are overall availability concerns which drive businesses to disperse their IT across multiple data centers, so why not do the same in the cloud?

The reason that AWS and OpenStack are often pitted against each other is that there are capabilities to enable AWS API access within the OpenStack platform.  This is something that Randy Bias and many in the community fought for over the last few years.  It becomes important because AWS adoption is huge, and being able to take the same workloads and move them to OpenStack using the same API calls and interactions would be a massive win for OpenStack as a platform.

If we stick to strictly public cloud providers, we can start with what we would call the big three:  AWS, Microsoft Azure, Google Cloud Platform.  Among those three, we see a lot of parrying as we see features and pricing updates happening regularly.  Features more so than pricing lately. That results in an ever-growing set of services that can be easily consumed.  As we see common orchestration and operational platforms like Mesos, Kubernetes, and the like gaining in popularity, it gives even more credence to the commoditization of cloud.  (Author’s opinion note:  The supposed “race to zero” for cloud costs is over.  They have all agreed that pricing isn’t where they win the customers any more)

Reducing the Complexity of Multi-Cloud

Complexity is the one thing that will slow the multi-cloud adoption a bit longer.  There are clearly different ways to consume resources, and to programmatically create and destroy resources in the public cloud platforms.   Especially when you go outside of the big three.  That means consumers of the public cloud will have to start with one target and generally work up to a deep comfort there before moving to embrace a multi-cloud strategy.

Once we remove or reduce complexity from the list of barriers, that opens up the door for embracing the economic value of a multi-cloud strategy.  This is where we can embrace spot pricing and on-demand growth to tackle scaling needs, while making the workload truly portable and making sure that price becomes the real win.  Networking stacks across the clouds are rather different for a reason.  If every car manufacturer used the same exact parts, they would lower the chances of you coming back to them for up-sell opportunities.  The same goes for the cloud.  Networking and security (they should always be paired) will most likely be the greatest challenge that technologists face in architecting their multi-cloud solutions.

Next-generation applications are being built as cloud-native where possible.  This opens up the door for what has been talked about for years:  supposed freedom from vendor lock-in.  I’m always rather skeptical when a representative from one cloud company says “come to us and avoid vendor lock-in,” because every vendor, even a public cloud one, has lock-in.

What we do gain by embracing the cloud-native approach to application development and deployment is that we reduce the risk of lock-in.

The more we learn from the forward-leaning development teams, the more we are able to give ourselves agility in a multi-cloud architecture.  As all of the public cloud pundits who represent one faction or another argue over who will be the last one to be all-in on the public cloud running cloud-native applications, they forget one thing:  they have opened the door for their competition too.

How to Manage Hybrid & Multi-Cloud Environments with Google Cloud Composer

As we continue to evaluate ways to automate various aspects of software development, today we’ll take a look at Google Cloud Composer. This is a fully managed workflow orchestration service built on Apache Airflow that makes workflow management and creation simple and consistent.

Hybrid and multi-cloud environments continue to grow as enterprises want to take advantage of the cloud’s scalability, flexibility, and global reach. Of the three major providers, Google Cloud has been the most open to supporting this multi-cloud reality. For example, earlier this year Google launched Anthos, a new managed service offering for hybrid and multi-cloud environments that gives enterprises operational consistency by running on existing hardware, leveraging open APIs, and giving developers the freedom to modernize. But managing these environments can either be an invaluable proposition for your company or a serious challenge to your infrastructure – which brings us to Google’s solution, Cloud Composer.

How does Google Cloud Composer work?

With Cloud Composer, you can monitor, schedule and manage workflows across your hybrid and multi-cloud environment. Here is how:

  • As part of Google Cloud Platform (GCP), Cloud Composer integrates with tools like BigQuery, Dataflow, Dataproc, Datastore, Cloud Storage, Pub/Sub and Cloud ML Engine, giving users the ability to orchestrate end-to-end GCP workloads.
  • You can code directed acyclic graphs (DAGs) using Python to improve workflow readability and pinpoint areas in need of assistance.
  • It has one-click deployment built-in to give you instant and easy access to a range of connectors and graphical representations that show your workflow in action.
  • Cloud Composer allows you to pull workflows together from wherever they live, supporting a fully-functioning and connected cloud environment.
  • Since Cloud Composer is built on Apache Airflow – an open-source technology – it provides freedom from vendor lock-in as well as integration with a wide variety of platforms.  

Simplifying hybrid and multi-cloud environment management

Cloud Composer is ideal for hybrid and multi-cloud management because it’s built on Apache Airflow and operated with the Python programming language. Its open-source foundation, “no lock-in” approach, and portability give users the flexibility to create and deploy workflows seamlessly across clouds for a unified data environment.

Setting up your environment is quick and simple. Pipelines created with Cloud Composer will be configured as DAGs with easy integration for any required Python libraries, giving users of almost any level the ability to create and schedule their own workflows. With the built-in one-click deployment, you get instant and easy access to a range of connectors and graphical representations that show your workflow in action.
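For a sense of what that looks like, here’s a minimal DAG sketch. The task logic is a placeholder, and the import paths follow the Airflow 1.10 releases that Cloud Composer runs; newer Airflow versions place the operators in slightly different modules:

```python
from datetime import datetime
from airflow import DAG
from airflow.operators.python_operator import PythonOperator

def extract():
    # Placeholder: pull data from a source system in one cloud.
    print("extracting...")

def load():
    # Placeholder: load the results into a warehouse such as BigQuery.
    print("loading...")

with DAG(
    dag_id="example_multi_cloud_sync",
    start_date=datetime(2019, 1, 1),
    schedule_interval="@daily",   # run once a day
    catchup=False,                # don't backfill past runs
) as dag:
    extract_task = PythonOperator(task_id="extract", python_callable=extract)
    load_task = PythonOperator(task_id="load", python_callable=load)

    extract_task >> load_task     # load runs only after extract succeeds
```

Dropping a file like this into the environment’s DAGs bucket is all it takes for Cloud Composer to pick up and schedule the workflow.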

However, costs can be a drawback to making the most of your cloud environment when using Cloud Composer. Specific costs for Cloud Composer can be hard to calculate, as Google measures the resources your deployments use and adds the total cost of your Apache Airflow deployments onto your wider GCP bill.

Cloud Composer Pricing 

Pricing for Cloud Composer is based on the size of a Cloud Composer environment and the duration the environment runs, so you pay for what you use, as measured by vCPU/hour, GB/month, and GB transferred/month. Google offers multiple pricing units for Cloud Composer because it uses several GCP products as building blocks. You can also use the Google Cloud Platform pricing calculator to estimate the cost of using Cloud Composer. 
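As a rough illustration of how those dimensions combine, here’s a tiny estimator sketch; the rate arguments are placeholders to fill in from the current GCP price list, not published prices:

```python
def estimate_composer_cost(vcpu_hours, storage_gb_months, network_gb,
                           vcpu_rate, storage_rate, network_rate):
    """Combine the billing dimensions named above into a monthly estimate.

    All *_rate arguments are placeholders -- look up current prices with
    the Google Cloud Platform pricing calculator.
    """
    return (vcpu_hours * vcpu_rate
            + storage_gb_months * storage_rate
            + network_gb * network_rate)

# Example: a small, always-on environment (rates are made up for illustration).
print(estimate_composer_cost(3 * 24 * 30, 20, 50,
                             vcpu_rate=0.075, storage_rate=0.27, network_rate=0.12))
```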

So, should you use Google Cloud Composer? Cloud Composer environments are meant to be long-running compute resources that are always online so that you can schedule repeating workflows whenever necessary. Unfortunately, since you can’t turn a Cloud Composer environment on and off – you can only create or destroy it – it may not be right for every environment and could cost more than the advantages are worth.