DevOps | Cloud Academy Blog https://cloudacademy.com/blog/category/devops/ Wed, 28 Sep 2022 15:06:54 +0000 Kubernetes vs. Docker: Differences and Use Cases https://cloudacademy.com/blog/kubernetes-vs-docker/ https://cloudacademy.com/blog/kubernetes-vs-docker/#respond Tue, 01 Mar 2022 01:01:00 +0000 https://cloudacademy.com/?p=48836 Do you wonder about the merits of Kubernetes vs. Docker? Let’s get into the details to talk about their differences and how they work together. Docker is about developing and shipping applications within containers – also known as Docker images – while Kubernetes is high-level orchestration to potentially billions of...

The post Kubernetes vs. Docker: Differences and Use Cases appeared first on Cloud Academy.

Do you wonder about the merits of Kubernetes vs. Docker? Let’s get into the details to talk about their differences and how they work together.

Docker is about developing and shipping applications within containers – also known as Docker images – while Kubernetes is high-level orchestration to potentially billions of these containers. They’re different but complementary and work well together in large-scale, complicated deployments.

As we discuss the differences and advantages of each platform, you’ll understand the big picture and be able to dig deeper with in-depth resources that give you both book knowledge and hands-on experience.

What are containers?

To better understand the merits of Kubernetes vs Docker, it’s helpful to take a step back and get comfortable with the concept of containers in application development and deployment. 

A container is a unit of software that bundles code and all dependencies together so that the application can run quickly and reliably in any computing environment.

Containers can be described as lightweight virtual machines. Virtual machines require you to virtualize the entire operating system and any software you wish to run. This makes VMs very resource-intensive.

Containers were introduced in the Linux operating system to address this problem. The idea is simple: if you already have a Linux OS running on your computer, why virtualize a full new OS for each isolated workload? Containers share the host’s core OS (the kernel), so each one carries only the software it actually needs.

Containers are helping teams of any size to resolve issues such as consistency, portability, security, and scalability. Container platforms such as Docker can be used to separate the application from the underlying infrastructure.

What is Docker?

Docker allows you to separate your application and the underlying infrastructure. It bundles up your code with all its dependencies into one self-contained entity which will run on any supported system.
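To make that concrete, here is a hypothetical Dockerfile for a small Python service (a minimal sketch; the file names and base image are placeholders, not from the article):

```dockerfile
# Hypothetical example: bundle a Python app and its dependencies
# into one self-contained image that runs on any supported system.
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]
```

Building this with `docker build -t myapp .` produces an image that behaves the same on a laptop, a CI runner, or a production server.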

Advantages and disadvantages of Docker

Besides being the most popular platform and the de facto standard for container images, the benefits of Docker are the benefits of containerization: 

Advantages of Docker

Docker containers are portable and scalable, and their isolation brings increased security. This may be a different setup than what you’re used to, which leads us to some of Docker’s cons.

Disadvantages of Docker

It’s true, containerization does have some disadvantages. Critics point out that containers aren’t as fast as running on bare metal, that the ecosystem in general is fragmented and a little all over the place, that persistent storage is challenging because containers are movable and modular, and that some applications (especially monolithic ones) perform badly in containers.

What is Kubernetes?

Kubernetes is the current standard in container orchestration systems.

Kubernetes makes it easy to manage and deploy containers at a large scale. Google developed it based on years of experience with containers in production. Kubernetes gives you a toolbox that automates scaling and operating containerized apps in production.
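As a minimal sketch of what that toolbox looks like in practice (the names and image below are placeholders), a Deployment manifest declares a desired state and Kubernetes works to keep it true:

```yaml
# Hypothetical Deployment: Kubernetes keeps three replicas of this
# container running, rescheduling them if a node or pod fails.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```

Scaling becomes a one-line change (or a `kubectl scale` command) rather than a manual provisioning exercise.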

Advantages and disadvantages of Kubernetes

The advantages of Kubernetes are everything we’ve reviewed above, including:

  • load balancing
  • automatic packaging
  • self-healing systems
  • powerful for CI/CD methodologies
  • sophisticated orchestration of complex deployments

Kubernetes does have some disadvantages, which are the flipside of all the pros listed above:

  • can be overkill for all but the largest deployments
  • can be slow to implement and have a learning curve
  • its sophistication brings added complexity to a project

What’s the difference between Kubernetes vs. Docker?

As referenced at the start of this post, the difference between Kubernetes and Docker is substantial. When people mention Docker, they’re usually referring to Docker Compose, which is used for creating individual containerized applications. Docker Compose has become the standard, so people just use the term “Docker” as shorthand.

Kubernetes is an orchestration system where you can control all your container resources from one single management plane. It was originally created by Google to monitor and control large numbers (potentially billions) of containers and is now open source.

Kubernetes vs. Docker Compose

When people ask “Kubernetes vs Docker,” they often really mean Kubernetes vs. Docker Compose, the core Docker product that allows you to create containerized applications. When weighing your options, a good question to ask yourself is, “Why not both?” With the two technologies together, you’ll be able to isolate applications as containerized systems and orchestrate huge numbers of them in a safe and dependable way.
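As a hypothetical illustration of “why not both”: a minimal Compose file can define the application locally, and the same container images can later be handed to Kubernetes for orchestration (service names and images below are placeholders):

```yaml
# Hypothetical docker-compose.yml: an app plus its database,
# built and run together on a single machine for development.
services:
  web:
    build: .
    ports:
      - "8000:8000"
    depends_on:
      - db
  db:
    image: postgres:16
    environment:
      POSTGRES_PASSWORD: example
```

`docker compose up` runs both services locally; in production, the same `web` image could be deployed and scaled by Kubernetes.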

Kubernetes vs. Docker Swarm

Docker Swarm is the container orchestration system from Docker, so comparing it to Kubernetes is much more apt than saying “Kubernetes vs Docker Compose”. Docker Swarm is a more lightweight and simpler orchestration system than Kubernetes, but it lacks strengths that Kubernetes has in automation and self-healing, features that can be important for huge deployments.

Kubernetes vs Docker certifications

Kubernetes has more certifications than Docker, but since the two technologies are different it’s still useful to be familiar with the certs from both. Below are resources to get you familiar with Kubernetes and Docker certs.

Kubernetes certifications 

Certified Kubernetes Administrator (CKA) Exam Preparation

This learning path covers all the general domains of the CKA certification exam. This series of courses, labs, and exams is for cluster administrators who already have familiarity with Docker.

Certified Kubernetes Application Developer (CKAD) Exam Preparation

This learning path covers all domains of the CKAD exam. This is intended for Kubernetes application developers, but is also useful for anyone who wants to learn how to work with Kubernetes.

Certified Kubernetes Security Specialist (CKS)

The CKS certification demonstrates that the holder has command over securing the Kubernetes technology stack. The CKA certification is required before you can sit for the CKS.

Kubernetes and Cloud Native Associate (KCNA)

The KCNA is a foundational-level certification that helps learners progress to the more advanced Kubernetes certs: CKA, CKAD, and CKS.

Docker certification

Docker Certified Associate (DCA) Exam Preparation 

This learning path covers six Docker domains, allowing you to become proficient in Docker orchestration, security, and more. Docker recommends 6-12 months of experience with its technology before you start a certification course.

Resources: Learn about Kubernetes, Docker, and microservices

Building, Deploying, and Running Containers in Production

This Learning Path will show you how to engineer and operate containers in production-like environments. It begins by introducing you to Docker containers and then moves on to concepts such as Dockerfiles. You’ll learn the entire process for container development and deployment, and you’ll finish by building, deploying, and testing your own Kubernetes cloud-native containerized application.

Introduction to Kubernetes

This learning path is for anyone who wants to manage containers at scale. By the time you’re done, you’ll be able to use Kubernetes to manage containers, as well as deploy a stateless and stateful application.

Docker in Depth

Learn all about Docker from individual containers to continuous deployment of an application in AWS.

Building, Deploying, and Running Containers in Production

With this learning path you’ll get a taste of engineering and operating containers in production-like environments, culminating with building, deploying, and testing your own Kubernetes cluster.

Python based Microservices – Go from Zero to Hero

Learn the basics of designing, building and deploying production-grade microservices using Python, Flask, and Docker containers. 

FAQ

Can I use Kubernetes without Docker?

Technically, yes you can use Kubernetes without Docker as long as you have some other kind of container runtime. A couple of options are containerd and Podman, though Docker is by far the most popular container platform.

Is Kubernetes free?

Yes, it’s free if you use the completely open source version available on its GitHub repository. Many users end up preferring the Kubernetes distributions bundled with other services, libraries, and platforms from the big cloud providers. Some providers even offer fully managed options that are helpful but more expensive.

Should I learn Docker or Kubernetes first?

Generally, it’s a good idea to start with the smaller part of the system, the container (Docker Compose), and then move up to the orchestration system (Kubernetes). But if you’re passionate and curious, by all means jump into both at the same time to see where your learning takes you.

Are containers outdated?

Not yet. They’re still valuable for microservices-based architectures, even though they’ve basically been around as long as virtual machines. Some people thought they would be replaced by serverless or no-code solutions, but those tend to solve different problems at different scales.

DevOps Engineer: What Does One Do and How Do You Become One? https://cloudacademy.com/blog/how-to-become-devops-engineer/ https://cloudacademy.com/blog/how-to-become-devops-engineer/#respond Fri, 15 Oct 2021 12:10:40 +0000 https://cloudacademy.com/?p=33729 A DevOps engineer is a professional who needs to understand the methodologies and tools used to develop, deploy, and operate high-quality software. He or she aims to balance needs throughout the software development life cycle, from coding and deployment, to maintenance and updates. Today, DevOps is one of the top...

The post DevOps Engineer: What Does One Do and How Do You Become One? appeared first on Cloud Academy.

A DevOps engineer is a professional who needs to understand the methodologies and tools used to develop, deploy, and operate high-quality software.

He or she aims to balance needs throughout the software development life cycle, from coding and deployment, to maintenance and updates.

Today, DevOps is one of the most in-demand technology roles in the US IT industry, so let’s look closer at what a DevOps engineer does, their role and responsibilities within an organization, the salary range for the job in the US, and the most popular DevOps engineer skills.

What does a DevOps Engineer do?

The DevOps engineer role has a wide impact and provides deep value, so it’s helpful to think about its three main guiding principles: 1) empower the exchange of information and shared thought processes, 2) shorten and amplify feedback loops, and 3) create a culture that fosters continual experimentation and learning.

More concretely, the DevOps engineer’s goal is to improve multiple facets of the software development life cycle (SDLC) process using a mix of practices, tools, and technologies.

They often function in a situation where developers, system administrators, and programmers are all working on the same product but not necessarily sharing information. DevOps engineers can help in reducing this communication gap by blending the skills of a business analyst with the technical chops to build the solution—plus they know the business well, and can look at how any issue affects the entire company.

How to become a DevOps Engineer

The first step on your journey to becoming a DevOps Engineer is learning about the concepts of source control, continuous integration, and continuous delivery, as well as the technologies and tools that surround them. These provide a good base for the day-to-day technical tasks involved in DevOps.

Cloud Academy offers a range of hands-on training and labs that will allow you to deploy your own cloud environments in real time using a variety of DevOps principles.

It’s also a good idea to work your way through the basics of Agile. This will let you see the frameworks and processes of this development methodology and how it is used in modern organizations. You’ll be able to adapt to change, make your processes more efficient, and respond to customer needs by adopting an Agile approach in operations.

GitOps is another concept worth learning. While code automation has been widely written about and implemented, infrastructure automation is newer, so it bears a closer look. Familiarizing yourself with GitOps enables benefits such as automated Kubernetes deployments, saving you time and effort.

Once you put together these skills, you’ll be able to jump into the exciting part: building and deploying your cloud-native app with a stack containing tech like React, Go, Docker, and Kubernetes.

Role and responsibilities

The role of a DevOps engineer differs from one company to another, but it generally involves a mix of release engineering, infrastructure management, and system administration.

DevOps Engineers are responsible for analyzing the elements and functions of the cloud environment, and writing code to scale them to meet a particular need. This could involve adding users to a cloud infrastructure, adding permissions, changing processes, or other tasks expected to meet business prerequisites.

This type of work requires rapid-fire coding, or the capacity to write small pieces of code in various languages. It also implies that DevOps experts need to be proficient in testing in virtual environments.

DevOps Engineer Salary: How Much Does a DevOps Engineer Make in the US?

According to 2021 research, the average salary for a DevOps Engineer in the United States is around $120,000, which, with additional cash compensation, can reach a total of $140,000 for an experienced engineer.

The base salary range is usually between $105,000 and $135,000, but it can vary depending on many critical factors, including education, certifications, soft and hard skills, and years of experience.

Becoming a DevOps Engineer: resources

Getting started with DevOps

This selection of content will help you take your first steps into DevOps. It will provide information about the tools and methodologies used to create, deploy, and maintain high-quality software. This is the best place to start if you are new to DevOps.

DevOps Tools & Services

A cohesive team and a solid understanding of the tools, best practices, and processes needed to create and deliver software and services at large scale are key to a successful DevOps implementation.

This collection of content will assist you in doing just that.

Agile Methodology

Are you thinking about using Scrum in your software development? This selection of content will give you a complete understanding of the Agile approach to working.

DevOps Certifications

You can prove your DevOps expertise by becoming certified. These comprehensive learning paths will prepare you for many DevOps-based certifications and help you pass your exams.

DevOps on AWS

This collection of content will help you implement DevOps practices when working specifically in AWS. Find out more about AWS’s DevOps Services and how to use them to improve your workflow.

DevOps on Microsoft Azure

This collection of content will provide you with the knowledge and skills to implement DevOps processes when developing and deploying applications on Azure.

DevOps Hands on Labs

You can put your knowledge into practice with our guided labs.

What Skills Does a DevOps Engineer Need?

DevOps is about delivering highly valuable business features in very short periods through cross-team collaboration. To succeed, DevOps engineers need a mix of interpersonal, tooling, and technical skills.

Let’s dig into the most important skills for DevOps success.

Communication & collaboration

Communication and collaboration are essential to success in DevOps. They are key to breaking down barriers between development and operations teams, aligning team goals with business objectives, and implementing DevOps culture across the organization.

Coding

DevOps engineers should have programming and coding skills. Python, PHP, JavaScript, Bash, Node.js, and Java are among the most recommended languages to have at least a basic understanding of.

Automation

DevOps automation skills are closely tied to knowledge of the DevOps toolset and to programming. To be successful in DevOps, fluency in automation is a must, as it is the core of DevOps. DevOps engineers should be able to automate the entire process, including CI/CD cycles, app performance monitoring, infrastructure, configurations, and other tasks.

Speed

Faster iterations mean businesses can adapt quickly to changing market conditions, validate business hypotheses faster, and recover from outages sooner. And businesses that ship software faster are more likely to succeed in the marketplace.

Thus, it is in a company’s best interest to accelerate its software delivery value stream, and DevOps engineers must be able to deliver on this goal.

Security

DevOps helps speed up deployment and reduce risk. In a traditional process, that pace might push security to the end, or into an independent process alongside development. DevSecOps instead integrates security into the SDLC from the beginning, so DevSecOps skills will be a huge advantage in your DevOps career.

Frequently Asked Questions (FAQ)

What does a DevOps engineer do?

The DevOps engineer role has a wide impact and provides deep value, so it’s helpful to think about its three main guiding principles: 1) empower the exchange of information and shared thought processes, 2) shorten and amplify feedback loops, and 3) create a culture that fosters continual experimentation and learning.

What skills does a DevOps engineer need?

DevOps is about delivering highly valuable business features in very short periods through cross-team collaboration. To succeed, DevOps engineers need a mix of interpersonal, tooling, and technical skills.

What is DevOps Engineer salary?

According to 2021 research, the average salary for a DevOps Engineer in the United States is around $120,000, which, with additional cash compensation, can reach a total of $140,000 for an experienced engineer.

5 Cool Things About Azure Bicep Templates https://cloudacademy.com/blog/5-cool-things-about-azure-bicep-templates/ https://cloudacademy.com/blog/5-cool-things-about-azure-bicep-templates/#respond Thu, 30 Sep 2021 15:40:08 +0000 https://cloudacademy.com/?p=47328 I’ve been familiar with the basics of Infrastructure as Code, mostly through AWS Cloud Formation. So I was excited/terrified to learn about all the advantages you can get when using Azure Bicep when compared to just JSON-based Azure Resource Manager templates aka ARM templates (cue the upper body workout jokes...

The post 5 Cool Things About Azure Bicep Templates appeared first on Cloud Academy.

This should really be called “everything I know about Bicep I learned from Adil Islam’s webinar.”

I’ve been familiar with the basics of Infrastructure as Code, mostly through AWS CloudFormation. So I was excited/terrified to learn about all the advantages you get when using Azure Bicep compared to plain JSON-based Azure Resource Manager templates, aka ARM templates (cue the upper body workout jokes / Arnold-Carl Weathers memes).

As I mentioned, Adil Islam ran a fun-to-watch webinar (with included GitHub repo) where he explains the background of standard ARM vs Bicep templates and walks you through creating some basic resources. Without further delay, here are five nice takeaways from his session.

What is Bicep?

Microsoft says, “Bicep is a domain-specific language (DSL) that uses declarative syntax to deploy Azure resources.” Bicep simplifies the authoring experience you have with Resource Manager templates. Bicep is used only within the Azure ecosystem and is supported directly by Microsoft. This leads to some cool advantages:

1. Bicep templates can be half the size (or less) of JSON-based ARM templates.

Yes, this part is true — it all just depends on what you have included, of course. But you can definitely see size reduction — and a big part of it is due to Bicep’s structure. Standard ARM templates are JSON-based and Bicep templates are formatted more like YAML files with more readable headers and bodies.

Bicep vs ARM template size
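For a flavor of that structure, here is a hypothetical Bicep template for a storage account (the resource name and API version are placeholders); the equivalent ARM JSON is typically several times longer:

```bicep
// Hypothetical example: a complete storage account deployment in ~10 lines.
param location string = resourceGroup().location

resource storage 'Microsoft.Storage/storageAccounts@2022-09-01' = {
  name: 'store${uniqueString(resourceGroup().id)}'
  location: location
  sku: {
    name: 'Standard_LRS'
  }
  kind: 'StorageV2'
}
```

The same resource in ARM JSON needs the outer `$schema`/`contentVersion`/`resources` scaffolding plus quoted keys throughout, which is where much of the size difference comes from.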

2. Bicep supports new Azure services immediately

Immediate support for new Azure services makes Bicep just as useful as ARM templates and puts it ahead of third-party Infrastructure-as-Code tools such as Terraform.

3. Bicep has its own CLI

The Bicep CLI is super easy to install, and along with whatever IDE you prefer, it gives you a quick way to start experimenting with Bicep templates.
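For example, using the Bicep tooling bundled with the Azure CLI (the file names below are placeholders), a typical loop looks like this:

```shell
az bicep install                             # install or upgrade the Bicep CLI
az bicep build --file main.bicep             # compile Bicep to ARM JSON
az bicep decompile --file azuredeploy.json   # best-effort ARM JSON to Bicep
```

The standalone `bicep` binary offers the same `build`/`decompile` commands if you prefer not to go through `az`.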

4. Bicep is not a panacea, especially if you have lots of huge legacy templates

Many organizations will have huge legacy ARM templates with all sorts of nested options and dependencies. Choosing whether to convert these to Bicep is a business decision, because the conversion process can produce time-consuming errors. It may be better to use Bicep only for new templates. Adil’s webinar shows a flowchart he created to help make such a decision.

Flowchart - Converting ARM to Bicep templates

5. Expect errors when you convert from ARM to Bicep

Note that Microsoft states there is no guaranteed mapping from ARM to Bicep templates. Adil walks you through his process for debugging the expected conversion errors and figuring out whether they’re problems on your side or on Azure’s end.

Azure Bicep Issues on GitHub

Want to see a Bicep conversion in action?

Check out Adil Islam’s webinar below.

The post 5 Cool Things About Azure Bicep Templates appeared first on Cloud Academy.

]]>
0
It’s 10:00 AM: Do You Know Where Your Team’s Tech Skills Are? https://cloudacademy.com/blog/do-you-know-where-your-teams-tech-skills-are/ https://cloudacademy.com/blog/do-you-know-where-your-teams-tech-skills-are/#respond Mon, 17 May 2021 05:00:28 +0000 https://cloudacademy.com/?p=46318 One of the most challenging parts of managing a team in today’s digital world is that technology advances faster than people’s skills can — leaving you guessing if you have the resources to complete projects on time and on budget. It’s that sinking feeling in your gut of fear and...

The post It’s 10:00 AM: Do You Know Where Your Team’s Tech Skills Are? appeared first on Cloud Academy.

]]>
One of the most challenging parts of managing a team in today’s digital world is that technology advances faster than people’s skills can — leaving you guessing if you have the resources to complete projects on time and on budget.

It’s that sinking feeling in your gut of fear and worry about another missed deadline. But what if the project is mission critical? You must find a way to achieve your goals even if you don’t know if you have the talent on your team right now. And you need to use your budget wisely to generate results, fast.

So, you consider your options. Hiring cycles for specialty tech talent are both slow and expensive, and that’s before onboarding even begins. You can start looking for the right people with the right mix of interpersonal and technical expertise, but that timeline is really out of your control — and the clock is ticking.

Taking a step back… could your team execute if they were given the proper tools to upskill quickly?

Putting a stop to the guesswork

The first step in forecasting your skill requirements is to fully understand where your team sits versus where it needs to be to execute against current and future business objectives. This knowledge will give you the confidence to make the right decisions with regard to investing internally or looking for talent on the open market. But where to begin?

Cloud Academy provides the answer to the question, “What are my team’s current tech skills?” with our accurate and objective tech skill assessment tool. Use our out-of-the-box assessments, or create a custom version, to test and validate skills across a wide range of topics.

Sounds good. What next?

In addition to assessing your team, Cloud Academy for Business helps you recommend and assign the training plans you need to take individuals’ skills to the next level. We are nothing like other e-learning platforms that outsource their content, don’t curate it properly, and leave skills development to the whim of employees’ motivation.

Cloud Academy is a software company built on the premise that an investment in the skills growth of your workforce should have a direct, positive, and measurable effect on operational goals. And that without accountability via regular evaluation, training programs are bound to fail.

With our platform, managers and administrators can set expectations around timelines, promote programmatic upskilling, establish defined job roles with career progression tracks, and execute better on business objectives — all in a predictable and scalable way — whether you have two employees or tens of thousands. This method of shared accountability promotes team growth, improved retention, and a culture of success.

Our in-house content team consists of tech experts around the globe who continually create new and refresh existing educational assets. This means the learning paths, hands-on labs in live environments, quizzes, exams, and certifications that you assign your team are always up to date. Not to mention, our dedicated customer success team works with you to help you identify the right priorities and the best path forward for getting your squad to where it needs to be.

Try Cloud Academy’s Enterprise Plan for 2 weeks free

For registrations from May 17 until May 31, 2021, we’re offering your business unlimited access to the Cloud Academy platform for 14 days at no cost. Start by assessing your team’s skills, and explore the upskilling potential enabled by our library of content. We’ll be there to help you every step of the way.

Don’t miss out! Click here to get started.


Micro-Blog 3 of 3 – Read These Before You Take the CKA: The Cloud Is Just Someone Else’s Computer https://cloudacademy.com/blog/micro-blog-3-of-3-the-cloud-is-just-someone-elses-computer/ https://cloudacademy.com/blog/micro-blog-3-of-3-the-cloud-is-just-someone-elses-computer/#respond Tue, 06 Apr 2021 15:17:06 +0000 https://cloudacademy.com/?p=45987 Microblog #3: The Cloud Is Just Someone Else’s Computer Welcome back! This tip is more of a strong recommendation than anything else.  Since the CKA and CKAD focus so strongly on your ability to prove that you can administer, or develop in a cluster, I wanted to take the time...

The post Micro-Blog 3 of 3 – Read These Before You Take the CKA: The Cloud Is Just Someone Else’s Computer appeared first on Cloud Academy.

Microblog #3: The Cloud Is Just Someone Else’s Computer

Welcome back!

This tip is more of a strong recommendation than anything else. 

Since the CKA and CKAD focus so strongly on your ability to prove that you can administer, or develop in, a cluster, I wanted to take the time to point out that you can actually practice, free of charge, on live kubeadm clusters in the cloud with Play with Kubernetes.

All you need is a GitHub account or a Docker account.

After you authenticate, you can immediately start interacting with a cluster that was created with kubeadm by clicking “ADD NEW INSTANCE” on the left-hand side.

So, after going through our own course content, I spun up three instances with Play with Kubernetes and practiced the exam topics on the CKA rubric. This gave me real-world, multi-node experience in the following areas:

  • Cluster Architecture, Installation & Configuration
  • Workloads and Scheduling
  • Services and Networking
  • Storage

Coincidentally, these happen to be four of the five sections on the exam. And I can wipe the instances and start from scratch whenever I want, ensuring that I have a good understanding of these topics. In fact, I used my bookmarks to prep for the real exam by working through each individual category and subcategory, using fake questions like “How do you create a persistent volume?” and “How do you attach a sidecar container to another container?” By going down that list one at a time, I easily beat the two-hour limit that the CKA imposes (you get four hours in Play with Kubernetes), and I finished my actual CKA exam earlier than I expected, even after double-checking my answers.
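As an illustration of the first of those practice questions, here is a minimal answer sketch (names and paths are placeholders, and hostPath volumes are only appropriate for practice clusters like these):

```yaml
# Hypothetical answer to "How do you create a persistent volume?":
# a 1Gi hostPath PersistentVolume you can apply with kubectl apply -f.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: practice-pv
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data
```

Working each rubric item down to a concrete manifest like this is what makes the drill valuable.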

So what are you waiting for? Get cracking on those instances and feel confident that you can walk into your exam day well-prepared.

Micro-Blog 2 of 3 – Read These Before You Take the CKA: Hack. Time. https://cloudacademy.com/blog/micro-blog-2-of-3-read-these-before-you-take-the-cka-hack-time/ https://cloudacademy.com/blog/micro-blog-2-of-3-read-these-before-you-take-the-cka-hack-time/#respond Tue, 30 Mar 2021 14:53:45 +0000 https://cloudacademy.com/?p=45976 Microblog #2: HACK. TIME. Welcome back to the micro-series on preparing for the CKA exam. This series is going to equip you with several tips and tricks that will help you navigate the exam and its common pitfalls so you save time and get to answering questions. Let’s kick off...

The post Micro-Blog 2 of 3 – Read These Before You Take the CKA: Hack. Time. appeared first on Cloud Academy.

Microblog #2: HACK. TIME.

Welcome back to the micro-series on preparing for the CKA exam. This series is going to equip you with several tips and tricks that will help you navigate the exam and its common pitfalls so you save time and get to answering questions.

Let’s kick off this second round by focusing on what we ought to do after we clear the exam space requirements with the proctor:

HACK. TIME.

Well, no. Not really. But we are going to have a six-step process that will immediately save us time over the course of the exam.

The first thing we’ll do is navigate to the cheatsheet provided in the kubernetes.io documentation. It has some commands we can copy-paste straight into the terminal to gain an advantage. Specifically, these four:

source <(kubectl completion bash) 
echo "source <(kubectl completion bash)" >> ~/.bashrc 
alias k=kubectl
complete -F __start_kubectl k

There are two more not listed above that will speed up yaml creation if we need to create a resource from scratch:

echo 'alias kdr="kubectl --dry-run=client -o yaml"' >> ~/.bashrc
echo 'complete -F __start_kubectl kdr' >> ~/.bashrc

Let’s break these commands down:

The first command enables kubectl auto-completion in the current shell. The second adds it to your bash profile so it persists if you ever need to re-source your shell.

The third sets up an alias for kubectl to be invoked as k.  And the fourth makes sure that autocompletion is enabled for that alias. Those four alone are great, and will work wonders for your imperative commands.

The fifth and sixth are two little flavors I use and enjoy in my Kubernetes environments. For example, the fifth adds the alias of kdr to be kubectl --dry-run=client -o yaml and puts it in the bash profile. So you can create any resource that has dry-runs supported with three letters. And the sixth ensures that if we start with kdr, we can autocomplete.
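To see the setup in action without touching your real profile, here is the same pair of lines run against a throwaway file (the temp-file indirection is just for safe experimentation; in the exam you would append straight to ~/.bashrc):

```shell
# Append the alias and completion lines to a scratch profile, then confirm
# both made it in. In the real exam, replace "$profile" with ~/.bashrc.
profile=$(mktemp)
echo 'alias kdr="kubectl --dry-run=client -o yaml"' >> "$profile"
echo 'complete -F __start_kubectl kdr' >> "$profile"
grep -c kdr "$profile"    # prints 2: both lines are present
```

Once those are sourced, `kdr run nginx --image=nginx > pod.yaml` scaffolds a Pod manifest in three letters.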

I typically source the profile, to be extra-sure everything is set up correctly before getting started:

source ~/.bashrc

That’ll do it for this microblog. Our next microblog will focus on the hands-on practicality that the CKA is well-known for, and how we can get that practicality.

Micro-Blog 1 of 3 – Read These Before You Take the CKA: Becoming the Bookworm https://cloudacademy.com/blog/micro-blog-1-of-3-the-cka-exam-becoming-the-bookworm/ https://cloudacademy.com/blog/micro-blog-1-of-3-the-cka-exam-becoming-the-bookworm/#respond Wed, 24 Mar 2021 15:57:49 +0000 https://cloudacademy.com/?p=45968 Microblog #1: Become the Bookworm I know what you’re thinking, read a Kubernetes book blah blah blah. Actually no, but you will be reading something. This tip is all about breaking down the exam’s content to a molecular level and bookmarking the documentation that pertains to that content for reference...

The post Micro-Blog 1 of 3 – Read These Before You Take the CKA: Becoming the Bookworm appeared first on Cloud Academy.

Microblog #1: Become the Bookworm


I know what you're thinking: read a Kubernetes book, blah blah blah. Actually, no. But you will be reading something. This tip is all about breaking the exam's content down to a molecular level and bookmarking the documentation that pertains to it for reference during the exam.

“WHAT?! BOOKMARK RELEVANT DOCUMENTATION AND OPEN IT DURING THE EXAM?! HERESY!”

According to the official rules and policies surrounding the CKA, you can have one tab open that references the allowed Kubernetes documentation sites and any of their subdomains. You do have to be careful to stay within those sites, as some links within them navigate to external websites. You can always hover over a link and check the URL in the bottom-left corner if you're unsure.

What does this mean? It means you can create a folder on your Chromium browser that specifically breaks down the exam objectives by their title and their respective content. This was how I structured my bookmarks for the exam:

[Folder] CKA-Links
[DO FIRST] 1. autocompletion/cheatsheet
2. <exam-focus>/<exam-objective>

And so on. Each <exam-focus> was titled to match what the CKA rubric covers, so I was never lost.

That does it for this microblog. For our next one, we’ll cover how to hit the ground running when we first jump into the exam terminal environment.

Using Docker to Deploy and Optimize WordPress at Scale https://cloudacademy.com/blog/using-docker-to-deploy-and-optimize-wordpress-at-scale/ https://cloudacademy.com/blog/using-docker-to-deploy-and-optimize-wordpress-at-scale/#respond Fri, 23 Oct 2020 02:58:01 +0000 https://cloudacademy.com/?p=44459 Here at Cloud Academy, we use WordPress to serve our blog and product/public pages, such as the home page, the pricing page, etc. Why WordPress? With WordPress, the marketing and content teams can quickly and easily change the look & feel and the content of the pages, without reinventing the...

The post Using Docker to Deploy and Optimize WordPress at Scale appeared first on Cloud Academy.

Here at Cloud Academy, we use WordPress to serve our blog and product/public pages, such as the home page, the pricing page, etc.

Why WordPress?

With WordPress, the marketing and content teams can quickly and easily change the look & feel and the content of the pages, without reinventing the wheel.

State of the art

The first implementation of our WordPress infrastructure deployed the whole codebase (core, theme, and plugins) on EFS storage, which is essentially NFS, attached to a couple of EC2 instances. We also installed the W3 Total Cache plugin to handle full-page caching and to serve static files (such as images, CSS, and JS) from a CloudFront CDN.

Here is a simplified schema of our infrastructure:

Old Infrastructure

 

Considering this implementation, we found two different problems:

  • The first is that EFS doesn't serve PHP files as fast as we need.
  • The second concerns the W3 Total Cache plugin which, as previously mentioned, also handles the full-page cache. The plugin retrieves pages from the Redis cache but, although Redis is a good choice for cached content, W3 Total Cache has to bootstrap the entire WordPress framework before it can determine which object to fetch, bringing us back to the first problem.

With this approach, we were wasting a lot of time fetching PHP files from the network file system and cached pages from Redis.

Refactoring

To solve the above-mentioned problems, we’ve rethought the whole infrastructure, moving it to a more standard Cloud Academy approach using Docker containers and ECS orchestrator.

Furthermore, we moved the CDN component to act as a full page cache instead of serving just the static assets.

Here is the schema of the new implementation:

New Infrastructure

As you can see, the whole WordPress codebase is now built into a Docker image (using our standard Jenkins pipeline) and then deployed to an ECS cluster managed by Spotinst.

However, we kept the EFS storage because the uploaded files from WordPress editors must be shared across all ECS containers.

The Docker build

One of our main goals was also to keep the minimum number of files versioned in the git repository. This means that the only versioned files for a WordPress project are related to custom themes and custom plugins.

To build the right docker image, we used the following approach:

Starting from the PHP image, the Dockerfile installs the wp-cli tool and then downloads the WordPress core and all public plugins. To choose which plugins to install, the build script reads a versioned CSV file listing each plugin and its version. When we want to install a new plugin, we just add it to the CSV file and rebuild the Docker image.
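A minimal sketch of what such a Dockerfile might look like. Everything here is an assumption for illustration: the base image tag, the plugins.csv name and format, and the directory layout are placeholders, not the actual Cloud Academy build.

```dockerfile
# Hypothetical sketch -- base image, paths, and CSV format are assumptions.
FROM php:7.4-apache

# Install wp-cli so the build can fetch WordPress core and public plugins
RUN curl -sSLo /usr/local/bin/wp \
      https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar \
 && chmod +x /usr/local/bin/wp

WORKDIR /var/www/html
RUN wp core download --allow-root

# plugins.csv is the only versioned plugin manifest: one "name,version" per line
COPY plugins.csv /tmp/plugins.csv
RUN while IFS=, read -r name version; do \
      wp plugin install "$name" --version="$version" --allow-root; \
    done < /tmp/plugins.csv

# Custom themes and plugins are the only application code kept in git
COPY themes/   wp-content/themes/
COPY plugins/  wp-content/plugins/
```

With this shape, adding a plugin is a one-line change to plugins.csv followed by an image rebuild.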

The CDN

As shown in the schema, we moved the CloudFront CDN to be the main WordPress entry point. This way, cache hits are served directly by CloudFront without loading the entire WordPress stack. In addition, CloudFront gives us the ability to configure different behaviors depending on the requested page. More specifically, we can tune the cache TTL (from 3 to 10 minutes) based on each page's throughput in order to maximize cache performance.

Unfortunately, using CloudFront as the single entry point introduces a problem with the WordPress admin: we cannot cache the admin pages. If we did, editors could see unexpected behavior, with their sessions mixed together. To solve this, we created a dedicated CDN behavior for the admin section that effectively skips the cache by including all headers and cookies in the cache key.
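In CloudFront terms, such an admin behavior looks roughly like the fragment below. This is a hand-written sketch of the relevant cache-behavior fields from the classic distribution config, not our actual configuration, and required fields like the origin are omitted; forwarding all headers and cookies makes every request unique, so nothing is effectively cached.

```json
{
  "PathPattern": "/wp-admin/*",
  "MinTTL": 0,
  "DefaultTTL": 0,
  "MaxTTL": 0,
  "ForwardedValues": {
    "QueryString": true,
    "Headers":  { "Quantity": 1, "Items": ["*"] },
    "Cookies":  { "Forward": "all" }
  }
}
```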

Now, what about the W3 Total Cache plugin? We decided to keep it installed because it will continue to optimize the cache miss from Cloudfront and the minification of static files.

Results

After the infrastructure refactoring, we ran some benchmarks to measure the real benefit. This chart compares the performance of the old infrastructure versus the new:

Requests Chart

In the chart, the blue, brown and orange lines are related to the old infrastructure and the green, purple and red lines to the new infrastructure.

First of all, notice how the average response time of the new infrastructure (red line) is about half that of the old one (blue line). But the biggest improvement is in the 95th percentile (brown line vs. purple line): thanks to the new infrastructure, 95% of requests are now served in under a second.

Another effect we measured with the 'ab' tool is the increase in throughput: we can now handle about twice as many requests as before with the same hardware configuration.

Considering EFS usage, as you can see in the following image, the throughput the file system must handle is now significantly lower than in the old infrastructure (except at the moment of the switch, when there was a spike caused by the CloudFront cache-miss requests). This allowed us to decrease the EFS provisioned throughput which, of course, means cost savings.

EFS Chart

In addition, the time needed to scale up during a traffic spike is considerably lower, because we simply add extra containers to the ECS service to handle the new requests.

What’s next?

After this refactoring, which mainly focused on infrastructure, we are fully aware that most of the time the major issues are at the application level. So we are investigating how to refactor the WordPress front end, replacing it with a React app (like the rest of our platform) using either Next.js or Gatsby, in order to completely avoid loading the WordPress framework except when serving API requests.

New Content: AWS Data Analytics – Specialty Certification, Azure AI-900 Certification, Plus New Learning Paths, Courses, Labs, and More https://cloudacademy.com/blog/new-content-aws-data-analytics-azure-ai900-certifications/ https://cloudacademy.com/blog/new-content-aws-data-analytics-azure-ai900-certifications/#respond Wed, 14 Oct 2020 02:43:22 +0000 https://cloudacademy.com/?p=44425 This month our Content Team released two big certification Learning Paths: the AWS Certified Data Analytics – Speciality, and the Azure AI Fundamentals AI-900. In total, we released four new Learning Paths, 16 courses, 24 assessments, and 11 labs.  New content on Cloud Academy At any time, you can find...

The post New Content: AWS Data Analytics – Specialty Certification, Azure AI-900 Certification, Plus New Learning Paths, Courses, Labs, and More appeared first on Cloud Academy.

This month our Content Team released two big certification Learning Paths: the AWS Certified Data Analytics – Specialty, and the Azure AI Fundamentals AI-900. In total, we released four new Learning Paths, 16 courses, 24 assessments, and 11 labs.

New content on Cloud Academy

At any time, you can find all of our new releases by going to our Training Library and finding the section titled “New this month in our library.” You can also keep track of what new training is coming for the next 4-6 weeks with our Content Roadmap.


AWS

Learning Path: AWS Data Analytics Specialty (DAS-C01) Certification Preparation (Preview)

This certification Learning Path is specifically designed to prepare you for the AWS Certified Data Analytics – Specialty (DAS-C01) exam. It covers all the elements required across all five of the domains outlined in the exam guide.

Course: AWS Databases used with Data Analytics

This course introduces a number of different AWS database services that are commonly used with data analytics solutions and that will likely be referenced within the AWS Data Analytics Specialty certification. As such, this course explores the following database services: Amazon RDS, Amazon DynamoDB, Amazon ElastiCache, Amazon Redshift.

Course: Overview of Differences Between AWS Database Types

This course provides a high-level overview of the managed database offerings available from AWS. It covers relational and non-relational databases, how they work, their strengths, and what workloads are best suited for them.

Course: Backup and Restore Capabilities of Amazon RDS & Amazon DynamoDB

This course explores the different strategies that are available for when you need to both back up and restore your AWS databases across Amazon Relational Database Service (RDS) and Amazon DynamoDB. During this course, you will learn about the different backup features that are available in Amazon RDS and DynamoDB, how to identify the differences between them, and when you should use one over the other. 

Course: Data Visualization – How to Convey your Data

This course explores how to interpret your data allowing you to effectively decide which chart type you should use to visualize and convey your data analytics. Using the correct visualization techniques allows you to gain the most from your data. In this course, we will look at the importance of data visualization, and then move onto the relationships, comparisons, distribution, and composition of data.

Course: Security best practices when working with AWS Databases

This course explores security best practices when working with AWS databases, specifically RDS and DynamoDB, with some extra content related to Aurora. This course is recommended for anyone looking to broaden and reinforce their understanding of AWS security, or anyone interested in creating secure databases in general.


Azure

Learning Path: AI-900 Exam Preparation: Microsoft Azure AI Fundamentals (preview)

This learning path is designed to help you prepare for the AI-900 Microsoft Azure AI Fundamentals exam. Even if you don’t plan to take the exam, these courses and hands-on labs will help you gain a solid understanding of Azure’s AI services.

Course: Implementing High Availability Disaster Recovery from Azure SQL Databases

This course examines the features that Azure provides to help you make sure your SQL databases, whether they are managed in the cloud or on-premises, are not the point of failure in your systems.

Course: Introduction to Azure Storage

This course is intended for those who wish to learn about the basics of Microsoft Azure storage, covering the core storage services in Azure and the different storage account types that are available. You’ll watch a demonstration that shows you how to create a storage account in Microsoft Azure, then move on to more detail.

Course: Designing for Azure Identity Management (update)

This Designing for Azure Identity Management course will guide you through the theory and practice of recognizing, implementing, and deploying the services on offer within your enterprise. Learn how to better the protection of your organization by designing advanced identity management solutions. 

Course: Managing Code Quality and Security Policies with Azure DevOps

This course explores how to manage code quality and security policies with Azure DevOps, and will help those preparing for Microsoft’s AZ-400 exam. 

Lab Playground: Azure Notebooks Machine Learning Playground

Azure Notebooks enables data scientists and machine learning engineers to build and deploy models using Jupyter notebooks from within the Azure ML Workspace. A full Jupyter notebook environment is hosted in the cloud and provides access to an entire Anaconda environment. The tedious task of setting up and installing all the tools for a data science environment is automated, so data scientists, teachers, and students can dive right into learning without spending time installing software.

Lab Challenge: Azure Machine Learning Challenge

In this lab challenge, you will take on the role of a data scientist and complete several tasks within the Azure Machine Learning service. You will train a machine learning model using an Azure Notebook or the Azure Machine Learning GUI. After the model has been trained and is ready for production, you will deploy it as a web service using Azure Container Instances.


Data Science/Artificial Intelligence

Learning Path: Wrestling with Data

In this learning path, we dive into the various tools and techniques available for manipulating information and data sources. We then show you how you can use this knowledge to actually solve some real-world problems.

Learning Path: Introduction to Data Visualization

This learning path explores data sources and formatting, and how to present data in a way that provides meaningful information. You’ll look at data access patterns, and how different interfaces allow you to access the underlying information. This learning path also provides a practical, real-world example of how all this theory plays out in a business scenario. 

Course: Data Wrangling with PANDAS

In this course, we are going to explore techniques for advanced data exploration and analysis using Python. We will focus on the Pandas library, using real-life scenarios to help you better understand how data can be processed with Pandas.

Course: Wrestling With Data

In this course, we’re going to do a deep dive into the various tools and techniques available for manipulating information and data sources along with showing you at the end of it how you can actually solve some real-world problems. If you are trying to handle increasingly complex data sets and round out your experience as a professional data engineer, this is a great course to get a practical field-based understanding.


DevOps

Hands-on Lab: Installing and Running Applications with Docker Enterprise Universal Control Plane

In this lab, you will learn how to install UCP onto bare Docker hosts to create a multi-node installation from the ground up. You will also learn how to deploy applications onto the UCP cluster you create using the web interface.

Lab Challenge: Docker Swarm Playground

This playground provides a Docker swarm cluster comprised of one manager node and two worker nodes. You have full access to the swarm nodes, each with docker, docker-compose, and the relevant command-line completions already installed. You also have full access to the underlying host, so you are not restricted compared to an environment that runs Docker in Docker (dind).

Hands-on Lab: Deploying Infrastructure with Terraform

Terraform is an infrastructure automation tool that allows companies to manage infrastructure through code, which brings benefits such as recoverability, predictability, and speed. In this lab, you will create a Terraform configuration to deploy a VPC in AWS.
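As a taste of what the lab covers, a minimal Terraform configuration for a VPC might look like this (the region, CIDR block, and names are illustrative placeholders, not the lab's actual values):

```hcl
# Illustrative sketch -- values are placeholders, not the lab's.
terraform {
  required_providers {
    aws = { source = "hashicorp/aws" }
  }
}

provider "aws" {
  region = "us-east-1"
}

resource "aws_vpc" "lab" {
  cidr_block = "10.0.0.0/16"

  tags = {
    Name = "terraform-lab-vpc"
  }
}
```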

Course: Tech Talk: Building Automated Machine Images with HashiCorp Packer

In this Tech Talk, you’ll watch a presentation on HashiCorp’s Packer and how it can be used to build automated machine images and then deploy the new image into a production environment with a CI/CD pipeline. You’ll follow virtually as a group of IT professionals discuss the tool and its uses. 


Google Cloud Platform

Course: Managing GCP Operations Monitoring

This course shows you how to monitor your operations on GCP. It starts with monitoring dashboards. You’ll learn what they are and how to create, list, view, and filter them. You’ll also see how to create a custom dashboard right in the GCP console.

Course: Managing and Investigating Service Incidents on GCP

Managing and investigating service incidents is an important part of the maintenance process. It can be laborious, but with the right organization, an understanding of the systems, knowledge of the processes, and the discipline to adhere to best practices, it can be optimized. This course focuses on the predominant parts of managing service incidents and on utilizing Google Cloud Platform to aid in the endeavor.

Hands-on Lab: Scaling an Application Through a Google Cloud Managed Instance Group

In this lab, you will create an instance template, an instance group with the autoscaling enabled, and you will then attach an HTTP load balancer to the instance group to load balance the traffic to the VM group. You will also perform a stress test to check that the autoscaling is working properly.

Lab Challenge: Google Cloud Scaling Applications Challenge

In this lab challenge, you will need to prove your knowledge of highly available and scalable applications by creating infrastructure on Google Compute Engine. The objectives you will need to achieve represent essential skills that a Google Certified Associate Cloud Engineer and Google Certified Professional Cloud Architect need to have. 

Lab Challenge: Google Cloud SQL Challenge

In this lab challenge, you will need to prove your knowledge of Google Cloud SQL by creating a production ready Cloud SQL instance. The objectives you will need to achieve represent essential skills that a Google Certified Associate Cloud Engineer and Google Certified Data Engineer need to have.


Programming

Course: Building a Python Application: Course One

One of the best ways to learn new programming languages and concepts is to build something. Learning the syntax is always just the first step. After learning the syntax the question that arises tends to be: what should I build? Finding a project to build can be challenging if you don’t already have some problems in mind to solve. This course is broken up into sprints to give you a real-world development experience, and guide you through each step of building an application with Python.

Hands-on Lab: Introduction to Graph Database With Neo4j

In this lab, you will understand the core principles of a graph database (especially a property graph) and you will install the Neo4j DBMS on an EC2 instance. This lab is intended for data engineers who want to switch to the graph data model or developers who need to build an application based on a graph database.

Hands-on Lab: Constructing Regular Expression Character Classes

Regular Expressions are a tool for searching and manipulating text. In this lab, you will use the Python programming language to learn the basics of how to use a regular expression and you’ll learn about the different character classes available for matching different types of characters.
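To preview the idea (this snippet is illustrative, not taken from the lab itself): character classes let one pattern match a whole family of characters.

```python
import re

# \d matches any digit, [a-z] any lowercase letter; + repeats the class.
log_line = "user42 logged in at 09:15"

print(re.findall(r"\d+", log_line))               # ['42', '09', '15']
print(re.search(r"[a-z]+\d+", log_line).group())  # user42
```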

Hands-on Lab: Working with Regular Expressions: Special Characters and Anchors

In this lab, you will learn how to use quantifiers to match sequences of characters in different ways, you will learn about anchors, and you will learn how to use capture groups. 
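For a flavor of all three topics at once (again an illustrative example, not the lab's own material):

```python
import re

# ^ and $ anchor the match to the whole string, {2,4} is a quantifier,
# and the parentheses create two capture groups.
m = re.match(r"^(\w+)-(\d{2,4})$", "release-2024")

print(m.group(1))  # release
print(m.group(2))  # 2024
```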

Hands-on Lab: Using Regular Expressions Effectively in the Real World

In this lab, you will learn how to use quantifiers to match sequences of characters in different ways, you will learn about anchors, and you will learn how to use capture groups. 


Webinars

Office Hours: AWS Solutions Architect – Associate | Domain 1 of 4: Design Resilient Architectures

Take a deep dive on Domain 1: Design Resilient Architectures of the AWS Solutions Architect – Associate exam.

Office Hours: Decoupling Architectures Like There’s No Tomorrow

Learn all about decoupling architectures, how to set them up, and manage them, successfully with our experts.


Platform

Learn, Grow, Succeed: Introducing Training Plans for Individuals

We’re excited to announce that we’ve released a new game-changing feature for individual users that has already proven to help our largest enterprise customers increase newly acquired skills by 5x: Training Plans.


Stay updated

As always, we use Cloud Academy Blog to keep you up-to-date on the latest technology and best practices. All of our new blogs, reports, and updates go directly to our Cloud Academy social media accounts. For the latest updates, follow and like us on the following social media platforms:
