Docker vs. Virtual Machines: Differences You Should Know

What’s the difference between Docker and a virtual machine? In this article, we’ll compare the two and share our insights to help you decide between them. Before we get into the Docker vs. VM differences, let’s first explain the basics.

What is Docker?

Organizations today want to transform their businesses digitally but are constrained by diverse portfolios of applications spread across cloud and on-premises infrastructure. Docker addresses this obstacle with a container platform that brings traditional applications and microservices built on Windows, Linux, and mainframe into an automated and secure software supply chain.

Docker is a software development tool and a virtualization technology that makes it easy to develop, deploy, and manage applications by using containers. A container refers to a lightweight, stand-alone, executable package of a piece of software that contains all the libraries, configuration files, dependencies, and other necessary parts to operate the application.

In other words, applications run the same irrespective of where they are and what machine they are running on, because the container provides the same environment throughout the application’s software development life cycle. Since containers are isolated, they provide a degree of security, allowing multiple containers to run simultaneously on a given host. Containers are also lightweight because they do not carry the extra load of a hypervisor — software such as VMware or VirtualBox that runs guest operating systems. Instead, containers run directly on the host machine’s kernel.
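As a quick illustration (a minimal sketch, assuming Docker is installed on a Linux host), you can see that a container has no kernel of its own — it reports the host’s kernel:

# Start a throwaway Alpine container and print the kernel it sees —
# it matches the host's kernel, since containers share it rather than booting their own OS.
$ docker run --rm alpine uname -r

# Compare with the host itself:
$ uname -r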

Containers provide the following benefits: 

  • Reduced IT management resources
  • Smaller snapshots
  • Faster spin-up of applications
  • Fewer and simpler security updates
  • Less code to transfer, migrate, and upload when moving workloads

To start your Docker journey, contact us to learn more about Cloud Academy’s Docker in Depth Learning Path.

Cloud Academy Docker in Depth Learning Path

What is a Virtual Machine?

A virtual machine (VM), on the other hand, is created to perform tasks that, if performed directly on the host environment, might prove risky. VMs are isolated from the rest of the system; the software inside the virtual machine cannot tamper with the host computer. Therefore, tasks such as accessing virus-infected data and testing operating systems are performed using virtual machines. We can define a virtual machine as:

A virtual machine is a computer file or piece of software, usually termed a guest or an image, that is created within a computing environment called the host.

A virtual machine is capable of performing tasks such as running applications and programs like a separate computer, making it ideal for testing other operating systems (such as beta releases), creating operating system backups, and running software and applications. A host can have several virtual machines running at a given time. A log file, an NVRAM settings file, a virtual disk file, and a configuration file are some of the key files that make up a virtual machine. Another area where VMs are of great use is server virtualization, in which a physical server is divided into multiple isolated and unique servers, allowing each to run its operating system independently. Each virtual machine provides its own virtual hardware, such as CPUs, memory, network interfaces, hard drives, and other devices.

VMs are broadly divided into two categories depending upon their use:

  1. System Virtual Machines: A platform that allows multiple VMs, each running its own copy of the operating system, to share the physical resources of the host system. A hypervisor — a software layer — provides the virtualization; it runs either on top of a host operating system or directly on the hardware.
  2. Process Virtual Machine: Provides a platform-independent programming environment. A process virtual machine is designed to hide the details of the underlying hardware and operating system, allowing a program to execute in the same manner on every platform.

To learn more about VMs, check out Cloud Academy’s Virtual Machines Overview Course. If you don’t already have a Cloud Academy account, contact us for a FREE 7-day trial.  

Although running several VMs at a time may sound efficient, it can lead to unstable performance: each guest OS has its own kernel, libraries, and dependencies, which take up a large chunk of system resources.

Other drawbacks include inefficient use of the hypervisor and long boot times. The concept of containerization overcomes these flaws, and Docker is one such containerization platform.

Docker vs Virtual Machine: main differences

The following are the significant differences between Docker and virtual machines.

OS Support and Architecture

The main difference between Docker and VMs lies in their architecture, demonstrated below.

Docker vs. Virtual Machines

With VMs, the host OS runs a hypervisor and each VM contains its own guest OS. A guest OS can be any OS, like Linux or Windows, irrespective of the host OS. In contrast, Docker containers run on a single physical server and share the host OS among them. Sharing the host OS makes containers lightweight and reduces boot time. Docker containers are considered suitable for running multiple applications over a single OS kernel, whereas virtual machines are needed if the applications or services have to run on different operating systems.

Security

The second difference between VMs and Docker is that virtual machines are stand-alone, with their own kernel and security features. Therefore, applications needing more privileges and security run on virtual machines.

On the flip side, giving applications root access and running them with administrative privileges is not recommended in the case of Docker containers, because containers share the host kernel. Container technology has access to the kernel’s subsystems; as a result, a single compromised application is capable of compromising the entire host system.

Portability

Another relevant Docker vs. virtual machine difference concerns portability: VMs carry their own guest OS, so they cannot be ported across multiple platforms without incurring compatibility issues. At the development level, if an application has to be tested on different platforms, Docker containers are worth considering.

Docker container packages are self-contained and can run applications in any environment, and since they don’t need a guest OS, they can be easily ported across different platforms. Docker containers can be easily deployed on servers, and because containers are lightweight, they can be started and stopped in far less time than virtual machines.

Performance

The last main Docker vs VM difference refers to performance: Virtual Machines are more resource-intensive than Docker containers as the virtual machines need to load the entire OS to start. The lightweight architecture of Docker containers is less resource-intensive than virtual machines. 

In the case of a virtual machine, resources like CPU, memory, and I/O are allocated up front, whether or not they are fully used — unlike in the case of a Docker container, where resource usage varies with the load or traffic.

Scaling up and duplicating a Docker container is simpler and easier than doing the same with a virtual machine, because there is no need to install an operating system in each container.

Apart from the major differences between Docker and VMs, some other ones are summarized below: 

  • Boot time: Docker containers boot in a few seconds; it takes a few minutes for VMs to boot.
  • Runs on: Docker uses the Docker execution engine; VMs run on a hypervisor.
  • Memory efficiency: Docker needs no space to virtualize an OS, so it uses less memory; a VM requires the entire guest OS to be loaded before starting, so it is less efficient.
  • Isolation: Docker is more exposed, as it offers fewer isolation provisions; VMs minimize the possibility of interference thanks to their stronger isolation mechanism.
  • Deployment: Docker deployment is easy, since a single containerized image can be used across all platforms; VM deployment is comparatively lengthy, as separate instances are responsible for execution.
  • Usage: Docker has a more complex usage mechanism, involving both third-party and Docker-managed tools; VM tools are easier and simpler to work with.

Should I choose Docker or Virtual Machine (VM)? 

It isn’t entirely fair to compare Docker and virtual machines, since they are intended for different uses. Docker is no doubt gaining momentum these days, but containers cannot be said to replace virtual machines. Despite Docker’s popularity, a virtual machine is a better choice in certain cases. Virtual machines are considered a suitable choice in a production environment, rather than Docker containers, since they run on their own OS without being a threat to the host computer. But if applications are to be tested, then Docker is the way to go, as Docker provides different OS platforms for thorough testing of the software or application.

Furthermore, a Docker container uses the Docker engine instead of a hypervisor, as in a virtual machine. Because containers share the host kernel rather than each running a full guest OS, the Docker engine keeps containers small, isolated, compatible, performant, and quick to respond. Docker containers have comparatively low overhead since they share a single kernel and can share application libraries. Organizations mostly use a hybrid approach, as the choice between virtual machines and Docker containers depends on the kind of workload involved.

Also, many digitally driven companies no longer rely on virtual machines as their primary choice and prefer migrating to containers, as VM deployment is comparatively lengthy and running microservices on VMs is one of the major challenges they present. That said, there are still firms that prefer virtual machines over Docker, while companies interested in enterprise-grade security for their infrastructure make use of Docker.

Finally, containers and Docker are not in conflict with virtual machines; they are complementary tools for different workloads and uses. Virtual machines are built for applications that are usually static and don’t change very often, whereas the Docker platform is built to be more flexible, so that containers can be updated easily and frequently.

Will containers ever replace virtual machines? Comment your thoughts below or give further suggestions.

For more on our content or to schedule a demo with one of our experts, contact us today!

What is Docker: Quick Applications Development with Containers

The objective of this blog post is to give you a full overview of what Docker is, its components, how it works, and more.


What is Docker?

Docker is a platform that enables quick application development through containers. Using containers, developers can package up an application with all its dependencies and ship it out as one package. This makes deploying and managing applications much easier, especially in a cloud-based or microservices-based architecture.

Containers are standardized units containing all the dependencies and environment needed to develop and run a given application. A container is a lightweight, stand-alone, executable package that includes everything needed to run a piece of software, including code, runtime, system tools, system libraries, and settings.

You can learn more about Docker on Cloud Academy. For complete beginners, Cloud Academy has an introductory course, Introduction to Docker.

Docker Architecture

Docker is built on a client-server model. The Docker client communicates with the Docker daemon, which is in charge of building, running, and distributing your Docker containers.

A Docker client and daemon can run on the same system, or a Docker client can connect to a remote Docker daemon. The Docker client and daemon communicate via a REST API, UNIX sockets, or a network interface. Docker Compose is another Docker client that allows you to work with applications made up of a collection of containers.

Docker Architecture

The Docker architecture consists of these main components:

  • Docker Client
  • Docker Daemon
  • Docker Registries
  • Docker Desktop
  • Docker Objects:
    • Docker Images
    • Docker Containers

Docker Client

The Docker client is a tool that allows users to interact with Docker servers. It provides a command-line interface (CLI) that can be used to issue commands to the server, as well as a graphical user interface (GUI) that can be used to manage Docker servers. The Docker client talks to the Docker daemon, sending it the commands to build, run, and stop containers. The Docker client and daemon can run on the same system or connect a Docker client to a remote Docker daemon. The Docker client and daemon communicate via sockets or through a RESTful API.

Docker Daemon

The Docker daemon (dockerd) listens for Docker API requests and manages Docker objects such as images, containers, networks, and data volumes. A single Docker daemon can manage multiple containers and images. When you install Docker, a dockerd daemon is automatically configured and launched.
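For example (a quick sketch; the remote hostname is a placeholder), you can confirm the client/daemon split and even point the same client at a remote daemon over SSH, which Docker has supported since version 18.09:

# Show both the client version and the daemon (server) version —
# the CLI is just a client talking to dockerd.
$ docker version

# Point the local client at a daemon running on another machine.
$ docker -H ssh://user@build-server.example.com ps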

Docker Registries

A registry is a collection of repositories, and a repository is a collection of images. Docker Hub is a public registry that contains many Docker images. You can also run your own private registry.
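In practice, pulling from Docker Hub and pulling from a private registry look almost identical — the private registry’s hostname is simply part of the image name (registry.example.com below is a placeholder):

# Pull an official image from Docker Hub, the default public registry.
$ docker pull nginx:latest

# Pull from a private registry by prefixing the image name with its hostname.
$ docker pull registry.example.com/team/internal-app:1.2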

Docker Desktop

Docker Desktop is an application for macOS and Windows machines to create and manage Docker containers. With Docker Desktop, you can develop and test your applications locally, then deploy them to production with confidence. Docker Desktop includes everything you need to build, test, and ship your applications.

Docker Desktop is the easiest way to run Docker on your desktop. If you’re new to Docker, you can start with the Docker Basics course. Docker Desktop is free to download and use.

Docker Images

Images are read-only templates used to create Docker containers. Images are created with the build command and can be pushed to a registry with the push command. The Docker Hub maintains a collection of official images, and you can browse the Docker Hub to find your needed images.
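As a minimal sketch (assuming a Dockerfile in the current directory, with yourusername as a placeholder Docker Hub account), building and publishing an image looks like this:

# Build an image from the Dockerfile in the current directory and tag it.
$ docker build -t yourusername/webapp:1.0 .

# Push the tagged image to a registry (Docker Hub by default).
$ docker push yourusername/webapp:1.0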

Docker Containers

A container is a runtime instance of a Docker image—what the image becomes in memory when executed. It runs completely isolated from the host environment by default, only accessing host files and ports if configured.

Docker networks: Networks are used to enable communication between containers. By default, containers on the same bridge network can reach one another by IP address; on user-defined networks, they can also resolve each other by container name.
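For instance (a small sketch; my-api-image is a placeholder for your own image), two containers on the same user-defined network can find each other by container name:

# Create a user-defined bridge network.
$ docker network create app-net

# Containers attached to it can resolve each other by name (e.g. "db").
$ docker run -d --name db --network app-net -e POSTGRES_PASSWORD=example postgres:15
$ docker run -d --name api --network app-net my-api-image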

How Docker Works

How Docker works is very simple. When you install Docker, it creates a virtual environment on your computer. This virtual environment is completely isolated from your main Operating System (OS), making it very secure.

Docker containers can hold any application. When you want to run a container, you tell Docker which image you want to use and what command you want to run. Docker will then fetch the image and run the command inside the isolated environment.

Docker images are built from instructions called Dockerfiles. A Dockerfile is just a text file containing a list of commands that must be run to build the image. For example, an image could be built from a Dockerfile that installs Apache and copies over some static HTML files.
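A Dockerfile for that Apache example could look like the following minimal sketch (the ./public-html folder is a placeholder for your own static files):

# Start from the official Apache HTTP Server image.
FROM httpd:2.4

# Copy static HTML from the build context into Apache's document root.
COPY ./public-html/ /usr/local/apache2/htdocs/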

When you run a command in a Docker container, the output of that command is displayed on your screen. But, the changes made to the files inside the container are not saved when the container is stopped. You must create a new image from the container to save the changes.

Creating a new image from a container is called committing. To commit a container, you use the docker commit command. This command takes the container ID and a name for the new image. For example, if you wanted to save the changes made to a container with the ID of 1234 as a new image called “my-image”, you would run the following command:

$ docker commit 1234 my-image

Once you have created a new image, you can push it to a Docker registry so that others can use it. Docker registries are websites where images can be stored and shared. The largest and most popular registry is Docker Hub.
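For example (yourusername is a placeholder for your Docker Hub account), you would give the image a registry-qualified name, log in, and push it:

# Tag the local image with your Docker Hub namespace.
$ docker tag my-image yourusername/my-image:latest

# Authenticate and push it to Docker Hub.
$ docker login
$ docker push yourusername/my-image:latest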

To run a Docker container, you need the image it is based on — docker run pulls it from the registry automatically if it isn’t already present locally. For example, if you wanted to run the “my-image” image that you pushed to Docker Hub, you would use the following command:

$ docker run my-image

This would start a new container from the “my-image” image and run it on your computer.

Docker Pricing

Docker pricing is based on the number of users and the level of features and support required. Four plans are offered, ranging from the free Personal edition to the paid Business edition:

  • Docker Personal: A free edition of Docker that comes with unlimited public repositories and 200 image pulls every 6 hours.
  • Docker Pro: Priced at 5 USD per month. Bundled with 5,000 image pulls a day and up to 5 concurrent builds.
  • Docker Team: Priced at 9 USD per user per month, with a minimum of 5 users.
  • Docker Business: Priced at 24 USD per user per month, with a minimum of 5 users.

Below is a comparison of what the packages offer:

  • Personal (0 USD): Docker Desktop, unlimited public repositories, unlimited scoped tokens, Docker Engine + Kubernetes, and 200 image pulls every 6 hours.
  • Pro (5 USD/month): everything in Personal, plus 5,000 image pulls per day and 5 concurrent builds.
  • Team (9 USD/user/month): everything in Pro, plus unlimited image pulls, 15 concurrent builds, the ability to add users in bulk, and audit logs.
  • Business (24 USD/user/month): everything in Team, plus single sign-on, purchase via invoice, and volume pricing.

Docker Advantages and Disadvantages

There are multiple reasons to use Docker. 

Docker enables developers to package applications with all the dependencies needed and ship them out as self-contained units. This way, you can run the same application on different machines without worrying about different environments causing issues.

Docker also makes it easy to set up development environments quickly. You can use a pre-built image from Docker Hub or create your own and have a complete environment up and running in minutes.

There are also several security benefits to using Docker. By running each application in its own isolated container, you can limit each container’s access to the underlying host system. This can help to prevent one compromised container from giving an attacker access to the rest of the system.

The main reason not to use Docker is that it can add complexity to your development workflow. If you are working on a team of developers, you must ensure that everyone uses the same version of Docker and that your application containers are compatible.

There are many reasons why someone might choose to start using docker. Some common reasons include wanting to increase the efficiency of their development workflow, creating a more consistent development environment, or reducing the time and effort required to set up and maintain a development environment.

Docker Use Cases

Docker has multiple use cases, as follows:

  • Docker can be used for spinning up testing and development environments quickly and easily. 
  • Docker can also be used for shipping and deploying applications – you can build your app in a Docker image and then ship it to your Production environment, where it will run in a Docker container. 
  • Docker can also be used for creating microservices-based applications. 
  • Docker can also be used to orchestrate and manage large numbers of containers.

Docker vs Virtual Machines

Docker and Virtual Machines differ in many aspects, such as architecture, security, etc.

If you want a full comparison between them, check out the Docker vs. Virtual Machines: Differences You Should Know article on the Cloud Academy blog.

Learn Docker on Cloud Academy

You can find a full list of training content on Docker in the Cloud Academy Library.

We hope this blog post helped you understand Docker’s multiple aspects and features. If you have thoughts or questions, feel free to leave a comment or contact Cloud Academy.

Thanks and Happy Learning!

Azure Container Registry Overview: Build, Manage, and Store Container Images and Artifacts in a Private Registry

Binaries, configuration files, web pages, and even virtual machines (VMs) and containers are parts of a DevOps build pipeline. In a contemporary application, they form the building blocks. Containers simplify the deployment process by including as many parts as possible.
However, this raises some questions: 
How do you deploy those containers across a large-scale cloud application and manage them?
Every engineer wants to be able to easily manage services and applications. But which technology is best suited for the task? In this article, we’ll look at Microsoft’s Azure Container Registry in detail and examine why it may be the ideal option for your development team.


What is Azure Container Registry (ACR)?

Azure Container Registry (ACR) is a highly scalable and secure Docker registry service that lets you build, store, and manage Docker container images on the Microsoft Azure cloud platform. It provides an easy way to use the same image across different environments, such as development, testing, and production.

ACR enables you to create private registries, which are accessible only by you and your team members, or registries that others can access given the registry’s name and a valid subscription ID.

ACR supports Docker image signing and can automatically build new images from source-code commits. It lets you pull your private images for deployment into Kubernetes clusters or on-premises environments.

Use the Azure Container Registry client library to:

  • Push images or artifacts.
  • Obtain metadata for repositories, tags, and images.
  • Set read/write/delete properties on registry items.
  • Delete repositories, tags, and artifacts.
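As a quick way to get a registry to work against (a minimal sketch using the Azure CLI; the resource group and registry names are placeholders), you can create one in a couple of commands:

# Create a resource group and a Basic-tier container registry in it.
$ az group create --name myResourceGroup --location eastus
$ az acr create --resource-group myResourceGroup --name myregistry --sku Basic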

Azure Container Registry Key Concepts

Here are some key concepts of Azure Container Registry:

Registry

You can store and distribute container images using Azure Container Registry, a hosted Docker registry service. Use it to store Docker images for later use, or use it as a private image repository for your applications.

Azure Container Registry is built on top of Azure Storage, so it has all the benefits of using Azure Storage, such as global availability and geo-replication supporting global distribution.

Each image in the registry is identified by a repository name — which can include a namespace — plus a tag. You can either create new namespaces or use existing ones to group related repositories; it’s up to you.

A variety of content artifacts, such as Open Container Initiative (OCI) image formats and Helm charts, are supported by Azure Container Registry.

Repository

A repository is a collection of related container images within a registry such as Azure Container Registry — typically the different versions (tags) of one image. Repositories are commonly used for storing private images that you can share with other team members or applications within your organization.

Repository names may also include namespaces. By separating name segments with a forward slash, namespaces let you identify linked repositories and artifact ownership inside your business. The registry manages each repository independently rather than hierarchically.

Artifacts

Artifacts are content items that you push to an ACR repository alongside container images — for example Helm charts or other Open Container Initiative (OCI) artifacts.

Azure Container Registry Features

Azure Container Registry uses docker distribution to store and distribute Docker images. The service includes features such as:

Registry Service Tiers

Azure Container Registry is available in three service tiers: Basic, Standard, and Premium.

  • The Basic tier is a cost-optimized entry point with lower included storage and image throughput, suitable for learning and small projects.
  • The Standard tier provides increased included storage and image throughput for most production scenarios.
  • The Premium tier provides the highest amounts of included storage and throughput, plus features such as geo-replication and private endpoints (covered below).

Security and Access

Access a registry with the Azure CLI or with the standard docker login command. Azure Container Registry encrypts connections to clients with TLS and serves container images over HTTPS.

You can control which identities can access your registry using service principals or Azure Active Directory identities, including managed identities for Azure resources. Service principals are credentials used by Azure services to authenticate with Azure resources, while Azure AD identities let users and services authenticate with their existing organizational accounts — on-premises or inside the Azure portal — without having to manage new credentials.

The Premium service tier adds content trust for image tag signing, plus firewalls and virtual networks (preview) for controlling registry access. Microsoft Defender for Cloud can scan an image when it is pushed to Azure Container Registry.

Versioned Storage

Azure Container Registry stores your container images as a collection of layers that can be versioned independently. This allows you to control access by tagging layers with access control lists (ACLs) or to use permissions on specific tags.

Supported Images and Artifacts

Azure Container Registry supports images and artifacts. You can use the registry to store your container images and use it as a repository for your application image layers.

You can upload an image to the registry and then deploy that image to your Kubernetes cluster or another environment. You can also store artifacts, such as binaries or configuration files, in the registry. You can then download these artifacts from the registry to deploy them on-premises or in another cloud provider’s environment.

Use normal Docker commands for pushing or pulling images. Azure Container Registry also supports related content types, including Helm charts and images built to the Open Container Initiative (OCI) image format, alongside standard Docker container images.
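For example (a sketch assuming the Azure CLI and Docker are installed; myregistry and my-app are placeholders), pushing and pulling against ACR uses the registry’s login server as the image name prefix:

# Authenticate Docker against the registry.
$ az acr login --name myregistry

# Tag a local image with the registry's login server, then push it.
$ docker tag my-app:v1 myregistry.azurecr.io/my-app:v1
$ docker push myregistry.azurecr.io/my-app:v1

# Pull it back from any environment that can authenticate to the registry.
$ docker pull myregistry.azurecr.io/my-app:v1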

Automated Image Builds

Azure Container Registry provides an Automated Image Builds feature that lets you build container images from source code, on demand or on a schedule. The built images are stored in the same registry where they were built. This helps you avoid the manual step of pushing images to the registry and gives you a single source of truth for your container images. Azure Container Registry Tasks (ACR Tasks) help you create, test, and deploy images faster: by shifting docker build operations to Azure, ACR Tasks offload container builds from your local development machine.
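As a small example of offloading a build to ACR Tasks (names are placeholders; the command is run from a directory containing a Dockerfile):

# Build the image in Azure instead of on the local machine;
# the result is stored directly in the registry.
$ az acr build --registry myregistry --image my-app:v1 .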

Azure Container Registry Use Cases and Best Practices

ACR also supports the use cases listed below.

Manage Registry Size

Limit the storage capacity of an Azure Container Registry by specifying an Azure Storage account for it. This will allow you to track how much storage is being used by your registry and control capacity usage within the account.

Authentication and Authorization

These are key aspects when using Azure Container Registry. If you don’t configure it correctly, it could lead to unintended consequences like unauthorized access or privilege escalation attacks.

Dedicated Resource Group

A registry should be located in its own resource group, since container registries are resources that several container hosts access.

Even if you test out a certain host type, like Azure Container Instances, you will probably destroy the container instance once you’re done.

You might also wish to keep the collection of images you uploaded to Azure Container Registry. When you put your registry in its own resource group, you reduce the chance that you’ll mistakenly delete the registry’s collection of images when you delete the resource group for the container instance.

Network-Close Deployment

Azure Container Registry lets you place a registry in the same Azure region as the environments where your containers run, keeping image pulls network-close to your deployments. This reduces pull latency and egress costs, and the Premium tier’s geo-replication extends the same benefit across multiple regions from a single registry.

Azure Container Registry Pricing and Tiers

There are various service tiers (SKUs) for Azure Container Registry. These tiers offer predictable pricing and a range of choices for adjusting to your private Docker registry’s capacity and usage patterns in Azure.

Azure Container Registry Standard

Standard tier features, pricing, and limitations:

Features

  • Network-close deployment of Azure containers
  • Privately stored Docker images
  • Large-scale accessibility
  • Quick access to information

Pricing

$0.667 per day

Limitations

  • Included storage: 100 GiB
  • Write operations per minute: 500
  • Download bandwidth: 60 Mbps
  • Upload bandwidth: 20 Mbps
  • Webhooks: 10

Azure Container Registry Premium

Premium tier features, pricing, and limitations:

Features

  • Provides high-volume plans
  • Content trust for image tag signing
  • A private link with private endpoints restricting registry access
  • Higher image throughput
  • Geo-replication across multiple regions

Pricing

$1.667 per day

Limitations

  • Included storage: 500 GiB
  • Write operations per minute: 2,000
  • Download bandwidth: 100 Mbps
  • Upload bandwidth: 50 Mbps
  • Webhooks: 500

Which One do I Need?

All tiers offer the same programmatic features. Additionally, they all benefit from image storage wholly handled by Azure. Higher-level tiers provide greater performance and scale.

Because multiple service tiers are available, you can start with Basic and upgrade to Standard or Premium as your registry usage grows.

Learn Azure Container Registry on Cloud Academy

If you’re looking for a private container registry, the Azure Container Registry is a good choice. It has all of the features you’d expect, like creating and managing images, and it’s easy to set up and manage the service to get your developers working quickly and effectively. You can save your container images in ACR, allowing for quick and scalable retrieval of container workloads.

If you’re looking to learn, Cloud Academy offers several Microsoft Azure Courses, learning paths, and labs where you can learn and gain hands-on experience on Azure Container Registry.

Happy learning!

Kubernetes vs. Docker: Differences and Use Cases

Do you wonder about the merits of Kubernetes vs. Docker? Let’s get into the details to talk about their differences and how they work together.

Docker is about developing and shipping applications within containers – also known as Docker images – while Kubernetes provides high-level orchestration for potentially billions of these containers. They’re different but complementary and work well together in large-scale, complicated deployments.

As we discuss the differences and advantages of each platform you’ll understand the big picture and be able to learn more with in-depth resources to get you both book knowledge and hands-on experience.

What are containers?

To better understand the merits of Kubernetes vs Docker, it’s helpful to take a step back and get comfortable with the concept of containers in application development and deployment. 

A container is a unit of software that bundles code and all dependencies together so that the application can run quickly and reliably in any computing environment.

Containers can be described as lightweight virtual machines. Virtual machines require you to virtualize the entire operating system and any software you wish to run. This makes VMs very resource-intensive.

Containers were introduced on the Linux operating system to address this problem. The idea is simple: if you already have a Linux OS running on your computer, why create a whole new OS for every isolated workload? Containers instead share the host’s core OS (the kernel), so each one runs only the software it actually needs.

Containers help teams of any size resolve issues such as consistency, scalability, and security. Container platforms such as Docker can be used to separate the application from the underlying infrastructure.

What is Docker?

Docker allows you to separate your application and the underlying infrastructure. It bundles up your code with all its dependencies into one self-contained entity which will run on any supported system.

Advantages and disadvantages of Docker

Besides Docker being the most popular platform and the de facto standard for container images, the benefits of Docker are the benefits of containerization:

Advantages of Docker

Docker (i.e. Docker Compose) is portable, scalable, and has increased security from being isolated. This may be a different setup than what you’re used to, which leads us to some of Docker’s cons.

Disadvantages of Docker

It’s true, containerization does have some disadvantages. The naysayers point out that containers aren’t as fast as running on bare metal, the ecosystem in general is a little fragmented and all over the place, there are challenges with persistent storage since a container is modular and can be moved, and some applications (especially monolithic ones) don’t translate well to containers.

What is Kubernetes?

Kubernetes is the current standard in container orchestration systems.

Kubernetes makes it easy to manage and deploy containers at a large scale. Google developed it based on years of experience with containers in production. Kubernetes gives you a toolbox that automates scaling and operating containerized apps in production.
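As a taste of what that toolbox looks like (a minimal sketch, assuming kubectl is configured against a cluster), deploying and scaling a containerized app takes only a few commands:

# Create a Deployment from a container image.
$ kubectl create deployment web --image=nginx

# Scale it out to five replicas; Kubernetes schedules and restarts pods as needed.
$ kubectl scale deployment web --replicas=5

# Expose it behind a load-balanced Service.
$ kubectl expose deployment web --port=80 --type=LoadBalancer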

Advantages and disadvantages of Kubernetes

The advantages of Kubernetes are everything we’ve reviewed above, including:

  • load balancing
  • automatic packaging
  • self-healing systems
  • powerful for CI/CD methodologies
  • sophisticated orchestration of complex deployments

Kubernetes does have some disadvantages, which are the flipside of all the pros listed above:

  • can be too much of a solution / overkill for all but the larger deployments
  • can be slow to implement and have a learning curve
  • its sophistication brings added complexity to a project

What’s the difference between Kubernetes vs. Docker?

As referenced at the start of this post, the difference between Kubernetes and Docker is pretty big. When people mention Docker, they’re usually referring to Docker Compose which is used for creating individual containerized applications. Docker Compose has become the standard so people just throw around the term “Docker” for shorthand.

Kubernetes is an orchestration system where you can control all your container resources from one single management plane. It was originally created by Google to monitor and control large numbers (i.e., billions) of containers and is now open source.

Kubernetes vs. Docker Compose

When people ask “Kubernetes vs Docker” they really mean Docker Compose, the core Docker product which allows you to create containerized applications. When thinking about your options, a good question for yourself can be, “Why not both?” With the two technologies, you’ll be able to isolate applications as containerized systems and orchestrate huge numbers of them in a safe and dependable way.

Kubernetes vs. Docker Swarm

Docker Swarm is the container orchestration system from Docker, so comparing it to Kubernetes is much more applicable than saying “Kubernetes vs Docker Compose”. Docker Swarm is a more lightweight and simpler orchestration system than Kubernetes, but it lacks the strengths that Kubernetes has in automation and self-healing – features that can be important for huge deployments.
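To give a feel for Swarm’s lighter-weight approach (a sketch assuming Docker is installed on a single node), orchestrating a replicated service takes only a few commands:

# Turn the current Docker Engine into a single-node swarm manager.
$ docker swarm init

# Run a replicated service and scale it.
$ docker service create --name web --replicas 3 -p 80:80 nginx
$ docker service scale web=10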

Kubernetes vs Docker certifications

Kubernetes has more certifications than Docker, but since the two technologies are different it’s still useful to be familiar with the certs from both. Below are resources to get you familiar with Kubernetes and Docker certs.

Kubernetes certifications 

Certified Kubernetes Administrator (CKA) Exam Preparation

This learning path covers all the general domains of the CKA certification exam. This series of courses, labs, and exams is for cluster administrators who already have familiarity with Docker.

Certified Kubernetes Application Developer (CKAD) Exam Preparation

This learning path covers all domains of the CKAD exam. This is intended for Kubernetes application developers, but is also useful for anyone who wants to learn how to work with Kubernetes.

Certified Kubernetes Security Specialist (CKS)

CKS certification demonstrates that the person has command over securing the Kubernetes technology stack. CKA certification is required before you sit for the CKS.

Kubernetes and Cloud Native Associate (KCNA)

The KCNA is a foundational-level certification that helps learners progress to the more advanced Kubernetes certs: CKA, CKAD, and CKS.

Docker certification

Docker Certified Associate (DCA) Exam Preparation 

This learning path covers six Docker domains, allowing you to become proficient in Docker orchestration, security, etc. Ideally, Docker recommends 6-12 months of experience with its technology before you start a certification course.

Resources: Learn about Kubernetes, Docker, and microservices

Building, Deploying, and Running Containers in Production

This Learning Path will show you how to engineer and operate containers in production-like environments. The Learning Path begins by introducing you to Docker containers and then moves on to concepts such as Dockerfiles. You’ll learn the entire process for container development and deployment. This Learning Path ends with you building, deploying, and testing your Kubernetes cloud-native containerized application.

Introduction to Kubernetes

This learning path is for anyone who wants to manage containers at scale. By the time you’re done, you’ll be able to use Kubernetes to manage containers, as well as deploy a stateless and stateful application.

Docker in Depth

Learn all about Docker from individual containers to continuous deployment of an application in AWS.

Building, Deploying, and Running Containers in Production

With this learning path you’ll get a taste of engineering and operating containers in production-like environments, culminating with building, deploying, and testing your own Kubernetes cluster.

Python based Microservices – Go from Zero to Hero

Learn the basics of designing, building and deploying production-grade microservices using Python, Flask, and Docker containers. 

FAQ

Can I use Kubernetes without Docker?

Technically, yes — you can use Kubernetes without Docker as long as you have some other kind of container runtime. A couple of options are containerd and CRI-O, though Docker is by far the most popular container platform.

Is Kubernetes free?

Yes, it’s free if you use the completely open source version available on its GitHub repository. A lot of users end up preferring to use the Kubernetes offering that is bundled with other services, libraries, platforms, etc. on the big cloud offerings. Some even have more fully-managed offerings that are helpful but more expensive.

Should I learn Docker or Kubernetes first?

It’s a good idea to start with the smaller part of the system — the container (Docker and Docker Compose) — and then move up to the orchestration system (Kubernetes). But if you’re passionate and curious, by all means jump into both at the same time to see where your learning takes you.

Are containers outdated?

Not yet. They’re still valuable for microservices-based architectures, even though the underlying ideas have been around nearly as long as virtual machines. Some people thought containers would be replaced by serverless or no-code solutions, but those tend to solve different problems at different scales.

Docker Image Security: Get it in Your Sights

For organizations and individuals alike, the adoption of Docker is increasing exponentially with no signs of slowing down. Why is this? Because Docker provides a whole host of features that make it easy to create, deploy, and manage your applications. This useful technology is especially essential as your organization scales and your infrastructure grows. 

Want to jump right in and explore everything about Docker? Try the Cloud Academy Learning Path Docker in Depth. You’ll get courses, labs, quizzes, and exams — everything you need in one place minus the coffee.

Docker in Depth learning path

Technically speaking, Docker is a set of PaaS (Platform-as-a-Service) products that uses OS-level virtualization for delivering software in containers that communicate with one another, but when these container ecosystems aren’t designed cautiously and managed with care, they can lead to some pretty risky security issues.

While Docker offers a wide range of benefits, the security challenges that come with containerized environments shouldn’t be overlooked. To help you increase the security standards of your Docker-based containerized environments, we’ve outlined our best practices to maintain Docker image security below.

Docker Image Security Best Practices

Always Verify Images Prior to Using

By default, Docker allows you to pull Docker images without validating their authenticity first. This exposes you to Docker images with unverified author and origin details. To avoid this, use the following command to temporarily enable Docker Content Trust:

export DOCKER_CONTENT_TRUST=1

Always ensure that the image you are pulling is published by a reliable publisher and that no third parties have modified it. Docker-certified images are provided by trusted partners and curated by the official Docker Hub, and using certified images is critical when considering the code in your production environment. To add another layer of protection, Docker allows images to be signed using Docker Notary. Notary also verifies the image signature and prevents an image from running if it has an invalid signature.
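For example (the image name is a placeholder), with content trust enabled you can restrict pulls to signed images and inspect their signatures:

# With content trust enabled, pulls of unsigned images are refused.
$ export DOCKER_CONTENT_TRUST=1
$ docker pull yourusername/my-image:latest

# Inspect the signature data attached to an image tag.
$ docker trust inspect --pretty yourusername/my-image:latest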

Efficiently Handle the Container Life cycle

The way a user decides to handle the container life cycle greatly determines Docker image security. When updating a container, it is recommended not only to check the updated layer for security, but also to test the entire stack.

Establish a Thorough and Standardized Access Management Solution

When you have a strong access management solution for your Docker implementation — and really, across your cloud infrastructure — your containers can (and should) operate with minimal privileges and access in order to reduce risk. Organizations can use Role-Based Access Control (RBAC) and Active Directory solutions to manage permissions for the entire organization easily and effectively.

Find and Fix Open-Source Vulnerabilities

Open-source resources are extremely popular in Docker. The upside is that they are free to use and support modification to a great extent. However, the downside is potential undiscovered vulnerabilities, which can easily take your system down. Does that mean you shouldn’t use these free, readily available resources? Of course not! But you should do your due diligence first and use them with caution. One way to do this is by using tools that allow continuous scanning and monitoring of vulnerabilities across all in-use Docker image layers, such as Snyk.

Limit the System Resources Consumed by Containers

Another good Docker security practice is to implement limits on the system resources that are to be consumed by containers. This not only helps in reducing performance impacts, but also lessens the risk of DoS (denial of service) attacks.
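For instance (a sketch; the image name and values are illustrative), the limits can be set directly on docker run:

# Cap memory, CPU, and the number of processes a container may create.
$ docker run -d --name api \
    --memory=256m --cpus=0.5 --pids-limit=100 \
    my-api-image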

No Root User by Default/Least Privileged Access

When there is no USER specified in the Dockerfile, the default mechanism is to execute the container as the root user. This means that the running Docker container could potentially have root access on the Docker host. This is a problem! Running an application in the container as the root user increases the exposed attack surface and makes the application vulnerable to exploitation. To avoid this undesirable scenario, create both a dedicated user and a dedicated group in the Docker image. Next, use the USER directive in the Dockerfile to make sure that the container runs the application with the least privileged access possible, as is best practice.
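A minimal Dockerfile sketch of that pattern might look like this (assuming a Debian-based base image; the application files and app.py entry point are placeholders):

FROM python:3.11-slim

# Create a dedicated system group and user for the application.
RUN groupadd -r appgroup && useradd -r -g appgroup appuser

WORKDIR /app
COPY . .

# Run everything from here on — including the application itself — as the unprivileged user.
USER appuser

CMD ["python", "app.py"]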

Use a Linter

Using a linter not only helps in avoiding common mistakes, but also in establishing best practice guidelines that can be followed in an automated, convenient way. One recommended linter to use with Docker is Hadolint, which parses a Dockerfile and generates warnings for anything that doesn’t follow its best practice rules. It’s even more useful when used in conjunction with an Integrated Development Environment (IDE).
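For example, you can lint a Dockerfile without installing anything locally by running Hadolint from its official image:

# Feed the Dockerfile to the Hadolint container on stdin and print any warnings.
$ docker run --rm -i hadolint/hadolint < Dockerfile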

Use Multi-Stage Builds and Secrets Manager

Even when deleted, sensitive data like an SSH private key or tokens may still persist on the layer they were added to due to caching. This poses a great security risk. The solution is to keep secret information outside the Dockerfile by using multi-stage builds. Using Docker support for multi-stage builds allows fetching and managing private data in an intermediate layer that is disposed of later, ensuring that any sensitive information doesn’t reach the image build. You can also use Secrets Manager for the same purpose.
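The following is a minimal multi-stage sketch (assuming a Go project with a main package at the repository root; adapt it to your own stack). Anything present only in the builder stage — build tooling, caches, or credentials used to fetch private dependencies — never reaches the final image, because only the compiled artifact is copied forward:

# ---- Build stage: tooling and any build-time secrets stay here ----
FROM golang:1.21 AS builder
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /out/app .

# ---- Final stage: only the compiled binary is carried over ----
FROM alpine:3.19
COPY --from=builder /out/app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]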

You should also be wary of a recursive copy. When you have files containing sensitive information in the folder, either remove them or use .dockerignore.

Whenever Possible, Avoid ADD and Use COPY Instead

Copying files from the host into a Docker image at build time can be accomplished by using either the ADD or COPY command. When executed, either command will perform a recursive copy. Note that unlike the ADD command, the COPY command requires that you declare a destination file.

If the destination directory doesn’t exist for the ADD command, then (unlike with the COPY command) the ADD command will create a directory. When you copy resources with URLs, it is important to reference them over secured TLS connections (HTTPS) for enhanced security. In addition, the source/origins of the resources should be validated when using the COPY command.

When you copy archives, the ADD command automatically extracts the archive into the destination directory. This is something you do not want, so it’s to be avoided. The reason is that it increases the risk of “zip bombs” and “zip slip vulnerabilities.” The COPY command allows you to separate the addition of an archive from remote locations, as well as unpack it as different layers. This optimizes the image cache. Avoid using ADD whenever possible, as using the command makes Docker susceptible to attacks via remote URLs and Zip files.
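As an illustration (the URL and paths are placeholders, and curl is assumed to be available in the base image), prefer a plain COPY for local files and an explicit download-and-extract step instead of ADD for remote archives:

# COPY does nothing but copy — predictable and safe.
COPY requirements.txt /app/requirements.txt

# Instead of ADD with a URL, fetch over HTTPS and unpack explicitly,
# so each step is visible and the archive isn't auto-extracted behind your back.
RUN curl -fsSL https://example.com/tool.tar.gz -o /tmp/tool.tar.gz \
    && tar -xzf /tmp/tool.tar.gz -C /opt \
    && rm /tmp/tool.tar.gz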

Yes to Minimal Base Images

When the project doesn’t necessitate any general system libraries or utilities, it is better to pick a minimal base image rather than a full-fledged operating system. According to Snyk’s annual state of open-source security report in 2019, several of the popular Docker containers featured on the Docker Hub contain many known vulnerabilities. Minimal base images for Docker containers feature only the necessary system libraries and tools to run the project, and therefore minimize the attack surface. Less attack surface means less risk.

Docker Image Security Tips

Employ a Robust Third-Party Security Tool

When using containers from untrusted public repositories, it is vital to assess the degree of security risk first. Use a multi-purpose security tool that offers dev-to-production security features for assessing risk.

Pay Attention to Vulnerability

You should have a robust vulnerability management program that performs multiple checks during the entire container life cycle. It must include quality gates — these help detect access-related issues, as well as any weak points that could be exploited, across development-to-production environments.

Monitor and Assess Container Activity

Monitoring the container ecosystem to detect and manage any suspicious activity must not be overlooked. Container monitoring tools offer real-time reports, which are helpful for reacting quickly to security breaches.

Enable Docker Content Trust

Introduced in Docker 1.8, the Docker Content Trust feature helps in verifying the authenticity, integrity, and publication date of all Docker images from the Docker Hub Registry.

Use the Docker Bench for Security Script

For further securing your Docker server and containers, be sure to run the Docker Bench for Security script. This script checks for a plethora of configuration best practices when deploying Docker containers.
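Running it is straightforward (a sketch based on the project’s documented usage; it requires root on the Docker host):

# Fetch the script and run it against the local Docker installation.
$ git clone https://github.com/docker/docker-bench-security.git
$ cd docker-bench-security
$ sudo sh docker-bench-security.sh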

Be Careful When Using a Web Server 

Always check the parameters carefully when using a web server and API for creating containers. This will prevent you from creating undesirable, or even harmful, containers.

Avoid Using the Default Bridge Network When Using a Single-Host App with Networking

When using a single-host app with networking, the use of the default bridge network must be avoided. If you do use the default bridge network and then publish a port, all containers on the bridge network become undesirably accessible. Additionally,  other technical disadvantages make it a non-recommended practice for production use.

More Important Docker Image Security Tips:

  • When there is only the need to read from volumes, mount them as read-only. There are several ways to do this, and you can choose one that best fits your process and requirements.
  • Use Docker containers for running other processes on the same server that you are using for your Docker project(s).
  • Secure API endpoints with HTTPS or SSH when exposing a REST API.
  • Never store sensitive data in a container, but only in volumes.
  • For serving, use Let’s Encrypt for HTTPS certificates.
  • Keep your Docker, system libraries, and utilities up-to-date for the latest bug fixes and security enhancements.
  • Consider using Docker Enterprise when dealing with multiple or large teams and/or many Docker containers.

In conclusion

Docker is, without a doubt, one of the best options when dealing with cloud-powered technologies and applications that are meant to be readily deployable and quick to act. You need to double-check your security measures, though, to make the most out of containerized environments.

Hopefully, you will find the security practices and tips put together in this article useful. Always remember, it’s essential to continuously upgrade your security standards to keep your applications, data, and—most importantly—your clients safe at all times.

Top 20 Open Source Tools for DevOps Success

Open source tools perform a very specific task, and the source code is openly published for use or modification free of charge. I’ve written about DevOps multiple times on this blog. I reiterate the point that DevOps is not about specific tools. It’s a philosophy for building and improving software value streams, and there are three principles: flow, feedback, learning.

The philosophy is simple: Optimize for fast flow from development to production, integrate feedback from production into development, and continuously experiment to improve that process. These principles manifest themselves in software teams as continuous delivery (and hopefully deployment), highly integrated telemetry, and a culture driven by learning and experimentation. That said, certain tools make achieving flow, feedback, and learning easier. You don’t have to shell out big bucks to third-party vendors, though. You can build a DevOps value stream with established open source tools.

Let’s start with the principle of flow and what the open source community has to offer for supporting continuous delivery. In this article, we’ll cover the top 20 open source tools to achieve DevOps success. To dive deeper into deployment pipelines and the roles different tools play, check out Cloud Academy’s DevOps – Continuous Integration and Continuous Delivery (CI/CD) Tools and Services Learning Path.

DevOps Playbook

Open Source Continuous Delivery

1. Gitlab is a great project for source control management, configuring continuous integration, and managing deployments. Gitlab offers a unified interface for continuous integration and deployment branded as “Auto DevOps.” Team members can trigger deploys, have dedicated environments created automatically for a pull request, and see test results, all within the same system.

2. Kubernetes and Docker. Docker and associated tools like docker-compose make it easy to maintain development environments and work with any language or framework. Kubernetes is the go-to container orchestration platform today, so look here first for deploying containerized applications to production (and dev, test, staging, etc.).

3. Spinnaker is designed for continuous delivery. Spinnaker removes grunt work from packaging and deploying applications. It has built-in support for continuous delivery practices like canary deployments, blue-green deploys, and even percentage-based rollouts. Spinnaker abstracts away the underlying infrastructure so you can build a continuous delivery pipeline on AWS, GCP, or even on your own Kubernetes cluster.

Infrastructure-as-Code

The underlying infrastructure must be created and configured regardless of whether it lives on a cloud provider or underneath a container orchestrator. Infrastructure-as-code is the DevOps way.

4. Terraform (from Hashicorp) is the best tool for open source infrastructure-as-code. It supports AWS, GCP, Azure, DigitalOcean, and more using a declarative language. Terraform handles the underlying infrastructure such as EC2 instances, networking, and load balancers. It’s not intended to configure software running on that infrastructure. That’s where configuration management and immutable infrastructure tools have a role to play.
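
As a minimal illustration of that declarative style, the sketch below describes a single EC2 instance; the region and AMI ID are placeholders, not recommendations:

    # main.tf: one EC2 instance, declared rather than scripted
    provider "aws" {
      region = "us-east-1"
    }

    resource "aws_instance" "web" {
      ami           = "ami-0123456789abcdef0"   # placeholder AMI ID
      instance_type = "t3.micro"
    }

    $ terraform init    # download the AWS provider
    $ terraform plan    # preview the changes
    $ terraform apply   # create the instance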

5. Packer (also from Hashicorp) is a tool for building immutable infrastructure. Packer can build Docker images, Amazon Machine Images, and other virtual machine formats. Its flexibility makes it an easy choice for the “package” step in cloud-based deployment processes. You can even integrate Packer and Spinnaker for golden image deployments.
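
A minimal Packer template for an Amazon Machine Image might look like the sketch below; the source AMI, SSH user, and image name are placeholder assumptions:

    {
      "builders": [{
        "type": "amazon-ebs",
        "region": "us-east-1",
        "source_ami": "ami-0123456789abcdef0",
        "instance_type": "t3.micro",
        "ssh_username": "ubuntu",
        "ami_name": "my-app-{{timestamp}}"
      }]
    }

    $ packer validate template.json
    $ packer build template.json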

6-9. Ansible, Chef, Puppet, and SaltStack are configuration management tools. Each varies slightly in design and intended use. They’re all intended to configure mutable state across your infrastructure. The odds are you’ll end up mixing Terraform, Ansible, and Packer for a complete infrastructure-as-code solution. Cloud Academy’s Cloud Configuration Management Tools Learning Path gives you an overview of configuration management, and then introduces you to three of the most common tools used today: Ansible, Puppet, and Chef. Cloud Academy’s Ansible Learning Path, developed in partnership with Ansible, teaches configuration management and application deployment. It demonstrates how Ansible’s flexibility can be used to solve common DevOps problems.

DevOps: Open Source Tools & Cloud Configuration Management Learning Path
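
For a taste of the configuration management style these tools share, here is a minimal, hypothetical Ansible playbook; the host group, package, and inventory file are illustrative only:

    # site.yml: install and start nginx on the "web" host group
    - hosts: web
      become: yes
      tasks:
        - name: Install nginx
          apt:
            name: nginx
            state: present
        - name: Ensure nginx is running
          service:
            name: nginx
            state: started

    $ ansible-playbook -i inventory.ini site.yml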

Open Source Telemetry

The SDLC really starts when code enters production. The DevOps principle of feedback calls for using production telemetry to inform development work. In other words: use real-time operational data such as time series data, logs, and alerts to understand reality and act accordingly. The FOSS community supports multiple projects to bring telemetry into your day-to-day work.

10. Prometheus is a Cloud Native Computing Foundation (CNCF) project for managing time series data and alerts. It’s integrated into Kubernetes, another CNCF project, as well. In fact, many of the CNCF projects prefer Prometheus for metric data. Support is not limited to CNCF projects either. Prometheus is a strong choice for many different infrastructures because it uses an open API format, includes alert support, and integrates with many common components.
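
As a minimal sketch of how Prometheus knows what to pull from, a scrape configuration can be as small as the following; the job name and target address are assumptions:

    # prometheus.yml: scrape one hypothetical application endpoint every 15 seconds
    global:
      scrape_interval: 15s

    scrape_configs:
      - job_name: app
        static_configs:
          - targets: ['app.example.internal:9102']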

11. Statsd is a Prometheus alternative for time series data. Prometheus uses a pull approach, which is good for detecting when a monitored system is unavailable but requires registering new systems with Prometheus. Statsd, on the other hand, uses a push model: any system can push data to a statsd server, with data sent over UDP. Statsd, unlike Prometheus, only supports time series data, so you’ll need another tool to manage alerts.
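
Because the statsd line protocol is plain text over UDP, pushing a metric needs nothing more than a one-liner; the metric name, hostname, and default port 8125 below are assumptions:

    # Increment a counter on a statsd server listening on UDP port 8125
    $ echo -n "deploys.success:1|c" | nc -u -w1 statsd.example.internal 8125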

12. Grafana is for data visualization. Projects like Prometheus and statsd only handle data collection; they rely on other tools for visualization. This is where Grafana comes in. Grafana is a flexible visualization system with integrations for popular data sources like Prometheus, Statsd, and AWS CloudWatch. Grafana dashboards are just text files, which makes them a natural fit for infrastructure-as-code practices.

13. The Elastic Stack is a complete solution for time series data and logs. The Elastic Stack uses Elasticsearch for time series data and log storage, paired with Kibana for visualization. Logstash collects and transforms logs from various components, like web server logs or Redis server logs, into a standard format.

14. Fluentd is another CNCF telemetry project. It acts as a unified logging layer for ingestion, transformation, and routing. Data streams may be forwarded to multiple destinations, like statsd for real-time interactions or S3 for archiving. Fluentd supports many data sources and data outputs. Projects like Fluentd are especially useful for connecting disparate systems to a standard set of upstream tools.

15. Jaeger is a distributed request tracing project compatible with OpenTracing. Traces track individual interactions within a system across all instrumented components, along with latency and other metadata. This is a must for microservice and other distributed architectures, since engineers can pinpoint where, what, and when something happened.

Expanding Out

The third way of DevOps calls for continuous improvement through experimentation and learning. Once the continuous delivery pipeline is established, along with telemetry, teams can work to improve velocity, quality, and customer satisfaction. Here are some projects that help teams improve different aspects of their process.

16. Chaos Monkey is a project by Netflix to introduce chaos into running systems. The idea is to introduce faults into a system to increase its reliability and durability. This is part of the principles of chaos engineering and is further described in Release It! and Google’s Site Reliability Engineering book. The idea of willingly breaking your production environment may sound foreign, but doing so will reveal unknowns and train teams to design away possible failure scenarios. You don’t have to go all in at once either. You can set rules and restrictions so you don’t destroy your production environment until you’re ready.

17. Vault by Hashicorp is a tool for securing, storing, and controlling access to tokens, passwords, certificates, encryption keys and other sensitive data using a UI, CLI, or HTTP API. It’s great for info-sec minded teams looking for a better solution than text files or environment variables.
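
A quick sketch of the CLI workflow, assuming a throwaway dev-mode server (the secret path and key are placeholders):

    $ vault server -dev                        # in-memory server, for experimenting only
    # in another terminal, pointing the CLI at the dev server
    # and authenticating with the root token it printed:
    $ export VAULT_ADDR=http://127.0.0.1:8200
    $ vault kv put secret/myapp db_password=s3cr3t
    $ vault kv get secret/myapp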

Building and deploying software

You’ll encounter some of these tools building and deploying software. This list isn’t exhaustive by any means.

18. Nomad is a lightweight Kubernetes alternative.

19. GoCD is another deployment pipeline and CI option.

20. The Serverless Framework opens the door to an entirely new architecture. Just consider the list of CNCF projects: you’re likely to uncover tools for scenarios you never considered. DevOps-focused teams will assuredly use a mix of FOSS and proprietary software when building their systems. Engineers must understand how the different projects fit into their overall architecture and leverage them for best effect.

Also keep in mind that these projects are not infrastructure specific. You can use them for your on-premises infrastructure, AWS, GCP, or Azure systems. Cloud Academy’s Terraform Learning Path teaches students to achieve DevOps success with Terraform and infrastructure-as-code, covering AWS and Azure. Engineers can learn these tools and keep their skills portable across different setups.

Don’t get lost in tooling though. You can achieve DevOps success irrespective of the underlying tools if the right culture is in place — check out the DevOps Playbook – Moving to a DevOps Culture. The secret is to build on the philosophy that values flow, feedback, and learning, and to realize those principles through practices and tools. Learn the ideas, build a culture, and the rest will sort itself out.

The post Top 20 Open Source Tools for DevOps Success appeared first on Cloud Academy.

What is Kubernetes? An Introductory Overview https://cloudacademy.com/blog/what-is-kubernetes/ https://cloudacademy.com/blog/what-is-kubernetes/#respond Wed, 12 Jun 2019 00:44:48 +0000 https://cloudacademy.com/blog/?p=17869 In part 1 of my webinar series on Kubernetes, I introduced Kubernetes at a high level with hands-on demos aiming to answer the question, “What is Kubernetes?” After polling our audience, we found that most of the webinar attendees had never used Kubernetes before, or had only been exposed to...

The post What is Kubernetes? An Introductory Overview appeared first on Cloud Academy.

In part 1 of my webinar series on Kubernetes, I introduced Kubernetes at a high level with hands-on demos aiming to answer the question, “What is Kubernetes?” After polling our audience, we found that most of the webinar attendees had never used Kubernetes before, or had only been exposed to it through demos. This post is meant to complement the session with more introductory-level information about Kubernetes.

Containers have been helping teams of all sizes to solve issues with consistency, scalability, and security. Using containers, such as Docker, allows you to separate the application from the underlying infrastructure. Gaining that separation requires some new tools in order to get the most value out of containers, and one of the most popular tools used for container management and orchestration is Kubernetes. Kubernetes is an open-source container orchestration tool designed to automate deploying, scaling, and operating containerized applications.

Kubernetes was born from Google’s 15 years of experience running production workloads. It is designed to grow from tens to thousands, or even millions, of containers. Kubernetes is also container-runtime agnostic: you can actually use Kubernetes to manage rkt (Rocket) containers today. This article covers the basics about Kubernetes, but you can deep dive into the tool with Cloud Academy’s Intro to Kubernetes Learning Path.

Intro to Kubernetes Course

What can Kubernetes do?

Kubernetes’ features provide everything you need to deploy containerized applications. Here are the highlights:

  • Container Deployments and Rollout Control. Describe your containers and how many you want with a “Deployment.” Kubernetes will keep those containers running and handle deploying changes (such as updating the image or changing environment variables) with a “rollout.” You can pause, resume, and rollback changes as you like.
  • Resource Bin Packing. You can declare minimum and maximum compute resources (CPU and memory) for your containers. Kubernetes will slot your containers in wherever they fit. This increases your compute efficiency and ultimately lowers costs.
  • Built-in Service Discovery and Autoscaling. Kubernetes can automatically expose your containers to the internet or other containers in the cluster. It automatically load balances traffic across matching containers. Kubernetes supports service discovery via environment variables and DNS, out of the box. You can also configure CPU-based autoscaling for containers for increased resource utilization.
  • Heterogeneous Clusters. Kubernetes runs anywhere. You can build your Kubernetes cluster from a mix of virtual machines (VMs) running in the cloud, on-premises, or on bare metal in your datacenter. Simply choose the composition according to your requirements.
  • Persistent Storage. Kubernetes includes support for persistent storage connected to stateless application containers. There is support for Amazon Web Services EBS, Google Cloud Platform persistent disks, and many, many more.
  • High Availability Features. Kubernetes is planet scale. This requires special attention to high-availability features such as multi-master or cluster federation. Cluster federation allows linking clusters together so that if one cluster goes down, containers can automatically move to another cluster.

These key features make Kubernetes well suited for running different application architectures, from monolithic web applications to highly distributed microservice applications, and even batch-driven applications.

How does Kubernetes compare to other tools?

Container orchestration is a deservedly popular trend in cloud computing. The industry first focused on driving container adoption, then moved on to deploying containers in production at scale. There are many useful tools in this area. To learn about some of the other tools in this space, we will explore a few of them by comparing their features to Kubernetes.

The key players here are Apache Mesos/DCOS, Amazon’s ECS, and Docker’s Swarm Mode. Each has its own niche and unique strengths.

DCOS (or DataCenter OS) is similar to Kubernetes in many ways. DCOS pools compute resources into a uniform task pool. The big difference is that DCOS targets many different types of workloads, including but not limited to, containerized applications. This makes DCOS attractive for organizations that are not using containers for all of their applications. DCOS also includes a kind of package manager to easily deploy systems like Kafka or Spark. You can even run Kubernetes on DCOS given its flexibility for different types of workloads.

ECS is AWS’s entry in container orchestration. ECS allows you to create pools of EC2 instances and uses API calls to orchestrate containers across them. It’s only available inside AWS and is less feature-complete compared to open source solutions. It may be useful for those deep into the AWS ecosystem.

Docker’s Swarm Mode is the official orchestration tool from Docker Inc. Swarm Mode builds a cluster from multiple Docker hosts. It offers similar features compared to Kubernetes or DCOS with one notable exception. Swarm Mode is the only tool to work natively with the docker command. This means that associated tools like docker-compose can target Swarm Mode clusters without any changes.

Here are my general recommendations:

  • Use Kubernetes if you’re working only with containerized applications, whether or not they are all Docker containers.
  • If you have a mix of container and non-containerized applications, use DCOS.
  • Use ECS if you enjoy AWS products and first-party integrations.
  • If you want a first party solution or direct integration with the Docker toolchain, use Docker Swarm.

Now you have some context and understanding of what Kubernetes can do for you. The demo in the webinar covered the key features. Today, I’ll be able to cover some of the details that we didn’t have time for in the webinar session. We will start by introducing some Kubernetes vocabulary and architecture.

What is Kubernetes?

Kubernetes is a distributed system. It introduces its own vernacular to the orchestration space. Therefore, understanding the vernacular and architecture is crucial.

Terminology

Kubernetes “clusters” are composed of “nodes.” The term “cluster” refers to the nodes in aggregate: the entire running system. A node is a worker machine within Kubernetes (previously known as a “minion”). A node may be a VM or a physical machine. Each node has software configured to run containers managed by Kubernetes’ control plane. The control plane is the set of APIs and software (such as kubectl) that Kubernetes users interact with. The control plane services run on master nodes. Clusters may have multiple masters for high availability scenarios.

The control plane schedules containers onto nodes. In this context, the term “scheduling” does not refer to time. Think of it from a kernel perspective: The kernel “schedules” processes onto the CPU according to many factors. Certain processes need more or less compute, or have different quality-of-service rules. The scheduler does its best to ensure that every process gets CPU time. In this case, scheduling means deciding where to run containers according to factors like per-node hardware constraints and the requested CPU/Memory.

Architecture

Containers are grouped into “pods.” Pods may include one or more containers. All containers in a pod run on the same node. The “pod” is the lowest building block in Kubernetes. More complex (and useful) abstractions come on top of “pods.”

“Services” define networking rules for exposing pods to other pods or exposing pods to the public internet. Kubernetes uses “deployments” to manage deploying configuration changes to running pods and horizontal scaling. A deployment is a template for creating pods. Deployments are scaled horizontally by creating more “replica” pods from the template. Changes to the deployment template trigger a rollout. Kubernetes uses rolling deploys to apply changes to all running pods in a deployment.

Kubernetes provides two ways to interact with the control plane. The kubectl command is the primary way to do anything with Kubernetes. There is also a web UI with basic functionality.
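
To ground those terms, here is a minimal, hypothetical Deployment and Service pair; the names, labels, image, and ports are placeholders rather than anything from the webinar:

    # server.yml: two replica pods behind an internal service
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: server
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: server
      template:
        metadata:
          labels:
            app: server
        spec:
          containers:
            - name: server
              image: example/server:1.0   # placeholder image
              ports:
                - containerPort: 8080
    ---
    apiVersion: v1
    kind: Service
    metadata:
      name: server
    spec:
      selector:
        app: server
      ports:
        - port: 80
          targetPort: 8080

    $ kubectl apply -f server.yml
    $ kubectl rollout status deployment/server

Changing the image tag in the template and re-applying the file is what triggers a rollout.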

Most of these terms were introduced in some way during the webinar. I suggest reading the Kubernetes glossary for more information.

What is Kubernetes? Demo walkthrough

In the webinar demo, we showed how to deploy a sample application. The sample application is a boiled-down microservice. It includes just enough to demonstrate features that real applications require.

There wasn’t enough time during the session to include everything I planned, so here is an outline for what did make it into the demo:

  • Interacting with Kubernetes with kubectl
  • Creating namespaces
  • Creating deployments
  • Connecting pods with services
  • Service discovery via environment variables
  • Horizontally scaling with replicas
  • Triggering a new rollout
  • Pausing and resuming a rollout
  • Accessing container logs
  • Configuring probes

I suggest keeping this post handy while watching the webinar for greater insight into the demo.

The “server” container is a simple Node.js application. It accepts a POST request to increment a counter and a GET request to retrieve the counter. The counter is stored in redis. The “poller” container continually makes the GET request to the server to print the counter’s value. The “counter” container starts a loop and makes a POST request to the server to increment the counter with random values.

I used Google Container Engine for the demo. You can follow along with Minikube if you like. All you need is a running Kubernetes cluster and access to the kubectl command.

How do I deploy Kubernetes?

First, I created a Kubernetes namespace to hold all the different Kubernetes resources for the demo. While it is not strictly required in this case, I opted for this because it demonstrates how to create a namespace, and using namespaces is a general best practice.

I created a deployment for redis with one replica. There should only be one redis container running. Running multiple replicas, thus multiple databases, would create multiple sources of truth. This is a stateful data tier. It does not scale horizontally. Then, a data tier service was created. The data tier service matches containers in the data pod and exposes them via an internal IP and port.

The same process repeats for the app tier. A Kubernetes deployment describes the server container. The redis location is specified via an environment variable. Kubernetes sets environment variables for each service on all containers in the same namespace. The server uses REDIS_URL to specify the host, port, and other information. Kubernetes supports environment variable interpolation with $() syntax. The demo shows how to compose application-specific environment variables from the environment variables Kubernetes provides. An app tier service is created as well.

Next comes the support tier. The support tier includes the counter and poller. Another deployment is created for this tier. Both containers find the server container via the API_URL environment variable. This value is composed of the app tier service host and port.
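
A container spec fragment using that interpolation might look like the following sketch; the variable and service names are assumptions that echo the tiers described above, and they only resolve if the app tier service exists before these pods start:

    # Fragment of a container spec: compose API_URL from the variables
    # Kubernetes injects for a (hypothetical) service named "app-tier".
    env:
      - name: API_URL
        value: http://$(APP_TIER_SERVICE_HOST):$(APP_TIER_SERVICE_PORT)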

At this point, we have a running application. We can access logs via the kubectl logs command, and we can scale the application up and down. The demo configures both types of Kubernetes probes (aka “health checks”). The liveness probe tests that the server accepts HTTP requests. The readiness probe tests that the server is up and has a connection to redis and is thus “ready” to serve API requests.
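
The exact commands aren't reproduced in this post, but the flow maps onto kubectl roughly like the sketch below; the namespace and deployment names are hypothetical stand-ins for the demo's actual resources:

    $ kubectl create namespace demo
    $ kubectl -n demo get pods                             # inspect the running tiers
    $ kubectl -n demo logs deployment/app-tier --tail=20   # recent server logs
    $ kubectl -n demo scale deployment/support-tier --replicas=3
    $ kubectl -n demo describe pod -l app=app-tier         # shows probe configuration and status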

Security: It’s never too early to start

Even if you’re new to all of this, it’s a good idea to lay a foundation of robust security when digging into an important service like Kubernetes.

You can start with some high-level best practices such as:

  • Authenticate securely: use OpenID Connect tokens, which are based on OAuth 2.0, an open and modern form of authentication.
  • Role-Based Access Control (RBAC): use RBAC here and everywhere in your cloud deployments, and always consider who actually needs access, especially at the admin level (a minimal sketch follows this list).
  • Secure all the traffic: whether it’s with network policies or at the more basic pod level via pod security contexts, it’s important to control access within and throughout the cluster via the network.
  • Stay up-to-date: an easy habit to adopt immediately is to conduct updates at a regular cadence in order to help maintain your security posture.
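
As a starting point for the RBAC item above, here is a minimal, hypothetical read-only Role and RoleBinding; the namespace, user, and resource list are placeholders to adapt to your own cluster:

    # pod-reader.yml: let one user read pods and their logs in one namespace
    apiVersion: rbac.authorization.k8s.io/v1
    kind: Role
    metadata:
      namespace: demo
      name: pod-reader
    rules:
      - apiGroups: [""]
        resources: ["pods", "pods/log"]
        verbs: ["get", "list", "watch"]
    ---
    apiVersion: rbac.authorization.k8s.io/v1
    kind: RoleBinding
    metadata:
      namespace: demo
      name: read-pods
    subjects:
      - kind: User
        name: jane                            # placeholder user
        apiGroup: rbac.authorization.k8s.io
    roleRef:
      kind: Role
      name: pod-reader
      apiGroup: rbac.authorization.k8s.io

    $ kubectl apply -f pod-reader.yml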

What is Kubernetes? Part 2 – stay tuned

Part 1 of the series focused on answering the question “What is Kubernetes?” and introducing core concepts in a hands-on demo. Part 2 covers using Kubernetes in production operations. You’ll gain insight into how to use Kubernetes in a live ecosystem with real-world complications. I hope to see you there!

The post What is Kubernetes? An Introductory Overview appeared first on Cloud Academy.

New on Cloud Academy, January ’18: Security, Machine Learning, Containers, and more https://cloudacademy.com/blog/new-on-cloud-academy-in-january-security-machine-learning-containers-and-more/ https://cloudacademy.com/blog/new-on-cloud-academy-in-january-security-machine-learning-containers-and-more/#respond Thu, 18 Jan 2018 08:00:04 +0000 https://cloudacademy.com/blog/?p=22787   LEARNING PATHS Introduction to Kubernetes Kubernetes allows you to deploy and manage containers at scale. Created by Google, and now supported by Azure, AWS, and Docker, Kubernetes is the container orchestration platform of choice for many deployments. For teams deploying containerized applications, this learning path will serve as an...

The post New on Cloud Academy, January ’18: Security, Machine Learning, Containers, and more appeared first on Cloud Academy.


LEARNING PATHS

Introduction to Kubernetes
Kubernetes allows you to deploy and manage containers at scale. Created by Google, and now supported by Azure, AWS, and Docker, Kubernetes is the container orchestration platform of choice for many deployments. For teams deploying containerized applications, this learning path will serve as an introduction to the Kubernetes ecosystem and a primer for preparing for production. You will work alongside us as we explore key features by deploying a sample application, and you will be able to practice deploying stateful and stateless applications in two hands-on labs.

COURSES

Introduction to Azure Machine Learning Studio
Azure Machine Learning Studio is machine learning at its most accessible. With this web-based software, you can train and deploy machine learning models using a drag-and-drop interface, without any coding whatsoever. This course will help teams get started using Machine Learning Studio. You will learn how to prepare your data, train a machine learning model, and deploy it as a predictive web service across a series of hands-on demos.

Introduction to VMware Cloud on AWS
VMware Cloud on AWS allows you to seamlessly transition your VM workloads to the AWS cloud to take advantage of on-demand resourcing, scalability, flexibility, security and other benefits of the public cloud. With this course, business managers evaluating a hybrid cloud solution will be able to understand how VMware’s private on-premises architecture works with AWS and the benefits that this combination offers the enterprise.

Getting Started with Migrating to the Cloud
Now that you’ve decided to migrate to the cloud, it’s time to shift from theory to practice. This course features hands-on strategies, techniques, and best practices that teams can apply in migrating business applications to public cloud services. Your teams will get practical guidance for building your migration business case, moving up the maturity curve, creating your migration roadmap, and more.

An Overview of AWS Trusted Advisor
As your infrastructure grows, how can you make sure you’re deploying your resources in the best way while ensuring tight security and resiliency against failure? AWS Trusted Advisor recommends improvements across your account to help optimize and hone your environment based on AWS best practices. In this course, you’ll get the hands-on practice and actionable knowledge you need to start using AWS Trusted Advisor to improve your AWS infrastructure.

HANDS-ON LABS

Azure Key Vault and Disk Encryption
Azure Key Vault is a service for managing and encrypting keys, secrets, and digital certificates that streamlines the key management process. With Key Vault, you can encrypt keys and secrets (authentication keys, .PFX files, passwords, and more) using keys protected by hardware security modules (HSMs). In this hands-on lab, you will use PowerShell to build the Azure Key Vault to store keys and secrets used to encrypt an Azure Virtual Machine.

Diagnose Cancer with an Amazon Machine Learning Classifier
Can a computer predict a diagnosis in a way that is faster and less expensive? Researchers and teams working with binary classification models will find an effective tool in Amazon Machine Learning Classifier. In this hands-on lab, you will use the service to train a model with medical data, evaluate the model’s performance, and use the model to make diagnoses for predictions in real time.

Using an MXNet Neural Network to Style Images
The AWS Deep Learning AMI has everything you need to start building an AI system. The MXNet deep learning framework on AWS supports training and deployment of neural networks on a variety of devices. In this hands-on lab, you will use the AWS Deep Learning AMI and a GPU instance to perform neural style transfers and examine performance and cost metrics in Amazon CloudWatch.

Serverless Web Development with Python for AWS
Cloud providers make it easy for software engineers to focus on writing their code without having to focus on the underlying server. For DevOps Engineers, Developers, or Site Reliability Engineers experimenting with serverless, AWS offers a variety of services for creating fully serverless applications. In this lab, you will practice serverless web development with Python by testing and deploying a multi-user to-do list application that uses the AWS Serverless Application Model, Amazon Cognito, and DynamoDB.

Manage Access to Azure with Role-Based Access Control
Limiting access to resources based on a user’s role is a simple way to ensure security within your environment. Microsoft Azure provides fine-grained role-based access control (RBAC) mechanisms to secure your resources. In this hands-on lab, you will follow the principle of least privilege for users as you manage access to Azure with RBAC. You will use Azure PowerShell to create a custom role, learn how to assign roles to users, and get tips on how to define your own custom roles.

The post New on Cloud Academy, January ’18: Security, Machine Learning, Containers, and more appeared first on Cloud Academy.

8 Hands-on Labs to Master Docker in the Enterprise https://cloudacademy.com/blog/8-hands-on-labs-to-master-docker-in-the-enterprise/ https://cloudacademy.com/blog/8-hands-on-labs-to-master-docker-in-the-enterprise/#respond Thu, 16 Nov 2017 08:00:17 +0000 https://cloudacademy.com/blog/?p=22280 Docker containers are known for bringing a level of ease and portability to the process of developing and deploying applications. Where developers have embraced them for development and testing, enterprise DevOps professionals consider container technologies like Docker to be a strategic path toward faster time to production and cloud-native applications....

The post 8 Hands-on Labs to Master Docker in the Enterprise appeared first on Cloud Academy.

Docker containers are known for bringing a level of ease and portability to the process of developing and deploying applications. Where developers have embraced them for development and testing, enterprise DevOps professionals consider container technologies like Docker to be a strategic path toward faster time to production and cloud-native applications.

Now is a great time for teams to explore the potential of containers. Both Docker and DC/OS parent Mesosphere have recently announced their native support of Kubernetes for container orchestration. Using these tools, you can quickly and predictably deploy your applications, scale applications to meet demand, roll out updates with zero downtime, and utilize the underlying hardware more efficiently. The container orchestration engine you choose can run in any cloud or on-premises, giving you the flexibility to migrate or leverage multiple clouds.

Our Hands-on Labs are a great way to get comfortable using containers before deploying them in your own environment. Start with our beginner-level labs to launch your first Docker container in AWS or Azure. Next, practice using Docker for development and testing. Then, use Marathon to deploy Docker containers on DC/OS. Finally, practice managing clusters using Kubernetes and Docker Swarm.

Get started with Docker on AWS and Azure

In this first series of Hands-on Labs, you’ll be able to try Docker using AWS (Linux) or Microsoft Azure (Windows and Linux). We will guide you through the process of setting up Docker, running your first Docker container, and creating your first Docker images and Dockerfiles.

Getting Started with Docker on Linux for AWS

Getting Started with Docker on Windows

Getting Started with Docker on Linux for Azure

Docker for Software Development, Testing, & Delivery (Enterprise Lab)

In this lab, you will learn how you can use Docker to simplify your software processes during development, test, and delivery. We’ll containerize an existing application, use Docker Compose to create a multi-container environment, create a Docker Registry, and release updates to the production environment.

Using Marathon to Schedule Docker Containers on DC/OS

In this Hands-on Lab, you will deploy a Marathon application using a Docker container. Using the DC/OS command-line interface (CLI), you will use additional Docker containers to load balance, scale the application, and persist the application data in a database.

Manage Your Clusters with Docker Swarm (Enterprise Lab)

Docker Swarm allows developers to build and ship multi-container distributed applications. You’ll start with a pre-created cluster that is Swarm-ready and use Swarm to manage it, learning the common commands and capabilities. We’ll show you how to use Docker Stack to deploy multi-service applications to your Swarm.

Deploy a Stateless Application in a Kubernetes Cluster (Enterprise Lab)

With Kubernetes, you can automatically deploy, scale, rollout updates, rollback, and recover container applications. In this Lab, you will learn how to deploy a stateless application in a Kubernetes cluster that you build from the ground up using Linux virtual machines.

Deploy a Stateful Application in a Kubernetes Cluster (Enterprise Lab)

Applications that are “stateful” have a memory of the past. For example, if an enterprise wants to migrate a legacy application that it hasn’t had time to re-architect, it can be helpful to deploy it as a stateful application in the meantime. As adoption of Kubernetes for stateful applications grows, your teams will want to know when to use Kubernetes for stateful workloads, and how to use it to successfully move and deploy applications.

The post 8 Hands-on Labs to Master Docker in the Enterprise appeared first on Cloud Academy.
