Docker Swarm and Kubernetes

Docker Swarm

Docker Swarm is a container orchestration tool for clustering and scheduling Docker containers. With Swarm, IT administrators and developers can establish and manage a cluster of Docker nodes as a single virtual system.

Docker Swarm lets developers join multiple physical or virtual machines into a cluster; these individual
machines are called nodes. Administrators and developers can use Docker Swarm to start Docker containers,
distribute them across many hosts, manage the resources of each node, and increase the availability
of applications across the system.
Swarm mode is a native feature of Docker Engine, which sits between the operating system and container images.
Docker Engine 1.12 and later versions incorporate the orchestration capabilities of Docker Swarm through the
use of swarm mode. Docker Swarm uses the standard Docker API to interface with other Docker tools, such as
Docker Machine.

Features of Docker Swarm

Some of the most essential features of Docker Swarm are:

  • Decentralized access: Swarm makes it easy for teams to access and manage the environment.

  • High security: All communication between the manager and worker nodes within the Swarm is highly secure.

  • Automatic load balancing: Traffic is load-balanced automatically within your environment, and you can script that behavior into how you structure the Swarm environment.

  • High scalability: Load balancing turns the Swarm environment into a highly scalable infrastructure.

  • Roll back a task: Swarm allows you to roll back services to previous, known-good versions.

Docker Swarm Architecture

There are two types of nodes in Docker Swarm:

  1. Manager node: Carries out and oversees cluster-level duties. 

  2. Worker node: Receives and completes the tasks set by the manager node.


A swarm can run with a single manager node, but a worker node cannot be created without a manager node.
Docker recommends a maximum of seven manager nodes per swarm, and an odd number is preferred so the managers
can maintain a quorum. Increasing the number of manager nodes does not increase scalability; in fact, more
managers can reduce write performance, because every change to cluster state must be agreed upon by a
majority of them.
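
As a sketch of how node roles are managed in practice (this assumes a swarm is already initialized, and `worker-1` is a hypothetical node name):

```shell
# List all nodes in the swarm and their roles (run on a manager)
docker node ls

# Promote a worker to a manager, e.g. to restore quorum after a failure
docker node promote worker-1

# Demote a manager back to a worker
docker node demote worker-1
```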


Docker Swarm Filters

The following are the Docker Swarm filters:


Constraint: Restricts which Docker hosts containers can be scheduled on, based on conditions such as node labels and attributes.

Drain: When a node's availability is set to drain, the swarm stops allocating new tasks or replicas to that node.

Port: Prevents port conflicts between programs by ensuring that two services publishing the same port are placed on distinct nodes.
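
A minimal sketch of applying these filters in practice (the node name `node-2`, the label `region`, and the service name `web` are all hypothetical):

```shell
# Constraint: label a node, then restrict a service to nodes with that label
docker node update --label-add region=east node-2
docker service create --name web --constraint node.labels.region==east nginx

# Drain: stop assigning new replicas to a node; its existing tasks are rescheduled
docker node update --availability drain node-2
```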


Docker Swarm Mode Key Concepts

The following are the Docker Swarm mode key concepts:

  • Node: A node is an instance of the Docker engine that participates in the swarm. You can run
one or more nodes on a single physical computer or cloud server. Nodes are either
managers or workers. The manager node dispatches units of work called tasks to worker
nodes; worker nodes receive and execute the tasks dispatched to them.
  • Service: A service is a high-level concept describing a collection of tasks to be executed
by workers. An example of a service is an HTTP server running as a Docker container
on three nodes.
  • Load balancing: Docker includes a load balancer to distribute requests across all containers
in the service.
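
For instance, the HTTP-server service described above could be created roughly like this (the service name `web` and the choice of the `nginx` image are illustrative):

```shell
# Create a service with three replicas and publish port 8080 on every node;
# the swarm's routing mesh load-balances incoming requests across the replicas
docker service create --name web --replicas 3 --publish published=8080,target=80 nginx

# Scale the service up when demand grows
docker service scale web=5
```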

Benefits of Docker Swarm

The following are the benefits of Docker Swarm:

  • Simplified setup and management: Docker Swarm provides an easy-to-use, integrated toolset for orchestrating containers, making it straightforward to set up and manage a cluster of Docker nodes.

  • Scalability: Docker Swarm allows seamless scaling of services up or down with simple commands, enabling dynamic adjustment of resources based on demand.

  • High availability: Swarm mode ensures that services are replicated across multiple nodes, providing fault tolerance and resilience by automatically redistributing tasks in case of node failures.

  • Integrated load balancing: Docker Swarm includes built-in load balancing to distribute network traffic across multiple containers, ensuring optimal performance and resource utilization.


Docker Swarm Mode CLI Commands

The following are the Docker Swarm mode CLI commands:

  • docker swarm init: This command is used to initialize the swarm.      

docker swarm init [OPTIONS]

  • docker swarm join: By using this command, you can join a node to a swarm. The node joins
    as a manager node or worker node based on the token you pass with the --token flag.

docker swarm join [OPTIONS] HOST:PORT

  • docker service create: Creates a new service. This is a cluster management command and
    must be executed on a Swarm manager node.

docker service create [OPTIONS] IMAGE [COMMAND] [ARG...]

  • docker service inspect: Inspects a particular service; all the details are displayed
    in JSON format.

docker service inspect [OPTIONS] SERVICE [SERVICE...]

  • docker service ls: Lists all the services running in the swarm.

docker service ls [OPTIONS]

  • docker service rm: Removes the specified service from the swarm.

docker service rm SERVICE [SERVICE...]
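
Putting these commands together, a minimal end-to-end workflow might look like the following (the IP address and the `<worker-token>` value are placeholders; the actual token is printed by `docker swarm init`):

```shell
# On the first machine: initialize the swarm; this node becomes a manager
docker swarm init --advertise-addr 192.168.1.10

# On each additional machine: join the swarm using the printed token
docker swarm join --token <worker-token> 192.168.1.10:2377

# Back on the manager: create, list, inspect, and finally remove a service
docker service create --name web --replicas 3 nginx
docker service ls
docker service inspect web
docker service rm web
```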

Kubernetes


Kubernetes is a lightweight, extensible, open-source platform that makes declarative configuration and automation
easier for managing containerized workloads and services. It has a large and rapidly expanding ecosystem;
Kubernetes services, tools, and support are widely available.


The name Kubernetes originates from Greek, meaning helmsman or pilot. The abbreviation K8s comes from counting
the eight letters between the "K" and the "s". Google open-sourced the Kubernetes project in 2014. Kubernetes
combines over 15 years of Google's experience running production workloads at scale with cutting-edge
ideas and practices from the community.




Key Features


Automated Operations: Kubernetes automates many of the operational tasks involved in managing containerized applications.
This includes deploying applications, rolling out changes, scaling applications based on demand, and
monitoring application health.


Service Discovery and Load Balancing: Kubernetes can expose containers using DNS names or their IP addresses.
It can also balance network traffic to distribute the load across multiple containers, ensuring no single container
is overwhelmed.


Storage Orchestration: Kubernetes can automatically mount storage systems of your choice, whether from local storage, public cloud
providers, or network storage systems like iSCSI or NFS.

Self-Healing: Kubernetes continuously monitors the health of containers and can automatically restart those that fail,
replace and reschedule containers when nodes die, and kill containers that do not respond to user-defined health
checks.
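
As one illustration of self-healing, a liveness probe can be declared on a Pod so that Kubernetes restarts its container when the probe fails. The manifest below is a minimal sketch; the Pod name, image, and probe path are made up for the example:

```shell
# Apply a Pod whose container is restarted if the HTTP liveness probe fails
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: web
    image: nginx
    livenessProbe:
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
EOF
```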

Horizontal Scaling: Kubernetes can scale applications up and down with a simple command, through a user interface, or automatically
based on CPU usage.
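
The scaling behaviors just described map to commands like these (the deployment name `web` is illustrative, and automatic scaling additionally requires a metrics server in the cluster):

```shell
# Manually scale a deployment to five replicas
kubectl scale deployment web --replicas=5

# Or scale automatically between 2 and 10 replicas, targeting 80% CPU usage
kubectl autoscale deployment web --min=2 --max=10 --cpu-percent=80
```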



Architecture of Kubernetes 

Kubernetes architecture consists of two main parts: the control plane and the worker nodes. Thanks to this design, the cluster of computers can effectively manage and orchestrate containerized applications. Here's a summary of the essential elements:



 Control Plane Components


The control plane is responsible for managing the overall state of the cluster and making global decisions. It includes:


1. kube-apiserver: The central management point through which the Kubernetes API is exposed. It handles all communication between the various components and external clients.


2. etcd: A distributed key-value store that stores all cluster data while maintaining high availability and consistency.

3. kube-scheduler: Assigns Pods to nodes according to hardware/software constraints, resource requirements, and other criteria.

4. kube-controller-manager: Runs controller processes, such as the replication controller and node controller, that govern the cluster's state.

5. cloud-controller-manager: When Kubernetes is deployed in a cloud environment, this component communicates with the underlying cloud provider's API to manage cloud-specific resources.


 Node Components

Worker nodes are the machines that run containerized applications. Each node contains:

1. kubelet: An agent that runs on each node; it ensures that containers are running in a Pod and communicates with the control plane.

2. kube-proxy: Maintains network rules on nodes, enabling communication to Pods from inside or outside the cluster.

3. Container runtime: The software responsible for running containers, such as Docker, containerd, or CRI-O.


Additional Components


1. DNS: Kubernetes clusters typically include a DNS server for service discovery.

2. Dashboard: A web-based UI for managing and troubleshooting applications and the cluster itself.

3. Networking plugins: Implement the Container Network Interface (CNI) for pod networking.
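
To see these components in a running cluster, a few read-only commands suffice (output varies by distribution; in most clusters these components live in the `kube-system` namespace):

```shell
# List the nodes that make up the cluster, with their roles
kubectl get nodes -o wide

# Control-plane and node components typically run in the kube-system namespace
kubectl get pods -n kube-system

# The cluster DNS service used for service discovery
kubectl get service kube-dns -n kube-system
```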


Key Architectural Principles

Kubernetes architecture is designed with several principles in mind:

1. High Availability: Both applications and infrastructure are designed for high availability through replication and distributed storage

2. Scalability: Supports automatic scaling of applications and clusters

3. Portability: Enables running applications across various environments, from on-premises to multiple cloud providers

4. Security: Implements features like authentication, authorization, and encryption for cluster communication


This architecture allows Kubernetes to manage containerized applications efficiently, providing features such
as automated deployment, scaling, and self-healing capabilities across a distributed cluster of machines.


Benefits

Portability and Flexibility:

Kubernetes is designed to run anywhere, allowing applications to be deployed across on-premises, public clouds,
or hybrid environments. This flexibility enables organizations to adopt a cloud-native approach to application
development and deployment.


Efficiency and Resource Optimization:


By orchestrating containers across multiple hosts, Kubernetes optimizes the use of resources, ensuring that
applications run efficiently. It supports both critical and best-effort workloads, driving up resource utilization
and saving costs.


Development Velocity:


Kubernetes supports the development of cloud-native, microservices-based applications and the containerization
of existing applications. This capability accelerates application development and modernization, enabling faster
deployment cycles.


Common Use Cases


Microservices Architecture: Kubernetes is often used to deploy and manage microservices, enabling applications to be built cloud-native.


Continuous Integration/Continuous Deployment (CI/CD): Kubernetes supports CI/CD workflows, allowing
for automated testing, deployment, and scaling of applications.

Hybrid and Multi-Cloud Deployments: Kubernetes provides a consistent platform for running applications
across various environments, from on-premises data centers to multiple public clouds.



Docker Swarm vs. Kubernetes

  • Purpose: Docker Swarm is intended to be a lightweight, easy-to-use container orchestration system. Kubernetes is an open-source platform used for deploying and maintaining a group of containers.

  • Efficiency: With Docker Swarm, multiple containers can run on the same hardware considerably more effectively than in a virtual machine (VM), and Docker productivity is relatively high. In actual use, Docker and Kubernetes work together to improve the deployment and management of containerized apps.

  • Deployment model: In Docker Swarm, apps are deployed in the form of services. In Kubernetes, applications are deployed as a combination of Pods, Deployments, and Services.

  • Auto-scaling: Docker Swarm's scaling is largely manual, which is less effective than Kubernetes. Kubernetes supports efficient auto-scaling of containers in a cluster.

  • Health checks: In Docker Swarm, health checks are limited to services. Kubernetes offers two kinds of health checks: liveness and readiness probes.

  • Setup: Docker Swarm setup and installation are easy. Kubernetes is harder to set up and configure.

  • Documentation: The Docker Swarm documentation is comprehensive, covering everything from deployment and installation to quick-start guides and in-depth tutorials. Kubernetes documentation is less extensive than Docker Swarm's, but it also covers everything from installation to deployment.

  • Installation: Installing Docker Swarm on a virtual machine or in the cloud is quite simple and requires few commands. Kubernetes installation is considered more challenging, and its commands are more intricate.

  • Adoption: Companies such as Citizens Bank and MetLife use Docker Swarm. Azure, Buffer, Intel, Evernote, and Shopify use Kubernetes.



Conclusion 

Docker Swarm is a powerful and effective technology for container orchestration that makes managing Docker
containers easier. It allows developers and IT managers to cluster several physical or virtual machines into a
single system, guaranteeing resource optimization, scalability, and high availability. Docker Swarm is an attractive
option for administering containerized applications because of its essential characteristics, which include decentralized
access, high security, automatic load balancing, and rollback capabilities.


The manager and worker nodes in the Docker Swarm architecture provide effective cluster management and workload
distribution. The integrated load balancing and straightforward CLI commands improve its usefulness and usability.
Despite being lightweight and simple to set up, Docker Swarm offers a robust documentation suite to help users.


In contrast, Kubernetes, another popular container orchestration platform, offers additional features such as
automated operations, load balancing, service discovery, and self-healing. Kubernetes works well in situations
requiring high availability, scalability, and multi-cloud deployments, but it is more difficult to set up and configure.

Organizations use both Kubernetes and Docker Swarm, each with special advantages, to enhance the deployment
and administration of containerized applications. Particular use cases, organizational requirements, and the required
degree of control and complexity frequently influence the decision between them.

Karamveer Singh  (C0893963)
