Wednesday, December 13, 2023

Kubernetes

Prerequisites

We assume that anyone who wants to understand Kubernetes already has an understanding of how Docker works, how Docker images are created, and how a container runs as a standalone unit. To work with advanced Kubernetes configuration, one should also understand basic networking and how protocol communication works.


It is also commonly abbreviated as K8s.

How has this name "K8s" been defined?

The 8 stands for the eight letters between the "K" and the "s" in "Kubernetes" ("ubernete" has a total of 8 characters).


Kubernetes is an extensible, portable, open-source platform originally designed by Google and released in 2014. It is mainly used to automate the deployment, scaling, and operation of container-based applications across a cluster of nodes.


Features 

Pod

It is the smallest and most basic deployable unit of a Kubernetes application. A Pod represents a set of processes running in the cluster, and it wraps one or more containers (for example, Docker containers).

It is the unit of deployment in Kubernetes, and each Pod is assigned a single IP address that its containers share.
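As a sketch, a minimal Pod manifest might look like the following (the name `nginx-pod` and the image tag are illustrative, not from these notes):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: nginx-pod          # hypothetical name
  labels:
    app: nginx
spec:
  containers:
    - name: nginx
      image: nginx:1.25    # any container image works here
      ports:
        - containerPort: 80
```

It can be created with `kubectl apply -f pod.yaml`.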


ReplicaSet

A ReplicaSet in Kubernetes ensures that a specified number of pod replicas are running at any given time. It replaces the older Replication Controller because it is more powerful and supports "set-based" label selectors.
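To illustrate the set-based selector mentioned above, here is a sketch of a ReplicaSet manifest (the names `nginx-rs` and `app: nginx` are assumptions for the example):

```yaml
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: nginx-rs           # hypothetical name
spec:
  replicas: 3
  selector:
    matchExpressions:      # "set-based" selector, not supported by the old Replication Controller
      - key: app
        operator: In
        values: [nginx]
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
```

The ReplicaSet keeps three pods matching the selector running, recreating any that die.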

Persistent Storage: Kubernetes provides an essential feature called 'persistent storage' for data that must survive after a pod is killed or rescheduled. Kubernetes supports various storage systems, such as Google Compute Engine's Persistent Disks (GCE PD) or Amazon Elastic Block Store (EBS). It also supports distributed file systems such as NFS or GFS.
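A pod typically requests such storage through a PersistentVolumeClaim; a minimal sketch (the claim name and size are illustrative):

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc           # hypothetical name
spec:
  accessModes:
    - ReadWriteOnce        # mountable read-write by a single node
  resources:
    requests:
      storage: 5Gi
```

The claim is then referenced from a pod's `volumes` section, so the data outlives any individual pod.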

Automatic Bin Packing: Kubernetes lets the user declare the minimum and maximum compute resources (CPU and memory) for their containers, and it schedules containers onto nodes accordingly.
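In a manifest, those minimums and maximums are expressed as `requests` and `limits` on each container; a fragment of a container spec, with illustrative values:

```yaml
# fragment of a container spec
resources:
  requests:        # minimum guaranteed to the container (used for scheduling)
    cpu: 250m      # a quarter of a CPU core
    memory: 128Mi
  limits:          # maximum the container may consume
    cpu: 500m
    memory: 256Mi
```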

Self-Healing: This feature plays an important role in Kubernetes. Kubernetes automatically restarts containers that fail during execution, and it stops sending traffic to containers that do not respond to the user-defined health check.
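The user-defined health check mentioned above is typically a liveness probe on the container; a sketch, assuming a hypothetical `/healthz` endpoint on port 8080:

```yaml
# fragment of a container spec
livenessProbe:
  httpGet:
    path: /healthz         # hypothetical health endpoint
    port: 8080
  initialDelaySeconds: 5   # wait before the first check
  periodSeconds: 10        # check every 10 seconds
```

If the probe keeps failing, the kubelet restarts that container.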

Automated rollouts and rollbacks: Using rollouts, Kubernetes distributes changes and updates to an application or its configuration. If any problem occurs, it rolls those changes back for you.
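Rollouts are usually driven by a Deployment with a rolling-update strategy; a sketch (the name `web` and the image are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1    # at most one pod down during an update
      maxSurge: 1          # at most one extra pod during an update
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: nginx:1.25
```

Changing the image triggers a rollout; `kubectl rollout undo deployment/web` rolls it back.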

Service Discovery and load balancing: Kubernetes assigns an IP address and a DNS name to a set of containers, and balances the load across them.
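That IP and DNS name come from a Service object; a minimal sketch that fronts the pods labelled `app: nginx` from the earlier examples (names are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: nginx-svc          # becomes the DNS name nginx-svc in its namespace
spec:
  selector:
    app: nginx             # traffic is load-balanced across matching pods
  ports:
    - port: 80             # port the service exposes
      targetPort: 80       # port on the pods
```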


Kubernetes Architecture



The architecture of Kubernetes follows the client-server model. It consists of the following two main components:

  1. Master Node (Control Plane)
  2. Worker Node

The master node in a Kubernetes cluster manages the state of the cluster. It is the entry point for all administrative tasks.

Following are the four components of the master node, the Kubernetes control plane:

  1. API Server
  2. Scheduler
  3. Controller Manager
  4. ETCD

API Server

The Kubernetes API server receives REST commands sent by the user. It validates the requests, processes them, and executes them. The resulting state of the cluster is then saved in etcd, a distributed key-value store.


Scheduler

The scheduler in the master node assigns tasks to the worker nodes.
In other words, it is the process responsible for assigning pods to the available worker nodes.

Controller Manager

The controller manager is also simply known as the controller. It is a daemon that runs non-terminating control loops. The controllers in the master node perform tasks and manage the state of the cluster. In Kubernetes, the controller manager runs various types of controllers that handle nodes, endpoints, and so on.

ETCD

etcd is an open-source, simple, distributed key-value store that holds the cluster data.

Worker Node

Worker nodes in Kubernetes are also known as minions. A worker node is a physical or virtual machine that runs applications using pods.

Kubelet

This component is an agent service that runs on each worker node in the cluster. It ensures that pods and their containers are running smoothly. Every kubelet communicates with the master node, and it starts, stops, and maintains the containers that the master node has organized into pods.

Pods

A pod is a group of one or more containers that logically run together on a node. One worker node can run multiple pods.

Kube-proxy

Kube-proxy is a network proxy service that runs on each worker node in the cluster. Its main job is request forwarding: each node exposes Kubernetes services through kube-proxy.



Installation










Friday, December 8, 2023

Docker Swarm

Why?

Docker is a great tool (the "de facto" standard) to build Linux containers.

Docker Compose is great to develop locally with Docker, in a replicable way.

Docker Swarm Mode is great to deploy your application stacks to production, in a distributed cluster, using the same files used by Docker Compose locally.
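As a sketch of what "the same files" means: a Compose file with an added `deploy` section works both locally and with `docker stack deploy` (the stack name and image are illustrative):

```yaml
# docker-compose.yml (illustrative)
version: "3.8"
services:
  web:
    image: nginx:1.25
    ports:
      - "80:80"
    deploy:            # honored by "docker stack deploy" in Swarm mode
      replicas: 3
```

Locally you would run `docker compose up`; against a swarm, `docker stack deploy -c docker-compose.yml mystack`.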


What are Docker and Docker Container?

Docker is an open-source tool used to automate application deployment in the form of lightweight containers. And Docker containers are not only lightweight but also make the application platform-independent. So, the application that runs on your computer will run the same way on your friend’s computer!

Docker Container logo


Docker containers overcome many of the problems faced by virtual machines: they are faster, portable, provide isolation, use less memory, and so on.


If you have Docker installed, you already have Docker Swarm; it is integrated into Docker.

You don't have to install anything else.


docker info 
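The plain `docker info` output is long; as a shortcut (assuming a Docker host is available), the Swarm state can be extracted directly with a Go-template format string:

```
docker info --format '{{.Swarm.LocalNodeState}}'
```

This prints `inactive` before `docker swarm init` has been run on the node, and `active` afterwards.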




Now that Swarm mode is inactive, we will make it active.

Drawbacks of Containers

If we are running millions of containers, we need a way to maintain them. In particular, we need the following:








Scale out/in/up/down: we pay only for as much as we use.

Recovery: ensure that containers are recreated if they fail.

Deployment: replace containers without downtime.

Tracking: we need to track and control where and when containers get started.


Scalability


In the picture below, we can see that if the load on one of the servers becomes too high, traffic should automatically switch to the next server.
We should not depend on a single server.



Now, go to the AWS account and create the instances.


Once all the instances are created, we need to log in to them using MobaXterm.


Now, add six sessions to it, one per instance.


Now, go to each AWS instance and paste it here.


Rename each instance in the AWS account with the names below.


Click on Connect and paste each instance ID into MobaXterm.


Repeat the same steps on each of the instances.



After running all the instances, next step is to check 



Installing in parallel on all the machines.


Next, we can rename all the tabs.


Now, in manager 1 Tab, 

To activate Docker Swarm:
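A sketch of the command, assuming a Docker host; the address placeholder stands for the manager1 instance's private IP:

```
# run on the manager1 instance
docker swarm init --advertise-addr <manager1-private-ip>
```

The output of this command prints a ready-made `docker swarm join --token ...` command to be pasted on the other nodes.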


Whenever we fire this command, the first node becomes the leader (manager).


Before running the add-worker and add-manager commands, let us check how many nodes are running.







Now, you can start the instances and re-run the docker node ls command.



The next step is to copy the add-worker command and run it on the worker1, worker2, and worker3 instances.
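As a sketch (assuming the swarm was initialised as above; the token and IP are placeholders printed by the manager):

```
# on manager1: print the worker join command again if it was lost
docker swarm join-token worker

# on worker1, worker2, and worker3: paste the printed command, e.g.
docker swarm join --token <worker-token> <manager1-private-ip>:2377
```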


Now, re-run the docker node ls command.









Similarly, add worker nodes 2 and 3, and re-run the docker node ls command on the manager instance.


Note: docker node ls should always be run on a manager node.


From the screenshot above, we can see that one manager node and three worker nodes have been created.



After that, we want to add managers. Copy the add-manager command and run it on the manager2 and manager3 instances.
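A sketch of the manager variant of the same flow (token and IP are placeholders):

```
# on manager1: print the manager join command
docker swarm join-token manager

# on manager2 and manager3: paste the printed command, e.g.
docker swarm join --token <manager-token> <manager1-private-ip>:2377
```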


Now, go and check the status of Docker Swarm.

Type docker info now



Now, go to the manager1 instance and kill it.


The docker swarm leave command is used here, so that manager2 becomes the leader now.
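As a sketch of this failover test (on a manager node, `leave` requires `--force`):

```
# on manager1: leave the swarm
docker swarm leave --force

# on manager2: verify that a new leader has been elected
docker node ls
```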


Now, run docker node ls on the second manager instance and see the difference.

