ulimit (user limit)

ulimit is a shell built-in command on Linux used to view or set resource limits for the current shell and the processes it starts. A regular user can view limits and lower them (or raise a soft limit up to the hard limit), but raising a hard limit requires root privileges. A common use is checking or restricting the number of open file descriptors available to a process.

Syntax:

To check the ulimit value use the following command:

ulimit -n (open files)
ulimit -a

Working with ulimit commands:

1. To display the maximum number of user processes allowed for the logged-in user:

ulimit -u

2. For showing the maximum file size (in blocks) a user can create:

ulimit -f

3. For showing the maximum resident set size (physical memory) for the current user:

ulimit -m

4. For showing the maximum virtual memory size limit:

ulimit -v

What are Soft limits and Hard limits in Linux? 

Soft limits are the values actually enforced for a running application or user, while hard limits act as an upper bound (ceiling) on the soft limits. A process may raise its own soft limit, but never above the hard limit. Hence,

(soft limit <= hard limit)

Working with Hard and Soft limit values:

1. For displaying the hard limit. The hard limit is the ceiling on the soft limit value:

ulimit -Hn

2. For displaying the soft limit. The soft limit is the value actually enforced for processing:

ulimit -Sn

3. To change the soft limit for open files in the current shell:

ulimit -Sn <value>

Note: Replace <value> with the value you want to set for the soft limit, and remember it cannot exceed the hard limit! The system-wide maximum number of open files is controlled separately by fs.file-max, which root can change with:

sysctl -w fs.file-max=<value>

4. Displaying current Values for opened files

cat /proc/sys/fs/file-max
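The interplay between soft and hard limits can be demonstrated in a short shell session (a sketch; the exact numbers depend on your system's defaults):

```shell
# Display the current soft and hard limits for open file descriptors
echo "soft: $(ulimit -Sn)"
echo "hard: $(ulimit -Hn)"

# Lower the soft limit for this shell session; lowering is always permitted
ulimit -Sn 512
echo "new soft: $(ulimit -Sn)"

# Raising the soft limit again works only up to the hard limit,
# and raising the hard limit itself requires root privileges.
```

These changes affect only the current shell and its child processes; they are reset when the shell exits.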

Kubernetes deployment

  • The user performs a Kubernetes deployment directly
    • The Deployment performs replication
    • Pod creation
    • Container creation
  • A Kubernetes deployment can be done by
    • Command (internally a YAML file is created when the command is fired)
    • YAML file

Create and Edit deployment

Syntax to Create Deployment

kubectl create deployment <deployment-name> --image=<image_name>

Example :

kubectl create deployment nginx-depl --image=nginx

Syntax to edit Deployment

kubectl edit deployment <deployment-name>

Example :

kubectl edit deployment nginx-depl

Now you can check that the old pod has terminated and a new pod has been created in its place.
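One way to watch the rollout triggered by the edit (these are standard kubectl commands; the deployment name matches the example above):

```shell
# Block until the rollout triggered by the edit has completed
kubectl rollout status deployment/nginx-depl

# The pod list should now show freshly created pods (young AGE values)
kubectl get pods
```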

Debug a deployment

  • logs
  • describe
  • bash terminal

Example with a MongoDB deployment (pod names are illustrative; get the actual pod name from kubectl get pods):

kubectl logs <pod-name>
kubectl describe pod <pod-name>
kubectl exec -it <pod-name> -- /bin/bash

Delete Deployment

Syntax to delete Deployment:

kubectl delete deployment <deployment-name>

Nginx Deployment on k8s with a Service

Deployment File – nginx-deployment.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-deployment
  labels:
    app: orderlabel
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orderlabel
  template:
    metadata:
      labels:
        app: orderlabel
    spec:
      containers:
      - name: orderapi
        image: nginx:1.14.2
        ports:
        - containerPort: 80

Service File – service.yaml

apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  selector:
    app: orderlabel
  type: LoadBalancer
  ports:
    - protocol: TCP
      port: 8083
      targetPort: 80

How to execute these files on k8 ?

kubectl get all #display all status
cd /to/the/yaml/files/directory/
kubectl apply -f nginx-deployment.yaml
kubectl apply -f service.yaml
kubectl get all
kubectl get pods
kubectl get service

How to check if it is working ?

Hit the URL – http://localhost:8083/
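The same check can be scripted from a terminal (this assumes the LoadBalancer service is reachable on localhost, as with Docker Desktop or minikube tunnel):

```shell
# Fetch the nginx welcome page through the service port 8083,
# which the Service maps to container port 80
curl -s http://localhost:8083/ | grep -i "nginx"
```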

How to stop the deployment ?

kubectl scale --replicas=0 deployment.apps/order-deployment
kubectl get all #check if no pods exist
kubectl delete deploy order-deployment
kubectl delete service order-service

Kubernetes namespace Information

  • Namespaces are a way to organize clusters into virtual sub-clusters
  • They can be helpful when different teams or projects share a Kubernetes cluster
  • Any number of namespaces are supported within a cluster, each logically separated from others but with the ability to communicate with each other. 
  • Namespaces cannot be nested within each other.

  • Any resource that exists within Kubernetes exists either in the default namespace or a namespace that is created by the cluster operator
  • Only nodes and persistent storage volumes exist outside of the namespace; these low-level resources are always visible to every namespace in the cluster.

What is the “default” namespace in Kubernetes?

Kubernetes comes with three namespaces out-of-the-box. They are:

  1. default: As its name implies, this is the namespace that is referenced by default for every Kubernetes command, and where every Kubernetes resource is located by default. Until new namespaces are created, the entire cluster resides in ‘default’.
  2. kube-system: Used for Kubernetes components and should be avoided.
  3. kube-public: Used for public resources. Not recommended for use by users.

Namespace can be used on

  • deployment
  • pods
  • service
  • configmap
  • secrets

  • List namespaces
kubectl get namespace
  • Create a namespace
kubectl create namespace <namespace-name>
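A short sketch of using a namespace with the resource types listed above (the namespace and deployment names here are examples):

```shell
# Create a namespace and deploy into it with the -n flag
kubectl create namespace dev
kubectl create deployment nginx-depl --image=nginx -n dev

# Namespaced resources are only visible when you ask for that namespace
kubectl get pods -n dev
kubectl get deployments --all-namespaces
```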

Kubernetes get command

kubectl get <resources>

Display one or many resources.

Prints a table of the most important information about the specified resources. You can filter the
list using a label selector and the --selector flag. If the desired resource type is namespaced you
will only see results in your current namespace unless you pass --all-namespaces.

By specifying the output as ‘template’ and providing a Go template as the value of the --template
flag, you can filter the attributes of the fetched resources.

Use “kubectl api-resources” for a complete list of supported resources.

Examples:

kubectl get pods

kubectl get deployments

kubectl get services

kubectl get configmaps

kubectl get secrets

kubectl get ingress


# List all pods in ps output format
kubectl get pods

# List all pods in ps output format with more information (such as node name)
kubectl get pods -o wide

# List a single replication controller with specified NAME in ps output format
kubectl get replicationcontroller web

# List deployments in JSON output format, in the “v1” version of the “apps” API group
kubectl get deployments.v1.apps -o json

# List a single pod in JSON output format
kubectl get -o json pod web-pod-13je7

# List a pod identified by type and name specified in “pod.yaml” in JSON output format
kubectl get -f pod.yaml -o json

# List resources from a directory with kustomization.yaml – e.g. dir/kustomization.yaml
kubectl get -k dir/

# Return only the phase value of the specified pod
kubectl get -o template pod/web-pod-13je7 --template={{.status.phase}}

# List resource information in custom columns
kubectl get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image

# List all replication controllers and services together in ps output format
kubectl get rc,services

# List one or more resources by their type and names
kubectl get rc/web service/frontend pods/web-pod-13je7

# List status subresource for a single pod.
kubectl get pod web-pod-13je7 --subresource status

Architecture of Kubernetes

1. Kubernetes Master Node Components

Kubernetes master is responsible for managing the entire cluster, coordinates all activities inside the cluster, and communicates with the worker nodes to keep the Kubernetes and your application running. This is the entry point of all administrative tasks. When we install Kubernetes on our system we have four primary components of Kubernetes Master that will get installed. 

The components of the Kubernetes Master node are: 

a.) API Server– The API server is the entry point for all the REST commands used to control the cluster, and all administrative tasks within the master node go through it. If we want to create, delete, update, or display any Kubernetes object, the request has to go through this API server. The API server validates and configures API objects such as pods, services, replication controllers, and deployments, and it is responsible for exposing APIs for every operation. We can interact with these APIs using a tool called kubectl. ‘kubectl’ is a very tiny Go-language binary that talks to the API server to perform any operations we issue from the command line; it is the command-line interface for running commands against Kubernetes clusters. 

b.) Scheduler– It is a service in the master responsible for distributing the workload. It tracks the resource utilization of each worker node and places new workloads on nodes that have resources available and can accept them. The scheduler assigns pods to available nodes according to the constraints you specify in the configuration file. 

c.) Controller Manager– Also known as controllers. It is a daemon that runs in a non-terminating loop and is responsible for collecting information and sending it to the API server. It regulates the Kubernetes cluster by performing lifecycle functions such as namespace creation, event garbage collection, terminated-pod garbage collection, cascading-deletion garbage collection, node garbage collection, and many more. Basically, a controller watches the desired state of the cluster; if the current state of the cluster does not match the desired state, the control loop takes corrective steps until the current state is the same as the desired state. The key controllers are the replication controller, endpoints controller, namespace controller, and service account controller. In this way, controllers are responsible for the overall health of the entire cluster, ensuring that nodes are up and running all the time and that the correct pods are running as mentioned in the specs file. 

d.) etcd– It is a distributed, lightweight key-value database. In Kubernetes, it is the central store for the current cluster state at any point in time, and it is also used to store configuration details such as subnets, ConfigMaps, etc. It is written in the Go programming language.

2. Kubernetes Worker Node Components

Kubernetes Worker node contains all the necessary services to manage the networking between the containers, communicate with the master node, and assign resources to the containers scheduled. The components of the Kubernetes Worker node are: 

a.) Kubelet– It is a primary node agent which communicates with the master node and executes on each worker node inside the cluster. It gets the pod specifications through the API server and executes the container associated with the pods and ensures that the containers described in the pods are running and healthy. If kubelet notices any issues with the pods running on the worker nodes then it tries to restart the pod on the same node. If the issue is with the worker node itself then the Kubernetes master node detects the node failure and decides to recreate the pods on the other healthy node.

b.) Kube-Proxy– It is the core networking component inside the Kubernetes cluster. It is responsible for maintaining the entire network configuration. Kube-Proxy maintains the distributed network across all the nodes, pods, and containers and exposes the services across the outside world. It acts as a network proxy and load balancer for a service on a single worker node and manages the network routing for TCP and UDP packets. It listens to the API server for each service endpoint creation and deletion so for each service endpoint it sets up the route so that you can reach it. 

c.) Pods– A pod is a group of containers that are deployed together on the same host. With the help of pods, we can deploy multiple dependent containers together so it acts as a wrapper around these containers so we can interact and manage these containers primarily through pods. 

d.) Docker– Docker is the containerization platform used to package your application and all its dependencies together in the form of containers, so that your application works seamlessly in any environment, whether development, test, or production. Docker is a tool designed to make it easier to create, deploy, and run applications by using containers, and it is the world’s leading software container platform. It was launched in 2013 by a company called dotCloud and is written in the Go language. In the years since its launch, communities have rapidly shifted to it from VMs. Docker is designed to benefit both developers and system administrators, making it a part of many DevOps toolchains: developers can write code without worrying about the testing and production environment, and sysadmins need not worry about infrastructure, as Docker can easily scale the number of systems up and down. Docker comes into play at the deployment stage of the software development cycle.

Introduction to Kubernetes

  • Kubernetes is an open-source Container Management tool that automates container deployment, container scaling, descaling, and container load balancing (also called a container orchestration tool).
  • It is written in Golang and has a vast community because it was first developed by Google and later donated to CNCF (Cloud Native Computing Foundation).
  • Kubernetes can group ‘n’ number of containers into one logical unit for managing and deploying them easily.
  • It works brilliantly across environments: public cloud, hybrid cloud, and on-premises. 

Features of Kubernetes:

  1. Automated Scheduling– Kubernetes provides an advanced scheduler to launch containers on cluster nodes. It performs resource optimization.
  2. Self-Healing Capabilities– It provides rescheduling, replacing, and restarting of containers that have died.
  3. Automated Rollouts and Rollbacks– It supports rollouts and rollbacks for the desired state of the containerized application.
  4. Horizontal Scaling and Load Balancing– Kubernetes can scale up and scale down the application as per the requirements.
  5. Resource Utilization– Kubernetes provides resource utilization monitoring and optimization, ensuring containers are using their resources efficiently.
  6. Support for multiple clouds and hybrid clouds– Kubernetes can be deployed on different cloud platforms and run containerized applications across multiple clouds.
  7. Extensibility– Kubernetes is very extensible and can be extended with custom plugins and controllers.
  8. Community Support- Kubernetes has a large and active community with frequent updates, bug fixes, and new features being added.