ulimit is a Linux shell built-in used to view, set, or limit the resource usage of the current user (raising hard limits requires admin access). It is commonly used to check the limit on open file descriptors per process, and to place restrictions on the resources a process may consume.
Syntax:
To check the ulimit values, use the following commands:
ulimit -n   # limit on open files
ulimit -a   # all current limits
Working with ulimit commands:
1. To display the maximum number of processes available to the logged-in user:
ulimit -u
2. To display the maximum file size a user can create:
ulimit -f
3. To display the maximum resident memory size for the current user:
ulimit -m
4. To display the maximum virtual memory size limit:
ulimit -v
What are Soft limits and Hard limits in Linux?
Soft limits are the limits actually enforced for the processing of an application or user, while hard limits act as an upper bound on the values the soft limits may take. Hence:
soft limit <= hard limit
Working with Hard and Soft limit values:
1. To display the hard limit (the ceiling up to which a soft limit may be raised):
ulimit -Hn
2. To display the soft limit (the value actually enforced):
ulimit -Sn
3. To change the soft limit for open files in the current shell:
ulimit -Sn <value>
To change the system-wide maximum number of open files (requires root):
sysctl -w fs.file-max=<value>
Note: Replace <value> with the value you want to set for the soft limit, and remember the soft limit cannot exceed the hard limit!
4. To display the current system-wide limit on open files:
cat /proc/sys/fs/file-max
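As a quick illustration, here is a minimal shell session showing how the soft and hard limits for open files relate (the numbers in the comments are examples and will vary by system):
ulimit -Hn           # e.g. 1048576 - the hard ceiling
ulimit -Sn           # e.g. 1024    - the enforced soft limit
ulimit -Sn 4096      # raise the soft limit, allowed up to the hard limit
ulimit -Sn 2000000   # fails for a normal user: cannot exceed the hard limit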
Syntax to Create Deployment
kubectl create deployment <deployment-name> --image=<image_name>
Example:
kubectl create deployment nginx-depl --image=nginx
Syntax to Edit Deployment
kubectl edit deployment <deployment-name>
Example:
kubectl edit deployment nginx-depl
Now you can check that the old pod is terminated and a new pod is created in its place.
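A convenient way to follow this is with kubectl's rollout subcommand:
kubectl rollout status deployment/nginx-depl   # waits until the new pods are ready
kubectl get pods                               # old pod Terminating, new pod Running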
Example: nginx Deployment with a LoadBalancer Service
Deployment File – nginx-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: order-deployment
  labels:
    app: orderlabel
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orderlabel
  template:
    metadata:
      labels:
        app: orderlabel
    spec:
      containers:
      - name: orderapi
        image: nginx:1.14.2
        ports:
        - containerPort: 80
Service File – service.yaml
apiVersion: v1
kind: Service
metadata:
  name: order-service
spec:
  selector:
    app: orderlabel
  type: LoadBalancer
  ports:
  - protocol: TCP
    port: 8083
    targetPort: 80
How to apply these files on k8s?
kubectl get all   # display the status of all resources
cd /to/the/yaml/files/directory/
kubectl apply -f nginx-deployment.yaml
kubectl apply -f service.yaml
kubectl get all
kubectl get pods
kubectl get service
How to check if it is working?
Hit the URL – http://localhost:8083/
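From the command line, a quick check (assuming the LoadBalancer is reachable on localhost, as with Docker Desktop or minikube tunnel):
curl http://localhost:8083/   # should return the nginx welcome page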
How to stop the deployment?
kubectl scale --replicas=0 deployment.apps/order-deployment
kubectl get all   # check that no pods exist
kubectl delete deploy order-deployment
kubectl delete service order-service
Kubernetes comes with three namespaces out-of-the-box. They are:
- default: the namespace for objects with no other namespace specified
- kube-system: the namespace for objects created by the Kubernetes system itself
- kube-public: the namespace for resources that should be readable by all users
To list the namespaces in your cluster:
kubectl get namespace
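As a short sketch of using namespaces (the name dev here is a hypothetical example), resources can be created in and listed from a specific namespace with the -n flag:
kubectl create namespace dev          # 'dev' is an illustrative name
kubectl get pods -n dev               # list pods only in the 'dev' namespace
kubectl get pods --all-namespaces     # list pods across every namespace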
kubectl get <resources>
Display one or many resources.
Prints a table of the most important information about the specified resources. You can filter the list using a label selector and the --selector flag. If the desired resource type is namespaced you will only see results in your current namespace unless you pass --all-namespaces.
By specifying the output as 'template' and providing a Go template as the value of the --template flag, you can filter the attributes of the fetched resources.
Use "kubectl api-resources" for a complete list of supported resources.
kubectl get pods
kubectl get deployments
kubectl get services
kubectl get configmaps
kubectl get secrets
kubectl get ingress
# List all pods in ps output format
kubectl get pods
# List all pods in ps output format with more information (such as node name)
kubectl get pods -o wide
# List a single replication controller with specified NAME in ps output format
kubectl get replicationcontroller web
# List deployments in JSON output format, in the "v1" version of the "apps" API group
kubectl get deployments.v1.apps -o json
# List a single pod in JSON output format
kubectl get -o json pod web-pod-13je7
# List a pod identified by type and name specified in "pod.yaml" in JSON output format
kubectl get -f pod.yaml -o json
# List resources from a directory with kustomization.yaml - e.g. dir/kustomization.yaml
kubectl get -k dir/
# Return only the phase value of the specified pod
kubectl get -o template pod/web-pod-13je7 --template={{.status.phase}}
# List resource information in custom columns
kubectl get pod test-pod -o custom-columns=CONTAINER:.spec.containers[0].name,IMAGE:.spec.containers[0].image
# List all replication controllers and services together in ps output format
kubectl get rc,services
# List one or more resources by their type and names
kubectl get rc/web service/frontend pods/web-pod-13je7
# List status subresource for a single pod.
kubectl get pod web-pod-13je7 --subresource status
kubectl config current-context              # show the active context
kubectl config get-contexts                 # list all contexts in the kubeconfig
kubectl config use-context <context-name>   # switch to another context
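A short sketch of defining a new context and switching to it (the names dev-cluster, dev-user, and dev-context are hypothetical placeholders):
kubectl config set-context dev-context --cluster=dev-cluster --user=dev-user --namespace=dev
kubectl config use-context dev-context
kubectl config current-context   # now prints dev-context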
The Kubernetes master is responsible for managing the entire cluster: it coordinates all activities inside the cluster and communicates with the worker nodes to keep Kubernetes and your application running. It is the entry point for all administrative tasks. When we install Kubernetes, four primary components of the Kubernetes master are installed.
The components of the Kubernetes Master node are:
a.) API Server– The API server is the entry point for all the REST commands used to control the cluster. All administrative tasks are done by the API server within the master node. If we want to create, delete, update, or display a Kubernetes object, the request has to go through this API server. The API server validates and configures API objects such as pods, services, replication controllers, and deployments, and it is responsible for exposing an API for every operation. We can interact with these APIs using a tool called kubectl: a very tiny Go binary that talks to the API server to perform any operation we issue from the command line. It is the command-line interface for running commands against Kubernetes clusters.
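To see that kubectl really is just talking to the API server's REST endpoints, you can proxy the API to localhost (a minimal sketch; the port number is arbitrary):
kubectl proxy --port=8001 &                                 # expose the API server locally
curl http://localhost:8001/api/v1/namespaces/default/pods   # same data as 'kubectl get pods -o json'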
b.) Scheduler– It is a service in the master node responsible for distributing the workload. It tracks the resource utilization of each worker node and places new pods on nodes that have the capacity to accept them, honoring any constraints you specify in the pod's configuration file.
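For example, a scheduling constraint can be expressed with a nodeSelector in the pod spec (a minimal sketch; the disktype: ssd label is a hypothetical example that a target node would need to carry):
apiVersion: v1
kind: Pod
metadata:
  name: ssd-pod
spec:
  nodeSelector:
    disktype: ssd        # only schedule onto nodes labeled disktype=ssd
  containers:
  - name: app
    image: nginx:1.14.2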
c.) Controller Manager– Also known as controllers. It is a daemon that runs in a non-terminating loop and is responsible for collecting information and sending it to the API server. It regulates the Kubernetes cluster by performing lifecycle functions such as namespace creation, lifecycle-event garbage collection, terminated-pod garbage collection, cascading-deletion garbage collection, node garbage collection, and many more. Basically, a controller watches the desired state of the cluster; if the current state does not meet the desired state, the control loop takes corrective steps to bring the current state back in line with the desired state. The key controllers are the replication controller, endpoint controller, namespace controller, and service account controller. In this way controllers are responsible for the overall health of the cluster, ensuring that nodes are up and running and that the correct pods are running as specified in the spec file.
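This reconciliation loop is easy to observe with the Deployment from earlier: delete one of its pods and the controller immediately creates a replacement to restore the desired replica count (the pod name below is illustrative):
kubectl get pods                              # e.g. order-deployment-abc123 ... (3 running)
kubectl delete pod order-deployment-abc123   # current state now differs from desired state
kubectl get pods                              # a replacement pod is already being created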
d.) etcd– It is a lightweight, distributed key-value database. In Kubernetes, it is the central database for storing the current cluster state at any point in time, and it is also used to store configuration details such as subnets, ConfigMaps, etc. It is written in the Go programming language.
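On clusters where you have direct access to etcd, the stored keys can be inspected with etcdctl (a sketch; the endpoint and certificate paths below are typical kubeadm defaults and are an assumption about your setup):
ETCDCTL_API=3 etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  get /registry --prefix --keys-only   # list the keys Kubernetes stores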
The Kubernetes Worker node contains all the necessary services to manage the networking between the containers, communicate with the master node, and assign resources to the scheduled containers. The components of the Kubernetes Worker node are:
a.) Kubelet– It is a primary node agent which communicates with the master node and executes on each worker node inside the cluster. It gets the pod specifications through the API server and executes the container associated with the pods and ensures that the containers described in the pods are running and healthy. If kubelet notices any issues with the pods running on the worker nodes then it tries to restart the pod on the same node. If the issue is with the worker node itself then the Kubernetes master node detects the node failure and decides to recreate the pods on the other healthy node.
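On a systemd-based node (an assumption, though most kubeadm installs qualify), the kubelet runs as an ordinary service and can be inspected directly:
systemctl status kubelet    # is the node agent running?
journalctl -u kubelet -f    # follow the kubelet logs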
b.) Kube-Proxy– It is the core networking component inside the Kubernetes cluster. It is responsible for maintaining the entire network configuration. Kube-Proxy maintains the distributed network across all the nodes, pods, and containers and exposes the services across the outside world. It acts as a network proxy and load balancer for a service on a single worker node and manages the network routing for TCP and UDP packets. It listens to the API server for each service endpoint creation and deletion so for each service endpoint it sets up the route so that you can reach it.
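The routes kube-proxy programs are driven by service endpoints, which you can list directly (order-service is the Service created earlier in this section):
kubectl get endpoints order-service   # the pod IP:port pairs behind the service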
c.) Pods– A pod is a group of containers that are deployed together on the same host. With the help of pods, we can deploy multiple dependent containers together so it acts as a wrapper around these containers so we can interact and manage these containers primarily through pods.
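A minimal sketch of a pod wrapping two cooperating containers (the names and the busybox sidecar are illustrative, not from the example above):
apiVersion: v1
kind: Pod
metadata:
  name: web-with-sidecar
spec:
  containers:
  - name: web
    image: nginx:1.14.2      # main application container
  - name: sidecar
    image: busybox:1.36      # helper container deployed in the same pod
    command: ["sh", "-c", "while true; do date; sleep 60; done"]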
d.) Docker– Docker is the containerization platform used to package your application and all its dependencies together in the form of containers, ensuring that your application works seamlessly in any environment, whether development, test, or production. Docker is a tool designed to make it easier to create, deploy, and run applications by using containers, and it is the world's leading software container platform. It was launched in 2013 by a company called dotCloud and is written in the Go language. In the six years since Docker's launch, communities have already shifted to it from VMs. Docker is designed to benefit both developers and system administrators, making it a part of many DevOps toolchains: developers can write code without worrying about the testing and production environment, and sysadmins need not worry about infrastructure since Docker can easily scale the number of systems up and down. Docker comes into play at the deployment stage of the software development cycle.