
Experiment No. 9

Title: Demonstrating Container Orchestration using Kubernetes


Objective:

The objective of this experiment is to introduce students to container orchestration using Kubernetes and demonstrate how to deploy a containerized web application. By the end of this experiment, students will have a basic understanding of Kubernetes concepts and how to use Kubernetes to manage containers.


Introduction:

Container orchestration is a critical component in modern application deployment, allowing you to manage, scale, and maintain containerized applications efficiently. Kubernetes is a popular container orchestration platform that automates many tasks associated with deploying, scaling, and managing containerized applications. This experiment will demonstrate basic container orchestration using Kubernetes by deploying a simple web application.


Kubernetes, often abbreviated as K8s, is an open-source container orchestration platform designed to automate the deployment, scaling, and management of containerized applications. Developed by Google and later donated to the Cloud Native Computing Foundation (CNCF), Kubernetes has become the de facto standard for container orchestration in modern cloud-native application development.


Key Concepts in Kubernetes:


  • Containerization: Kubernetes relies on containers as the fundamental unit for packaging and running applications. Containers encapsulate an application and its dependencies, ensuring consistency across various environments.

  • Cluster: A Kubernetes cluster is a set of machines, known as nodes, that collectively run containerized applications. A cluster typically consists of a control plane node (historically called the master node), which handles control and management, and multiple worker nodes that run the containers.

  • Nodes: Nodes are individual machines (virtual or physical) that form part of a Kubernetes cluster. Worker nodes run containerized workloads and communicate with the control plane, which schedules and orchestrates containers across the cluster.

  • Pod: A pod is the smallest deployable unit in Kubernetes. It can contain one or more tightly coupled containers that share the same network and storage namespace. Containers within a pod are typically used to run closely related processes. (A minimal example pod manifest is sketched after this list.)

  • Deployment: A Deployment is a Kubernetes resource that defines how to create, update, and scale instances of an application. It ensures that a specified number of replicas are running at all times.

  • Service: A Service is an abstraction that exposes a set of pods as a network service. It provides a stable IP address and DNS name for accessing the pods, enabling load balancing and discovery.

  • Namespace: Kubernetes supports multiple virtual clusters within the same physical cluster, called namespaces. Namespaces help isolate resources and provide a scope for organizing and managing workloads.
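
The pod concept becomes clearer with a concrete manifest. Below is a minimal, illustrative pod definition (the name, labels, and image are example values only, not part of this experiment) that runs a single Nginx container:

apiVersion: v1
kind: Pod
metadata:
  name: example-pod             # Example name; any valid name works
  labels:
    app: example                # Labels let Services and Deployments select this pod
spec:
  containers:
  - name: nginx-container       # The single container in this pod
    image: nginx:latest         # Official Nginx image
    ports:
    - containerPort: 80         # Port the container listens on

A standalone pod like this could be created with kubectl apply -f pod.yaml, but in practice pods are usually created and managed indirectly through Deployments, as demonstrated later in this experiment.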


Key Features of Kubernetes:


  • Automated Scaling: Kubernetes can automatically scale the number of replicas of an application based on resource usage or defined metrics. This ensures applications can handle varying workloads efficiently.

  • Load Balancing: Services in Kubernetes can distribute traffic among pods, providing high availability and distributing workloads evenly.

  • Self-healing: Kubernetes monitors the health of pods and can automatically restart or replace failed instances to maintain desired application availability.

  • Rolling Updates and Rollbacks: Kubernetes allows for controlled, rolling updates of applications, ensuring zero-downtime deployments. If issues arise, rollbacks can be performed with ease.

  • Storage Orchestration: Kubernetes provides mechanisms for attaching storage volumes to containers, enabling data persistence and sharing.

  • Configuration Management: Kubernetes supports configuration management through ConfigMaps and Secrets, making it easy to manage application configurations. (A brief ConfigMap sketch follows this list.)

  • Extensibility: Kubernetes is highly extensible, with a vast ecosystem of plugins and extensions, including Helm charts for packaging applications and custom resources for defining custom objects.
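
As a brief illustration of configuration management, here is a minimal, hypothetical ConfigMap (the names and values are examples only):

apiVersion: v1
kind: ConfigMap
metadata:
  name: example-config          # Example name
data:
  APP_MODE: "production"        # Plain key/value configuration entries
  WELCOME_MESSAGE: "Hello from Kubernetes"

A container in a pod template can then load all of these keys as environment variables with an envFrom entry that references the ConfigMap:

        envFrom:
        - configMapRef:
            name: example-config

Secrets are consumed in much the same way but are intended for sensitive values such as passwords and API keys; their data is stored base64-encoded and can be restricted with tighter access controls.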


Kubernetes has become a cornerstone of cloud-native application development, enabling organisations to build, deploy, and scale containerized applications effectively. Its ability to abstract away infrastructure complexities, ensure application reliability, and provide consistent scaling makes it a powerful tool for modern software development and operations.


Materials:

  • A computer with a Kubernetes cluster available, for example via Minikube or Docker Desktop (https://kubernetes.io/docs/setup/)

  • Docker installed (https://docs.docker.com/get-docker/)


Experiment Steps:

Step 1: Create a Dockerized Web Application

  • Create a simple web application (e.g., a static HTML page) or use an existing one.

  • Create a Dockerfile to package the web application into a Docker container. Here's an example Dockerfile for a simple web server:


# Use an official Nginx base image
FROM nginx:latest

# Copy the web application files to the Nginx document root
COPY ./webapp /usr/share/nginx/html



  • Build the Docker image:

docker build -t my-web-app .
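
Note: the Deployment in the next step references this image by name. If your cluster runs inside a separate VM or container runtime (as with Minikube), the image you just built on your machine is not automatically visible to the cluster's nodes. One way to make it available, assuming Minikube is being used, is:

minikube image load my-web-app:latest

With Docker Desktop's built-in Kubernetes the cluster generally shares the local Docker images, so this step is usually unnecessary; alternatively, the image can be pushed to a registry that the cluster can reach.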


Step 2: Deploy the Web Application with Kubernetes

Create a Kubernetes Deployment YAML file (web-app-deployment.yaml) to deploy the web application:



apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-app-deployment
spec:
  replicas: 3                     # Number of pods to create
  selector:
    matchLabels:
      app: my-web-app             # Label to match pods
  template:
    metadata:
      labels:
        app: my-web-app           # Label assigned to pods
    spec:
      containers:
      - name: my-web-app-container
        image: my-web-app:latest        # Docker image to use
        imagePullPolicy: IfNotPresent   # Use the locally built image instead of pulling from a registry
        ports:
        - containerPort: 80             # Port to expose


Explanation of web-app-deployment.yaml:

  • apiVersion: Specifies the Kubernetes API version being used (apps/v1 for Deployments).

  • kind: Defines the type of resource we're creating (a Deployment in this case).

  • metadata: Contains metadata for the Deployment, including its name.

  • spec: Defines the desired state of the Deployment.

  • replicas: Specifies the desired number of identical pods to run. In this example, we want three replicas of our web application.

  • selector: Specifies how to select which pods are part of this Deployment. Pods with the label app: my-web-app will be managed by this Deployment.

  • template: Defines the pod template for the Deployment.

  • metadata: Contains metadata for the pods created by this template.

  • labels: Assigns the label app: my-web-app to the pods created by this template.

  • spec: Specifies the configuration of the pods.

  • containers: Defines the containers to run within the pods. In this case, we have one container named my-web-app-container using the my-web-app:latest Docker image.

  • imagePullPolicy: Set to IfNotPresent so Kubernetes uses the locally built image when it is already present on the node, rather than attempting to pull my-web-app:latest from a remote registry (the default behaviour for :latest tags).

  • ports: Specifies the ports to expose within the container. Here, we're exposing port 80.


Step 3: Deploy the Application

  • Apply the deployment configuration to your Kubernetes cluster:

kubectl apply -f web-app-deployment.yaml
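
  • Optionally, watch the rollout until all replicas are available:

kubectl rollout status deployment/my-web-app-deployment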


Step 4: Verify the Deployment

  • Check the status of your pods:

kubectl get pods
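
  • You should see three pods with a STATUS of Running (this may take a short while).

  • Optionally, verify the Deployment and reach the application without creating a Service (one possible approach):

kubectl get deployments
kubectl port-forward deployment/my-web-app-deployment 8080:80

While kubectl port-forward is running, the web application should be reachable at http://localhost:8080 on the same machine. When you are finished, kubectl delete -f web-app-deployment.yaml removes the Deployment and its pods.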


Conclusion:

In this experiment, you learned how to create a Kubernetes Deployment for container orchestration. The web-app-deployment.yaml file defines the desired state of the application, including the number of replicas, labels, and the Docker image to use. Kubernetes automates the deployment and scaling of the application, making it a powerful tool for managing containerized workloads.


Questions/Exercises:

  1. Explain the core concepts of Kubernetes, including pods, nodes, clusters, and deployments. How do these concepts work together to manage containerized applications?

  2. Discuss the advantages of containerization and how Kubernetes enhances the orchestration and management of containers in modern application development.

  3. What is a Kubernetes Deployment, and how does it ensure high availability and scalability of applications? Provide an example of deploying a simple application using a Kubernetes Deployment.

  4. Explain the purpose and benefits of Kubernetes Services. How do Kubernetes Services facilitate load balancing and service discovery within a cluster?

  5. Describe how Kubernetes achieves self-healing for applications running in pods. What mechanisms does it use to detect and recover from pod failures?

  6. How does Kubernetes handle rolling updates and rollbacks of applications without causing downtime? Provide steps to perform a rolling update of a Kubernetes application.

  7. Discuss the concept of Kubernetes namespaces and their use cases. How can namespaces be used to isolate and organize resources within a cluster?

  8. Explain the role of Kubernetes ConfigMaps and Secrets in managing application configurations. Provide examples of when and how to use them.

  9. What is the role of storage orchestration in Kubernetes, and how does it enable data persistence and sharing for containerized applications?

  10. Explore the extensibility of Kubernetes. Describe Helm charts and custom resources, and explain how they can be used to customize and extend Kubernetes functionality.
