
Experiment No. 9

Title: Demonstrating Container Orchestration using Kubernetes


Objective:

The objective of this experiment is to introduce students to container orchestration using Kubernetes and demonstrate how to deploy a containerized web application. By the end of this experiment, students will have a basic understanding of Kubernetes concepts and how to use Kubernetes to manage containers.


Introduction:

Container orchestration is a critical component in modern application deployment, allowing you to manage, scale, and maintain containerized applications efficiently. Kubernetes is a popular container orchestration platform that automates many tasks associated with deploying, scaling, and managing containerized applications. This experiment will demonstrate basic container orchestration using Kubernetes by deploying a simple web application.


Kubernetes, often abbreviated as K8s, is an open-source container orchestration platform designed to automate the deployment, scaling, and management of containerized applications. Developed by Google and later donated to the Cloud Native Computing Foundation (CNCF), Kubernetes has become the de facto standard for container orchestration in modern cloud-native application development.


Key Concepts in Kubernetes:


  • Containerization: Kubernetes relies on containers as the fundamental unit for packaging and running applications. Containers encapsulate an application and its dependencies, ensuring consistency across various environments.

  • Cluster: A Kubernetes cluster is a set of machines, known as nodes, that collectively run containerized applications. A cluster typically consists of a master node (for control and management) and multiple worker nodes (for running containers).

  • Nodes: Nodes are individual machines (virtual or physical) that form part of a Kubernetes cluster. Nodes run containerized workloads and communicate with the master node to manage and orchestrate containers.

  • Pod: A pod is the smallest deployable unit in Kubernetes. It can contain one or more tightly coupled containers that share the same network and storage namespace. Containers within a pod are typically used to run closely related processes (a minimal pod manifest is sketched after this list).

  • Deployment: A Deployment is a Kubernetes resource that defines how to create, update, and scale instances of an application. It ensures that a specified number of replicas are running at all times.

  • Service: A Service is an abstraction that exposes a set of pods as a network service. It provides a stable IP address and DNS name for accessing the pods, enabling load balancing and discovery.

  • Namespace: Kubernetes supports multiple virtual clusters within the same physical cluster, called namespaces. Namespaces help isolate resources and provide a scope for organizing and managing workloads.
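
For illustration, here is a minimal pod manifest of the kind described above; the name and image are placeholders and are not used elsewhere in this experiment:

apiVersion: v1
kind: Pod
metadata:
  name: example-nginx-pod       # illustrative name only
  labels:
    app: example-nginx
spec:
  containers:
  - name: nginx
    image: nginx:latest         # any container image would do
    ports:
    - containerPort: 80

Saving this as pod.yaml and running kubectl apply -f pod.yaml would create a single, standalone pod. In practice, pods are usually created indirectly through a Deployment, as in the experiment below, so that Kubernetes can restart and scale them automatically.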


Key Features of Kubernetes:


  • Automated Scaling: Kubernetes can automatically scale the number of replicas of an application based on resource usage or defined metrics. This ensures applications can handle varying workloads efficiently.

  • Load Balancing: Services in Kubernetes can distribute traffic among pods, providing high availability and distributing workloads evenly.

  • Self-healing: Kubernetes monitors the health of pods and can automatically restart or replace failed instances to maintain desired application availability.

  • Rolling Updates and Rollbacks: Kubernetes allows for controlled, rolling updates of applications, ensuring zero-downtime deployments. If issues arise, rollbacks can be performed with ease (see the command sketch after this list).

  • Storage Orchestration: Kubernetes provides mechanisms for attaching storage volumes to containers, enabling data persistence and sharing.

  • Configuration Management: Kubernetes supports configuration management through ConfigMaps and Secrets, making it easy to manage application configurations.

  • Extensibility: Kubernetes is highly extensible, with a vast ecosystem of plugins and extensions, including Helm charts for packaging applications and custom resources for defining custom objects.
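
As a sketch of how the rolling-update and rollback features above are used in practice, the following kubectl commands assume the Deployment and container names defined later in this experiment and a hypothetical my-web-app:v2 image; they are illustrative, not required steps:

# Change the container image of a running Deployment (triggers a rolling update)
kubectl set image deployment/my-web-app-deployment my-web-app-container=my-web-app:v2

# Watch the rollout progress
kubectl rollout status deployment/my-web-app-deployment

# Roll back to the previous revision if the update misbehaves
kubectl rollout undo deployment/my-web-app-deployment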


Kubernetes has become a cornerstone of cloud-native application development, enabling organisations to build, deploy, and scale containerized applications effectively. Its ability to abstract away infrastructure complexities, ensure application reliability, and provide consistent scaling makes it a powerful tool for modern software development and operations.


Materials:

  • A computer with a Kubernetes cluster and the kubectl command-line tool installed (for example, a local Minikube cluster; see https://kubernetes.io/docs/setup/)

  • Docker installed (https://docs.docker.com/get-docker/)


Experiment Steps:

Step 1: Create a Dockerized Web Application

  • Create a simple web application (e.g., a static HTML page) or use an existing one (a one-line placeholder is shown after the Dockerfile below).

  • Create a Dockerfile to package the web application into a Docker container. Here's an example Dockerfile for a simple web server:


# Use an official Nginx base image
FROM nginx:latest

# Copy the web application files to the Nginx document root
COPY ./webapp /usr/share/nginx/html
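
If you do not already have a web application, a single placeholder page in a webapp/ directory next to the Dockerfile is enough for this experiment; for example:

# Create a minimal static page for the experiment (any HTML content works)
mkdir -p webapp
echo "<h1>Hello from Kubernetes!</h1>" > webapp/index.html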



  • Build the Docker image:

docker build -t my-web-app .
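
Note that a locally built image is only known to your local Docker daemon. If your cluster cannot see it (for example, when using Minikube), you will typically need to load the image into the cluster or push it to a registry; with a recent Minikube this can be done as follows:

# Make the locally built image available inside the Minikube cluster
minikube image load my-web-app:latest

Because the image uses the latest tag, you may also need to set imagePullPolicy: IfNotPresent (or Never) on the container in the Deployment below so Kubernetes does not try to pull the image from a remote registry.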


Step 2: Deploy the Web Application with Kubernetes

Create a Kubernetes Deployment YAML file (web-app-deployment.yaml) to deploy the web application:



apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-app-deployment
spec:
  replicas: 3                   # Number of pods to create
  selector:
    matchLabels:
      app: my-web-app           # Label to match pods
  template:
    metadata:
      labels:
        app: my-web-app         # Label assigned to pods
    spec:
      containers:
      - name: my-web-app-container
        image: my-web-app:latest   # Docker image to use
        ports:
        - containerPort: 80        # Port to expose


Explanation of web-app-deployment.yaml:

  • apiVersion: Specifies the Kubernetes API version being used (apps/v1 for Deployments).

  • kind: Defines the type of resource we're creating (a Deployment in this case).

  • metadata: Contains metadata for the Deployment, including its name.

  • spec: Defines the desired state of the Deployment.

  • spec.replicas: Specifies the desired number of identical pods to run. In this example, we want three replicas of our web application.

  • spec.selector: Specifies how to select which pods belong to this Deployment. Pods with the label app: my-web-app will be managed by this Deployment.

  • spec.template: Defines the pod template the Deployment uses to create pods.

  • spec.template.metadata: Contains metadata for the pods created from this template.

  • spec.template.metadata.labels: Assigns the label app: my-web-app to the pods created from this template, so they match the selector above.

  • spec.template.spec: Specifies the configuration of the pods.

  • spec.template.spec.containers: Defines the containers to run within the pods. In this case, we have one container named my-web-app-container using the my-web-app:latest Docker image.

  • ports: Specifies the ports to expose within the container. Here, we're exposing port 80 (containerPort: 80).


Step 3: Deploy the Application

  • Apply the deployment configuration to your Kubernetes cluster:

kubectl apply -f web-app-deployment.yaml


Step 4: Verify the Deployment

  • Check the status of your pods:

kubectl get pods
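
You should see three pods whose names start with my-web-app-deployment in the Running state; kubectl get deployments shows the overall replica count. To reach the application from outside the cluster, you can additionally expose the Deployment with a Service, as described in the introduction. A minimal sketch (web-app-service.yaml), assuming a NodePort type:

apiVersion: v1
kind: Service
metadata:
  name: my-web-app-service
spec:
  type: NodePort                # expose the Service on a port of every node
  selector:
    app: my-web-app             # matches the label on the Deployment's pods
  ports:
  - port: 80                    # port the Service listens on
    targetPort: 80              # containerPort defined in the Deployment

Apply it with kubectl apply -f web-app-service.yaml and list it with kubectl get services; on Minikube, minikube service my-web-app-service opens the application in a browser.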


Conclusion:

In this experiment, you learned how to create a Kubernetes Deployment for container orchestration. The web-app-deployment.yaml file defines the desired state of the application, including the number of replicas, labels, and the Docker image to use. Kubernetes automates the deployment and scaling of the application, making it a powerful tool for managing containerized workloads.


Questions/Exercises:

  1. Explain the core concepts of Kubernetes, including pods, nodes, clusters, and deployments. How do these concepts work together to manage containerized applications?

  2. Discuss the advantages of containerization and how Kubernetes enhances the orchestration and management of containers in modern application development.

  3. What is a Kubernetes Deployment, and how does it ensure high availability and scalability of applications? Provide an example of deploying a simple application using a Kubernetes Deployment.

  4. Explain the purpose and benefits of Kubernetes Services. How do Kubernetes Services facilitate load balancing and service discovery within a cluster?

  5. Describe how Kubernetes achieves self-healing for applications running in pods. What mechanisms does it use to detect and recover from pod failures?

  6. How does Kubernetes handle rolling updates and rollbacks of applications without causing downtime? Provide steps to perform a rolling update of a Kubernetes application.

  7. Discuss the concept of Kubernetes namespaces and their use cases. How can namespaces be used to isolate and organize resources within a cluster?

  8. Explain the role of Kubernetes ConfigMaps and Secrets in managing application configurations. Provide examples of when and how to use them.

  9. What is the role of storage orchestration in Kubernetes, and how does it enable data persistence and sharing for containerized applications?

  10. Explore the extensibility of Kubernetes. Describe Helm charts and custom resources, and explain how they can be used to customize and extend Kubernetes functionality.
