Experiment No. 9
Title: Demonstrating Container Orchestration using Kubernetes
The objective of this experiment is to introduce students to container orchestration using Kubernetes and demonstrate how to deploy a containerized web application. By the end of this experiment, students will have a basic understanding of Kubernetes concepts and how to use Kubernetes to manage containers.
Container orchestration is a critical component in modern application deployment, allowing you to manage, scale, and maintain containerized applications efficiently. Kubernetes is a popular container orchestration platform that automates many tasks associated with deploying, scaling, and managing containerized applications. This experiment will demonstrate basic container orchestration using Kubernetes by deploying a simple web application.
Kubernetes, often abbreviated as K8s, is an open-source container orchestration platform designed to automate the deployment, scaling, and management of containerized applications. Developed by Google and later donated to the Cloud Native Computing Foundation (CNCF), Kubernetes has become the de facto standard for container orchestration in modern cloud-native application development.
Key Concepts in Kubernetes:
Containerization: Kubernetes relies on containers as the fundamental unit for packaging and running applications. Containers encapsulate an application and its dependencies, ensuring consistency across various environments.
Cluster: A Kubernetes cluster is a set of machines, known as nodes, that collectively run containerized applications. A cluster typically consists of a control plane (historically called the master node) that handles control and management, and multiple worker nodes that run containers.
Nodes: Nodes are individual machines (virtual or physical) that form part of a Kubernetes cluster. Nodes run containerized workloads and communicate with the control plane to manage and orchestrate containers.
Pod: A pod is the smallest deployable unit in Kubernetes. It can contain one or more tightly coupled containers that share the same network and storage namespace. Containers within a pod are typically used to run closely related processes.
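As a sketch, the smallest useful Pod manifest looks something like this (the pod name and image here are illustrative, not part of the experiment):

```yaml
# pod.yaml — a minimal single-container Pod (illustrative names)
apiVersion: v1
kind: Pod
metadata:
  name: hello-pod
spec:
  containers:
    - name: web
      image: nginx:latest   # any image the cluster's runtime can pull
      ports:
        - containerPort: 80
```

In practice, pods are rarely created directly; they are usually managed through higher-level resources such as Deployments.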
Deployment: A Deployment is a Kubernetes resource that defines how to create, update, and scale instances of an application. It ensures that a specified number of replicas are running at all times.
Service: A Service is an abstraction that exposes a set of pods as a network service. It provides a stable IP address and DNS name for accessing the pods, enabling load balancing and discovery.
Namespace: Kubernetes supports multiple virtual clusters within the same physical cluster, called namespaces. Namespaces help isolate resources and provide a scope for organizing and managing workloads.
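A Namespace is itself a simple resource. A hedged sketch (the name dev is illustrative):

```yaml
# namespace.yaml — creates an isolated namespace named "dev"
apiVersion: v1
kind: Namespace
metadata:
  name: dev
```

Resources can then be placed in it by adding namespace: dev to their metadata, or by passing -n dev to kubectl commands.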
Key Features of Kubernetes:
Automated Scaling: Kubernetes can automatically scale the number of replicas of an application based on resource usage or defined metrics. This ensures applications can handle varying workloads efficiently.
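Automated scaling is typically declared with a HorizontalPodAutoscaler. The sketch below assumes a Deployment named my-web-app-deployment and a cluster with the metrics-server add-on installed (both assumptions for illustration):

```yaml
# hpa.yaml — scale between 3 and 10 replicas, targeting 70% average CPU use
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-web-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-web-app-deployment   # assumed Deployment name
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```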
Load Balancing: Services in Kubernetes can distribute traffic among pods, providing high availability and distributing workloads evenly.
Self-healing: Kubernetes monitors the health of pods and can automatically restart or replace failed instances to maintain desired application availability.
Rolling Updates and Rollbacks: Kubernetes allows for controlled, rolling updates of applications, ensuring zero-downtime deployments. If issues arise, rollbacks can be performed with ease.
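The update behavior is controlled by the strategy stanza of a Deployment's spec. A minimal sketch of a zero-downtime configuration (values are illustrative):

```yaml
# Fragment of a Deployment spec, not a complete manifest
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1        # at most one extra pod during the update
      maxUnavailable: 0  # never remove a pod before its replacement is ready
```

If an update misbehaves, kubectl rollout undo deployment/<name> returns the Deployment to its previous revision.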
Storage Orchestration: Kubernetes provides mechanisms for attaching storage volumes to containers, enabling data persistence and sharing.
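Persistent storage is usually requested through a PersistentVolumeClaim, which the cluster's storage class satisfies with a backing volume. A hedged sketch (name and size are illustrative):

```yaml
# pvc.yaml — request 1 GiB of storage writable by a single node at a time
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: web-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
```

A pod then mounts the claim through a volume of type persistentVolumeClaim.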
Configuration Management: Kubernetes supports configuration management through ConfigMaps and Secrets, making it easy to manage application configurations.
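As a sketch, a ConfigMap and a Secret can be declared together in one file (the names, keys, and values below are illustrative):

```yaml
# config.yaml — non-sensitive settings in a ConfigMap, sensitive ones in a Secret
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  APP_MODE: "production"
---
apiVersion: v1
kind: Secret
metadata:
  name: app-secret
type: Opaque
stringData:
  DB_PASSWORD: "example-password"   # stored base64-encoded by Kubernetes
```

Pods can consume both as environment variables (for example via envFrom) or as files mounted into the container.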
Extensibility: Kubernetes is highly extensible, with a vast ecosystem of plugins and extensions, including Helm charts for packaging applications and custom resources for defining custom objects.
Kubernetes has become a cornerstone of cloud-native application development, enabling organizations to build, deploy, and scale containerized applications effectively. Its ability to abstract away infrastructure complexities, ensure application reliability, and provide consistent scaling makes it a powerful tool for modern software development and operations.
Prerequisites:
A computer with Kubernetes installed (https://kubernetes.io/docs/setup/)
Docker installed (https://docs.docker.com/get-docker/)
Step 1: Create a Dockerized Web Application
Create a simple web application (e.g., a static HTML page) or use an existing one.
Create a Dockerfile to package the web application into a Docker container. Here's an example Dockerfile for a simple web server:
# Use an official Nginx base image
FROM nginx:latest

# Copy the web application files to the Nginx document root
COPY ./webapp /usr/share/nginx/html
Build the Docker image:
docker build -t my-web-app .
Step 2: Deploy the Web Application with Kubernetes
Create a Kubernetes Deployment YAML file (web-app-deployment.yaml) to deploy the web application:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-app-deployment
spec:
  replicas: 3 # Number of pods to create
  selector:
    matchLabels:
      app: my-web-app # Label to match pods
  template:
    metadata:
      labels:
        app: my-web-app # Label assigned to pods
    spec:
      containers:
        - name: my-web-app-container
          image: my-web-app:latest # Docker image to use (must be available to the cluster's container runtime)
          ports:
            - containerPort: 80 # Port to expose
Explanation of web-app-deployment.yaml:
apiVersion: Specifies the Kubernetes API version being used (apps/v1 for Deployments).
kind: Defines the type of resource we're creating (a Deployment in this case).
metadata: Contains metadata for the Deployment, including its name.
spec: Defines the desired state of the Deployment.
replicas: Specifies the desired number of identical pods to run. In this example, we want three replicas of our web application.
selector: Specifies how to select which pods are part of this Deployment. Pods with the label app: my-web-app will be managed by this Deployment.
template: Defines the pod template for the Deployment.
metadata: Contains metadata for the pods created by this template.
labels: Assigns the label app: my-web-app to the pods created by this template.
spec: Specifies the configuration of the pods.
containers: Defines the containers to run within the pods. In this case, we have one container named my-web-app-container using the my-web-app:latest Docker image.
ports: Specifies the ports to expose within the container. Here, we're exposing port 80.
Step 3: Deploy the Application
Apply the deployment configuration to your Kubernetes cluster:
kubectl apply -f web-app-deployment.yaml
Step 4: Verify the Deployment
Check the status of your pods:
kubectl get pods
All three replicas should reach the Running status within a few moments. You can also inspect the Deployment itself with kubectl get deployments.
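To actually reach the application from outside the pods, you can expose the Deployment with a Service. The sketch below is an optional extra step; the file name web-app-service.yaml and the NodePort type are assumptions (on a cloud cluster, type: LoadBalancer is more common):

```yaml
# web-app-service.yaml — expose the my-web-app pods on a node port
apiVersion: v1
kind: Service
metadata:
  name: my-web-app-service
spec:
  type: NodePort            # assumption; exposes the service on each node's IP
  selector:
    app: my-web-app         # matches the pods created by the Deployment
  ports:
    - port: 80
      targetPort: 80
```

Apply it with kubectl apply -f web-app-service.yaml, then use kubectl get service my-web-app-service to find the assigned node port.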
Conclusion:
In this experiment, you learned how to create a Kubernetes Deployment for container orchestration. The web-app-deployment.yaml file defines the desired state of the application, including the number of replicas, labels, and the Docker image to use. Kubernetes automates the deployment and scaling of the application, making it a powerful tool for managing containerized workloads.
Review Questions:
1. Explain the core concepts of Kubernetes, including pods, nodes, clusters, and deployments. How do these concepts work together to manage containerized applications?
2. Discuss the advantages of containerization and how Kubernetes enhances the orchestration and management of containers in modern application development.
3. What is a Kubernetes Deployment, and how does it ensure high availability and scalability of applications? Provide an example of deploying a simple application using a Kubernetes Deployment.
4. Explain the purpose and benefits of Kubernetes Services. How do Kubernetes Services facilitate load balancing and service discovery within a cluster?
5. Describe how Kubernetes achieves self-healing for applications running in pods. What mechanisms does it use to detect and recover from pod failures?
6. How does Kubernetes handle rolling updates and rollbacks of applications without causing downtime? Provide steps to perform a rolling update of a Kubernetes application.
7. Discuss the concept of Kubernetes namespaces and their use cases. How can namespaces be used to isolate and organize resources within a cluster?
8. Explain the role of Kubernetes ConfigMaps and Secrets in managing application configurations. Provide examples of when and how to use them.
9. What is the role of storage orchestration in Kubernetes, and how does it enable data persistence and sharing for containerized applications?
10. Explore the extensibility of Kubernetes. Describe Helm charts and custom resources, and explain how they can be used to customize and extend Kubernetes functionality.