
Experiment to Demonstrate Docker Orchestration with Kubernetes

Title: Docker Orchestration with Kubernetes

Objective:

The objective of this experiment is to demonstrate Docker orchestration using Kubernetes. Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. In this experiment, we will deploy a simple multi-container application using Kubernetes to showcase its orchestration capabilities.

Prerequisites:

  • Docker installed on your system.

  • kubectl installed for interacting with the cluster.

  • Minikube installed for running a local Kubernetes cluster (optional if you already have access to a cluster).

Experiment Steps:

Step 1: Set Up Kubernetes Cluster

  1. If you don't have access to a Kubernetes cluster, you can use Minikube to set up a local single-node cluster for testing purposes. Install Minikube following the official documentation (https://minikube.sigs.k8s.io/docs/start/).

  2. Start the Minikube cluster by running:


minikube start
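Before proceeding, it is worth confirming the cluster is actually up. A quick sanity check with standard Minikube and kubectl commands:

```shell
# Verify the Minikube VM/container and its components are running
minikube status

# The single node should report STATUS "Ready"
kubectl get nodes
```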


Step 2: Prepare the Application

  1. Create a new directory for your application (e.g., "my-k8s-app").

  2. Inside the "my-k8s-app" directory, create two Dockerfiles, one for each container you want to deploy to the Kubernetes cluster:

  • Dockerfile-backend for the backend service (e.g., a Flask or Node.js backend).

  • Dockerfile-frontend for the frontend service (e.g., a React or Angular frontend).
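As an illustration, a minimal Dockerfile-backend for a hypothetical Flask app listening on port 8000 might look like the sketch below (the app module name, requirements file, and port are assumptions, not requirements):

```dockerfile
# Hypothetical backend image: Python base with a Flask app on port 8000
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
```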

Step 3: Build Docker Images

  1. Open a terminal or command prompt.

  2. Navigate to the "my-k8s-app" directory.

  3. Build the Docker images for the backend and frontend services using the respective Dockerfiles:


docker build -t my-backend-service -f Dockerfile-backend .

docker build -t my-frontend-service -f Dockerfile-frontend .
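Note: images built against your host's Docker daemon are not automatically visible to Minikube's container runtime. Two common ways to make them available (both standard Minikube commands) are:

```shell
# Option 1: point your shell at Minikube's Docker daemon, then build inside it
eval $(minikube docker-env)
docker build -t my-backend-service -f Dockerfile-backend .
docker build -t my-frontend-service -f Dockerfile-frontend .

# Option 2: build on the host as above, then load the images into the cluster
minikube image load my-backend-service
minikube image load my-frontend-service
```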


Step 4: Deploy Kubernetes Pods


  1. Create a Kubernetes deployment YAML file for each service (backend and frontend). Example configurations are as follows:


backend-deployment.yaml:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend-container
          image: my-backend-service
          ports:
            - containerPort: 8000
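One caveat: with a bare local image name like my-backend-service, Kubernetes will by default try to pull the image from a registry and fail. When using images loaded into Minikube as described above, you can tell it to use the local copy by adding an imagePullPolicy to the container spec (shown here for the backend; the same applies to the frontend):

```yaml
      containers:
        - name: backend-container
          image: my-backend-service
          imagePullPolicy: Never  # use the locally loaded image; do not pull
          ports:
            - containerPort: 8000
```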


frontend-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend-container
          image: my-frontend-service
          ports:
            - containerPort: 80


  2. Apply the deployment configurations to create the backend and frontend deployments in the Kubernetes cluster:


kubectl apply -f backend-deployment.yaml

kubectl apply -f frontend-deployment.yaml
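After applying the manifests, you can verify that the pods come up with standard kubectl commands:

```shell
# Wait for each rollout to complete
kubectl rollout status deployment/backend-deployment
kubectl rollout status deployment/frontend-deployment

# List the pods created by both deployments (2 backend + 3 frontend)
kubectl get pods -l 'app in (backend, frontend)'
```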


Step 5: Expose Services

  1. Create a Kubernetes service YAML file for each service (backend and frontend). Example configurations are as follows:


backend-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8000
  type: LoadBalancer


frontend-service.yaml:


apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer


  2. Apply the service configurations to create the backend and frontend services in the Kubernetes cluster:

kubectl apply -f backend-service.yaml

kubectl apply -f frontend-service.yaml


Step 6: Access the Application


  1. Retrieve the external IP addresses of the services to access the application:

kubectl get services

  2. Use the external IP address of the frontend service to access the application in a web browser.
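On Minikube there is no cloud load balancer, so LoadBalancer services show EXTERNAL-IP as <pending>. Two standard ways to reach the frontend locally:

```shell
# Option 1: open a routable URL for the service directly
minikube service frontend-service

# Option 2: run a tunnel in a separate terminal so LoadBalancer IPs get assigned,
# then re-run "kubectl get services" to see the external IP
minikube tunnel
```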

Step 7: Scaling the Application

  1. To demonstrate Kubernetes' ability to scale the application, increase the number of frontend replicas:

kubectl scale deployment frontend-deployment --replicas=5

  2. Observe how Kubernetes automatically creates additional frontend pods to match the desired replica count.
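You can watch the new replicas appear as the deployment scales from 3 to 5 pods:

```shell
# Stream pod status changes for the frontend (Ctrl+C to stop)
kubectl get pods -l app=frontend --watch
```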


Step 8: Clean Up

  1. Delete the Kubernetes deployments and services when you're done experimenting:

kubectl delete deployment frontend-deployment backend-deployment

kubectl delete service frontend-service backend-service


Conclusion:

In this experiment, we demonstrated Docker orchestration using Kubernetes. We deployed a multi-container application by defining Kubernetes deployments and services. Kubernetes managed the application's lifecycle, keeping the desired number of replicas running and scaling the frontend on demand with a single command. Docker orchestration with Kubernetes enables efficient management and scaling of containerized applications, making it a powerful tool for modern software development and deployment.

