
Experiment to Demonstrate Docker Orchestration with Kubernetes

Title: Docker Orchestration with Kubernetes

Objective:

The objective of this experiment is to demonstrate Docker orchestration using Kubernetes. Kubernetes is an open-source container orchestration platform that automates the deployment, scaling, and management of containerized applications. In this experiment, we will deploy a simple multi-container application using Kubernetes to showcase its orchestration capabilities.

Prerequisites:

  • Docker installed on your system.

  • kubectl installed to interact with the cluster.

  • Minikube installed for local Kubernetes cluster deployment (optional).

Experiment Steps:

Step 1: Set Up Kubernetes Cluster

  1. If you don't have access to a Kubernetes cluster, you can use Minikube to set up a local single-node cluster for testing purposes. Install Minikube following the official documentation (https://minikube.sigs.k8s.io/docs/start/).

  2. Start the Minikube cluster by running:


minikube start

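Once Minikube reports that the cluster is started, it is worth confirming that kubectl can reach it before moving on. A quick check:

```shell
# Confirm the local cluster is running and kubectl is pointed at it
minikube status
kubectl get nodes
```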

Step 2: Prepare the Application

  1. Create a new directory for your application (e.g., "my-k8s-app").

  2. Inside the "my-k8s-app" directory, create two Dockerfiles, one for each container you want to deploy in the Kubernetes cluster:

       • Dockerfile-backend for the backend service (e.g., a Flask or Node.js backend).

       • Dockerfile-frontend for the frontend service (e.g., a React or Angular frontend).
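As a minimal sketch, the two Dockerfiles might look like the following, assuming a Python/Flask backend with an app.py listening on port 8000 and a frontend whose static build output is served by nginx; adjust both to your actual stack:

```dockerfile
# Dockerfile-backend — hypothetical Flask backend
# (assumes app.py and requirements.txt exist in the build context)
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
EXPOSE 8000
CMD ["python", "app.py"]
```

```dockerfile
# Dockerfile-frontend — hypothetical static frontend served by nginx
# (assumes the compiled assets live in a build/ directory)
FROM nginx:alpine
COPY build/ /usr/share/nginx/html/
EXPOSE 80
```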

Step 3: Build Docker Images

  1. Open a terminal or command prompt.

  2. Navigate to the "my-k8s-app" directory.

  3. Build the Docker images for the backend and frontend services using the respective Dockerfiles:


docker build -t my-backend-service -f Dockerfile-backend .

docker build -t my-frontend-service -f Dockerfile-frontend .

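If you are using Minikube, note that images built against your local Docker daemon are not automatically visible inside the cluster. Two common options are to load the built images into the Minikube node, or to point your shell's Docker client at Minikube's daemon before building:

```shell
# Option 1: copy locally built images into the Minikube node
minikube image load my-backend-service
minikube image load my-frontend-service

# Option 2: build directly against Minikube's Docker daemon
eval $(minikube docker-env)
docker build -t my-backend-service -f Dockerfile-backend .
docker build -t my-frontend-service -f Dockerfile-frontend .
```

Because these images exist only locally, you may also need to set imagePullPolicy: Never (or IfNotPresent) on the containers in the deployment manifests so Kubernetes does not try to pull them from a remote registry.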

Step 4: Create Kubernetes Deployments


  1. Create Kubernetes deployment YAML files for the backend and frontend services. For each service, create a .yaml file with the deployment configuration. Example configurations are as follows:


backend-deployment.yaml:


apiVersion: apps/v1
kind: Deployment
metadata:
  name: backend-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: backend
  template:
    metadata:
      labels:
        app: backend
    spec:
      containers:
        - name: backend-container
          image: my-backend-service
          ports:
            - containerPort: 8000


frontend-deployment.yaml:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend-deployment
spec:
  replicas: 3
  selector:
    matchLabels:
      app: frontend
  template:
    metadata:
      labels:
        app: frontend
    spec:
      containers:
        - name: frontend-container
          image: my-frontend-service
          ports:
            - containerPort: 80


  2. Apply the deployment configurations to create the backend and frontend deployments in the Kubernetes cluster:


kubectl apply -f backend-deployment.yaml

kubectl apply -f frontend-deployment.yaml
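After applying the manifests, you can confirm that the deployments were created and their pods reached the Running state; the -l flag filters pods by the labels set in the manifests:

```shell
kubectl get deployments
kubectl get pods -l app=backend
kubectl get pods -l app=frontend
```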


Step 5: Expose Services

  1. Create Kubernetes service YAML files for the backend and frontend services. For each service, create a .yaml file with the service configuration. Example configurations are as follows:


backend-service.yaml:

apiVersion: v1
kind: Service
metadata:
  name: backend-service
spec:
  selector:
    app: backend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 8000
  type: LoadBalancer


frontend-service.yaml:


apiVersion: v1
kind: Service
metadata:
  name: frontend-service
spec:
  selector:
    app: frontend
  ports:
    - protocol: TCP
      port: 80
      targetPort: 80
  type: LoadBalancer


  2. Apply the service configurations to create the backend and frontend services in the Kubernetes cluster:

kubectl apply -f backend-service.yaml

kubectl apply -f frontend-service.yaml


Step 6: Access the Application


  1. Retrieve the external IP addresses of the services to access the application:

kubectl get services

  2. Use the external IP address of the frontend service to access the application in a web browser.
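On Minikube, services of type LoadBalancer typically show a pending external IP, because there is no cloud load balancer to provision one. In that case you can ask Minikube for a reachable URL, or open a tunnel in a separate terminal:

```shell
# Print a URL that routes to the frontend service
minikube service frontend-service --url

# Or, in a separate terminal, allocate external IPs for LoadBalancer services
minikube tunnel
```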

Step 7: Scaling the Application

  1. To demonstrate Kubernetes' ability to scale the application, increase the number of frontend replicas:

kubectl scale deployment frontend-deployment --replicas=5

  2. Observe how Kubernetes automatically creates the additional frontend pods to reach the desired replica count.
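The scale-up can be watched live; the -w flag streams pod status changes as the new replicas are scheduled and started:

```shell
# Watch the frontend pods come up (Ctrl+C to stop watching)
kubectl get pods -l app=frontend -w

# Check the deployment's ready/desired replica counts
kubectl get deployment frontend-deployment
```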


Step 8: Clean Up

  1. Delete the Kubernetes deployments and services when you're done experimenting:

kubectl delete deployment frontend-deployment backend-deployment

kubectl delete service frontend-service backend-service
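If you used Minikube, you can also stop or remove the local cluster itself once the resources are deleted:

```shell
minikube stop     # halt the cluster but keep its state for later
minikube delete   # remove the local cluster entirely
```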


Conclusion:

In this experiment, we demonstrated Docker orchestration using Kubernetes. We deployed a multi-container application by defining Kubernetes Deployments and Services, and Kubernetes managed the application's lifecycle for us: scheduling pods, maintaining the desired replica counts, and making it straightforward to scale the frontend on demand. Docker orchestration with Kubernetes enables efficient management and scaling of containerized applications, making it a powerful tool for modern software development and deployment.

