Day 7: Scaling Web Applications with Kubernetes

Overview:
Kubernetes has become the de facto standard for deploying, scaling, and managing containerized applications. With Kubernetes, you can ensure your web applications run reliably in any environment, from local development to cloud-based production. In this blog, we’ll go through the steps to deploy a scalable web application on Kubernetes using a local environment (Minikube) and later extend it to cloud platforms like AWS or Google Cloud.

By the end of this guide, you’ll have a web application running on a Kubernetes cluster that can scale based on traffic demands.


Prerequisites:

  • Install Docker.
  • Install Minikube.
  • Basic understanding of Docker and Kubernetes.

Step 1: Setting Up a Web Application

We’ll start by building a simple web application using Node.js and Docker. This application will be containerized and later deployed on Kubernetes.

  1. Create the project directory:
mkdir k8s-app
cd k8s-app
  2. Initialize a Node.js project:
npm init -y
  3. Install Express:
npm install express
  4. Create a server.js file:
touch server.js

Add the following code to server.js:

const express = require('express');
const app = express();
const PORT = process.env.PORT || 8080;

app.get('/', (req, res) => {
  res.send('Hello from Kubernetes!');
});

app.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}`);
});
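
Before touching Docker or Kubernetes, you can sanity-check the app directly with Node:

node server.js
# In a second terminal (or just open the URL in your browser):
curl http://localhost:8080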

Step 2: Dockerizing the Application

Next, we’ll containerize the Node.js application using Docker.

  1. Create a Dockerfile:
touch Dockerfile

Add the following content to Dockerfile:

# Use an official Node.js runtime as a base image
FROM node:18-alpine

# Set the working directory in the container
WORKDIR /usr/src/app

# Copy the package.json and install dependencies
COPY package*.json ./
RUN npm install

# Copy the rest of the application code
COPY . .

# Expose port 8080 for the application
EXPOSE 8080

# Start the application
CMD ["node", "server.js"]
  2. Build the Docker image:
docker build -t k8s-app .
  3. Run the Docker container locally to test it:
docker run -p 8080:8080 k8s-app

Visit http://localhost:8080 in your browser, and you should see “Hello from Kubernetes!”.
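
One small but worthwhile addition: because the Dockerfile's COPY . . copies everything in the project directory, a .dockerignore file keeps your locally installed node_modules folder (and other clutter) out of the image:

node_modules
npm-debug.log
.git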


Step 3: Installing Minikube (Local Kubernetes Cluster)

Minikube is a tool that allows you to run Kubernetes locally. It is perfect for testing and development before moving to a cloud provider.

  1. Install Minikube:
    Follow the official Minikube installation instructions for your platform.
  2. Start a Minikube cluster:
minikube start
  3. Verify that Minikube is running:
kubectl get nodes

You should see one node listed as part of your local Kubernetes cluster.


Step 4: Creating Kubernetes Configuration Files

To deploy our application to Kubernetes, we need to define a few Kubernetes resources: a Deployment and a Service.

4.1 Create the Deployment File

A Deployment in Kubernetes ensures that the desired number of pod replicas of your application is running at all times.

  1. Create a deployment.yaml file:
touch deployment.yaml
  2. Add the following content to deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-app-deployment
spec:
  replicas: 3  # Number of application instances
  selector:
    matchLabels:
      app: k8s-app
  template:
    metadata:
      labels:
        app: k8s-app
    spec:
      containers:
      - name: k8s-app
        image: k8s-app:latest  # The image you built earlier
        imagePullPolicy: IfNotPresent  # Use the image loaded into Minikube instead of pulling from a registry
        ports:
        - containerPort: 8080

4.2 Create the Service File

A Kubernetes Service gives your pods a stable network address. With the NodePort type used here, it also exposes the application outside the cluster so external users can reach it.

  1. Create a service.yaml file:
touch service.yaml
  2. Add the following content to service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: k8s-app-service
spec:
  type: NodePort
  selector:
    app: k8s-app
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 30007  # Expose the app on this port (NodePort values must fall within 30000–32767)

Step 5: Deploying the Application to Kubernetes

Now that we have our Deployment and Service files, we can deploy the application to Kubernetes.

  1. Load the Docker image into Minikube:

Minikube runs its own container runtime inside a VM or container, so it cannot see images built with your local Docker daemon. Load the image you built earlier into the cluster:

minikube image load k8s-app
  2. Apply the Deployment and Service:
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
  3. Verify the Deployment and Pods:
kubectl get deployments
kubectl get pods

You should see your application running as three replicas (pods) under the Deployment.
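
If a pod is stuck in a state like ImagePullBackOff or CrashLoopBackOff instead, the usual first step is to inspect it (replace <pod-name> with one of the names shown by kubectl get pods):

kubectl describe pod <pod-name>
kubectl logs <pod-name>

For ImagePullBackOff in particular, double-check that the image was loaded with minikube image load and that imagePullPolicy is set as shown in deployment.yaml.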

  4. Access the Application:

To access the application, run the following command to retrieve the Minikube IP address:

minikube ip

Open a browser and visit http://<minikube-ip>:30007. You should see “Hello from Kubernetes!” served by one of the pods.
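
Depending on the Minikube driver you use, the node IP may not be directly reachable from your host. In that case, Minikube can print (and, where necessary, tunnel) a working URL for the Service:

minikube service k8s-app-service --url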


Step 6: Scaling the Application

One of the key benefits of Kubernetes is its ability to scale your application up or down based on traffic or other metrics.

  1. Scale the Deployment to 5 replicas:
kubectl scale deployment k8s-app-deployment --replicas=5
  2. Verify the scaling:
kubectl get deployments
kubectl get pods

You should now see five pods running your application.


Step 7: Setting Up Auto-Scaling (Optional)

Kubernetes can automatically scale your application based on CPU usage using the Horizontal Pod Autoscaler.

  1. Enable metrics server (required for auto-scaling):
minikube addons enable metrics-server
  2. Create an autoscaler:
kubectl autoscale deployment k8s-app-deployment --cpu-percent=50 --min=2 --max=10

This command creates a Horizontal Pod Autoscaler that automatically scales the Deployment between 2 and 10 replicas based on CPU usage (see the note on CPU requests after this list).

  3. Check the autoscaler:
kubectl get hpa
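
One note on CPU-based autoscaling: the HPA can only compute CPU utilization if the container declares a CPU request. If kubectl get hpa shows <unknown> targets, add a resources block under the container entry in deployment.yaml and re-apply it. The values below are just a rough starting point; tune them for your app:

# Goes under the k8s-app container entry in deployment.yaml
        resources:
          requests:
            cpu: 100m
            memory: 64Mi

After re-applying the Deployment, kubectl get hpa should start reporting actual CPU percentages.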

Step 8: Deploying to Cloud (AWS EKS or Google GKE)

Once you’re confident with your local Kubernetes deployment, you can easily extend it to cloud providers like AWS or Google Cloud. Both provide managed Kubernetes services: EKS for AWS and GKE for Google Cloud.

For AWS EKS:

  1. Set up the AWS CLI and eksctl by following their official installation guides.
  2. Create an EKS cluster:
eksctl create cluster --name k8s-app-cluster --region us-west-2
  3. Deploy your application:

Once your EKS cluster is set up, you can apply the same deployment.yaml and service.yaml files using kubectl apply.
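
One important difference from the local setup: an EKS cluster cannot use an image that only exists on your laptop, so push it to a registry the worker nodes can pull from, such as Amazon ECR, and point the image field in deployment.yaml at that URI. A rough sketch (the <account-id> placeholder is yours to fill in):

aws ecr create-repository --repository-name k8s-app --region us-west-2
aws ecr get-login-password --region us-west-2 | docker login --username AWS --password-stdin <account-id>.dkr.ecr.us-west-2.amazonaws.com
docker tag k8s-app:latest <account-id>.dkr.ecr.us-west-2.amazonaws.com/k8s-app:latest
docker push <account-id>.dkr.ecr.us-west-2.amazonaws.com/k8s-app:latest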

For Google Cloud GKE:

  1. Install the Google Cloud SDK by following its official installation guide.
  2. Create a GKE cluster:
gcloud container clusters create k8s-app-cluster --zone us-central1-a
  3. Deploy your application:

Apply the same Kubernetes YAML files to your GKE cluster using kubectl. As with EKS, the image must live in a registry the cluster can pull from (for example Artifact Registry), with the image field in deployment.yaml updated to match.


Conclusion

In this tutorial, you learned how to deploy and scale a web application using Kubernetes, starting with a local Minikube cluster and extending the concepts to cloud-based deployments like AWS EKS and Google GKE. Kubernetes provides a robust and scalable platform for containerized applications, with features like automatic scaling, self-healing, and service discovery.

As a next step, you can explore advanced Kubernetes topics such as configuring Ingress controllers, setting up persistent storage, and deploying stateful applications.
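
As a small preview of the Ingress topic, here is a minimal manifest sketch that would route HTTP traffic to the k8s-app-service created earlier. It assumes an Ingress controller is installed in the cluster (locally, minikube addons enable ingress sets up the NGINX controller):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: k8s-app-ingress
spec:
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: k8s-app-service
            port:
              number: 8080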

Feel free to copy the code and configurations to build and scale your own cloud-native applications!
