Overview:
Kubernetes has become the de facto standard for deploying, scaling, and managing containerized applications. With Kubernetes, you can ensure your web applications run reliably in any environment, from local development to cloud-based production. In this blog, we’ll go through the steps to deploy a scalable web application on Kubernetes using a local environment (Minikube) and later extend it to cloud platforms like AWS or Google Cloud.
By the end of this guide, you’ll have a web application running on a Kubernetes cluster that can scale based on traffic demands.
Prerequisites:
Before starting, make sure you have the following installed:
- Node.js and npm
- Docker
- kubectl (the Kubernetes command-line tool)
- Minikube (installation covered in Step 3)
Step 1: Setting Up a Web Application
We’ll start by building a simple web application using Node.js and Docker. This application will be containerized and later deployed on Kubernetes.
- Create the project directory:
mkdir k8s-app
cd k8s-app
- Initialize a Node.js project:
npm init -y
- Install Express:
npm install express
- Create a server.js file:
touch server.js
Add the following code to server.js:
const express = require('express');
const app = express();
const PORT = process.env.PORT || 8080;

app.get('/', (req, res) => {
  res.send('Hello from Kubernetes!');
});

app.listen(PORT, () => {
  console.log(`Server is running on port ${PORT}`);
});
Step 2: Dockerizing the Application
Next, we’ll containerize the Node.js application using Docker.
- Create a Dockerfile:
touch Dockerfile
Add the following content to Dockerfile:
# Use an official Node.js runtime as a base image
FROM node:18-alpine
# Set the working directory in the container
WORKDIR /usr/src/app
# Copy the package.json and install dependencies
COPY package*.json ./
RUN npm install
# Copy the rest of the application code
COPY . .
# Expose port 8080 for the application
EXPOSE 8080
# Start the application
CMD ["node", "server.js"]
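One detail worth adding: the COPY . . instruction copies everything in the project directory into the image, including your local node_modules folder, which can overwrite the dependencies that npm install just built inside the container. A .dockerignore file next to the Dockerfile keeps local artifacts out of the build context:

```text
# .dockerignore — keep local build artifacts out of the image
node_modules
npm-debug.log
.git
```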
- Build the Docker image:
docker build -t k8s-app .
- Run the Docker container locally to test it:
docker run -p 8080:8080 k8s-app
Visit http://localhost:8080 in your browser, and you should see “Hello from Kubernetes!”.
Step 3: Installing Minikube (Local Kubernetes Cluster)
Minikube is a tool that allows you to run Kubernetes locally. It is perfect for testing and development before moving to a cloud provider.
- Install Minikube:
Follow the official Minikube installation instructions for your operating system.
- Start a Minikube cluster:
minikube start
- Verify that Minikube is running:
kubectl get nodes
You should see one node listed as part of your local Kubernetes cluster.
Step 4: Creating Kubernetes Configuration Files
To deploy our application to Kubernetes, we need to define a few Kubernetes resources: a Deployment and a Service.
4.1 Create the Deployment File
A Deployment in Kubernetes ensures that the desired number of pod replicas are running your application at all times.
- Create a deployment.yaml file:
touch deployment.yaml
- Add the following content to deployment.yaml:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-app-deployment
spec:
  replicas: 3 # Number of application instances
  selector:
    matchLabels:
      app: k8s-app
  template:
    metadata:
      labels:
        app: k8s-app
    spec:
      containers:
        - name: k8s-app
          image: k8s-app:latest # The image you built earlier
          imagePullPolicy: Never # Use the locally loaded image; don't try to pull from a registry
          ports:
            - containerPort: 8080
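The Deployment works as-is, but Kubernetes can also be told how to check whether each pod is healthy. As a sketch, liveness and readiness probes could be added under the container entry in deployment.yaml (this assumes the app answers on /, as ours does; the timing values are illustrative):

```yaml
# Add under the container entry in deployment.yaml
livenessProbe:        # Restart the container if it stops responding
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:       # Only route traffic to pods that respond
  httpGet:
    path: /
    port: 8080
  initialDelaySeconds: 2
  periodSeconds: 5
```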
4.2 Create the Service File
A Kubernetes Service exposes your application to the network, allowing external users to access it.
- Create a service.yaml file:
touch service.yaml
- Add the following content to service.yaml:
apiVersion: v1
kind: Service
metadata:
  name: k8s-app-service
spec:
  type: NodePort
  selector:
    app: k8s-app
  ports:
    - protocol: TCP
      port: 8080
      targetPort: 8080
      nodePort: 30007 # Expose the app on this port (must be in the 30000-32767 range)
Step 5: Deploying the Application to Kubernetes
Now that we have our Deployment and Service files, we can deploy the application to Kubernetes.
- Load the Docker image into Minikube:
Minikube runs its own container runtime (inside a VM or container, depending on the driver), so it cannot see images in your local Docker daemon. Load your image into the cluster:
minikube image load k8s-app
- Apply the Deployment and Service:
kubectl apply -f deployment.yaml
kubectl apply -f service.yaml
- Verify the Deployment and Pods:
kubectl get deployments
kubectl get pods
You should see your application running as three replicas (pods) under the Deployment.
- Access the Application:
To access the application, run the following command to retrieve the Minikube IP address:
minikube ip
Open a browser and visit http://<minikube-ip>:30007. You should see “Hello from Kubernetes!” served by one of the pods. Alternatively, minikube service k8s-app-service --url prints a ready-to-use URL, which is handy with the Docker driver, where the node IP may not be directly reachable.
Step 6: Scaling the Application
One of the key benefits of Kubernetes is its ability to scale your application up or down based on traffic or other metrics.
- Scale the Deployment to 5 replicas:
kubectl scale deployment k8s-app-deployment --replicas=5
- Verify the scaling:
kubectl get deployments
kubectl get pods
You should now see five pods running your application.
Step 7: Setting Up Auto-Scaling (Optional)
Kubernetes can automatically scale your application based on CPU usage using the Horizontal Pod Autoscaler.
- Enable metrics server (required for auto-scaling):
minikube addons enable metrics-server
- Create an autoscaler:
kubectl autoscale deployment k8s-app-deployment --cpu-percent=50 --min=2 --max=10
This command creates a Horizontal Pod Autoscaler that will automatically scale the pods between 2 and 10 replicas, based on CPU usage.
- Check the autoscaler:
kubectl get hpa
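Note that the autoscaler can only compute CPU percentages if the container declares a CPU request; without one, kubectl get hpa reports <unknown> targets. As a sketch, a CPU request could be added under the container entry in deployment.yaml (100m is an illustrative value):

```yaml
# Add under the container entry in deployment.yaml
resources:
  requests:
    cpu: 100m
```

For version control, the kubectl autoscale command above can also be written declaratively as a manifest:

```yaml
# hpa.yaml — declarative equivalent of the kubectl autoscale command
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: k8s-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: k8s-app-deployment
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 50
```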
Step 8: Deploying to Cloud (AWS EKS or Google GKE)
Once you’re confident with your local Kubernetes deployment, you can easily extend it to cloud providers like AWS or Google Cloud. Both provide managed Kubernetes services: EKS for AWS and GKE for Google Cloud.
For AWS EKS:
- Set up AWS CLI and eksctl (see the official AWS documentation).
- Create an EKS cluster:
eksctl create cluster --name k8s-app-cluster --region us-west-2
- Deploy your application:
Once your EKS cluster is set up, you can apply the same deployment.yaml and service.yaml files with kubectl apply. Note that the cluster must be able to pull your image, so push it to a container registry such as Amazon ECR and point the image field in deployment.yaml at the pushed tag.
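Unlike Minikube, EKS cannot load a local image; the cluster pulls from a registry. Here is a hedged sketch of pushing the image to Amazon ECR, where <ACCOUNT_ID> and the us-west-2 region are placeholders you must replace, and the repository name k8s-app is an assumption:

```text
# Authenticate Docker to your ECR registry (account/region are placeholders)
aws ecr get-login-password --region us-west-2 | \
  docker login --username AWS --password-stdin <ACCOUNT_ID>.dkr.ecr.us-west-2.amazonaws.com

# Create a repository and push the image under a fully qualified tag
aws ecr create-repository --repository-name k8s-app --region us-west-2
docker tag k8s-app:latest <ACCOUNT_ID>.dkr.ecr.us-west-2.amazonaws.com/k8s-app:latest
docker push <ACCOUNT_ID>.dkr.ecr.us-west-2.amazonaws.com/k8s-app:latest
```

After pushing, update the image field in deployment.yaml to the fully qualified tag.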
For Google Cloud GKE:
- Install Google Cloud SDK (see the official Google Cloud documentation).
- Create a GKE cluster:
gcloud container clusters create k8s-app-cluster --zone us-central1-a
- Deploy your application:
Apply the same Kubernetes YAML files to your GKE cluster using kubectl, after pushing your image to a registry such as Google Artifact Registry and updating the image field to match.
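One more cloud-specific note: a NodePort Service is awkward to reach from the internet, and managed clusters can provision a cloud load balancer instead. A sketch of a LoadBalancer variant of service.yaml, keeping the same name and selector so it replaces the NodePort version:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: k8s-app-service
spec:
  type: LoadBalancer # The cloud provider provisions an external load balancer
  selector:
    app: k8s-app
  ports:
    - protocol: TCP
      port: 80         # External port on the load balancer
      targetPort: 8080 # Container port
```

Once the load balancer is provisioned, kubectl get service k8s-app-service shows its external IP or hostname.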
Conclusion
In this tutorial, you learned how to deploy and scale a web application using Kubernetes, starting with a local Minikube cluster and extending the concepts to cloud-based deployments like AWS EKS and Google GKE. Kubernetes provides a robust and scalable platform for containerized applications, with features like automatic scaling, self-healing, and service discovery.
As a next step, you can explore advanced Kubernetes topics such as configuring Ingress controllers, setting up persistent storage, and deploying stateful applications.
Feel free to copy the code and configurations to build and scale your own cloud-native applications!