How to run your first application on Kubernetes

Let’s get on with running your very first container, nginx (http://nginx.org/), an open source reverse proxy server, load balancer, and web server. You will create a simple nginx application and expose it to the outside world.

We will use the official Docker image of nginx as an example. The image is provided on Docker Hub (https://hub.docker.com/_/nginx/), and also on the Docker Store (https://store.docker.com/images/nginx).

Before you run your first container on Kubernetes, it’s good practice to check that your cluster is in a healthy state.

Step 1: Check the master daemons. Verify that the Kubernetes components are running:

kubectl get cs
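The component names depend on your setup, but a healthy cluster should report output along these lines:

NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}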

Step 2: Check the status of the Kubernetes master:

kubectl cluster-info
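The addresses and ports differ per cluster, but the output should look something like the following:

Kubernetes master is running at https://<master-ip>:6443
KubeDNS is running at https://<master-ip>:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy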

Step 3: Check whether all the nodes are ready:

kubectl get nodes
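Node names, roles, and versions will vary with your cluster, but every node should report a Ready status, for example:

NAME      STATUS    ROLES     AGE       VERSION
node-1    Ready     <none>    1d        v1.10.0
node-2    Ready     <none>    1d        v1.10.0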

Deploying NGINX on a Kubernetes cluster

Step 1: On the Kubernetes master, we can use kubectl run to create a given number of containers. The Kubernetes master then schedules the pods onto the nodes. The general command format is as follows:

kubectl run <deployment name> --image=<image name> --replicas=<number of replicas> [--port=<exposed port>]

For example, to run a deployment with two replicas of the nginx image and expose container port 80:

kubectl run my-first-nginx --image=nginx --replicas=2 --port=80

  • The deployment name (my-first-nginx) cannot be duplicated.
  • More generally, resources (pods, services, deployments, and so on) within a single Kubernetes namespace must have unique names.
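Note that on newer kubectl releases (v1.18 and later), kubectl run creates a single pod rather than a deployment, and the --replicas flag has been removed. On those versions, a roughly equivalent sequence is:

kubectl create deployment my-first-nginx --image=nginx
kubectl scale deployment my-first-nginx --replicas=2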

Step 2: Let’s move on and check the current status of all the pods with kubectl get pods. The pods’ status will normally stay Pending for a while, since it takes some time for the nodes to pull the image from the registry:

kubectl get pods
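Right after creation, the pods will likely still be starting up; a little later they should all be Running. The pod names include generated hashes, so yours will differ:

NAME                              READY     STATUS              RESTARTS   AGE
my-first-nginx-68c55bf5df-9r2xm   0/1       ContainerCreating   0          10s
my-first-nginx-68c55bf5df-hg9gq   0/1       ContainerCreating   0          10s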

Step 3: You can also check the details of the deployment to see whether all the pods are ready:

kubectl get deployments
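The column layout varies with the kubectl version, but the deployment should report all of its replicas as available, something like:

NAME             READY     UP-TO-DATE   AVAILABLE   AGE
my-first-nginx   2/2       2            2           2m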

Exposing the port for external access

We might also want to create an external IP address for the nginx deployment. On cloud providers that support external load balancers (such as Google Compute Engine), using the LoadBalancer service type will provision a load balancer for external access.

Even if you’re not running on a platform that supports an external load balancer, you can still expose the port by creating a Kubernetes service as follows. We’ll describe how to access it externally later:

kubectl expose deployment my-first-nginx --port=80 --type=LoadBalancer
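If your environment has no external load balancer integration at all, a common alternative (not needed for the steps below) is to expose the service with the NodePort type instead:

kubectl expose deployment my-first-nginx --port=80 --type=NodePort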

We can check the status of the service we just created:

kubectl get service
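The ClusterIP and node port are assigned by the cluster, so the values below are examples only; the output will look something like this:

NAME             TYPE           CLUSTER-IP      EXTERNAL-IP   PORT(S)        AGE
kubernetes       ClusterIP      10.96.0.1       <none>        443/TCP        1d
my-first-nginx   LoadBalancer   10.105.164.40   <pending>     80:31621/TCP   1m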

You may find an additional service named kubernetes; this is the built-in service for reaching the Kubernetes API server from within the cluster. The pending state of the my-first-nginx service’s external IP indicates that it is waiting for a public IP to be assigned by the cloud provider.

Let’s take a closer look at the service using kubectl describe. The service we created has the type LoadBalancer and dispatches traffic to two endpoints, 192.168.79.9 and 192.168.79.10, on port 80:

kubectl describe service my-first-nginx
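A trimmed describe output, using example addresses consistent with the ones above, looks roughly as follows:

Name:              my-first-nginx
Namespace:         default
Selector:          run=my-first-nginx
Type:              LoadBalancer
IP:                10.105.164.40
Port:              <unset>  80/TCP
TargetPort:        80/TCP
NodePort:          <unset>  31621/TCP
Endpoints:         192.168.79.9:80,192.168.79.10:80
Session Affinity:  None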

The port here is an abstract service port, which allows any other resource within the cluster to access the service. The nodePort indicates the external port that allows external access. The targetPort is the port on which the container accepts traffic; by default, it is the same as the service port.

External access reaches the service through the nodePort. The service acts as a load balancer, dispatching the traffic to the pods using port 80. Each pod then passes the traffic on to the corresponding container using targetPort 80.

From any node or the master, once the pod network is set up, you should be able to access the nginx service using the ClusterIP 10.105.164.40 on port 80:

curl 10.105.164.40

The result will be the same if we curl the target port of one of the pods directly:

curl 192.168.79.9:80

If you’d like to try out external access, use your browser to access the external IP address. Please note that the external IP address depends on which environment you’re running in.

In Google Compute Engine, you can access the service via its external IP (shown by kubectl get service once the load balancer has been provisioned), given a proper firewall rule setting:

curl <external IP>

In a custom environment, such as an on-premises data center, you can go through the IP address of any node to access the service:

curl http://<nodeIP>:<nodePort>

Stopping the application

We can stop the application by deleting the deployment and the service. Before doing so, we suggest you read through the following commands first to understand how they work:

Stop the deployment named my-first-nginx:

kubectl delete deployment my-first-nginx

Stop the service named my-first-nginx:

kubectl delete service my-first-nginx
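To verify that both resources are gone, you can query them again; each command should now return a NotFound error:

kubectl get deployment my-first-nginx
kubectl get service my-first-nginx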
