Google Cloud Kubernetes Tutorial
Introduction
In the Internet era, instead of setting up servers on our own, we usually deploy our web applications to cloud service providers. Google Cloud provides several hosting options for our websites, including App Engine and Kubernetes Engine. For simple applications, App Engine is desirable because Google Cloud manages everything for you, so you don't have to worry about scalability. However, when an application becomes sophisticated and requires complicated environment setups, Kubernetes Engine becomes a good choice.
In this blog post, I am going to present a self-contained tutorial for setting up a containerized application on Google Cloud Kubernetes Engine.
Dependencies
- Google Cloud SDK
- Docker
- Git
Google Cloud Shell in the Google Cloud Platform Console actually has all the dependencies installed, so we could run our commands there directly.
Tutorial
Application
In this project, we are going to use Google's official application as an example. In case Google changes its official example, I have forked the repository to my account.
```shell
# Git
```
The app is a simple Flask app that listens on port 8080 by default and prints "Hello World". In app.py, we have
```python
import os
```
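The rest of app.py is truncated above. Based on the description in this post (a Flask app that listens on port 8080 and prints "Hello World"), a minimal sketch might look like the following; the function names and exact greeting string are assumptions, not necessarily the exact upstream code.

```python
import os

from flask import Flask

app = Flask(__name__)


@app.route("/")
def hello():
    # Respond with the greeting described in the post.
    return "Hello World!\n"


def main():
    # Listen on all interfaces; default to port 8080 as the post states.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```

Calling `main()` starts the development server; in the container, the Dockerfile's start command would invoke this script.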
The Dockerfile sets up the most basic environment the app requires and starts the app.
```dockerfile
# Use the official Python image.
```
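The rest of the Dockerfile is truncated above. A minimal sketch consistent with its first comment line might look like this; the Python version, file layout, and requirements.txt name are assumptions:

```dockerfile
# Use the official Python image.
FROM python:3-slim

# Copy the application code into the image.
WORKDIR /app
COPY . .

# Install the app's dependencies (Flask).
RUN pip install --no-cache-dir -r requirements.txt

# The app listens on port 8080 in the container.
EXPOSE 8080

# Start the app.
CMD ["python", "app.py"]
```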
We could almost treat the app as a black box for now; we just have to remember that this app will run and listen on port 8080 in the container.
Docker Container
We set up some configuration variables.
```shell
$ gcloud config set project [PROJECT_ID]
```
In our case, we use
```shell
$ gcloud config set project leimao-app
```
We first have to build the Docker image for our application on our local machine.
```shell
# Set an environment variable for project_id for convenience.
$ export PROJECT_ID=$(gcloud config get-value project)
# Build the image, tagged for the Google Cloud container registry.
$ docker build -t gcr.io/${PROJECT_ID}/hello-app:v1 .
```
The next step is to upload the Docker image to the Google Cloud container registry. To do that, we need to configure Docker to authenticate with the registry.
```shell
$ gcloud auth configure-docker
```
Upload the Docker image to the Google Cloud container registry.
```shell
$ docker push gcr.io/${PROJECT_ID}/hello-app:v1
```
To check that the Docker image has been uploaded successfully:

```shell
# Check the image
$ gcloud container images list --repository=gcr.io/${PROJECT_ID}
```
Create Kubernetes Container Cluster
The best way to configure a container cluster is actually to go to https://console.cloud.google.com/kubernetes, select the right configurations, and click command line to get the command line for creating the cluster.
Here, for simplicity, we create a cluster of 3 nodes and use the default values for the rest of the configurations. Note that all 3 nodes are worker nodes; the Kubernetes control plane (master) is managed by Google Kubernetes Engine itself and does not count toward --num-nodes. This step may take a while.
```shell
$ gcloud container clusters create hello-cluster --num-nodes=3
```
To check the clusters available in the account.
```shell
$ gcloud container clusters list
```
To check the VM instances created.
```shell
$ gcloud compute instances list
```
To check the node information.
```shell
$ kubectl describe nodes
```
Deploy Application on Cluster
You may have created a Kubernetes Engine cluster previously, or you may want to choose one of many existing clusters. In either case, we need to specify the cluster to which we want to deploy our app container.
```shell
$ gcloud container clusters get-credentials hello-cluster
```
To deploy our application with the app name hello-web, using the image we just pushed to the registry, we create a yaml file named deployment.yaml.
```yaml
apiVersion: apps/v1
```
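The rest of deployment.yaml is truncated above. A minimal sketch consistent with the rest of the post (app name hello-web, image hello-app:v1, container port 8080) might look like the following; the label names and the PORT environment variable are assumptions:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 1
  selector:
    matchLabels:
      app: hello-web
  template:
    metadata:
      labels:
        app: hello-web
    spec:
      containers:
      - name: hello-app
        image: gcr.io/leimao-app/hello-app:v1
        ports:
        - containerPort: 8080
        env:
        # Illustrative only: the post says an environment variable is
        # needed for the container to start correctly.
        - name: PORT
          value: "8080"
```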
Then we run
```shell
$ kubectl apply -f deployment.yaml
```
The app would then be deployed. In Kubernetes, the running instances of an app are called Pods; they are the smallest deployable units.
To check the deployment status,
```shell
$ kubectl get deployment
```
To check the pods status,
```shell
$ kubectl get pods
```
Sometimes the STATUS will show abnormal values, which means that something is going wrong in your container. You would then have to delete the deployment by
```shell
$ kubectl delete deployment hello-web
```
In some tutorials from Google Cloud, they suggest creating the deployment using command line arguments rather than a yaml file.
```shell
$ kubectl create deployment hello-web --image=gcr.io/${PROJECT_ID}/hello-app:v1
```
I think this method is inferior because you cannot configure many things in detail; for example, you would not be able to pass an environment variable to the container. In our case, the environment variable is necessary for the container to start correctly.
Expose Application to Internet
Although the application has been deployed, it has not been exposed to the internet, so outsiders are not able to access it.
To expose the application to the internet, again we would be using a yaml file. Create a service.yaml.
```yaml
# The hello service provides a load-balancing proxy over the hello-app
```
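The rest of service.yaml is truncated above. A minimal sketch consistent with its first comment line and the equivalent kubectl expose command shown later (service name hello, load balancer port 80, target port 8080) might look like this; the selector label is an assumption:

```yaml
# The hello service provides a load-balancing proxy over the hello-app
# pods and exposes them through an external IP.
apiVersion: v1
kind: Service
metadata:
  name: hello
spec:
  type: LoadBalancer
  selector:
    app: hello-web
  ports:
  # Port 80 on the load balancer forwards to port 8080 in the container.
  - port: 80
    targetPort: 8080
```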
```shell
$ kubectl apply -f service.yaml
```
Check the services. Allocating an external IP might be slow.
```shell
$ kubectl get services
```
Check the services again after a while.
```shell
$ kubectl get services
```
Equivalently, this could be done using command line arguments.
```shell
$ kubectl expose deployment hello-web --type=LoadBalancer --port 80 --target-port 8080
```
The --port flag specifies the port number configured on the load balancer, and the --target-port flag specifies the port number that the hello-app container is listening on.
Check whether the application has been exposed.
```shell
$ curl 35.202.208.161
```
Alternatively, open our web browser and go to http://35.202.208.161/ or http://35.202.208.161:80/. Note that currently this application only supports the http protocol.
Upgrade Kubernetes Cluster
We would like to add two additional replicas (Pods running the same application) to the deployment. We modify the deployment.yaml file.
```yaml
apiVersion: apps/v1
```
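The modified deployment.yaml is truncated above. The relevant change would be the replicas field in the Deployment spec; a sketch of just that fragment (assuming the file otherwise keeps the structure described earlier):

```yaml
spec:
  # 1 original replica + 2 additional replicas = 3 in total.
  replicas: 3
```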
To apply the changes,
```shell
$ kubectl apply -f deployment.yaml
```
To check the deployment status,
```shell
$ kubectl get deployment hello-web
```
To check the pod status,
```shell
$ kubectl get pods
```
But all the replicas still share one external IP.
```shell
$ kubectl get service
```
Now, you have multiple instances of your application running independently of each other and you can use the kubectl scale command to adjust the capacity of your application.
The load balancer you provisioned in the previous step will start routing traffic to these new replicas automatically.
Deploy New Version of Application
We would like to change our "Hello World" application to a "Hello Underworld" application, so we are going to change app.py.
```python
import os
```
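The rest of the v2 app.py is truncated above. Based on the description, the only change from v1 would be the greeting string; a minimal sketch (the function names and exact string are assumptions, following the Flask structure described earlier):

```python
import os

from flask import Flask

app = Flask(__name__)


@app.route("/")
def hello():
    # The only change from v1: the greeting string.
    return "Hello Underworld!\n"


def main():
    # Same server setup as v1: listen on all interfaces, port 8080 by default.
    app.run(host="0.0.0.0", port=int(os.environ.get("PORT", 8080)))
```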
Build and upload the new container.
```shell
$ docker build -t gcr.io/${PROJECT_ID}/hello-app:v2 .
$ docker push gcr.io/${PROJECT_ID}/hello-app:v2
```
We also change our deployment.yaml to use the new container image.
```yaml
apiVersion: apps/v1
```
To apply the changes,
```shell
$ kubectl apply -f deployment.yaml
```
We could check that the update was successful.
```shell
$ curl 35.202.208.161
```
Clean Up Application
To delete the service,
```shell
$ kubectl delete service hello
```
We could see the service was deleted.
```shell
$ kubectl get service
```
But the container cluster instance was still there.
```shell
$ gcloud container clusters list
```
We delete the container cluster.
```shell
$ gcloud container clusters delete hello-cluster
```
The Docker images were still in the container registry. We check the container image information.
```shell
$ gcloud container images list --repository=gcr.io/${PROJECT_ID}
```
Check the tag information.
```shell
$ gcloud container images list-tags gcr.io/leimao-app/hello-app
```
We delete all the images related to hello-app.
```shell
$ gcloud container images delete gcr.io/${PROJECT_ID}/hello-app:v1 gcr.io/${PROJECT_ID}/hello-app:v2
```
Now there are no images for hello-app in the container registry.
```shell
$ gcloud container images list --repository=gcr.io/${PROJECT_ID}
```
Final Remarks
It seems that it is more desirable to use yaml files to configure the Kubernetes settings instead of using command line arguments.
References
Google Cloud Kubernetes Tutorial
https://leimao.github.io/blog/Google-Cloud-Kubernetes-Tutorial/