We will containerize the KeystoneJS application and host it on Google Kubernetes Engine (GKE). GKE is powered by the container management system, Kubernetes. Containers are built to do one specific task, and so we'll separate the application and the database as we did for App Engine.
The MongoDB container will host the MongoDB database, with the data stored on external disks. The data within a container is transient, so we need an external disk to safely store the MongoDB data. The App Container includes a Node.js runtime that will run our KeystoneJS application.
It will communicate with the Mongo Container and also expose itself to the end user.
Note
You'll be using the following services, among others, for this recipe:
- Google Kubernetes Engine
- Google Compute Engine (GCE)
- Google Container Registry
The following are the initial setup verification steps to be taken before the recipe can be executed:
- Create or select a GCP project.
- Enable billing and enable the default APIs (some APIs such as BigQuery, storage, monitoring, and a few others are enabled automatically).
- Verify that Google Cloud SDK is installed on your development machine.
- Verify that the default project is set properly.
- Install Docker on your development machine.
- Install kubectl, the command-line tool for running commands against Kubernetes clusters:
$ gcloud components install kubectl
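Before moving on, it can help to sanity-check the toolchain against the setup steps above; these are standard verification commands, and the output shown depends on your own configuration:

```
$ gcloud config get-value project   # should print your default project ID
$ docker --version                  # confirms Docker is installed
$ kubectl version --client          # confirms kubectl is on the PATH
```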
The steps involved are:
- Creating a cluster on GKE to host the containers
- Containerizing the KeystoneJS application
- Creating a replicated deployment for the application and MongoDB
- Creating a load-balanced service to route traffic to the deployed application
The container engine cluster runs on top of GCE. For this recipe, we'll create a two-node cluster which will be internally managed by Kubernetes:
- We'll create the cluster using the following command:
$ gcloud container clusters create mysite-cluster --scopes "cloud-platform" --num-nodes 2 --zone us-east1-c
The gcloud command automatically generates a kubeconfig entry that enables us to use kubectl on the cluster.
- Using kubectl, verify that you have access to the created cluster:
$ kubectl get nodes
The gcloud command is used to manage resources on the Google Cloud project, and kubectl is used to manage resources on the Container Engine/Kubernetes cluster.
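If kubectl cannot reach the cluster (for example, when working from a second machine), the kubeconfig entry can be regenerated with a standard gcloud command, shown here with the cluster name and zone used in this recipe:

```
$ gcloud container clusters get-credentials mysite-cluster --zone us-east1-c
$ kubectl cluster-info   # confirms the master endpoint is reachable
```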
- Clone the repository in your development space:
$ git clone https://github.com/legorie/gcpcookbook.git
- Navigate to the directory where the mysite application is stored:
$ cd gcpcookbook/Chapter01/mysite-gke
- With your favorite editor, create a file named .env in the mysite folder:
PORT=8080
COOKIE_SECRET=<a very long string>
MONGO_URI=mongodb://mongo/mysite
A custom port of 8080 is used for the KeystoneJS application. This port will be mapped to port 80 later in the Kubernetes service configuration. Similarly, mongo will be the name of the load-balanced MongoDB service that will be created later.
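To make the role of these variables concrete, the sketch below mimics how a dotenv-style loader exports them into the application's environment. This is a simplified stand-in, not the actual KeystoneJS loader, and the COOKIE_SECRET value is a placeholder:

```shell
# Write the .env file as in the recipe (placeholder secret for illustration)
cat > .env <<'EOF'
PORT=8080
COOKIE_SECRET=replace-with-a-very-long-random-string
MONGO_URI=mongodb://mongo/mysite
EOF

# Export every KEY=VALUE pair in .env into the environment,
# the way a dotenv-style loader would before the app starts
set -a
. ./.env
set +a

echo "App port:  ${PORT}"
echo "Mongo URI: ${MONGO_URI}"
```

Inside the cluster, the mongo hostname in MONGO_URI is resolved by Kubernetes Service DNS to the MongoDB service created later in the recipe.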
- The Dockerfile in the folder is used to create the application's Docker image. First, it pulls a Node.js image from the registry, then it copies the application code into the container, installs the dependencies, and starts the application. Navigate to
/Chapter01/mysite-gke/Dockerfile
:
# https://github.com/GoogleCloudPlatform/nodejs-getting-started/blob/master/optional-container-engine/Dockerfile
# Dockerfile extending the generic Node image with application files for a
# single application.
FROM gcr.io/google_appengine/nodejs
# Check to see if the version included in the base runtime satisfies
# '>=0.12.7', if not then do an npm install of the latest available
# version that satisfies it.
RUN /usr/local/bin/install_node '>=0.12.7'
COPY . /app/
# You have to specify "--unsafe-perm" with npm install
# when running as root. Failing to do this can cause
# install to appear to succeed even if a preinstall
# script fails, and may have other adverse consequences
# as well.
# This command will also cat the npm-debug.log file after the
# build, if it exists.
RUN npm install --unsafe-perm || \
  ((if [ -f npm-debug.log ]; then \
      cat npm-debug.log; \
    fi) && false)
CMD npm start
- The .dockerignore file contains the file paths which will not be included in the Docker container.
- Build the Docker image:
$ docker build -t gcr.io/<Project ID>/mysite .
Note
Troubleshooting:
- Error: Cannot connect to the Docker daemon. Is the Docker daemon running on this host?
- Solution: Add the current user to the Docker group and restart the shell. Create a new Docker group if needed.
- You can list the created Docker image:
$ docker images
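Before pushing, the image can optionally be smoke-tested on the development machine. One possible approach, sketched below, is to run a throwaway MongoDB container and link it under the hostname mongo that the .env file expects; the container names here are illustrative:

```
$ docker run -d --name mongo-test mongo          # throwaway MongoDB for the test
$ docker run --rm -p 8080:8080 --link mongo-test:mongo \
    gcr.io/<Project ID>/mysite                   # then browse http://localhost:8080
$ docker rm -f mongo-test                        # clean up the test database
```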
- Push the created image to Google Container Registry so that our cluster can access this image:
$ gcloud docker -- push gcr.io/<Project ID>/mysite
- To create an external disk, we'll use the following command:
$ gcloud compute disks create --size 1GB mongo-disk \ --zone us-east1-c
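Note that a persistent disk can only be attached to instances in its own zone, which is why the disk is created in us-east1-c, the same zone as the cluster. The disk can be verified with a standard describe command:

```
$ gcloud compute disks describe mongo-disk --zone us-east1-c
```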
- We'll first create the MongoDB deployment because the application expects the database's presence. A deployment object creates the desired number of pods indicated by our replica count. Notice the label given to the pods that are created. The Kubernetes system manages the pods, the deployment, and their linking to their corresponding services via label selectors. Navigate to
/Chapter01/mysite-gke/db-deployment.yml
:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mongo-deployment
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: mongo
    spec:
      containers:
      - image: mongo
        name: mongo
        ports:
        - name: mongo
          containerPort: 27017
          hostPort: 27017
        volumeMounts:
        - name: mongo-persistent-storage
          mountPath: /data/db
      volumes:
      - name: mongo-persistent-storage
        gcePersistentDisk:
          pdName: mongo-disk # The created disk name
          fsType: ext4
Note
You can refer to the following link for more information on Kubernetes objects: https://kubernetes.io/docs/user-guide/walkthrough/k8s201/.
- Create the MongoDB deployment:
$ kubectl create -f db-deployment.yml
- You can view the deployments using the command:
$ kubectl get deployments
- The pods created by the deployment can be viewed using the command:
$ kubectl get pods
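If a pod stays in a Pending or CrashLoopBackOff state, the usual next steps are to inspect its events and logs; the pod name below is a placeholder for one shown by kubectl get pods:

```
$ kubectl describe pod <mongo-pod-name>   # events: scheduling, image pull, disk attach
$ kubectl logs <mongo-pod-name>           # container output from mongod
```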
- Next, we'll create a service to expose the MongoDB deployment to the application pods. Navigate to
/Chapter01/mysite-gke/db-service.yml
:
apiVersion: v1
kind: Service
metadata:
  labels:
    name: mongo
  name: mongo
spec:
  ports:
  - port: 27017
    targetPort: 27017
  selector:
    name: mongo # The key-value pair is matched with the label on the deployment
- The kubectl command to create the service is:
$ kubectl create -f db-service.yml
- You can view the status of the creation using the commands:
$ kubectl get services
$ kubectl describe service mongo
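Because the service name doubles as its cluster DNS name, the application pods will reach the database simply as mongo. This can be confirmed from a throwaway pod; busybox is used here only as a convenient small image:

```
$ kubectl run dns-test --rm -it --restart=Never --image=busybox \
    -- nslookup mongo   # should resolve to the mongo service's cluster IP
```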
- We'll repeat the same process for the Node.js application. For the deployment, we'll choose to have two replicas of the application pod to serve the web requests. Navigate to /Chapter01/mysite-gke/web-deployment.yml and update the <Project ID> in the image item:
apiVersion: apps/v1beta1
kind: Deployment
metadata:
  name: mysite-app
  labels:
    name: mysite
spec:
  replicas: 2
  template:
    metadata:
      labels:
        name: mysite
    spec:
      containers:
      - image: gcr.io/<Project ID>/mysite
        name: mysite
        ports:
        - name: http-server
          containerPort: 8080 # KeystoneJS app is exposed on port 8080
- Use kubectl to create the deployment:
$ kubectl create -f web-deployment.yml
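The rollout can be watched until both replicas are available; the label selector below matches the name: mysite label set in the deployment:

```
$ kubectl rollout status deployment/mysite-app   # blocks until the replicas are ready
$ kubectl get pods -l name=mysite                # lists only the application pods
```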
- Finally, we'll create the service to manage the application pods. Navigate to /Chapter01/mysite-gke/web-service.yml:
apiVersion: v1
kind: Service
metadata:
  name: mysite
  labels:
    name: mysite
spec:
  type: LoadBalancer
  ports:
  - port: 80 # The application is exposed to the external world on port 80
    targetPort: http-server
    protocol: TCP
  selector:
    name: mysite
To create the service, execute the following command:
$ kubectl create -f web-service.yml
$ kubectl get services
NAME         CLUSTER-IP      EXTERNAL-IP      PORT(S)        AGE
kubernetes   10.27.240.1     <none>           443/TCP        49m
mongo        10.27.246.117   <none>           27017/TCP      30m
mysite       10.27.240.33    1x4.1x3.38.164   80:30414/TCP   2m
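Once the EXTERNAL-IP column is populated, the site can be fetched directly; the address below is a placeholder for the IP shown by kubectl get services:

```
$ curl -I http://<EXTERNAL-IP>/   # expect an HTTP response from KeystoneJS via port 80
```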
Note
After the service is created, the External IP will be unavailable for a short period; you can retry after a few seconds. The Google Cloud Console has a rich interface to view the cluster components, in addition to the Kubernetes dashboard. In case of any errors, you can view the logs and verify the configurations on the Console. The Workloads submenu of GKE provides details of the Deployments, and the Discovery & load balancing submenu lists all the services created.