10-18-2018, 07:42 AM
Run and Scale a Distributed Crossword Puzzle App with CI/CD on Kubernetes (Part 3)
<div style="margin: 5px 5% 10px 5%;"><img src="http://www.sickgaming.net/blog/wp-content/uploads/2018/10/run-and-scale-a-distributed-crossword-puzzle-app-with-ci-cd-on-kubernetes-part-3.png" width="256" height="256" title="" alt="" /></div><div><p><span><span>In </span><span>Part 2</span><span> of our series, we deployed a Jenkins pod into our Kubernetes cluster, and used Jenkins to set up a CI/CD pipeline that automated building and deploying our containerized Hello-Kenzan application in Kubernetes. </span></span></p>
<p><span><span>In Part 3, we are going to set aside the Hello-Kenzan application and get to the main event: running our Kr8sswordz Puzzle application. We will showcase the built-in UI functionality to scale backend service pods up and down using the Kubernetes API, and also simulate a load test. We will also touch on showing caching in etcd and persistence in MongoDB. </span></span></p>
<p><span><span>Before we start the install, it’s helpful to take a look at the pods we’ll run as part of the Kr8sswordz Puzzle app:</span></span></p>
<ul>
<li>
<p><span><span>kr8sswordz</span><span> – A React container with our Node.js frontend UI. </span></span></p>
</li>
<li>
<p><span><span>puzzle</span><span> – The primary backend service that handles submitting and getting crossword answers, persisting them in MongoDB and caching them in etcd. </span></span></p>
</li>
<li>
<p><span><span>mongo</span><span> – A MongoDB container for persisting crossword answers. </span></span></p>
</li>
<li>
<p><span><span>etcd</span><span> – An etcd cluster for caching crossword answers (this is separate from the etcd cluster used by the K8s Control Plane). </span></span></p>
</li>
<li>
<p><span><span>monitor-scale</span><span> – A backend service that handles scaling the </span><span>puzzle</span><span> service up and down. This service also interacts with the UI by broadcasting WebSocket messages. </span></span></p>
</li>
</ul>
<p><span><span>We will go into the main service endpoints and architecture in more detail after running the application. For now, let’s get going! </span></span></p>
<p><em>Read all the articles in the series:</em></p>
<div>
<div>
<table>
<tbody>
<tr>
<td><span><img alt="3di6imeKV7hPtEx3cDcZM3dUG6aW4CWOPmdGOIFA" height="20" src="http://www.sickgaming.net/blog/wp-content/uploads/2018/10/run-and-scale-a-distributed-crossword-puzzle-app-with-ci-cd-on-kubernetes-part-3.png" width="20" /></span></td>
<td>
<p><span><span>This tutorial only runs locally in Minikube and will not work on the cloud. You’ll need a computer running an up-to-date version of Linux or macOS. Optimally, it should have 16 GB of RAM. Minimally, it should have 8 GB of RAM. For best performance, reboot your computer and keep the number of running apps to a minimum. </span></span></p>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<h2><span><span>Running the Kr8sswordz Puzzle App</span></span></h2>
<p><span><span>First make sure you’ve run through the steps in </span><span>Part 1</span><span> and </span><span>Part 2</span><span>, in which we set up our image repository and Jenkins pods—you will need these to proceed with Part 3 (to do so quickly, you can run the part1 and part2 </span><a href="https://docs.google.com/document/d/1oB6E036-UHJomLr3Na6Le63_w7fkaMaTucrWMVrYHeA/edit#heading=h.vuj44njqkr03"><span>automated scripts detailed below</span></a><span>). If you previously stopped Minikube, you’ll need to start it up again. Enter the following terminal command, and wait for the cluster to start:</span></span></p>
<pre>
<span><span>minikube start</span></span></pre>
<p><span><span>You can check the cluster status and view all the pods that are running. </span></span></p>
<pre>
kubectl cluster-info
kubectl get pods --all-namespaces</pre>
<p><span><span>Make sure the </span><strong><span>registry</span></strong><span> and </span><strong><span>jenkins</span></strong><span> pods are up and running. </span></span></p>
<div>
<table>
<tbody>
<tr>
<td><span><img alt="goO2T3gv5m_ehnoPadbP8Eww76Kgh9oUzsSGD7v9" height="20" src="http://www.sickgaming.net/blog/wp-content/uploads/2018/10/run-and-scale-a-distributed-crossword-puzzle-app-with-ci-cd-on-kubernetes-part-3.png" width="20" /></span></td>
<td>
<p><span><span>So far we have been creating deployments directly using K8s manifests, and have not yet used </span><a href="https://helm.sh/"><span>Helm</span></a><span>. Helm is a package manager that deploys a Chart (or package) onto a K8s cluster with all the resources and dependencies needed for the application. Underneath, the chart generates Kubernetes deployment manifests for the application using templates that replace environment configuration values. Charts are stored in a repository and versioned with releases so that cluster state can be maintained. </span></span><br class="kix-line-break" /><span><span>Helm is very powerful because it allows you to templatize, version, reuse, and share the deployments you create for Kubernetes. See </span><a href="https://hub.kubeapps.com/"><span>https://hub.kubeapps.com/</span></a><span> for a look at some of the open source charts available. We will be using Helm to install an etcd operator directly onto our cluster using a pre-built chart. </span></span></p>
</td>
</tr>
</tbody>
</table>
</div>
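<p><span><span>To make the templating idea concrete, here is a minimal sketch of what a chart's deployment template might look like. The names and values are illustrative only, not taken from the chart we install below.</span></span></p>
<pre>
# templates/deployment.yaml (excerpt) -- values are filled in from values.yaml or --set flags
spec:
  containers:
  - name: {{ .Chart.Name }}
    image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"</pre>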
<p><span><span>1. Initialize Helm. This will install Tiller (Helm’s server) into our Kubernetes cluster. </span></span></p>
<pre>
<span><span>helm init --wait --debug; kubectl rollout status deploy/tiller-deploy -n kube-system</span></span></pre>
<p><span><span>2. We will deploy an </span><strong><span>etcd operator</span></strong><span> onto the cluster using a Helm Chart. </span></span></p>
<pre>
<span><span>helm install stable/etcd-operator --version 0.8.0 --name etcd-operator --debug --wait</span></span></pre>
<div>
<div>
<table>
<tbody>
<tr>
<td><span><img alt="goO2T3gv5m_ehnoPadbP8Eww76Kgh9oUzsSGD7v9" height="20" src="http://www.sickgaming.net/blog/wp-content/uploads/2018/10/run-and-scale-a-distributed-crossword-puzzle-app-with-ci-cd-on-kubernetes-part-3.png" width="20" /></span></td>
<td>
<p><span><span>An </span><span>operator</span><span> is a custom controller for managing complex or stateful applications. As a separate watcher, it monitors the state of the application, and acts to align the application with a given specification as events occur. In the case of etcd, as nodes terminate, the operator will bring up replacement nodes using snapshot data. </span></span></p>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<p><span><span>3. Deploy the </span><strong><span>etcd cluster</span></strong><span> and K8s Services for accessing the cluster.</span></span></p>
<pre>
kubectl create -f manifests/etcd-cluster.yaml
kubectl create -f manifests/etcd-service.yaml</pre>
<p><span><span>You can see these new pods by entering </span><span>kubectl get pods</span><span> in a separate terminal window. The cluster runs as three pod instances for redundancy.</span></span></p>
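<p><span><span>For reference, the custom resource the operator watches looks roughly like the sketch below; the operator reads the requested size and keeps that many members running (the exact contents of manifests/etcd-cluster.yaml may differ).</span></span></p>
<pre>
apiVersion: "etcd.database.coreos.com/v1beta2"
kind: EtcdCluster
metadata:
  name: example-etcd-cluster
spec:
  size: 3            # the operator maintains three members for redundancy
  version: "3.2.13"</pre>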
<p><span><span>4. The crossword application is a multi-tier application whose services depend on each other. We will create three K8s Services so that the applications can communicate with one another.</span></span></p>
<pre>
kubectl apply -f manifests/all-services.yaml</pre>
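<p><span><span>Each Service gives its pods a stable DNS name and balances requests across them. As a rough sketch (the port and labels are illustrative; see manifests/all-services.yaml for the real definitions), one of the three Services might look like this:</span></span></p>
<pre>
apiVersion: v1
kind: Service
metadata:
  name: puzzle
spec:
  selector:
    app: puzzle      # routes traffic to any pod carrying this label
  ports:
  - port: 3000       # illustrative port</pre>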
<p><span><span>5. Now we’re going to walk through an initial build of the </span><strong><span>monitor-scale</span></strong><span> application.</span></span></p>
<pre>
docker build -t 127.0.0.1:30400/monitor-scale:`git rev-parse --short HEAD` \
    -f applications/monitor-scale/Dockerfile applications/monitor-scale</pre>
<div>
<div>
<table>
<tbody>
<tr>
<td><span><img alt="goO2T3gv5m_ehnoPadbP8Eww76Kgh9oUzsSGD7v9" height="20" src="http://www.sickgaming.net/blog/wp-content/uploads/2018/10/run-and-scale-a-distributed-crossword-puzzle-app-with-ci-cd-on-kubernetes-part-3.png" width="20" /></span></td>
<td>
<p><span><span>To simulate a real-life scenario, we are using the Git commit ID to tag all our service images, as shown in this command (</span><span>git rev-parse --short HEAD</span><span>). </span></span></p>
</td>
</tr>
</tbody>
</table>
</div>
</div>
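<p><span><span>The backtick substitution is equivalent to capturing the hash in a variable first, which can make the tagging step easier to read:</span></span></p>
<pre>
# same build as above, with the commit hash captured explicitly
TAG=$(git rev-parse --short HEAD)    # e.g. "a1b2c3d"
docker build -t 127.0.0.1:30400/monitor-scale:"$TAG" \
    -f applications/monitor-scale/Dockerfile applications/monitor-scale</pre>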
<p><span><span>6. Once again we’ll need the Socat Registry proxy container to push the monitor-scale image to our registry, so let’s build it. Feel free to skip this step if the socat-registry image already exists from Part 2 (to check, run </span><span>docker images</span><span>).</span></span></p>
<pre>
docker build -t socat-registry -f applications/socat/Dockerfile \
    applications/socat</pre>
<p><span><span>7. Run the proxy container from the newly created image.</span></span></p>
<pre>
docker stop socat-registry; docker rm socat-registry; docker run -d \
    -e "REG_IP=`minikube ip`" -e "REG_PORT=30400" \
    --name socat-registry -p 30400:5000 socat-registry</pre>
<div>
<div>
<table>
<tbody>
<tr>
<td><span><img alt="goO2T3gv5m_ehnoPadbP8Eww76Kgh9oUzsSGD7v9" height="20" src="http://www.sickgaming.net/blog/wp-content/uploads/2018/10/run-and-scale-a-distributed-crossword-puzzle-app-with-ci-cd-on-kubernetes-part-3.png" width="20" /></span></td>
<td>
<p><span><span>This step will fail if local port 30400 is currently in use by another process. You can check if there’s any process currently using this port by running the command</span><br class="kix-line-break" /><span>lsof -i :30400</span></span></p>
</td>
</tr>
</tbody>
</table>
</div>
</div>
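<p><span><span>If the port is taken, the culprit is often a leftover socat container from an earlier run. One way to check and clean up (assuming lsof is installed):</span></span></p>
<pre>
lsof -ti :30400                # prints only the PID(s) bound to the port
docker rm -f socat-registry    # removes a stale proxy container, freeing the port</pre>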
<p><span><span>8. Push the monitor-scale image to the registry.</span></span></p>
<pre>
<span><span>docker push 127.0.0.1:30400/monitor-scale:`git rev-parse --short HEAD`</span></span></pre>
<p><span><span>9. The proxy’s work is done, so go ahead and stop it.</span></span></p>
<pre>
<span><span>docker stop socat-registry</span></span></pre>
<p><span><span>10. Open the registry UI and verify that the monitor-scale image is in our local registry. </span></span></p>
<pre>
<span><span>minikube service registry-ui</span></span></pre>
<div><span><span><img alt="_I4gSkKcakXTMxLSD_qfzVLlTlfLiabRf3fOZzrm" height="315" src="http://www.sickgaming.net/blog/wp-content/uploads/2018/10/run-and-scale-a-distributed-crossword-puzzle-app-with-ci-cd-on-kubernetes-part-3-1.png" width="624" /></span></span></div>
<p><span><span>11. Monitor-scale lets us scale our puzzle app up and down through the Kr8sswordz UI, so we’ll need to do some </span><a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/"><span>RBAC</span></a><span> work to grant monitor-scale the proper rights. </span></span></p>
<pre>
<span><span>kubectl apply -f manifests/monitor-scale-serviceaccount.yaml</span></span></pre>
<div>
<div>
<table>
<tbody>
<tr>
<td><span><img alt="ANM4b9RSNsAb4CFeAbJNUYr6IlIzulAIb0sEvwVJ" height="20" src="http://www.sickgaming.net/blog/wp-content/uploads/2018/10/run-and-scale-a-distributed-crossword-puzzle-app-with-ci-cd-on-kubernetes-part-3.png" width="20" /></span></td>
<td>
<p><span><span>In </span><strong><span>manifests/monitor-scale-serviceaccount.yaml</span></strong><span> you’ll find the specs for the following K8s objects. </span></span></p>
<p><span><strong><span>Role:</span></strong><span> The custom “puzzle-scaler” role allows “update” and “get” actions on the Deployments and Deployments/scale resource kinds, restricted to the resource named “puzzle”. This is not a ClusterRole, so it applies only within a specific namespace (in our case “default”) rather than cluster-wide.</span></span></p>
<p><span><strong><span>ServiceAccount</span></strong><span><strong>:</strong> A “monitor-scale” ServiceAccount is assigned to the monitor-scale deployment.</span></span></p>
<p><span><strong><span>RoleBinding</span></strong><span><strong>:</strong> A “monitor-scale-puzzle-scaler” RoleBinding binds together the aforementioned objects.</span><span> </span></span></p>
</td>
</tr>
</tbody>
</table>
</div>
</div>
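<p><span><span>Putting the note together, the three objects fit together roughly as in this sketch (field order and API groups may differ slightly from the actual manifest):</span></span></p>
<pre>
apiVersion: v1
kind: ServiceAccount
metadata:
  name: monitor-scale
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: puzzle-scaler
rules:
- apiGroups: ["apps", "extensions"]
  resources: ["deployments", "deployments/scale"]
  resourceNames: ["puzzle"]    # only the puzzle deployment may be touched
  verbs: ["get", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: monitor-scale-puzzle-scaler
subjects:
- kind: ServiceAccount
  name: monitor-scale
  namespace: default
roleRef:
  kind: Role
  name: puzzle-scaler
  apiGroup: rbac.authorization.k8s.io</pre>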
<p><span><span>12. </span><span>Create the monitor-scale deployment and the Ingress defining the hostname by which this service will be accessible to the other services. </span></span></p>
<pre>
sed 's#127.0.0.1:30400/monitor-scale:$BUILD_TAG#127.0.0.1:30400/monitor-scale:'`git rev-parse --short HEAD`'#' \
    applications/monitor-scale/k8s/deployment.yaml | kubectl apply -f -</pre>
<div>
<div>
<table>
<tbody>
<tr>
<td><span><img alt="goO2T3gv5m_ehnoPadbP8Eww76Kgh9oUzsSGD7v9" height="20" src="http://www.sickgaming.net/blog/wp-content/uploads/2018/10/run-and-scale-a-distributed-crossword-puzzle-app-with-ci-cd-on-kubernetes-part-3.png" width="20" /></span></td>
<td>
<p><span><span>The sed command replaces the $BUILD_TAG placeholder in the manifest file with the actual tag value used in the previous docker build command. We’ll see later how a Jenkins plugin can do this automatically.</span></span></p>
</td>
</tr>
</tbody>
</table>
</div>
</div>
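<p><span><span>In other words, the substitution rewrites the image line of the manifest from its placeholder form into a concrete tag (the hash shown is an example):</span></span></p>
<pre>
# before: image: 127.0.0.1:30400/monitor-scale:$BUILD_TAG
# after:  image: 127.0.0.1:30400/monitor-scale:a1b2c3d</pre>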
<p><span><span>13. Wait for the monitor-scale deployment to finish. </span></span></p>
<pre>
<span><span>kubectl rollout status deployment/monitor-scale</span></span></pre>
<p><span><span>14. View pods to see the monitor-scale pod running.</span></span></p>
<pre>
<span><span>kubectl get pods</span></span></pre>
<p><span><span>15. View services to see the monitor-scale service. </span></span></p>
<pre>
<span><span>kubectl get services</span></span></pre>
<p><span><span>16. View ingress rules to see the monitor-scale ingress rule. </span></span></p>
<pre>
<span><span>kubectl get ingress</span></span></pre>
<p><span><span>17. View deployments to see the monitor-scale deployment. </span></span></p>
<pre>
<span><span>kubectl get deployments</span></span></pre>
<p><span><span>18. We will run a script to bootstrap the </span><strong><span>puzzle</span></strong><span> and </span><strong><span>mongo</span></strong><span> services, creating Docker images and storing them in the local registry. The puzzle.sh script runs through the same build, proxy, push, and deploy steps we just ran through manually for both services.</span></span></p>
<pre>
<span><span>scripts/puzzle.sh</span></span></pre>
<p><span><span>19. Check to see if the </span><strong><span>puzzle</span></strong><span> and </span><strong><span>mongo</span></strong><span> services have been deployed. </span></span></p>
<pre>
<span><span>kubectl rollout status deployment/puzzle</span>
<span>kubectl rollout status deployment/mongo</span></span></pre>
<p><span><span>20. Bootstrap the </span><strong><span>kr8sswordz</span></strong><span> frontend web application. This script follows the same build, proxy, push, and deploy steps that the other services followed. </span></span></p>
<pre>
<span><span>scripts/kr8sswordz-pages.sh</span></span></pre>
<p><span><span>21. Check to see if the frontend has been deployed.</span></span></p>
<pre>
<span><span>kubectl rollout status deployment/kr8sswordz</span></span></pre>
<p><span><span>22. Check to see that all the pods are running.</span></span></p>
<pre>
<span><span>kubectl get pods</span></span></pre>
<p><span><span>23. Start the web application in your default browser. </span></span></p>
<pre>
<span><span>minikube service kr8sswordz</span></span>
</pre>
<h2><span><span>Giving the Kr8sswordz Puzzle a Spin</span></span></h2>
<p><span><span>Now that it’s up and running, let’s give the Kr8sswordz puzzle a try. We’ll also spin up several backend service instances and hammer it with a load test to see how Kubernetes automatically balances the load. </span></span></p>
<p><span><span>1. Try filling out some of the answers to the puzzle. You’ll see that any wrong answers are automatically shown in red as letters are filled in. </span></span></p>
<p><span><span>2. Click </span><strong><span>Submit</span></strong><span>. Your current answers for the puzzle are stored in MongoDB. </span></span></p>
<p><span><span><img alt="EfPr45Sz_JuXZDzxNUyRsfXnKCis5iwRZLGi3cSo" height="381" src="http://www.sickgaming.net/blog/wp-content/uploads/2018/10/run-and-scale-a-distributed-crossword-puzzle-app-with-ci-cd-on-kubernetes-part-3-2.png" width="624" /></span></span><br class="kix-line-break" /><span><span>3. Try filling out the puzzle a bit more, then click </span><span>Reload </span><span>once. This will perform a GET which retrieves the last submitted puzzle answers in MongoDB.</span></span></p>
<p><span><span>Did you notice the green arrow on the right as you clicked </span><span>Reload</span><span>? The arrow indicates that the application is fetching the data from MongoDB. The GET also caches those same answers in etcd with a 30 sec TTL (time to live). If you immediately press </span><span>Reload</span><span> again, it will retrieve answers from etcd until the TTL expires, at which point answers are again retrieved from MongoDB and re-cached. Give it a try, and watch the arrows.</span></span></p>
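<p><span><span>Under the hood, a 30-second expiry like this maps naturally onto an etcd lease. As a sketch of the same idea using the etcdctl v3 CLI (the key name and value are illustrative, not the app’s actual keys):</span></span></p>
<pre>
# grant a 30-second lease, then attach the cached answers to it;
# etcd deletes the key automatically when the lease expires
LEASE_ID=$(etcdctl lease grant 30 | awk '{print $2}')
etcdctl put --lease="$LEASE_ID" puzzle/answers '{"1across":"etcd"}'</pre>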
<p><span><span>4. Scale the number of instances of the Kr8sswordz puzzle service up to 16 by dragging the upper slider all the way to the right, then click </span><span>Scale</span><span>. Notice how the number of puzzle services increases.</span></span></p>
<div>
<div>
<table>
<tbody>
<tr>
<td><span><img alt="goO2T3gv5m_ehnoPadbP8Eww76Kgh9oUzsSGD7v9" height="20" src="http://www.sickgaming.net/blog/wp-content/uploads/2018/10/run-and-scale-a-distributed-crossword-puzzle-app-with-ci-cd-on-kubernetes-part-3.png" width="20" /></span></td>
<td>
<p><span><span>If you did not allocate 8 GB of memory to Minikube, we suggest not exceeding 6 scaled instances using the slider.</span></span></p>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<p><span><img alt="r5ShVJ4omRX9znIrPLlpBCwatys2yjjdHA2h2Dlq" height="467" src="https://lh3.googleusercontent.com/r5ShVJ4omRX9znIrPLlpBCwatys2yjjdHA2h2DlqTMmwEx2UOqKpWpNjrWAijGPUN48BpEC2ar5eD3Rk2M5yEPXPR0SnKr6Qn66unwO1oylxwZFReyN9QZHlzZ6cqvpufdMigc8C" width="423" /></span></p>
<p><span><span>In a terminal, run </span><em><span>kubectl get pods</span></em><span> to see the new replicas.</span></span></p>
<p><span><span>5. Now run a load test. Drag the lower slider to the right to 250 requests, and click </span><strong><span>Load Test</span></strong><span>. Notice how it very quickly hits several of the puzzle services (the ones that flash white) to manage the numerous requests. Kubernetes is automatically balancing the load across all available pod instances. Thanks, Kubernetes!</span></span><br class="kix-line-break" /><span><span><img alt="P4S5i1UdQg6LHo71fTFLfHiZa1IpGwmXDhg7nhZJ" height="382" src="https://lh5.googleusercontent.com/P4S5i1UdQg6LHo71fTFLfHiZa1IpGwmXDhg7nhZJzDZo2zPuVZX5vT8dsmI_G8w-tM8xEXR3A1cknH9f-CGcv9tMP-DAN-EJ3tu0t9kKiEIutBaHbmpNzw8O3MbWI23udVZKSCSm" width="564" /></span></span></p>
<p>6. <span><span>Drag the upper slider back down to 1 and click </span><span>Scale</span><span>. In a terminal, run </span><span>kubectl get pods</span><span> to see the puzzle services terminating. </span></span></p>
<p><span><span><img alt="g5SHkVKTJQjiRvaG-huPf8aJmLWS19QGlmqgn2OI" height="119" src="https://lh3.googleusercontent.com/g5SHkVKTJQjiRvaG-huPf8aJmLWS19QGlmqgn2OI5ynzShJfN4oefZpQtBedQMXO6ZLrCMBAiTCRJSl5ISVJjDdeeLVqa3ODWCJzmitXO8k53GsK4_-WAhIY8lPGDPDKFLfuaPK4" width="481" /></span></span></p>
<p><span>7. Now let’s delete the puzzle pod and watch Kubernetes automatically heal it by starting a replacement pod.</span></p>
<p><span>a. In a terminal, enter </span><span>kubectl get pods</span><span> to see all pods. Copy the puzzle pod name (similar to the one shown in the picture above). </span></p>
<p><span>b. Enter the following command to delete the remaining puzzle pod. </span></p>
<pre>
kubectl delete pod [puzzle podname]</pre>
<p><span><span>c. Enter </span><span>kubectl get pods</span><span> to see the old pod terminating and the new pod starting. You should see the new puzzle pod appear in the Kr8sswordz Puzzle app. </span></span></p>
<h2>What’s Happening on the Backend</h2>
<p><span><span>We’ve seen a bit of Kubernetes magic: how pods can be scaled for load, how Kubernetes automatically balances requests across them, and how pods are self-healed when they go down. Let’s take a closer look at what’s happening on the backend of the Kr8sswordz Puzzle app to see how this functionality works. </span></span></p>
<p><span><span><img alt="Kr8sswordz.png" height="388" src="https://lh5.googleusercontent.com/aqi_8RG3l634TPbFDz9z2eGalvh3VemseUKKgYQl9dKSUClpcpLMTAjr8DtVi6Mst17vmzJhfFrisqjSWWAAe-PByS0MUghUOcutOJXN20qBHzEcnnS2ehTYtnWmTKRzXp7FFn0_" width="624" /></span></span></p>
<p><span><span>1. Each pod instance of the </span><strong><span>puzzle</span></strong><span> service handles crossword requests. The puzzle service uses a LoopBack data source to store answers in MongoDB. When the </span><strong><span>Reload</span></strong><span> button is pressed, answers are retrieved from MongoDB with a GET request, and the etcd client is used to cache those answers with a 30-second TTL. </span></span></p>
<p><span><span>2. The </span><strong><span>monitor-scale</span></strong><span> pod handles scaling and load test functionality for the app. When the </span><span>Scale</span><span> button is pressed, the monitor-scale pod uses the Kubernetes API to scale the number of </span><strong><span>puzzle</span></strong><span> pods up and down. </span></span></p>
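<p><span><span>The API call monitor-scale makes is equivalent to what you could run by hand from a terminal, just authenticated with its service account:</span></span></p>
<pre>
# manual equivalent of pressing Scale with the slider set to 4
kubectl scale deployment puzzle --replicas=4</pre>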
<p><span><span>3. When the </span><strong><span>Load Test</span></strong><span> button is pressed, the monitor-scale pod runs the </span><strong><span>loadtest</span></strong><span> by sending a number of GET requests to the service pods based on the count sent from the front end. The </span><strong><span>puzzle</span></strong><span> service sends </span><strong><span>Hits</span></strong><span> to monitor-scale whenever it receives a request. </span><strong><span>Monitor-scale</span></strong><span> then uses WebSockets to broadcast to the UI, making pod instances light up green. </span></span></p>
<p><span><span>4. When a </span><strong><span>puzzle</span></strong><span> pod instance goes </span><strong><span>up</span></strong><span> or </span><strong><span>down</span></strong><span>, the puzzle pod sends this information to the monitor-scale pod. The up and down states are configured as lifecycle hooks in the puzzle pod’s K8s deployment, which curl the same endpoint on monitor-scale (see </span><strong><span>kubernetes-ci-cd/applications/crossword/k8s/deployment.yml</span></strong><span> to view the hooks). Monitor-scale persists the list of available puzzle pods in etcd with set, delete, and get pod requests.</span></span></p>
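<p><span><span>The hooks look roughly like the following sketch; the monitor-scale endpoint paths and port here are illustrative, so check the actual deployment.yml for the real values.</span></span></p>
<pre>
lifecycle:
  postStart:    # fires right after the container starts -> report "up"
    exec:
      command: ["curl", "-X", "POST", "http://monitor-scale:3001/up"]
  preStop:      # fires just before termination -> report "down"
    exec:
      command: ["curl", "-X", "POST", "http://monitor-scale:3001/down"]</pre>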
<div>
<div>
<table>
<tbody>
<tr>
<td><span><img alt="goO2T3gv5m_ehnoPadbP8Eww76Kgh9oUzsSGD7v9" height="20" src="http://www.sickgaming.net/blog/wp-content/uploads/2018/10/run-and-scale-a-distributed-crossword-puzzle-app-with-ci-cd-on-kubernetes-part-3.png" width="20" /></span></td>
<td>
<p><span><span>We do not recommend stopping Minikube (</span><span>minikube stop</span><span>) before moving on to do the tutorial in </span><a href="https://www.linux.com/blog/learn/chapter/intro-to-kubernetes/2017/6/set-cicd-distributed-crossword-puzzle-app-kubernetes-part-4"><span>Part 4</span></a><span>. Upon restart, it may create some issues with the etcd cluster.</span></span></p>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<div>
<table>
<tbody>
<tr>
<td>
<h3><span><span>Automated Scripts</span></span></h3>
<p><span><span>If you need to walk through the steps we did again (or do so quickly), we’ve provided npm scripts that will automate running the same commands in a terminal. </span></span></p>
<p><span><span>1. To use the automated scripts, you’ll need to install Node.js and npm. </span></span></p>
<p><span><span>On </span><span>Linux</span><span>, follow the </span><a href="https://nodejs.org/en/download/package-manager/"><span>Node.js installation steps</span></a><span> for your distribution. To quickly install Node.js and npm on Ubuntu 16.04 or higher, use the following terminal commands. </span></span></p>
<pre>
<span><span> a. curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash -</span></span>
<span><span> b. sudo apt-get install -y nodejs</span></span></pre>
<p><span><span>On </span><strong><span>macOS</span></strong><span>, </span><a href="https://nodejs.org/en/"><span>download the Node.js installer</span></a><span>, and then double-click the .pkg file to install Node.js and npm.</span></span></p>
<p><span><span>2. Change directories to the cloned repository and install the interactive tutorial script:</span></span></p>
<pre>
<span><span> a. </span></span><span>cd ~/kubernetes-ci-cd</span>
<span> b. </span><span>npm install</span></pre>
<p><span>3. </span><span>Start the script:</span></p>
<pre>
<span>npm run part1 (or part2, part3, part4 of the blog series)</span></pre>
<p><span>4. Press </span><strong><span>Enter</span></strong><span> to run each command in turn. </span></p>
</td>
</tr>
</tbody>
</table>
</div>
<h2><span><span>Up Next</span><span> </span></span></h2>
<p><span><span>Now that we’ve run our Kr8sswordz Puzzle app, the next step is to set up CI/CD for our app. Similar to what we did for the Hello-Kenzan app, Part 4 will cover creating a Jenkins pipeline for the Kr8sswordz Puzzle app so that it builds at the touch of a button. We will also modify a bit of code to enhance the application and enable our </span><span>Submit </span><span>button to show white hits on the puzzle service instances in the UI. </span></span></p>
<p><em>Curious to learn more about Kubernetes? <a href="https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x">Enroll in Introduction to Kubernetes</a>, a FREE training course from The Linux Foundation, hosted on edX.org.</em></p>
<p><em>This article was revised and updated by David Zuluaga, a front end developer at Kenzan. He was born and raised in Colombia, where he earned his BE in Systems Engineering. After moving to the United States, he received his master’s degree in computer science at Maharishi University of Management. David has been working at Kenzan for four years, dynamically moving throughout a wide range of areas of technology, from front-end and back-end development to platform and cloud computing. David has also helped design and deliver training sessions on microservices for multiple client teams.</em></p>
</div>
<div style="margin: 5px 5% 10px 5%;"><img src="http://www.sickgaming.net/blog/wp-content/uploads/2018/10/run-and-scale-a-distributed-crossword-puzzle-app-with-ci-cd-on-kubernetes-part-3.png" width="256" height="256" title="" alt="" /></div><div><p><span><span>In </span><span>Part 2</span><span> of our series, we deployed a Jenkins pod into our Kubernetes cluster, and used Jenkins to set up a CI/CD pipeline that automated building and deploying our containerized Hello-Kenzan application in Kubernetes. </span></span></p>
<p><span><span>In Part 3, we are going to set aside the Hello-Kenzan application and get to the main event: running our Kr8sswordz Puzzle application. We will showcase the built-in UI functionality to scale backend service pods up and down using the Kubernetes API, and also simulate a load test. We will also touch on showing caching in etcd and persistence in MongoDB. </span></span></p>
<p><span><span>Before we start the install, it’s helpful to take a look at the pods we’ll run as part of the Kr8sswordz Puzzle app:</span></span></p>
<ul>
<li>
<p><span><span>kr8sswordz</span><span> – A React container with our Node.js frontend UI. </span></span></p>
</li>
<li>
<p><span><span>puzzle</span><span> – The primary backend service that handles submitting and getting answers to the crossword puzzle via persistence in MongoDB and caching in ectd. </span></span></p>
</li>
<li>
<p><span><span>mongo</span><span> – A MongoDB container for persisting crossword answers. </span></span></p>
</li>
<li>
<p><span><span>etcd</span><span> – An etcd cluster for caching crossword answers (this is separate from the etcd cluster used by the K8s Control Plane). </span></span></p>
</li>
<li>
<p><span><span>monitor-scale</span><span> – A backend service that handles functionality for scaling the </span><span>puzzle</span><span> service up and down. This service also interacts with the UI by broadcasting websockets messages. </span></span></p>
</li>
</ul>
<p><span><span>We will go into the main service endpoints and architecture in more detail after running the application. For now, let’s get going! </span></span></p>
<p><em>Read all the articles in the series:</em></p>
<div> </p>
<div>
<table>
<tbody>
<tr>
<td><span><img alt="3di6imeKV7hPtEx3cDcZM3dUG6aW4CWOPmdGOIFA" height="20" src="http://www.sickgaming.net/blog/wp-content/uploads/2018/10/run-and-scale-a-distributed-crossword-puzzle-app-with-ci-cd-on-kubernetes-part-3.png" width="20" /></span></td>
<td>
<p><span><span>This tutorial only runs locally in Minikube and will not work on the cloud. You’ll need a computer running an up-to-date version of Linux or macOS. Optimally, it should have 16 GB of RAM. Minimally, it should have 8 GB of RAM. For best performance, reboot your computer and keep the number of running apps to a minimum. </span></span></p>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<h2><span><span>Running the Kr8sswordz Puzzle App</span></span></h2>
<p><span><span>First make sure you’ve run through the steps in </span><span>Part 1</span><span> and </span><span>Part 2</span><span>, in which we set up our image repository and Jenkins pods—you will need these to proceed with Part 3 (to do so quickly, you can run the part1 and part2 </span><a href="https://docs.google.com/document/d/1oB6E036-UHJomLr3Na6Le63_w7fkaMaTucrWMVrYHeA/edit#heading=h.vuj44njqkr03"><span>automated scripts detailed below</span></a><span>). If you previously stopped Minikube, you’ll need to start it up again. Enter the following terminal command, and wait for the cluster to start:</span></span></p>
<pre>
<span><span>minikube start</span></span></pre>
<p><span><span>You can check the cluster status and view all the pods that are running. </span></span></p>
<pre>
<span><span>kubectl cluster-info</span></span> <span><span>kubectl get pods --all-namespaces</span></span></pre>
<pre>
<span><span><span><span>Make sure the </span><strong><span>registry</span></strong><span> and<strong> </strong></span><strong><span>jenkins</span></strong><span> pods are up and running. </span></span></span></span>
</pre>
<div>
<table>
<tbody>
<tr>
<td><span><img alt="goO2T3gv5m_ehnoPadbP8Eww76Kgh9oUzsSGD7v9" height="20" src="http://www.sickgaming.net/blog/wp-content/uploads/2018/10/run-and-scale-a-distributed-crossword-puzzle-app-with-ci-cd-on-kubernetes-part-3.png" width="20" /></span></td>
<td>
<p><span><span>So far we have been creating deployments directly using K8s manifests, and have not yet used </span><a href="https://helm.sh/"><span>Helm</span></a><span>. Helm is a package manager that deploys a Chart (or package) onto a K8s cluster with all the resources and dependencies needed for the application. Underneath, the chart generates Kubernetes deployment manifests for the application using templates that replace environment configuration values. Charts are stored in a repository and versioned with releases so that cluster state can be maintained. </span></span><br class="kix-line-break" /><span><span>Helm is very powerful because it allows you to templatize, version, reuse, and share the deployments you create for Kubernetes. See </span><a href="https://hub.kubeapps.com/"><span>https://hub.kubeapps.com/</span></a><span> for a look at some of the open source charts available. We will be using Helm to install an etcd operator directly onto our cluster using a pre-built chart. </span></span></p>
</td>
</tr>
</tbody>
</table>
</div>
<p><span><span>1. Initialize Helm. This will install Tiller (Helm’s server) into our Kubernetes cluster. </span></span></p>
<pre>
<span><span>helm init --wait --debug; kubectl rollout status deploy/tiller-deploy -n kube-system</span></span></pre>
<p><span><span>2. We will deploy an </span><span>e<strong>tcd operator</strong></span><span> onto the cluster using a Helm Chart. </span></span></p>
<pre>
<span><span>helm install stable/etcd-operator --version 0.8.0 --name etcd-operator --debug --wait</span></span></pre>
<div>
<div>
<table>
<tbody>
<tr>
<td><span><img alt="goO2T3gv5m_ehnoPadbP8Eww76Kgh9oUzsSGD7v9" height="20" src="http://www.sickgaming.net/blog/wp-content/uploads/2018/10/run-and-scale-a-distributed-crossword-puzzle-app-with-ci-cd-on-kubernetes-part-3.png" width="20" /></span></td>
<td>
<p><span><span>An </span><span>operator</span><span> is a custom controller for managing complex or stateful applications. As a separate watcher, it monitors the state of the application, and acts to align the application with a given specification as events occur. In the case of etcd, as nodes terminate, the operator will bring up replacement nodes using snapshot data. </span></span></p>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<p><span><span>3. Deploy the </span><strong><span>etcd cluster</span></strong><span> and K8s Services for accessing the cluster.</span></span></p>
<p><span><span>kubectl create -f manifests/etcd-cluster.yaml</span><br class="kix-line-break" /><span>kubectl create -f manifests/etcd-service.yaml</span></span></p>
<p><span><span>You can see these new pods by entering </span><span>kubectl get pods</span><span> in a separate terminal window. The cluster runs as three pod instances for redundancy.</span></span></p>
<p><span><span>4. The crossword application is a multi-tier application whose services depend on each other. We will create three K8s Services so that the applications can communicate with one another.</span></span></p>
<p><span><span>kubectl apply -f manifests/all-services.yaml</span></span></p>
<p><span><span>5. Now we’re going to walk through an initial build of the </span><strong><span>monitor-scale</span></strong><span> application.</span></span></p>
<pre>
<span><span>docker build -t 127.0.0.1:30400/monitor-scale:`git </span></span><span><span>rev-parse </span></span>
<span><span> --short </span></span><span><span>HEAD` -f applications/monitor-scale/Dockerfile </span></span>
<span><span> applications/monitor-scale</span></span></pre>
<div>
<div>
<table>
<tbody>
<tr>
<td><span><img alt="goO2T3gv5m_ehnoPadbP8Eww76Kgh9oUzsSGD7v9" height="20" src="http://www.sickgaming.net/blog/wp-content/uploads/2018/10/run-and-scale-a-distributed-crossword-puzzle-app-with-ci-cd-on-kubernetes-part-3.png" width="20" /></span></td>
<td>
<p><span><span>To simulate a real life scenario, we are leveraging the github commit id to tag all our service images, as shown in this command (</span><span>git rev-parse –short HEAD</span><span>). </span></span></p>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<p><span><span>6. Once again we’ll need to set up the Socat Registry proxy container to push the monitor-scale image to our registry, so let’s build it. Feel free to skip this step in case the socat-registry image already exists from Part 2 (to check, run </span><span>docker images</span><span>).</span></span></p>
<pre>
<span><span>docker build -t socat-registry -f applications/socat/Dockerfile </span></span>
<span><span> applications/socat</span></span></pre>
<p><span><span>7. Run the proxy container from the newly created image.</span></span></p>
<pre>
<span><span>docker stop socat-registry; docker rm socat-registry; docker run </span></span>
<span><span> -d -e "REG_IP=`minikube ip`" -e "REG_PORT=30400" --name </span></span>
<span><span> socat-registry -p 30400:5000 socat-registry</span></span></pre>
<div>
<div>
<table>
<tbody>
<tr>
<td><span><img alt="goO2T3gv5m_ehnoPadbP8Eww76Kgh9oUzsSGD7v9" height="20" src="http://www.sickgaming.net/blog/wp-content/uploads/2018/10/run-and-scale-a-distributed-crossword-puzzle-app-with-ci-cd-on-kubernetes-part-3.png" width="20" /></span></td>
<td>
<p><span><span>This step will fail if local port 30400 is currently in use by another process. You can check if there’s any process currently using this port by running the command</span><br class="kix-line-break" /><span>lsof -i :30400</span></span></p>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<p><span><span>8. Push the monitor-scale image to the registry.</span></span></p>
<pre>
<span><span>docker push 127.0.0.1:30400/monitor-scale:`git rev-parse --short HEAD`</span></span></pre>
<p><span><span>9. The proxy’s work is done, so go ahead and stop it.</span></span></p>
<pre>
<span><span>docker stop socat-registry</span></span></pre>
<p><span><span>10. Open the registry UI and verify that the monitor-scale image is in our local registry. </span></span></p>
<pre>
<span><span>minikube service registry-ui</span></span></pre>
<div><span><span><img alt="_I4gSkKcakXTMxLSD_qfzVLlTlfLiabRf3fOZzrm" height="315" src="http://www.sickgaming.net/blog/wp-content/uploads/2018/10/run-and-scale-a-distributed-crossword-puzzle-app-with-ci-cd-on-kubernetes-part-3-1.png" width="624" /></span></span></div>
<p><span><span>11. Monitor-scale has the functionality to let us scale our puzzle app up and down through the Kr8sswordz UI, therefore we’ll need to do some </span><a href="https://kubernetes.io/docs/reference/access-authn-authz/rbac/"><span>RBAC</span></a><span> work in order to provide monitor-scale with the proper rights. </span></span></p>
<pre>
<span><span>kubectl apply -f manifests/monitor-scale-serviceaccount.yaml</span></span></pre>
<div>
<div>
<table>
<tbody>
<tr>
<td><span><img alt="ANM4b9RSNsAb4CFeAbJNUYr6IlIzulAIb0sEvwVJ" height="20" src="http://www.sickgaming.net/blog/wp-content/uploads/2018/10/run-and-scale-a-distributed-crossword-puzzle-app-with-ci-cd-on-kubernetes-part-3.png" width="20" /></span></td>
<td>
<p><span><span>In the </span><strong><span>manifests/monitor-scale-serviceaccount.yaml</span></strong><span><strong> </strong>you’ll find the specs for the following K8s Objects. </span></span></p>
<p><span><strong><span>Role</span></strong><span><strong>: </strong>The custom “puzzle-scaler” role allows “Update” and “Get” actions to be taken over the Deployments and Deployments/scale kinds of resources, specifically to the resource named “puzzle”. This is not a ClusterRole kind of object, which means it will only work on a specific namespace (in our case “default”) as opposed to being cluster-wide.</span></span></p>
<p><span><strong><span>ServiceAccount</span></strong><span><strong>:</strong> A “monitor-scale” ServiceAccount is assigned to the monitor-scale deployment.</span></span></p>
<p><span><strong><span>RoleBinding</span></strong><span><strong>:</strong> A “monitor-scale-puzzle-scaler” RoleBinding binds together the aforementioned objects.</span><span> </span></span></p>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<p><span><span>12. </span><span>Create the monitor-scale deployment and the Ingress defining the hostname by which this service will be accessible to the other services. </span></span></p>
<pre>
<span><span>sed 's#127.0.0.1:30400/monitor-scale:$BUILD_TAG#127.0.0.1:30400/</span></span>
<span><span> monitor-scale:'`git rev-parse --short HEAD`'#' </span></span>
<span><span> applications/monitor-scale/k8s/deployment.yaml | kubectl apply -f -</span></span></pre>
<div>
<div>
<table>
<tbody>
<tr>
<td><span><img alt="goO2T3gv5m_ehnoPadbP8Eww76Kgh9oUzsSGD7v9" height="20" src="http://www.sickgaming.net/blog/wp-content/uploads/2018/10/run-and-scale-a-distributed-crossword-puzzle-app-with-ci-cd-on-kubernetes-part-3.png" width="20" /></span></td>
<td>
<p><span><span>The sed command is replacing the $BUILD_TAG substring from the manifest file with the actual build tag value used in the previous docker build command. We’ll see later how Jenkins plugin can do this automatically.</span></span></p>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<p><span><span>13. Wait for the monitor-scale deployment to finish. </span></span></p>
<pre>
<span><span>kubectl rollout status deployment/monitor-scale</span></span></pre>
<p><span><span>14. View pods to see the monitor-scale pod running.</span></span></p>
<pre>
<span><span>kubectl get pods</span></span></pre>
<p><span><span>15. View services to see the monitor-scale service. </span></span></p>
<pre>
<span><span>kubectl get services</span></span></pre>
<p><span><span>16. View ingress rules to see the monitor-scale ingress rule. </span></span></p>
<pre>
<span><span>kubectl get ingress</span></span></pre>
<p><span><span>17. View deployments to see the monitor-scale deployment. </span></span></p>
<pre>
<span><span>kubectl get deployments</span></span></pre>
<p><span><span>18. We will run a script to bootstrap the </span><strong><span>puzzle</span></strong><span> and </span><strong><span>mongo</span></strong><span> services, creating Docker images and storing them in the local registry. The puzzle.sh script runs through the same build, proxy, push, and deploy steps we just ran through manually for both services.</span></span></p>
<pre>
<span><span>scripts/puzzle.sh</span></span></pre>
<p><span><span>19. Check to see if the </span><strong><span>puzzle</span></strong><span> and </span><strong><span>mongo</span></strong><span> services have been deployed. </span></span></p>
<pre>
<span><span>kubectl rollout status deployment/puzzle</span>
<span>kubectl rollout status deployment/mongo</span></span></pre>
<p><span><span>20. Bootstrap the </span><strong><span>kr8sswordz</span></strong><span> frontend web application. This script follows the same build proxy, push, and deploy steps that the other services followed. </span></span></p>
<pre>
<span><span>scripts/kr8sswordz-pages.sh</span></span></pre>
<p><span><span>21. Check to see if the frontend has been deployed.</span></span></p>
<pre>
<span><span>kubectl rollout status deployment/kr8sswordz</span></span></pre>
<p><span><span>22. Check to see that all the pods are running.</span></span></p>
<pre>
<span><span>kubectl get pods</span></span></pre>
<p><span><span>23. Start the web application in your default browser. </span></span></p>
<pre>
<span><span>minikube service kr8sswordz</span></span>
</pre>
<h2><span><span>Giving the Kr8sswordz Puzzle a Spin</span></span></h2>
<p><span><span>Now that it’s up and running, let’s give the Kr8sswordz puzzle a try. We’ll also spin up several backend service instances and hammer it with a load test to see how Kubernetes automatically balances the load. </span></span></p>
<p><span><span>1. Try filling out some of the answers to the puzzle. You’ll see that any wrong answers are automatically shown in red as letters are filled in. </span></span></p>
<p><span><span>2. Click </span><strong><span>Submit</span></strong><span><strong>.</strong> When you click </span><strong><span>Submit</span></strong><span>, your current answers for the puzzle are stored in MongoDB. </span></span></p>
<p><span><span><img alt="EfPr45Sz_JuXZDzxNUyRsfXnKCis5iwRZLGi3cSo" height="381" src="http://www.sickgaming.net/blog/wp-content/uploads/2018/10/run-and-scale-a-distributed-crossword-puzzle-app-with-ci-cd-on-kubernetes-part-3-2.png" width="624" /></span></span><br class="kix-line-break" /><span><span>3. Try filling out the puzzle a bit more, then click </span><span>Reload </span><span>once. This will perform a GET which retrieves the last submitted puzzle answers in MongoDB.</span></span></p>
<p><span><span>Did you notice the green arrow on the right as you clicked </span><span>Reload</span><span>? The arrow indicates that the application is fetching the data from MongoDB. The GET also caches those same answers in etcd with a 30 sec TTL (time to live). If you immediately press </span><span>Reload</span><span> again, it will retrieve answers from etcd until the TTL expires, at which point answers are again retrieved from MongoDB and re-cached. Give it a try, and watch the arrows.</span></span></p>
<p><span><span>4. Scale the number of instances of the Kr8sswordz puzzle service up to 16 by dragging the upper slider all the way to the right, then click </span><span>Scale</span><span>. Notice the number of puzzle services increase.</span></span></p>
<div>
<div>
<table>
<tbody>
<tr>
<td><span><img alt="goO2T3gv5m_ehnoPadbP8Eww76Kgh9oUzsSGD7v9" height="20" src="http://www.sickgaming.net/blog/wp-content/uploads/2018/10/run-and-scale-a-distributed-crossword-puzzle-app-with-ci-cd-on-kubernetes-part-3.png" width="20" /></span></td>
<td>
<p><span><span>If you did not allocate 8 GB of memory to Minikube, we suggest not exceeding 6 scaled instances using the slider.</span></span></p>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<p><span><img alt="r5ShVJ4omRX9znIrPLlpBCwatys2yjjdHA2h2Dlq" height="467" src="https://lh3.googleusercontent.com/r5ShVJ4omRX9znIrPLlpBCwatys2yjjdHA2h2DlqTMmwEx2UOqKpWpNjrWAijGPUN48BpEC2ar5eD3Rk2M5yEPXPR0SnKr6Qn66unwO1oylxwZFReyN9QZHlzZ6cqvpufdMigc8C" width="423" /></span></p>
<p><span><span>In a terminal, run </span><em><span>kubectl get pods</span></em><span> to see the new replicas.</span></span></p>
<p><span><span>5. Now run a load test. Drag the lower slider to the right to 250 requests, and click </span><strong><span>Load Test</span></strong><span>. Notice how it very quickly hits several of the puzzle services (the ones that flash white) to manage the numerous requests. Kubernetes is automatically balancing the load across all available pod instances. Thanks, Kubernetes!</span></span><br class="kix-line-break" /><span><span><img alt="P4S5i1UdQg6LHo71fTFLfHiZa1IpGwmXDhg7nhZJ" height="382" src="https://lh5.googleusercontent.com/P4S5i1UdQg6LHo71fTFLfHiZa1IpGwmXDhg7nhZJzDZo2zPuVZX5vT8dsmI_G8w-tM8xEXR3A1cknH9f-CGcv9tMP-DAN-EJ3tu0t9kKiEIutBaHbmpNzw8O3MbWI23udVZKSCSm" width="564" /></span></span></p>
<p>6. <span><span>Drag the middle slider back down to 1 and click </span><span>Scale</span><span>. In a terminal, run </span><span>kubectl get pods</span><span> to see the puzzle services terminating. </span></span></p>
<p><span><span><img alt="g5SHkVKTJQjiRvaG-huPf8aJmLWS19QGlmqgn2OI" height="119" src="https://lh3.googleusercontent.com/g5SHkVKTJQjiRvaG-huPf8aJmLWS19QGlmqgn2OI5ynzShJfN4oefZpQtBedQMXO6ZLrCMBAiTCRJSl5ISVJjDdeeLVqa3ODWCJzmitXO8k53GsK4_-WAhIY8lPGDPDKFLfuaPK4" width="481" /></span></span></p>
<p>7. Now l<span>et’s try deleting the puzzle pod to see Kubernetes restart a pod using its ability to automatically heal downed pods</span></p>
<p><span>a. In a terminal enter </span><span>kubectl get pods</span><span> to see all pods. Copy the puzzle pod name (similar to the one shown in the picture above). </span></p>
<pre>
<span> b. Enter the following command to delete the remaining puzzle pod. </span>
<span>kubectl delete pod [puzzle podname]</span></pre>
<p><span><span>c. Enter </span><span>kubectl get pods</span><span> to see the old pod terminating and the new pod starting. You should see the new puzzle pod appear in the Kr8sswordz Puzzle app. </span></span></p>
<h2>What’s Happening on the Backend</h2>
<p><span><span>We’ve seen a bit of Kubernetes magic, showing how pods can be scaled for load, how Kubernetes automatically handles load balancing of requests, as well as how Pods are self-healed when they go down. Let’s take a closer look at what’s happening on the backend of the Kr8sswordz Puzzle app to make this functionality apparent. </span></span></p>
<p><span><span><img alt="Kr8sswordz.png" height="388" src="https://lh5.googleusercontent.com/aqi_8RG3l634TPbFDz9z2eGalvh3VemseUKKgYQl9dKSUClpcpLMTAjr8DtVi6Mst17vmzJhfFrisqjSWWAAe-PByS0MUghUOcutOJXN20qBHzEcnnS2ehTYtnWmTKRzXp7FFn0_" width="624" /></span></span></p>
<p><span><span>1. pod instance of the </span><strong><span>puzzle</span></strong><span><strong> </strong>service. The puzzle service uses a LoopBack data source to store answers in MongoDB. When the </span><strong><span>Reload</span></strong><span><strong> </strong>button is pressed, answers are retrieved with a GET request in MongoDB, and the etcd client is used to cache answers with a 30 second TTL. </span></span></p>
<p><span><span>2. The </span><strong><span>monitor-scale</span></strong><span><strong> </strong>pod handles scaling and load test functionality for the app. When the </span><span>Scale</span><span> button is pressed, the monitor-scale pod uses the Kubectl API to scale the number of </span><strong><span>puzzle</span></strong><span> pods up and down in Kubernetes. </span></span></p>
<p><span><span>3. When the </span><strong><span>Load Test </span></strong><span>button is pressed, the monitor-scale pod handles the </span><strong><span>loadtest</span></strong><span> by sending several GET requests to the service pods based on the count sent from the front end. The </span><strong><span>puzzle</span></strong><span> service sends </span><strong><span>Hits</span></strong><span> to monitor-scale whenever it receives a request. </span><strong><span>Monitor-scale</span></strong><span><strong> </strong>then uses websockets to broadcast to the UI to have pod instances light up green. </span></span></p>
<p><span><span>4. When a </span><strong><span>puzzle</span></strong><span> pod instance goes </span><strong><span>up</span></strong><span> or </span><strong><span>down</span></strong><span>, the puzzle pod sends this information to the monitor-scale pod. The up and down states are configured as lifecycle hooks in the puzzle pod k8s deployment, which curls the same endpoint on monitor-scale (see </span><strong><span>kubernetes-ci-cd/applications/crossword/k8s/deployment.yml</span></strong><span> to view the hooks). Monitor-scale persists the list of available puzzle pods in etcd with set, delete, and get pod requests.</span></span></p>
<div>
<div>
<table>
<tbody>
<tr>
<td><span><img alt="goO2T3gv5m_ehnoPadbP8Eww76Kgh9oUzsSGD7v9" height="20" src="http://www.sickgaming.net/blog/wp-content/uploads/2018/10/run-and-scale-a-distributed-crossword-puzzle-app-with-ci-cd-on-kubernetes-part-3.png" width="20" /></span></td>
<td>
<p><span><span>We do not recommend stopping Minikube (</span><span>minikube stop</span><span>) before moving on to do the tutorial in </span><a href="https://www.linux.com/blog/learn/chapter/intro-to-kubernetes/2017/6/set-cicd-distributed-crossword-puzzle-app-kubernetes-part-4"><span>Part 4</span></a><span>. Upon restart, it may create some issues with the etcd cluster.</span></span></p>
</td>
</tr>
</tbody>
</table>
</div>
</div>
<div>
<table>
<tbody>
<tr>
<td>
<h3><span><span>Automated Scripts</span></span></h3>
<p><span><span>If you need to walk through the steps we did again (or do so quickly), we’ve provided npm scripts that will automate running the same commands in a terminal. </span></span></p>
<p><span><span>1. To use the automated scripts, you’ll need to install NodeJS and npm. </span></span></p>
<p><span><span>On </span><span>Linux</span><span>, follow the </span><a href="https://nodejs.org/en/download/package-manager/"><span>NodeJS installation steps</span></a><span> for your distribution. To quickly install NodeJS and npm on Ubuntu 16.04 or higher, use the following terminal commands. </span></span></p>
<pre>
<span><span> a. curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash -</span></span>
<span><span> b. sudo apt-get install -y nodejs</span></span></pre>
<p><span><span>On </span><strong><span>macOS</span></strong><span>, </span><a href="https://nodejs.org/en/"><span>download the NodeJS installer</span></a><span>, and then double-click the .pkg file to install NodeJS and npm.</span></span></p>
<p><span><span>2. Change directories to the cloned repository and install the interactive tutorial script:</span></span></p>
<pre>
<span><span> a. </span></span><span>cd ~/kubernetes-ci-cd</span>
<span> b. </span><span>npm install</span></pre>
<p><span>3. </span><span>Start the script</span></p>
<pre>
<span>npm run part1 (or part2, part3, part4 of the blog series)</span></pre>
<p><span>4. Press </span><strong><span>Enter</span></strong><span> to proceed running each command. </span><span><span> </span></span></p>
</td>
</tr>
</tbody>
</table>
</div>
<h2><span><span>Up Next</span><span> </span></span></h2>
<p><span><span>Now that we’ve run our Kr8sswordz Puzzle app, the next step is to set up CI/CD for our app. Similar to what we did for the Hello-Kenzan app, Part 4 will cover creating a Jenkins pipeline for the Kr8sswordz Puzzle app so that it builds at the touch of a button. We will also modify a bit of code to enhance the application and enable our </span><span>Submit </span><span>button to show white hits on the puzzle service instances in the UI. </span></span></p>
<p><em>Curious to learn more about Kubernetes? <a href="https://www.edx.org/course/introduction-kubernetes-linuxfoundationx-lfs158x">Enroll in Introduction to Kubernetes</a>, a FREE training course from The Linux Foundation, hosted on edX.org.</em></p>
<p><em>This article was revised and updated by David Zuluaga, a front end developer at Kenzan. He was born and raised in Colombia, where he studied his BE in Systems Engineering. After moving to the United States, he studied received his master’s degree in computer science at Maharishi University of Management. David has been working at Kenzan for four years, dynamically moving throughout a wide range of areas of technology, from front-end and back-end development to platform and cloud computing. David’s also helped design and deliver training sessions on Microservices for multiple client teams.</em></p>
</div>