
Tune Into Free Live Stream of Keynotes at Open Source Summit & ELC + OpenIoT Summit Europe, October 22-24!

Open Source Summit & ELC + OpenIoT Summit Europe is taking place in Edinburgh, UK next week, October 22-24, 2018. Can’t make it? You’ll be missed, but you don’t have to miss out on the action. Tune into the free livestream to catch all of the keynotes live from your desktop, tablet or phone! Sign up now >>

Hear from the leading technologists in open source! Get an inside scoop on:

  • An update on the Linux kernel
  • Diversity & inclusion to fuel open source growth
  • How open source is changing banking
  • How to build an open source culture within organizations
  • Human rights & scientific collaboration
  • The future of AI and Deep Learning
  • The future of energy with open source
  • The parallels between open source & video games

Sign up for free live stream now >>

Read more at The Linux Foundation


Understanding Linux Links: Part 1

Along with cp and mv, both of which we talked about at length in the previous installment of this series, links are another way of putting files and directories where you want them to be. The advantage is that links let you have one file or directory show up in several places at the same time.

As noted previously, at the physical disk level, things like files and directories don’t really exist. A filesystem conjures them up for our human convenience. But at the disk level, there is something called a partition table, which lives at the beginning of every partition, and then the data scattered over the rest of the disk.

Although there are different types of partition tables, the ones at the beginning of a partition containing your data will map where each directory and file starts and ends. The partition table acts like an index: when you load a file from your disk, your operating system looks up its entry in the table, and the table says where the file starts on the disk and where it finishes. The disk's read head moves to the start point, reads the data until it reaches the end point and, hey presto: here's your file.

Hard Links

A hard link is simply an entry in the partition table that points to an area on a disk that has already been assigned to a file. In other words, a hard link points to data that has already been indexed by another entry. Let’s see how this works.

Open a terminal, create a directory for tests and move into it:

mkdir test_dir
cd test_dir

Create a file by touching it:

touch test.txt

For extra excitement (?), open test.txt in a text editor and add a few words to it.

Now make a hard link by executing:

ln test.txt hardlink_test.txt

Run ls, and you’ll see your directory now contains two files… Or so it would seem. As you read before, what you are really seeing is two names for the exact same file: hardlink_test.txt contains the same content, takes up no extra space on the disk (try it with a large file to verify), and shares the same inode as test.txt:

$ ls -li *test*
16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 hardlink_test.txt
16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 test.txt

ls's -i option shows the inode number of a file. The inode is the chunk of information in the partition table that contains the location of the file or directory on the disk, the last time it was modified, and other data. If two files share the same inode, they are, to all practical effects, the same file, regardless of where they are located in the directory tree.
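You can confirm the shared inode yourself with stat. A minimal sketch, run in a throwaway directory so nothing real is touched:

```shell
# Create a scratch directory, a file, and a hard link to it
mkdir -p /tmp/link_demo && cd /tmp/link_demo
echo "hello links" > test.txt
ln -f test.txt hardlink_test.txt

# %i = inode number, %h = hard link count, %n = file name (GNU stat)
stat -c '%i %h %n' test.txt hardlink_test.txt
```

Both lines report the same inode number and a link count of 2; deleting one name simply decrements that count, and the data is only freed when the count reaches zero.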

Fluffy Links

Soft links, also known as symlinks, are different: a soft link really is an independent file, with its own inode and its own little slot on the disk. It contains only a snippet of data that points the operating system to another file or directory.

You can create a soft link using ln with the -s option:

ln -s test.txt softlink_test.txt

This will create the soft link softlink_test.txt to test.txt in the current directory.

By running ls -li again, you can see the difference between the two different kinds of links:

$ ls -li
total 8
16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 hardlink_test.txt
16515855 lrwxrwxrwx 1 paul paul 8 oct 12 09:50 softlink_test.txt -> test.txt
16515846 -rw-r--r-- 2 paul paul 14 oct 12 09:50 test.txt

hardlink_test.txt and test.txt contain the same text and literally occupy the same space on the disk. They also share the same inode number. Meanwhile, softlink_test.txt occupies much less space and has a different inode number, marking it as a different file altogether. Using ls's -l option also shows the file or directory your soft link points to.
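The practical difference shows up when the original name goes away. A quick sketch (again in a scratch directory):

```shell
mkdir -p /tmp/link_demo2 && cd /tmp/link_demo2
echo "some data" > test.txt
ln -f test.txt hardlink_test.txt
ln -sf test.txt softlink_test.txt

rm test.txt

# The hard link still reaches the data: the inode stays alive as long as
# at least one name points to it
cat hardlink_test.txt

# The soft link now dangles: it only ever stored the path "test.txt"
readlink softlink_test.txt
cat softlink_test.txt || echo "softlink_test.txt is dangling"
```

Note also that hard links cannot span filesystems or (normally) point to directories, while soft links can do both, which is another reason both kinds exist.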

Why Use Links?

They are good for applications that come with their own environment. It often happens that your Linux distro does not ship the latest version of an application you need. Take the case of the fabulous Blender 3D design software. Blender lets you create 3D still images as well as animated films, and who wouldn't want that on their machine? The problem is that the current version of Blender is always at least one version ahead of the one found in any distribution.

Fortunately, Blender provides downloads that run out of the box. These packages contain, apart from the program itself, a complex framework of libraries and dependencies that Blender needs to work. All these bits and pieces come within their own hierarchy of directories.

Every time you want to run Blender, you could cd into the folder you downloaded it to and run:

./blender

But that is inconvenient. It would be better if you could run the blender command from anywhere in your file system, as well as from your desktop command launchers.

The way to do that is to link the blender executable into a bin/ directory. On many systems, you can make the blender command available from anywhere in the file system by linking to it like this:

ln -s /path/to/blender_directory/blender /home/<username>/bin
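For this to work from anywhere, ~/bin has to be on your PATH. A sketch, assuming Blender was unpacked to ~/blender-2.79 (adjust the path to your actual download):

```shell
# Create the personal bin directory and link the (hypothetical) blender binary into it
mkdir -p "$HOME/bin"
ln -sf "$HOME/blender-2.79/blender" "$HOME/bin/blender"

# Check whether ~/bin is already on PATH; many distros add it automatically
# (via ~/.profile) if the directory exists when you log in
case ":$PATH:" in
  *":$HOME/bin:"*) echo "~/bin is on PATH" ;;
  *) echo 'not on PATH yet; add  export PATH="$HOME/bin:$PATH"  to ~/.bashrc' ;;
esac
```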

Another case in which you will need links is for software that needs outdated libraries. If you list your /usr/lib directory with ls -l, you will see a lot of soft-linked files fly by. Take a closer look, and you will see that the links usually have similar names to the original files they are linking to. You may see libblah linking to libblah.so.2, and then, you may even notice that libblah.so.2 links in turn to libblah.so.2.1.0, the original file.

This is because applications often require older versions of a library than what is installed. The problem is that, even if the more modern versions are still compatible with the older versions (and usually they are), the program will bork if it doesn't find the version it is looking for. To solve this problem, distributions often create links so that the picky application believes it has found the older version, when, in reality, it has only found a link and ends up using the more up-to-date version of the library.
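You can recreate such a chain with a dummy file to see how it works (the libblah names here are made up for illustration):

```shell
mkdir -p /tmp/lib_demo && cd /tmp/lib_demo

# Stand-in for the real library, full version in the name
touch libblah.so.2.1.0

# Chain of links from the generic name down to the real file,
# the way distributions typically set them up
ln -sf libblah.so.2.1.0 libblah.so.2
ln -sf libblah.so.2 libblah.so

# ls -l shows each link and its target
ls -l libblah*
```

A program asking for libblah.so.2 follows the link down to libblah.so.2.1.0 and is none the wiser.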

Somewhat related is what happens with programs you compile yourself from source code. These often end up installed under /usr/local: the program itself ends up in /usr/local/bin, and it looks for the libraries it needs in the /usr/local/lib directory. But say that your new program needs libblah, but libblah lives in /usr/lib and that's where all your other programs look for it. You can link it into /usr/local/lib by doing:

ln -s /usr/lib/libblah /usr/local/lib

Or, if you prefer, by cd'ing into /usr/local/lib

cd /usr/local/lib

… and then linking with:

ln -s ../../lib/libblah
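One thing worth remembering: a relative link target is interpreted relative to the directory containing the link, not the directory you ran ln from. A sketch with a throwaway directory tree makes this visible:

```shell
# Build a miniature /usr tree so we don't touch the real one
mkdir -p /tmp/rel_demo/usr/lib /tmp/rel_demo/usr/local/lib
touch /tmp/rel_demo/usr/lib/libblah

cd /tmp/rel_demo/usr/local/lib
# From this directory, ../../lib resolves to /tmp/rel_demo/usr/lib
ln -sf ../../lib/libblah .

# readlink -f resolves the link to its canonical target
readlink -f libblah
```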

There are dozens more cases in which linking proves useful, and you will undoubtedly discover them as you become more proficient in using Linux, but these are the most common. Next time, we’ll look at some linking quirks you need to be aware of.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.


Take Our Cloud Providers Survey and Enter to Win a Maker Kit

Today’s most dynamic and innovative FOSS projects boast significant involvement by well-known cloud service and solution providers. We are launching a survey to better understand how these solution providers are perceived by people engaged in open source communities.

Visible participation and the application of corporate resources have been key drivers of the success of open source software. However, some companies still face challenges:

  • Code consumption with minimal participation in leveraged projects, impacting the ability to influence project direction

  • Hiring FOSS maintainers without a strategy or larger commitment to open source, impacting the ability to retain FOSS developers long-term

  • Compliance missteps and not adhering to FOSS license terms.

The experiences open source community members have with different companies shape how those organizations are perceived among FOSS community participants. If companies want the trust of FOSS project participants, they must invest in building strategies, engaging communities, participating in projects, and complying with licenses.

Cloud Solutions Providers FOSS Survey

The Linux Foundation has been commissioned to survey FOSS developers and users about their opinions, perceptions, and experiences with six top cloud solution and service providers that deploy open source software. The survey examines respondents’ views of the reputation, levels of project engagement, contribution, community citizenship, and project sponsorship of these providers.

By completing this survey, you will be eligible for a drawing for one of ten Maker hardware kits, complete with case, cables, power supply, and other accessories. The survey will remain open until 12 a.m. EST on November 18, 2018.

Take the Survey Now

Drawing Rules

  • At the end of survey period, The Linux Foundation (LF) will randomly choose ten (10) respondents to receive a Maker hardware kit (“prize”).

  • Participants are only eligible to win one prize for this drawing and after winning a first prize will not be entered into any additional prize drawings for this promotion.

  • You must be 18 years or older to participate. Employees, vendors, and contractors of The Linux Foundation and their families are not eligible, but LF project participants and employees of member companies are encouraged to complete the survey and enter the drawing.

  • To enter the drawing, you need only complete the contact info (name, email, etc.). Completing the contact info constitutes an “entry.” Any participant submitting multiple entries may be disqualified without notice. The Linux Foundation reserves the right to disqualify any participant if inaccurate or incomplete information is suspected for any reason.

  • There is no cash equivalent and no person other than the winning person may take delivery of the prize(s). The prize may not be exchanged for cash.

  • Participation in the drawing is open until 12 a.m. EST on December 10, 2018. Any participant completing a survey after the deadline will not be entered into the drawing. The survey may remain open beyond the drawing deadline.

  • Entries will be pooled together and a winner will be randomly selected and notified via email. The winner’s name, city, and state of residence may be posted on our respective social media/marketing outlets (Linux.com, Twitter, Facebook, Google+, etc.). Winners have 30 days to respond to our contact, or a new drawing for the prize will be made.


Set Up CI/CD for a Distributed Crossword Puzzle App on Kubernetes (Part 4)

Part 3 had us running our Kr8sswordz Puzzle app, spinning up multiple instances for a load test, and watching Kubernetes gracefully balance numerous requests across the cluster.

Though we set up Jenkins for use with our Hello-Kenzan app in Part 2, we have yet to set up CI/CD hooks for the Kr8sswordz Puzzle app. Part 4 will walk through this setup. We will use a Jenkins 2.0 Pipeline script for the Kr8sswordz Puzzle app, this time with an important difference: triggering builds based on an update to the forked Git repo. The walkthrough will simulate updating our application with a feature change by pushing the code to Git, which will trigger the Jenkins build process to kick off. In a real-world lifecycle, this automation enables developers to simply push code to a specific branch and have their app built, pushed, and deployed to a specific environment.

Read the previous articles in the series:

 


This tutorial only runs locally in Minikube and will not work on the cloud. You’ll need a computer running an up-to-date version of Linux or macOS. Optimally, it should have 16 GB of RAM. Minimally, it should have 8 GB of RAM. For best performance, reboot your computer and keep the number of running apps to a minimum.

Creating a Kr8sswordz Pipeline in Jenkins

Before you begin, you’ll want to make sure you’ve run through the steps in Parts 1, 2, and 3 so that you have all the components we previously built in Kubernetes (to do so quickly, you can run the automated scripts detailed below). For this tutorial we are assuming that Minikube is still up and running with all the pods from Part 3.

We are ready to create a new pipeline specifically for the puzzle service. This will allow us to quickly re-deploy the service as a part of CI/CD.

1. Enter the following terminal command to open the Jenkins UI in a web browser. Log in to Jenkins using the username and password you previously set up.

minikube service jenkins

2. We’ll want to create a new pipeline for the puzzle service that we previously deployed. On the left in Jenkins, click New Item.



For simplicity we’re only going to create a pipeline for the puzzle service, but we’ve provided Jenkinsfiles for all the rest of the services in order to allow the application to be fully CI/CD capable.

3. Enter the item name as Puzzle-Service, click Pipeline, and click OK.


4. Under the Build Triggers section, select Poll SCM. For the Schedule, enter the string H/5 * * * * which will poll the Git repo every 5 minutes for changes.

5. In the Pipeline section, change the following.

  a. Definition: Pipeline script from SCM

  b. SCM: Git

  c. Repository URL: Enter the URL for your forked Git repository

  d. Script Path: applications/puzzle/Jenkinsfile



Remember how in Part 3 we had to manually replace the $BUILD_TAG env var with the Git commit ID? The Kubernetes Continuous Deploy plugin we’re using in Jenkins will automatically find variables in K8s manifest files ($VARIABLE or ${VARIABLE}) and replace them with environment variables pre-configured for the pipeline in the Jenkinsfile. Variable substitution is functionality that Kubernetes itself lacks as of v1.11.0; the Kubernetes CD plugin, as a third-party tool, provides it.
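You can mimic what the plugin does with plain shell substitution. This sketch uses sed and a made-up one-line manifest fragment purely to illustrate the idea:

```shell
# A manifest fragment with a placeholder, as the pipeline would see it
printf 'image: 127.0.0.1:30400/puzzle:$BUILD_TAG\n' > /tmp/puzzle-fragment.yaml

# Substitute the variable before applying, much as the plugin does;
# the tag value here is a hypothetical Jenkins build tag
BUILD_TAG="jenkins-Puzzle-Service-7"
sed "s|\$BUILD_TAG|$BUILD_TAG|" /tmp/puzzle-fragment.yaml
```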

6. When you are finished, click Save. On the left, click Build Now to run the new pipeline. This will rebuild the image from the registry, and redeploy the puzzle pod. You should see it successfully run through the build, push, and deploy steps in a few minutes.

Our Puzzle-Service pipeline is now set up to poll the Git repo for changes every 5 minutes and kick off a build if changes are detected.

Pushing a Feature Update Through the Pipeline

Now let’s make a single change that will trigger our pipeline and rebuild the puzzle-service.

On our current Kr8sswordz Puzzle app, hits against the puzzle services show up as white in the UI when pressing Reload or performing a Load Test.

However, you may have noticed that the same white hit does not light up when clicking the Submit button. We are going to remedy this with an update to the code.

7. In a terminal, open up the Kr8sswordz Puzzle app. (If you don’t see the puzzle, you might need to refresh your browser.)

minikube service kr8sswordz

8. Spin up several instances of the puzzle service by moving the slider to the right and clicking Scale. For reference, click Submit and notice that the white hit does not register on the puzzle services.


If you did not allocate 8 GB of memory to Minikube, we suggest not exceeding 6 scaled instances using the slider.

9. Edit applications/puzzle/common/models/crossword.js in your favorite text editor, or edit it in nano using the commands below.

cd ~/kubernetes-ci-cd
nano applications/puzzle/common/models/crossword.js

You’ll see the following commented section on lines 42-43:

// Part 4: Uncomment the next line to enable puzzle pod highlighting when clicking the Submit button
//fireHit();

Uncomment line 43 by deleting the forward slashes, then save the file. (In nano, press Ctrl+X to close the file, type Y to confirm saving, and press Enter to confirm the filename.)
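If you'd rather not open an editor, the same edit can be scripted. A sketch with sed, assuming the repo was cloned to ~/kubernetes-ci-cd as in earlier parts and the call sits at the start of its line as shown above:

```shell
cd ~/kubernetes-ci-cd

# Strip the leading // from the fireHit() call only; the explanatory
# comment line above it is left untouched
sed -i 's|^//fireHit();|fireHit();|' applications/puzzle/common/models/crossword.js

# Verify: should now show the uncommented call
grep -n '^fireHit();' applications/puzzle/common/models/crossword.js
```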

10. Commit and push the change to your forked Git repo (you may need to enter your GitHub credentials):

git commit -am "Enabled hit highlighting on Submit"
git push

11. In Jenkins, open up the Puzzle-Service pipeline and wait until it triggers a build. It should trigger every 5 minutes. (If it doesn’t trigger right away, give it some time.)

12. After it triggers, observe how the puzzle services disappear in the Kr8sswordz Puzzle app, and how new ones take their place.

13. Try clicking Submit to test that hits now register as white.

If you see one of the puzzle instances light up, it means you’ve successfully set up a CI/CD pipeline that automatically builds, pushes, and deploys code changes to a pod in Kubernetes. It’s okay—go ahead and bask in the glory for a minute.


You’ve completed Part 4 and finished Kenzan’s blog series on CI/CD with Kubernetes!

From a development perspective, it’s worth mentioning a few things that might be done differently in a real-world scenario with our pipeline:

  • You would likely have separate repositories for each of the services that compose the Kr8sswordz Puzzle to enforce separation for microservice develop/build/deploy. Here we’ve combined all services in one repo for ease of use with the tutorial.

  • You would also set up individual pipelines for the monitor-scale and kr8sswordz services. Jenkinsfiles for these services are actually included in the repository, though for the purpose of the tutorial we’ve kept things simple with a single pipeline to demonstrate CI/CD.

  • You would likely set up separate pipelines for each deployment environment, such as Dev, QA, Stage, and Prod environments. For triggering builds for these environments, you could use different Git branches that represent the environments you push code to. (For example, dev branch > deploy to Dev, master branch > deploy to QA, etc.)

  • Though easy to set up, the SCM polling operation is somewhat resource intensive, as it requires Jenkins to scan the entire repo for changes. An alternative is to use the Jenkins GitHub plugin on your Jenkins server.

Automated Scripts

If you need to walk through the steps we did again (or do so quickly), we’ve provided npm scripts that will automate running the same commands in a terminal.  

  1. To use the automated scripts, you’ll need to install NodeJS and npm.

On Linux, follow the NodeJS installation steps for your distribution. To quickly install NodeJS and npm on Ubuntu 16.04 or higher, use the following terminal commands.

curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash -
sudo apt-get install -y nodejs

On macOS, download the NodeJS installer, and then double-click the .pkg file to install NodeJS and npm.

2. Change directories to the cloned repository and install the interactive tutorial script:

cd ~/kubernetes-ci-cd
npm install

3. Start the script

npm run part1 (or part2, part3, part4 of the blog series) 

4. Press Enter to proceed running each command.

Going Deeper

Building the Kr8sswordz Puzzle app has shown us some pretty cool continuous integration and container management patterns:

  • How infrastructure such as Jenkins or image repositories can run as pods in Kubernetes.

  • How Kubernetes handles scaling, load balancing, and automatic healing of pods.

  • How Jenkins 2.0 Pipeline scripts can be used to automatically run on a Git commit to build the container image, push it to the repository, and deploy it as a pod in Kubernetes.

If you are interested in going deeper into the CI/CD Pipeline process with deployment tools like Spinnaker, see Kenzan’s paper Image is Everything: Continuous Delivery with Kubernetes and Spinnaker.

Kenzan is a software engineering and full service consulting firm that provides customized, end-to-end solutions that drive change through digital transformation. Combining leadership with technical expertise, Kenzan works with partners and clients to craft solutions that leverage cutting-edge technology, from ideation to development and delivery. Specializing in application and platform development, architecture consulting, and digital transformation, Kenzan empowers companies to put technology first.

This article was revised and updated by David Zuluaga, a front-end developer at Kenzan. He was born and raised in Colombia, where he earned his BE in Systems Engineering. After moving to the United States, he received his master’s degree in computer science at Maharishi University of Management. David has been working at Kenzan for four years, moving dynamically throughout a wide range of areas of technology, from front-end and back-end development to platform and cloud computing. David has also helped design and deliver training sessions on microservices for multiple client teams.

Curious to learn more about Kubernetes? Enroll in Introduction to Kubernetes, a FREE training course from The Linux Foundation, hosted on edX.org.


Arm Launches Mbed Linux and Extends Pelion IoT Service

Politics and international relations may be fraught with acrimony these days, but the tech world seems a bit friendlier of late. Last week Microsoft joined the Open Invention Network and agreed to grant a royalty-free, unrestricted license of its 60,000-patent portfolio to other OIN members, thereby enabling Android and Linux device manufacturers to avoid exorbitant patent payments. This week, Arm and Intel kept up the happy talk by agreeing to a partnership involving IoT device provisioning.

Arm’s recently announced Pelion IoT Platform will align with Intel’s Secure Device Onboard (SDO) provisioning technology to make it easier for IoT vendors and customers to onboard both x86 and Arm-based devices using a common Pelion platform. Arm also announced Pelion-related partnerships with myDevices and Arduino (see below).

In another nod to Intel, Arm unveiled a new, IoT focused Mbed Linux OS distribution that combines the Linux kernel with tools and recipes from the Intel-backed Yocto Project. The distro also integrates security and IoT connectivity code from its open source Mbed RTOS.

When Pelion was announced, Arm mentioned cross-platform support, but there were few details. Now with the Intel SDO deal and the launch of Mbed Linux OS, Arm has formally expanded Pelion from an MCU-only IoT data aggregation platform to one that supports more advanced x86 and Cortex-A based systems.

Mbed Linux OS

The early stage Mbed Linux OS will be released by the end of the year as an invitation-only developer preview. Both the OS source code and related test suites will eventually be open sourced.

In the Mbed Linux OS announcement, Arm’s Mark Wright pitches the distro as a secure, IoT-focused “sibling” to the Cortex-M focused Mbed, designed instead for Cortex-A processors. Arm will support Mbed Linux with its MCU-oriented Mbed community of 350,000 developers and will offer support for popular Linux development boards and modules. The Softbank-owned company will also supply optional commercial support.

Like Mbed, Mbed Linux will be “deeply integrated” with the Pelion IoT System in order “to simplify lifecycle management.” The Pelion support provides device provisioning, connectivity, and updates, thereby enabling development teams to update the OS and the applications independently, says Wright. Working with the Pelion Device Management Application, Mbed Linux OS can “simplify in-field provisioning and eradicate the need for legacy serial connections for initial device configuration,” says Arm.

Mbed Linux will support Arm’s Platform Security Architecture and hardware based TrustZone security to enable secure, signed boot and signed updates. It will also enable deployment of applications in secure, OCI-compliant containers.

Arm did not specify which components of the Yocto Project code it would integrate with Mbed. In late August, Arm and Facebook joined Intel and TI as Platinum members of the Yocto Project. The Linux Foundation hosted project was launched by Intel but is now widely used on Arm as well as x86 based IoT devices.

Despite common references to “Yocto Linux,” Yocto Project is not a distribution, but rather a collection of open source templates, tools, and methods for creating custom embedded Linux-based systems. A Yocto foundation underlies most major commercial Linux distributions such as Wind River Linux and Mentor Embedded Linux and is often spun into custom builds by DIY developers, especially for resource constrained IoT devices.

We saw no mention of a contribution from the Arm-backed Linaro initiative for either Mbed Linux or Pelion. Linaro, which oversees the 96Boards project, develops open source embedded Linux and Android software components. The Yocto and Linaro projects were initially seen as rivals, but they have grown increasingly complementary. Linaro’s Arm toolchain can be used within the Yocto Project, as well as with the related OpenEmbedded build environment and BitBake build engine.

Developers can sign up for the limited number of invites to participate in the upcoming developer preview of Mbed Linux OS here.

Arm’s Pelion partnerships

Arm’s Pelion IoT Platform will soon run on devices with Intel’s recently launched Secure Device Onboard (SDO) service, enabling customers to deploy both Arm and x86 based systems controlled by the common Pelion platform. “We believe this collaboration is a big step forward for greater customer choice, fewer device SKUs, higher volume and velocity through IoT supply chains and lower deployment cost,” says Arm.

The SDO “zero-touch onboarding service” depends on Intel Enhanced Privacy ID (EPID) data embedded in chips to validate and provision IoT devices automatically. SDO automatically discovers and provisions compliant devices during installation. This “late binding” approach reduces provisioning times from 20 minutes to an hour down to a few minutes, says Intel.

Unlike PKI based authentication methods, “SDO does not insert Intel into the authentication path.” Instead, it brokers a rendezvous URL to the Intel SDO service where Intel EPID opens a private authentication channel between the device and the customer’s IoT platform.

The Pelion IoT Platform offers its own scheme for provisioning and configuration of devices using cryptographic identities built into Cortex-M MCUs running Mbed. With the new Mbed Linux, Pelion will also be able to accept devices that run on Cortex-A chips with TrustZone security.

Pelion combines Arm’s Mbed Cloud connected Mbed IoT Device Management Platform with technologies it acquired via two 2018 acquisitions. The new Treasure Data unit supplies data management services to Pelion. Meanwhile, Stream Technologies provides Pelion managed gateway services for wireless technologies including cellular, LoRa, and satellite communications.

The partnership with myDevices extends Pelion support to devices that run myDevices’ new IoT in a Box turnkey IoT software for LoRa gateways and nodes. myDevices, which is known for its Linux- and Arduino-friendly Cayenne drag-and-drop IoT development and management platform, launched IoT in a Box to enable easy setup of a LoRa gateway and LoRa sensor nodes. Different IoT in a Box versions target specific applications ranging from home and building management to storage lockers and refrigeration systems. Developers can try out Pelion services together with IoT in a Box via a new $199 IoT Starter Kit.

The Arduino partnership is a bit less clear.  It appears to extend Arm’s Pelion Connectivity Management stack, based on the Stream Technologies acquisition, to Arduino devices. The partnership gives users the option of selecting “competitive global data plans” for cellular service, says Arm.

More details on this and the other Pelion announcements should emerge at Arm TechCon in San Jose, California and IoT Solution World Congress in Barcelona, both of which run Oct 16-18. Intel also offers a video overview of the Pelion/SDO mashup.

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.


Run and Scale a Distributed Crossword Puzzle App with CI/CD on Kubernetes (Part 3)

In Part 2 of our series, we deployed a Jenkins pod into our Kubernetes cluster, and used Jenkins to set up a CI/CD pipeline that automated building and deploying our containerized Hello-Kenzan application in Kubernetes.

In Part 3, we are going to set aside the Hello-Kenzan application and get to the main event: running our Kr8sswordz Puzzle application. We will showcase the built-in UI functionality to scale backend service pods up and down using the Kubernetes API, and also simulate a load test. We will also touch on showing caching in etcd and persistence in MongoDB.

Before we start the install, it’s helpful to take a look at the pods we’ll run as part of the Kr8sswordz Puzzle app:

  • kr8sswordz – A React container with our Node.js frontend UI.

  • puzzle – The primary backend service that handles submitting and getting answers to the crossword puzzle via persistence in MongoDB and caching in etcd.

  • mongo – A MongoDB container for persisting crossword answers.

  • etcd – An etcd cluster for caching crossword answers (this is separate from the etcd cluster used by the K8s Control Plane).

  • monitor-scale – A backend service that handles functionality for scaling the puzzle service up and down. This service also interacts with the UI by broadcasting websockets messages.

We will go into the main service endpoints and architecture in more detail after running the application. For now, let’s get going!

Read all the articles in the series:


This tutorial only runs locally in Minikube and will not work on the cloud. You’ll need a computer running an up-to-date version of Linux or macOS. Optimally, it should have 16 GB of RAM. Minimally, it should have 8 GB of RAM. For best performance, reboot your computer and keep the number of running apps to a minimum.

Running the Kr8sswordz Puzzle App

First make sure you’ve run through the steps in Part 1 and Part 2, in which we set up our image repository and Jenkins pods—you will need these to proceed with Part 3 (to do so quickly, you can run the part1 and part2 automated scripts detailed below). If you previously stopped Minikube, you’ll need to start it up again. Enter the following terminal command, and wait for the cluster to start:

minikube start

You can check the cluster status and view all the pods that are running.

kubectl cluster-info
kubectl get pods --all-namespaces

Make sure the registry and jenkins pods are up and running.

So far we have been creating deployments directly using K8s manifests, and have not yet used Helm. Helm is a package manager that deploys a Chart (or package) onto a K8s cluster with all the resources and dependencies needed for the application. Underneath, the chart generates Kubernetes deployment manifests for the application using templates that replace environment configuration values. Charts are stored in a repository and versioned with releases so that cluster state can be maintained.
Helm is very powerful because it allows you to templatize, version, reuse, and share the deployments you create for Kubernetes. See https://hub.kubeapps.com/ for a look at some of the open source charts available. We will be using Helm to install an etcd operator directly onto our cluster using a pre-built chart.

1. Initialize Helm. This will install Tiller (Helm’s server) into our Kubernetes cluster.

helm init --wait --debug; kubectl rollout status deploy/tiller-deploy -n kube-system

2. We will deploy an etcd operator onto the cluster using a Helm Chart.  

helm install stable/etcd-operator --version 0.8.0 --name etcd-operator --debug --wait

An operator is a custom controller for managing complex or stateful applications. As a separate watcher, it monitors the state of the application, and acts to align the application with a given specification as events occur. In the case of etcd, as nodes terminate, the operator will bring up replacement nodes using snapshot data.

3. Deploy the etcd cluster and K8s Services for accessing the cluster.

kubectl create -f manifests/etcd-cluster.yaml
kubectl create -f manifests/etcd-service.yaml

You can see these new pods by entering kubectl get pods in a separate terminal window. The cluster runs as three pod instances for redundancy.
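The etcd cluster created by manifests/etcd-cluster.yaml is defined as a custom resource that the operator watches. A minimal sketch of what such a manifest looks like (the cluster name and etcd version here are illustrative assumptions; the file in the repo is authoritative):

```yaml
# Sketch of an EtcdCluster custom resource handled by the etcd operator.
# The name and version below are assumptions; see manifests/etcd-cluster.yaml.
apiVersion: "etcd.database.coreos.com/v1beta2"
kind: "EtcdCluster"
metadata:
  name: "example-etcd-cluster"
spec:
  size: 3          # matches the three pod instances mentioned above
  version: "3.2.13"
```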

4. The crossword application is a multi-tier application whose services depend on each other. We will create three K8s Services so that the applications can communicate with one another.

kubectl apply -f manifests/all-services.yaml

5. Now we’re going to walk through an initial build of the monitor-scale application.

docker build -t 127.0.0.1:30400/monitor-scale:`git rev-parse --short HEAD` \
 -f applications/monitor-scale/Dockerfile applications/monitor-scale

To simulate a real-life scenario, we are leveraging the Git commit ID to tag all our service images, as shown in this command (git rev-parse --short HEAD).
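As a self-contained illustration of that tagging scheme, the snippet below creates a throwaway Git repository and derives an image tag from its head commit, the same way the docker build command above does:

```shell
# Demonstrates deriving an image tag from the current commit's short hash.
# The throwaway repo exists only so this example runs anywhere.
set -e
dir=$(mktemp -d)
cd "$dir"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "demo commit"
TAG=$(git rev-parse --short HEAD)
echo "127.0.0.1:30400/monitor-scale:${TAG}"
```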

6. Once again we’ll need to set up the Socat Registry proxy container to push the monitor-scale image to our registry, so let’s build it. Feel free to skip this step in case the socat-registry image already exists from Part 2 (to check, run docker images).

docker build -t socat-registry -f applications/socat/Dockerfile \
 applications/socat

7. Run the proxy container from the newly created image.

docker stop socat-registry; docker rm socat-registry; docker run -d \
 -e "REG_IP=`minikube ip`" -e "REG_PORT=30400" \
 --name socat-registry -p 30400:5000 socat-registry

This step will fail if local port 30400 is already in use by another process. You can check whether any process is currently using this port by running:
lsof -i :30400

8. Push the monitor-scale image to the registry.

docker push 127.0.0.1:30400/monitor-scale:`git rev-parse --short HEAD`

9. The proxy’s work is done, so go ahead and stop it.

docker stop socat-registry

10. Open the registry UI and verify that the monitor-scale image is in our local registry.

minikube service registry-ui

11. Monitor-scale lets us scale our puzzle app up and down through the Kr8sswordz UI, so we'll need to do some RBAC work to grant monitor-scale the proper rights.

kubectl apply -f manifests/monitor-scale-serviceaccount.yaml

In the manifests/monitor-scale-serviceaccount.yaml you’ll find the specs for the following K8s Objects.

Role: The custom “puzzle-scaler” role allows “Update” and “Get” actions to be taken over the Deployments and Deployments/scale kinds of resources, specifically to the resource named “puzzle”. This is not a ClusterRole kind of object, which means it will only work on a specific namespace (in our case “default”) as opposed to being cluster-wide.

ServiceAccount: A “monitor-scale” ServiceAccount is assigned to the monitor-scale deployment.

RoleBinding: A “monitor-scale-puzzle-scaler” RoleBinding binds together the aforementioned objects.
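Put together, the three objects described above look roughly like the sketch below (apiVersions and exact field values may differ from the manifest in the repo, which is authoritative):

```yaml
# Sketch of the RBAC objects in manifests/monitor-scale-serviceaccount.yaml.
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: puzzle-scaler
  namespace: default
rules:
- apiGroups: ["apps", "extensions"]
  resources: ["deployments", "deployments/scale"]
  resourceNames: ["puzzle"]
  verbs: ["get", "update"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: monitor-scale
  namespace: default
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: monitor-scale-puzzle-scaler
  namespace: default
subjects:
- kind: ServiceAccount
  name: monitor-scale
  namespace: default
roleRef:
  kind: Role
  name: puzzle-scaler
  apiGroup: rbac.authorization.k8s.io
```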

12. Create the monitor-scale deployment and the Ingress defining the hostname by which this service will be accessible to the other services.

sed 's#127.0.0.1:30400/monitor-scale:$BUILD_TAG#127.0.0.1:30400/monitor-scale:'`git rev-parse --short HEAD`'#' \
 applications/monitor-scale/k8s/deployment.yaml | kubectl apply -f -

The sed command replaces the $BUILD_TAG substring in the manifest file with the actual build tag value used in the previous docker build command. We'll see later how a Jenkins plugin can do this automatically.
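To see the substitution in isolation, here is the same sed expression applied to a single sample manifest line, with abc1234 standing in for the real commit hash:

```shell
# The same substitution sed performs above, on a single sample line.
# "abc1234" stands in for the output of `git rev-parse --short HEAD`.
LINE='image: 127.0.0.1:30400/monitor-scale:$BUILD_TAG'
TAG=abc1234
echo "$LINE" | sed 's#127.0.0.1:30400/monitor-scale:$BUILD_TAG#127.0.0.1:30400/monitor-scale:'"$TAG"'#'
# prints: image: 127.0.0.1:30400/monitor-scale:abc1234
```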

13. Wait for the monitor-scale deployment to finish.

kubectl rollout status deployment/monitor-scale

14. View pods to see the monitor-scale pod running.

kubectl get pods

15. View services to see the monitor-scale service.

kubectl get services

16. View ingress rules to see the monitor-scale ingress rule.

kubectl get ingress

17. View deployments to see the monitor-scale deployment.

kubectl get deployments

18. We will run a script to bootstrap the puzzle and mongo services, creating Docker images and storing them in the local registry. The puzzle.sh script runs through the same build, proxy, push, and deploy steps we just ran through manually for both services.

scripts/puzzle.sh

19. Check to see if the puzzle and mongo services have been deployed.

kubectl rollout status deployment/puzzle
kubectl rollout status deployment/mongo

20. Bootstrap the kr8sswordz frontend web application. This script follows the same build, proxy, push, and deploy steps that the other services followed.

scripts/kr8sswordz-pages.sh

21. Check to see if the frontend has been deployed.

kubectl rollout status deployment/kr8sswordz

22. Check to see that all the pods are running.

kubectl get pods

23. Start the web application in your default browser.

minikube service kr8sswordz

Giving the Kr8sswordz Puzzle a Spin

Now that it’s up and running, let’s give the Kr8sswordz puzzle a try. We’ll also spin up several backend service instances and hammer it with a load test to see how Kubernetes automatically balances the load.   

1. Try filling out some of the answers to the puzzle. You’ll see that any wrong answers are automatically shown in red as letters are filled in.

2. Click Submit. When you click Submit, your current answers for the puzzle are stored in MongoDB.

3. Try filling out the puzzle a bit more, then click Reload once. This performs a GET that retrieves the last submitted puzzle answers from MongoDB.

Did you notice the green arrow on the right as you clicked Reload? The arrow indicates that the application is fetching the data from MongoDB. The GET also caches those same answers in etcd with a 30 sec TTL (time to live). If you immediately press Reload again, it will retrieve answers from etcd until the TTL expires, at which point answers are again retrieved from MongoDB and re-cached. Give it a try, and watch the arrows.

4. Scale the number of instances of the Kr8sswordz puzzle service up to 16 by dragging the upper slider all the way to the right, then click Scale. Notice the number of puzzle services increase.


If you did not allocate 8 GB of memory to Minikube, we suggest not exceeding 6 scaled instances using the slider.


In a terminal, run kubectl get pods to see the new replicas.

5. Now run a load test. Drag the lower slider to the right to 250 requests, and click Load Test. Notice how it very quickly hits several of the puzzle services (the ones that flash white) to manage the numerous requests. Kubernetes is automatically balancing the load across all available pod instances. Thanks, Kubernetes!

​6. Drag the middle slider back down to 1 and click Scale. In a terminal, run kubectl get pods to see the puzzle services terminating.


7. Now let’s try deleting the puzzle pod to see Kubernetes restart it, demonstrating its ability to automatically heal downed pods.

a. In a terminal enter kubectl get pods to see all pods. Copy the puzzle pod name (similar to the one shown in the picture above).

 b. Enter the following command to delete the remaining puzzle pod. 
kubectl delete pod [puzzle podname]

c. Enter kubectl get pods to see the old pod terminating and the new pod starting. You should see the new puzzle pod appear in the Kr8sswordz Puzzle app.

What’s Happening on the Backend

We’ve seen a bit of Kubernetes magic, showing how pods can be scaled for load, how Kubernetes automatically handles load balancing of requests, as well as how Pods are self-healed when they go down. Let’s take a closer look at what’s happening on the backend of the Kr8sswordz Puzzle app to make this functionality apparent.  

[Diagram: Kr8sswordz Puzzle app architecture]

1. The puzzle service runs as one or more pod instances. The puzzle service uses a LoopBack data source to store answers in MongoDB. When the Reload button is pressed, answers are retrieved with a GET request from MongoDB, and the etcd client is used to cache answers with a 30-second TTL.

2. The monitor-scale pod handles scaling and load test functionality for the app. When the Scale button is pressed, the monitor-scale pod uses the Kubernetes API to scale the number of puzzle pods up and down.

3. When the Load Test button is pressed, the monitor-scale pod handles the loadtest by sending several GET requests to the service pods based on the count sent from the front end. The puzzle service sends Hits to monitor-scale whenever it receives a request. Monitor-scale then uses websockets to broadcast to the UI to have pod instances light up green.

4. When a puzzle pod instance goes up or down, the puzzle pod sends this information to the monitor-scale pod. The up and down states are configured as lifecycle hooks in the puzzle pod k8s deployment, which curls the same endpoint on monitor-scale (see kubernetes-ci-cd/applications/crossword/k8s/deployment.yml to view the hooks). Monitor-scale persists the list of available puzzle pods in etcd with set, delete, and get pod requests.
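As a rough illustration, lifecycle hooks of that kind look like the following in a pod spec (the monitor-scale hostname, port, and endpoint paths here are assumptions; the real hooks are in the deployment.yml referenced above):

```yaml
# Illustrative lifecycle hooks; the endpoint paths and port are assumptions.
lifecycle:
  postStart:
    exec:
      command: ["curl", "-s", "http://monitor-scale:3001/up"]
  preStop:
    exec:
      command: ["curl", "-s", "http://monitor-scale:3001/down"]
```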


We do not recommend stopping Minikube (minikube stop) before moving on to do the tutorial in Part 4. Upon restart, it may create some issues with the etcd cluster.

Automated Scripts

If you need to walk through the steps we did again (or do so quickly), we’ve provided npm scripts that will automate running the same commands in a terminal.  

1. To use the automated scripts, you’ll need to install NodeJS and npm.

On Linux, follow the NodeJS installation steps for your distribution. To quickly install NodeJS and npm on Ubuntu 16.04 or higher, use the following terminal commands.

 a. curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash -
 b. sudo apt-get install -y nodejs

On macOS, download the NodeJS installer, and then double-click the .pkg file to install NodeJS and npm.

2. Change directories to the cloned repository and install the interactive tutorial script:

 a. cd ~/kubernetes-ci-cd
 b. npm install

3. Start the script:

npm run part1 (or part2, part3, part4 of the blog series)

4. Press Enter to proceed running each command.

Up Next

Now that we’ve run our Kr8sswordz Puzzle app, the next step is to set up CI/CD for our app. Similar to what we did for the Hello-Kenzan app, Part 4 will cover creating a Jenkins pipeline for the Kr8sswordz Puzzle app so that it builds at the touch of a button. We will also modify a bit of code to enhance the application and enable our Submit button to show white hits on the puzzle service instances in the UI.  

Curious to learn more about Kubernetes? Enroll in Introduction to Kubernetes, a FREE training course from The Linux Foundation, hosted on edX.org.

This article was revised and updated by David Zuluaga, a front end developer at Kenzan. He was born and raised in Colombia, where he earned his BE in Systems Engineering. After moving to the United States, he received his master’s degree in computer science at Maharishi University of Management. David has been working at Kenzan for four years, moving dynamically through a wide range of areas of technology, from front-end and back-end development to platform and cloud computing. David has also helped design and deliver training sessions on microservices for multiple client teams.


Anaxi App Shows the State of Your Software Project

If you work within the world of software development, you’ll find yourself bouncing back and forth between a few tools. You’ll most likely use GitHub to host your code, but find yourself needing some task/priority software. This could be GitHub itself or a dedicated tool like Jira. You may also find yourself collaborating across several tools, like Slack, and several projects. It’s already hard to keep track of the progress on one project; working across several becomes a struggle. This problem gets worse as you move up the ranks of management, where it becomes increasingly difficult to assimilate and rationalize all of this information. Anaxi was created to combat this by giving you all the information on the state and progress of your projects in a single interface.

Why measure dev progress?

According to LinkedIn data, there are currently over 3,000 software engineers employed on average at Fortune 4,000 companies. So how do those companies measure the progress of their software projects and the performance of their teams? After all, you can’t manage what you don’t measure, so the best of them manually compute portions of this data on a weekly basis, a tedious and time-consuming task that directly impacts the bottom line. Anaxi cuts out this task and can significantly improve software development efficiency within organizations. Teams will know the impact of any process change, which tasks they should focus on, and whether to anticipate any bottlenecks. This also helps reduce revenue lost to shipping critical issues: according to Tricentis, software failures and poor bug prioritization caused $1.7T in lost revenue in 2017 alone.

What is Anaxi?

Anaxi currently offers a free iPhone app that provides the full picture of your GitHub projects to help you understand and manage them better. Anaxi has a lot of features based on what it calls reports. Reports are lists of issues or pull requests that you can filter as you see fit using labels, state, milestone, authors, assignees, and more. This allows you to monitor critical bugs or see the progress of your team’s work. For each project, you can select the people on your team so you can easily see what each person is doing and help where help is needed most. It can also be used to keep track of your own work and priorities, and because it’s an iPhone app, it grants quick access to issues and pull requests that have been assigned to you. There’s also a customizable color indicator for report deadlines that will help you prioritize what to work on.

How to set up the app

First, you’ll need an iPhone and access to the app store. Go into the App Store and download it. Once you open the app, the landing page will appear.


To get started, press the Connect GitHub button at the bottom of the screen and enter your GitHub credentials. Next, you’ll be asked to select projects that you want to monitor. Anaxi will automatically select some projects; an edit button at the bottom lets you add or remove projects from this list. If you forget a project, or realize that you don’t want to monitor a project anymore, you can change it after the initial setup is over.


When you have your projects selected, hit the Next button. It’s time to select your team. Anaxi will start by automatically selecting the people you interact with most on the projects you selected. Just like the previous step, you can edit this list by pressing the button at the bottom, and you can add or remove team members later.


Next, you will be prompted to help set up the reports for your projects. Anaxi will also start by automatically choosing labels that are most used, but you can customize which labels you want to monitor by clicking the button at the bottom of each project. Later on, you can create more tailored reports by adding issue or pull request reports when inside of a project folder.


Now, Anaxi is set up and a view of reports appears. Mine are all green because I don’t have any activity on my selected projects. From this menu, you can see which projects have pull requests at the top. Clicking on these will pull up open tickets on these projects. If you scroll down, you can see all the pull requests and issues that are assigned to you and your team. Then you can see individual views near the bottom for all of your projects. The order of these can be changed at any time by hitting the edit button in the top right and dragging the folders around.


Let’s choose an open-source project and see what it looks like when more people are working together and there are more issues and pull requests. For this example, let’s use kubernetes/kubernetes. As you can see below, Anaxi created a report for the new project, and added it to the current full report that already existed. Now that there is a more active GitHub project present in my reports, we can see the full extent of Anaxi in action.


To edit any part of the reports, simply click on that section, and then click on the edit button in the top right. Once there, you can change filters and if you scroll to the bottom, you can change the values for when an aspect of a report displays green, yellow or red.

My experience

After using Anaxi for a little while, scrolling through my GitHub projects doesn’t feel like a chore anymore. It’s easy to choose one project and see everything that I want to see. One thing that was slightly bothersome is that every time you click on a project, the app queries the GitHub API again instead of caching the data. This results in some wait time when you switch back and forth between multiple projects in quick succession, but that’s the only downside I’ve seen so far. Changing the colors or filters on aspects of reports is surprisingly easy and intuitive. Another thing I like is that you can create a due date for a certain issue or pull request, which is great when you want to build dates into your projects. I feel like this would really help me when I want to prioritize certain things: instead of creating Google Calendar notifications, I can set the date on the project directly.

So far, I haven’t worked on any project that’s been bigger than 4 people, so it hasn’t helped me that much… yet. As I move forward in my career and work on projects with more and more people and deadlines, I feel like Anaxi will become a go-to product for me. The ability to see everything so easily and the customizability really draws me in and makes me love the product and see myself using it in the future.

What’s coming next

Anaxi currently offers an iPhone app, but don’t fret if you are a web user. The plan for Anaxi is to work on Jira integration next, to help bridge the gap between managing projects and managing code. After that is completed, they plan to create a web app, followed by Android, and finally native desktop apps.

This article was produced in partnership with Holberton School.


Spinnaker: The Kubernetes of Continuous Delivery

Comparing Spinnaker and Kubernetes in this way is somewhat unfair to both projects. The scale, scope, and magnitude of these technologies are different, but parallels can still be drawn.

Just like Kubernetes, Spinnaker is a technology that is battle tested, with Netflix using Spinnaker internally for continuous delivery. Like Kubernetes, Spinnaker is backed by some of the biggest names in the industry, which helps breed confidence among users. Most importantly, though, both projects are open source, designed to build a diverse and inclusive ecosystem around them.

Frankenstein’s Monster

Continuous Delivery (CD) is a solved problem, but it has been a bit of a Frankenstein’s monster, with companies trying to build their own creations by stitching parts together, along with Jenkins. “We tried to build a lot of custom continuous delivery tooling, but they all fell short of our expectation,” said Brandon Leach, Sr. Manager of Platform Engineering at Lookout.

“We were using Jenkins along with tools like Rundeck, but both had their own set of problems. While Rundeck didn’t have a first-class deployment tool, Jenkins was becoming a nightmare and we ended up moving to Gitlabs,” said Gard Voigt Rimestad of Schibsted, a major Norwegian media group.

Netflix created a more elegant way for continuous delivery called Asgard, open sourced in 2012, which was designed to run Netflix’s own workload on AWS. Many companies were using Asgard, including Schibsted, and it was gaining momentum. But it was tied closely to the kind of workload Netflix was running with AWS. Bigger companies who liked Asgard forked it to run their own workloads. IBM forked it twice to make it work with Docker containers.

IBM’s forking of Asgard was an eye-opening experience for Netflix. At that point, Netflix had started looking into containerized workloads, and IBM showed how it could be done with Asgard.

Google was also planning to fork Asgard to make it work on Google Compute Engine. By that time, Netflix had started working on the successor to Asgard, called Spinnaker. “Before Google could fork the project, we managed to convince Google to collaborate on Spinnaker instead of forking Asgard. Pivotal also joined in,” said Andy Glover, shepherd of Spinnaker and Director of Delivery Engineering at Netflix. The rest is history.

Continuous popularity

There are many factors at play that contribute to the popularity and adoption of Spinnaker. First and foremost, it’s a proven technology that’s been used at Netflix. It instills confidence in users. “Spinnaker is the way Netflix deploys its services. They do things at the scale we don’t do in AWS. That was compelling,” said Leach.

The second factor is the powerful community around Spinnaker that includes heavyweights like Microsoft, Google, and Netflix. “These companies have engineers on their staff that are dedicated to working on Spinnaker,” added Leach.

Governance

In October 2018, the Spinnaker community organized its first official Spinnaker Summit in Seattle. During the Summit, the community announced the governance structure for the project.

“Initially, there will be a steering committee and a technical oversight committee. At the moment Google and Netflix are steering the governance body, but we would like to see more diversity,” said Steven Kim, Google’s Software Engineering Manager who leads the Google team that works on Spinnaker.  The broader community is organized around a set of special interest groups (SIGs) that enable users to focus on particular areas of interest.

“There are users who have deployed Spinnaker in their environment, but they are often intimidated by two big players like Google and Netflix. The governance structure will enable everyone to be able to have a voice in the community,” said Kim.

At the moment, the project is being run by Google and Netflix, but eventually, it may be donated to an organization that has a better infrastructure for managing such projects. “It could be the OpenStack Foundation, CNCF, or the Apache Foundation,” said Boris Renski, Co-founder and CMO of Mirantis.

I met with more than a dozen users at the Summit, and they were extremely bullish about Spinnaker. Companies are already using it in a way even Netflix didn’t envision. Since continuous delivery is at the heart of multi-cloud strategy, Spinnaker is slowly but steadily starting to beat at the heart of many companies.

Spinnaker might not become as big as Kubernetes, due to its scope, but it’s certainly becoming as important. Spinnaker has made some bold promises, and I am sure it will continue to deliver on them.


Set Up a CI/CD Pipeline with a Jenkins Pod in Kubernetes (Part 2)

In Part 1 of our series, we got our local Kubernetes cluster up and running with Docker, Minikube, and kubectl. We set up an image repository, and tried building, pushing, and deploying a container image with code changes we made to the Hello-Kenzan app. It’s now time to automate this process.

In Part 2, we’ll set up continuous delivery for our application by running Jenkins in a pod in Kubernetes. We’ll create a pipeline using a Jenkins 2.0 Pipeline script that automates building our Hello-Kenzan image, pushing it to the registry, and deploying it in Kubernetes. That’s right: we are going to deploy pods from a registry pod using a Jenkins pod. While this may sound like a bit of deployment alchemy, once the infrastructure and application components are all running on Kubernetes, it makes the management of these pieces easy since they’re all under one ecosystem.

With Part 2, we’re laying the last bit of infrastructure we need so that we can run our Kr8sswordz Puzzle in Part 3.


This tutorial only runs locally in Minikube and will not work on the cloud. You’ll need a computer running an up-to-date version of Linux or macOS. Optimally, it should have 16 GB of RAM. Minimally, it should have 8 GB of RAM. For best performance, reboot your computer and keep the number of running apps to a minimum.

Creating and Building a Pipeline in Jenkins

Before you begin, you’ll want to make sure you’ve run through the steps in Part 1, in which we set up our image repository running in a pod (to do so quickly, you can run the npm part1 automated script detailed below).

If you previously stopped Minikube, you’ll need to start it up again. Enter the following terminal command, and wait for the cluster to start:

minikube start

You can check the cluster status and view all the pods that are running.

kubectl cluster-info
kubectl get pods --all-namespaces

Make sure that the registry pod has a Status of Running.

We are ready to build out our Jenkins infrastructure.

Remember, you don’t actually have to type the commands below—just press Enter at each step and the script will enter the command for you!

1. First, let’s build the Jenkins image we’ll use in our Kubernetes cluster.

docker build -t 127.0.0.1:30400/jenkins:latest \
 -f applications/jenkins/Dockerfile applications/jenkins

2. Once again we’ll need to set up the Socat Registry proxy container to push images, so let’s build it. Feel free to skip this step in case the socat-registry image already exists from Part 1 (to check, run docker images).

docker build -t socat-registry -f applications/socat/Dockerfile applications/socat

3. Run the proxy container from the image.

docker stop socat-registry; docker rm socat-registry; docker run -d \
 -e "REG_IP=`minikube ip`" -e "REG_PORT=30400" \
 --name socat-registry -p 30400:5000 socat-registry

This step will fail if local port 30400 is already in use by another process. You can check whether any process is currently using this port by running:
lsof -i :30400

4. With our proxy container up and running, we can now push our Jenkins image to the local repository.

docker push 127.0.0.1:30400/jenkins:latest

You can see the newly pushed Jenkins image in the registry UI using the following command.

minikube service registry-ui

5. The proxy’s work is done, so you can go ahead and stop it.

docker stop socat-registry

6. Deploy Jenkins, which we’ll use to create our automated CI/CD pipeline. It will take the pod a minute or two to roll out.

kubectl apply -f manifests/jenkins.yaml; kubectl rollout status deployment/jenkins

Inspect all the pods that are running. You’ll see a pod for Jenkins now.

kubectl get pods

Jenkins as a CD tool needs special rights in order to interact with the Kubernetes cluster, so we’ve set up RBAC (Role-Based Access Control) authorization for it inside the jenkins.yaml deployment manifest. RBAC consists of a Role, a ServiceAccount, and a Binding object that binds the two together. Here’s how we configured Jenkins with these resources:

Role: For simplicity, we leveraged the pre-existing ClusterRole “cluster-admin”, which by default has unlimited access to the cluster. (In a real-life scenario you might want to narrow down Jenkins’ access rights by creating a new Role with the least-privileged PolicyRules.)

ServiceAccount: We created a new ServiceAccount named “jenkins”. The property “automountServiceAccountToken” has been set to true; this automatically mounts the authentication resources needed for a kubeconfig context to be set up on the pod (i.e., cluster info, a user represented by a token, and a namespace).

RoleBinding: We created a ClusterRoleBinding that binds together the “Jenkins” serviceAccount to the “cluster-admin” ClusterRole.

Lastly, we tell our Jenkins deployment to run as the Jenkins ServiceAccount.
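In manifest form, the ServiceAccount and ClusterRoleBinding described above amount to something like this sketch (the binding name is an assumption; jenkins.yaml in the repo is authoritative):

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
automountServiceAccountToken: true
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-cluster-admin   # binding name is illustrative
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: jenkins
  namespace: default
```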


Notice our Jenkins deployment has an initContainer. This is a container that will run to completion before the main container is deployed on our pod. The job of this init container is to create a kubeconfig file based on the provided context and to share it with the main Jenkins container through an “emptyDir” volume.
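Structurally, that part of the deployment looks something like the sketch below (container names, the image, and the kubeconfig-generating command are assumptions; only the initContainers/emptyDir pattern is the point):

```yaml
spec:
  serviceAccountName: jenkins
  initContainers:
  - name: kubeconfig-init            # illustrative name
    image: 127.0.0.1:30400/jenkins:latest
    # In the real manifest this command builds a kubeconfig from the mounted
    # ServiceAccount token; the script name here is hypothetical.
    command: ["sh", "-c", "/usr/local/bin/create-kubeconfig.sh"]
    volumeMounts:
    - name: kube-config
      mountPath: /var/jenkins_home/.kube
  containers:
  - name: jenkins
    image: 127.0.0.1:30400/jenkins:latest
    volumeMounts:
    - name: kube-config
      mountPath: /var/jenkins_home/.kube
  volumes:
  - name: kube-config
    emptyDir: {}
```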

7. Open the Jenkins UI in a web browser.

minikube service jenkins

8. Display the Jenkins admin password with the following command, and right-click to copy it.

kubectl exec -it `kubectl get pods --selector=app=jenkins \
 --output=jsonpath={.items..metadata.name}` \
 cat /var/jenkins_home/secrets/initialAdminPassword

9. Switch back to the Jenkins UI. Paste the Jenkins admin password in the box and click Continue. Click Install suggested plugins. Plugins have actually been pre-downloaded during the Jenkins image build, so this step should finish fairly quickly.


One of the plugins being installed is Kubernetes Continuous Deploy, which allows Jenkins to directly interact with the Kubernetes cluster rather than through kubectl commands. This plugin was pre-downloaded with the Jenkins image build.  

10. Create an admin user and credentials, and click Save and Continue. (Make sure to remember these credentials as you will need them for repeated logins.)


11. On the Instance Configuration page, click Save and Finish. On the next page, click Restart (if it appears to hang for some time on restarting, you may have to refresh the browser window). Login to Jenkins.

12. Before we create a pipeline, we first need to provision the Kubernetes Continuous Deploy plugin with a kubeconfig file that will allow access to our Kubernetes cluster. In Jenkins on the left, click Credentials, select the Jenkins store, then Global credentials (unrestricted), and click Add Credentials in the left menu.

13. The following values must be entered precisely as indicated:

  • Kind: Kubernetes configuration (kubeconfig)

  • ID: kenzan_kubeconfig

  • Kubeconfig: From a file on the Jenkins master

  • File: /var/jenkins_home/.kube/config

Finally, click OK.

14. We now want to create a new pipeline for use with our Hello-Kenzan app. Back on Jenkins Home, on the left, click New Item.

Enter the item name as Hello-Kenzan Pipeline, select Pipeline, and click OK.

15. Under the Pipeline section at the bottom, change the Definition to be Pipeline script from SCM.

16. Change the SCM to Git. Change the Repository URL to be the URL of your forked Git repository, such as https://github.com/[GIT USERNAME]/kubernetes-ci-cd.

Note that for the Script Path, we are using a Jenkinsfile located in the root of our project in our GitHub repo. This defines the build, push, and deploy steps for our hello-kenzan application.
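For reference, a declarative Jenkinsfile of this shape would look roughly as follows (the registry address, image name, manifest path, and stage bodies are illustrative assumptions; kubernetesDeploy is the step contributed by the Kubernetes Continuous Deploy plugin, referencing the credential ID configured earlier):

```groovy
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                // Build the application image from the app's Dockerfile
                sh 'docker build -t 127.0.0.1:30400/hello-kenzan:${BUILD_NUMBER} applications/hello-kenzan'
            }
        }
        stage('Push') {
            steps {
                // Push the image into the cluster's local registry
                sh 'docker push 127.0.0.1:30400/hello-kenzan:${BUILD_NUMBER}'
            }
        }
        stage('Deploy') {
            steps {
                // Apply the Kubernetes manifest using the kubeconfig credential
                kubernetesDeploy(
                    kubeconfigId: 'kenzan_kubeconfig',
                    configs: 'applications/hello-kenzan/k8s/manifest.yaml'
                )
            }
        }
    }
}
```

Keeping the pipeline definition in the repo alongside the code means every fork gets the same build, push, and deploy behavior automatically.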

Click Save. On the left, click Build Now to run the new pipeline. You should see it run through the build, push, and deploy steps in a few seconds.

17. After all pipeline stages are colored green as complete, view the Hello-Kenzan application.

minikube service hello-kenzan

You might notice that you’re not seeing the uncommitted change you previously made to index.html in Part 1. That’s because Jenkins wasn’t using your local code. Instead, Jenkins pulled the code from your forked repo on GitHub, used that code to build the image, push it, and then deploy it.

Pushing Code Changes Through the Pipeline

Now let’s see some Continuous Integration in action! Try changing the index.html in our Hello-Kenzan app, then building again to verify that the Jenkins build process works.

a. Open applications/hello-kenzan/index.html in a text editor.

nano applications/hello-kenzan/index.html

b. Add the following html at the end of the file (or any other html you like). (Tip: You can right-click in nano and choose Paste.)

<p style="font-family:sans-serif">For more from Kenzan, check out our <a href="http://kenzan.io">website</a>.</p>

c. Press Ctrl+X to close the file, type Y to confirm saving the changes, and press Enter to accept the filename.

d. Commit the changed file to your Git repo (you may need to enter your GitHub credentials):

git commit -am "Added message to index.html"
git push

In the Jenkins UI, click Build Now to run the build again.

18. View the updated Hello-Kenzan application. You should see the message you added to index.html. (If you don’t, hold down Shift and refresh your browser to force it to reload.)

minikube service hello-kenzan

And that’s it! You’ve successfully used your pipeline to automatically pull the latest code from your Git repository, build and push a container image to your cluster, and then deploy it in a pod. And you did it all with one click—that’s the power of a CI/CD pipeline.

If you’re done working in Minikube for now, you can go ahead and stop the cluster by entering the following command:

minikube stop

Automated Scripts

If you need to walk through the steps we did again (or do so more quickly), we’ve provided npm scripts that automate running the same commands in a terminal.

1. To use the automated scripts, you’ll need to install NodeJS and npm.

On Linux, follow the NodeJS installation steps for your distribution. To quickly install NodeJS and npm on Ubuntu 16.04 or higher, use the following terminal commands.

a. curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash -
b. sudo apt-get install -y nodejs

2. Change directories to the cloned repository and install the interactive tutorial script:

a. cd ~/kubernetes-ci-cd
b. npm install

3. Start the script:

npm run part1 (or part2, part3, part4 of the blog series)

4. Press Enter to proceed with running each command.
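Under the hood, npm scripts like these are just entries in the repository's package.json; a hypothetical sketch of the wiring (the runner file name tutorial.js is an assumption for illustration, not the repo's actual file):

```json
{
  "name": "kubernetes-ci-cd",
  "scripts": {
    "part1": "node tutorial.js part1",
    "part2": "node tutorial.js part2",
    "part3": "node tutorial.js part3",
    "part4": "node tutorial.js part4"
  }
}
```

Running npm run part1 simply invokes the corresponding command, so the interactive walkthrough needs no global installation beyond NodeJS and npm.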

Up Next

In Parts 3 and 4, we will deploy our Kr8sswordz Puzzle app through a Jenkins CI/CD pipeline. We will demonstrate its use of caching with etcd, as well as scaling the app up with multiple puzzle service instances so that we can try running a load test. All of this will be shown in the UI of the app itself so that we can visualize these pieces in action.

Curious to learn more about Kubernetes? Enroll in Introduction to Kubernetes, a FREE training course from The Linux Foundation, hosted on edX.org.

This article was revised and updated by David Zuluaga, a front end developer at Kenzan. He was born and raised in Colombia, where he earned his BE in Systems Engineering. After moving to the United States, he received his master’s degree in computer science at Maharishi University of Management. David has been working at Kenzan for four years, dynamically moving throughout a wide range of areas of technology, from front-end and back-end development to platform and cloud computing. David has also helped design and deliver training sessions on microservices for multiple client teams.

Posted on Leave a comment

Get an Introduction to Open Source, Git, and Linux with New Training Course

Open source software has become the dominant model for how technology infrastructure operates all around the world, impacting organizations of all sizes. Use of open source software leads to better and faster development and wider collaboration, and open source skills are an ever more valuable form of currency in the job market. In this article series, we’ll take a closer look at one of the best new ways to gain open source fluency: the Introduction to Open Source Software Development, Git and Linux training course from The Linux Foundation.

Some development experience, as well as some command-line experience, are ideal prerequisites for taking the course, but are not required. The course presents a comprehensive learning path focused on development, Linux systems, and Git, the revision control system. The $299 course is self-paced and comes with extensive and easily referenced learning materials. Organizations interested in training more than five people through the course can get a quote on possible discounts here.

This story is the first in a four-part article series that highlights the major aspects of the training course. The course begins with a general introduction to working with open source software, and explores project collaboration, licensing and legal issues. On the topic of collaboration, the curriculum emphasizes that project collaboration offers some distinct advantages over other kinds of development models:

  • When progress is shared, not everyone has to solve the same problems and make the same mistakes. Thus, progress can be much faster, and costs can be reduced.

  • Having more eyeballs viewing code and more groups testing it also leads to stronger and more secure code.

  • It is often hard for competitors to get used to the idea of sharing and to grasp that the benefits can be greater than the costs. But experience has shown this to be true again and again.

  • Competitors can compete on user-facing interfaces, for example, instead of internal plumbing that everyone needs, so that end users still see plenty of product differentiation and have varying experiences.

The course’s discussion of licensing is comprehensive and explains clearly how some open source licenses are highly restrictive, while others are permissive. The discussion also delves into how differing project needs and philosophies can dictate how permissive or restrictive a license should be. Permissive licenses do not require that modifications and enhancements be made generally available, as is noted in the course materials. Prominent permissive licensing examples include the BSD and Apache licenses.

Before launching into some of the Linux- and Git-specific curriculum, the course presents other guidance that is important to observe when working with open source projects. The downstream impact of leadership and control decisions is one of these topics. “If the controllers of a project take and do not give back by mentoring and moderating they are limiting what a project can accomplish,” the course materials state. “A good leader listens. Good ideas can originate from many different contributors, even recent entrants into a project. Even though leadership paradigms such as BDFL (Benevolent Dictator for Life) are popular, note the use of the word benevolent.”

Additionally, the course covers the extremely important topic of getting help. This includes how to get help from others and how to access and work with documentation. First, the course considers how to view Linux man pages and then delves into how to use the info utility. Next, it examines how to use the built-in help facilities in many commands. Finally, it offers comprehensive coverage of graphical help interfaces.

Are you unfamiliar with some of these sources of help? The course explains them from the ground up:

  • man is the workhorse of Linux documentation as it has been on all UNIX-like operating systems since their inception. Its name is short for manual.

  • Info is a simple-to-use documentation system, hypertextual in nature, although it does not require a graphical browser. The documentation is built using the Texinfo system, which, as a reader, you need know nothing about.

  • Whatever Linux distribution you are running, there should be a graphical interface to the online documentation. Exactly how you can invoke it from the menus on your taskbar will vary, but with a little bit of searching, you should find it.
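For readers new to these tools, the terminal entry points look like this (only the command-line viewers are shown, since the graphical help interface varies by distribution):

```shell
# Print the start of the ls manual page; outside a terminal,
# man writes plain text instead of opening a pager
man ls 2>/dev/null | head -n 5

# Search manual page names and descriptions for a keyword
# (equivalent to man -k)
apropos partition 2>/dev/null | head -n 5

# Most commands also print their own usage summary
ls --help | head -n 4
```

Pressing q quits the pager when man or info is run interactively.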

With the groundwork laid for working with open source tools and platforms and comprehensive guidance for getting help, the course then delves into hands-on instruction on topics including working with shells, bash and the command line, and Command Details. We will cover the course’s approach to these important topics in coming installments of this series.

Learn more about Introduction to Open Source Development, Git, and Linux (LFD201) and sign up now to start your open source journey.