
Set Up a CI/CD Pipeline with Kubernetes Part 1: Overview

What’s New?

We’ve updated the four parts of this blog series and versioned the code along with it to include the following new technology components.

  • The Kubernetes Continuous Deploy Jenkins plugin has been added to deployments: https://plugins.jenkins.io/kubernetes-cd

  • Kubernetes RBAC and service accounts are now used by applications to interact with the cluster.

  • We now introduce Helm and use it for a deployment (specifically, deploying the etcd-operator in Part 3)

  • All versions of the main tools and technologies have been upgraded and locked

  • Fixed bugs, refactored K8s manifests and refactored applications’ code

  • We are now providing Dockerfile specs for socat registry and Jenkins

  • We’ve improved all instructions in the blog post and included a number of informational text boxes

The software industry is rapidly seeing the value of using containers as a way to ease development, deployment, and environment orchestration for app developers. Large-scale and highly-elastic applications that are built in containers definitely have their benefits, but managing the environment can be daunting. This is where an orchestration tool like Kubernetes really shines.

Kubernetes is a platform-agnostic container orchestration tool created by Google and heavily supported by the open source community as a project of the Cloud Native Computing Foundation. It allows you to spin up a number of container instances and manage them for scaling and fault tolerance. It also handles a wide range of management activities that would otherwise require separate solutions or custom code, including request routing, container discovery, health checking, and rolling updates.

Kenzan is a services company that specializes in building applications at scale. We’ve seen cloud technology evolve over the last decade, designing microservice-based applications around the Netflix OSS stack, and more recently implementing projects using the flexibility of container technology. While each implementation is unique, we’ve found the combination of microservices, Kubernetes, and Continuous Delivery pipelines to be very powerful.

Crossword Puzzles, Kubernetes, and CI/CD

This article is the first in a series of four blog posts. Our goal is to show how to set up a fully-containerized application stack in Kubernetes with a simple CI/CD pipeline to manage the deployments.

We’ll describe the setup and deployment of an application we created especially for this series. It’s called the Kr8sswordz Puzzle, and working with it will help you link together some key Kubernetes and CI/CD concepts. The application will start simple enough, then as we progress we will introduce components that demonstrate a full application stack, as well as a CI/CD pipeline to help manage that stack, all running as containers on Kubernetes. Check out the architecture diagram below to see what you’ll be building.


The completed application will show the power and ease with which Kubernetes manages both apps and infrastructure, creating a sandbox where you can build, deploy, and spin up many instances under load.

Get Kubernetes up and Running

The first step in building our Kr8sswordz Puzzle application is to set up Kubernetes and get comfortable with running containers in a pod. We’ll install several tools explained along the way: Docker, Minikube, and Kubectl.


This tutorial only runs locally in Minikube and will not work on the cloud. You’ll need a computer running an up-to-date version of Linux or macOS. Optimally, it should have 16 GB of RAM. Minimally, it should have 8 GB of RAM. For best performance, reboot your computer and keep the number of running apps to a minimum.

Install Docker

Docker is one of the most widely used container technologies and works directly with Kubernetes.

Install Docker on Linux

To quickly install Docker on Ubuntu 16.04 or higher, open a terminal and enter the following commands (see the Linux installation instructions for other distributions):

sudo apt-get update
curl -fsSL https://get.docker.com/ | sh

After installation, create a Docker group so you can run Docker commands as a non-root user (you’ll need to log out and then log back in after running this command):

sudo usermod -aG docker $USER

When you’re all done, make sure Docker is running:

sudo service docker start

Install Docker on macOS

Download Docker for Mac (stable) and follow the installation instructions. To launch Docker, double-click the Docker icon in the Applications folder. Once it’s running, you’ll see a whale icon in the menu bar.


Try Some Docker Commands

You can test out Docker by opening a terminal window and entering the following commands:

# Display the Docker version
docker version

# Pull and run the Hello-World image from Docker Hub
docker run hello-world

# Pull and run the Busybox image from Docker Hub
docker run busybox echo "hello, you've run busybox"

# View a list of containers that have run
docker ps -a


Images are specs that define all the files and resources needed for a container to run. Images are defined in a Dockerfile, and built and stored in a repository. Many OSS images are publicly available on Docker Hub, a web repository for Docker images. Later we will set up a private image repository for our own images.
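To make that concrete, here is a hypothetical minimal Dockerfile (our own illustration, not a spec from this series) that builds an image serving a static page with nginx:

```dockerfile
# Hypothetical example: package a static page into an nginx-based image.
# Start from the public nginx image on Docker Hub.
FROM nginx:alpine

# Copy a local index.html into the web root inside the image.
COPY index.html /usr/share/nginx/html/index.html
```

Running docker build -t my-page . in a directory containing this file and an index.html would produce a runnable image.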

Install Minikube and Kubectl

Minikube is a single-node Kubernetes cluster that makes it easy to run Kubernetes locally on your computer. We’ll use Minikube as the primary Kubernetes cluster to run our application on. Kubectl is a command line interface (CLI) for Kubernetes and the way we will interface with our cluster. (For details, check out Running Kubernetes Locally via Minikube.)

Install Virtual Box

Download and install the latest version of VirtualBox for your operating system. VirtualBox lets Minikube run a Kubernetes node on a virtual machine (VM).

Install Minikube

Head over to the Minikube releases page and install the latest version of Minikube using the recommended method for your operating system. This will set up our Kubernetes node.

Install Kubectl

The last piece of the puzzle is to install kubectl so we can talk to our Kubernetes node. Use the commands below, or go to the kubectl install page.

On Linux, install kubectl using the following command:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/

On macOS, install kubectl using the following command:

curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/darwin/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/

Install Helm

Helm is a package manager for Kubernetes. It allows you to deploy Helm Charts (or packages) onto a K8s cluster with all the resources and dependencies needed for the application. We will use it a bit later in Part 3, and highlight how powerful Helm charts are.

On Linux or macOS, install Helm with the following command.

curl https://raw.githubusercontent.com/kubernetes/helm/master/scripts/get > get_helm.sh; chmod 700 get_helm.sh; ./get_helm.sh

Fork the Git Repo

Now it’s time to make your own copy of the Kubernetes CI/CD repository on Github.

1. Install Git on your computer if you don’t have it already.

On Linux, use the following command:

sudo apt-get install git

On macOS, download and run the macOS installer for Git. To install, first double-click the .dmg file to open the disk image. Right-click the .pkg file and click Open, and then click Open again to start the installation.

2. Fork Kenzan’s Kubernetes CI/CD repository on Github. This has all the containers and other goodies for our Kr8sswordz Puzzle application, and you’ll want to fork it as you’ll later be modifying some of the code.

   a. Sign up if you don’t yet have an account on Github.  

   b. On the Kubernetes CI/CD repository on Github, click the Fork button in the upper right and follow the instructions.


  c. Within a chosen directory, clone your newly forked repository.

 git clone https://github.com/YOURUSERNAME/kubernetes-ci-cd

 d. Change directories into the newly cloned repo.

Clear out Minikube

Let’s get rid of any leftovers from previous experiments you might have conducted with Minikube. Enter the following terminal command:

minikube stop; minikube delete; sudo rm -rf ~/.minikube; sudo rm -rf ~/.kube


This command will also clear out any other Kubernetes contexts you’ve previously set up locally on your machine, so be careful. If you want to keep your previous contexts, skip the last command, which deletes the ~/.kube folder.

Run a Test Pod

Now we’re ready to test out Minikube by running a Pod based on a public image on Docker Hub.


A Pod is Kubernetes’ resiliency wrapper for containers, allowing you to horizontally scale replicas.
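For illustration, a minimal Pod manifest might look like the following (this example is ours, not a file from the series repository):

```yaml
# Hypothetical minimal Pod manifest: one nginx container.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-example
spec:
  containers:
    - name: nginx
      image: nginx
      ports:
        - containerPort: 80
```

In practice you rarely create bare Pods like this; they are usually managed by a Deployment, which we use later in this post.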

1. Start up the Kubernetes cluster with Minikube, giving it some extra resources.

minikube start --memory 8000 --cpus 2 --kubernetes-version v1.6.0


If your computer does not have 16 GB of RAM, we suggest giving Minikube less RAM in the command above. Set the memory to a minimum of 4 GB rather than 8 GB.

2. Enable the Minikube add-ons Heapster and Ingress.

minikube addons enable heapster; minikube addons enable ingress

Inspect the pods in the cluster. You should see the add-ons heapster, influxdb-grafana, and nginx-ingress-controller.

kubectl get pods --all-namespaces

3. View the Minikube Dashboard in your default web browser. Minikube Dashboard is a UI for managing deployments. You may have to refresh the web browser if you don’t see the dashboard right away.

minikube service kubernetes-dashboard --namespace kube-system

4. Deploy the public nginx image from DockerHub into a pod. Nginx is an open source web server that will automatically download from Docker Hub if it’s not available locally.

kubectl run nginx --image nginx --port 80

After running the command, you should be able to see nginx under Deployments in the Minikube Dashboard with Heapster graphs. (If you don’t see the graphs, just wait a few minutes.)



A Kubernetes Deployment is a declarative way of creating, maintaining and updating a specific set of Pods or objects. It defines an ideal state so K8s knows how to manage the Pods.
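As a sketch (again our own example, not a manifest from the repository), a Deployment that keeps two replicas of an nginx Pod running might look like this under the Kubernetes 1.6 API:

```yaml
# Hypothetical minimal Deployment: maintain two replicas of an nginx Pod.
apiVersion: apps/v1beta1   # Deployment API group as of Kubernetes 1.6
kind: Deployment
metadata:
  name: nginx-example
spec:
  replicas: 2              # the ideal state: two Pods at all times
  template:
    metadata:
      labels:
        app: nginx-example
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
```

If a Pod dies, Kubernetes notices the actual state no longer matches the declared state and starts a replacement.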

5. Create a K8s service for deployment. This will expose the nginx pod so you can access it with a web browser.

kubectl expose deployment nginx --type NodePort --port 80

6. The following command will launch a web browser to test the service. The nginx welcome page displays, which means the service is up and running. Nice work!

minikube service nginx


7. Delete the nginx deployment and service you created.

kubectl delete service nginx
kubectl delete deployment nginx

Create a Local Image Registry

We previously ran a public image from Docker Hub. While Docker Hub is great for public images, setting up a private image repository on the site involves some security key overhead that we don’t want to deal with. Instead, we’ll set up our own local image registry. We’ll then build, push, and run a sample Hello-Kenzan app from the local registry. (Later, we’ll use the registry to store the container images for our Kr8sswordz Puzzle app.)

8. From the root directory of the cloned repository, set up the cluster registry by applying a .yaml manifest file.

kubectl apply -f manifests/registry.yaml


Manifest .yaml files (also called k8s files) serve as a way of defining objects such as Pods or Deployments in Kubernetes. While previously we used the run command to launch a pod, here we are applying k8s files to deploy pods into Kubernetes.

9. Wait for the registry to finish deploying using the following command. Note that this may take several minutes.

kubectl rollout status deployments/registry

10. View the registry user interface in a web browser. Right now it’s empty, but you’re about to change that.

minikube service registry-ui


11. Let’s make a change to an HTML file in the cloned project. Open the /applications/hello-kenzan/index.html file in your favorite text editor, or run the command below to open it in the nano text editor.

nano applications/hello-kenzan/index.html

Change some text inside one of the <p> tags. For example, change “Hello from Kenzan!” to “Hello from Me!”. When you’re done, save the file. (In nano, press Ctrl+X to close the file, type Y to confirm the filename, and press Enter to write the changes to the file.)

12. Now let’s build an image, giving it a special name that points to our local cluster registry.

docker build -t 127.0.0.1:30400/hello-kenzan:latest \
    -f applications/hello-kenzan/Dockerfile applications/hello-kenzan

When a docker image is tagged with a hostname prefix (as shown above), Docker will perform pull and push actions against a private registry located at the hostname as opposed to the default Docker Hub registry.
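As a toy illustration of that rule (this is our own string splitting, not Docker’s actual code), here is how an image reference breaks down into registry host, repository, and tag:

```shell
# Toy illustration: split an image reference the way Docker decides
# where to push it. A first path component that looks like a host
# (contains a dot or a port, or is "localhost") is treated as a registry.
image="127.0.0.1:30400/hello-kenzan:latest"
registry="${image%%/*}"   # "127.0.0.1:30400" — looks like a host, so push there
rest="${image#*/}"        # "hello-kenzan:latest"
repo="${rest%%:*}"        # "hello-kenzan"
tag="${rest##*:}"         # "latest"
echo "push to registry $registry, repository $repo, tag $tag"
```

Without such a prefix (e.g. just hello-kenzan:latest), Docker falls back to the default Docker Hub registry.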

13. We’ve built the image, but before we can push it to the registry, we need to set up a temporary proxy. By default the Docker client can only push to HTTP (not HTTPS) via localhost. To work around this, we’ll set up a Docker container that listens on 127.0.0.1:30400 and forwards to our cluster.

First, build the image for our proxy container:

docker build -t socat-registry -f applications/socat/Dockerfile applications/socat

14. Now run the proxy container from the newly created image. (Note that you may see some errors; this is normal as the commands are first making sure there are no previous instances running.)

docker stop socat-registry; docker rm socat-registry; \
docker run -d -e "REG_IP=`minikube ip`" -e "REG_PORT=30400" \
    --name socat-registry -p 30400:5000 socat-registry

This step will fail if local port 30400 is currently in use by another process. You can check whether any process is currently using this port by running:

lsof -i :30400

15. With our proxy container up and running, we can now push our hello-kenzan image to the local repository.

docker push 127.0.0.1:30400/hello-kenzan:latest

Refresh the browser window with the registry UI and you’ll see the image has appeared.


16. The proxy’s work is done for now, so you can go ahead and stop it.

docker stop socat-registry

17. With the image in our cluster registry, the last thing to do is apply the manifest to create and deploy the hello-kenzan pod based on the image.

kubectl apply -f applications/hello-kenzan/k8s/deployment.yaml

18. Launch a web browser and view the service.

minikube service hello-kenzan

Notice the change you made to the index.html file. That change was baked into the image when you built it and then was pushed to the registry. Pretty cool!


19. Delete the hello-kenzan deployment and service you created.

kubectl delete service hello-kenzan
kubectl delete deployment hello-kenzan

We are going to keep the registry deployment in our cluster as we will need it for the next few parts in our series.  

If you’re done working in Minikube for now, you can go ahead and stop the cluster by entering the following command:

minikube stop

If you need to walk through the steps above again (or want to run them quickly), we’ve provided npm scripts that automate running the same commands in a terminal.

1. To use the automated scripts, you’ll need to install NodeJS and npm.

On Linux, follow the NodeJS installation steps for your distribution. To quickly install NodeJS and npm on Ubuntu 16.04 or higher, use the following terminal commands.

 a. curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash -
 b. sudo apt-get install -y nodejs

On macOS, download the NodeJS installer, and then double-click the .pkg file to install NodeJS and npm.

2. Change directories to the cloned repository and install the interactive tutorial script:

 a. cd ~/kubernetes-ci-cd
 b. npm install

3. Start the script:

 npm run part1 (or part2, part3, part4 of the blog series)

4. Press Enter to proceed running each command.

Up Next

In Part 2 of the series, we will continue to build out our infrastructure by adding in a CI/CD component: Jenkins running in its own pod. Using a Jenkins 2.0 Pipeline script, we will build, push, and deploy our Hello-Kenzan app, giving us the infrastructure for continuous deployment that will later be used with our Kr8sswordz Puzzle app.

This article was revised and updated by David Zuluaga, a front end developer at Kenzan. He was born and raised in Colombia, where he earned his BE in Systems Engineering. After moving to the United States, he received his master’s degree in computer science at Maharishi University of Management. David has been working at Kenzan for four years, moving through a wide range of areas of technology, from front-end and back-end development to platform and cloud computing. David has also helped design and deliver training sessions on microservices for multiple client teams.


4 Must-Have Tools for Monitoring Linux

Linux. It’s powerful, flexible, stable, secure, user-friendly… the list goes on and on. There are so many reasons why people have adopted the open source operating system. One of those reasons which particularly stands out is its flexibility. Linux can be and do almost anything. In fact, it will (in most cases) go well above what most platforms can. Just ask any enterprise business why they use Linux and open source.

But once you’ve deployed those servers and desktops, you need to be able to keep track of them. What’s going on? How are they performing? Is something afoot? In other words, you need to be able to monitor your Linux machines. “How?” you ask. That’s a great question, and one with many answers. I want to introduce you to a few such tools—from command line, to GUI, to full-blown web interfaces (with plenty of bells and whistles). From this collection of tools, you can gather just about any kind of information you need. I will stick only with tools that are open source, which will exclude some high-quality, proprietary solutions. But it’s always best to start with open source, and, chances are, you’ll find everything you need to monitor your desktops and servers. So, let’s take a look at four such tools.

Top

We’ll first start with the obvious. The top command is a great place to start when you need to monitor what processes are consuming resources. The top command has been around for a very long time and has, for years, been the first tool I turn to when something is amiss. What top does is provide a real-time view of all running processes on a Linux machine. The top command not only displays dynamic information about each running process (as well as the necessary information to manage those processes), but also gives you an overview of the machine (such as how many CPUs are found, and how much RAM and swap space is available). When I feel something is going wrong with a machine, I immediately turn to top to see what processes are gobbling up the most CPU and MEM (Figure 1). From there, I can act accordingly.

There is no need to install anything to use the top command, because it is installed on almost every Linux distribution by default. For more information on top, issue the command man top.
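If you want to capture top’s view from a script or a log rather than interactively, batch mode is handy (this assumes the procps version of top found on most Linux distributions):

```shell
# Run top once in batch mode (-b, plain text output suitable for piping)
# for a single iteration (-n 1), keeping only the summary header lines.
top -b -n 1 | head -n 5
```

The first line of that output includes the system uptime and load averages, the same figures you’d see at the top of an interactive session.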

Glances

If you thought the top command offered up plenty of information, you’ve yet to experience Glances. Glances is another text-based monitoring tool. In similar fashion to top, glances offers a real-time listing of more information about your system than nearly any other monitor of its kind. You’ll see disk/network I/O, thermal readouts, fan speeds, disk usage by hardware device and logical volume, processes, warnings, alerts, and much more. Glances also includes a handy sidebar that displays information about disk, filesystem, network, sensors, and even Docker stats. To enable the sidebar, hit the 2 key (while glances is running). You’ll then see the added information (Figure 2).

You won’t find glances installed by default. However, the tool is available in most standard repositories, so it can be installed from the command line or your distribution’s app store, without having to add a third-party repository.

GNOME System Monitor

If you’re not a fan of the command line, there are plenty of tools to make your monitoring life a bit easier. One such tool is GNOME System Monitor, which is a front-end for the top tool. But if you prefer a GUI, you can’t beat this app.

With GNOME System Monitor, you can scroll through the listing of running apps (Figure 3), select an app, and then either end the process (by clicking End Process) or view more details about said process (by clicking the gear icon).

You can also click any one of the tabs at the top of the window to get even more information about your system. The Resources tab is a very handy way to get real-time data on CPU, Memory, Swap, and Network (Figure 4).

If you don’t find GNOME System Monitor installed by default, it can be found in the standard repositories, so it’s very simple to add to your system.

Nagios

If you’re looking for an enterprise-grade network monitoring system, look no further than Nagios. But don’t think Nagios is limited to only monitoring network traffic. This system has over 5,000 different add-ons that can be added to expand the system to perfectly meet (and exceed) your needs. The Nagios monitor doesn’t come pre-installed on your Linux distribution and although the install isn’t quite as difficult as some similar tools, it does have some complications. And, because the Nagios version found in many of the default repositories is out of date, you’ll definitely want to install from source. Once installed, you can log into the Nagios web GUI and start monitoring (Figure 5).

Of course, at this point, you’ve only installed the core and will also need to walk through the process of installing the plugins. Trust me when I say it’s worth the extra time.

The one caveat with Nagios is that you must manually configure any remote hosts to be monitored (outside of the host the system is installed on) via text files. Fortunately, the installation will include sample configuration files (found in /usr/local/nagios/etc/objects) which you can use to create configuration files for remote servers (which are placed in /usr/local/nagios/etc/servers).
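A remote-host definition in one of those server files looks roughly like this (the host name and address below are placeholders for your own server):

```cfg
# Hypothetical entry in /usr/local/nagios/etc/servers/remote1.cfg
define host {
    use        linux-server      ; inherit the stock Linux host template
    host_name  remote1
    alias      Remote Server 1
    address    192.168.1.50
}
```

After adding a file like this, verify the configuration and restart Nagios so it picks up the new host.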

Although Nagios can be a challenge to install, it is very much worth the time, as you will wind up with an enterprise-ready monitoring system capable of handling nearly anything you throw at it.

There’s More Where That Came From

We’ve barely scratched the surface in terms of monitoring tools that are available for the Linux platform. No matter whether you’re looking for a general system monitor or something very specific, a command line or GUI application, you’ll find what you need. These four tools offer an outstanding starting point for any Linux administrator. Give them a try and see if you don’t find exactly the information you need.


Add It Up: Test Automation Is Not a Tooling Story

Test automation tools are not used very often. Only 16 percent of performance test cases are executed with test automation tools, and security tests are completed at the same frequency, according to the World Quality Report (WQR) 2018-2019, which surveyed 1,700 IT decision makers (ITDMs) at companies with more than a thousand employees. Although the QA and testing job roles have been adapting to agile development practices, remember that even if one test is automated, the majority of tests are still done manually.

Read more at The New Stack



source{d} Engine: A Simple, Elegant Way to Analyze your Code

With the recent advances in machine learning technology, it is only a matter of time before developers can expect to run full diagnostics and information retrieval on their own source code. This can include autocompletion, auto-generated user tests, more robust linters, automated code reviews and more. I recently reviewed a new product in this sphere — the source{d} Engine.
source{d} offers a suite of applications that use machine learning on code to perform source code analysis and assisted code reviews. Chief among them is the source{d} Engine, now in public beta; it uses a suite of open source tools (such as Gitbase, Babelfish, and Enry) to enable large-scale source code analysis. Some key uses of the source{d} Engine include language identification, parsing code into abstract syntax trees, and performing SQL queries on your source code, such as:

  • What are the top repositories in a codebase based on number of commits?

  • What is the most recent commit message in a given repository?

  • Who are the most prolific contributors in a repository?

Because source{d} Engine combines language-agnostic analysis with standard SQL queries, the range of questions you can ask feels nearly limitless.
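The first question above might be expressed as a query like the following. This is only a sketch: the table and column names reflect our understanding of Gitbase’s schema at the time of this review, so check the current documentation before relying on them.

```sql
-- Sketch: rank repositories in a codebase by number of commits.
-- Assumes Gitbase's commits table exposes a repository_id column.
SELECT repository_id,
       COUNT(*) AS commit_count
FROM commits
GROUP BY repository_id
ORDER BY commit_count DESC
LIMIT 10;
```

Similar GROUP BY and ORDER BY queries cover the other two questions (latest commit message, most prolific contributors).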

From minute one, using source{d} Engine was an easy, efficient process. I ran source{d} Engine chiefly on a virtual machine running Ubuntu 14.04 but also installed it on MacOS and Ubuntu 16.04 for comparison purposes. On all three, install was completely painless, although the Ubuntu versions seemed to run slightly faster. The source{d} Engine documentation is accurate and thorough. It correctly warned me that the first time initializing the engine would take a fair amount of time so I was prepared for the wait. I did have to debug a few errors, all relating to my having a previous SQL instance running so some more thorough troubleshooting documentation might be warranted.

It’s simple to switch between codebases using the commands srcd kill and srcd init. I wanted to explore many use cases, so I picked a wide variety of codebases to test on, ranging from one with a single contributor and only 5 commits to one with 10 contributors, thousands of lines of code, and hundreds of commits. source{d} Engine worked phenomenally with all of them, although it is easier to see the benefits in a larger codebase.

My favorite queries to run were those pertaining to commits. I am not a fan of the way GitHub organizes commit history, so I find myself coming back to source{d} Engine again and again when I want commit history-related information. I’m also very impressed with the Universal Abstract Syntax Tree (UAST) concept. A UAST is a normalized form of an abstract syntax tree (AST) — a structural representation of source code used for code analysis. Unlike ASTs, UASTs are language agnostic and do not rely on any specific programming language. The UAST format enables further analysis and can be used with any tools in a standard, open style.

My only complaint is the (obvious and understandable) reliance on a base level of SQL knowledge. Because I was already very familiar with SQL, I was able to quickly use the source{d} Engine and create my own queries. However, if I had been shakier on the basics, I would’ve appreciated more example queries. Another minor complaint is that support for Python appears to be limited to Python 2 right now, and not Python 3.

I’m excited to follow the future of source{d} Engine and also source{d} Lookout (now in public alpha) which is the first step to a suite of true machine learning on code applications. I would love for the documentation of this and other upcoming applications to be more comprehensive, but because they are not fully available yet, just having what’s available already is great.

In general, I’m extremely impressed with the transparency of the company — not only are the future products and applications clearly listed and described, many internal company documents are also available. This true dedication to open source software is amazing, and I hope more companies follow source{d}’s lead.

Lizzie Turner is a former digital marketing analyst studying full stack software engineering at Holberton School. She is currently looking for her first software engineering role and is particularly passionate about data and analytics. You can find Lizzie on LinkedIn, GitHub, and Twitter.


Linux-Based Airtame 2 Offers an Enterprise Alternative to Chromecast

One category that often gets overlooked in the discussion of Linux computers is the market for HDMI dongle devices that plug into your TV to stream, mirror, or cast content from your laptop or mobile device. Yesterday, Google announced an extensively leaked third-gen version of its market-leading, Linux-powered Chromecast device. The latest Chromecast upgrades the WiFi radio to 5GHz and adds 2.4GHz Bluetooth while also overhauling the physical design.

Here, we look at a similar Linux-based HDMI dongle device that launched this morning with a somewhat different feature set and market focus. The Airtame 2 is the first hardware overhaul since the original Airtame generated $1.3 million on Indiegogo in 2013. The new version doubles the RAM, improves the Debian Linux firmware, and advances to dual-band 802.11a/b/g/n/ac, which is now known as WiFi 5 in the new Wi-Fi Alliance naming scheme that accompanied its recent WiFi 6 (ax) announcement.

In its first year, Copenhagen, Denmark-based Airtame struggled to fulfill its Indiegogo orders and almost collapsed in the process. Yet, the company went on to find success and recently surpassed 100,000 device shipments. With a growing focus on enterprise and educational markets, Airtame upgraded its software with cloud device management features, and expanded its media sources beyond cross-platform desktops to Android and iOS devices.

The key difference with Chromecast is that Airtame supports mirroring to multiple devices at once, as long as your video is coming from a laptop or desktop rather than a mobile device. Chromecast also requires the Chrome browser, and it lacks cloud-based device management features.

Combined with Chromecast’s dominance of the low-end entertainment segment, thanks in part to its $25 price tag, Airtame’s advantages led the company to focus more on the enterprise, signage, and educational markets. Unfortunately, the Airtame 2 price went up by $100 to $399 per device.

Airtame 2 extends its enterprise trajectory by “re-imagining how to turn blank screens into smart, collaborative displays,” says the company. Airtame recently released four Homescreen apps, providing “simple app integrations for better team collaboration and digital signage.” These deployments are controlled via Airtame Cloud, which was launched in early 2017. The cloud service enables enterprise and educational customers to monitor their Airtame devices, perform bulk updates, and add updated content directly from the cloud.

Twice the RAM, five times the WiFi performance

The Airtame 2 offers the same basic functionality as the Airtame 1, but it adds a number of performance benefits. It moves from the DualLite version of the NXP i.MX6 to the similarly dual-core, Cortex-A9 Dual model. This has the same 1GHz clock rate, but with a more advanced Vivante GC2000 GPU. Output resolution via the HDMI 1.4b port stays the same at 1920×1080, but you now get a 60fps frame rate instead of 30fps. As before, you can plug into VGA or DVI ports using adapters.

More importantly for performance, the Airtame 2 doubles the RAM to 2GB. In place of an SD card slot, the firmware is stored on onboard eMMC.

The new Cypress (Broadcom) CYW89342 RSDB WiFi 5 chip is about five times faster than the original’s Qualcomm WiFi 4 (802.11n) chip, which also provided dual-band MIMO 2.4GHz/5.2GHz WiFi. The Airtame 2 has twice the range, at up to 20 meters, which is helpful for its enterprise and educational customers.

Other hardware improvements include a smaller, 77.9 x 13.5mm footprint, a Kensington Lock input, an LED, and a magnetic wall mount. A USB Type-C port replaces the power-only micro-USB OTG, adding support for HDMI, USB host, and Ethernet.

As before, there’s also a micro-USB host port that, with the help of an adapter, supports Ethernet and Power-over-Ethernet (PoE). Ethernet can run simultaneously with WiFi, and can improve throughput and reliability, says Airtame. We saw no mention of the new product’s latency, but on the previous Airtame, WiFi streaming latency was one second with audio.

Once again, iOS 9 devices can mirror video using AirPlay. However, Android (4.2.2) devices are limited to the display of static images and PDF files, including non-animated PowerPoint presentations. Desktop support, which also includes a special optimization for Chromebooks, includes support for Windows 10/7, Ubuntu 15.05, and Mac OS X 10.12.

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.


GCC: Optimizing Linux, the Internet, and Everything

The purpose of this paper is to provide developers a comprehensive overview of the GNU Compiler Collection (GCC), a suite of compilers that has been in use for over 30 years and is the core component of the GNU toolchain. This includes highlighting GCC’s main benefits to programmers, showcasing when and why GCC is a good choice for code development, and providing basic information about GCC 8.2, the most recent release of this popular tool.

One of the first decisions facing a developer when starting a coding project is which programming tools to utilize. The GNU Compiler Collection (GCC) offers a robust and reliable suite of compilers that has been in use and under constant development for more than 30 years. The main goal of the GCC project is to develop and maintain a world-class optimizing compiler that is retargetable across multiple architectures and diverse environments. GCC is free software, which guarantees end users the freedom to run, study, share, and modify it.

According to LLVM.org, GCC is “the de facto standard open source compiler today, and it routinely compiles a huge volume of code.” [1] GCC is the main compiler that builds the Linux kernel and is used for developing software for GNU/Linux systems as well as other systems that use Linux as the kernel. GCC also plays a major role in the development of software for embedded processors and is an enabling platform for the research and development of other programming tools and application software. GCC is a core component of the GNU toolchain, a collection of highly integrated programming tools produced by the GNU Project. GCC is distributed under the GNU General Public License (GPL), a copyleft license, which means that derivative work can only be distributed under the same license terms. The GPL is intended to protect GCC and other GNU software from being made proprietary and requires that changes to code be made available freely and openly.

The original author of GCC is Richard Stallman, the founder of the GNU Project. The GNU Project was started in 1984 to create a Unix-like operating system as free software. Since every Unix operating system needs a C compiler, the GNU Project took on the task of developing a compiler from scratch. GCC was first released on March 22, 1987 and was considered a significant breakthrough since it was the first portable ANSI C optimizing compiler released as  free software. GCC is currently maintained by a community of programmers from all over the world under the direction of a steering committee that ensures broad, representative oversight of the project. GCC’s community approach is one of its strengths, resulting in a large and diverse community of developers and users that contribute to and provide support for the project. According to Open Hub, GCC “is one of the largest open-source teams in the world, and is in the top 2% of all project teams on Open Hub.” [2]

Today GCC supports over 60 hardware platforms, including ARM (32-bit and 64-bit), Intel and AMD (32-bit and 64-bit), IBM POWER (32-bit and 64-bit), SPARC, HP PA-RISC, and IBM Z (32-bit and 64-bit). GCC offers highly compliant C/C++ compilers and support for popular C libraries including the GNU C Library (glibc), newlib, musl libc, and the C libraries included with various BSD operating systems. GCC also offers front ends for the Fortran, Ada, and Go languages and support for a variety of programming APIs including OpenMP and OpenACC. GCC runs in popular operating environments, including Linux, Windows, macOS, FreeBSD, NetBSD, OpenBSD, DragonFly BSD, Solaris, AIX, HP-UX, and RTEMS. A key benefit of GCC is its tight integration with the other components of the GNU toolchain, including glibc, Binutils, and the GNU Debugger (GDB). GCC also functions as a cross compiler, providing the capability of creating executable code for a platform other than the one on which the compiler is running.

Optimizing Linux

GCC is the default compiler for the Linux kernel source, providing developers with trusted, stable performance. For successful Linux kernel development, C needs additional extensions that provide more precise control over certain parameters. GCC supports these extensions, offering the capability of correctly building the Linux kernel. GCC is also a standard component of popular Linux distributions, such as Arch Linux, CentOS, Debian, Fedora, openSUSE, and Ubuntu, where it routinely compiles supporting system components. This includes the default libraries used by Linux, such as libc, libm, libintl, libssh, libssl, libcrypto, libexpat, libpthread, and ncurses. These libraries depend on GCC to provide correctness and performance and are used by applications and system utilities to access Linux kernel features. GCC also builds the many application packages included with a distribution, such as Python, Perl, Ruby, nginx, Apache HTTP Server, OpenStack, Docker, and OpenShift. The combination of kernel, libraries, and application software translates into a large volume of code built with GCC for each Linux distribution. For example, for the openSUSE distribution nearly 100% of native code is built by GCC, including 6,135 source packages producing 5,705 shared libraries and 38,927 executables. This amounts to about 24,540 source packages compiled weekly with GCC for openSUSE. [3]

It should be noted that the version of GCC included in Linux distributions is not necessarily the latest version, but rather the version used to create the libraries that define the system Application Binary Interface (ABI). User space developers will need to download  the latest stable version of GCC to gain access to new features, performance optimizations, and improvements in usability. Many organizations, including Red Hat, SUSE, ARM, Linaro, and IBM, offer toolchains that include the latest version of GCC along with other GNU tools to help enhance developer productivity and improve deployment times.

Optimizing the Internet

GCC is a de facto standard embedded compiler, enabling the development of software for the growing world of the Internet of Things (IoT). GCC offers a number of extensions that make it well suited for embedded systems software development, including fine-grained control using many command-line options, inline assembly, and compiler #pragmas that help control the compiler’s behavior in greater detail. GCC supports a broad base of embedded platforms, including ARM, AMCC, RISC-V, and Freescale Power Architecture-based processors, and produces high quality code for these environments. GCC’s cross-compilation capability is a critical function for this community, and prebuilt cross-compilation toolchains are often a major requirement. For example, the GNU ARM Embedded toolchains [4] are integrated and validated packages featuring the Arm Embedded GCC compiler, libraries, and other tools necessary for bare-metal software development. These toolchains are available for cross-compilation on Windows, Linux, and macOS host operating systems and target the popular ARM Cortex-R and Cortex-M processors, which have shipped in tens of billions of Internet-capable devices. [5]

GCC is a reliable development platform for creating software applications that need to directly manage computing resources, including the open source database and web serving engines and the backup and security software that are used to power the cloud. GCC is compliant with C++11, C++14, and C++17 and will build object code from C++ source code without using an intermediary to first build C code, creating better object code with better debugging information. Some examples of applications that utilize GCC include: the MySQL Database Management System, which requires GCC for Linux [6]; the Apache HTTP Server, which recommends using GCC [7]; and Bacula, an enterprise-ready network backup tool, which requires GCC. [8]

For the research and development of the scientific codes used in High Performance Computing (HPC), GCC offers mature C, C++, and Fortran front ends as well as support for OpenMP and OpenACC APIs for directive-based parallel programming. Because GCC offers portability across computing environments, it enables code to be more easily targeted and tested across a variety of new and legacy client and server platforms. Code performance is an important parameter to this community and GCC offers a solid performance base. A November 2017 paper published by Colfax Research evaluates C++ compilers on an Intel platform for the speed of compiled code parallelized with OpenMP 4.x directives and for the speed of compilation time. The paper summarizes “the GNU compiler also does very well in our tests. G++ produces the second fastest code in three out of six cases and is amongst the fastest compiler in terms of compile time.” [9] The proprietary Intel Compiler, which only supports Intel hardware, produced the fastest code in all six cases.

Optimizing Everything

Finally, GCC offers an attractive and robust environment for building development tools because of its free availability, retargetability, and continued development. An example of a popular tool developed with GCC is LLVM, a compiler infrastructure project initially started at the University of Illinois Urbana-Champaign in 2000. GCC provided the resources to bootstrap LLVM, was used for compiling LLVM until it was self-hosting, and is still used by LLVM for filling language and library gaps. In 2005 Apple formed a team to work on the LLVM system, making it an integral part of Apple’s development tools for macOS and iOS. Clang, a C language front end developed by Apple for LLVM, was designed to be compatible with GCC 4.2.1, supporting its compilation flags and unofficial language extensions and utilizing the GCC toolchain. LLVM and Clang are distributed under a BSD license, which imposes minimal restrictions on the use and redistribution of code and is considered more friendly for corporations needing to protect IP. Unlike the GPL, the BSD license does not protect code from being co-opted by other organizations.

The Clang project has generated a wealth of comparisons between GCC and Clang/LLVM mainly focused around performance, diagnostics, and licensing. A more comprehensive comparison is useful for developers who are evaluating these compilers.

  • GCC supports a broader base of hardware and operating environments, which is key to application developers who are required to write software that supports a variety of new and legacy computing platforms.

    • GCC supports over 60 hardware platforms and the Linux, Windows, macOS, FreeBSD, NetBSD, OpenBSD, DragonFly BSD, Solaris, AIX, HP-UX, and RTEMS operating environments.

    • Clang provides stable support for Intel and AMD (x86 and AMD64), a limited number of ARM processors, and PowerPC (mainly 64-bit), and the Linux, NetBSD, FreeBSD, DragonFly BSD, and macOS operating environments. [10]

  • For developers writing scientific applications, GCC offers compliant, optimized C, C++ and Fortran compilers that include support for OpenMP and OpenACC APIs.

    • GCC has offered full support for OpenMP 4.0 for C, C++ and Fortran compilers since version 4.9.1 and full support for OpenMP 4.5 for C and C++ compilers since version 6.1. For OpenACC, GCC has supported most of the 2.5 specification and performance optimizations since version 6.

    • Clang only supports C-like languages and offers limited OpenMP support, including OpenMP 3.1 support since version 3.8 and some feature support for OpenMP 4.0 and 4.5 since version 3.9. [11] In terms of OpenACC, support is under development.

  • GCC delivers cross compilation capabilities for a broad range of hardware and operating environments. This has resulted in many prebuilt cross compiler toolchains available that help streamline IoT development efforts for embedded systems developers.

  • The GNU ARM Embedded toolchains are a ready-to-use, open source suite of tools based on GCC for programming Arm Cortex-M and Cortex-R processors. The Bootlin web site provides access to 138 pre-compiled cross compiler toolchains based on GCC, binutils, and GDB for a variety of hardware platforms. [12]

  • For Clang there are limited prebuilt toolchains such as the standalone Android NDK. Theoretically Clang could replace GCC in existing cross compiling toolchains, but developers will need to rebuild these toolchains to include Clang. [13]

  • GCC creates optimized, well performing code. In comparisons of current versions of GCC and Clang, GCC provided better performance over a range of benchmarks.

    • The GCC 8/9 vs. LLVM Clang 6/7 Compiler Benchmarks On AMD EPYC article provides results of 49 benchmarks run across the four tested compilers at three optimization levels. Coming in first 34% of the time was GCC 8.2 RC1 using the “-O3 -march=native” level, while at the same optimization level LLVM Clang 6.0 came in second with wins 20% of the time. [14]

    • With OpenMP workloads, GCC outperformed Clang in the six reported cases in a paper published by Colfax Research (referenced in footnote 9) and was also one of the fastest compilers in terms of compile time.

GCC: Continuing to Optimize Linux, the Internet, and Everything

GCC continues to move forward as a world-class compiler, optimized across multiple architectures and diverse environments. The most current version of GCC, 8.2, released in July 2018, added hardware support for upcoming Intel CPUs and more ARM CPUs, and improved performance for AMD’s ZEN CPU. Initial C17 support has been added along with initial work toward C++2A. Diagnostics have also been enhanced, with improved locations, location ranges, and fix-it hints, particularly in the C++ front end.

In The State of Developer Ecosystem Survey conducted by JetBrains in 2018, of the 6,000 developers who took the survey, GCC is regularly used by 66% of C++ programmers and 73% of C programmers. New hardware platforms continue to rely on the GCC toolchain for software development, such as RISC-V, a free and open ISA that is of interest to the machine learning, Artificial Intelligence (AI), and IoT market segments. GCC continues to be a critical component in the continuing development of Linux systems. The Clear Linux Project for Intel Architecture, an emerging distribution built for cloud, client, and IoT use cases, provides a good example of how GCC compiler technology is being used and improved to boost the performance and security of a Linux-based system. GCC is also being used for application development for Microsoft’s Azure Sphere, a Linux-based operating system for IoT applications that initially supports the ARM-based MediaTek MT3620 processor. In terms of developing the next generation of programmers, GCC is also a core component of the Windows toolchain for the Raspberry Pi, the low-cost embedded board running Debian-based GNU/Linux that is used to promote the teaching of basic computer science in schools and in developing countries.

Notes

 3. Information provided by SUSE based on recent build statistics. There are other source packages in openSUSE that do not generate an executable image and these are not included in the counts.

Margaret Lewis is a technology consultant who previously served as Director of Software Planning at AMD and an Associate Director at the Maui High Performance Computing Center.  


Get Essential Security Information from Linux Security Summit Videos

In case you missed it, videos for Linux Security Summit NA are now available. On Linux.com, we covered a couple of these in depth, including:

Redefining Security Technology in Zephyr and Fuchsia By Eric Brown

If you’re the type of person who uses the word “vuln” as a shorthand for code vulnerabilities, you should check out the presentation from the recent Linux Security Summit called “Security in Zephyr and Fuchsia.” In the talk, two researchers from the National Security Agency discuss their contributions to the nascent security stacks of two open source OS projects: Zephyr and Fuchsia.

Building Security into Linux-Based Azure Sphere By Eric Brown

Microsoft’s Ryan Fairfax explained how to fit an entire Linux stack into 4 MiB of RAM in this presentation. Yet, the hard part, according to Fairfax, was not so much the kernel modification, as it was the development of the rest of the stack. This includes the custom Linux Security Module, which coordinates with the Cortex-M4’s proprietary Pluton security code using a mailbox-based protocol.

Greg Kroah-Hartman Explains How the Kernel Community Is Securing Linux By Swapnil Bhartiya

In this article, Swapnil Bhartiya interviewed Linux kernel maintainer Greg Kroah-Hartman about how the kernel community is hardening Linux against vulnerabilities. You can see excerpts from their talk in the accompanying video.

The entire list of videos from the event includes many more sessions.

The next Linux Security Summit Europe, coming up October 25 – 26 in Edinburgh, offers more essential security information, with refereed presentations, discussion sessions, subsystem updates, and more. There’s still time to register and attend! Check out the full schedule and stay tuned for more coverage.



Open Source Logging Tools for Linux

If you’re a Linux systems administrator, one of the first tools you will turn to for troubleshooting are log files. These files hold crucial information that can go a long way to help you solve problems affecting your desktops and servers. For many sysadmins (especially those of an old-school sort), nothing beats the command line for checking log files. But for those who’d rather have a more efficient (and possibly modern) approach to troubleshooting, there are plenty of options.

In this article, I’ll highlight a few such tools available for the Linux platform. I won’t be getting into logging tools that might be specific to a certain service (such as Kubernetes or Apache), and instead will focus on tools that work to mine the depths of all that magical information written into /var/log.

Speaking of which…

What is /var/log?

If you’re new to Linux, you might not know what the /var/log directory contains. However, the name is very telling. Within this directory are housed all of the log files from the system and from any major service (such as Apache, MySQL, MariaDB, etc.) installed on the operating system. Open a terminal window and issue the command cd /var/log. Follow that with the command ls and you’ll see all of the various systems that have log files you can view (Figure 1).

Say, for instance, you want to view the syslog log file. Issue the command less syslog and you can scroll through all of the gory details of that particular log. But what if the standard terminal isn’t for you? What options do you have? Plenty. Let’s take a look at a few such options.
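The command-line approach just described can be sketched as a quick session (assuming a Debian-style system where the main system log is /var/log/syslog; on Red Hat-style systems look for /var/log/messages instead):

```shell
cd /var/log            # move into the system log directory
ls                     # list the available log files
less syslog            # scroll through the system log (press q to quit)
tail -n 20 syslog      # or just show the 20 most recent entries
```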

Logs

If you use the GNOME desktop (or other, as Logs can be installed on more than just GNOME), you have at your fingertips a log viewer that mainly just adds the slightest bit of GUI goodness over the log files to create something as simple as it is effective. Once installed (from the standard repositories), open Logs from the desktop menu, and you’ll be treated to an interface (Figure 2) that allows you to select from various types of logs (Important, All, System, Security, and Hardware), as well as select a boot period (from the top center drop-down), and even search through all of the available logs.

Logs is a great tool, especially if you’re not looking for too many bells and whistles getting in the way of you viewing crucial log entries, so you can troubleshoot your systems.

KSystemLog

KSystemLog is to KDE what Logs is to GNOME, but with a few more features in the mix. Although both make it incredibly simple to view your system log files, only KSystemLog includes colorized log lines, tabbed viewing, the ability to copy log lines to the desktop clipboard, a built-in capability for sending log messages directly to the system, detailed information for each log line, and more. KSystemLog views all the same logs found in GNOME Logs, only with a different layout.

From the main window (Figure 3), you can view any of the different logs (System Log, Authentication Log, X.org Log, Journald Log), search the logs, filter by Date, Host, Process, or Message, and select log priorities.

If you click on the Window menu, you can open a new tab, where you can select a different log/filter combination to view. From that same menu, you can even duplicate the current tab. If you want to manually add a log to a file, do the following:

  1. Open KSystemLog.

  2. Click File > Add Log Entry.

  3. Create your log entry (Figure 4).

  4. Click OK.

KSystemLog makes viewing logs in KDE an incredibly easy task.

Logwatch

Logwatch isn’t a fancy GUI tool. Instead, logwatch allows you to set up a logging system that will email you important alerts. You can have those alerts emailed via an SMTP server or you can simply view them on the local machine. Logwatch can be found in the standard repositories for almost every distribution, so installation can be done with a single command, like so:

sudo apt-get install logwatch

Or:

sudo dnf install logwatch

During the installation, you will be required to select the delivery method for alerts (Figure 5). If you opt for local mail delivery only, you’ll need to install the mailutils package (so you can view mail locally, via the mail command).

All Logwatch configurations are handled in a single file. To edit that file, issue the command sudo nano /usr/share/logwatch/default.conf/logwatch.conf. You’ll want to edit the MailTo = option. If you’re viewing this locally, set that to the Linux username you want the logs sent to (such as MailTo = jack). If you are sending these logs to an external email address, you’ll also need to change the MailFrom = option to a legitimate email address. From within that same configuration file, you can also set the detail level and the range of logs to send. Save and close that file.
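For illustration, a minimal set of edits to logwatch.conf might look like the following (the user name and addresses are placeholders; every other option keeps its packaged default):

```
# Where reports go: a local user name, or a full email address
MailTo = jack
# Needed (a legitimate address) when sending to an external mailbox
MailFrom = logwatch@example.com
# How much detail to report: Low, Med, or High
Detail = Med
# Which day's logs to summarize: Yesterday, Today, or All
Range = yesterday
```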
Once configured, you can send your first mail with a command like:

logwatch --detail Med --mailto ADDRESS --service all --range today
Where ADDRESS is either the local user or an email address.

For more information on using Logwatch, issue the command man logwatch. Read through the manual page to see the different options that can be used with the tool.

Rsyslog

Rsyslog is a convenient way to send remote client logs to a centralized server. Say you have one Linux server you want to use to collect the logs from other Linux servers in your data center. With Rsyslog, this is easily done. Rsyslog has to be installed on all clients and the centralized server (by issuing a command like sudo apt-get install rsyslog). Once installed, create the /etc/rsyslog.d/server.conf file on the centralized server, with the contents:

# Provide UDP syslog reception
$ModLoad imudp
$UDPServerRun 514

# Provide TCP syslog reception
$ModLoad imtcp
$InputTCPServerRun 514

# Use custom filenaming scheme
$template FILENAME,"/var/log/remote/%HOSTNAME%.log"
*.* ?FILENAME

$PreserveFQDN on

Save and close that file. Now, on every client machine, create the file /etc/rsyslog.d/client.conf with the contents:

$PreserveFQDN on
$ActionQueueType LinkedList
$ActionQueueFileName srvrfwd
$ActionResumeRetryCount -1
$ActionQueueSaveOnShutdown on
*.* @@SERVER_IP:514

Where SERVER_IP is the IP address of your centralized server. Save and close that file. Restart rsyslog on all machines with the command:

sudo systemctl restart rsyslog

You can now view the centralized log files with the command (run on the centralized server):

tail -f /var/log/remote/*.log

The tail command allows you to view those files as they are written to, in real time. You should see log entries appear that include the client hostname (Figure 6).

Rsyslog is a great tool for creating a single point of entry for viewing the logs of all of your Linux servers.

More where that came from

This article only scratched the surface of the logging tools to be found on the Linux platform. And each of the above tools is capable of more than what is outlined here. However, this overview should give you a place to start your long day’s journey into the Linux log file.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.


Continuous Security with Kubernetes

As the Chief Technologist at Red Hat for the western region, Christian Van Tuin has been architecting solutions for strategic customers and partners for over a decade. He’s lived through the rise of DevOps and containers. And in his role, he’s found that security is the highest adoption barrier for enterprises interested in harnessing the power of containers.

After all, “Now we’re seeing an increasing level of threats for geopolitical reasons, and we’re seeing the dissolving security perimeter,” says Van Tuin. “Everything doesn’t sit behind the firewall in your data center anymore, and there’s a shift to software-based storage, networking and compute. The traditional network base, the fences, are no longer good enough.”

But as he will share during his talk at OpenFinTech Forum in New York City, Oct. 10-11, there are security best practices in the areas of DevOps, containers, and Kubernetes that companies can adopt so that everyone can sleep better at night.

“We’re seeing this evolution to DevSecOps,” he says. “It’s all about reducing security and business risk, lowering costs, speeding up delivery and reaction time, falling in line with DevOps. And we’re doing this with automation process optimization and continuous security improvement.”

This is particularly relevant for FinTech companies. “With the move from physical to digital banking, DevSecOps ensures security is integrated into the process from the start of development rather than appended on in production,” says Van Tuin. “At the same time, it still allows for rapid and frequent releases without security becoming a bottleneck or burden on development.” For instance, OpenSCAP can be used to scan container images for compliance with PCI DSS (Payment Card Industry Security Standard) and customer security policies for banking.

Van Tuin’s best practices are wide-ranging: addressing security risks such as container images, builds, registry, hosts, and network; automating and integrating security vulnerability management and compliance checking in a DevOps CI/CD pipeline; and deployment strategies for container security updates. And he’s hopeful that there will be more improvements to security around Kubernetes with the growth of Istio service mesh and CoreOS operators.

“One of the keys to DevSecOps is to ensure that you can enable your developers to rapidly innovate and experiment,” says Van Tuin. And the first thing that needs to happen? “Embrace security into the culture of the company.”

To hear all about Chris’s strategies for continuous security with DevOps, containers, and Kubernetes, plus talks from other open source leaders, come to OpenFinTech Forum in New York City October 10-11. You can still register here!



Kid’s Day at Open Source Summit

The Linux Foundation strives to make Open Source Summit one of the most inclusive tech events in a variety of ways, offering activities such as the “Women in Open Source” lunch, a diversity social, a first-time attendees get-together, and more. They have activities focused on children, too. Not only does Open Source Summit offer free on-site childcare for attendees’ children, they also sponsor a Kid’s Day.

At this year’s Kid’s Day in Vancouver, the primary goal was to introduce the kids to coding via HTML, and very little computer knowledge or experience was required to participate. “The basics, typing, browsing the Internet and minor computer operation, are all your child needs to participate,” according to the website.

For this event, The Linux Foundation collaborated with Banks Family Tech, which organized the 4-hour workshop. This workshop was geared toward children ages 9–18 and was open to children from the community as well as those of event attendees. The kids who participated actually ranged in age from 5 to 13, and many already had some coding experience. Some had tried Scratch, and others had written scripts for games.

“We are going to teach how to go from nothing and become coders,” said Phillip Banks, founder of Banks Family Tech.

HTML workshop

The workshop focused squarely on HTML, one of the easiest computing languages. “It’s close to English and it’s not hard text and syntax to learn. It allows us to squeeze a lot of things into a day and get them excited so that they can go home and learn more,” said Banks. “After that, maybe, you can go to Python but HTML is so easy as they get a quick return by manipulating objects, text color and other things on a web-page immediately.”

This Kid’s Day event had a great mix of participants. While some of the kids accompanied their parents who were attending the conference, the majority were from the local community, whose parents learned about the workshop from social networks like Facebook. Khristine Carino, Director for Communications of SCWIST (Society for Canadian Women In Science and Technology), not only brought her own kids but also invited families from underrepresented minorities in Vancouver.

In the workshop, the children learned HTML basics like font tags, how to use fonts and colors, how to add images and videos, and how to choose a background for their website. They also had the opportunity to share what they created with the whole group and learn from each other.

“It’s not so much about learning to code, just to be a coder; it’s learning to understand how things work,” said Banks. You can hear more in the video below.

[embedded content]

Check out the full list of activities coming up at Open Source Summit in Europe and sign up to receive updates.

This article originally appeared at The Linux Foundation