Meet the New Linux Desktop Champion: System76 Thelio

The American dream has driven millions upon millions of people to come to a country filled with possibility and opportunity. Sometimes, you get caught up in the gears of enterprise and learn that the machinations of big business tend to run counter to that dream. But, sometimes, you start a company on an ideal and cling to that initial spark no matter what.

That’s what Carl Richell did when he created System76. That was more than a decade ago, when the company’s goal was to sell computer hardware running open source operating systems. System76 has been a bastion of hope for Linux and open source fans, as they’ve proved, year after year, that the dream can be fulfilled and that Linux can be sold in the desktop and laptop space.

The launch of the game-changing System76 Thelio only solidifies this open source dream. CEO Carl Richell oozes the open source ethos, and that ideology comes through clearly in the company’s latest offering, the Thelio desktop, a machine as beautiful in design as it is in execution. The Thelio was created, from the ground up, to offer as much open hardware as possible, while delivering a beast of a computer, housed in a chassis that is equal parts form and function. Every detail (from the planetary alignment marking the date System76 was formed, to the design of the fan output grill, to the ease with which the machine can be upgraded) has been executed to perfection. The Thelio is a hand-crafted computer that any user would be thrilled to own.

What Makes the Thelio So Special?

The easy answer to that question is “everything,” but that serves no purpose, especially for those considering dropping the coin for this desktop beauty. So wherein lies the answer? There are so many ways this new machine exceeds all other desktop computers I’ve seen (which is impressive, considering my daily driver for the last five years has been the—no longer available—System76 Leopard Extreme, which has been truly amazing).

While getting the grand tour of the new System76 headquarters, I was privy to the official Thelio “dog and pony show,” which solidified my assumptions about the company and what they have been and are doing. I was able to watch engineer Ian Santopietro demonstrate how they’ve designed the Thelio in such a way as to optimize air flow through the chassis, so the CPU is capable of running at its listed speed (see video).

[youtube https://www.youtube.com/watch?v=9ts6IKI3MeI]

Video 1: Air flow has been optimized for the Thelio.

With so many other desktop computers, the listed speed of a CPU is often negated by poorly designed cooling systems and airflow, rendering them incapable of running to spec. The Thelio blows this issue out of the water (or, better yet, out of the back cooling vent). Watching smoke flow through the device was impressive, especially considering how whisper-quiet the machine was (with all cores being pushed to the max).

The ease with which the machine can be upgraded is equally impressive. With a hot-swappable drive bay that is easily accessible, the Thelio is an upgrader’s dream come true. And with highly refined tolerances, you can trust that the chassis will come apart and slip back together with ease. System76 has added a number of touches most other companies wouldn’t even consider. Take, for instance, the spare drive bolts housed conveniently inside the case (Figure 1). No more will you have to scramble to find that small bag of screws you tossed aside when you unboxed the computer. When you go to add a new drive, you have plenty of screws waiting to be used. A nice touch, for sure.

Those curious about System76’s open source claims for the Thelio can download the files for the chassis, as well as for the Thelio Io daughterboard (Figure 2), from GitHub.

The Io daughterboard offloads control of the cooling, the passing of data from the motherboard to the storage drives, the power switch, the USB system, and more. This means System76 is able to better control how these systems function (it also means users can benefit from the open source nature of that particular piece of hardware).

The Operating System

As with all of the System76 machines, you can purchase the Thelio with one of the following options:

  • Ubuntu 18.10

  • Ubuntu 18.04 LTS

  • Pop!_OS

I would suggest, however, purchasing the hardware with System76’s own Pop!_OS, as it has been optimized for the Thelio and offers more control over it. Although Pop!_OS is designed with creators in mind (i.e., developers), one does not have to be a developer to use the OS. In fact, System76’s version of Linux is a general-purpose operating system, only with a few tweaks and inclusions for creators. No matter which operating system you choose, you can be sure to enjoy an unrivaled experience with the Thelio.

Conclusion

To say the Thelio is impressive is an understatement. This is a machine that could easily be mistaken for something produced by a much, much larger company. But make no mistake about it: few large companies put this level of care and heart into the design and execution of a desktop computer. If you’re looking to purchase one of the most impressive pieces of desktop hardware to date, look no further than System76’s new Thelio.

4 Unique Terminal Emulators for Linux

Let’s face it, if you’re a Linux administrator, you’re going to work with the command line. To do that, you’ll be using a terminal emulator. Most likely, your distribution of choice came pre-installed with a default terminal emulator that gets the job done. But this is Linux, so you have a wealth of choices to pick from, and that ideology holds true for terminal emulators as well. In fact, if you open up your distribution’s GUI package manager (or search from the command line), you’ll find a trove of possible options. Of those, many are pretty straightforward tools; however, some are truly unique.

In this article, I’ll highlight four such terminal emulators that will not only get the job done, but do so while making the job a bit more interesting or fun. So, let’s take a look at these terminals.

Tilda

Tilda is designed for GTK and is a member of the cool drop-down family of terminals (such as Guake and Yakuake). That means the terminal is always running in the background, ready to drop down from the top of your monitor. What makes Tilda rise above many of the others is the number of configuration options available for the terminal (Figure 1).

Tilda can be installed from the standard repositories. On an Ubuntu- (or Debian-) based distribution, the installation is as simple as:

sudo apt-get install tilda -y 

Once installed, open Tilda from your desktop menu, which will also open the configuration window. Configure the app to suit your taste and then close the configuration window. You can then open and close Tilda by hitting the F1 hotkey. One caveat to using Tilda is that, after the first run, you won’t find any indication as to how to reach the configuration wizard. No worries. If you run the command tilda -C it will open the configuration window, while still retaining the options you’ve previously set.

What I like about these types of terminals is that they easily get out of the way when you don’t need them and are just a button press away when you do. For those who hop in and out of the terminal, a tool like Tilda is ideal.

Aterm

Aterm holds a special place in my heart, as it was one of the first terminals I used that made me realize how flexible Linux was. This was back when AfterStep was my window manager of choice (which dates me a bit) and I was new to the command line. What Aterm offered was a terminal emulator that was highly customizable, while helping me learn the ins and outs of using the terminal (how to add options and switches to a command). “How?” you ask. Because Aterm never had a GUI for customization. To run Aterm with any special options, it had to run as a command. For example, say you want to open Aterm with transparency enabled, green text, white highlights, and no scroll bar. To do this, issue the command:

aterm -tr -fg green -bg white +sb

The end result (with the top command running for illustration) would look like that shown in Figure 2.

Of course, you must first install Aterm. Fortunately, the application is still found in the standard repositories, so installing on the likes of Ubuntu is as simple as:

sudo apt-get install aterm -y

If you want to always open Aterm with those options, your best bet is to create an alias in your ~/.bashrc file like so:

alias aterm='aterm -tr -fg green -bg white +sb'

Save that file and, when you issue the command aterm, it will always open with those options. For more about creating aliases, check out this tutorial.

Eterm

Eterm is the second terminal that really showed me how much fun the Linux command line could be. Eterm is the default terminal emulator for the Enlightenment desktop. When I eventually migrated from AfterStep to Enlightenment (back in the early 2000s), I was afraid I’d lose out on all those cool aesthetic options. That turned out not to be the case. In fact, Eterm offered plenty of unique options, while making the task easier with a terminal toolbar. With Eterm, you can easily select from a large number of background images (should you want one – Figure 3) by selecting from the Background > Pixmap menu entry.

There are a number of other options to configure (such as font size, map alerts, toggling the scrollbar, and the brightness, contrast, and gamma of background images, among others). The one thing you want to make sure of is that, after you’ve configured Eterm to suit your tastes, you click Eterm > Save User Settings (otherwise, all settings will be lost when you close the app).

Eterm can be installed from the standard repositories, with a command such as:

sudo apt-get install eterm

Extraterm

Extraterm should probably win a few awards for coolest feature set of any terminal window project available today. The most unique feature of Extraterm is the ability to wrap commands in color-coded frames (blue for successful commands and red for failed commands – Figure 4).

When you run a command, Extraterm will wrap the command in an isolated frame. If the command succeeds, the frame will be outlined in blue. Should the command fail, the frame will be outlined in red.

Extraterm cannot be installed via the standard repositories. In fact, the only way to run Extraterm on Linux (at the moment) is to download the precompiled binary from the project’s GitHub page, extract the file, change into the newly created directory, and issue the command ./extraterm.

Once the app is running, to enable frames you must first enable bash integration. To do that, open Extraterm and then right-click anywhere in the window to reveal the popup menu. Scroll until you see the entry for Inject Bash shell Integration (Figure 5). Select that entry and you can then begin using the frames option.

If you run a command, and don’t see a frame appear, you probably have to create a new frame for the command (as Extraterm only ships with a few default frames). To do that, click on the Extraterm menu button (three horizontal lines in the top right corner of the window), select Settings, and then click the Frames tab. In this window, scroll down and click the New Rule button. You can then add a command you want to work with the frames option (Figure 6).

If, after this, you still don’t see frames appearing, download the extraterm-commands file from the Download page, extract the file, change into the newly created directory, and issue the command sh setup_extraterm_bash.sh. That should enable frames for Extraterm.

There are plenty more options available for Extraterm. I’m convinced that, once you start playing around with this new take on the terminal window, you won’t want to go back to the standard terminal. Hopefully, the developer will make this app available in the standard repositories soon (as it could easily become one of the most popular terminal windows in use).

And Many More

As you probably expected, there are quite a lot of terminals available for Linux. These four represent (at least for me) unique takes on the task, each of which does a great job of helping you run the commands every Linux admin needs to run. If you aren’t satisfied with one of these, give your package manager a look to see what’s available. You are sure to find something that works perfectly for you.

ONAP Myths Debunked

The Linux Foundation’s Open Network Automation Platform (ONAP) is well into its third six-month release (Casablanca came out in December 2018), and while the project has evolved since its first release, there is still some confusion about what it is and how it’s architected. This blog takes a closer look under ONAP’s hood to clarify how it works.

To start, it is important to consider what functionality ONAP includes. I call ONAP a MANO++, where ONAP includes the NFVO and VNFM layers as described by ETSI but goes beyond them by including service assurance/automation and a unified design tool. ONAP does not include the NFVI/VIM or the NFV cloud layer. In other words, ONAP doesn’t really care whether the NFV cloud is OpenStack, Kubernetes, or Microsoft Azure. Nor does ONAP include VNFs; VNFs come from third-party companies or open source projects, but ONAP provides VNF guidelines and onboarding SDKs that ease deployment. In other words, ONAP is a modular platform for complete network automation.

OK, end of background. On to four themes:

MODEL DRIVEN

Model-driven is a central tenet of ONAP. If anything, one might complain about there being too much model-driven thinking, not too little! There are models for:

  • VNF descriptor

  • Network service descriptor

  • VNF configuration

  • Closed-loop automation template descriptor

  • Policy

  • APP-C/SDN-C directed graphs

  • Orchestration workflow

  • The big bang (just kidding)

  • So on and so forth

The key idea of a model-driven approach is to enable non-programmers to change the behavior of the platform with ease, and ONAP embraces this paradigm fully.

DEVICE ORIENTATION

ONAP goes to great pains to create a hierarchy and provide the highest level of abstraction to the OSS/BSS layers, supporting both a cloud-based and a device-based networking approach. The examples below illustrate this.

Service Orchestration & LCM (the left-hand side item feeds into the right-hand side item):

VF ⇛ Module ⇛ VNF  ⇛ Network/SDN service   ⇛ E2E network service ⇛ Product (future) ⇛ Offer (future)

Service Assurance:

Analytics Microservices & Policies ⇛ Closed Loop Templates

With upcoming MEC applications, the million-dollar question is: will ONAP orchestrate MEC applications as well? This is to be determined, but if it happens, ONAP will move even further from device orientation than it already has.

CLOUD NATIVE

ONAP Casablanca is moving toward an emphasis on cloud native, so what does that mean for virtual network functions (VNFs), or for ONAP’s ability to provide an operational layer for NFV? To break it down more specifically:

  • Can VNFs be cloud native? Yes! In fact, they can be, and ONAP highly encourages (I daresay, insists upon) it (see ONAP VNF Development requirements here). Cloud-native or containerized network functions (CNFs) are just around the corner, and they will be fully supported by ONAP (when we say VNF, we include CNFs in that discussion).

  • ONAP documentation includes references to VNFs and PNFs – does that mean there is no room for CNFs? ONAP refers to VNFs and PNFs since they constitute higher level services. This would be tantamount to saying that if AWS uses the words VM or container, they need to be written off as outmoded. Moreover, new services such as 5G are expected to be built out of physical network functions (PNFs) — for performance reasons — and VNFs. Therefore, ONAP anticipates orchestrating and managing the lifecycle of PNFs.

  • Does the fact that VNFs are not always written in a cloud-native manner mean that ONAP has been mis-architected? It is true that a large number of VNFs are VNFs-in-name-only (i.e., PNFs that have been virtualized, but not much else); however, this is orthogonal to ONAP. As mentioned above, ONAP does not include VNFs.

LACK OF INNOVATION

We’ve also heard suggestions that ONAP lacks innovation. For example, there have been questions around the types of clouds supported by ONAP in addition to OpenStack and different NFVI compute instances in addition to virtual machines. In fact, ONAP provides tremendous flexibility in these areas:

  • Different clouds — There are two levels at which we can discuss clouds: first, the clouds that ONAP can run on, and second, the clouds that ONAP can orchestrate VNFs onto. Since ONAP is already containerized and managed using Kubernetes, the first topic is moot; ONAP can already run on any cloud that supports Kubernetes (which is every cloud out there). For the second case, the ONAP Casablanca release has experimental support for Kubernetes and Microsoft Azure. There is no reason to believe that new cloud types like AWS Outposts can’t be supported.

  • Different compute types — Currently, ONAP instantiates VNFs that are packaged as VMs. With this, ONAP already supports unikernels (i.e., ridiculously small VMs) should a VNF vendor choose to package their VNF as a unikernel. Moreover, ONAP is working on full Kubernetes support that will allow container- and Kata Container-based VNFs. The only compute type that I think is not on the roadmap is function-as-a-service (aka serverless), but for the NFV use case, I don’t see the need to support it. Maybe if/when ONAP supports MEC application orchestration, it will need to do so. I don’t view this as a showstopper; when the time comes, I’m sure the ONAP community will figure out how to support function-as-a-service — it’s not that hard of a problem.

  • Different controllers — Through the use of a message bus approach, ONAP has a set of controllers optimized for the layers of SDN/NFV (physical, virtual, application).

In summary, it is always important to be aware of ONAP’s ability to support a model-driven, service-oriented, cloud-native, transformative future. Hopefully, this blog clarifies some of those points.

This article was written by Amar Kapadia and was previously published at Aarna Networks.

KubeCon + CloudNativeCon Videos Now Online

This week’s KubeCon + CloudNativeCon North America 2018 in Seattle was the biggest ever! This sold-out event featured four days of information on Kubernetes, Prometheus, Envoy, OpenTracing, Fluentd, gRPC, containerd, and rkt, along with many other exciting projects and topics.

More than 100 lightning talks, keynotes, and technical sessions from the event have already been posted. 

You can check out the videos on YouTube.

Also, registration will open soon for 2019 events. Be sure to register early!

KubeCon Barcelona, May 20-23 

KubeCon Shanghai, June 24-26 

KubeCon San Diego, November 18-21

Strategies for Deploying Embedded Software

While many Embedded Linux Conference talks cover emerging technologies, some of the most useful are those that survey the embedded development tools and techniques that are already available. These summaries are not only useful for newcomers but can be a helpful reality check and a source for best practices for more experienced developers.

In “Strategies for Developing and Deploying your Embedded Applications and Images,” Mirza Krak, an embedded developer at Mender.io, surveys the many options for prepping and deploying software on devices. These range from cross-device development strategies between desktop and embedded platforms, to using IDEs, to working with Yocto/OE-Core and package managers. Krak, who spoke at this October’s ELC Europe conference in Edinburgh, also covered configuration management tools, network boot utilities, and update solutions such as swupdate and Mender.

Basic desktop/embedded cross development

It’s easier to do your development on a desktop PC than directly on an embedded device, said Krak. Even if your device can run the required development software and distributions, you cannot easily integrate all the tools that are available on the desktop. In addition, compile times tend to be very slow.

On the desktop, “everything is available to you via apt-get install, and there is high availability of trace and debug tools,” said Krak. “You have a lot more control when doing things like running a binary, and you can build, debug, and test on the same machine so there are usually very short development cycles.”

Eventually, however, you’ll probably need to do some cross-device development. “You can use some mock hardware to do some basic sanity testing on the desktop, but you are not testing on the hardware where the software will run, which may have some constraints.”

A typical approach for cross-device development is to run Yocto Project or Buildroot on your PC and then cross compile and transfer the binaries to the embedded device. This adds to the complexity because you are compiling the code on one device and you may need to transfer it to multiple devices for testing.

You can use the secure copy (scp) command or transfer data by USB stick. However, “It’s a lot of manual work and prone to error, and it’s hard to replicate across many devices,” said Krak. “I’m surprised at how many people don’t go beyond this entry point.”

IDEs and Package Managers

An easier and more reliable approach is to use an IDE such as Eclipse or Qt Creator, which have plug-ins for cross compiling. “IDEs usually have post-build hooks that transfer the binary to the device and run it,” said Krak. “With Qt Creator, you can launch the debug server on the device and debug remotely.”

IDEs are great for simpler projects, especially for beginning or casual developers, but they may lack the flexibility required for more complex jobs. Krak generally prefers package managers — collections of tools for automating the process of installing, upgrading, configuring, and removing software — which are much the same as those you’d find on a desktop Linux PC.

“Package managers give you more sanity checks and controls, and the ability to state dependencies,” said Krak. Package managers for embedded targets include deb, rpm, and opkg, and you can also turn to package utilities like apt, yum, dnf, pacman, zypper, and smart.

“If you’re compiling your Debian application in a build system you could say ‘make dpkg’, which will package your binary and any configuration files you might have,” said Krak. “You can then transfer the binary to your device and install. This is less error prone since you have dependency tracking and upstream and custom package feeds.”
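For readers who haven’t packaged a Debian application before, the step Krak describes can be sketched by hand. The package name, version, and metadata below are made up for illustration; a Debian binary package is just a directory tree with a DEBIAN/control file:

```shell
# Minimal Debian package layout (hypothetical package name and metadata)
mkdir -p myapp_1.0-1/DEBIAN myapp_1.0-1/usr/bin

# Stand-in for your cross-compiled binary
printf '#!/bin/sh\necho hello from myapp\n' > myapp_1.0-1/usr/bin/myapp
chmod 755 myapp_1.0-1/usr/bin/myapp

# The control file declares the package metadata and dependencies
cat > myapp_1.0-1/DEBIAN/control <<'EOF'
Package: myapp
Version: 1.0-1
Architecture: all
Maintainer: You <you@example.com>
Description: Example embedded application
EOF

# Build the .deb, ready to transfer to the device
dpkg-deb --build myapp_1.0-1
```

On the target, dpkg -i myapp_1.0-1.deb installs the package, and dpkg tracks any dependencies declared in the control file.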

Package managers are useful during prototyping and early development, but you typically won’t make them available to the embedded end user, said Krak. In addition, not all embedded devices support platforms such as Ubuntu or Raspbian that include package managers.

Krak typically works with a Yocto Project/OpenEmbedded environment and uses the OE-Core-based Angstrom distribution, which maintains opkg package feeds. “You can include meta-angstrom in your Yocto build and set DISTRO = ‘angstrom’ to get package feeds,” said Krak. “But there’s a lot more to Angstrom that you may not want, so you may want to create a more customized setup based on Poky or something.”

Yocto generates package feeds when you do an image build, giving you a choice of rpm, deb, or ipk. Yet, “these are only packages, not a complete package feed,” said Krak. To enable a feed, “there’s a bitbake package-index command that generates the files. You expose the deploy server where all your packages are to make it available on your device.”

While this process handles the “service side” package feed, you still need tools on your embedded device. Within Yocto, “there’s an EXTRA_IMAGE_FEATURES setting you can set to package-management,” said Krak. “There’s also a recipe in meta-openembedded/meta-oe called distro-feed-configs.bb. If you include it in your build it will generate the config files needed for your package manager.”
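Pulled together, the Yocto settings Krak mentions amount to a few lines in the build’s conf/local.conf. This is only a sketch; the exact values depend on your BSP and distro choices:

```
# conf/local.conf (sketch)
# Install a package manager and its database into the image
EXTRA_IMAGE_FEATURES += "package-management"
# Choose the package format used for the feed: rpm, deb, or ipk
PACKAGE_CLASSES = "package_ipk"
```

With these set, an image build produces packages under tmp/deploy, bitbake package-index generates the feed index, and the distro-feed-configs recipe generates the on-device config files pointing at wherever you expose that directory.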

Config management, network boot, and update solutions

Krak went on to discuss configuration management tools such as CFEngine, Puppet, Chef, and Ansible. “These are very common in the enterprise server world if you need to manage a fleet of servers,” said Krak. “Some apply workflows to embedded devices. You install a golden image on all your devices, and then set up connectivity and trust between the CM server and the device. You can then script the configuration.”

Krak also surveyed solutions for more complex projects in which the application extends beyond a single binary. “You may be developing a specialized piece of hardware for a very specific use case or perhaps you depend on some custom kernel options,” said Krak.

Network booting is useful here because you can “deploy all the resources necessary to boot your device,” said Krak. “On boot, the system fetches the Linux kernel device tree and file system, so you just need to reboot the device to update the software. The setup can be complex, but it has the advantage of being easily extended to boot multiple devices.”

Typical network booting schemes, such as PXE boot with PXELINUX, use a TFTP server set up on a laptop where you put the build artifacts you want to deploy. Alternatively, you can script it using an NFS root filesystem.
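As one concrete (and hypothetical) way to provide the TFTP side on a laptop, dnsmasq can serve the boot artifacts; the interface name and paths below are assumptions to adjust for your own network:

```
# /etc/dnsmasq.conf fragment (sketch): serve PXE boot artifacts over TFTP
interface=eth0
enable-tftp
tftp-root=/srv/tftp
# Hand PXE clients the bootloader to chain-load
dhcp-boot=pxelinux.0
```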

A final alternative for complex systems is to use an update solution such as Mender, rauc, or swupdate. “You can use these early in the development process to deploy your builds,” said Krak. “If you build the same device in production, you can use the same software to test it throughout the development process, which builds confidence. Some use image-based updates, which is nice because your devices are stateless, which simplifies testing. Updaters fit well into development workflow and make it easier to integrate build artifacts. They often have features to avoid bricking devices.”

As a developer for Mender.io, Krak is most familiar with Mender, which provides an A/B image update strategy. “You have two copies of the OS and you do image-based updates so you always update the whole system,” explained Krak. You can watch the complete presentation below.

[youtube https://www.youtube.com/watch?v=rCDZVjHHC6o]

Aliases: DIY Shell Commands

Aliases, in the context of the Linux shell, are commands you build yourself by packing them with combinations of other instructions that are too long or too hard to remember.

You create an alias by using the word alias, then the name of the command you want to create, an equal sign (=), and then the Bash command(s) you want your alias to run. For example, ls in its base form does not colorize its output, making it difficult to distinguish between directories, files, and links. You can build a new command that shows colors by making an alias like this:

alias lc='ls --color=auto'

where lc is the name you have picked for your new command. When creating aliases, be sure to check that the name you picked isn’t already in use, or you may override an existing command. In this case, lc stands for “list (with) color”. Notice there is no space in front of or behind the =. Finally, you have the regular Bash command(s) you want to run when lc is executed. In this case, the ls command with the --color option.
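One quick way to run that name check is the type built-in, which reports any alias, function, built-in, or binary already answering to a given name:

```shell
# Does anything already answer to 'lc'? If not, the name is safe to claim.
type lc 2>/dev/null || echo "lc is free to use"

# Define the alias, then print its definition to confirm it took.
alias lc='ls --color=auto'
alias lc    # in Bash, prints: alias lc='ls --color=auto'
```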

After defining your alias, every time you type lc, the contents of the current directory will be shown in color.

But, you may think, “my ls command already lists files in different colors!” That is because most Linux distros come with some aliases already set up for you.

Aliases you (probably) already have

Indeed, you can use the alias instruction without any options to see what aliases you already have. These will vary by distro, but some typical preset aliases are:

  • alias ls='ls --color=auto': You already saw this one above. The auto modifier of the --color option tells ls to use color when standard output is connected to a terminal. That is, the output of ls is going to show up in a terminal window or a text screen, instead of, say, being piped to a file. Other alternatives for --color are always and never.
  • alias cp='cp -i': The -i option stands for interactive. Sometimes, when you use cp you may inadvertently overwrite an existing file. By using the -i, cp will ask you before clobbering anything.
  • alias free='free -m': Using -m with free, you can see how much free memory you have and how much your applications are using, in megabytes instead of the default kibibytes. This makes the output of free easier to read for a human.

There may be more (or fewer, or even none), but regardless of what your distribution comes with, you can always use the base form (vs. the aliased form) of a command with the \ modifier. For example:

\free

will execute free without the -m option, and

\ls

will execute ls without the --color=auto option.

If you want to get rid of or modify the preset aliases for good, note that they live in the global .bashrc file, which hangs out in our old haunt, the /etc/skel directory.

Aliases for muscle memory

Distro designers try their best to predict which aliases are going to be useful for you. But every user is different and comes from a different background. If you are new to GNU+Linux, it may be because you are coming from another system, and the basic commands vary from shell to shell. If you come from a Windows/MS-DOS background, you may want to define an alias like

alias dir='ls'

to list files or directories.

Likewise,

alias copy='cp'
alias move='mv'

may also come in handy, at least until you get used to Linux’s new lexicon.

The other problem occurs when mistakes become ingrained in your muscle memory, so you always mistype some words the same way. I, for instance, have great difficulty typing admnis-adminsi-A-D-M-I-N-I-S-T-R-A-T-I-ON (phew!) at speed. That is why some users create aliases like

alias sl='ls'

and

alias gerp='echo "You did it *again*!"; grep'

Although we haven’t formally introduced grep yet, in its most basic form, it looks for a string of characters in a file or a set of files. It’s one of those commands that you will tend to use A LOT once you get to grips with it, as those ingrained mistyping habits that force you to type the instruction twice every time get annoying really quickly.

Another thing to note in the gerp example is that it is not a single instruction, but two. The first one (echo "You did it *again*!") prints out a message reminding you that you misspelled the grep command, then there is a semicolon (;) that separates one instruction from the other. Finally, you’ve got the second command (grep) that does the actual grepping.

Using gerp on my system to search for the lines containing the word “alias” in /etc/skel/.bashrc, the output looks like this:

$ gerp -R alias /etc/skel/.bashrc
You did it *again*!
alias ls='ls --color=auto'
alias grep='grep --colour=auto'
alias egrep='egrep --colour=auto'
alias fgrep='fgrep --colour=auto'
alias cp="cp -i"
alias df='df -h'
alias free='free -m'
alias np='nano -w PKGBUILD'
alias more=less
shopt -s expand_aliases

Running commands sequentially as part of an alias or, even better, chaining commands so that one command can use the results coughed up by another, is getting us perilously close to Bash scripting. Scripting has been in the making for quite some time in this series, and we’ll start covering it in the very next article.
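As a small preview, an alias can also chain commands with a pipe (|), so one command consumes the other’s output:

```shell
# Count the entries in the current directory:
# ls -A1 lists one name per line, and wc -l counts those lines.
alias count='ls -A1 | wc -l'
```

In a directory containing three files, count prints 3.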

For the time being, if you want to get rid of an alias you temporarily set up in a running terminal, use the unalias command:

unalias gerp

If you want to make your aliases permanent, you can drop them into the .bashrc file you have in your home directory. This is the same thing we did with custom environment variables in last week’s article.
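For example, the following appends the lc alias from earlier to ~/.bashrc and loads it into the current shell. Note the >>: a single > would overwrite the file:

```shell
# Persist the alias by appending it to ~/.bashrc (>> appends; > would clobber!)
echo "alias lc='ls --color=auto'" >> ~/.bashrc

# Pick up the change in the current shell without opening a new terminal
source ~/.bashrc
```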

See you next time!

New Ebook Offers Comprehensive Guide to Open Source Compliance

The Linux Foundation has released the second edition of Open Source Compliance in the Enterprise by Ibrahim Haddad, which offers organizations a practical guide to using open source code and participating in open source communities while complying with both the spirit and the letter of open source licensing.

This fully updated ebook — with new contributions from Shane Coughlan and Kate Stewart — provides detailed information on issues related to the licensing, development, and reuse of open source software. The new edition also includes all new chapters on OpenChain, which focuses on increasing open source compliance in the supply chain, and SPDX, which is a set of standard formats for communicating the components, licenses, and copyrights of software packages.

“Open source compliance is the process by which users, integrators, and developers of open source observe copyright notices and satisfy license obligations for their open source software components,” Haddad states in the book.

This 200+ page book encompasses the entire process of open source compliance, including an introduction on how to establish an open source management program, a description of relevant roles and responsibilities, an overview of common compliance tools and processes, and all new material to help navigate mergers and acquisitions. It offers proven best practices as well as practical checklists to help those responsible for compliance activities create their own processes and policies.

Essential topics covered in this updated ebook include:

  • An introduction to open source compliance
  • Compliance roles and responsibilities
  • Building a compliance program
  • Best practices in compliance management
  • Source code scanning tools

To learn more about the benefits of open source compliance and how to achieve it, download the free ebook today!

This article originally appeared at The Linux Foundation

Get the Skills You Need to Monitor Systems and Services with Prometheus

Open source software isn’t just transforming technology infrastructure around the world, it is also creating profound opportunities for people with relevant skills. From Linux to OpenStack to Kubernetes, employers have called out significant skills gaps that make it hard for them to find people fluent with cutting-edge tools and platforms. The Linux Foundation not only offers self-paced training options for widely known tools and platforms, such as Linux and Git, but also offers options specifically targeting the rapidly growing cloud computing ecosystem. The latest offering in this area is Monitoring Systems and Services with Prometheus (LFS241).

Prometheus is an open source monitoring system and time series database that is especially well suited for monitoring dynamic cloud environments. It contains a powerful query language and data model in addition to integrated alerting and service discovery support. The new course is specifically designed for software engineers and systems administrators wanting to learn how to use Prometheus to gain better insights into their systems and services.

Why is monitoring so crucial for today’s cloud stacks and environments? Because the metrics these monitoring tools provide allow administrators to see and anticipate potential problems, keep performance tuned, and more. Monitoring tools like Prometheus can also generate automated alerts, helping administrators respond to issues in real time.

The Site Reliability Engineering book covering Google’s key site reliability tools notes: “The idea of treating time-series data as a data source for generating alerts is now accessible to everyone through open source tools like Prometheus.”

As is true for most monitoring tools, Prometheus provides detailed and rich dashboard views of system and platform performance. Prometheus is also 100 percent open source and community-driven. All components are available under the Apache 2 License on GitHub.

Announced in November, this training course includes 20 to 25 hours of course material covering many of the tool’s major features, best practices, and use cases. Upon completion of this course, students will be able to monitor their systems and services effectively with Prometheus. The course covers the following topics:

  • Prometheus architecture
  • Setting up and using Prometheus
  • Monitoring core system components and services
  • Basic and advanced querying
  • Creating dashboards
  • Instrumenting services and writing third-party integrations
  • Alerting
  • Using Prometheus with Kubernetes
  • Advanced operational aspects

Hands-on training makes a big difference, and this course contains 55 labs that can be completed locally on a VM or in the cloud. What do you need in terms of prerequisites? Participants should have basic experience with Linux/Unix system administration and common shell commands, as well as some development experience in Go and/or Python and working with Kubernetes.

“Adoption of the Prometheus monitoring system is growing rapidly, leading to demand for more talent qualified to work with this technology, which is why we decided now is the time to develop this course,” said Clyde Seepersad, General Manager, Training & Certification, The Linux Foundation. “With content developed by Cloud Native Computing Foundation (CNCF), which hosts Prometheus, and Julius Volz, one of the founders of the project, there is no better option than LFS241 for learning the ins and outs of this solution.”

Interested in finding out more about Monitoring Systems and Services with Prometheus (LFS241)? Information and enrollment options for this $199 course are found here.

Phippy + Cloud Native Friends Make CNCF Their Home

In 2016, Deis (now part of Microsoft) platform architect Matt Butcher was looking for a way to explain Kubernetes to technical and non-technical people alike. Inspired by his daughter’s prolific stuffed animal collection, he came up with the idea of “The Children’s Illustrated Guide to Kubernetes.” Thus Phippy, the yellow giraffe and PHP application, along with her friends, were born.

Today, live from the keynote stage at KubeCon + CloudNativeCon North America, Matt and co-author Karen Chu announced Microsoft’s donation and presented the official sequel to the Children’s Illustrated Guide to Kubernetes in their live reading of “Phippy Goes to the Zoo: A Kubernetes Story” – the tale of Phippy and her niece as they take an educational trip to the Kubernetes Zoo.

Read more at CNCF

Demystifying Kubernetes Operators with the Operator SDK: Part 2

In the previous article, we laid the foundation for building a custom operator that can be applied to real-world use cases. In this part of our tutorial series, we are going to create a generic example-operator that manages our apps of Examplekind. We have already used the operator-sdk to build it out and implement the custom code in a repo here. For the tutorial, we will rebuild what is in this repo.

The example-operator will manage our Examplekind apps with the following behavior:  

  • Create an Examplekind deployment if it doesn’t exist using an Examplekind CR spec (for this example, we will use an nginx image running on port 80).

  • Ensure that the pod count is the same as specified in the Examplekind CR spec.

  • Update the Examplekind CR status with the app group and the names of the running pods.

Prerequisites

You’ll want to have the following prerequisites installed or set up before running through the tutorial. These are the prerequisites for installing operator-sdk, as well as a few extras you’ll need.

Initialize your Environment

1. Make sure you’ve got your Kubernetes cluster running by spinning up minikube.

minikube start

2. Create a new folder for your example operator within your Go path.

mkdir -p $GOPATH/src/github.com/linux-blog-demo
cd $GOPATH/src/github.com/linux-blog-demo

3. Initialize a new example-operator project within the folder you created using the operator-sdk.

operator-sdk new example-operator
cd example-operator

What just got created?

By running the operator-sdk new command, we scaffolded out a number of files and directories for our defined project. See the project layout for a complete description; for now, here are some important directories to note:

  • pkg/apis – contains the APIs for our CR. Right now this is relatively empty; the commands that follow will create our specific API and CR for Examplekind.
  • pkg/controller – contains the Controller implementations for our Operator, and specifically the custom code for how we reconcile our CR (currently this is somewhat empty as well).
  • deploy/ – contains generated K8s yaml deployments for our operator and its RBAC objects. The folder will also contain deployments for our CR and CRD, once they are generated in the steps that follow.  

Create a Custom Resource and Modify it

4. Create the Custom Resource and its API using the operator-sdk.

operator-sdk add api --api-version=example.kenzan.com/v1alpha1 --kind=Examplekind

What just got created?

Under pkg/apis/example/v1alpha1, a new generic API was created for Examplekind in the file examplekind_types.go.

Under deploy/crds, two new K8s yamls were generated:

  • example_v1alpha1_examplekind_crd.yaml – a new CustomResourceDefinition defining our Examplekind object so Kubernetes knows about it.
  • example_v1alpha1_examplekind_cr.yaml – a general manifest for deploying apps of type Examplekind.

A DeepCopy methods library is also generated for copying the Examplekind object.

5. We need to modify the API in pkg/apis/example/v1alpha1/examplekind_types.go with some custom fields for our CR. Open this file in a text editor and add custom fields to the ExamplekindSpec and ExamplekindStatus structs: count, group, image, and port in the spec, and appGroup and podNames in the status (see the completed repo for the exact code).

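As a rough sketch, with field names and types inferred from the CR manifest and status discussion later in this tutorial (check the completed repo for the exact code), the additions look something like this:

```go
// Hypothetical sketch of the extra fields; the real definitions live in
// pkg/apis/example/v1alpha1/examplekind_types.go. The json tags map the
// struct fields onto the keys used in the CR manifest.
package main

import "fmt"

type ExamplekindSpec struct {
	Count int32  `json:"count"`
	Group string `json:"group"`
	Image string `json:"image"`
	Port  int32  `json:"port"`
}

type ExamplekindStatus struct {
	AppGroup string   `json:"appGroup"`
	PodNames []string `json:"podNames"`
}

func main() {
	// Mirror the example CR manifest applied later in this tutorial
	spec := ExamplekindSpec{Count: 3, Group: "Demo-App", Image: "nginx", Port: 80}
	fmt.Printf("%s runs %d replicas of %s on port %d\n",
		spec.Group, spec.Count, spec.Image, spec.Port)
}
```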

The variables in these structs are used to generate the data structures in the yaml spec for the Custom Resource, as well as variables we can later display in getting the status of the Custom Resource.  

6. After modifying the examplekind_types.go, regenerate the code.

operator-sdk generate k8s

What just got created?

You always want to run the operator-sdk generate command after modifying the API in the _types.go file. This will regenerate the DeepCopy methods.

Create a New Controller and Write Custom Code for it

7. Now add a controller to your operator.  

operator-sdk add controller --api-version=example.kenzan.com/v1alpha1 --kind=Examplekind

What just got created?

Among other code, a pkg/controller/examplekind/examplekind_controller.go file was generated. This is the primary code running our controller; it contains a Reconcile loop where custom code can be implemented to reconcile the Custom Resource against its spec.

8. Replace the examplekind_controller.go file with the one in our completed repo. The new file contains the custom code that we’ve added to the generated skeleton.

Wait, what was in the custom code we just added?  

If you want to know what is happening in the code we just added, read on. If not, you can skip to the next section to continue the tutorial.

To break down what we are doing in our examplekind_controller.go, let’s first go back to what we are trying to accomplish:

  1. Create an Examplekind deployment if it doesn’t exist

  2. Make sure our count matches what we defined in our manifest

  3. Update the status with our group and podnames.

To achieve these things, we’ve created three methods: one to get pod names, one to create labels for us, and one to create a deployment.

In getPodNames(), we are using the core/v1 API to get the names of pods and appending them to a slice.

In labelsForExampleKind(), we are creating a label to be used later in our deployment. The operator name will be passed into this as a name value.

In newDeploymentForCR(), we are creating a deployment using the apps/v1 API. The label method is used here to pass in a label. It uses whatever image we specify in our manifest as you can see below in Image: m.Spec.Image. Replicas for this deployment will also use the count field we specified in our manifest.

Then, in our main Reconcile() method, we check to see if our deployment exists. If it does not, we create a new one using the newDeploymentForCR() method. If for whatever reason a deployment cannot be created, an error is printed to the logs.

In the same Reconcile() method, we are also making sure that the deployment replica field is set to our count field in the spec of our manifest.

And we are getting a list of our pods that matches the label we created.

We are then passing the pod list into the getPodNames() method and making sure that the podNames field in our ExamplekindStatus (in examplekind_types.go) is set to the podNames list.

Finally, we are making sure the AppGroup in our ExamplekindStatus (in examplekind_types.go) is set to the Group field in our Examplekind spec (also in examplekind_types.go).
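Stripped of the Kubernetes client machinery, the decision flow of the Reconcile() method described above can be sketched like this (purely illustrative, not the controller’s actual code):

```go
package main

import "fmt"

// reconcileAction mimics the control flow of the Reconcile() loop:
// create the deployment if it is missing, scale it if the replica
// count drifts from the CR spec, and otherwise do nothing.
func reconcileAction(deploymentExists bool, currentReplicas, desiredCount int32) string {
	if !deploymentExists {
		return "create deployment"
	}
	if currentReplicas != desiredCount {
		return fmt.Sprintf("scale deployment to %d replicas", desiredCount)
	}
	return "nothing to do"
}

func main() {
	fmt.Println(reconcileAction(false, 0, 3)) // no deployment yet
	fmt.Println(reconcileAction(true, 1, 3))  // replica count drifted
	fmt.Println(reconcileAction(true, 3, 3))  // spec satisfied
}
```

In the real controller, each branch also requeues the request so the loop keeps the cluster converged on the spec.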

Deploy your Operator and Custom Resource

We could run the example-operator as Go code locally outside the cluster, but here we are going to run it inside the cluster as its own Deployment, alongside the Examplekind apps it will watch and reconcile.

9. Kubernetes needs to know about your Examplekind Custom Resource Definition before creating instances, so go ahead and apply it to the cluster.

kubectl create -f deploy/crds/example_v1alpha1_examplekind_crd.yaml

10. Check to see that the custom resource definition is deployed.

kubectl get crd

11. We will need to build the example-operator as an image and push it to a repository. For simplicity, we’ll create a public repository under your account on Docker Hub.

  a. Go to https://hub.docker.com/ and login

  b. Click Create Repository

  c. Leave the namespace as your username

  d. Enter the repository as “example-operator”

  e. Leave the visibility as Public.

  f. Click Create

12. Build the example-operator.

operator-sdk build [Dockerhub username]/example-operator:v0.0.1

13. Push the image to your repository on Dockerhub (this command may require logging in with your credentials).

docker push [Dockerhub username]/example-operator:v0.0.1

14. Open up the deploy/operator.yaml file that was generated during the build. This is a manifest that will run your example-operator as a Deployment in Kubernetes. We need to change the image so it is the same as the one we just pushed.

  a. Find image: REPLACE_IMAGE

  b. Replace with image: [Dockerhub username]/example-operator:v0.0.1

15. Set up Role-based Authentication for the example-operator by applying the RBAC manifests that were previously generated.

kubectl create -f deploy/service_account.yaml

kubectl create -f deploy/role.yaml

kubectl create -f deploy/role_binding.yaml

16. Deploy the example-operator.

kubectl create -f deploy/operator.yaml

17. Check to see that the example-operator is up and running.

kubectl get deploy

18. Now we’ll deploy several instances of the Examplekind app for our operator to watch. Open up the deploy/crds/example_v1alpha1_examplekind_cr.yaml deployment manifest. Update fields so they appear as below, with name, count, group, image and port. Notice we are adding fields that we defined in the spec struct of our pkg/apis/example/v1alpha1/examplekind_types.go.

apiVersion: "example.kenzan.com/v1alpha1"
kind: "Examplekind"
metadata:
 name: "kenzan-example"
spec:
 count: 3
 group: Demo-App
 image: nginx
 port: 80

19. Apply the Examplekind app deployment.

kubectl apply -f deploy/crds/example_v1alpha1_examplekind_cr.yaml

20. Check that an instance of the Examplekind object exists in Kubernetes.  

kubectl get Examplekind

21. Let’s describe the Examplekind object to see if our status now shows as expected.

kubectl describe Examplekind kenzan-example

Note that the Status describes the AppGroup the instances are a part of (“Demo-App”), as well as enumerates the Podnames.

22. Within the deploy/crds/example_v1alpha1_examplekind_cr.yaml, change the count to be 1 pod. Apply the deployment again.

kubectl apply -f deploy/crds/example_v1alpha1_examplekind_cr.yaml

23. Based on the operator reconciling against the spec, you should now have one instance of kenzan-example.

kubectl describe Examplekind kenzan-example

Well done. You’ve successfully created an example-operator and become familiar with all the pieces and parts needed in the process. You may even have a few ideas about stateful applications whose management you could automate for your organization, moving away from manual intervention. Take a look at the following links to build on your Operator knowledge:

Toye Idowu is a Platform Engineer at Kenzan Media.