
Linux Kernel Maintainer Summit


September 12, 2019

Corinthia Hotel Lisbon

1099-031 Lisbon

Portugal

The Linux Kernel Maintainer Summit brings together the world’s leading core kernel developers to discuss the state of the existing kernel and plan the next development cycle. This is an invite-only event.

Linux Kernel Summit technical tracks are offered at Linux Plumbers Conference 2019 and are open to all LPC attendees. More information, including how to register, will be available in the coming months.



Xen Summit

The Xen Project Developer and Design Summit brings together the Xen Project’s community of developers and power users for their annual conference. The conference is about sharing ideas and the latest developments, sharing experience, planning, collaborating, and, above all, having fun and meeting the community that defines the Xen Project.



Ampersands and File Descriptors in Bash

In our quest to examine all the clutter (&, |, ;, >, <, {, [, (, ), ], }, etc.) that is peppered throughout most chained Bash commands, we have been taking a closer look at the ampersand symbol (&).

Last time, we saw how you can use & to push processes that may take a long time to complete into the background. But, the &, in combination with angle brackets, can also be used to pipe output and input elsewhere.

In the previous tutorials on angle brackets, you saw how to use > like this:

ls > list.txt

to pipe the output from ls to the list.txt file.

Now we see that this is really shorthand for

ls 1> list.txt

And that 1, in this context, is a file descriptor that points to the standard output (stdout).

In a similar fashion 2 points to standard error (stderr), and in the following command:

ls 2> error.log

all error messages are piped to the error.log file.

To recap: 1> is the standard output (stdout) and 2> the standard error output (stderr).

There is a third standard file descriptor, 0<, the standard input (stdin). You can see it is an input because the arrow (<) is pointing into the 0, while for 1 and 2, the arrows (>) are pointing outwards.
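
As a quick illustration (reusing the list.txt file created above with ls), these two commands are equivalent: both feed the file into sort through stdin:

sort 0< list.txt
sort < list.txt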

What are the standard file descriptors good for?

If you are following this series in order, you have already used the standard output (1>) several times in its shorthand form: >.

Things like stderr (2) are also handy when, for example, you know that your command is going to throw an error, but the message Bash prints is not useful and you don’t need to see it. If you want to make a directory in your home directory, for example:

mkdir newdir

and if newdir/ already exists, mkdir will show an error. But why would you care? (Okay, there are some circumstances in which you may care, but not always.) At the end of the day, newdir will be there one way or another for you to fill up with stuff. You can suppress the error message by pushing it into the void, which is /dev/null:

mkdir newdir 2> /dev/null

This is not just a matter of “let’s not show ugly and irrelevant error messages because they are annoying,” as there may be circumstances in which an error message may cause a cascade of errors elsewhere. Say, for example, you want to find all the .service files under /etc. You could do this:

find /etc -iname "*.service"

But it turns out that on most systems, many of the lines spat out by find show errors because a regular user does not have read access rights to some of the folders under /etc. It makes reading the correct output cumbersome and, if find is part of a larger script, it could cause the next command in line to bork.

Instead, you can do this:

find /etc -iname "*.service" 2> /dev/null

And you get only the results you are looking for.

A Primer on File Descriptors

There are some caveats to having separate file descriptors for stdout and stderr, though. If you want to store the output in a file, doing this:

find /etc -iname "*.service" 1> services.txt

would work fine because 1> means “send standard output, and only standard output (NOT standard error) somewhere“.

But herein lies a problem: what if you *do* want to keep a record within the file of the errors along with the non-erroneous results? The instruction above won’t do that because it ONLY writes the correct results from find, and

find /etc -iname "*.service" 2> services.txt

will ONLY write the errors.
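
Of course, nothing stops you from keeping the two streams apart by giving each its own file. Something like this (errors.log is just an illustrative name) stores the clean results in one file and the error messages in another:

find /etc -iname "*.service" 1> services.txt 2> errors.log

But what if you want the errors and the regular output together in one file?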

How do we get both? Try the following command:

find /etc -iname "*.service" &> services.txt

… and say hello to & again!

We have been saying all along that stdin (0), stdout (1), and stderr (2) are file descriptors. A file descriptor is a special construct that points to a channel to a file, either for reading, or writing, or both. This comes from the old UNIX philosophy of treating everything as a file. Want to write to a device? Treat it as a file. Want to write to a socket and send data over a network? Treat it as a file. Want to read from and write to a file? Well, obviously, treat it as a file.

So, when managing where the output and errors from a command go, treat the destination as a file. Hence, when you open destinations to read from or write to them, they all get file descriptors.
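
You can actually see this for yourself: on Linux, the file descriptors a process has open show up under /proc. Listing those of your current shell ($$ expands to the shell’s PID) will show at least 0, 1, and 2, usually pointing at your terminal device:

ls -l /proc/$$/fd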

This has interesting effects. You can, for example, pipe contents from one file descriptor to another:

find /etc -iname "*.service" 1> services.txt 2>&1

This sends stdout to the file services.txt, and then sends stderr to the same place stdout is pointing, which is that file.

And there it is again: the &, signaling to Bash that 1 is the destination file descriptor.

Another thing about the standard file descriptors is that, when you pipe from one to another, the order in which you do this is a bit counterintuitive. Take the command above, for example. It looks like it has been written the wrong way around. You may be reading it like this: “pipe the output to a file and then pipe errors to the output.” It would seem the error output comes too late and is sent when 1 is already done.

But that is not how file descriptors work. A file descriptor is not a placeholder for the file, but for the input and/or output channel to the file. In this case, when you do 1> services.txt, you are saying “open a write channel to services.txt and leave it open“. 1 is the name of the channel you are going to use, and it remains open until the end of the line.

If you still think it is the wrong way around, try this:

find /etc -iname "*.service" 2>&1 1>services.txt

Notice how it doesn’t work: errors get piped to the terminal and only the non-erroneous output (that is, stdout) gets pushed to services.txt.

That is because Bash processes redirections from left to right, setting them up before find even starts producing output. Think about it like this: when Bash gets to 2>&1, stdout (1) is still a channel that points to the terminal, so Bash makes stderr (2) point to the terminal, too.

Only then does Bash see that you want to open stdout as a channel to the services.txt file. Stdout (1) is re-pointed at the file, but stderr (2) keeps the copy it took of the old destination, which is why the errors still end up on your screen while the non-erroneous results go into the file.

By contrast, in

find /etc -iname "*.service" 1>services.txt 2>&1

1 is pointing at services.txt right from the beginning, so when 2>&1 comes along, anything that pops into 2 gets sent to the same place 1 is pointing, which is its final resting place in services.txt, and that is why it works.

In any case, as mentioned above &> is shorthand for “both standard output and standard error“, that is, 2>&1.
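
In other words, assuming the same services.txt destination, these two lines should leave you with the same contents in the file:

find /etc -iname "*.service" &> services.txt
find /etc -iname "*.service" > services.txt 2>&1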

This is probably all a bit much, but don’t worry about it. Re-routing file descriptors here and there is commonplace in Bash command lines and scripts. And, you’ll be learning more about file descriptors as we progress through this series. See you next week!


Gain Valuable Kubernetes Skills and Certification with Linux Foundation Training

Quick, what was the most dominant technology skill requested by IT firms in 2018? According to a study from job board Dice, Kubernetes skills dominated among IT firm requests, and this news followed similar findings released last year from job board Indeed. The Dice report, based on its available job postings, found that Kubernetes was heavily requested by IT recruiters as well as hiring managers. As SDX Central has reported: “Indeed’s work found that Kubernetes had the fastest year-over-year surge in job searches among IT professionals. It also found that related job postings increased 230 percent between September 2017 and September 2018.”

The demand for Kubernetes skills is so high that companies of all sizes are reporting skills gaps and citing difficulty finding people who have the required Kubernetes and containerization skills. That spells opportunity for those who gain Kubernetes expertise, and the good news is that you have several approachable and inexpensive options for getting trained as well as certified.

Certification Options

Certification is the gold standard in the Kubernetes arena. On that front, last year the Cloud Native Computing Foundation launched its Certified Kubernetes Application Developer exam and a Kubernetes for Developers (LFD259) course ($299). These offerings complement the Certified Kubernetes Administrator program ($300). CNCF, working in partnership with edX, also offers an Introduction to Kubernetes course that is absolutely free, and requires a time commitment of only two to three hours a week for four to five weeks. You can register here, and find out more about the Kubernetes Fundamentals (LFS258) and Developer courses here.

The Kubernetes Fundamentals course comes with extensive course materials, and you can get a free downloadable chapter from the materials here. For those new to Kubernetes, the course covers architecture, networking setup and much more. If you are new to Kubernetes, you can also find a free webinar here, where Kubernetes Founder Craig McLuckie provides an introduction to the Kubernetes project and how it began when he was working at Google.

“As Kubernetes has grown, so has the demand for application developers who are knowledgeable about building on top of Kubernetes,” said Dan Kohn, Executive Director of the Cloud Native Computing Foundation. “The CKAD exam allows developers to certify their proficiency in designing and building cloud native applications for Kubernetes, while also allowing companies to confidently hire high-quality teams.”

According to the Cloud Native Computing Foundation: “With the majority of container-related job listings asking for proficiency in Kubernetes as an orchestration platform, the CKAD program will help expand the pool of Kubernetes experts in the market, thereby enabling continued growth across the broad set of organizations using the technology.”

December’s KubeCon + CloudNativeCon conference in Seattle was a sold-out event that has now ushered in a wealth of free Kubernetes-focused content that you can access. In fact, more than 100 lightning talks, keynotes, and technical sessions from the event have already been posted online, with more information here.

You can watch many videos from KubeCon on YouTube. You’ll find videos sharing basics and best practices, explaining how to integrate Kubernetes with various platforms, and discussing the future of Kubernetes. You can also hear talks in person at these upcoming conferences:

KubeCon Barcelona, May 20-23

KubeCon Shanghai, June 24-26

KubeCon San Diego, November 18-21

Kubernetes is spreading its reach rapidly for many reasons, including its extensible architecture and healthy open source community, but some still feel that it is too difficult to use. The resources found here—many of them free—will help you move toward mastery of one of today’s most compelling technology architectures.


5 Streaming Audio Players for Linux

As I work, throughout the day, music is always playing in the background. Most often, that music is in the form of vinyl spinning on a turntable. But when I’m not in purist mode, I’ll opt to listen to audio by way of a streaming app. Naturally, I’m on the Linux platform, so the only tools I have at my disposal are those that play well on my operating system of choice. Fortunately, plenty of options exist for those who want to stream audio to their Linux desktops.

In fact, Linux has a number of solid offerings for music streaming, and I’ll highlight five of my favorite tools for this task. A word of warning: not all of these players are open source. But if you’re okay running a proprietary app on your open source desktop, you have some really powerful options. Let’s take a look at what’s available.

Spotify

Spotify for Linux isn’t some dumbed-down, half-baked app that crashes every other time you open it and lacks the full range of features found in the macOS and Windows versions. In fact, the Linux version of Spotify is exactly the same as the one you’ll find on other platforms. With the Spotify streaming client you can listen to music and podcasts, create playlists, discover new artists, and so much more. And the Spotify interface (Figure 1) is quite easy to navigate and use.

You can install Spotify either using snap (with the command sudo snap install spotify), or from the official repository, with the following commands:

  • sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 931FF8E79F0876134EDDBDCCA87FF9DF48BF1C90

  • echo deb http://repository.spotify.com stable non-free | sudo tee /etc/apt/sources.list.d/spotify.list

  • sudo apt-get update

  • sudo apt-get install spotify-client

Once installed, you’ll want to log into your Spotify account, so you can start streaming all of the great music to help motivate you to get your work done. If you have Spotify installed on other devices (and logged into the same account), you can dictate to which device the music should stream (by clicking the Devices Available icon near the bottom right corner of the Spotify window).

Clementine

Clementine is one of the best music players available on the Linux platform. Clementine not only allows users to play locally stored music, but also to connect to numerous streaming audio services, Spotify and Internet radio among them.

There are two caveats to using Clementine. The first is that you must be using the most recent version (as the build available in some repositories is out of date and won’t install the necessary streaming plugins). Second, even with the most recent build, some streaming services won’t function as expected. For example, with Spotify, you’ll only have access to your Top Tracks (and not your playlists or the ability to search for songs).

With Clementine’s Internet radio streaming, you’ll find musicians and bands you’ve never heard of (Figure 2), and plenty of them to tune into.

Odio

Odio is a cross-platform, proprietary app (available for Linux, MacOS, and Windows) that allows you to stream internet music stations of all genres. Radio stations are curated from www.radio-browser.info and the app itself does an incredible job of presenting the streams for you (Figure 3).

Odio makes it very easy to find unique Internet radio stations and even add those you find and enjoy to your library.

Currently, the only way to install Odio on Linux is via Snap. If your distribution supports snap packages, install this streaming app with the command:

sudo snap install odio

Once installed, you can open the app and start using it. There is no need to log into (or create) an account. Odio is very limited in its settings. In fact, it only offers the choice between a dark or light theme in the settings window. However, as limited as it might be, Odio is one of your best bets for playing Internet radio on Linux.

Streamtuner2

Streamtuner2 is an outstanding Internet radio station GUI tool. With it you can stream music from the likes of:

  • Internet radio stations

  • Jamendo

  • MyOggRadio

  • Shoutcast.com

  • SurfMusic

  • TuneIn

  • Xiph.org

  • YouTube

Streamtuner2 offers a nice (if slightly outdated) interface that makes it quite easy to find and stream your favorite music. The one caveat with Streamtuner2 is that it’s really just a GUI for finding the streams you want to hear. When you find a station, double-click on it to open the app associated with the stream. That means you must have the necessary apps installed for the streams to play; if you don’t, you can’t play them. Because of this, you’ll spend a good amount of time figuring out what apps to install for certain streams (Figure 4).

VLC

VLC has been, for a very long time, dubbed the best media playback tool for Linux. That’s with good reason, as it can play just about anything you throw at it. Included in that list is streaming radio stations. Although you won’t find VLC connecting to the likes of Spotify, you can head over to Internet-Radio, click on a playlist and have VLC open it without a problem. And considering how many internet radio stations are available at the moment, you won’t have any problem finding music to suit your tastes. VLC also includes tools like visualizers, equalizers (Figure 5), and more.

The only caveat to VLC is that you do have to have a URL for the Internet radio station you wish to hear, as the tool itself doesn’t curate stations. But with those links in hand, you won’t find a better media player than VLC.
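
Once you have a station URL or a downloaded playlist file, you can also hand it straight to VLC from a terminal (the address below is just a placeholder for whatever link you grabbed):

vlc http://example.com/station.m3u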

Always More Where That Came From

If one of these five tools doesn’t fit your needs, I suggest you open your distribution’s app store and search for one that will. There are plenty of tools to make streaming music, podcasts, and more not only possible on Linux, but easy.


How Zowe Is Bringing the Mainframe into the Modern Age

The goal of the Zowe project is to create a framework that enables developers to bring their latest tools to work on the mainframe. IBM, Broadcom, and Rocket Software worked together and open sourced their own technologies to achieve this. According to the project website, Zowe provides various components, including an app framework and a command-line interface, which lets users interact with the mainframe remotely and use integrated development environments, shell commands, Bash scripts, and other tools. It also provides utilities and services to help developers quickly learn how to support and build z/OS applications.

“What Zowe allows both end users and developers to do is enable a newer generation of users and developers to have access to all the critical data within all these financial, retail, and insurance systems living on the mainframe,” she said.

The fact is that almost all of the critical mainframe applications were written decades ago. Some of these companies are more than 100 years old, and they are using mainframe systems for their mission-critical workloads. So, what Zowe is trying to achieve is to open source some of these technologies to help companies bring their existing workloads into the modern day. This will also allow them to attract a new generation of users and developers.

Read more and watch the complete video interview at The Linux Foundation.


Pine64 to Launch Open Source Phone, Laptop, Tablet, and Camera

At FOSDEM last weekend, California-based Linux hacker board vendor Pine64 previewed an extensive lineup of open source hardware it intends to release in 2019. Surprisingly, only two of the products are single board computers.

The Linux-driven products will include a PinePhone development kit based on the Allwinner A64. There will be a second, more consumer-focused Pinebook laptop — a Rockchip RK3399 based, 14-inch Pinebook Pro — and an Allwinner A64-based, 10.1-inch PineTab tablet. Pine64 also plans to release an Allwinner S3L-driven IP camera system called the CUBE and a Roshambo Retro-Gaming case that supports Pine64’s Rock64 and RockPro64, as well as the Raspberry Pi.

The SBC entries are a Pine H64 Model B that will replace the larger, but similarly Allwinner H6 based, Model A version and will add WiFi/Bluetooth. There will also be a third rev of the popular, RK3328 based Rock64 board that adds Power-over-Ethernet support.

The launch of the phone, laptop, tablet, and camera represents the most ambitious expansion to date by an SBC vendor to new open source hardware form factors. As we noted last month in our hacker board analysis piece, community-based SBC projects are increasingly specializing to survive in today’s Raspberry Pi dominated market. In a Feb. 1 Tom’s Hardware story, RPi Trading CEO Eben Upton confirmed our speculation that a next generation Raspberry Pi 4 that moves beyond 40nm fabrication will not likely ship until 2020. That offers a window of opportunity for other SBC vendors to innovate.

It’s a relatively short technical leap — but a larger marketing hurdle — to move from a specialized SBC to a finished consumer electronics or industrial device. Still, we can expect a few more vendors to follow Pine64’s lead in building on their SBCs, Linux board support packages, and hacker communities to launch more purpose-built consumer electronics and industrial gear.

Already, community projects have begun offering a more diverse set of enclosures and other accessories to turn their boards into mini-PCs, IoT gateways, routers, and signage systems. Meanwhile, established embedded board vendors are using their community-backed SBC platforms as a foundation for end-user products. Acer subsidiary Aaeon, for example, has spun off its UP boards into a variety of signage systems, automation controllers, and AI edge computing systems.

So far, most open source, Linux phone and tablet alternatives have emerged from open source software projects, such as Mozilla’s Firefox OS, the Ubuntu project’s Ubuntu Phone, and the Jolla phone. Most of these alternative mobile Linux projects have either failed, faded, or never taken off.

Some of the more recent Linux phone projects, such as the PiTalk and ZeroPhone, have been built around the Raspberry Pi platform. The PinePhone and PineTab would be even more open source given that the mainboards ship with full schematics.

Unlike many hacker board projects, the Pine64 products offer software tied to mainline Linux. This is easier to do with the Rockchip designs, but it’s been a slower road to mainline for Allwinner. Work by Armbian and others has now brought several Allwinner SoCs up to speed.

Working from established hardware and software platforms may offer a stronger foundation for launching mobile Android alternatives than a software-only project. “The idea, in principle, is to build convergent device-ecosystems (SBC, Module, Laptop / Tablet / Phone / Other Devices) based on SOCs that we already have developers engaged with and invested in,” says the Pine64 blog announcement.

Here’s a closer look at Pine64’s open hardware products for 2019:

PinePhone Development Kit — Based on the quad -A53 Allwinner A64 driven SoPine A64 module, the PinePhone will run mainline Linux and support alternative mobile platforms such as UBPorts, Maemo Leste, PostmarketOS, and Plasma Mobile. It can also run Unity 8 and KDE Plasma with Lima. This upgradable, modular phone kit will be available soon in limited quantity and will be spun off later this year or in 2020 into an end-user phone with a target price of $149.

The PinePhone kit includes 2GB LPDDR3, 32GB eMMC, and a small 1440 x 720-pixel LCD screen. There’s a 4G LTE module with Cat 4 150Mb downlink, a battery, and 2- and 5MP cameras. Other features include WiFi/BT, microSD, HDMI, MIPI I/O, sensors, and privacy hardware switches.

Pinebook Pro — Like many of the upcoming Pine64 products, the original Pinebooks are limited edition developer systems. The Pinebook Pro, however, is aimed at a broader audience that might be considering a Chromebook. This second-gen Pro laptop will not replace the $99 and up 11.6-inch version of the Pinebook. The original 14-inch version may receive an upgrade to make it more like the Pro.

The $199 Pinebook Pro advances from the quad-core, Cortex-A53 Allwinner A64 to a hexa-core -A53 and -A72 Rockchip RK3399. It supports mainline Linux and BSD.

The more advanced features include a higher-res 14-inch, 1080p screen, now with IPS, as well as twice the RAM (4GB LPDDR4). It also offers four times the storage at 64GB, with a 128GB option for registered developers. Other highlights include USB 3.0 and 2.0 ports, a USB Type-C port that supports DP-like 4K@60Hz video, a 10,000 mAh battery, and an improved 2-megapixel camera. There’s also an option for an M.2 slot that supports NVMe storage.

PineTab — The PineTab is like a slightly smaller, touchscreen-enabled version of the first-gen Pinebook, but with the keyboard optional instead of built-in. The magnetically attached keyboard has a trackpad and can fold up to act as a screen cover.

Like the original Pinebooks, the PineTab runs Linux or BSD on an Allwinner A64 with 2GB of LPDDR3 and 16GB eMMC. The 10-inch IPS touchscreen is limited to 720p resolution. Other features include WiFi/BT, USB, micro-USB, microSD, speaker, mic, and dual cameras.

Pine64 notes that touchscreen-ready Linux apps are currently in short supply. The PineTab will soon be available for $79, or $99 with the keyboard.

The CUBE — This “early concept” IP camera runs on the Allwinner S3L — a single-core, Cortex-A7 camera SoC. It ships with a MIPI-CSI connected, 8MP Sony IMX179 CMOS camera with an M12 mount for adding different lenses.

The CUBE offers 64MB or 128MB RAM, a WiFi/BT module, plus a 10/100 Ethernet port with Power-over-Ethernet (PoE) support. Other features include USB, microSD, and 32-pin GPIO. Target price: about $20.

Roshambo Retro-Gaming — This retro gaming case and accessory set from Pine64’s Chinese partner Roshambo will work with Pine64’s Rock64 SBC, which is based on the quad -A53 Rockchip RK3328, or its RK3399 based RockPro64. It can also accommodate a Raspberry Pi. The $30 Super Famicom inspired case will ship with an optional $13 gaming controller set. Other features include buttons, switches, a SATA slot, and cartridge-shaped 128GB ($25) or 256GB ($40) SSDs.

Rock64 Rev 3 — Pine64 says it will continue to focus primarily on SBCs, although the only 2019 products it is disclosing are updates to existing designs. The Rock64 Rev 3 improves upon Pine64’s RK3328-based RPi lookalike, which it says has been its most successful board yet. New features include PoE, RTC, improved RPi 2 GPIO compatibility, and support for high-speed microSD cards. Pricing stays the same.

Pine H64 Model B — The Pine H64 Model B will replace the currently unavailable Pine H64 Model A, which shipped in limited quantities. The board trims down to a Rock64 (and Raspberry Pi) footprint, enabling use of existing cases, and adds WiFi/BT. It sells for $25 (1GB LPDDR3 RAM), $35 (2GB), and $45 (3GB).


And, Ampersand, and & in Linux

Take a look at the three previous articles, and you will see that understanding the glue that joins them together is as important as recognizing the tools themselves. Indeed, tools tend to be simple and understanding what mkdir, touch, and find do (make a new directory, update a file, and find a file in the directory tree, respectively) in isolation from each other is easy.

But understanding what

mkdir test_dir 2>/dev/null || touch images.txt && find . -iname "*jpg" > backup/dir/images.txt &

does, and why we would write a command line like that is a whole different story.

It pays to look more closely at the signs and symbols that live between the commands. It will not only help you better understand how things work, but will also make you more proficient in chaining commands together to create compound instructions that will help you work more efficiently.

In this article and the next, we’ll be looking at the ampersand (&) and its close friend, the pipe (|), and see how they can mean different things in different contexts.

Behind the Scenes

Let’s start simple and see how you can use & as a way of pushing a command to the background. The instruction:

cp -R original/dir/ backup/dir/

copies all the files and subdirectories in original/dir/ into backup/dir/. So far so simple. But if that turns out to be a lot of data, it could tie up your terminal for hours.

However, using:

cp -R original/dir/ backup/dir/ &

pushes the process to the background courtesy of the final &. This frees you to continue working on the same terminal or even to close the terminal and still let the process finish up. Do note, however, that if the process is asked to print stuff out to the standard output (like in the case of echo or ls), it will continue to do so, even though it is being executed in the background.

When you push a process into the background, Bash will print out a number. This number is the PID, or Process ID. Every process running on your Linux system has a unique process ID, and you can use this ID to pause, resume, and terminate the process it refers to. This will become useful later.
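
In practice, it looks something like this (the job number in brackets and the PID will, of course, be different on your machine):

$ cp -R original/dir/ backup/dir/ &
[1] 14444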

In the meantime, there are a few tools you can use to manage your processes as long as you remain in the terminal from which you launched them:

  • jobs shows you the processes running in your current terminal, whether in the background or the foreground. It also shows you a number associated with each job (different from the PID) that you can use to refer to each process:

    $ jobs
    [1]-  Running    cp -i -R original/dir/* backup/dir/ &
    [2]+  Running    find . -iname "*jpg" > backup/dir/images.txt &
    
  • fg brings a job from the background to the foreground so you can interact with it. You tell fg which process you want to bring to the foreground with a percentage symbol (%) followed by the number associated with the job that jobs gave you:

    $ fg %1 # brings the cp job to the foreground
    cp -i -R original/dir/* backup/dir/
    

    If the job was stopped (see below), fg will start it again.

  • You can stop a job in the foreground by holding down [Ctrl] and pressing [Z]. This doesn’t abort the action, it pauses it. When you start it again (with fg or bg), it will continue from where it left off…

    …Except for sleep: the time a sleep job is paused still counts once sleep is resumed. This is because sleep takes note of the clock time when it was started, not how long it was running. This means that if you run sleep 30 and pause it for more than 30 seconds, once you resume, sleep will exit immediately.

  • The bg command pushes a job to the background and resumes it again if it was paused:
    $ bg %1
    [1]+ cp -i -R original/dir/* backup/dir/ &
    

As mentioned above, you won’t be able to use any of these commands if you close the terminal from which you launched the process or if you change to another terminal, even though the process will still continue working.

To manage background processes from another terminal you need another set of tools. For example, you can tell a process to stop from a different terminal with the kill command:

kill -s STOP <PID>

And you know the PID because that is the number Bash gave you when you started the process with &, remember? Oh! You didn’t write it down? No problem. You can get the PID of any running process with the ps (short for processes) command. So, using

ps | grep cp

will show you all the processes containing the string “cp“, including the copying job we are using for our example. It will also show you the PID:

$ ps | grep cp
14444 pts/3 00:00:13 cp

In this case, the PID is 14444, and it means you can stop the background copying with:

kill -s STOP 14444

Note that STOP here does the same thing as [Ctrl] + [Z] above, that is, it pauses the execution of the process.

To start the paused process again, you can use the CONT signal:

kill -s CONT 14444

There is a good list of many of the main signals you can send a process here. According to that, if you wanted to terminate the process, not just pause it, you could do this:

kill -s TERM 14444

If the process refuses to exit, you can force it with:

kill -s KILL 14444

This is a bit dangerous, but very useful if a process has gone crazy and is eating up all your resources.

In any case, if you are not sure you have the correct PID, add the x option to ps:

$ ps x | grep cp
14444 pts/3 D 0:14 cp -i -R original/dir/Hols_2014.mp4   original/dir/Hols_2015.mp4 original/dir/Hols_2016.mp4   original/dir/Hols_2017.mp4 original/dir/Hols_2018.mp4 backup/dir/ 

And you should be able to see what process you need.

Finally, there is a nifty tool that combines ps and grep all into one:

$ pgrep cp
8 18 19 26 33 40 47 54 61 72 88 96 136 339 6680 13735 14444

This lists all the PIDs of processes that contain the string “cp“.

In this case, it isn’t very helpful, but this…

$ pgrep -lx cp
14444 cp

… is much better.

In this case, -l tells pgrep to show you the name of the process and -x tells pgrep you want an exact match for the name of the command. If you want even more details, try pgrep -ax command.
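
With our running example, that would print something along these lines (again, your PID will differ):

$ pgrep -ax cp
14444 cp -i -R original/dir/Hols_2014.mp4 original/dir/Hols_2015.mp4 original/dir/Hols_2016.mp4 original/dir/Hols_2017.mp4 original/dir/Hols_2018.mp4 backup/dir/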

Next time

Putting an & at the end of commands has helped us explain the rather useful concept of processes working in the background and foreground and how to manage them.

One last thing before we leave: background processes that run detached from any terminal are what are known as daemons in UNIX/Linux parlance. So, if you had heard the term before and wondered what they were, there you go.

As usual, there are more ways to use the ampersand within a command line, many of which have nothing to do with pushing processes into the background. To see what those uses are, we’ll be back next week with more on the matter.


Blockchain Skills Are in Demand: Take Advantage of Hyperledger Training Discounts Now

If you ask some people, they’ll tell you that blockchain technology — an entirely reimagined approach to records, ledgers, and authentication that is helping to protect trust in transactions — is as dramatic as the creation of the Internet. Countless organizations across industries are aligning around it and there is huge demand for training and certification focused on the tools and building blocks that drive the blockchain ecosystem.

The Linux Foundation’s Hyperledger Project has created many of these key tools and is providing leadership around these complex technologies. All of which is why it’s worth noting that the Linux Foundation is now offering 30 percent off all of its Hyperledger training and certification offerings.

At top business schools ranging from Berkeley to Wharton, students are flocking to classes on blockchain and cryptocurrency. Additionally, job postings related to blockchain and Hyperledger are taking off, with knowledge in these areas translating into opportunity.

“In the span of only a year or two, blockchain has gone from something seen only as related to cryptocurrencies to a necessity for businesses across a wide variety of industries,” said The Linux Foundation’s Clyde Seepersad, General Manager, Training & Certification, in introducing the course Blockchain: Understanding its Uses and Implications. “Providing a free introductory course designed not only for technical staff but business professionals will help improve understanding of this important technology, while offering a certificate program through edX will enable professionals from all over the world to clearly demonstrate their expertise.”

Hyperledger offers numerous other training and certification options, and its certifications are respected across industries. The project has just introduced a new certification for Hyperledger Fabric and also offers certification for Hyperledger Sawtooth—with both platforms playing central roles in the blockchain ecosystem.

The following courses are key to building your blockchain and Hyperledger skills:

Blockchain: Understanding Its Uses and Implications (LFS170) — Understand exactly what a blockchain is, its impact and potential for change around the world, and analyze use cases in technology, business, and enterprise products and institutions.

Blockchain for Business – An Introduction to Hyperledger Technologies (LFS171) — This course offers a primer to blockchain and distributed ledger technologies. Learn how to start building blockchain applications with Hyperledger frameworks.

Hyperledger Fabric Fundamentals (LFD271) — Teaches the fundamental concepts of blockchain and distributed ledger technologies.

Hyperledger Fabric Administration (LFS272) — This course will provide a deeper understanding of the Hyperledger Fabric network and how to administer and interact with chaincode, manage peers, operate basic CA-level functions, and much more. Here is how you can get trained to pass the Certified Hyperledger Fabric Administrator (CHFA) exam.

Hyperledger Sawtooth Administration (LFS273) — This course offers insight into the installation, configuration, component lifecycle, and permissioning-related information of a Hyperledger Sawtooth network. Here is how you can get trained to pass the Hyperledger Sawtooth Administration exam.

Note that Hyperledger offers bundle prices if you are interested in both an exam preparation course and certification. These courses are also well worth looking into and available at the discounted rates through February 11, 2019.

Now is a great time to learn about Hyperledger and blockchain technology. Take advantage of these training discounts and start learning from the blockchain leaders now.