
ST Spins Its First Linux-Powered Cortex-A SoC

STMicroelectronics announced its first Cortex-A SoC and first Linux- and Android-driven processor. The STM32MP1 SoC is intended to ease the transition for developers moving from its STM32 microcontroller unit (MCU) family to more complex embedded systems. Development boards based on the SoC will be available in April.

Aimed at industrial, consumer, smart home, health, and wellness applications, the STM32MP1 features dual, 650MHz Cortex-A7 cores running a new “mainlined, open-sourced” OpenSTLinux distro with Yocto Project and OpenEmbedded underpinnings. There’s also a 209MHz Cortex-M4 core with an FPU, MPU, and DSP instructions. The Cortex-M4 is supported by an enhanced version of ST’s STM32Cube development tools, which now supports the Cortex-A7 cores in addition to the M4 (see below).

Like most of NXP’s recent embedded SoCs, including the single- or dual-core Cortex-A7 i.MX7 and its newer, Cortex-A53 i.MX8M and i.MX8M Mini, the STM32MP1 is a hybrid Cortex-A/M design intended in ST’s words to “perform fast processing and real-time tasks on a single chip.” Hybrid Cortex-A7/M4 SoCs are also available from Renesas, Marvell, and MediaTek, which has developed a custom-built MT3620 SoC as the hardware foundation for Microsoft’s Azure Sphere IoT framework.

As the market leader in MCUs, ST has made a larger leap from its comfort zone than these other semiconductor vendors. NXP is also a leading MCU vendor, but it’s been crafting Linux-powered Cortex-A SoCs since long before it changed its name from Freescale. The STM32MP1 launch continues a trend of MCU technology companies reaching out to the Linux community, such as Arm’s new Mbed Linux distro and Pelion IoT Platform, which orchestrates Cortex-M and Cortex-A devices under a single IoT platform.

Inside the STM32MP1

The STM32MP1 is equipped with 32KB instruction and data caches, as well as a 256KB L2 cache. ST also supplies an optional Vivante 3D GPU with support for OpenGL ES 2.0 and 24-bit parallel RGB displays at up to WXGA (1280×800) at 60fps.

The SoC supports a 2-lane MIPI-DSI interface running at 1Gbps and offers native support for Linux and Android, as well as application frameworks such as Qt and Crank Software’s Storyboard GUI. While the GPU is pretty run-of-the-mill for Cortex-A7 SoCs, it’s a giant leap from the perspective of MCU developers trying to bring up modern HMI displays.

Three SoC models are available: one with the 3D GPU, MIPI-DSI, and 2x CAN FD interfaces; one with 2x CAN FD only; and one with neither the GPU nor CAN I/O.

The STM32MP1 is touted for its rolling 10-year longevity support and heterogeneous architecture, which lets developers halt the Cortex-A7 and run only on the Cortex-M4 to reduce power consumption by 25 percent. From this mode, “going to Standby further cuts power by 2.5k times — while still supporting the resumption of Linux execution in 1 to 3 seconds, depending on the application,” says ST. The SoC includes a PMIC and other power circuitry such as buck and boost converters.

For security, the SoC provides Arm TrustZone, cryptography, hash, secure boot, anti-tamper pins, and a real-time clock. RAM support includes 32/16-bit, 533MHz DDR3, DDR3L, LPDDR2, LPDDR3. Flash support includes SD, eMMC, NAND, and NOR.

Peripherals include Cortex-A7-linked GbE, 3x USB 2.0, I2C, and multiple UART and SPI links. Analog I/O connected to the Cortex-M4 includes 2x 16-bit ADCs and 2x 12-bit DACs, along with 29x timers, 3x watchdogs, LDOs, and up to 176 GPIOs.

OpenSTLinux, STM32Cube, and starter kits

The new OpenSTLinux distribution “has already been reviewed and accepted by the Linux community: Linux Foundation, Yocto Project, and Linaro,” says ST. The Linux BSP includes a mainline kernel, drivers, boot chain, and Linaro’s OP-TEE (Trusted Execution Environment) security stack. It also includes Wayland/Weston, GStreamer, and ALSA libraries.

Three Linux software development packages are available: a quick Starter package with STM32CubeMP1 samples; a Dev package with a Yocto Project SDK that lets you add your own Linux code; and an OpenEmbedded-based Distrib package that also lets you create your own OpenSTLinux-based Linux distro. ST has collaborated with Timesys on the Linux BSPs and with Witekio to port Android to the STM32MP1.

STM32 developers can “easily find their marks” by using the familiar STM32Cube toolset to control both the Cortex-M4 and Cortex-A7 chips. The toolset includes GCC-based STM32CubeProgrammer and STM32CubeMX IDEs, which “include the DRAM interface tuning tool for easy configuration of the DRAM sub-system,” says ST.

Finally, ST is supporting its chip with four development boards: the entry-level STM32MP157A-DK1 and STM32MP157C-DK2 and the higher-end STM32MP157A-EV1 and STM32MP157C-EV1. All the boards offer GPIO connectors for the Raspberry Pi and Arduino Uno V3.

The DK1/DK2 boards are equipped with 4GB DDR3L, as well as USB Type-C, USB Type-A OTG, HDMI, and MIPI-DSI. You also get GbE and WiFi/Bluetooth, and a 4-inch, VGA capacitive touch panel, among other features.

The more advanced A-EV1 and C-EV1 boards support up to 8GB DDR3L, 32GB eMMC v5.0, a microSD slot, and SPI and NAND flash. They provide most of the features of the DK boards, as well as CAN, camera support, SAI, SPDIF, digital mics, analog audio, and much more. They also supply 4x USB host ports and a micro-USB port. A 5.5-inch, 720×1280 touchscreen is available.


DevOps Training for Network Engineers

Linux Foundation training has announced a new course designed to provide network engineers with the skills necessary to start applying DevOps practices and leverage their expertise in a DevOps environment.

In the new DevOps for Network Engineers course, you’ll learn how to navigate your role in the CI/CD cycle, find common ground, and use key tools to contribute effectively in areas like connectivity, network performance tuning, security, and other aspects of network management within a DevOps environment.

Network automation is becoming the standard in data centers, with major implications for network engineers. This online, self-paced course will help you become familiar with the tools needed to integrate your skills into the DevOps/Agile process.

Course highlights include:

  • How to integrate into a DevOps/Agile environment

  • Commonly used DevOps tools

  • How DevOps teams collaborate on projects

  • How to confidently work with software and configuration files in version control

  • How to confidently apply Agile principles in an organization

Learn more about the DevOps for Network Engineers course now.


Today is a Good Day to Learn Python

Get started learning Python with this tutorial from our archives.

The cool thing about Linux and FOSS is also an aggravating thing, which is that sometimes there’s too much of a good thing. There is such an abundance of goodies that it can be overwhelming. So I am here to help you decide which programming language you should learn next, and that is Python. Oh, yes, it is.

Why Python? I like it because it is clean and straightforward. It’s a great introduction to object-oriented languages. The Python world is beginner-friendly and, as a general-purpose language, Python can be used for all sorts of things: quick simple scripts, games, Web development, Raspberry Pi — anything you want. It is also in demand by employers if you’re thinking of a career.

There are numerous excellent Python books and tons of online documentation. I want to show off Python’s coolness for beginners so you will get excited and go “Yes! I too must love Python!”

But what about all the other languages? Don’t worry, they won’t get lonesome, and everything you learn in Python is applicable to many other languages as well.

What Stuff Means

I think most of us learn terminology better with hands-on exercises, but there are four things to know from the start.

The first is that Python is strongly typed. As you study Python, you will see this repeated a gazillion times. What does this even mean? Who uses a typewriter? Fortunately, it has nothing to do with typewriters, but rather with how Python handles data types. All computer programs are made of two things: data, and operations on that data. Data comes in different types, and the types determine how your programming language will handle them. Data types include characters or strings, which are literal numbers and letters, like names and addresses; integers and floating-point numbers that are used in calculations; Boolean values (true/false); and arrays, which are lists of data all of the same data type.

Python enforces data types and relies on you to define them. Weakly typed languages decide for themselves what your data types are, so the data type can change depending on context.

For example, most any programming language will add the integers 1 + 2 + 3. A weakly typed language may also let you add integers and text strings, for example 5 + "helloworld". If you try to do this in Python, your code will fail and you will get an error message. Weakly typed languages don’t do this randomly; this is a feature intended to add speed and flexibility by not requiring you to define your data types.

However, weak typing can lead to strange errors. One of the most common errors involves converting strings of numbers to integers when you really want them to be a literal string, like 221B Baker Street, 10,000 Maniacs, or 23andMe. In my modest opinion, it is better to learn the discipline and structure of a strongly typed language, and then try out weakly typed languages after you have experience and good grounding in the basics.
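You can see the strong typing for yourself in the interpreter; a minimal sketch:

```python
# Python refuses to add an integer to a string -- no silent type conversion.
try:
    result = 5 + "helloworld"
except TypeError as err:
    result = "error: %s" % err

print(result)  # an error message, not a mangled value

# When you really want mixed types, you convert explicitly:
combined = str(5) + "helloworld"
print(combined)  # 5helloworld
```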

The second thing to know is what the heck is object oriented programming (OOP)? An object is a clump of data and procedures grouped into a single reusable entity. If you were coding a car racing game you might have a car object, an obstacle object, and a driver object. So, you say, objects are just like functions, right? Yes. If you already understand how to organize code into properly grouped functions and variables, then you already understand OOP. There are finer points to OOP such as classes, inheritance, and polymorphism; again, if you think in terms of sensible organization these things are easier to understand.
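Here is a minimal sketch of that car object (the class name and attributes are invented for illustration):

```python
class Car(object):
    """A clump of data (attributes) and procedures (methods) in one reusable entity."""

    def __init__(self, driver, speed):
        self.driver = driver   # data bundled with the object
        self.speed = speed

    def accelerate(self, amount):
        """A procedure that operates on the object's own data."""
        self.speed += amount
        return self.speed

# Every object created from the class is independent:
red_car = Car("Carla", 100)
blue_car = Car("Rosa", 90)
red_car.accelerate(20)
print(red_car.speed)   # 120
print(blue_car.speed)  # 90, unaffected by the other car
```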

Third, white space has meaning in Python. You have to get your white spaces right or your code won’t work.
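For example, indentation alone decides which statements belong to a loop:

```python
total = 0
for number in [1, 2, 3]:
    total += number   # indented four spaces: runs once per loop iteration
print(total)          # not indented: runs once, after the loop; prints 6
```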

Fourth, Python is an interpreted language. You don’t have to compile and link your Python programs. If you’re experienced with the Bash shell, then you already know about interpreted languages, how fast they are to code in, and how you can test out your programs interactively before writing them into a script.

The downside to interpreted languages is the overhead of the interpreter. Usually, programs written in compiled languages run faster. However, you can link your Python programs to functions written in many other languages, including C/C++, Lisp, Fortran, Java, and Perl, and many more so you can mix and match to get the results you want.
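As a taste of this mixing, Python's standard ctypes module can call functions in a compiled C library directly. This sketch assumes a typical glibc-based Linux system where the C math library is available as libm:

```python
import ctypes
from ctypes.util import find_library

# Load the C math library (falling back to the usual glibc soname).
libm = ctypes.CDLL(find_library("m") or "libm.so.6")

# Declare the C signature: double sqrt(double)
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]

print(libm.sqrt(9.0))  # 3.0, computed by compiled C code
```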

Try It

Python is included in most Linux distributions, and usually the python package installs the base components and Python command interpreter. The text in bold is what you type.

$ python
Python 2.7.12 (default, Nov 19 2016, 06:48:10) [GCC 5.4.0 20160609] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> help()

Welcome to Python 2.7!  This is the online help utility.

If this is your first time using Python, you should definitely check out
the tutorial on the Internet at http://docs.python.org/2.7/tutorial/.

Enter the name of any module, keyword, or topic to get help on writing
Python programs and using Python modules.  To quit this help utility and
return to the interpreter, just type "quit".

To get a list of available modules, keywords, or topics, type "modules",
"keywords", or "topics".  Each module also comes with a one-line summary
of what it does; to list the modules whose summaries contain a given word
such as "spam", type "modules spam".

help> topics

Here is a list of available topics.  Enter any topic name to get more help.

ASSERTION           DEBUGGING           LITERALS            SEQUENCEMETHODS2
ASSIGNMENT DELETION LOOPING SEQUENCES
[...]
help> quit

Of course we must do the traditional Hello World! Strings must be enclosed in single or double quotes.

>>> 'Hello, world!'
'Hello, world!'
>>> hell = "Hello, world!"
>>> hell
'Hello, world!'

Now create the simplest possible Python script, save it as hello.py, and run it from your normal Linux shell:

#!/usr/bin/python
print "Hello World!"

carla@studio:~$ python hello.py
Hello World!

Let’s go back to the Python interpreter and play with data types.

>>> 2 + 2
4
>>> 2 + foo
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'foo' is not defined
>>> foo = 5
>>> 2 + foo
7

Now try a short interactive script. It asks you to input your age, responds according to the age you type, and checks if your response is in the correct data type. This is a great little script to tweak in different ways. For example, you could limit the acceptable age range, limit the number of incorrect tries, and get creative with your responses. Note that raw_input is for Python 2.x, and 3.x uses input.

Watch your indentation; the indented lines must be four spaces. If you are using a proper code editor, it should take care of this for you.

#!/usr/bin/python
while True:
    try:
        age = int(raw_input("Please enter your age: "))
    except ValueError:
        print("I'm so very sorry, that does not compute. Please try again.")
        continue
    else:
        break

if age >= 18:
    print("Very good, you are old enough to know better, but not too old to do it anyway.")
else:
    print("Sorry, come back when you're 18 and try again.")

Modules and Learning

There are a great number of Python modules, and you can learn to write your own. The key to writing good Python programs and making them do what you want is learning where to find modules. Start at Python.org because of its abundant documentation and good organization. Plan to spend a lot of time there, because it contains the best and most authoritative information. It even has an interactive shell you can practice with.
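As a quick taste of the standard modules, here is the json module turning a Python dictionary into portable text and back:

```python
import json

record = {"name": "Tux", "language": "Python"}

encoded = json.dumps(record)   # encode the dictionary as a JSON string
decoded = json.loads(encoded)  # decode it back into a Python dictionary

print(encoded)
print(decoded["name"])  # Tux
```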

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.


Logical & in Bash

One would think you could dispatch & in two articles. Turns out you can’t. While the first article dealt with using & at the end of commands to push them into the background and then diverged into explaining process management, the second article saw & being used as a way to refer to file descriptors, which led us to seeing how, combined with < and >, you can route inputs and outputs from and to different places.

This means we haven’t even touched on & as an AND operator, so let’s do that now.

& is a Bitwise Operator

If you are at all familiar with binary operations, you will have heard of AND and OR. These are bitwise operations that operate on individual bits of a binary number. In Bash, you use & as the AND operator and | as the OR operator:

AND

0 & 0 = 0
0 & 1 = 0
1 & 0 = 0
1 & 1 = 1

OR

0 | 0 = 0
0 | 1 = 1
1 | 0 = 1
1 | 1 = 1

You can test this by ANDing any two numbers and outputting the result with echo:

$ echo $(( 2 & 3 ))     # 00000010 AND 00000011 = 00000010
2
$ echo $(( 120 & 97 ))  # 01111000 AND 01100001 = 01100000
96

The same goes for OR (|):

$ echo $(( 2 | 3 ))     # 00000010 OR 00000011 = 00000011
3
$ echo $(( 120 | 97 ))  # 01111000 OR 01100001 = 01111001
121

Four things about this:

  1. You use (( ... )) to tell Bash that what goes between the double brackets is some sort of arithmetic or logical operation. (( 2 + 2 )), (( 5 % 2 )) (% being the modulo operator) and ((( 5 % 2 ) + 1)) (equals 2) will all work.
  2. Like with variables, $ extracts the value so you can use it.
  3. For once spaces don’t matter: ((2+3)) will work the same as (( 2+3 )) and (( 2 + 3 )).
  4. Bash only operates with integers. Trying to do something like this (( 5 / 2 )) will give you “2”, and trying to do something like this (( 2.5 & 7 )) will result in an error. Then again, using anything but integers in a bitwise operation (which is what we are talking about now) is generally something you wouldn’t do anyway.
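You can check the integer-only behavior from point 4 directly:

```shell
echo $(( 5 / 2 ))   # integer division: prints 2, the fractional part is dropped
echo $(( 5 % 2 ))   # the modulo operator gives the remainder: prints 1
```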

TIP: If you want to check what your decimal number would look like in binary, you can use bc, the command-line calculator that comes preinstalled with most Linux distros. For example, using:

 bc <<< "obase=2; 97" 

will convert 97 to binary (the o in obase stands for output), and …

 bc <<< "ibase=2; 11001011" 

will convert 11001011 to decimal (the i in ibase stands for input).

&& is a Logical Operator

Although it uses the same logic principles as its bitwise cousin, Bash’s && operator can only render two results: 1 (“true”) and 0 (“false”). For Bash, any number not 0 is “true” and anything that equals 0 is “false.” What is also false is anything that is not a number:

$ echo $(( 4 && 5 ))  # Both non-zero numbers, both true = true
1
$ echo $(( 0 && 5 ))  # One number is zero, i.e. false = false
0
$ echo $(( b && 5 ))  # One operand is not a number, i.e. false = false
0

The OR counterpart for && is || and works exactly as you would expect.
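For example, mirroring the && tests above:

```shell
echo $(( 4 || 0 ))   # one operand is non-zero (true), so the result is 1
echo $(( 0 || 0 ))   # both operands are false, so the result is 0
```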

All of this is simple enough… until it comes to a command’s exit status.

&& is a Logical Operator for Command Exit Status

As we have seen in previous articles, as a command runs, it outputs error messages. But, more importantly for today’s discussion, it also outputs a number when it ends. This number is called an exit code, and if it is 0, it means the command did not encounter any problem during its execution. If it is any other number, it means something, somewhere, went wrong, even if the command completed.

So 0 is good, any other number is bad, and, in the context of exit codes, 0/good means “true” and everything else means “false.” Yes, this is the exact contrary of what you saw in the logical operations above, but what are you gonna do? Different contexts, different rules. The usefulness of this will become apparent soon enough.
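The built-in commands true and false exist precisely to produce these two exit codes, so you can watch the convention in action:

```shell
true      # a command that always succeeds
echo $?   # prints 0: success, i.e. "true"
false     # a command that always fails
echo $?   # prints 1: failure, i.e. "false"
```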

Moving on.

Exit codes are stored temporarily in the special variable ? — yes, I know: another confusing choice. Be that as it may, remember from our article about variables that you read the value of a variable using the $ symbol. So, if you want to know whether a command ran without a hitch, you have to read ? as soon as the command finishes and before running anything else.

Try it with:

$ find /etc -iname "*.service"
find: '/etc/audisp/plugins.d': Permission denied
/etc/systemd/system/dbus-org.freedesktop.nm-dispatcher.service
/etc/systemd/system/dbus-org.freedesktop.ModemManager1.service
[etcetera]

As you saw in the previous article, running find over /etc as a regular user will normally throw some errors when it tries to read subdirectories for which you do not have access rights.

So, if you execute…

 echo $? 

… right after find, it will print a 1, indicating that there were some errors.

(Notice that if you were to run echo $? a second time in a row, you’d get a 0. This is because $? would contain the exit code of echo $?, which, supposedly, will have executed correctly. So the first lesson when using $? is: use $? straight away or store it somewhere safe — like in another variable, or you will lose it).
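Stashing the value in your own variable looks like this (using false, whose exit code we can rely on, rather than find, whose result depends on your system):

```shell
false                 # a command that fails
status=$?             # capture the exit code immediately
echo "doing other work"
echo "the failed command returned: $status"   # the saved value survives: 1
```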

One immediate use of ? is to fold it into a list of chained commands and bork the whole thing if anything fails as Bash runs through it. For example, you may be familiar with the process of building and compiling the source code of an application. You can run the steps one after another by hand like this:

$ configure
...
$ make
...
$ make install
...

You can also put all three on one line…

 $ configure; make; make install 

… and hope for the best.

The disadvantage of this is that if, say, configure fails, Bash will still try to run make and make install, even if there is nothing to make or, indeed, install.

The smarter way of doing it is like this:

 $ configure && make && make install

This takes the exit code from each command and uses it as an operand in a chained && operation.

But, and here’s the kicker, Bash knows the whole thing is going to fail if configure returns a non-zero result. If that happens, it doesn’t have to run make to check its exit code, since the result is going to be false no matter what. So, it forgoes make and just passes a non-zero result onto the next step of the operation. And, as configure && make delivers false, Bash doesn’t have to run make install either. This means that, in a long chain of commands, you can join them with &&, and, as soon as one fails, you can save time as the rest of the commands get canceled immediately.
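You can watch the short-circuit happen with commands whose results you already know:

```shell
false && echo "this is never printed"   # false short-circuits the chain
echo $?                                 # prints 1: the chain as a whole failed
true && echo "this is printed"          # true lets the next command run
```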

You can do something similar with ||, the OR logical operator, and make Bash continue processing chained commands if only one of a pair completes.

In view of all this (along with the stuff we covered earlier), you should now have a clearer idea of what the command line we set at the beginning of this article does:

mkdir test_dir 2>/dev/null || touch backup/dir/images.txt && find . -iname "*jpg" > backup/dir/images.txt &

So, assuming you are running the above from a directory for which you have read and write privileges, what does it do and how does it do it? How does it avoid unseemly and potentially execution-breaking errors? Next week, apart from giving you the solution, we’ll be dealing with brackets: curly, curvy and straight. Don’t miss it!


Audiophile Linux Promises Aural Nirvana

Linux isn’t just for developers. I know that might come as a surprise for you, but the types of users that work with the open source platform are as varied as the available distributions. Take yours truly for example. Although I once studied programming, I am not a developer.

The creating I do with Linux is with words, sounds, and visuals. I write books, I record audio, and I create digital images and video. And even though I don’t choose to work with distributions geared toward those specific tasks, they do exist. I also listen to a lot of music. I tend to listen to most of my music via vinyl. But sometimes I want to listen to music not available in my format of choice. That’s when I turn to digital music.

Having a Linux distribution geared specifically toward playing music might not be on the radar of the average user, but to an audiophile, it could be a real deal maker.

This brings us to Audiophile Linux, an Arch Linux-based distribution geared toward, as the name suggests, audiophiles. What makes Audiophile Linux special? First and foremost, it’s optimized for high-quality audio reproduction. To make this possible, Audiophile Linux features:

  • System and memory optimized for quality audio

  • Custom Real-Time kernel

  • Latency under 5ms

  • Direct Stream Digital support

  • Lightweight window manager (Fluxbox)

  • Preinstalled audio and video programs

  • Lightweight OS, free of unnecessary daemons and services

Although Audiophile Linux claims the distribution is easily installed, it’s very much based on Arch Linux, so the installation is nowhere near as easy as, say, Ubuntu. At this point, you might be thinking, “But there’s already Ubuntu Studio, which is as easy to install as Ubuntu, and contains some of the same features!” That may be true, but there are users out there (even those of a more artistic bent) who prefer a decidedly un-Ubuntu distribution. On top of which, Ubuntu Studio would be serious overkill for anyone just looking for high-quality music reproduction. For that, there’s Audiophile Linux.
Let’s install it and see what’s what.

Installation

As I mentioned, Audiophile is based on Arch Linux. Unlike some distributions based on Arch, however, Audiophile Linux doesn’t include a pretty, user-friendly GUI installer. Instead, what you must do is download the ISO image, burn the ISO to either a USB or CD/DVD, and boot from the device. Once booted, you’ll find yourself at a command prompt. Once at that prompt, here are the steps to install.

Create the necessary partition by issuing the command:

fdisk /dev/sdX

where X is the drive letter (discovered with the command fdisk -l). The remaining commands assume the new partition is /dev/sda1.

Type n to create a new partition and then type p to make it a primary partition. When that completes, type w to write the changes. Format the new partition with the command:

mkfs.ext4 /dev/sda1

Mount the new partition with the command:

mount /dev/sda1 /mnt

Finish up the partition with the following commands:

time cp -ax / /mnt
arch-chroot /mnt /bin/bash
cd /etc/apl-files

Install the base packages (and create a username/password) with the command:

./runme.sh

Take care of the GRUB boot loader with the following commands:

grub-install --target=i386-pc /dev/sda
grub-mkconfig -o /boot/grub/grub.cfg

Give the root user a password with the following command:

passwd root

Set your timezone like so (substituting your location):

ln -s /usr/share/zoneinfo/America/Kentucky/Louisville /etc/localtime

Set the hardware clock and autologin with the following commands:

hwclock --systohc --utc
./autologin.sh

Reboot the system with the command:

reboot

It Gets a Bit Dicey Now

There’s a problem related to the pacman update process. If you immediately update the system with the pacman -Suy command, you’ll find Xorg broken and seemingly no way to repair it. This problem has been around for some time now and has yet to be fixed. How do you get around it? First, you need to remove the libxfont package with the command:

sudo pacman -Rc libxfont

That’s not all. There’s another package that must be removed: ffmpeg2.8, which takes Cantata (the Audiophile music player) with it. Issue the command:

sudo pacman -Rc ffmpeg2.8

Now, you can update Audiophile Linux with the command:

sudo pacman -Suy

Once updated, you can finish up the installation with the commands:

sudo pacman -S terminus-font
sudo pacman -S xorg-server
sudo pacman -S firefox

You can then reinstall Cantata with the command:

sudo pacman -S cantata

When this completes, reboot and log into your desktop.

The Desktop

As mentioned earlier, Audiophile Linux opts for the lightweight Fluxbox desktop environment. Although I understand why the developers would want to make use of this desktop (because it’s incredibly lightweight), many users might not enjoy working with such a minimal desktop, and most audiophiles are going to be working with hardware that can tolerate a more feature-rich desktop. If you want to go that route, you can install a desktop like GNOME with the command:

sudo pacman -S gnome

However, if you want to be a purist (and get the absolute most out of this hardware/software combination), stick with the default Fluxbox, especially since you’ll only be using Audiophile Linux for one purpose: listening to music.

Fluxbox uses an incredibly basic interface. Right-click anywhere on the desktop and a menu will appear (Figure 1).

From that menu, you won’t find a lot of applications (Figure 2).

That’s okay, because you only need one: Cantata (listed in the menu as Play music). However, after the installation, Cantata won’t run. Why? Because of a Qt5 problem. To get around this, you need to issue the following commands:

sudo pacman -S binutils
sudo strip --remove-section=.note.ABI-tag /usr/lib64/libQt5Core.so.5

Once you’ve taken care of the above, Cantata will run and you can start playing all of the music you’ve added to the library (Figure 3).

Worth The Hassle?

I have to confess, at first I was fairly certain Audiophile Linux wouldn’t be worth the trouble of getting it up and running … for the singular purpose of listening to music. However, once those tunes started spilling from my speakers, I was sold.

Although the average listener might not notice the difference with this distribution, audiophiles will. The clarity and playback of digital music on Audiophile Linux far exceeded that on both Elementary OS and Ubuntu Linux. So if that appeals to you, I highly recommend giving Audiophile Linux a spin.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Posted on Leave a comment

Zowe Makes Mainframe Evergreen

Mainframes are, and will continue to be, a bedrock for industries and organizations that run mission-critical applications. In one way or another, all of us are mainframe users. Every time you make an online transaction or make a reservation, for example, you are using a mainframe.

According to IBM, corporations use mainframes for applications that depend on scalability and reliability. They rely on mainframes in order to:

  • Perform large-scale transaction processing (thousands of transactions per second)
  • Support thousands of users and application programs concurrently accessing numerous resources
  • Manage terabytes of information in databases
  • Handle large-bandwidth communication

Often when people hear the word mainframe, though, they think of dinosaurs. It’s true mainframes have aged, and one challenge the mainframe community faces is attracting fresh developers who want to use the latest and shiniest technologies.

Zowe milestones

Zowe, a Linux Foundation project under the umbrella of the Open Mainframe Project, is changing all that. Through this project, industry heavyweights including IBM, Rocket Software, and Broadcom came together to modernize mainframes running z/OS.

Read more at The Linux Foundation

Posted on Leave a comment

The Telecom Industry Has Moved to Open Source

The telecom industry is at the heart of the fourth industrial revolution. Whether it’s connected IoT devices or mobile entertainment, the modern economy runs on the Internet.

However, the backbone of networking has been running on legacy technologies. Some telecom companies are centuries old, and they have a massive infrastructure that needs to be modernized.

The great news is that this industry is already at the forefront of emerging technologies. Companies such as AT&T, Verizon, China Mobile, DTK, and others have embraced open source technologies to move faster into the future. And LF Networking is at the heart of this transformation.

“2018 has been a fantastic year,” said Arpit Joshipura, General Manager of Networking at The Linux Foundation, speaking at Open Source Summit in Vancouver last fall. “We have seen a 140-year-old telecom industry move from proprietary and legacy technologies to open source technologies with LF Networking.”

Read more and watch the complete interview at The Linux Foundation

Posted on Leave a comment

Make Container Management Easy With Cockpit

Learn how to use Cockpit for Linux administration tasks in this tutorial from our archives.

If you administer a Linux server, you’ve probably been in search of a solid administration tool. That quest has probably taken you to such software as Webmin and cPanel. But if you’re looking for an easy way to manage a Linux server that also includes Docker, one tool stands above the rest for that particular purpose: Cockpit.

Why Cockpit? Because it includes the ability to handle administrative tasks such as:

  • Connect and manage multiple machines

  • Manage containers via Docker

  • Interact with Kubernetes or OpenShift clusters

  • Modify network settings

  • Manage user accounts

  • Access a web-based shell

  • View system performance information by way of helpful graphs

  • View system services and log files

Cockpit can be installed on Debian, Red Hat, CentOS, Arch Linux, and Ubuntu. Here, I will focus on installing the system on an Ubuntu 16.04 server that already includes Docker.

Out of the list of features, the one that stands out is the container management. Why? Because it makes installing and managing containers incredibly simple. In fact, you might be hard-pressed to find a better container management solution.
With that said, let’s install this solution and see just how easy it is to use.

Installation

As I mentioned earlier, I will be installing Cockpit on an instance of Ubuntu 16.04, with Docker already running. The steps for installation are quite simple. The first thing you must do is log into your Ubuntu server. Next you must add the necessary repository with the command:

sudo add-apt-repository ppa:cockpit-project/cockpit

When prompted, hit the Enter key on your keyboard and wait for the prompt to return. Once you are back at your bash prompt, update apt with the command:

sudo apt-get update

Install Cockpit by issuing the command:

sudo apt-get -y install cockpit cockpit-docker

After the installation completes, it is necessary to start the Cockpit service and then enable it so it auto-starts at boot. To do this, issue the following two commands:

sudo systemctl start cockpit
sudo systemctl enable cockpit

That’s all there is to the installation.
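The steps above can be sketched as one small script for repeat installs. This assumes the same Ubuntu 16.04 target and the PPA named above; the -y flag on add-apt-repository simply skips the Enter prompt, and the function name install_cockpit is my own:

```shell
#!/usr/bin/env bash
# Sketch: install and enable Cockpit with Docker support on Ubuntu 16.04.
# Wrapped in a function so sourcing this file runs nothing.
install_cockpit() {
    set -e
    sudo add-apt-repository -y ppa:cockpit-project/cockpit
    sudo apt-get update
    sudo apt-get -y install cockpit cockpit-docker
    sudo systemctl start cockpit    # start the service now
    sudo systemctl enable cockpit   # and auto-start it at boot
}
# On the target server, run: install_cockpit
```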

Logging into Cockpit

To gain access to the Cockpit web interface, point a browser (that happens to be on the same network as the Cockpit server) to http://IP_OF_SERVER:9090, and you will be presented with a login screen (Figure 1).
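If the login page doesn't load, a quick check from the server itself can confirm something is listening on port 9090. This is a small sketch (the helper name cockpit_up is my own; ss ships with iproute2 on Ubuntu):

```shell
#!/usr/bin/env bash
# Sketch: report whether anything is listening on the Cockpit port.
cockpit_up() {
    local port="${1:-9090}"
    # ss -tln lists listening TCP sockets; look for the port.
    ss -tln 2>/dev/null | grep -q ":${port}\b"
}
# Example: cockpit_up && echo "Cockpit is listening" || echo "not listening"
```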

A word of warning with using Cockpit and Ubuntu. Many of the tasks that can be undertaken with Cockpit require administrative access. If you log in with a standard user, you won’t be able to work with some of the tools like Docker. To get around that, you can enable the root user on Ubuntu. This isn’t always a good idea. By enabling the root account, you are bypassing the security system that has been in place for years. However, for the purpose of this article, I will enable the root user with the following two commands:

sudo passwd root
sudo passwd -u root

NOTE: Make sure you give the root account a very challenging password.

Should you want to revert this change, you only need issue the command:

sudo passwd -l root

With other distributions, such as CentOS and Red Hat, you will be able to log into Cockpit with the username root and the root password, without having to jump through the extra hoops described above.
If you’re hesitant to enable the root user, you can always pull down the images from the server terminal (using the command docker pull IMAGE_NAME, where IMAGE_NAME is the image you want to pull). That would add the image to your Docker server, which can then be managed via a regular user. The only caveat is that the regular user must be added to the Docker group with the command:

sudo usermod -aG docker USER

Where USER is the actual username to be added to the group. Once you’ve done that, log out, log back in, and then restart Docker with the command:

sudo service docker restart

Now the regular user can start and stop the added Docker images/containers without having to enable the root user. The only caveat is that user will not be able to add new images via the Cockpit interface.
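After logging back in, a quick way to verify the group change took effect is to parse the output of id -nG. Here's a small helper sketch (the function name in_group and the example username are my own):

```shell
#!/usr/bin/env bash
# Sketch: check whether a user belongs to a given group.
in_group() {
    local user="$1" group="$2"
    # id -nG prints all group names for the user, space-separated.
    id -nG "$user" 2>/dev/null | tr ' ' '\n' | grep -qx "$group"
}
# Example: in_group alice docker && echo "alice can use Docker"
```

If the check fails right after running usermod, remember the group change only applies to new login sessions.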

Using Cockpit

Once you’ve logged in, you will be treated to the Cockpit main window (Figure 2).

You can go through each of the sections to check on the status of the server, work with users, etc., but we want to go right to the containers. Click on the Containers section to display the currently running containers as well as the available images (Figure 3).

To start an image, simply locate the image and click the associated start button. From the resulting popup window (Figure 4), you can check all the information about the image (and adjust as needed), before clicking the Run button.

Once the image is running, you can check its status by clicking on the entry under the Containers section and then Stop, Restart, or Delete the instance. You can also click Change resource limits and then adjust the Memory limit and/or CPU priority.

Adding new images

If you have logged on as the root user, you can add new images with the help of the Cockpit GUI. From the Containers section, click the Get new image button and then, in the resulting window, search for the image you want to add. Say you want to add the latest official build of CentOS. Type centos in the search field and then, once the search results populate, select the official listing and click Download (Figure 5).

Once the image has downloaded, it will be available to Docker and can be run via Cockpit.
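The downloaded image can, of course, also be started from the terminal rather than the Cockpit GUI. A minimal sketch (the container name test-centos is hypothetical; it idles on sleep so it stays visible as a running container in Cockpit):

```shell
#!/usr/bin/env bash
# Sketch: run the downloaded centos image from the shell instead of Cockpit.
# Wrapped in a function so nothing runs until you call it.
run_centos() {
    # Start a detached container that idles, so Cockpit lists it as running.
    docker run -d --name test-centos centos sleep infinity
    # Clean up later with:
    #   docker stop test-centos && docker rm test-centos
}
```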

As simple as it gets

Managing Docker doesn’t get any easier. Yes, there is a caveat when working with Cockpit on Ubuntu, but if it’s your only option, there are ways to make it work. With the help of Cockpit, you can not only easily manage Docker images, you can do so from any web browser that has access to your Linux server. Enjoy your newfound Docker ease.

Posted on Leave a comment

New Ports Bring Linux to Arm Laptops, Android to the Pi

Like life itself, software wants to be free. In our increasingly open source era, software can more easily disperse into new ecosystems. From open source hackers fearlessly planting the Linux flag on the Sony Playstation back in the aughts to standard Linux apps appearing on Chromebooks and on Android-based Galaxy smartphones (Samsung’s DeX), Linux continues to break down barriers.

The latest Linux-related ports include an AArch64-Laptops project that enables owners of Windows-equipped Arm laptops and tablets to load Ubuntu. There’s also a Kickstarter project to develop a Raspberry Pi friendly version of Google’s low-end Android 9 Pi Go stack. Even Windows is spreading its wings. A third-party project has released a WoA installer that enables a full Windows 10 image to run on the Pi.

Ubuntu to Arm laptops

The practice of replacing Windows with Linux on Intel-based computers has been around for decades, but the arrival of Arm-based laptops has complicated matters. Last year, Microsoft partnered with Qualcomm to release the lightweight Windows 10 S on the Asus NovaGo convertible laptop and the HP Envy x2 and Lenovo Miix 630 2-in-1 tablets, all powered by a Snapdragon 835 SoC.

Reviews have been mixed, with praise for the longer battery life but criticism of sluggish performance. Since the octa-core, 10nm-fabricated Snapdragon 835 is designed to run the Linux-based Android (it also supports embedded Linux), Linux hackers naturally decided they could do better.

As reported by Phoronix, AArch64-Laptops has posted Ubuntu 18.04 LTS images for all three of the above systems. As noted by Liliputing, the early release lacks support for WiFi, on-board storage, or hardware-accelerated graphics, and the touchpad doesn’t work on the Asus NovaGo.

The WiFi and storage issues should be solved in the coming months and accelerated graphics should be theoretically possible thanks to the open source Freedreno GPU driver project, says Phoronix. It’s unclear if AArch64-Laptops can whip up Ubuntu builds for more powerful Arm Linux systems like the Snapdragon 850 based Samsung Galaxy Book 2 and Lenovo Yoga C630.

Liliputing notes that Arm Linux lovers can also try out the Linux-driven, Rockchip RK3399 based Pinebook laptop. Later this year, Pine64 will release a consumer-grade Pinebook Pro.

Android Go to Raspberry Pie

If you like a double helping of pie, have we got a Kickstarter project for you. As reported by Geeky Gadgets, an independent group called RaspberryPi DevTeam has launched a Kickstarter campaign to develop a version of Google’s new Android 9 Pie Go stack for entry-level smartphones that can run on the Raspberry Pi 3.

Assuming the campaign meets its modest $3,382 goal by April 10, there are plans to deliver a usable build by the end of the year. Pledges range from 1 to 499 Euros.

The project will use AOSP-based code from Android 9 Pie Go, which was released last August. Go is designed for low-end phones with only 1GB RAM.

RaspberryPi DevTeam was motivated to launch the project because current Android stacks for the Raspberry Pi “normally have bugs, are unstable and run slow,” says the group. That has largely been true since hackers began attempting the feat four years ago with the quad-core, Cortex-A7 Raspberry Pi 2. Early attempts have struggled to give Android its due on a 1GB RAM SBC, even with the RPi 3B and 3+.

The real-time focused RTAndroid has had the most success, and there have been other efforts like the unofficial, Android 7.1.2 based LineageOS 14.1 for the RPi 3. Last year, an RTAndroid-based, industrial focused emteria.OS stack arrived with more impressive performance.

A MagPi hands-on last summer was impressed with the stack, which it called “the first proper Android release running on a Raspberry Pi 3B+.” MagPi continues: “Finally there’s a proper way to install full Android on your Raspberry Pi.”

Available in free evaluation (registration required) and commercial versions, emteria.OS uses F-Droid as an open source stand-in for Google Play. The MagPi hands-on runs through an installation of Netflix and notes the availability of apps including NewPipe (YouTube), Face Slim (Facebook), and Terminal Emulator.

All these solutions should find it easier to run on next year’s Raspberry Pi 4. Its SoC will move from the current 40nm process to something larger than 7nm, but no larger than 28nm, according to RPi Trading CEO Eben Upton in a Feb. 11 Tom’s Hardware post. The SBC will have “more RAM, a faster processor, and faster I/O,” but will be the same size and price as the RPi 3B+, says the story. Interestingly, it was former Google CEO Eric Schmidt who convinced Upton and his crew to retain the $35 price for the RPi 2. The lesson seems to have stuck.

Windows 10 on RPi 3

As far back as the Raspberry Pi 2, Microsoft announced it would support the platform with its slimmed down Windows 10 IoT, which works better on the new 64-bit RPi 3 models. But why use a crippled version of Windows for low-power IoT when you could use Raspbian?

The full Windows 10 should draw more interest, and that’s what’s promised by the WOA-Project with its new WoA-Installer for the RPi 3 or 3B+. According to Windows Latest, the open source WoA (Windows on Arm) Installer was announced in January following an earlier WoA release for the Lumia 950 phones.

The WoA Installer lets you run Windows 10 Arm 64 on the Pi but comes with no performance promises. The GitHub page notes: “WoA Installer needs a set of binaries, AKA the Core Package, to do its job. These binaries are not mine and are bundled and offered just for convenience…” Good luck!