
Solving License Compliance at the Source: Adding SPDX License IDs

Accurately identifying the license of open source software is important for license compliance. However, determining the license can sometimes be difficult due to missing or ambiguous information. Even when some licensing information is present, the lack of a consistent way of expressing the license can make automating license detection very difficult, requiring significant amounts of manual human effort. Some commercial tools apply machine learning to this problem to reduce false positives and train license scanners, but a better solution is to fix the problem at the upstream source.

In 2013, the U-Boot project decided to use SPDX license identifiers in each source file instead of the GPL v2.0 or later header boilerplate that had been used up to that point. The initial commit message had an eloquent explanation of the reasons behind this transition.
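In practice, an SPDX identifier is a single machine-readable comment near the top of each file, replacing many lines of boilerplate. Here is a minimal sketch; the file name is illustrative, and GPL-2.0+ is the SPDX identifier U-Boot used for "GPL v2.0 or later":

```shell
# Write a small script whose license is declared with one SPDX tag.
cat > hello.sh <<'EOF'
#!/bin/sh
# SPDX-License-Identifier: GPL-2.0+
echo "Hello from an SPDX-tagged file"
EOF

# A license scanner can now find the license with a simple pattern
# match instead of fuzzy-matching paragraphs of boilerplate text.
grep 'SPDX-License-Identifier' hello.sh
```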

Read more at The Linux Foundation


The Linux Foundation: Accelerating Open Source Innovation

The Linux Foundation’s job is to create engines of innovation and enable the gears of those engines to spin faster, said Executive Director Jim Zemlin, in opening remarks at Open Source Summit in Vancouver.

Examples of how the organization is driving innovation across industries can be seen in projects such as Let’s Encrypt, a free, automated certificate authority working to encrypt the entire web; Automotive Grade Linux; Hyperledger; and the new Academy Software Foundation, which is focused on open collaboration within the motion picture industry.

This is open source beyond Linux and, according to Zemlin, is indicative of one of the best years and most robust periods at The Linux Foundation itself. So far in 2018, the organization has added a new member every single day, with Cloud Native Computing Foundation (CNCF), one of The Linux Foundation’s fastest growing projects, announcing 38 new members this week.

Successful projects depend on members, developers, standards, and infrastructure to develop products that the market will adopt, said Zemlin, and The Linux Foundation facilitates this success in many ways. It works downstream helping industry, government, and academia understand how to consume and contribute to open source. At the same time, it works upstream to foster development and adoption of open source solutions, showing industries how to create value and generate reinvestment.

During his keynote, Zemlin spoke with Sarah Novotny, Open Source Strategy Lead at Google Cloud, about Google’s support of open source development. In the talk, Novotny announced that Google Cloud is transferring ownership and management of the Kubernetes project’s cloud resources to CNCF community contributors and is additionally granting $9 million over three years to CNCF to cover infrastructure costs associated with Kubernetes development and distribution. Novotny, who noted that the project is actively seeking new contributors, said this commitment will provide the opportunity for more people to get involved.

In the words of Zemlin, let’s go solve big problems, one person, one project, one industry at a time.

This article originally appeared at The Linux Foundation


Linux for Beginners: Moving Things Around

In previous installments of this series, you learned about directories and how permissions to access directories work. Most of what you learned in those articles can be applied to files, except how to make a file executable.

So let’s deal with that before moving on.

No .exe Needed

In other operating systems, the nature of a file is often determined by its extension. If a file has a .jpg extension, the OS guesses it is an image; if it ends in .wav, it is an audio file; and if it has an .exe tacked onto the end of the file name, it is a program you can execute.

This leads to serious problems, like trojans posing as documents. Fortunately, that is not how things work in Linux. Sure, you may occasionally see executable files with a .sh ending that indicates they are runnable shell scripts, but this is mostly for the benefit of humans eyeballing files, the same way that, when you use ls --color, the names of executable files show up in bright green.

The fact is most applications have no extension at all. What determines whether a file is really a program is the x (for executable) bit. You can make any file executable by running

chmod a+x some_program

regardless of its extension or lack thereof. The x in the command above sets the x bit and the a says you are setting it for all users. You could also set it only for the group of users that own the file (g+x), or for only one user, the owner (u+x).
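As a quick illustration of those scopes, using a throwaway file name:

```shell
# Create an empty throwaway file to experiment on.
touch some_program

chmod a+x some_program   # set the x bit for all users
ls -l some_program       # the mode now shows an x for user, group, and others

chmod a-x some_program   # clear it again
chmod u+x some_program   # set it for the owner only
ls -l some_program       # now only the owner's x is set
```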

Although we will be covering creating and running scripts from the command line later in this series, know that you can run a program by writing the path to it and then tacking on the name of the program on the end:

path/to/directory/some_program

Or, if you are currently in the same directory, you can use:

./some_program

There are other ways of making your program available from anywhere in the directory tree (hint: look up the $PATH environment variable), but you will be reading about those when we talk about shell scripting.
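As a small preview of that hint, the shell searches each directory listed in $PATH, in order, for any command typed without a path. The directory and script names below are arbitrary, and the change lasts only for the current session:

```shell
# See where the shell currently looks for commands.
echo "$PATH"

# Add a personal bin directory to the front of the search path.
mkdir -p "$HOME/bin"
export PATH="$HOME/bin:$PATH"

# Any executable placed there can now be run by name from anywhere.
printf '#!/bin/sh\necho it works\n' > "$HOME/bin/some_program"
chmod u+x "$HOME/bin/some_program"
some_program   # prints: it works
```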

Copying, Moving, Linking

Obviously, there are more ways of modifying and handling files from the command line than just playing around with their permissions. Most applications will create a new file if you try to open one that doesn’t exist. Both

nano test.txt

and

vim test.txt

(nano and vim being two popular command line text editors) will create an empty test.txt file for you to edit if test.txt didn’t exist beforehand.

You can also create an empty file by touching it:

touch test.txt

This will create the file but not open it in any application.

You can use cp to make a copy of a file in another location or under a new name:

cp test.txt copy_of_test.txt

You can also copy a whole bunch of files:

cp *.png /home//images

The instruction above copies all the PNG files in the current directory into an images/ directory hanging off of your home directory. The images/ directory has to exist before you try this, or cp will show an error. Also, be warned that, if you copy a file to a directory that contains another file with the same name, cp will silently overwrite the old file with the new one.

You can use

cp -i *.png /home//images

if you want cp to warn you of any dangers (the -i option stands for interactive).
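A related option worth knowing is -n ("no clobber"), which simply skips files that already exist at the destination instead of asking. A minimal sketch with throwaway file names (note that recent GNU coreutils versions of cp -n also signal the skip with a nonzero exit status):

```shell
# Set up two files with the same name in different directories.
mkdir -p images
echo "original" > images/photo.png
echo "new"      > photo.png

# -n refuses to overwrite, so images/photo.png is left untouched.
cp -n photo.png images/ || echo "skipped existing file"

cat images/photo.png     # still prints: original
```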

You can also copy whole directories, but you need the -r option for that:

cp -rv directory_a/ directory_b

The -r option stands for recursive, meaning that cp will drill down into directory_a, copying over all the files and subdirectories contained within. I personally like to include the -v option, as it makes cp verbose, meaning that it will show you what it is doing instead of just copying silently and then exiting.
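To see both flags in action, here is a self-contained run with made-up names:

```shell
# Build a small directory tree to copy.
mkdir -p directory_a/subdir
echo "data" > directory_a/file.txt
echo "more" > directory_a/subdir/nested.txt

# -r descends into subdirectories; -v prints each file as it is copied.
cp -rv directory_a/ directory_b

# The whole tree now exists in both places.
ls -R directory_b
```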

The mv command moves stuff. That is, it changes files from one location to another. In its simplest form, mv looks a lot like cp:

mv test.txt new_test.txt

The command above makes new_test.txt appear and test.txt disappear.

mv *.png /home//images

This moves all the PNG files in the current directory to a directory called images/ hanging off your home directory. Again, be careful not to overwrite existing files by accident. Use

mv -i *.png /home//images

the same way you would with cp if you want to be on the safe side.

Apart from moving versus copying, another difference between mv and cp is when you move a directory:

mv directory_a/ directory_b

No need for a recursive flag here. This is because what you are really doing is renaming the directory, the same way in the first example you were renaming the file. In fact, even when you “move” a file from one directory to another, as long as both directories are on the same storage device and partition, you are renaming the file.
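You can watch this happen with ls -i, which prints a file's inode number (its real identity on the disk): the number survives the move unchanged. The names below are arbitrary, and this only holds within a single partition:

```shell
mkdir -p another_directory
echo "hello" > test.txt

# -i prints the inode number alongside the name.
ls -i test.txt

mv test.txt another_directory/renamed.txt

# Same inode, different name and directory: only the entry changed.
ls -i another_directory/renamed.txt
```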

You can do an experiment to prove it. time is a tool that lets you measure how long a command takes to execute. Look for a hefty file, something that weighs several hundred MBs or even some GBs (say, something like a long video) and try copying it from one directory to another like this:

$ time cp hefty_file.mkv another_directory/
real 0m3,868s
user 0m0,016s
sys 0m0,887s

The first line is what you type into the terminal, and below it is what time outputs. The number to focus on is the one on the first line, the real time: it takes nearly 4 seconds to copy the 355 MBs of hefty_file.mkv to another_directory/.

Now let’s try moving it:

$ time mv hefty_file.mkv another_directory/
real 0m0,004s
user 0m0,000s sys 0m0,003s

Moving is nearly instantaneous! This is counterintuitive, since it would seem that mv would have to copy the file and then delete the original. That is two things mv has to do versus cp's one. But, somehow, mv is 1,000 times faster.

That is because the file system’s structure, with its whole tree of directories, only exists for the user’s convenience. Each partition carries an index (in most Linux file systems, a table of inodes) that tells the operating system where to find each file on the actual physical disk. On the disk, data is not split up into directories or even files; there are tracks, sectors, and clusters instead. When you “move” a file within the same partition, all the operating system does is change the directory entry for that file, which still points to the same clusters of information on the disk.

Yes! Moving is a lie! At least within the same partition, that is. If you try to move a file to a different partition or a different device, mv is still fast, but noticeably slower than moving stuff around within the same partition. That is because this time there really is copying and erasing of data going on.
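If you don't have a large video handy, you can manufacture a test file with dd and run the same experiment yourself. The file and directory names here are arbitrary:

```shell
# Manufacture a ~300 MB file of zeros to stand in for the video.
dd if=/dev/zero of=hefty_file.mkv bs=1M count=300 2>/dev/null
mkdir -p another_directory

# Copying touches every byte of the file...
time cp hefty_file.mkv another_directory/hefty_copy.mkv

# ...while moving just rewrites one directory entry.
time mv hefty_file.mkv another_directory/
```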

Renaming

There are several distinct command line rename utilities around. None are fixtures like cp or mv and they can work in slightly different ways. What they all have in common is that they are used to change parts of the names of files.

In Debian and Ubuntu, the default rename utility uses regular expressions (patterns of strings of characters) to mass change files in a directory. The instruction:

rename 's/\.JPEG$/.jpg/' *

will change all the extensions of files with the extension JPEG to jpg. The file IMG001.JPEG becomes IMG001.jpg, my_pic.JPEG becomes my_pic.jpg, and so on.

Another version of rename available by default in Manjaro, a derivative of Arch, is much simpler, but arguably less powerful:

rename .JPEG .jpg *

This does the same renaming as you saw above. In this version, .JPEG is the string of characters you want to change, .jpg is what you want to change it to, and * represents all the files in the current directory.

The bottom line is that you are better off using mv if all you want to do is rename one file or directory, because mv is reliably the same in all distributions everywhere.
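And since the various rename utilities differ from distro to distro, even a mass rename can be done portably with mv and a small shell loop. A sketch, using the same JPEG example as above:

```shell
# Create a few files to work on.
touch IMG001.JPEG my_pic.JPEG notes.txt

# Change every .JPEG extension to .jpg using only mv.
# ${f%.JPEG} expands to the file name with the trailing .JPEG stripped.
for f in *.JPEG; do
    mv -- "$f" "${f%.JPEG}.jpg"
done

ls *.jpg
```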

Learning more

Check out both mv’s and cp’s man pages to learn more. Run

man cp

or

man mv

to read about all the options these commands come with and which make them more powerful and safer to use.


Opening Doors to Collaboration with Open Source Projects

One of the biggest benefits of open source is the ability to collaborate and partner with others on projects. Another is being able to package and share resources, something Michelle Noorali has done using Kubernetes. In a presentation called “Open Source Opening Doors,” Noorali, a senior software engineer at Microsoft, told an audience at the recent LC3 conference in China about her work on the Azure containers team building open source tools for Kubernetes and containers.

Her team needed a way to reliably scale several containerized applications and found Kubernetes to be a good solution and the open source community to be very welcoming, she said.

“In the process of deploying a lot of microservices to Kubernetes we found that we wanted some additional tooling to make it easier to share and configure applications to run in our cluster,” she explained. “You can deploy and scale your containerized apps by giving Kubernetes some declaration of what you want it to do in the form of a Kubernetes manifest.” However, in reality, she added, to deploy one app to a cluster you may have to write several Kubernetes manifests that utilize many resources hundreds of lines long.

Read more at The Linux Foundation


DuZeru OS: As Easy as It Gets

There are seemingly countless Linux distributions on the market, each one hoping to carve out its own little niche and enjoy a growing user base. Some of those distributions have some pretty nifty tricks up their sleeves, while others are gorgeous works of art on the desktop. Still, others go to great lengths to simply be a desktop distribution capable of making Linux a simple experience, with a hint of elegance.

It’s that latter form in which DuZeru OS lives. This take on Linux is developed in Brazil and is based on the Debian stable branch. The default desktop (out of the box) is Xfce 4.12.1, which helps to make DuZeru a serious contender in the lightweight Linux distribution arena.

You won’t find much information about DuZeru OS, because it’s relatively new. Nor will you find much in the way of documentation. Fortunately, that’s okay, as DuZeru OS is as straightforward a Linux distribution as you will find. The added bonus is that the developers have created a desktop that is incredibly easy on the eyes and just as easy on the mind. It’s not a challenge to install or to use. It just is.

That, in and of itself, makes reviewing such a distribution a challenge, as it doesn’t really go too far out of its way to differentiate itself from others. However, that also makes it a great contender for the average user.

Why?

Simple: Users prefer the familiar. Instead of looking to a desktop operating system which will challenge their knowledge of how their daily workflow should be, they want to hop on board and instantly know how to work. That’s where DuZeru OS shines. It’s familiar. It’s simple. Anyone could sit down with this desktop and immediately know how it works.

Let me give you a quick tour.

Installation

We’ve reached the point where the installation of most Linux distributions has become as easy as installing an app. DuZeru OS is no exception. The installation is handled in eight screens:

  1. Welcome — greetings from the developers.

  2. Location — choose your location.

  3. Keyboard — select your desired keyboard.

  4. Partitions (Figure 1) — partition your device.

  5. Users — create a user account.

  6. Summary — view the installation summary and OK the install.

  7. Install — view the installation as it occurs.

  8. Finish — you’re done. Reboot.

Once installed, reboot the machine and you’ll be greeted by the DuZeru OS login (Figure 2).

One really nice touch added to the login screen is the ability to configure it. Click on the menu button in the upper right corner to reveal a sidebar that allows you to set a few options for the login screen (Figure 3).

The Desktop

Once you login, you’ll be greeted by a window that includes three helpful tabs (Figure 4):

  • ABOUT — An introduction to DuZeru OS.

  • TIPS — A few handy tips regarding installation, kernel installation (more on this in a bit), customizing the appearance, system settings, and system information.

  • CONTACT — How to contact the developers.

Click on the desktop menu button and you’ll find a bare minimum of applications. In fact, your first reaction will probably be that DuZeru OS is seriously lacking in default apps. You’ll find:

And that’s it.

Fortunately, DuZeru OS includes a Software Manager that should be instantly familiar to anyone. Open the tool (Figure 5), search for the software you want, and install.

Kernel Installer

This is the one area where DuZeru OS ventures away from the average user. From the desktop menu, type kernel and then click on DuZeru Kernel Install. After typing your administrative password, you will be greeted with a window explaining the different types of kernels you can download and install (Figure 6).

Click on the button in the bottom-right corner and you’ll see a new window (Figure 7), which allows you to select from the available kernel types (such as GENERIC and LOW LATENCY) and then install the version of that kernel type you want.

Click the slider for the kernel you want, OK the installation, and wait for the installation to complete. When the process finishes, reboot and select the newly installed kernel.

Control Center

Open the desktop menu and click on the gear icon directly to the right of the search bar. This will open up the DuZeru Control Center, where you can configure every aspect of the operating system and even get a quick glance at system information (Figure 8).

From both the desktop menu and the Control Center, you can start the Stacer application. Stacer is an amazing tool that allows you to optimize your system in numerous ways (which further expands the capability of the Control Center). Within Stacer (Figure 9), you can:

  • Get a glimpse of system information

  • Manage startup applications

  • Run a system cleaner

  • Configure/manage services

  • Manage processes

  • Uninstall packages

  • View system resource usage

  • Manage apt repositories

  • Configure Stacer

Solid and simple Linux

DuZeru isn’t going to blow your mind — it’s not that kind of distribution. What it does do is prove that simplicity on the desktop can go a long, long way to winning over new users. So if you’re looking for a solid and simple Linux distribution, that’s perfectly suited for new users, you should certainly consider this flavor of Linux.


Open Source Akraino Edge Computing Project Leaps Into Action

The ubiquitous topic of edge computing has so far primarily focused on IoT and machine learning. A new Linux Foundation project called Akraino Edge Stack intends to standardize similar concepts for use on edge telecom and networking systems in addition to IoT gateways. The goal is to build an “open source software stack that supports high-availability cloud services optimized for edge computing systems and applications,” says the project.

“The Akraino Edge Stack project is focused on anything related to the edge, including both telco and enterprise use cases,” said Akraino evangelist Kandan Kathirvel, Director of Cloud Strategy & Architecture at AT&T, in an interview with Linux.com.

The project announced it has “moved from formation into execution,” and revealed a slate of new members including Arm, Dell, Juniper, and Qualcomm. New member Ericsson is joining AT&T Labs to host the first developer conference on Aug. 23-24.

Akraino Edge Stack was announced in February based on code contributions from AT&T for carrier-scale edge computing. In March, Intel announced it was joining the project and open sourcing parts of its Wind River Titanium Cloud and Network Edge Virtualization SDK for the emerging Akraino stack. Intel was joined by a dozen, mostly China-based members including China Mobile, China Telecom, China Unicom, Docker, Huawei, Tencent, and ZTE.

The Akraino Edge Stack project has now announced broader-based support with new members Arm, Dell EMC, Ericsson, inwinSTACK, Juniper Networks, Nokia, Qualcomm, Radisys, Red Hat, and Wind River. The project says it has begun to develop “blueprints that will consist of validated hardware and software configurations against defined use case and performance specifications.” The initial blueprints and seed code will be opened to the public at the end of the week following the Akraino Edge Stack Developer Summit at AT&T Labs in Middletown, New Jersey.

The project announced a lightweight governance framework with a Technical Steering Committee (TSC), composed of “active committers within the community.” There is “no prerequisite of financial contribution,” says the project.

Edge computing meets edge networking

Like most edge computing projects and products, such as AWS Greengrass, the Linux Foundation’s EdgeX Foundry, and Google’s upcoming Cloud IoT Edge, the technology aims to bring cloud technologies and analytics to smaller-scale computers that sit closer to the edge of the network. The goal is to reduce the latency of cloud/device interactions, while also reducing costly bandwidth delivery and improving reliability via a distributed network.

Akraino will offer blueprints for IoT, but it is focused more on bringing edge services to telecom and networking systems such as cellular base stations, smaller networking servers, customer premises equipment, and virtualized central offices (VCOs). The project will supply standardized blueprints for implementing virtual network functions (VNFs) in these systems for applications ranging from threat detection to augmented reality to specialized services required to interconnect cars and drones. Virtualization avoids the cost and complexity of integrating specialized hardware with edge networking systems.

“One key difference from other communities is that we offer blueprints,” said AT&T’s Kathirvel. “Blueprints are declarative configurations of everything including the hardware, software, and operational and security tools — everything you need to run production in large scale.”

When asked for further clarification between Akraino’s stack and the EdgeX Foundry’s middleware for industrial IoT, Kathirvel said that EdgeX is more focused on the intricacies of IIoT gateway/sensor communications whereas Akraino has a broader focus and is more concerned with cloud connections.

“Akraino Edge Stack is not limited to IoT — we’re bringing everything together in respect to the edge,” said Kathirvel. “It’s complementary with EdgeX Foundry in that you could take EdgeX code and create a blueprint and maintain that within the Akraino Edge Stack as an end to end stack. In addition, the community is working on additional use cases to support different classes of edge hardware.”

Meeting new demands for sub-20ms latency

Initially, Akraino Edge Stack use cases will be “focused on provider deployment,” said Kathirvel, referring to telecom applications. These will include emerging, 5G-enabled applications such as “AR/VR and connected cars,” in which sub-20-millisecond or lower latency is required. In addition, edge computing can reduce the extent to which network bandwidth must be boosted to accommodate demanding multimedia-rich and cloud-intensive end-user applications.

Akraino Edge Stack borrows virtualization and container technologies from open source networking projects such as OpenStack. The goal is to create a common API stack for deploying applications using VNFs running within containers. A VNF is a software implementation of a network function that runs on virtualized infrastructure, as developed by the closely related NFV (network functions virtualization) initiatives.

In a May 23 presentation (YouTube video) at the OpenStack Summit Vancouver, Kathirvel and fellow Akraino contributor Melissa Evers-Hood of Intel listed several other projects and technologies that the stack will accommodate with blueprints, including Ceph (distributed cloud storage), Kata Containers, Kubernetes, and the Intel/Wind River-backed StarlingX for open cloud infrastructure. Aside from EdgeX and OpenStack, other Linux Foundation-hosted projects on the list include DANOS (Disaggregated Network Operating System) and the LF’s new Acumos AI project for developing a federated platform to manage and share models for AI and machine learning.

Akraino aligns closely with OpenStack edge computing initiatives, as well as the Linux Foundation’s ONAP (Open Network Automation Platform). ONAP, which was founded in Feb. 2017 from the merger of the earlier ECOMP and OPEN-O projects, is developing a framework for real-time, policy-driven software automation of VNFs.

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.


Building in the Open: ONS Europe Demos Highlight Networking Industry Collaboration

LF Networking (LFN), launched on January 1st of this year, has already made a significant impact in the open source networking ecosystem, gaining over 100 members in just the first 100 days. Critically, LFN also continues to attract support and participation from many of the world’s top network operators, including six new members: KT, KDDI, SK Telecom, Sprint, and Swisscom, announced in May, and Deutsche Telekom, announced just last month. In fact, member companies of LFN now represent more than 60% of the world’s mobile subscribers. Open source is becoming the de facto way to develop software, and it’s the technical collaboration at the project level that makes it so powerful.

Similar to the demos in the LFN booth at ONS North America, the LFN booth at ONS Europe will once again showcase the top community-led technical demos from the LFN family of projects. We have increased the number of demo stations from 8 to 10 and, for the first time, are showcasing demos from the big data analytics project PNDA, as well as demos that include the newly added LFN project Tungsten Fabric (formerly OpenContrail). Technology from founding LFN projects FD.io, ONAP, OPNFV, and OpenDaylight will also be represented, along with adjacent projects like Acumos, Kubernetes, OpenCI, Open Compute Project, and OpenStack.

Read more at The Linux Foundation


Building a Cloud Native Future

Cloud and open source are changing the world and can play an integral role in how companies transform themselves. That was the message from Abby Kearns, executive director of open source platform as a service provider Cloud Foundry Foundation, who delivered a keynote address earlier this summer at LinuxCon + ContainerCon + CloudOpen China, known as LC3.

“Cloud native technologies and cloud native applications are growing,” Kearns said. Over the next 18 months, there will be a 100 percent increase in the number of cloud native applications organizations are writing and using, she added. “This means you can no longer just invest in IT,” but need to invest in cloud and cloud technologies as well. …

To give the audience an idea of what the future will look like and where investments are being made in cloud and open source, Kearns cited a few examples. The automotive industry is changing rapidly, she said, and a Volkswagen automobile, for example, is no longer just a car; it has become a connected mobile device filled with sensors and data.

“Volkswagen realized they need to build out developer teams and applications that could take advantage of many clouds across 12 different brands,” she said. The car company has invested in Cloud Foundry and cloud native technologies to help them do that, she added.

“At the end of the day it’s about the applications that extend that car through mobile apps, supply chain management — all of that pulled together to bring a single concise experience for the automotive industry.”

Watch the complete keynote at The Linux Foundation.


AryaLinux: A Distribution and a Platform

Most Linux distributions are simply that: A distribution of Linux that offers a variation on an open source theme. You can download any of those distributions, install it, and use it. Simple. There’s very little mystery to using Linux these days, as the desktop is incredibly easy to use and server distributions are required in business.

But not every Linux distribution ends with that idea; some go one step further and create both a distribution and a platform. Such is the case with AryaLinux. What does that mean? Easy. AryaLinux doesn’t only offer an installable, open source operating system; it also offers a platform with which users can build a complete GNU/Linux operating system. The provided scripts were created based on the instructions from Linux From Scratch and Beyond Linux From Scratch.

If you’ve ever attempted to build your own Linux distribution, you probably know how challenging it can be. AryaLinux has made that process quite a bit less stressful. In fact, although the build can take quite a lot of time (up to 48 hours), the process of building the AryaLinux platform is quite easy.

But don’t think that’s the only way you can have this distribution. You can download a live version of AryaLinux and install it as easily as if you were working with Ubuntu, Linux Mint, or Elementary OS.

Let’s get AryaLinux up and running from the live distribution and then walk through the process of building the platform, using the special builder image.

The Live distribution

From the AryaLinux download page, you can get a version of the operating system that includes either GNOME or Xfce. I chose the GNOME route and found it to be configured to include the Dash to Dock and Applications Menu extensions. Both of these will please most average GNOME users. Once you’ve downloaded the ISO image, burn it to either a DVD/CD or a USB flash drive and boot up the live instance. Do note that you need at least 25GB of space on a drive to install AryaLinux. If you’re planning on testing this out as a virtual machine, create a 30-40 GB virtual drive; otherwise, the installer will fail every time.

Once booted, you will be presented with a login screen, with the default user selected. Simply click the user and login (there is no password required).

To locate the installer, click the Applications menu, click Activities Overview, type “installer” and click on the resulting entry. This will launch the AryaLinux installer … one that looks very familiar to many Linux installers (Figure 1).

In the next window (Figure 2), you are required to define a root partition. To do this, type “/” (no quotes) in the Choose the root partition section.

If you don’t define a home partition, it will be created for you. If you don’t define a swap partition, none will be created. If you have a need to create a home partition outside of the standard /home, do it here. The next installation windows have you do the following:

That’s all there is to the installation. Once it completes, reboot, remove the media (or delete the .iso from your Virtual Machine storage listing), and boot into your newly-installed AryaLinux operating system.

What’s there?

Out of the box, you should find everything necessary to use AryaLinux as a full-functioning desktop distribution. Included is:

The caveats

It should be noted that this is the first official release of AryaLinux, so there will be issues. Right off the bat I realized that no matter what I tried, I could not get the terminal to open. Unfortunately, the terminal is a necessary tool for this distribution, as there is no GUI for updating or installing packages. In order to get to a bash prompt, I had to use a virtual screen. That’s when the next caveat came into play. The package manager for AryaLinux is alps, but its primary purpose is working in conjunction with the build scripts to install the platform. Unfortunately there is no included man page for alps on AryaLinux and the documentation is very scarce. Fortunately, the developers did think to roll in Flatpak support, so if you’re a fan of Flatpak, you can install anything you need (so long as it’s available as a flatpak package) using that system.

Building the platform

Let’s talk about building the AryaLinux platform. This isn’t much harder than installing the standard distribution, only it’s done via the command line. Here’s what you do:

  1. Download the AryaLinux Builder Disk.

  2. Burn the ISO to either DVD/CD or USB flash drive.

  3. Boot the live image.

  4. Once you reach the desktop, open a terminal window from the menu.

  5. Change to the root user with the command sudo su.

  6. Change directories with the command cd aryalinux/base-system.

  7. Run the build script with the command ./build-arya.
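Steps 5 through 7 condense to the following terminal session (the aryalinux/base-system path is as the builder disk lays it out):

```shell
# Become the root user
sudo su

# Move into the directory containing the build scripts
cd aryalinux/base-system

# Launch the interactive build script
./build-arya
```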

You will first be asked if you want to start a fresh build or resume a build (Figure 3). Remember, the AryaLinux build takes a LOT of time, so there might be an instance where you’ve started a build and need to resume.

To start a new build, type “1” and then hit Enter on your keyboard. You will now be asked to define a number of options (to fulfill the build script’s requirements). Those options are:

  • Bootloader Device

  • Root Partition

  • Home Partition

  • Locale

  • OS Name

  • OS Version

  • OS Codename

  • Domain Name

  • Keyboard Layout

  • Printer Paper Size

  • Enter Full Name

  • Username

  • Computer Name

  • Use multiple cores for build (y/n)

  • Create backups (y/n)

  • Install X Server (y/n)

  • Install Desktop Environment (y/n)

  • Choose Desktop Environment (XFCE, Mate, KDE, GNOME)

  • Do you want to configure advanced options (y/n)

  • Create admin password

  • Create password for standard user

  • Install bootloader (y/n)

  • Create Live ISO (y/n)

  • Select a timezone

After you’ve completed the above, the build will start. Don’t bother watching it, as it will take a very long time to complete (depending upon your system and network connection). In fact, the build can take anywhere from 8 to 48 hours. After the build completes, reboot and log into your newly built AryaLinux platform.

Who is AryaLinux for?

I’ll be honest, if you’re just a standard desktop user, AryaLinux is not for you. Although you can certainly get right to work on the desktop, if you need anything outside of the default applications, you might find it a bit too much trouble to bother with. If, on the other hand, you’re a developer, AryaLinux might be a great platform for you. Or, if you just want to see what it’s like to build a Linux distribution from scratch, AryaLinux is a pretty easy route.

Even with its quirks, AryaLinux holds a lot of promise as both a Linux distribution and platform. If the developers can see to it to build a GUI front-end for the alps package manager, AryaLinux could make some serious noise.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.


Zephyr Project Embraces RISC-V with New Members and Expanded Board Support

The Linux Foundation’s Zephyr Project, which is developing the open source Zephyr real-time operating system (RTOS) for microcontrollers, announced six new members, including RISC-V players Antmicro and SiFive. The project also announced expanded support for developer boards: Zephyr now runs on 100 boards spanning ARM, x86, ARC, NIOS II, Xtensa, and RISCV32 architectures.

Antmicro, SiFive, and DeviceTone, which makes IoT-savvy smart clients, have signed up as Silver members, joining Oticon, runtime.io, Synopsys, and Texas Instruments. The other three new members — Beijing University of Posts and Telecommunications, The Institute of Communication and Computer Systems (ICCS), and Northeastern University — have joined the Vancouver Hack Space as Associate members.

The Platinum member leadership of Intel, Linaro, Nordic Semiconductor, and NXP remains the same. NXP, which has returned to an independent course after Qualcomm dropped its $44 billion bid, supplied one of the first Zephyr dev boards — its Kinetis-based FRDM-K64F (Freedom-K64F) — joining two Arduino boards and Intel’s Galileo Gen 2. Like Nordic, NXP is a leading microcontroller unit (MCU) chipmaker in addition to producing Linux-friendly Cortex-A SoCs like the i.MX8.

RTOSes go open source

Zephyr is still a toddler compared to more established open source RTOS projects like industry leader FreeRTOS, and the newer Arm Mbed, which has the advantage of being sponsored by the IP giant behind Cortex-M MCUs. Yet, the growing migration from proprietary to open source RTOSes signals good times for everyone.

“There is a major shift going on in the RTOS space, with so many things driving the increase in preference for open source choices,” said Thea Aldrich, the Zephyr Project’s new Evangelist and Developer Advocate, in an interview with Linux.com. “In a lot of ways, we’re seeing the same factors and motivations at play as happened with Linux many years ago. I am the most excited to see the movement on the low end.”

RISC-V alignment

The decision to align Zephyr with similarly future-looking open source projects like RISC-V appears to be a sound strategic move. “Antmicro and SiFive bring a lot of excitement and energy and great perspective to Zephyr,” said Aldrich.

With SiFive, the Zephyr Project now has the premier RISC-V hardware player on board. SiFive created the first MCU-class RISC-V SoC with its open source Freedom E300, which powers its Arduino-compatible HiFive1 and Arduino Cinque boards. The company also produced the first Linux-friendly RISC-V SoC with its Freedom U540, the SoC that powers its HiFive Unleashed SBC. (SiFive will soon have Linux-on-RISC-V competition from an India-based project called Shakti.)

Antmicro is the official maintainer of RISC-V in the Zephyr Project and is active in the RISC-V community. Its open source Renode IoT development framework is integrated into the Mi-V platform of Microsemi, the leading RISC-V soft-core vendor. Antmicro has also developed a variety of custom software-based implementations of RISC-V for commercial customers.

Antmicro and SiFive announced a partnership in which SiFive will provide Renode to its customers as part of “a comprehensive solution covering build, debug and test in multi-node systems.” The announcement touts Renode’s ability to simulate an entire SoC for RISC-V developers, not just the CPU.

Zephyr now supports RISC-V on QEMU, as well as on the SiFive HiFive1, Microsemi’s FPGA-based, soft-core M2GL025 Mi-V board, and the ZedBoard Pulpino. The latter is an implementation of PULP’s open source PULPino RISC-V soft core that runs on the venerable Xilinx Zynq-based ZedBoard.
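The QEMU support means the RISC-V port can be tried without any hardware. A minimal sketch, assuming a Zephyr 1.x source tree with the Zephyr SDK installed and using the CMake/Ninja workflow of those releases (qemu_riscv32 is Zephyr’s board name for the emulated target):

```shell
# Set up the Zephyr build environment (run from the zephyr source tree)
source zephyr-env.sh

# Configure the hello_world sample for the emulated RISC-V board
mkdir build && cd build
cmake -GNinja -DBOARD=qemu_riscv32 $ZEPHYR_BASE/samples/hello_world

# Build the image, then boot it under QEMU
ninja
ninja run
```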

Other development boards on the Zephyr dev board list include boards based on MCUs from Microchip, Nordic, NXP, ST, and others, as well as the BBC Microbit and 96Boards Carbon. Supported SBCs that primarily run Linux, but can also run Zephyr on their MCU companion chips, include the MinnowBoard Max, Udoo Neo, and UP Squared.

Zephyr 1.13 on track

The Zephyr Project is now prepping a 1.13 build due in September, following the usual three-month release cycle. The release adds support for the Precision Time Protocol (PTP) and SPDX license tracking, among other features. Zephyr 1.13 continues to expand upon Zephyr’s “safety and security certifications and features,” says Aldrich, a former Eclipse Foundation Developer Advocate.

Aldrich first encountered Zephyr when she found it to be an ideal platform for tracking her cattle with sensors on a small ranch in Texas. “Zephyr fits in really nicely as the operating system for sensors and other devices way out on the edge,” she says.

Zephyr has other advantages such as its foundation on the latest open source components and its support for the latest wireless and sensor devices. Aldrich was particularly attracted to the Zephyr Project’s independence and transparent open source governance.

“There are a lot of choices for open source RTOSes and each has its own strengths and weaknesses,” continued Aldrich. “We have a lot of really strong aspects of our project but the community and how we operate is what comes to mind first. It’s a truly collaborative effort. For us, open source is more than a license. We’ve made it transparent how technical decisions are made and community input is incorporated.”

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.