
KVM Forum

KVM Forum is an annual event that presents a rare opportunity for developers and users to meet, discuss the state of Linux virtualization technology, and plan for the challenges ahead. This highly technical conference unites the developers who drive KVM development and the users who depend on KVM as part of their offerings, or to power their data centers and clouds.

Learn more


Kubernetes Contributor Summit Barcelona

Kubernetes

May 19, 2019

Fira Gran Via, CC4 + CC5

08908 Barcelona

Spain

The yearly Kubernetes Contributor Summits bring together new and current Kubernetes contributors to connect and share face-to-face. Each event has different goals; the Barcelona event is shaping up to focus on strengthening the contributor base in regions outside the US, and on adding activities that give current contributors a chance for hallway conversations with their distributed peers.

Learn more



Linux Foundation Welcomes LVFS Project

The Linux Foundation welcomes the Linux Vendor Firmware Service (LVFS) as a new project. LVFS is a secure website that allows hardware vendors to upload firmware updates. It’s used by all major Linux distributions to provide metadata for clients, such as fwupdmgr, GNOME Software and KDE Discover.

To learn more about the project’s history and goals, we talked with Richard Hughes, upstream maintainer of LVFS and Principal Software Engineer at Red Hat.

Linux Foundation: Briefly, what is Linux Vendor Firmware Service (LVFS)? Can you give us a little background on the project?

Richard Hughes: A long time ago I wanted to design and build an OpenHardware colorimeter (a device used to measure the exact colors on screen) as a weekend hobby. To update the devices, I also built a command line tool, and later a GUI tool, to update just the ColorHug firmware, downloading a list of versions as an XML file from my personal homepage. I got lots of good design advice for the GUI from Lapo Calamandrei (a designer from GNOME), but we concluded it was bad having to reinvent the wheel and build a new UI for each open hardware device.

A few months prior, Microsoft made UEFI UpdateCapsule a requirement for the “Windows 10 sticker.” This meant vendors had to start supporting system firmware updates via a standardized format that could be used from any OS. Peter Jones (a colleague at Red Hat) did the hard work of working out how to deploy these capsules on Linux successfully. The capsules themselves are just binary executables, so what was needed was the same type of metadata that I was generating for ColorHug, but in a generic format.

Some vendors like Dell were already generating some kinds of metadata and trying to support Linux. A lot of the tools for applying the firmware updates were OEM-specific, usually only available for Windows, and sometimes made dubious security choices. By using the same container file format as proposed by Microsoft (the reason we use a cabinet archive, rather than .tar or .zip) vendors could build one deliverable that worked on Windows and Linux.

Dell has been a supporter ever since the early website prototypes. Mario Limonciello (Senior Principal Software Development Engineer from Dell) has worked with me on both the lvfs-website project and fwupd in equal measure, and I consider him a co-maintainer of both projects. Now the LVFS supports firmware updates on 72 different devices, from about 30 vendors, and has supplied over 5 million firmware updates to Linux clients.

The fwupd project is still growing, supporting more hardware with every release. The LVFS continues to grow, adding important features like two-factor authentication, OAuth, and various other tools designed to get high-quality metadata from the OEMs and integrate it into ODM pipelines. The LVFS is currently supported by donations, which fund the two server instances and some of the test hardware I use when helping vendors.

Hardware vendors upload redistributable firmware to the LVFS site packaged up in an industry-standard .cab archive along with a Linux-specific metadata file. The fwupd daemon allows session software to update device firmware on the local machine. Although fwupd and the LVFS were designed for desktops, both are also usable on phones, tablets, IoT devices and headless servers.

The LVFS and fwupd daemon are open source projects with contributions from dozens of people from many different companies. Plugins allow many different update protocols to be supported.

Linux Foundation: What are some of the goals of the LVFS project?

Richard Hughes: The short-term goal was to get 95% of updatable consumer hardware supported. With the recent addition of HP that’s now a realistic target, although you have to qualify the 95% with “new consumer non-enterprise hardware sold this year,” as quite a few vendors will only support hardware that is at most a few years old, and most still charge for firmware updates for enterprise hardware. My long-term goal is for the LVFS to be seen as a boring, critical piece of Linux infrastructure, much like you’d consider an NTP server for accurate time, or a PGP keyserver for trust.

With the recent Spectre and Meltdown issues hitting the industry, firmware updates are no longer seen as something that just adds support for new hardware or fixes the occasional hardware issue. Now that the EFI BIOS is a fully fledged operating system with networking capabilities, companies and government agencies are realizing that firmware updates are as important as kernel updates, and many are now writing “must support LVFS” into their purchasing policies.

Linux Foundation: How can the community learn more and get involved?

Richard Hughes: The LVFS is actually just a Python Flask project, and it’s all free code. If there’s a requirement that you need supporting, either as an OEM, ODM, company, or end user we’re really pleased to talk about things either privately in email, or as an issue or pull request on GitHub. If a vendor wants a custom flashing protocol added to fwupd, the same rules apply, and we’re happy to help.

Quite a few vendors are testing the LVFS and fwupd in private, and we agree to make the public announcement only when everything is working and the legal and PR teams give the thumbs up. From a user point of view, we certainly need to tell hardware vendors to support fwupd and the LVFS before the devices are sitting on shelves.

We also have a low-volume LVFS announce mailing list, or a user fwupd mailing list for general questions. Quite a few people are helping to spread the word, by giving talks at local LUGs or conferences, or presenting information in meetings or elsewhere. I’m happy to help with that, too.

This article originally appeared at Linux Foundation


Using Square Brackets in Bash: Part 1

After taking a look at how curly braces ({}) work on the command line, now it’s time to tackle brackets ([]) and see how they are used in different contexts.

Globbing

The first and easiest use of square brackets is in globbing. You have probably used globbing before without knowing it. Think of all the times you have listed files of a certain type, say, you wanted to list JPEGs, but not PNGs:

ls *.jpg

Using wildcards to get all the results that fit a certain pattern is precisely what we call globbing.

In the example above, the asterisk means “zero or more characters”. There is another globbing wildcard, ?, which means “exactly one character”, so, while

 ls d*k*

will list files called darkly and ducky (and dark and duck — remember * can also be zero characters),

 ls d*k? 

will not list darkly (or dark or duck), but it will list ducky.

Square brackets are used in globbing for sets of characters. To see what this means, make a directory in which to carry out tests, cd into it, and create a bunch of files like this:

 touch file0{0..9}{0..9} 

(If you don’t know why that works, take a look at the last installment that explains curly braces {}).

This will create files file000, file001, file002, etc., through file097, file098 and file099.

Then, to list the files in the 70s and 80s, you can do this:

 ls file0[78]? 

To list file022, file027, file028, file052, file057, file058, file092, file097, and file098 you can do this:

 ls file0[259][278] 

Of course, you can use globbing (and square brackets for sets) for more than just ls. You can use globbing with any other tool for listing, removing, moving, or copying files, although the last two may require a bit of lateral thinking.
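For instance, the same set syntax works with rm. A quick sketch you can try safely in a throwaway directory:

```shell
cd "$(mktemp -d)"            # work in a fresh temporary directory
touch file0{0..9}{0..9}      # creates file000 .. file099
rm file0[78]?                # removes file070 .. file089
ls file0* | wc -l            # prints 80
```

Because rm happily expands the glob before deleting, it is worth testing a pattern with ls first, exactly as above.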

Let’s say you want to create duplicates of files file010 through file029 and call the copies archive010, archive011, archive012, and so on.

You can’t do:

 cp file0[12]? archive0[12]? 

Because globbing is for matching against existing files and directories and the archive… files don’t exist yet.

Doing this:

 cp file0[12]? archive0[1..2][0..9] 

won’t work either, because cp doesn’t let you copy many files to many other new files. Copying many files only works if you are copying them to a directory, so this:

 mkdir archive
 cp file0[12]? archive

would work, but it would copy the files, using their same names, into a directory called archive/. This is not what you set out to do.

However, if you look back at the article on curly braces ({}), you will remember how you can use % to lop off the end of a string contained in a variable.

Of course, there is also a way to lop off the beginning of a string contained in a variable. Instead of %, you use #.

For practice, you can try this:

 myvar="Hello World"
 echo Goodbye Cruel ${myvar#Hello}

It prints “Goodbye Cruel World” because #Hello gets rid of the Hello part at the beginning of the string stored in myvar.
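The % operator mentioned earlier works the same way at the other end of the string; you can compare the two side by side:

```shell
myvar="Hello World"
echo "${myvar#Hello} again"     # prints " World again" — # trims the front
echo "${myvar%World}Everyone"   # prints "Hello Everyone" — % trims the end
```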

You can use this feature alongside your globbing tools to make your archive duplicates:

 for i in file0[12]?; \
 do \
 cp $i archive${i#file}; \
 done

The first line tells the Bash interpreter that you want to loop through all the files that contain the string file0 followed by the digits 1 or 2, and then one other character, which can be anything. The second line do indicates that what follows is the instruction or list of instructions you want the interpreter to loop through.

Line 3 is where the actual copying happens, and you use the contents of the loop variable i twice: first, straight out, as the first parameter of the cp command, and then you add archive to its contents while at the same time cutting off file. So, if i contains, say, file019

 "archive" + "file019" - "file" = "archive019" 

the cp line is expanded to this:

 cp file019 archive019 

Finally, notice how you can use the backslash \ to split a chain of commands over several lines for clarity.

In part two, we’ll look at more ways to use square brackets. Stay tuned.


How Open Source Is Accelerating NFV Transformation

Red Hat is noted for making open source a culture and business model, not just a way of developing software, and its message of open source as the path to innovation resonates on many levels.  

In anticipation of the upcoming Open Networking Summit, we talked with Thomas Nadeau, Technical Director NFV at Red Hat, who gave a keynote address at last year’s event, to hear his thoughts regarding the role of open source in innovation for telecommunications service providers.

One reason for open source’s broad acceptance in this industry, he said, was that some very successful projects have grown too large for any one company to manage, or single-handedly push their boundaries toward additional innovative breakthroughs.

“There are projects now, like Kubernetes, that are too big for any one company to do. There’s technology that we as an industry need to work on, because no one company can push it far enough alone,” said Nadeau. “Going forward, to solve these really hard problems, we need open source and the open source software development model.”

Here are more insights he shared on how and where open source is making an innovative impact on telecommunications companies.

Linux.com: Why is open source central to innovation in general for telecommunications service providers?

Nadeau: The first reason is that the service providers can be in more control of their own destiny. There are some service providers that are more aggressive and involved in this than others. Second, open source frees service providers from having to wait for long periods for the features they need to be developed.

And third, open source frees service providers from having to struggle with using and managing monolith systems when all they really wanted was a handful of features. Fortunately, network equipment providers are responding to this overkill problem. They’re becoming much more flexible, more modular, and open source is the best means to achieve that.

Linux.com: In your ONS keynote presentation, you said open source levels the playing field for traditional carriers in competing with cloud-scale companies in creating digital services and revenue streams. Please explain how open source helps.

Nadeau: Kubernetes again. OpenStack is another one. These are tools that these businesses really need, not to just expand, but to exist in today’s marketplace. Without open source in that virtualization space, you’re stuck with proprietary monoliths, no control over your future, and incredibly long waits to get the capabilities you need to compete.

There are two parts in the NFV equation: the infrastructure and the applications. NFV is not just the underlying platforms, but this constant push and pull between the platforms and the applications that use the platforms.

NFV is really virtualization of functions. It started off with monolithic virtual machines (VMs). Then came “disaggregated VMs” where individual functions, for a variety of reasons, were run in a more distributed way. To do so meant separating them, and this is where SDN came in, with the separation of the control plane from the data plane. Those concepts were driving changes in the underlying platforms too, which drove up the overhead substantially. That in turn drove interest in container environments as a potential solution, but it’s still NFV.

You can think of it as the latest iteration of SOA with composite applications. Kubernetes is the kind of SOA model that they had at Google, which dropped the worry about the complicated networking and storage underneath and simply allowed users to fire up applications that just worked. And for the enterprise application model, this works great.

But not in the NFV case. In the NFV case, in the previous iteration of the platform at OpenStack, everybody enjoyed near one-for-one network performance. But when we move it over here to OpenShift, we’re back to square one where you lose 80% of the performance because of the latest SOA model that they’ve implemented. And so now evolving the underlying platform rises in importance, and so the pendulum swing goes, but it’s still NFV. Open source allows you to adapt to these changes and influences effectively and quickly. Thus innovations happen rapidly and logically, and so do their iterations.  

Linux.com: Tell us about the underlying Linux in NFV, and why that combo is so powerful.

Nadeau: Linux is open source and it always has been in some of the purest senses of open source. The other reason is that it’s the predominant choice for the underlying operating system. The reality is that all major networks and all of the top networking companies run Linux as the base operating system on all their high-performance platforms. Now it’s all in a very flexible form factor. You can lay it on a Raspberry Pi, or you can lay it on a gigantic million-dollar router. It’s secure, it’s flexible, and scalable, so operators can really use it as a tool now.

Linux.com: Carriers are always working to redefine themselves. Indeed, many are actively seeking ways to move out of strictly defensive plays against disruptors, and onto offense where they ARE the disruptor. How can network function virtualization (NFV) help in either or both strategies?

Nadeau: Telstra and Bell Canada are good examples. They are using open source code in concert with the ecosystem of partners they have around that code which allows them to do things differently than they have in the past. There are two main things they do differently today. One is they design their own network. They design their own things in a lot of ways, whereas before they would possibly need to use a turnkey solution from a vendor that looked a lot, if not identical, to their competitors’ businesses.

These telcos are taking a real “in-depth, roll up your sleeves” approach. Now that they understand what they’re using at a much more intimate level, they can collaborate with the downstream distro providers or vendors. This goes back to the point that the ecosystem, which is analogous to partner programs that we have at Red Hat, is the glue that fills in gaps and rounds out the network solution that the telco envisions.

Learn more at Open Networking Summit, happening April 3-5 at the San Jose McEnery Convention Center.


How to Monitor Disk IO in Linux

iostat is used to get the input/output statistics for storage devices and partitions. iostat is a part of the sysstat package. With iostat, you can monitor the read/write speeds of your storage devices (such as hard disk drives, SSDs) and partitions (disk partitions). In this article, I am going to show you how to monitor disk input/output using iostat in Linux. So, let’s get started.

Installing iostat on Ubuntu/Debian:

The iostat command is not available on Ubuntu/Debian by default. But, you can easily install the sysstat package from the official package repository of Ubuntu/Debian using the APT package manager. iostat is a part of the sysstat package as I’ve mentioned before.

First, update the APT package repository cache with the following command:

$ sudo apt update
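While the install runs, it is worth knowing where the numbers come from: iostat computes its rates from the kernel's raw I/O counters, chiefly /proc/diskstats. A quick sketch, assuming a standard Linux /proc layout (the iostat flags shown in the comments are standard sysstat options):

```shell
# The raw per-device counters that iostat post-processes
# (reads completed, sectors read, writes completed, and so on):
head -n 3 /proc/diskstats

# Typical invocations once sysstat is installed:
#   iostat            # one-shot CPU and per-device summary since boot
#   iostat -x 2 3     # extended statistics, every 2 seconds, 3 reports
```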

Read more at LinuxHint



How to Install OpenLDAP on Ubuntu Server 18.04

The Lightweight Directory Access Protocol (LDAP) allows for the querying and modification of an X.500-based directory service. In other words, LDAP is used over a Local Area Network (LAN) to manage and access a distributed directory service. LDAP's primary purpose is to provide a set of records in a hierarchical structure. What can you do with those records? The best use-case is for user validation/authentication against desktops. If both server and client are set up properly, you can have all your Linux desktops authenticating against your LDAP server. This makes for a great single point of entry so that you can better manage (and control) user accounts.

The most popular iteration of LDAP for Linux is OpenLDAP. OpenLDAP is a free, open-source implementation of the Lightweight Directory Access Protocol, and makes it incredibly easy to get your LDAP server up and running.

In this three-part series, I’ll be walking you through the steps of:

  1. Installing OpenLDAP server.

  2. Installing the web-based LDAP Account Manager.

  3. Configuring Linux desktops, such that they can communicate with your LDAP server.

In the end, all of your Linux desktop machines (that have been configured properly) will be able to authenticate against a centralized location, which means you (as the administrator) have much more control over the management of users on your network.

In this first piece, I’ll be demonstrating the installation and configuration of OpenLDAP on Ubuntu Server 18.04. All you will need to make this work is a running instance of Ubuntu Server 18.04 and a user account with sudo privileges.
Let’s get to work.

Update/Upgrade

The first thing you’ll want to do is update and upgrade your server. Do note, if the kernel gets updated, the server will need to be rebooted (unless you have Live Patch, or a similar service running). Because of this, run the update/upgrade at a time when the server can be rebooted.
To update and upgrade Ubuntu, log into your server and run the following commands:

sudo apt-get update
sudo apt-get upgrade -y

When the upgrade completes, reboot the server (if necessary), and get ready to install and configure OpenLDAP.

Installing OpenLDAP

Since we’ll be using OpenLDAP as our LDAP server software, it can be installed from the standard repository. To install the necessary pieces, log into your Ubuntu Server and issue the following command:

sudo apt-get install slapd ldap-utils -y

During the installation, you’ll be first asked to create an administrator password for the LDAP directory. Type and verify that password (Figure 1).

Configuring LDAP

With the installation of the components complete, it’s time to configure LDAP. Fortunately, there’s a handy tool we can use to make this happen. From the terminal window, issue the command:

sudo dpkg-reconfigure slapd

In the first window, hit Enter to select No and continue on. In the second window of the configuration tool (Figure 2), you must type the DNS domain name for your server. This will serve as the base DN (the point from where a server will search for users) for your LDAP directory. In my example, I’ve used example.com (you’ll want to change this to fit your needs).
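If you are unsure how the DNS domain name becomes a base DN, the configuration tool simply turns each dot-separated component into a dc= element (the second domain below is purely hypothetical):

```
example.com           ->  dc=example,dc=com
ldap.mycompany.net    ->  dc=ldap,dc=mycompany,dc=net
```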

In the next window, type your Organizational name (i.e., the name of your company or department). You will then be prompted to (once again) create an administrator password (you can use the same one as you did during the installation). Once you’ve taken care of that, you’ll be asked the following questions:

  • Database backend to use – select MDB.

  • Do you want the database to be removed when slapd is purged? – Select No.

  • Move old database? – Select Yes.

OpenLDAP is now ready for data.

Adding Initial Data

Now that OpenLDAP is installed and running, it’s time to populate the directory with a bit of initial data. In the second piece of this series, we’ll be installing a web-based GUI that makes it much easier to handle this task, but it’s always good to know how to add data the manual way.

One of the best ways to add data to the LDAP directory is via text file, which can then be imported in with the ldapadd command. Create a new file with the command:

nano ldap_data.ldif

In that file, paste the following contents:

dn: ou=People,dc=EXAMPLE,dc=COM
objectClass: organizationalUnit
ou: People

dn: ou=Groups,dc=EXAMPLE,dc=COM
objectClass: organizationalUnit
ou: Groups

dn: cn=DEPARTMENT,ou=Groups,dc=EXAMPLE,dc=COM
objectClass: posixGroup
cn: DEPARTMENT
gidNumber: 5000

dn: uid=USER,ou=People,dc=EXAMPLE,dc=COM
objectClass: inetOrgPerson
objectClass: posixAccount
objectClass: shadowAccount
uid: USER
sn: LASTNAME
givenName: FIRSTNAME
cn: FULLNAME
displayName: DISPLAYNAME
uidNumber: 10000
gidNumber: 5000
userPassword: PASSWORD
gecos: FULLNAME
loginShell: /bin/bash
homeDirectory: USERDIRECTORY

In the above file, every entry in all caps needs to be modified to fit your company needs. Once you’ve modified the above file, save and close it with the [Ctrl]+[x] key combination.

To add the data from the file to the LDAP directory, issue the command:

ldapadd -x -D cn=admin,dc=EXAMPLE,dc=COM -W -f ldap_data.ldif

Remember to alter the dc entries (EXAMPLE and COM) in the above command to match your domain name. After running the command, you will be prompted for the LDAP admin password. When you successfully authenticate to the LDAP server, the data will be added. You can then ensure the data is there by running a search like so:

ldapsearch -x -LLL -b dc=EXAMPLE,dc=COM 'uid=USER' cn gidNumber

Where EXAMPLE and COM are the components of your domain name and USER is the user to search for. The command should report the entry you searched for (Figure 3).
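Once entries exist, the ldap-utils package also includes ldapmodify, which reads the same LDIF syntax plus a changetype directive. As a hypothetical follow-up (reusing the USER entry from above), a change file that swaps the user's login shell would look like this:

```
dn: uid=USER,ou=People,dc=EXAMPLE,dc=COM
changetype: modify
replace: loginShell
loginShell: /bin/sh
```

You would apply it with ldapmodify -x -D cn=admin,dc=EXAMPLE,dc=COM -W -f change.ldif, mirroring the ldapadd invocation above.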

Now that you have your first entry into your LDAP directory, you can edit the above file to create even more. Or, you can wait until the next entry into the series (installing LDAP Account Manager) and take care of the process with the web-based GUI. Either way, you’re one step closer to having LDAP authentication on your network.


Top 10 New Linux SBCs to Watch in 2019

A recent Global Market Insights report projects the single board computer market will grow from $600 million in 2018 to $1 billion by 2025. Yet, you don’t need to read a market research report to realize the SBC market is booming. Driven by the trends toward IoT and AI-enabled edge computing, new boards keep rolling off the assembly lines, many of them tailored for highly specific applications.

Much of the action has been in Linux-compatible boards, including the insanely popular Raspberry Pi. The number of different vendors and models has exploded thanks in part to the rise of community-backed, open-spec SBCs.

Here we examine 10 of the most intriguing, Linux-driven SBCs among the many products announced in the last four weeks that bookended the recent Embedded World show in Nuremberg. (There was also some interesting Linux software news at the show.) Two of the SBCs—the Intel Whiskey Lake based UP Xtreme and Nvidia Jetson Nano driven Jetson Nano Dev Kit—were announced only this week.

Our mostly open source list also includes a few commercial boards. Processors range from the modest, Cortex-A7 driven STM32MP1 to the high-powered Whiskey Lake and Snapdragon 845. Mid-range models include Google’s i.MX8M powered Coral Dev Board and a similarly AI-enhanced, TI AM5729 based BeagleBone AI. Deep learning acceleration chips—and standard RPi 40-pin or 96Boards expansion connectors—are common themes among most of these boards.

The SBCs are listed in reverse chronological order according to their announcement dates. The links in the product names go to recent LinuxGizmos reports, which link to vendor product pages.

UP Xtreme—The latest in Aaeon’s line of community-backed SBCs taps Intel’s 8th Gen Whiskey Lake-U CPUs, which maintain a modest 15W TDP while boosting performance with up to quad-core, dual threaded configurations. Depending on when it ships, this Linux-ready model will likely be the most powerful community-backed SBC around — and possibly the most expensive.

The SBC supports up to 16GB DDR4 and 128GB eMMC and offers 4K displays via HDMI, DisplayPort, and eDP. Other features include SATA, 2x GbE, 4x USB 3.0, and 40-pin “HAT” and 100-pin GPIO add-on board connectors. You also get mini-PCIe and dual M.2 slots that support wireless modems and more SATA options. The slots also support Aaeon’s new AI Core X modules, which offer Intel’s latest Movidius Myriad X VPUs for 1TOPS neural processing acceleration.

Jetson Nano Dev Kit—Nvidia just announced a low-end Jetson Nano compute module that’s sort of like a smaller (70 x 45mm) version of the old Jetson TX1. It offers the same 4x Cortex-A57 cores but has an even lower-end 128-core Maxwell GPU. The module has half the RAM and flash (4GB/16GB) of the TX1 and TX2, and no WiFi/Bluetooth radios. Like the hexa-core Jetson TX2, however, it supports 4K video and the GPU offers similar CUDA-X deep learning libraries.

Although Nvidia has backed all its Linux-driven Jetson modules with development kits, the Jetson Nano Dev Kit is its first community-backed, maker-oriented kit. It does not appear to offer open specifications, but it costs only $99 and there’s a forum and other community resources. Many of the specs match or surpass the Raspberry Pi 3B+, including the addition of a 40-pin GPIO. Highlights include an M.2 slot, GbE with Power-over-Ethernet, HDMI 2.0 and eDP links, and 4x USB 3.0 ports.

Coral Dev Board—Google’s very first Linux maker board arrived earlier this month featuring an NXP i.MX8M and Google’s Edge TPU AI chip—a stripped-down version of Google’s TPU that is designed to run TensorFlow Lite ML models. The $150, Raspberry Pi-like Coral Dev Board was joined by a similarly Edge TPU-enabled Coral USB Accelerator USB stick. These will be followed by an Edge TPU based Coral PCIe Accelerator and a Coral SOM compute module. All these devices are backed with schematics, community resources, and other open-spec resources.

The Coral Dev Board combines the Edge TPU chip with NXP’s quad-core, 1.5GHz Cortex-A53 i.MX8M with a 3D Vivante GPU/VPU and a Cortex-M4 MCU. The SBC is even more like the Raspberry Pi 3B+ than Nvidia’s Dev Kit, mimicking the size and much of the layout and I/O, including the 40-pin GPIO connector. Highlights include 4K-ready GbE, HDMI 2.0a, 4-lane MIPI-DSI and CSI, and USB 3.0 host and Type-C ports.

SBC-C43—Seco’s commercial, industrial temperature SBC-C43 board is the first SBC based on NXP’s high-end, up to hexa-core i.MX8. The 3.5-inch SBC supports the i.MX8 QuadMax with 2x Cortex-A72 cores and 4x Cortex-A53 cores, the QuadPlus with a single Cortex-A72 and 4x -A53, and the Quad with no -A72 cores and 4x -A53. There are also 2x Cortex-M4F real-time cores and 2x Vivante GPU/VPU cores. Yocto Project, Wind River Linux, and Android are available.

The feature-rich SBC-C43 supports up to 8GB DDR4 and 32GB eMMC, both soldered for greater reliability. Highlights include dual GbE, HDMI 2.0a in and out ports, WiFi/Bluetooth, and a variety of industrial interfaces. Dual M.2 slots support SATA, wireless, and more.

Nitrogen8M_Mini—This Boundary Devices cousin to the earlier, i.MX8M based Nitrogen8M is available for $135, with shipments due this Spring. The open-spec Nitrogen8M_Mini is the first SBC to feature NXP’s new i.MX8M Mini SoC. The Mini uses a more advanced 14LPC FinFET process than the i.MX8M, resulting in lower power consumption and higher clock rates for both the 4x Cortex-A53 (1.5GHz to 2GHz) and Cortex-M4 (400MHz) cores. The drawback is that you’re limited to HD video resolution.

Supported with Linux and Android, the Nitrogen8M_Mini ships with 2GB to 4GB LPDDR4 RAM and 8GB to 128GB eMMC. MIPI-DSI and -CSI interfaces support optional touchscreens and cameras, respectively. A GbE port is standard and PoE and WiFi/BT are optional. Other features include 3x USB ports, one or two PCIe slots, and optional -40 to 85°C support. A Nitrogen8M_Mini SOM module with similar specs is also in the works.

Pine H64 Model B—Pine64’s latest hacker board was teased in late January as part of an ambitious roll-out of open source products, including a laptop, tablet, and phone. The Raspberry Pi semi-clone, which recently went on sale for $39 (2GB) or $49 (3GB), showcases the high-end, but low-cost Allwinner H64. The quad -A53 SoC is notable for its 4K video with HDR support.

The Pine H64 Model B offers up to 128GB eMMC storage, WiFi/BT, and a GbE port. I/O includes 2x USB 2.0 and single USB 3.0 and HDMI 2.0a ports plus SPDIF audio and an RPi-like 40-pin connector. Images include Android 7.0 and an “in progress” Armbian Debian Stretch.

AI-ML Board—Arrow unveiled this i.MX8X based SBC early this month along with a similarly 96Boards CE Extended format, i.MX8M based Thor96 SBC. While there are plenty of i.MX8M boards these days, we’re more intrigued with the lowest-end i.MX8X member of the i.MX8 family. The AI-ML Board is the first SBC we’ve seen to feature the low-power i.MX8X, which offers up to 4x 64-bit, 1.2GHz Cortex-A35 cores, a 4-shader, 4K-ready Vivante GPU/VPU, a Cortex-M4F chip, and a Tensilica HiFi 4 DSP.

The open-spec, Yocto Linux driven AI-ML Board is targeted at low-power, camera-equipped applications such as drones. The board has 2GB LPDDR4, Ethernet, WiFi/BT, and a pair each of MIPI-DSI and USB 3.0 ports. Cameras are controlled via the 96Boards 60-pin, high-power GPIO connector, which is joined by the usual 40-pin low-power link. The launch is expected June 1.

BeagleBone AI—The long-awaited successor to the Cortex-A8 AM3358 based BeagleBone family of boards advances to TI's dual-core Cortex-A15 AM5729, with similar PowerVR GPU and MCU-like PRU cores. The real story, however, is the AI firepower enabled by the SoC’s dual TI C66x DSPs and four embedded-vision-engine (EVE) neural processing cores. BeagleBoard.org claims that calculations for computer-vision models using EVE run at 8x the performance per watt compared to the similar, but EVE-less, AM5728. The EVE and DSP chips are supported through a TIDL machine learning OpenCL API and pre-installed tools.

Due to go on sale in April for about $100, the Linux-powered BeagleBone AI is based closely on the BeagleBone Black and offers backward header, mechanical, and software compatibility. It doubles the RAM to 1GB and quadruples the eMMC storage to 16GB. You now get GbE and high-speed WiFi, as well as a USB Type-C port.

Robotics RB3 Platform (DragonBoard 845c)—Qualcomm and Thundercomm are initially launching their 96Boards CE form factor, Snapdragon 845-based upgrade to the Snapdragon 820-based DragonBoard 820c SBC as part of a Qualcomm Robotics RB3 Platform. Yet, 96Boards.org has already posted a DragonBoard 845c product page, and we imagine the board will be available in the coming months without all the robotics bells and whistles. A compute module version is also said to be in the works.

The 10nm, octa-core, “Kryo” based Snapdragon 845 is one of the most powerful Arm SoCs around. It features an advanced Adreno 630 GPU with “eXtended Reality” (XR) VR technology and a Hexagon 685 DSP with a third-gen Neural Processing Engine (NPE) for AI applications. On the RB3 kit, the board’s expansion connectors are pre-stocked with Qualcomm cellular and robotics camera mezzanines. The $449 and up kit also includes standard 4K video and tracking cameras, and there are optional Time-of-Flight (ToF) and stereo SLM depth cameras. The SBC runs Linux with ROS (Robot Operating System).

Avenger96—Like Arrow’s AI-ML Board, the Avenger96 is a 96Boards CE Extended SBC aimed at low-power IoT applications. Yet, the SBC features an even more power-efficient (and slower) SoC family: ST’s recently announced STM32MP1. The Avenger96 runs Linux on the high-end STM32MP157 model, which has dual, 650MHz Cortex-A7 cores, a Cortex-M4, and a Vivante 3D GPU.

This sandwich-style board features an Avenger96 module with the STM32MP157 SoC, 1GB of DDR3L, 2MB SPI flash, and a power management IC. It’s unclear if the 8GB eMMC and WiFi-ac/Bluetooth 4.2 module are on the module or carrier board. The Avenger96 SBC is further equipped with GbE, HDMI, micro-USB OTG, and dual USB 2.0 host ports. There’s also a microSD slot and the usual 40- and 60-pin GPIO connectors. The board is expected to go on sale in April.


DataPractices.org Joins the Linux Foundation to Advance Best Practices and Offer Open Courseware Across the Data Ecosystem

The Linux Foundation has announced that datapractices.org, a vendor-neutral community working on the first-ever template for modern data teamwork, has joined as an official Linux Foundation project.

DataPractices.org was pioneered by data.world as a “Manifesto for Data Practices” of four values and 12 principles that illustrate the most effective, ethical, and modern approach to data teamwork. As a member of the foundation, datapractices.org will expand to offer open courseware and establish a collaborative approach to defining and refining data best practices. 

We talked with Patrick McGarry, head of data.world, to learn more about DataPractices.org.

LF: Can you briefly describe datapractices.org and tell us about its history?
Patrick: The Data Practices movement originated back in 2017 at The Open Data Science Leadership Summit in San Francisco. This event gathered together leaders in data science, semantics, open source, visualization, and industry to discuss the current state of the data community. We discovered that there were many similarities between the then-current challenges around data and the difficulties previously felt in software development that Agile addressed.

The goal of the Data Practices movement was to start a similar “Agile for Data” movement that could help offer direction and improved data literacy across the ecosystem. While the first step was the “Manifesto for Data Practices,” the intent was always to move past that and apply the values and principles to a series of free and open courseware that could benefit anyone who was interested.

Read more at Linux Foundation


Community Demos at ONS to Highlight LFN Project Harmonization and More

A little more than one year since LF Networking (LFN) came together, the project continues to demonstrate strategic growth, with Telstra coming on in November and the umbrella project now representing ~70% of the world’s mobile subscribers. Working side by side with service providers in the technical projects has been critical to ensure that work coming out of LFN is relevant and useful for meeting their requirements. A small sample of these integrations and innovations will be on display once again in the LF Networking booth at the Open Networking Summit, April 3-5 in San Jose, CA. We’re excited to be back in the Bay Area this year and are offering Day Pass and Hall Pass options. Register today!

Due to demand from the community, we’ve expanded the number of demo stations from 8 to 10, covering many areas in the open networking stack — with projects from within the LF Networking umbrella (FD.io, OpenDaylight, ONAP, OPNFV, and Tungsten Fabric), as well as adjacent projects such as Consul, Docker, Envoy, Istio, Ligato, Kubernetes, OpenStack, and SkyDive. We welcome you to come spend some time talking to and learning from these experts in the technical community.

The LFN promise of project harmonization will be on display from members iConectiv, Huawei, and Vodafone, who will highlight ONAP’s onboarding process for testing Virtual Network Functions (VNFs), integrating with OPNFV’s Dovetail project, supporting the expanding OPNFV Verification Program (OVP), and paving the way for future compliance and certification testing. Another example of project harmonization is the use of common infrastructure for testing across LFN projects, and the folks from UNH-IOL will be on hand to walk users through the Lab-as-a-Service offering that facilitates development and testing by hosting hardware and providing access to the community.

Other focus areas include network slicing, service mesh, SDN performance, testing, integration, and analysis.

Listed below is the full networking demo lineup, and you can read detailed descriptions of each demo here.

  • Demonstrating Service Provider Specific Test & Verification Requirements (OPNFV, ONAP) – Proposed By Vodafone and Demonstrated by Vodafone/Huawei/iConectiv
  • Integration of ODL and Network Service Mesh (OpenDaylight, Network Service Mesh) – Presented by Lumina Networks
  • SDN Performance Comparison Between Multi Data-paths (Tungsten Fabric, FD.io) – Presented by ATS, and Sofioni Networks
  • ONAP Broadband Service Use Case (ONAP, OPNFV) – Presented by Swisscom, Huawei, and Nokia
  • OpenStack Rolling Upgrade with Zero Downtime to Application (OPNFV, OpenStack) – Presented By Nokia and Deutsche Telekom
  • Skydive & VPP Integration: A Topology Analyzer (FD.io, Ligato, Skydive, Docker) – Presented by PANTHEON.tech
  • VSPERF: Beyond Performance Metrics, Towards Causation Analysis (OPNFV) – Presented by Spirent
  • Network Slice with ONAP based Life Cycle Management (ONAP) – Presented by AT&T, and Ericsson
  • Service Mesh and SDN: At-Scale Cluster Load Balancing (Tungsten Fabric, Linux OS, Istio, Consul, Envoy, Kubernetes, HAProxy) – Presented by CloudOps, Juniper and the Tungsten Fabric Community
  • Lab as a Service 2.0 Demo (OPNFV) – Presented by UNH-IOL

We hope to see you at the show! Register today!

This article originally appeared at Linux Foundation