AI Is Coming to Edge Computing Devices

Very few non-server systems run software that could be called machine learning (ML) or artificial intelligence (AI). Yet, server-class “AI on the Edge” applications are coming to embedded devices, and Arm intends to fight Intel and AMD over every last one of them.

Arm recently announced a new Cortex-A76 architecture that is claimed to boost the processing of AI and ML algorithms on edge computing devices by a factor of four. This does not include ML performance gains promised by the new Mali-G76 GPU. There’s also a Mali-V76 VPU designed for high-res video. The Cortex-A76 and the two Mali designs are intended to “complement” Arm’s Project Trillium Machine Learning processors (see below).

Improved performance

The Cortex-A76 differs from the Cortex-A73 and Cortex-A75 IP designs in that it’s designed as much for laptops as for smartphones and high-end embedded devices. Cortex-A76 provides “35 percent more performance year-over-year,” compared to Cortex-A75, claims Arm. The IP, which is expected to arrive in products a year from now, is also said to provide 40 percent improved efficiency.

Like Cortex-A75, which is equivalent to the latest Kryo cores available on Qualcomm’s Snapdragon 845, the Cortex-A76 supports DynamIQ, Arm’s more flexible version of its big.LITTLE multi-core scheme. Unlike Cortex-A75, which was announced with a Cortex-A55 companion chip, Arm had no new DynamIQ companion for the Cortex-A76.

Cortex-A76 enhancements are said to include decoupled branch prediction and instruction fetch, as well as Arm’s first 4-wide decode core, which boosts the maximum instructions-per-cycle capability. There’s also higher integer and vector execution throughput, including support for dual-issue native 16B (128-bit) vector and floating-point units. Finally, the new full-cache memory hierarchy is “co-optimized for latency and bandwidth,” says Arm.

Unlike the latest high-end Cortex-A releases, Cortex-A76 represents “a brand new microarchitecture,” says Arm. This is confirmed by AnandTech’s usual deep-dive analysis. Cortex-A73 and -A75 debuted elements of the new “Artemis” architecture, but the Cortex-A76 is built from scratch with Artemis.

The Cortex-A76 should arrive on 7nm-fabricated TSMC products running at 3GHz, says AnandTech. The 4x improvements in ML workloads are primarily due to new optimizations in the ASIMD pipelines “and how dot products are handled,” says the story.

Meanwhile, The Register noted that Cortex-A76 is Arm’s first design that will exclusively run 64-bit kernel-level code. The cores will support 32-bit code, but only at non-privileged levels, says the story.

Mali-G76 GPU and Mali-V76 VPU

The new Mali-G76 GPU announced with Cortex-A76 targets gaming, VR, AR, and on-device ML. The Mali-G76 is said to provide 30 percent more efficiency and performance density and 1.5x improved performance for mobile gaming. The Bifrost architecture GPU also provides 2.7x ML performance improvements compared to the Mali-G72, which was announced last year with the Cortex-A75.

The Mali-V76 VPU supports UHD 8K viewing experiences. It’s aimed at 4×4 video walls, which are especially popular in China, and is designed to support the 8K video coverage that Japan has promised for the 2020 Olympics. 8K@60 streams require four times the bandwidth of 4K@60 streams. To achieve this, Arm added an extra AXI bus and doubled the line buffers throughout the video pipeline. The VPU also supports 8K@30 decode.

Project Trillium’s ML chip detailed

Arm previously revealed other details about the Machine Learning (ML) processor, also referred to as MLP. The ML chip will accelerate AI applications including machine translation and face recognition. 

The new processor architecture is part of the Project Trillium initiative for AI, and follows Arm’s second-gen Object Detection (OD) Processor for optimizing visual processing and people/object detection. The ML design will initially debut as a co-processor in mobile phones by late 2019.

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.

Video: Linus Torvalds Explains How Linux Still Surprises and Motivates Him

Hear about Linux development directly from Linus Torvalds in this video from our archives.

Linus Torvalds took to the stage in China for the first time Monday at LinuxCon + ContainerCon + CloudOpen China 2017 in Beijing. In front of a crowd of nearly 2,000, Torvalds spoke with VMware Head of Open Source Dirk Hohndel in one of their famous “fireside chats” about what motivates and surprises him and how aspiring open source developers can get started. Here are some highlights of their talk.

What’s surprising about Linux development

“What I find interesting is code that I thought was stable continually gets improved. There are things we haven’t touched for many years, then someone comes along and improves them or makes bug reports in something I thought no one used. We have new hardware, new features that are developed, but after 25 years, we still have old, very basic things that people care about and still improve.”

What motivates him

“I really like what I’m doing. I like waking up and having a job that is technically interesting and challenging without being too stressful so I can do it for long stretches; something where I feel I am making a real difference and doing something meaningful not just for me.”

“I occasionally have taken breaks from my job. The 2-3 weeks I worked on Git to get that started for example. But every time I take a longer break, I get bored. When I go diving for a week, I look forward to getting back. I never had the feeling that I need to take a longer break.”

The future of Linux leadership

“Our processes have not only worked for 25 years, we still have a very strong maintainer group. We complain that we don’t have enough maintainers – which is true, we only have tens of top maintainers who do the daily work of merging stuff. That’s a strong team for an open source project. And as these maintainers get older and fatter, we have new people coming in. It takes years to go from a new developer to a top maintainer, so I don’t feel that we should necessarily worry about the process and Linux for the next 20 years.”

Will Linux be replaced

“Maybe some new aggressive project will come along and show they can do what we do better, but I don’t worry about that. There have been lots of very successful forks of Linux. What makes people not think of them as forks is that they are harmonious. If someone says they want to do this and change everything and make the kernel so much better, my feeling is do it, prove yourself. I may think it’s a bad idea, but you can prove me wrong.”

Thoughts on Git

“I’m very surprised about how widely Git has spread. I’m pleased obviously, and it validates my notion of doing distributed development. At the same time, looking at most source control versions, it tends to be a huge slog and difficult to introduce a new software control version. I expected it to be limited mostly to the kernel — as it’s tailored to what we do.”

“For the first 3 to 4 years, the complaint about Git was it was so different and hard to use. About 5 years ago something changed. Enough projects and developers had started using Git that it wasn’t different anymore; it was what people were used to. They started taking advantage of the development model and the feeling of security that using Git meant nothing would be corrupted or lost.”

“In certain circles, Git is more well known than Linux. Linux is often hidden – on an Android phone you’re running Linux, but you don’t think about it. With Git, you know you are using Git.”

Forking Linux

“When I sat down and wrote Git, a prime principle was that you should be able to fork and go off on your own and do something on your own. If you have forks that are friendly — the type that prove me wrong and do something interesting that improves the kernel — in that situation, someone can come back and say they actually improved the kernel and there are no bad feelings. I’ll take your improved code and merge it back. That’s why you should encourage forks. You also want to make it easy to take back the good ones.”

How to get started as an open source developer

“For me, I was always self-motivated and knew what I wanted to do. I was never told what I should look at doing. I’m not sure my example is the right thing for people to follow. There are a ton of open source projects and, if you are a beginning programmer, find something you’re interested in that you can follow for more than just a few weeks. Get to know the code so well that you get to the point where you are an expert on a code piece. It doesn’t need to be the whole project. No one is an expert on the whole kernel, but you can know an area well.  

“If you can be part of a community and set up patches, it’s not just about the coding, but about the social aspect of open source. You make connections and improve yourself as a programmer. You are basically showing off – I made these improvements, I’m capable of going far in my community or job. You’ll have to spend a certain amount of time to learn a project, but there’s a huge upside — not just from a career aspect, but having an amazing project in your life.”

Watch the complete video below:

[embedded content]

Systemd Services: Reacting to Change

I have one of these Compute Sticks (Figure 1) and use it as an all-purpose server. It is inconspicuous and silent and, as it is built around an x86 architecture, I don’t have problems getting it to work with drivers for my printer, and that’s what it does most days: it interfaces with the shared printer and scanner in my living room.

Most of the time it is idle, especially when we are out, so I thought it would be good idea to use it as a surveillance system. The device doesn’t come with its own camera, and it wouldn’t need to be spying all the time. I also didn’t want to have to start the image capturing by hand because this would mean having to log into the Stick using SSH and fire up the process by writing commands in the shell before rushing out the door.

So I thought that the thing to do would be to grab a USB webcam and have the surveillance system fire up automatically just by plugging it in. Bonus points if the surveillance system also fired up after the Stick rebooted and found that the camera was connected.

In prior installments, we saw that systemd services can be started or stopped by hand or when certain conditions are met. Those conditions are not limited to when the OS reaches a certain state in the boot up or powerdown sequence but can also be when you plug in new hardware or when things change in the filesystem. You do that by combining a Udev rule with a systemd service.

Hotplugging with Udev

Udev rules live in the /etc/udev/rules.d directory and are usually a single line containing conditions and assignments that lead to an action.

That was a bit cryptic. Let’s try again:

Typically, in a Udev rule, you tell systemd what to look for when a device is connected. For example, you may want to check if the make and model of a device you just plugged in correspond to the make and model of the device you are telling Udev to wait for. Those are the conditions mentioned earlier.

Then you may want to change some stuff so you can use the device easily later. An example of that would be to change the read and write permissions to a device: if you plug in a USB printer, you’re going to want users to be able to read information from the printer (the user’s printing app would want to know the model, make, and whether it is ready to receive print jobs or not) and write to it, that is, send stuff to print. Changing the read and write permissions for a device is done using one of the assignments you read about earlier.

Finally, you will probably want the system to do something when the conditions mentioned above are met, like start a backup application to copy important files when a certain external hard disk drive is plugged in. That is an example of an action mentioned above.

With that in mind, ponder this:

ACTION=="add", SUBSYSTEM=="video4linux", ATTRS{idVendor}=="03f0", ATTRS{idProduct}=="e207", SYMLINK+="mywebcam", TAG+="systemd", MODE="0666", ENV{SYSTEMD_WANTS}="webcam.service"

The first part of the rule,

ACTION=="add", SUBSYSTEM=="video4linux", ATTRS{idVendor}=="03f0", ATTRS{idProduct}=="e207" [etc... ]

shows the conditions that the device has to meet before doing any of the other stuff you want the system to do. The device has to be added (ACTION=="add") to the machine, and it has to be integrated into the video4linux subsystem. To make sure the rule is applied only when the correct device is plugged in, you have to make sure Udev correctly identifies the manufacturer (ATTRS{idVendor}=="03f0") and the model (ATTRS{idProduct}=="e207") of the device.

In this case, we’re talking about this device (Figure 2):

Notice how you use == to indicate that these are logical comparisons (conditions), not assignments. You would read the above snippet of the rule like this:

if the device is added, and the device is controlled by the video4linux subsystem, and the manufacturer of the device is 03f0, and the model is e207, then...

But where do you get all this information? Where do you find the action that triggers the event, the manufacturer, model, and so on? You will probably have to use several sources. The idVendor and idProduct you can get by plugging the webcam into your machine and running lsusb:

lsusb
Bus 002 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
Bus 002 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 003: ID 03f0:e207 Hewlett-Packard
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 001 Device 003: ID 04f2:b1bb Chicony Electronics Co., Ltd
Bus 001 Device 002: ID 8087:0024 Intel Corp. Integrated Rate Matching Hub
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub

The webcam I’m using is made by HP, and you can only see one HP device in the list above. The ID gives you the manufacturer and the model numbers separated by a colon (:). If you have more than one device by the same manufacturer and are not sure which is which, unplug the webcam, run lsusb again, and check what’s missing.
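
If eyeballing the list gets tedious, you can let diff spot the newcomer for you. A minimal sketch, assuming the /tmp file names (they are arbitrary):

lsusb > /tmp/before.txt
# now plug the webcam in, wait a couple of seconds, then:
lsusb > /tmp/after.txt
diff /tmp/before.txt /tmp/after.txt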

OR…

Unplug the webcam, wait a few seconds, run the command udevadm monitor --environment, and then plug the webcam back in again. When you do that with the HP webcam, you get:

udevadm monitor --environment
UDEV  [35776.495221] add   /devices/pci0000:00/0000:00:1c.3/0000:04:00.0/usb3/3-1/3-1:1.0/input/input21/event11 (input)
.MM_USBIFNUM=00
ACTION=add
BACKSPACE=guess
DEVLINKS=/dev/input/by-path/pci-0000:04:00.0-usb-0:1:1.0-event /dev/input/by-id/usb-Hewlett_Packard_HP_Webcam_HD_2300-event-if00
DEVNAME=/dev/input/event11
DEVPATH=/devices/pci0000:00/0000:00:1c.3/0000:04:00.0/usb3/3-1/3-1:1.0/input/input21/event11
ID_BUS=usb
ID_INPUT=1
ID_INPUT_KEY=1
ID_MODEL=HP_Webcam_HD_2300
ID_MODEL_ENC=HP\x20Webcam\x20HD\x202300
ID_MODEL_ID=e207
ID_PATH=pci-0000:04:00.0-usb-0:1:1.0
ID_PATH_TAG=pci-0000_04_00_0-usb-0_1_1_0
ID_REVISION=1020
ID_SERIAL=Hewlett_Packard_HP_Webcam_HD_2300
ID_TYPE=video
ID_USB_DRIVER=uvcvideo
ID_USB_INTERFACES=:0e0100:0e0200:010100:010200:030000:
ID_USB_INTERFACE_NUM=00
ID_VENDOR=Hewlett_Packard
ID_VENDOR_ENC=Hewlett\x20Packard
ID_VENDOR_ID=03f0
LIBINPUT_DEVICE_GROUP=3/3f0/e207:usb-0000:04:00.0-1/button
MAJOR=13
MINOR=75
SEQNUM=3162
SUBSYSTEM=input
USEC_INITIALIZED=35776495065
XKBLAYOUT=es
XKBMODEL=pc105
XKBOPTIONS=
XKBVARIANT=

That may look like a lot to process, but, check this out: the ACTION field early in the list tells you what event just happened, i.e., that a device got added to the system. You can also see the name of the device spelled out on several of the lines, so you can be pretty sure that it is the device you are looking for. The output also shows the manufacturer’s ID number (ID_VENDOR_ID=03f0) and the model number (ID_MODEL_ID=e207).

This gives you three of the four values the condition part of the rule needs. You may be tempted to think that it gives you the fourth, too, because there is also a line that says:

SUBSYSTEM=input

Be careful! Although it is true that a USB webcam is a device that provides input (as does a keyboard and a mouse), it also belongs to the usb subsystem, and several others. This means that your webcam gets added to several subsystems and looks like several devices. If you pick the wrong subsystem, your rule may not work as you want it to, or, indeed, at all.

So, the third thing you have to check is all the subsystems the webcam has been added to, and pick the correct one. To do that, unplug your webcam again and run:

ls /dev/video*

This will show you all the video devices connected to the machine. If you are using a laptop, it most likely has a built-in webcam, which will probably show up as /dev/video0. Plug your webcam back in and run ls /dev/video* again.

Now you should see one more video device (probably /dev/video1).

Now you can find out all the subsystems it belongs to by running udevadm info -a /dev/video1:

udevadm info -a /dev/video1

Udevadm info starts with the device specified by the devpath and then
walks up the chain of parent devices. It prints for every device
found, all possible attributes in the udev rules key format.
A rule to match, can be composed by the attributes of the device
and the attributes from one single parent device.

  looking at device '/devices/pci0000:00/0000:00:1c.3/0000:04:00.0/usb3/3-1/3-1:1.0/video4linux/video1':
    KERNEL=="video1"
    SUBSYSTEM=="video4linux"
    DRIVER==""
    ATTR{dev_debug}=="0"
    ATTR{index}=="0"
    ATTR{name}=="HP Webcam HD 2300: HP Webcam HD"
    [etc...]

The output goes on for quite a while, but what you’re interested in is right at the beginning: SUBSYSTEM=="video4linux". This is a line you can literally copy and paste right into your rule. The rest of the output (not shown for brevity) gives you a couple more nuggets, like the manufacturer and model IDs, again in a format you can copy and paste into your rule.

Now that you have a way of unambiguously identifying the device and the event that should trigger the action, it is time to tinker with the device.

The next section in the rule, SYMLINK+="mywebcam", TAG+="systemd", MODE="0666", tells Udev to do three things. First, you want to create a symbolic link from the device (e.g., /dev/video1) to /dev/mywebcam. This is because you cannot predict what the system is going to call the device by default. When you have a built-in webcam and you hotplug a new one, the built-in webcam will usually be /dev/video0 and the external one will become /dev/video1. However, if you boot your computer with the external USB webcam plugged in, that could be reversed: the internal webcam may become /dev/video1 and the external one /dev/video0. This means that, although your image-capturing script (which you will see later on) always needs to point to the external webcam device, you can’t rely on that being /dev/video0 or /dev/video1. To solve this problem, you tell Udev to create a symbolic link, which will never change, the moment the device is added to the video4linux subsystem, and you make your script point to that.
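
Once the rule is loaded (you will see how below), a quick sanity check is to plug the camera in and list the link. The exact target shown here is just an example and may differ on your machine:

ls -l /dev/mywebcam
# lrwxrwxrwx 1 root root 6 ... /dev/mywebcam -> video1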

The second thing you do is add "systemd" to the list of Udev tags associated with this rule. This tells Udev that the action that the rule will trigger will be managed by systemd, that is, it will be some sort of systemd service.

Notice how in both cases you use the += operator. This appends the value to a list, which means you can add more than one value to SYMLINK and TAG.

MODE, on the other hand, can only take one value (hence you use the simple = assignment operator). What MODE does is tell Udev who can read from or write to the device. If you are familiar with chmod (and, if you are reading this, you should be), you will also be familiar with how you can express permissions using numbers. That is what this is: 0666 means “give read and write privileges to the device to everybody.”
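
For reference, the effect of MODE="0666" on the device node is the same as running chmod by hand. Assuming the camera landed on /dev/video1, it is equivalent to:

sudo chmod 0666 /dev/video1
ls -l /dev/video1
# crw-rw-rw- ... /dev/video1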

Finally, ENV{SYSTEMD_WANTS}="webcam.service" tells Udev what systemd service to run.

Save this rule into a file called 90-webcam.rules (or something like that) in /etc/udev/rules.d and you can load it either by rebooting your machine, or by running:

sudo udevadm control --reload-rules && sudo udevadm trigger

Service at Last

The service the Udev rule triggers is ridiculously simple:

# webcam.service

[Service]
Type=simple
ExecStart=/home/[user name]/bin/checkimage.sh

Basically, it just runs the checkimage.sh script stored in your personal bin/ directory and pushes it to the background. This is something you saw how to do in prior installments. It may not seem like much, but, just because it is called by a Udev rule, you have created a special kind of systemd unit called a device unit. Congratulations.

As for the checkimage.sh script webcam.service calls, there are several ways of grabbing an image from a webcam and comparing it to a prior one to check for changes (which is what checkimage.sh does), but this is how I did it:

#!/bin/bash
# This is the checkimage.sh script

# Grab an initial frame from the webcam (written out as 00000001.png)
# and store it as the reference image.
mplayer -vo png -frames 1 tv:// -tv driver=v4l2:width=640:height=480:device=/dev/mywebcam &>/dev/null
mv 00000001.png /home/[user name]/monitor/monitor.png

while true
do
    # Grab a new frame...
    mplayer -vo png -frames 1 tv:// -tv driver=v4l2:width=640:height=480:device=/dev/mywebcam &>/dev/null
    mv 00000001.png /home/[user name]/monitor/temp.png

    # ...and measure how different it is from the reference image.
    imagediff=`compare -metric mae /home/[user name]/monitor/monitor.png /home/[user name]/monitor/temp.png /home/[user name]/monitor/diff.png 2>&1 > /dev/null | cut -f 1 -d " "`

    # If the difference is big enough, something moved: keep the new frame.
    if [ `echo "$imagediff > 700.0" | bc` -eq 1 ]
    then
        mv /home/[user name]/monitor/temp.png /home/[user name]/monitor/monitor.png
    fi

    sleep 0.5
done

Start by using MPlayer to grab a frame (00000001.png) from the webcam. Notice how we point mplayer to the mywebcam symbolic link we created in our Udev rule, instead of to video0 or video1. Then you transfer the image to the monitor/ directory in your home directory. Then run an infinite loop that does the same thing again and again, but also uses ImageMagick’s compare tool to see if there are any differences between the last image captured and the one that is already in the monitor/ directory.

If the images are different, it means something has moved within the webcam’s frame. The script overwrites the original image with the new one and continues comparing, waiting for more movement.
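
You can try the comparison step by hand with any two PNGs to get a feel for the numbers (the file names here are just examples):

compare -metric mae first.png second.png diff.png 2>&1 >/dev/null | cut -f 1 -d " "

This prints the mean absolute error between the two images; the script treats anything above 700.0 as movement, a threshold you may need to tune for your camera and lighting.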

Plugged

With all the bits and pieces in place, when you plug your webcam in, your Udev rule will be triggered and will start webcam.service. The webcam.service will execute checkimage.sh in the background, and checkimage.sh will start taking pictures every half a second. You will know because your webcam’s LED will start flashing each time it takes a snap.

As always, if something goes wrong, run

systemctl status webcam.service

to check what your service and script are up to.
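
You can also follow the script’s output live through the journal (assuming your distribution keeps service output there, which is the systemd default):

journalctl -u webcam.service -f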

Coming up

You may be wondering: Why overwrite the original image? Surely you would want to see what’s going on if the system detects any movement, right? You would be right, but as you will see in the next installment, leaving things as they are and processing the images using yet another type of systemd unit makes things nice, clean and easy.

Just wait and see.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

LF Deep Learning Foundation Announces Project Contribution Process

The LF Deep Learning Foundation, a community umbrella project of The Linux Foundation with the mission of supporting artificial intelligence, machine learning and deep learning open source projects, is working to build a self-sustaining ecosystem of projects.  Having a clear roadmap for how to contribute projects is a first step. Contributed projects operate under their own technical governance with collaboration resources allocated and provided by the LF Deep Learning Foundation’s Governing Board. Membership in the LF Deep Learning Foundation is not required to propose a project contribution.

The project lifecycle and contribution process documents can be found here: https://lists.deeplearningfoundation.org/g/tac-general/wiki/Lifecycle-Document-and-Project-Proposal-Process

Read more at The Linux Foundation

Speak at ELC + OpenIoT Summit EU – Proposals due by Sunday, July 1

Share your expertise! Submit your proposal to speak at ELC + OpenIoT Summit Europe by July 1.

For the past 13 years, Embedded Linux Conference (ELC) has been the premier vendor-neutral technical conference for companies and developers using Linux in embedded products. ELC has become the preeminent space for product vendors as well as kernel and systems developers to collaborate with user-space developers – the people building applications on embedded Linux.

View Full List of Suggested Topics and Submit Now >>

Read more at The Linux Foundation

Turn Your Raspberry Pi into a Tor Relay Node

If you’re anything like me, you probably got yourself a first- or second-generation Raspberry Pi board when they first came out, played with it for a while, but then shelved it and mostly forgot about it. After all, unless you’re a robotics enthusiast, you probably don’t have that much use for a computer with a pretty slow processor and 256 megabytes of RAM. This is not to say that there aren’t cool things you can do with one of these, but between work and other commitments, I just never seem to find the right time for some good old nerding out.

However, if you would like to put it to good use without sacrificing too much of your time or resources, you can turn your old Raspberry Pi into a perfectly functioning Tor relay node.

What is a Tor Relay node

You have probably heard about the Tor project before, but just in case you haven’t, here’s a very quick summary. The name “Tor” stands for “The Onion Router” and it is a technology created to combat online tracking and other privacy violations.

Everything you do on the Internet leaves a set of digital footprints in every piece of equipment that your IP packets traverse: all of the switches, routers, load balancers and destination websites log the IP address from which your session originated and the IP address of the internet resource you are accessing (and often its hostname, even when using HTTPS). If you’re browsing from home, then your IP can be directly mapped to your household. If you’re using a VPN service (as you should be), then your IP can be mapped to your VPN provider, and then they are the ones who can map it to your household. In any case, odds are that someone somewhere is assembling an online profile on you based on the sites you visit and how much time you spend on each of them. Such profiles are then sold, aggregated with matching profiles collected from other services, and then monetized by ad networks. At least, that’s the optimist’s view of how that data is used — I’m sure you can think of many examples of how your online usage profiles can be used against you in much more nefarious ways.

The Tor project attempts to provide a solution to this problem by making it impossible (or, at least, unreasonably difficult) to trace the endpoints of your IP session. Tor achieves this by bouncing your connection through a chain of anonymizing relays, consisting of an entry node, relay node, and exit node:

  1. The entry node only knows your IP address and the IP address of the relay node, but not the final destination of the request;

  2. The relay node only knows the IP address of the entry node and the IP address of the exit node, and neither the origin nor the final destination;

  3. The exit node only knows the IP address of the relay node and the final destination of the request; it is also the only node that can decrypt the traffic before sending it over to its final destination.

Relay nodes play a crucial role in this exchange because they create a cryptographic barrier between the source of the request and the destination. Even if exit nodes are controlled by adversaries intent on stealing your data, they will not be able to know the source of the request without controlling the entire Tor relay chain.

As long as there are plenty of relay nodes, your privacy when using the Tor network remains protected — which is why I heartily recommend that you set up and run a relay node if you have some home bandwidth to spare.

Things to keep in mind regarding Tor relays

A Tor relay node only receives encrypted traffic and sends encrypted traffic — it never accesses any other sites or resources online, so you do not need to worry that someone will browse any worrisome sites directly from your home IP address. Having said that, if you reside in a jurisdiction where offering anonymity-enhancing services is against the law, then, obviously, do not operate your own Tor relay. You may also want to check if operating a Tor relay is against the terms and conditions of your internet access provider.

What you will need

  • A Raspberry Pi (any model/generation) with some kind of enclosure

  • An SD card with Raspbian Stretch Lite

  • An ethernet cable

  • A micro-USB cable for power

  • A keyboard and an HDMI-capable monitor (to use during the setup)

This guide will assume that you are setting this up on your home connection behind a generic cable or ADSL modem router that performs NAT translation (and it almost certainly does). Most of them have a USB port you can use to power up your Raspberry Pi, and if you’re only using the wifi functionality of the router, then it should have a free ethernet port for you to plug into. However, before we get to the point where we can set-and-forget your Raspberry Pi, we’ll need to set it up as a Tor relay node, for which you’ll need a keyboard and a monitor.

The bootstrap script

I’ve adapted a popular Tor relay node bootstrap script for use with Raspbian Stretch — you can find it in my GitHub repository here: https://github.com/mricon/tor-relay-bootstrap-rpi. Once you have booted up your Raspberry Pi and logged in with the default “pi” user, do the following:

sudo apt-get install -y git
git clone https://github.com/mricon/tor-relay-bootstrap-rpi
cd tor-relay-bootstrap-rpi
sudo ./bootstrap.sh

Here is what the script will do:

  1. Install the latest OS updates to make sure your Pi is fully patched

  2. Configure your system for automated unattended updates, so you automatically receive security patches when they become available

  3. Install Tor software

  4. Tell your NAT router to forward the necessary ports to reach your relay (the ports we’ll use are 443 and 8080, since they are least likely to be filtered by your internet provider)

Once the script is done, you’ll need to configure the torrc file, but first decide how much bandwidth you’ll want to donate to Tor traffic. To measure your connection, type “Speed Test” into Google and click the “Run Speed Test” button. You can disregard the “Download speed” result, as your Tor relay can only operate as fast as your maximum upload bandwidth.

Therefore, take the “Mbps upload” number, divide it by 8 and multiply by 1024 to get the bandwidth in kilobytes per second. For example, if you got 21.5 Mbps for your upload speed, then that number is:

21.5 Mbps / 8 * 1024 = 2752 KBytes per second

You’ll want to limit your relay bandwidth to about half that amount, and allow bursting to about three-quarters of it. Once decided, open /etc/tor/torrc using your favourite editor and tweak the bandwidth settings.

RelayBandwidthRate 1300 KBytes
RelayBandwidthBurst 2400 KBytes
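
If you’d rather let the shell do the arithmetic, here is a small sketch; the 21.5 Mbps figure is just the example above, and the values it prints follow the half and three-quarters rules of thumb rather than the rounded numbers shown:

#!/bin/bash
# Convert an upload speed in Mbps to KBytes/s, then derive torrc values.
UPLOAD_MBPS=21.5                                       # substitute your own speed-test result
KBYTES=$(echo "$UPLOAD_MBPS * 1024 / 8" | bc)          # 2752
echo "RelayBandwidthRate $((KBYTES / 2)) KBytes"       # about half: 1376
echo "RelayBandwidthBurst $((KBYTES * 3 / 4)) KBytes"  # about three-quarters: 2064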

Of course, if you’re feeling more generous, then feel free to put in higher numbers, though you don’t want to max out your outgoing bandwidth — it will noticeably impact your day-to-day usage if these numbers are set too high.

While you have that file open, you should set two more things. First, the Nickname, which is just for your own recordkeeping; second, the ContactInfo line, which should list a single email address. Since your relay will be running unattended, you should use an email address that you regularly check: you will receive an alert from the “Tor Weather” service if your relay goes offline for longer than 48 hours.

Nickname myrpirelay
ContactInfo you@example.com

Save the file and reboot the system to start the Tor relay.
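
If you prefer not to reboot, restarting the Tor daemon should have the same effect; this assumes the service is named tor, as it is in the Raspbian package:

sudo systemctl restart tor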

Testing to make sure Tor traffic is flowing

If you would like to make sure that the relay is functioning, you can run the “arm” tool:

sudo -u debian-tor arm

It will take a while to start, especially on older-generation boards, but eventually it will show you a bar chart of incoming and outgoing traffic (or error messages that will help you troubleshoot your setup).

Once you are convinced that everything is functioning, you can unplug the keyboard and the monitor and relocate the Raspberry Pi into the basement where it will quietly sit and shuffle encrypted bits around. Congratulations, you’ve helped improve privacy and combat malicious tracking online!

Crack Open ACRN – A Device Hypervisor Designed for IoT

As the Internet of Things has grown in scale, IoT developers are increasingly expected to support a range of hardware resources, operating systems, and software tools/applications. This is a challenge given many connected devices are size-constrained. Virtualization can help meet these broad needs, but existing options don’t offer the right mix of size, flexibility, and functionality for IoT development.

ACRN™ is different by design. Launched at Embedded Linux Conference 2018, ACRN is a flexible, lightweight reference hypervisor, built with real-time and safety-criticality in mind and optimized to streamline embedded development through an open source platform.

One of ACRN’s biggest advantages is its small size — roughly only 25K lines of code at launch.

Read more at The Linux Foundation

First Keynotes Announced for Open Source Summit North America – Register Now & Save $300

Join us in Vancouver in August for 250+ educational sessions covering the latest technologies and topics in open source, and hear from industry experts including keynotes from:

  • Ajay Agrawal, Artificial Intelligence & Machine Learning Expert, Author of Prediction Machines, and Founder, The Creative Destruction Lab
  • Jennifer Cloer, Founder of reTHINKit and Creator and Executive Producer, The Chasing Grace Project
  • Wim Coekaerts, Senior Vice President of Operating Systems and Virtualization Engineering, Oracle
  • Ben Golub, Executive Chairman and Interim CEO, and Shawn Wilkinson, Co-founder, Storj Labs
  • and more

Read more at The Linux Foundation

Posted on Leave a comment

How to Install and Use Flatpak on Linux

The landscape of applications is quickly changing. Many platforms are migrating to containerized applications… and with good cause. An application wrapped in a bundled container is easier to install, includes all the necessary dependencies, doesn’t directly affect the hosting platform libraries, automatically updates (in some cases), and (in most cases) is more secure than a standard application. Another benefit of these containerized applications is that they are universal (i.e., such an application would install on Ubuntu Linux or Fedora Linux, without having to convert a .deb package to an .rpm).

As of now, there are two main universal package systems: Snap and Flatpak. Both function in similar fashion, but one is found by default on Ubuntu-based systems (Snap) and one on Fedora-based systems (Flatpak). It should come as no surprise that both can be installed on either type of system. So if you want to run Snaps on Fedora, you can. If you want to run Flatpak on Ubuntu, you can.

I will walk you through the process of installing and using Flatpak on Ubuntu 18.04. If your platform of choice is Fedora (or a Fedora derivative), you can skip the installation process.

Installation

The first thing to do is install Flatpak. The process is simple. Open up a terminal window and follow these steps:

  1. Add the necessary repository with the command sudo add-apt-repository ppa:alexlarsson/flatpak.

  2. Update apt with the command sudo apt update.

  3. Install Flatpak with the command sudo apt install flatpak.

  4. Install Flatpak support for GNOME Software with the command sudo apt install gnome-software-plugin-flatpak.

  5. Reboot your system.

Usage

I’ll first show you how to install a Flatpak package from the command line, and then via the GUI. Let’s say you want to install the Spotify desktop client via Flatpak. To do this, you must first instruct Flatpak to retrieve the necessary app. The Spotify Flatpak (along with others) is hosted on Flathub. The first thing we’re going to do is add the Flathub remote repository with the following command:

sudo flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

Now you can install any Flatpak app found on Flathub. For example, to install Spotify, the command would be:

sudo flatpak install flathub com.spotify.Client

To find the exact command for each install, visit the app’s page on Flathub; the installation command is listed beneath the description.

Running a Flatpak-installed app is a bit different than a standard app (at least from the command line). Head back to the terminal window and issue the command:

flatpak run com.spotify.Client

Of course, after you’ve restarted your machine (upon installing the GNOME Software support), those apps should appear in your desktop menu, making it unnecessary to start them from the command line.

To uninstall a Flatpak from the command line, you would go back to the terminal and issue the command:

sudo flatpak uninstall NAME

where NAME is the name of the app to remove. In our Spotify case, that would be:

sudo flatpak uninstall com.spotify.Client

Now we want to update our Flatpak apps. To do this, first list all of your installed Flatpak apps by issuing the command:

flatpak list

Now that we have our list of apps (Figure 1), we can update with the command sudo flatpak update NAME (where NAME is the name of our app to update).

So if we want to update GIMP, we’d issue the command:

sudo flatpak update org.gimp.GIMP

If there are any updates to be applied, they’ll be taken care of. If there are no updates to be applied, nothing will be reported.
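
As a side note, flatpak update can also be run without an application ID, in which case it checks every installed Flatpak (apps and runtimes) for updates:

sudo flatpak update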

Installing from GNOME Software

Let’s make this even easier. Since we installed GNOME Software support for Flatpak, we don’t actually have to bother with the command line. Don’t be mistaken, though: unlike Snap support, you won’t actually find Flatpak apps listed within GNOME Software (even though we’ve installed Software support). Instead, you’ll find support through the web browser.

Let me show you. Point your browser to Flathub.

Let’s say you want to install Slack via Flatpak. Go to the Slack Flathub page and then click on the INSTALL button. Since we installed GNOME Software support, the standard browser dialog window will appear with an included option to open the file via Software Install (Figure 2).

This action will then open GNOME Software (or, in the case of Ubuntu, Ubuntu Software), where you can click the Install button (Figure 3) to complete the process.

Once the installation completes, you can then either click the Launch button, or close GNOME Software and launch the application from the desktop menu (in the case of GNOME, the Dash).

After you’ve installed a Flatpak app via GNOME Software, it can also be removed from the same system (so there’s still no need to go through the command line).

What about KDE?

If you prefer using the KDE desktop environment, you’re in luck. If you issue the command sudo apt install plasma-discover-flatpak-backend, it’ll install Flatpak support for the KDE app store, Discover. Once you’ve added Flatpak support, you then need to add a repository. Open Discover and then click on Settings. In the settings window, you’ll now see a Flatpak listing (Figure 4).

Click on the Flatpak drop-down and then click Add Flathub. Click on the Applications tab (in the left navigation) and you can then search for (and install) any applications found on Flathub (Figure 5).

Easy Flatpak management

And that’s the gist of using Flatpak. These universal packages can be used on most Linux distributions and can even be managed via the GUI on some desktop environments. I highly recommend you give Flatpak a try. With the combination of standard installation, Flatpak, and Snaps, you’ll find software management on Linux has become incredibly easy.

Learn more about Linux through the free “Introduction to Linux” course from The Linux Foundation and edX.

Open FinTech Forum

Focusing on the intersection of financial services and open source, Open FinTech Forum will provide CIOs and senior technologists with guidance on building internal open source programs, as well as an in-depth look at cutting-edge open source technologies, including AI, blockchain/distributed ledger, Kubernetes/containers, and quantum computing, that can be leveraged to drive efficiencies and flexibility.