
4 cool new projects to try in COPR from October 2020

COPR is a collection of personal repositories for software
that isn’t carried in Fedora. Some software doesn’t conform to
standards that allow easy packaging. Or it may not meet other Fedora
standards, despite being free and open-source. COPR can offer these
projects outside the Fedora set of packages. Software in COPR isn’t
supported by Fedora infrastructure or signed by the project. However,
it can be a neat way to try new or experimental software.

This article presents a few new and interesting projects in COPR. If
you’re new to using COPR, see the COPR User Documentation
for how to get started.

Dialect

Dialect translates text to foreign languages using Google Translate. It remembers your translation history and supports features such as automatic language detection and text to speech. The user interface is minimalistic and mimics the Google Translate tool itself, so it is really easy to use.

Installation instructions

The repo currently provides Dialect for Fedora 33 and Fedora
Rawhide. To install it, use these commands:

sudo dnf copr enable lyessaadi/dialect
sudo dnf install dialect

GitHub CLI

gh is an official GitHub command-line client. It provides fast
access and full control over your project issues, pull requests, and
releases, right in the terminal. Issues (and everything else) can also
be easily viewed in the web browser for a more standard user interface
or sharing with others.

Installation instructions

The repo currently provides gh for Fedora 33 and Fedora
Rawhide. To install it, use these commands:

sudo dnf copr enable jdoss/github-cli
sudo dnf install github-cli
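
Once installed, here are a few typical invocations, run from inside a local checkout of a project (a quick sketch; the pull request number is hypothetical):

# List open issues in the current repository
gh issue list

# Check out the branch for pull request #123
gh pr checkout 123

# Open the repository in a web browser
gh repo view --web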

Glide

Glide is a minimalistic media player based on GStreamer. It
can play both local and remote files in any multimedia format
supported by GStreamer itself. If you are in need of a multi-platform
media player with a simple user interface, you might want to give Glide a try.

Installation instructions

The repo currently provides Glide for Fedora 32, 33, and
Rawhide. To install it, use these commands:

sudo dnf copr enable atim/glide-rs
sudo dnf install glide-rs

Vim ALE

ALE is a plugin for the Vim text editor, providing syntax and
semantic error checking. It also brings support for fixing code and
many other IDE-like features such as TAB-completion, jumping to
definitions, finding references, viewing documentation, etc.

Installation instructions

The repo currently provides vim-ale for Fedora 31,
32, 33, and Rawhide, as well as for EPEL8. To install it, use these
commands:

sudo dnf copr enable praiskup/vim-ale
sudo dnf install vim-ale

Editor's note: Previous COPR articles can be found here.


How to rebase to Fedora 33 on Silverblue

Silverblue is an operating system for your desktop built on Fedora. It’s excellent for daily use, development, and container-based workflows. It offers numerous advantages such as being able to roll back in case of any problems. If you want to update to Fedora 33 on your Silverblue system, this article tells you how. It not only shows you what to do, but also how to revert things if something unforeseen happens.

Prior to actually doing the rebase to Fedora 33, you should apply any pending updates. Enter the following in the terminal:

$ rpm-ostree update

or install updates through GNOME Software and reboot.

Rebasing using GNOME Software

GNOME Software shows you that there is a new version of Fedora available on the Updates screen.

Fedora 33 is available

The first thing you need to do is download the new image, so click on the Download button. This will take some time, and after it's done you will see that the update is ready to install.

Fedora 33 is ready for installation

Click on the Install button. This step will take only a few moments and then you will be prompted to restart your computer.

Restart is needed to rebase to Fedora 33 Silverblue

Click on the Restart button and you are done. After the restart you will end up in the new and shiny release of Fedora 33. Easy, isn't it?

Rebasing using terminal

If you prefer to do everything in a terminal, then this next guide is for you.

Rebasing to Fedora 33 using the terminal is easy. First, check if the 33 branch is available:

$ ostree remote refs fedora

You should see the following in the output:

fedora:fedora/33/x86_64/silverblue

Next, rebase your system to the Fedora 33 branch.

$ rpm-ostree rebase fedora:fedora/33/x86_64/silverblue
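
Optionally, you can review the pending deployment before rebooting:

$ rpm-ostree status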

Finally, the last thing to do is restart your computer and boot to Fedora 33.

How to roll back

If anything bad happens—for instance, if you can’t boot to Fedora 33 at all—it’s easy to go back. Pick the previous entry in the GRUB menu at boot, and your system will start in its previous state before switching to Fedora 33. To make this change permanent, use the following command:

$ rpm-ostree rollback

That’s it. Now you know how to rebase Silverblue to Fedora 33 and roll back. So why not do it today?


What’s new in Fedora 33 Workstation

Fedora 33 Workstation is the latest release of our free, leading-edge operating system. You can download it from the official website right now. There are several new and noteworthy changes in Fedora 33 Workstation. Read more details below.

GNOME 3.38

Fedora 33 Workstation includes the latest release of the GNOME Desktop Environment for users of all types. GNOME 3.38 in Fedora 33 Workstation includes many updates and improvements, including:

A new GNOME Tour app

New users are now greeted by “a new Tour application, highlighting the main functionality of the desktop and providing first time users a nice welcome to GNOME.”

The new GNOME Tour application in Fedora 33

Drag to reorder apps

GNOME 3.38 replaces the previously split Frequent and All apps views with a single customizable and consistent view that allows you to reorder apps and organize them into custom folders. Simply click and drag to move apps around.

GNOME 3.38 Drag to Reorder

Improved screen recording

The screen recording infrastructure in GNOME Shell has been improved to take advantage of PipeWire and kernel APIs. This will help reduce resource consumption and improve responsiveness.

GNOME 3.38 also provides many additional features and enhancements. Check out the GNOME 3.38 Release Notes for further information.


B-tree file system

As announced previously, new installations of Fedora 33 will default to using Btrfs. Features and enhancements are added to Btrfs with each new kernel release. The change log has a complete summary of the features that each new kernel version brings to Btrfs.
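
If you want to check which filesystem an installation uses, one quick way is the following command, which should print btrfs on a default Fedora 33 installation:

$ findmnt -no FSTYPE /
btrfs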


Swap on ZRAM

Anaconda and Fedora IoT have been using swap-on-zram by default for years. With Fedora 33, swap-on-zram will be enabled by default instead of a swap partition. Check out the Fedora wiki page for more details about swap-on-zram.
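
You can inspect the zram swap device on a running system with standard tools, for example:

$ swapon --show
$ zramctl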


Nano by default

Fresh Fedora 33 installations will set the EDITOR environment variable to nano by default. This change affects several command line tools that spawn a text editor when they require user input. With earlier releases, this environment variable default was unspecified, leaving it up to the individual application to pick a default editor. Typically, applications would use vi as their default editor due to it being a small application that is traditionally available on the base installation of most Unix/Linux operating systems. Since Fedora 33 includes nano in its base installation, and since nano is more intuitive for a beginning user to use, Fedora 33 will use nano by default. Users who want vi can, of course, override the value of the EDITOR variable in their own environment. See the Fedora change request for more details.
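
For example, to make vi the default again for your user, you could add a line like this to your shell profile (a minimal sketch, assuming bash):

$ echo 'export EDITOR=vi' >> ~/.bashrc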


Fedora 33 is officially here!

Today, I’m excited to share the results of the hard work of thousands of contributors to the Fedora Project: our latest release, Fedora 33, is here! This is a big release with a lot of change, but I believe all that work will also make it a comfortable one, fulfilling our goal of bringing you the latest stable, powerful, and robust free and open source software in many easy to use offerings. 

If you just want to get to the bits without delay, head over to https://getfedora.org/ right now. For details, read on!

Find the Fedora flavor that’s right for you!

Fedora Editions are targeted outputs geared toward specific “showcase” uses on the desktop, in server and cloud environments—and now for Internet of Things as well.

Fedora Workstation focuses on the desktop, and in particular, it’s geared toward software developers who want a “just works” Linux operating system experience. This release features GNOME 3.38, which has plenty of great improvements as usual. The addition of the Tour application helps new users learn their way around. And like all of our other desktop-oriented variants, Fedora Workstation now uses BTRFS as the default filesystem. This advanced filesystem lays the foundation for bringing a lot of great enhancements in upcoming releases. For your visual enjoyment, Fedora 33 Workstation now features an animated background (based on time of day) by default.

Fedora CoreOS is an emerging Fedora Edition. It’s an automatically-updating, minimal operating system for running containerized workloads securely and at scale. It offers several update streams that can be followed for automatic updates that occur roughly every two weeks. Currently the next stream is based on Fedora 33, with the testing and stable streams to follow. You can find information about released artifacts that follow the next stream from the download page and information about how to use those artifacts in the Fedora CoreOS Documentation.

Fedora IoT, newly promoted to Edition status, provides a strong foundation for IoT ecosystems and edge computing use cases. Among many other features, Fedora 33 IoT introduces the Platform AbstRaction for SECurity (PARSEC), an open-source initiative to provide a common API to hardware security and cryptographic services in a platform-agnostic way.

Of course, we produce more than just the Editions. Fedora Spins and Labs target a variety of audiences and use cases, including Fedora CompNeuro, which brings a plethora of open source computational modelling tools for neuroscience, and desktop environments like KDE Plasma and Xfce.

And, don’t forget our alternate architectures: ARM AArch64, Power, and S390x. New in Fedora 33, AArch64 users can use the .NET Core language for cross-platform development. We have improved support for Pine64 devices, NVidia Jetson 64 bit platforms, and the Rockchip system-on-a-chip devices including the Rock960, RockPro64, and Rock64. (However, a late-breaking note: there may be problems booting on some of these devices. Upgrading from existing Fedora 32 will be fine. More info will be on the Common Bugs page as we have it.)

We’re also excited to announce that the Fedora Cloud Base Image and Fedora CoreOS will be available in Amazon’s AWS Marketplace for the first time with Fedora 33. Fedora cloud images have been available in the Amazon cloud for over a decade, and you can launch our official images by AMI ID or with a click. The Marketplace provides an alternate way to get the same thing, with significantly wider visibility for Fedora. This will also make our cloud images available in new AWS regions more quickly. Thank you especially to David Duncan for making this happen!

General improvements

No matter what variant of Fedora you use, you’re getting the latest the open source world has to offer. Following our “First” foundation, we’ve updated key programming language and system library packages, including Python 3.9, Ruby on Rails 6.0, and Perl 5.32. In Fedora KDE, we’ve followed the work in Fedora 32 Workstation and enabled the EarlyOOM service by default to improve the user experience in low-memory situations.

To make the default Fedora experience better, we’ve set nano as the default editor. nano is a friendly editor for new users. Those of you who want the power of editors like vi can, of course, set your own default.

We’re excited for you to try out the new release! Go to https://getfedora.org/ and download it now. Or if you’re already running a Fedora operating system, follow the easy upgrade instructions. For more information on the new features in Fedora 33, see the release notes.
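
If you upgrade an existing installation from the command line, the process boils down to a few dnf commands, sketched here; see the upgrade instructions for the full procedure:

$ sudo dnf upgrade --refresh
$ sudo dnf install dnf-plugin-system-upgrade
$ sudo dnf system-upgrade download --releasever=33
$ sudo dnf system-upgrade reboot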

A note on Secure Boot

Secure Boot is a security standard which ensures that only officially-signed operating system software can load on your computer. This is important for preventing persistent malware which could hide itself in your computer’s firmware and survive even an operating system reinstallation. However, in the wake of the Boot Hole vulnerability, the cryptographic certificate used to sign Fedora bootloader software will be revoked and replaced with a new one. Because this will have a broad impact, revocation should not happen widely until the second quarter of 2021 or later.

However, some users may have received this revocation from other operating systems or firmware updates already. In that case, Fedora installations will not boot with Secure Boot enabled. To be clear, this will not affect most users. If it does affect you, you can boot with Secure Boot disabled for the time being. We will release an update signed with the new certificate to be available on all supported releases well before broad-scale certificate revocation takes place, and at that point Secure Boot should be reenabled.

In the unlikely event of a problem….

If you run into a problem, check out the Fedora 33 Common Bugs page, and if you have questions, visit our Ask Fedora user-support platform.

Thank you everyone

Thanks to the thousands of people who contributed to the Fedora Project in this release cycle, and especially to those of you who worked extra hard to make this another on-time release during a pandemic. Fedora is a community, and it’s great to see how much we’ve supported each other.


Contribute at the Fedora Test Week for Kernel 5.9

The kernel team is working on final integration for kernel 5.9. This version was just recently released, and will arrive soon in Fedora. As a result, the Fedora kernel and QA teams have organized a test week from Monday, October 26, 2020 through Monday, November 02, 2020. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.

How does a test week work?

A test week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the kernel test day has a lot of good information on what and how to test. After you've done some testing, you can log your results in the test day web application. If you're available on or around the day of the event, please do some testing and report your results. We have a document that provides all the steps in writing.

Happy testing, and we hope to see you on test day.


Secure NTP with NTS

Many computers use the Network Time Protocol (NTP) to synchronize their system clocks over the internet. NTP is one of the few unsecured internet protocols still in common use. An attacker that can observe network traffic between a client and server can feed the client with bogus data and, depending on the client’s implementation and configuration, force it to set its system clock to any time and date. Some programs and services might not work if the client’s system clock is not accurate. For example, a web browser will not work correctly if the web servers’ certificates appear to be expired according to the client’s system clock. Use Network Time Security (NTS) to secure NTP.

Fedora 33 [1] is the first Fedora release to support NTS. NTS is a new authentication mechanism for NTP. It enables clients to verify that the packets they receive from the server have not been modified while in transit. The only thing an attacker can do when NTS is enabled is drop or delay packets. See RFC8915 for further details about NTS.

NTP can be secured well with symmetric keys. Unfortunately, the server has to have a different key for each client and the keys have to be securely distributed. That might be practical with a private server on a local network, but it does not scale to a public server with millions of clients.

NTS includes a Key Establishment (NTS-KE) protocol that automatically creates the encryption keys used between the server and its clients. It uses Transport Layer Security (TLS) on TCP port 4460. It is designed to scale to very large numbers of clients with a minimal impact on accuracy. The server does not need to keep any client-specific state. It provides clients with cookies, which are encrypted and contain the keys needed to authenticate the NTP packets. Privacy is one of the goals of NTS. The client gets a new cookie with each server response, so it doesn’t have to reuse cookies. This prevents passive observers from tracking clients migrating between networks.

The default NTP client in Fedora is chrony. Chrony added NTS support in version 4.0. The default configuration hasn’t changed. Chrony still uses public servers from the pool.ntp.org project and NTS is not enabled by default.

Currently, there are very few public NTP servers that support NTS. The two major providers are Cloudflare and Netnod. The Cloudflare servers are in various places around the world. They use anycast addresses that should allow most clients to reach a close server. The Netnod servers are located in Sweden. In the future we will probably see more public NTP servers with NTS support.

A general recommendation for configuring NTP clients for best reliability is to have at least three working servers. For best accuracy, it is recommended to select close servers to minimize network latency and asymmetry caused by asymmetric network routing. If you are not concerned about fine-grained accuracy, you can ignore this recommendation and use any NTS servers you trust, no matter where they are located.

If you do want high accuracy, but you don’t have a close NTS server, you can mix distant NTS servers with closer non-NTS servers. However, such a configuration is less secure than a configuration using NTS servers only. The attackers still cannot force the client to accept arbitrary time, but they do have a greater control over the client’s clock and its estimate of accuracy, which may be unacceptable in some environments.

Enable client NTS in the installer

When installing Fedora 33, you can enable NTS in the Time & Date dialog in the Network Time configuration. Enter the name of the server and check the NTS support before clicking the + (Add) button. You can add one or more servers or pools with NTS. To remove the default pool of servers (2.fedora.pool.ntp.org), uncheck the corresponding mark in the Use column.

Network Time configuration in
Fedora installer

Enable client NTS in the configuration file

If you upgraded from a previous Fedora release, or you didn’t enable NTS in the installer, you can enable NTS directly in /etc/chrony.conf. Specify the server with the nts option in addition to the recommended iburst option. For example:

server time.cloudflare.com iburst nts
server nts.sth1.ntp.se iburst nts
server nts.sth2.ntp.se iburst nts

You should also allow the client to save the NTS keys and cookies to disk,
so it doesn’t have to repeat the NTS-KE session on each start. Add the
following line to chrony.conf, if it is not already present:

ntsdumpdir /var/lib/chrony

If you don’t want NTP servers provided by DHCP to be mixed with the servers you
have specified, remove or comment out the following line in
chrony.conf:

sourcedir /run/chrony-dhcp

After you have finished editing chrony.conf, save your changes and restart the chronyd service:

systemctl restart chronyd

Check client status

Run the following command under the root user to check whether the NTS key
establishment was successful:

# chronyc -N authdata
Name/IP address             Mode KeyID Type KLen Last Atmp  NAK Cook CLen
=========================================================================
time.cloudflare.com          NTS     1   15  256  33m    0    0    8  100
nts.sth1.ntp.se              NTS     1   15  256  33m    0    0    8  100
nts.sth2.ntp.se              NTS     1   15  256  33m    0    0    8  100

The KeyID, Type, and KLen columns should have non-zero values. If they are zero, check the system log for error messages from chronyd. One possible cause of failure is a firewall blocking the client's connection to the server's TCP port (port 4460).

Another possible cause of failure is a certificate that is failing to verify because the client’s clock is wrong. This is a chicken-or-the-egg type problem with NTS. You may need to manually correct the date or temporarily disable NTS in order to get NTS working. If your computer has a real-time clock, as almost all computers do, and it’s backed up by a good battery, this operation should be needed only once.

If the computer doesn’t have a real-time clock or battery, as is common with
some small ARM computers like the Raspberry Pi, you can add the -s
option to /etc/sysconfig/chronyd to restore time saved on the last
shutdown or reboot. The clock will be behind the true time, but if the
computer wasn’t shut down for too long and the server’s certificates were not
renewed too close to their expiration, it should be sufficient for the time
checks to succeed. As a last resort, you can disable the time checks with the
nocerttimecheck directive. See the chrony.conf(5) man page
for details.
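
On Fedora, chronyd reads its command-line options from the OPTIONS variable in /etc/sysconfig/chronyd, so adding the option amounts to a single line (a sketch; keep any options already present in the file):

OPTIONS="-s"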

Run the following command to confirm that the client is making NTP
measurements:

# chronyc -N sources
MS Name/IP address         Stratum Poll Reach LastRx Last sample
===============================================================================
^* time.cloudflare.com           3    6   377    45   +355us[ +375us] +/-   11ms
^+ nts.sth1.ntp.se               1    6   377    44   +237us[ +237us] +/-   23ms
^+ nts.sth2.ntp.se               1    6   377    44   -170us[ -170us] +/-   22ms

The Reach column should have a non-zero value; ideally 377. The value 377 shown above is an octal number. It indicates that the last eight requests all had a valid response. The validation check will include NTS authentication if enabled. If the value only rarely or never gets to 377, it indicates that NTP requests or responses are getting lost in the network. Some major network operators are known to have middleboxes that block or limit rate of large NTP packets as a mitigation for amplification attacks that exploit the monitoring protocol of ntpd. Unfortunately, this impacts NTS-protected NTP packets, even though they don’t cause any amplification. The NTP working group is considering an alternative port for NTP as a workaround for this issue.

Enable NTS on the server

If you have your own NTP server running chronyd, you can enable server NTS support to allow its clients to be synchronized securely. If the server is a client of other servers, it should use NTS or a symmetric key for its own synchronization. The clients assume the synchronization chain is secured between all servers up to the primary time servers.

Enabling server NTS is similar to enabling HTTPS on a web server. You just need a private key and certificate. The certificate could be signed by the Let’s Encrypt authority using the certbot tool, for example. When you have the key and certificate file (including intermediate certificates), specify them in chrony.conf with the following directives:

ntsserverkey /etc/pki/tls/private/foo.example.net.key
ntsservercert /etc/pki/tls/certs/foo.example.net.crt

Make sure the ntsdumpdir directive mentioned previously in the
client configuration is present in chrony.conf. It allows the server
to save its keys to disk, so the clients of the server don’t have to get new
keys and cookies when the server is restarted.

Restart the chronyd service:

systemctl restart chronyd

If there are no error messages in the system log from chronyd, it should be
accepting client connections. If the server has a firewall, it needs to allow
both the UDP 123 and TCP 4460 ports for NTP and NTS-KE respectively.
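
With firewalld, for example, opening both ports could look like this (a sketch, assuming the default zone):

# firewall-cmd --permanent --add-service=ntp
# firewall-cmd --permanent --add-port=4460/tcp
# firewall-cmd --reload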

You can perform a quick test from a client machine with the following command:

$ chronyd -Q -t 3 'server foo.example.net iburst nts maxsamples 1'
2020-10-13T12:00:52Z chronyd version 4.0 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +ASYNCDNS +NTS +SECHASH +IPV6 +DEBUG)
2020-10-13T12:00:52Z Disabled control of system clock
2020-10-13T12:00:55Z System clock wrong by -0.001032 seconds (ignored)
2020-10-13T12:00:55Z chronyd exiting

If you see a System clock wrong message, it’s working
correctly.

On the server, you can use the following command to check how many NTS-KE
connections and authenticated NTP packets it has handled:

# chronyc serverstats
NTP packets received : 2143106240
NTP packets dropped : 117180834
Command packets received : 16819527
Command packets dropped : 0
Client log records dropped : 574257223
NTS-KE connections accepted: 104
NTS-KE connections dropped : 0
Authenticated NTP packets : 52139

If you see non-zero values of NTS-KE connections accepted and Authenticated NTP packets, it means at least some clients were able to connect to the NTS-KE port and send an authenticated NTP request.

— Cover photo by Louis. K on Unsplash —


1. The Fedora 33 Beta installer contains an older chrony prerelease which doesn’t work with current NTS servers because the NTS-KE port has changed. Consequently, in the Network Time configuration in the installer, the servers will always appear as not working. After installation, the chrony package needs to be updated before it will work with current servers.


Incremental backup with Butterfly Backup

Introduction

This article explains how to make incremental or differential backups, with a catalog available to restore (or export) at the point you want, with Butterfly Backup.

Requirements

Butterfly Backup is a simple wrapper around rsync written in Python. The first requirement is Python 3.3 or higher (plus the cryptography module for the init action). Other requirements are OpenSSH and rsync (version 2.5 or higher). Ok, let's go!

[Editor's note: rsync version 3.2.3 is already installed on Fedora 33 systems]

$ sudo dnf install python3 openssh rsync git
$ sudo pip3 install cryptography

Installation

After that, installing Butterfly Backup is very simple. Use the following commands to clone the repository locally and set up Butterfly Backup for use:

$ git clone https://github.com/MatteoGuadrini/Butterfly-Backup.git
$ cd Butterfly-Backup
$ sudo python3 setup.py
$ bb --help
$ man bb

To upgrade, use the same commands.

Example

Butterfly Backup is a server-to-client tool and is installed on a server (or workstation). The restore process restores the files to the specified client. This process shares some of the options available to the backup process.

Backups are organized according to a precise catalog; this is an example:

$ tree destination/of/backup
.
├── destination
│   ├── hostname or ip of the PC under backup
│   │   ├── timestamp folder
│   │   │   ├── backup folders
│   │   │   ├── backup.log
│   │   │   └── restore.log
│   │   ├── general.log
│   │   └── symlink of last backup
├── export.log
├── backup.list
└── .catalog.cfg

Butterfly Backup has six main operations, referred to as actions; you can get information about them with the --help command.

$ bb --help
usage: bb [-h] [--verbose] [--log] [--dry-run] [--version]
          {config,backup,restore,archive,list,export} ...

Butterfly Backup

optional arguments:
  -h, --help            show this help message and exit
  --verbose, -v         Enable verbosity
  --log, -l             Create a log
  --dry-run, -N         Dry run mode
  --version, -V         Print version

action:
  Valid action

  {config,backup,restore,archive,list,export}
                        Available actions
    config              Configuration options
    backup              Backup options
    restore             Restore options
    archive             Archive options
    list                List options
    export              Export options

Configuration

Configuration mode is straightforward; if you're already familiar with exchanging keys and OpenSSH, you probably won't need it. First, you must create a configuration (RSA keys), for instance:

$ bb config --new
SUCCESS: New configuration successfully created!

After creating the configuration, the keys will be installed (copied) on the hosts you want to back up:

$ bb config --deploy host1
Copying configuration to host1; write the password:
/usr/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/home/arthur/.ssh/id_rsa.pub"
/usr/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/usr/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
arthur@host1's password:

Number of key(s) added: 1

Now try logging into the machine, with: "ssh 'arthur@host1'"
and check to make sure that only the key(s) you wanted were added.

SUCCESS: Configuration copied successfully on host1!

Backup

There are two backup modes: single and bulk.
The most relevant features of the two backup modes are the parallelism and retention of old backups. See the two parameters --parallel and --retention in the documentation.

Single backup

The backup of a single machine consists of taking the files and folders indicated on the command line, and putting them into the cataloging structure indicated above. In other words, it copies all the files and folders of a machine into a path.

This is an example:

$ bb backup --computer host1 --destination /mnt/backup --data User Config --type Unix
Start backup on host1
SUCCESS: Command rsync -ah --no-links arthur@host1:/home :/etc /mnt/backup/host1/2020_09_19__10_28

Bulk backup

Bulk mode backups share the same options as single mode, with the difference that they accept a file containing a list of hostnames or IPs. In this mode, backups are performed in parallel (by default, 5 machines at a time). If you want to run fewer or more machines in parallel, specify the --parallel parameter.

An incremental backup based on the previous one, for instance:

$ cat /home/arthur/pclist.txt
host1
host2
host3
$ bb backup --list /home/arthur/pclist.txt --destination /mnt/backup --data User Config --type Unix
ERROR: The port 22 on host2 is closed!
ERROR: The port 22 on host3 is closed!
Start backup on host1
SUCCESS: Command rsync -ahu --no-links --link-dest=/mnt/backup/host1/2020_09_19__10_28 arthur@host1:/home :/etc /mnt/backup/host1/2020_09_19__10_50

There are four backup modes, which you specify with the --mode flag: Full (backs up all files), Mirror (backs up all files in mirror mode), Differential (based on the latest Full backup), and Incremental (based on the latest backup).
The default mode, when the flag is not specified, is Incremental; the very first backup of a machine is a Full one.
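
For example, a differential backup of the same machine could look like this (a hypothetical invocation, reusing the options shown earlier):

$ bb backup --computer host1 --destination /mnt/backup --data User Config --type Unix --mode Differential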

Listing catalog

The first time you run backup commands, the catalog is created. The catalog is used for future backups and all the restores that are made through Butterfly Backup. To query this catalog use the list command.
First, let’s query the catalog in our example:

$ bb list --catalog /mnt/backup

BUTTERFLY BACKUP CATALOG

Backup id: aba860b0-9944-11e8-a93f-005056a664e0
Hostname or ip: host1
Timestamp: 2020-09-19 10:28:12

Backup id: dd6de2f2-9a1e-11e8-82b0-005056a664e0
Hostname or ip: host1
Timestamp: 2020-09-19 10:50:59

Press q to exit, and select a backup-id:

$ bb list --catalog /mnt/backup --backup-id dd6de2f2-9a1e-11e8-82b0-005056a664e0
Backup id: dd6de2f2-9a1e-11e8-82b0-005056a664e0
Hostname or ip: host1
Type: Incremental
Timestamp: 2020-09-19 10:50:59
Start: 2020-09-19 10:50:59
Finish: 2020-09-19 11:43:51
OS: Unix
ExitCode: 0
Path: /mnt/backup/host1/2020_09_19__10_50
List: backup.log
etc
home

To export the catalog list for use with an external tool like cat, include the --log flag:

$ bb list --catalog /mnt/backup --log
$ cat /mnt/backup/backup.list

Restore

The restore process is the exact opposite of the backup process. It takes the files from a specific backup and pushes them to the destination computer.
This command performs a restore on the same machine as the backup, for instance:

$ bb restore --catalog /mnt/backup --backup-id dd6de2f2-9a1e-11e8-82b0-005056a664e0 --computer host1 --log
Want to do restore path /mnt/backup/host1/2020_09_19__10_50/etc? To continue [Y/N]? y
Want to do restore path /mnt/backup/host1/2020_09_19__10_50/home? To continue [Y/N]? y
SUCCESS: Command rsync -ahu -vP --log-file=/mnt/backup/host1/2020_09_19__10_50/restore.log /mnt/backup/host1/2020_09_19__10_50/etc arthur@host1:/restore_2020_09_19__10_50
SUCCESS: Command rsync -ahu -vP --log-file=/mnt/backup/host1/2020_09_19__10_50/restore.log /mnt/backup/host1/2020_09_19__10_50/home/* arthur@host1:/home

If you do not specify the type flag that indicates the operating system on which the data are being restored, Butterfly Backup selects it directly from the catalog via the backup-id.

Archive old backup

Archive operations are used to store backups by saving disk space.

$ bb archive --catalog /mnt/backup/ --days 1 --destination /mnt/archive/ --verbose --log
INFO: Check archive this backup f65e5afe-9734-11e8-b0bb-005056a664e0. Folder /mnt/backup/host1/2020_09_18__17_50
INFO: Check archive this backup 4f2b5f6e-9939-11e8-9ab6-005056a664e0. Folder /mnt/backup/host1/2020_09_15__07_26
SUCCESS: Delete /mnt/backup/host1/2020_09_15__07_26 successfully.
SUCCESS: Archive /mnt/backup/host1/2020_09_15__07_26 successfully.
$ ls /mnt/archive
host1
$ ls /mnt/archive/host1
2020_09_15__07_26.zip

After that, look in the catalog and see that the backup was actually archived:

$ bb list --catalog /mnt/backup/ -i 4f2b5f6e-9939-11e8-9ab6-005056a664e0
Backup id: 4f2b5f6e-9939-11e8-9ab6-005056a664e0
Hostname or ip: host1
Type: Incremental
Timestamp: 2020-09-15 07:26:46
Start: 2020-09-15 07:26:46
Finish: 2020-09-15 08:43:45
OS: Unix
ExitCode: 0
Path: /mnt/backup/host1/2020_09_15__07_26
Archived: True

Conclusion

Butterfly Backup was born from a very complex need; this tool adds superpowers to rsync and automates the backup and restore process. In addition, the catalog allows you to have a system similar to a "time machine".

In conclusion, Butterfly Backup is a lightweight, versatile, simple and scriptable backup tool.

One more thing; Easter egg: bb -Vv

Thank you for reading my post.

Full documentation: https://butterfly-backup.readthedocs.io/
Github: https://github.com/MatteoGuadrini/Butterfly-Backup


Photo by Manu M on Unsplash.


Web of Trust, Part 2: Tutorial

The previous article looked at how the Web of Trust works in concept, and how the Web of Trust is implemented at Fedora. In this article, you’ll learn how to do it yourself. The power of this system lies in everybody being able to validate the actions of others—if you know how to validate somebody’s work, you’re contributing to the strength of our shared security.

Choosing a project

Remmina is a remote desktop client written in GTK+. It aims to be useful for system administrators and travelers who need to work with lots of remote computers in front of either large monitors or tiny netbooks. In the current age, where many people must work remotely or at least manage remote servers, the security of a program like Remmina is critical. Even if you do not use it yourself, you can contribute to the Web of Trust by checking it for others.

The question is: how do you know that a given version of Remmina is good, and that the original developer—or distribution server—has not been compromised?

For this tutorial, you’ll use Flatpak and the Flathub repository. Flatpak is intentionally well-suited for making verifiable rebuilds, which is one of the tenets of the Web of Trust. It’s easier to work with since it doesn’t require users to download independent development packages. Flatpak also uses techniques to prevent in‑flight tampering, using hashes to validate its read‑only state. As far as the Web of Trust is concerned, Flatpak is the future.

For this guide, you use Remmina, but this guide generally applies to every application you use. It’s also not exclusive to Flatpak, and the general steps also apply to Fedora’s repositories. In fact, if you’re currently reading this article on Debian or Arch, you can still follow the instructions. If you want to follow along using traditional RPM repositories, make sure to check out this article.

Installing and checking

To install Remmina, use the Software Center or run the following from a terminal:

flatpak install flathub org.remmina.Remmina -y

After installation, you’ll find the files in:

/var/lib/flatpak/app/org.remmina.Remmina/current/active/files/ 

Open a terminal here and find the following directories using ls -la:

total 44
drwxr-xr-x. 2 root root 4096 Jan 1 1970 bin
drwxr-xr-x. 3 root root 4096 Jan 1 1970 etc
drwxr-xr-x. 8 root root 4096 Jan 1 1970 lib
drwxr-xr-x. 2 root root 4096 Jan 1 1970 libexec
-rw-r--r--. 2 root root 18644 Aug 25 14:37 manifest.json
drwxr-xr-x. 2 root root 4096 Jan 1 1970 sbin
drwxr-xr-x. 15 root root 4096 Jan 1 1970 share

Getting the hashes

In the bin directory you will find the main binaries of the application, and in lib you find all dependencies that Remmina uses. Now calculate hashes for the files in ./bin/:

sha256sum ./bin/*

This will give you a list of numbers: checksums. Copy them to a temporary file, as this is the current version of Remmina that Flathub is distributing. These numbers have something special: only an exact copy of Remmina can give you the same numbers. Any change in the code—no matter how minor—will produce different numbers.

Like Fedora’s Koji and Bodhi build and update services, Flathub has all its build servers in plain view. In the case of Flathub, look at Buildbot to see who is responsible for the official binaries of a package. Here you will find all of the logs, including all the failed builds and their paper trail.

Illustration image, which shows the process-graph of Buildbot on Remmina.

Getting the source

The main Flathub project is hosted on GitHub, where the exact compile instructions (“manifest” in Flatpak terms) are visible for all to see. Open a new terminal in your Home folder. Clone the instructions, and possible submodules, using one command:

git clone --recurse-submodules https://github.com/flathub/org.remmina.Remmina

Developer tools

Start off by installing the Flatpak Builder:

sudo dnf install flatpak-builder

After that, you'll need to get the right SDK to rebuild Remmina. In the manifest, you'll find what the current SDK is:

 "runtime": "org.gnome.Platform", "runtime-version": "3.38", "sdk": "org.gnome.Sdk", "command": "remmina",

This indicates that you need the GNOME SDK, which you can install with:

flatpak install org.gnome.Sdk//3.38

This provides the latest versions of the Free Desktop and GNOME SDK. There are also additional SDKs for other options, but those are beyond the scope of this tutorial.

Generating your own hashes

Now that everything is set up, compile your version of Remmina by running:

flatpak-builder build-dir org.remmina.Remmina.json --force-clean

After this, your terminal will print a lot of text, your fans will start spinning, and you’re compiling Remmina. If things do not go so smoothly, refer to the Flatpak Documentation; troubleshooting is beyond the scope of this tutorial.

Once complete, you should have the directory ./build-dir/files/, which should contain the same layout as above. Now the moment of truth: it’s time to generate the hashes for the built project:

sha256sum ./bin/*
Illustrative image, showing the output of sha256sum. To discourage copy-pasting old hashes, they are not provided as in-text.

You should get exactly the same numbers. This proves that the version on Flathub is indeed the version that the Remmina developers and maintainers intended for you to run. This is great, because this shows that Flathub has not been compromised. The web of trust is strong, and you just made it a bit better.
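
If you would rather compare the two sets of checksums mechanically than by eye, a small sketch like this works (the file paths are illustrative; adjust them to where you cloned the manifest):

$ cd /var/lib/flatpak/app/org.remmina.Remmina/current/active/files/
$ sha256sum ./bin/* > /tmp/flathub-hashes.txt
$ cd ~/org.remmina.Remmina/build-dir/files/
$ sha256sum ./bin/* > /tmp/local-hashes.txt
$ diff /tmp/flathub-hashes.txt /tmp/local-hashes.txt && echo "Hashes match"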

Going deeper

But what about the ./lib/ directory? And what version of Remmina did you actually compile? This is where the Web of Trust starts to branch. First, you can also double-check the hashes of the ./lib/ directory. Repeat the sha256sum command using the ./lib/ directory instead.

But what version of Remmina did you compile? Well, that’s in the Manifest. In the text file you’ll find (usually at the bottom) the git repository and branch that you just used. At the time of this writing, that is:

 "type": "git", "url": "https://gitlab.com/Remmina/Remmina.git", "tag": "v1.4.8", "commit": "7ebc497062de66881b71bbe7f54dabfda0129ac2"

Here, you can decide to look at the Remmina code itself:

git clone --recurse-submodules https://gitlab.com/Remmina/Remmina.git
cd ./Remmina
git checkout tags/v1.4.8

The last two commands are important, since they ensure that you are looking at the right version of Remmina. Make sure you use the tag corresponding to the one in the manifest file. Now you can see everything that you just built.

What if…?

The question on some minds is: what if the hashes don’t match? Quoting a famous novel: “Don’t Panic.” There are multiple legitimate reasons as to why the hashes do not match.

It might be that you are not looking at the same version. If you followed this guide to a T, it should give matching results, but minor errors will cause vastly different results. Repeat the process, and ask for help if you’re unsure if you’re making errors. Perhaps Remmina is in the process of updating.

But if that still doesn’t justify the mismatch in hashes, go to the maintainers of Remmina on Flathub and open an issue. Assume good intentions, but you might be onto something that isn’t totally right.

The most obvious upstream issue is that Remmina does not properly support reproducible builds yet. The code of Remmina needs to be written in such a way that repeating the same action twice gives the same result. For developers, there is an entire guide on how to do that. If this is the case, there should be an issue on the upstream bug-tracker, and if it is not there, make sure that you create one by explaining your steps and the impact.

If all else fails, and you've informed upstream about the discrepancies and they don't know what is happening either, then it's time to send an email to the Administrators of Flathub and the developer in question.

Conclusion

At this point, you’ve gone through the entire process of validating a single piece of a bigger picture. Here, you can branch off in different directions:

  • Try another Flatpak application you like or use regularly
  • Try the RPM version of Remmina
  • Do a deep dive into the C code of Remmina
  • Relax for a day, knowing that the Web of Trust is a collective effort

In the grand scheme of things, we can all carry a small part of responsibility in the Web of Trust. By taking free/libre open source software (FLOSS) concepts and applying them in the real world, you can protect yourself and others. Last but not least, by understanding how the Web of Trust works you can see how FLOSS software provides unique protections.


systemd-resolved: introduction to split DNS

Fedora 33 switches the default DNS resolver to systemd-resolved. In simple terms, this means that systemd-resolved will run as a daemon. All programs wanting to translate domain names to network addresses will talk to it. This replaces the current default lookup mechanism where each program individually talks to remote servers and there is no shared cache.

If necessary, systemd-resolved will contact remote DNS servers. systemd-resolved is a “stub resolver”—it doesn’t resolve all names itself (by starting at the root of the DNS hierarchy and going down label by label), but forwards the queries to a remote server.

A single daemon handling name lookups provides significant benefits. The daemon caches answers, which speeds answers for frequently used names. The daemon remembers which servers are non-responsive, while previously each program would have to figure this out on its own after a timeout. Individual programs only talk to the daemon over a local transport and are more isolated from the network. The daemon supports fancy rules which specify which name servers should be used for which domain names—in fact, the rest of this article is about those rules.

Split DNS

Consider the scenario of a machine that is connected to two semi-trusted networks (wifi and ethernet), and also has a VPN connection to your employer. Each of those three connections has its own network interface in the kernel. And there are multiple name servers: one from a DHCP lease from the wifi hotspot, two specified by the VPN and controlled by your employer, plus some additional manually-configured name servers. Routing is the process of deciding which servers to ask for a given domain name. Do not confuse this with the process of deciding where to send network packets, which is also called routing.

The network interface is king in systemd-resolved. systemd-resolved first picks one or more interfaces which are appropriate for a given name, and then queries one of the name servers attached to that interface. This is known as “split DNS”.

There are two flavors of domains attached to a network interface: routing domains and search domains. They both specify that the given domain and any subdomains are appropriate for that interface. Search domains have the additional function that single-label names are suffixed with that search domain before being resolved. For example, a lookup for “server” is treated as a lookup for “server.example.com” if the search domain is “example.com.” In systemd-resolved config files, routing domains are prefixed with the tilde (~) character.

Specific example

Now consider a specific example: your VPN interface tun0 has a search domain private.company.com and a routing domain ~company.com. If you ask for mail.private.company.com, it is matched by both domains, so this name would be routed to tun0.

A request for www.company.com is matched by the second domain and would also go to tun0. If you ask for www, (in other words, if you specify a single-label name without any dots), the difference between routing and search domains comes into play. systemd-resolved attempts to combine the single-label name with the search domain and tries to resolve www.private.company.com on tun0.

If you have multiple interfaces with search domains, single-label names are suffixed with all search domains and resolved in parallel. For multi-label names, no suffixing is done; search and routing domains are used to route the name to the appropriate interface. The longest match wins. When there are multiple matches of the same length on different interfaces, they are resolved in parallel.

A special case is when an interface has a routing domain ~. (a tilde for a routing domain and a dot for the root DNS label). Such an interface always matches any names, but with the shortest possible length. Any interface with a matching search or routing domain has higher priority, but the interface with ~. is used for all other names. Finally, if no routing or search domains matched, the name is routed to all interfaces that have at least one name server attached.

Lookup routing in systemd-resolved

Domain routing

This seems fairly complex, partially because of the historic names which are confusing. In actual practice it’s not as complicated as it seems.

To introspect a running system, use the resolvectl domain command. For example:

$ resolvectl domain
Global:
Link 4 (wlp4s0): ~.
Link 18 (hub0):
Link 26 (tun0): redhat.com

You can see that www would resolve as www.redhat.com. over tun0. Anything ending with redhat.com resolves over tun0. Everything else would resolve over wlp4s0 (the wireless interface). In particular, a multi-label name like www.foobar would resolve over wlp4s0, and most likely fail because there is no foobar top-level domain (yet).

Server routing

Now that you know which interface or interfaces should be queried, the server or servers to query are easy to determine. Each interface has one or more name servers configured. systemd-resolved will send queries to the first of those. If the server is offline and the request times out or if the server sends a syntactically-invalid answer (which shouldn’t happen with “normal” queries, but often becomes an issue when DNSSEC is enabled), systemd-resolved switches to the next server on the list. It will use that second server as long as it keeps responding. All servers are used in a round-robin rotation.

To introspect a running system, use the resolvectl dns command:

$ resolvectl dns
Global:
Link 4 (wlp4s0): 192.168.1.1 8.8.4.4 8.8.8.8
Link 18 (hub0):
Link 26 (tun0): 10.45.248.15 10.38.5.26

When combined with the previous listing, you know that for www.redhat.com, systemd-resolved will query 10.45.248.15, and—if it doesn’t respond—10.38.5.26. For www.google.com, systemd-resolved will query 192.168.1.1 or the two Google servers 8.8.4.4 and 8.8.8.8.
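
You can also watch the routing decision for a single name with resolvectl, which prints the answer along with the interface that was used:

$ resolvectl query www.redhat.com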

Differences from nss-dns

Before going into further detail, you may ask how this differs from the previous default implementation (nss-dns). With nss-dns there is just one global list of up to three name servers and a global list of search domains (specified as nameserver and search in /etc/resolv.conf).

Each name to query is sent to the first name server. If it doesn’t respond, the same query is sent to the second name server, and so on. systemd-resolved implements split-DNS and remembers which servers are currently considered active.

For single-label names, the query is performed with each of the search domains suffixed. This is the same with systemd-resolved. For multi-label names, a query for the unsuffixed name is performed first, and if that fails, a query for the name suffixed by each of the search domains in turn is performed. systemd-resolved doesn't do that last step; it only suffixes single-label names.

A second difference is that with nss-dns, this module is loaded into each process. The process itself communicates with remote servers and implements the full DNS stack internally. With systemd-resolved, the nss-resolve module is loaded into the process, but it only forwards the query to systemd-resolved over a local transport (D-Bus) and doesn’t do any work itself. The systemd-resolved process is heavily sandboxed using systemd service features.

The third difference is that with systemd-resolved all state is dynamic and can be queried and updated using D-Bus calls. This allows very strong integration with other daemons or graphical interfaces.

Configuring systemd-resolved

So far, this article talked about servers and the routing of domains without explaining how to configure them. systemd-resolved has a configuration file (/etc/systemd/resolved.conf) where you specify name servers with DNS= and routing or search domains with Domains= (routing domains with ~, search domains without). This corresponds to the Global: lists in the two listings above.
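
As an illustration, a minimal global configuration with hypothetical values could look like this:

[Resolve]
DNS=192.168.1.1 8.8.8.8
Domains=~company.com example.com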

In this article’s examples, both lists are empty. Most of the time configuration is attached to specific interfaces, and “global” configuration is not very useful. Interfaces come and go and it isn’t terribly smart to contact servers on an interface which is down. As soon as you create a VPN connection, you want to use the servers configured for that connection to resolve names, and as soon as the connection goes down, you want to stop.

How then does systemd-resolved acquire the configuration for each interface? This happens dynamically, with the network management service pushing this configuration over D-Bus into systemd-resolved. The default in Fedora is NetworkManager and it has very good integration with systemd-resolved. Alternatives like systemd's own systemd-networkd implement similar functionality. But the interface is open and other programs can do the appropriate D-Bus calls.

Alternatively, resolvectl can be used for this (it is just a wrapper around the D-Bus API). Finally, resolvconf provides similar functionality in a form compatible with a tool in Debian with the same name.

Scenario: Local connection more trusted than VPN

The important thing is that in the common scenario, systemd-resolved follows the configuration specified by other tools, in particular NetworkManager. So to understand how systemd-resolved routes names, you need to see what NetworkManager tells it to do. Normally NM will tell systemd-resolved to use the name servers and search domains received in a DHCP lease on some interface. For example, look at the source of configuration for the two listings shown above:

There are two connections: "Parkinson" wifi and "Brno (BRQ)" VPN. In the first panel DNS:Automatic is enabled, which means that the DNS server received as part of the DHCP lease (192.168.1.1) is passed to systemd-resolved. Additionally, 8.8.4.4 and 8.8.8.8 are listed as alternative name servers. This configuration is useful if you want to resolve the names of other machines in the local network, which 192.168.1.1 provides. Unfortunately the hotspot DNS server occasionally gets stuck, and the other two servers provide backup when that happens.

The second panel is similar, but doesn't provide any special configuration. NetworkManager combines routing domains for a given connection from DHCP, SLAAC RDNSS, and VPN, and finally manual configuration, and forwards this to systemd-resolved. This is the source of the search domain redhat.com in the listing above.

There is an important difference between the two interfaces though: in the second panel, “Use this connection only for resources on its network” is checked. This tells NetworkManager to tell systemd-resolved to only use this interface for names under the search domain received as part of the lease (Link 26 (tun0): redhat.com in the first listing above). In the first panel, this checkbox is unchecked, and NetworkManager tells systemd-resolved to use this interface for all other names (Link 4 (wlp4s0): ~.). This effectively means that the wireless connection is more trusted.

Scenario: VPN more trusted than local network

In a different scenario, a VPN would be more trusted than the local network and the domain routing configuration reversed. If a VPN without “Use this connection only for resources on its network” is active, NetworkManager tells systemd-resolved to attach the default routing domain to this interface. After unchecking the checkbox and restarting the VPN connection:

$ resolvectl domain
Global:
Link 4 (wlp4s0):
Link 18 (hub0):
Link 28 (tun0): ~. redhat.com
$ resolvectl dns
Global:
Link 4 (wlp4s0):
Link 18 (hub0):
Link 28 (tun0): 10.45.248.15 10.38.5.26

Now all domain names are routed to the VPN. The network management daemon controls systemd-resolved and the user controls the network management daemon.

Additional systemd-resolved functionality

As mentioned before, systemd-resolved provides a common name lookup mechanism for all programs running on the machine. Right now the effect is limited: a shared resolver and cache, and split DNS (the lookup routing logic described above). systemd-resolved provides additional resolution mechanisms beyond the traditional unicast DNS. These are the local resolution protocols MulticastDNS and LLMNR, and an additional remote transport, DNS-over-TLS.

Fedora 33 does not enable MulticastDNS and DNS-over-TLS in systemd-resolved. MulticastDNS is implemented by nss-mdns4_minimal and Avahi. Future Fedora releases may enable these as the upstream project improves support.

Implementing this all in a single daemon which has runtime state allows smart behaviour: DNS-over-TLS may be enabled in opportunistic mode, with automatic fallback to classic DNS if the remote server does not support it. Without a daemon that can hold complex logic and runtime state, this would be much harder. When enabled, those additional features will apply to all programs on the system.

There is more to systemd-resolved: in particular LLMNR and DNSSEC, which only received brief mention here. A future article will explore those subjects.


Web of Trust, Part 1: Concept

Every day we rely on technologies that nobody can fully understand. Since well before the industrial revolution, complex and challenging tasks have required an approach that breaks the work into smaller-scale tasks, each resulting in specialized knowledge. We use that knowledge in some parts of our lives, while trusting in the skills others have learned for the rest. This shared knowledge approach also applies to software. Even the most avid readers of this magazine will likely not compile and validate every piece of code they run. This is simply because the world of computers is itself too big for one person to grasp.

Still, even though it is nearly impossible to understand everything that happens within your PC when you are using it, that does not leave you blind and unprotected. FLOSS software shares trust, giving protection to all users, even if individual users can’t grasp every part of the system. This multi-part article discusses how this ‘Web of Trust’ works and how you can get involved.

But first we’ll have to take a step back and discuss the basic concepts before we can delve into the details and the web itself. Also, a note before we start: security is not just about viruses and malware. Security also includes your privacy, your economic stability, and your technological independence.

One-Way System

By design, computers can only work in the most rudimentary forms of logic: true or false, AND or OR. This boolean logic is not readily accessible to humans, so we must do something special: we write applications in code that we can (reasonably) comprehend (human-readable code). Once completed, we turn this human-readable code into a code the computer can comprehend (machine code).

This step of conversion is called compilation or building, and it’s a one-way process. Compiled code (machine code) is not really understandable by humans, and it takes special tools to study it in detail. You can understand small chunks, but on the whole an entire application becomes a black box.
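A small illustration of this one-way street, assuming gcc and binutils are installed:

# Human-readable source code
$ cat hello.c
#include <stdio.h>
int main(void) { printf("Hello\n"); return 0; }

# Compile it into machine code
$ gcc hello.c -o hello

# Disassembly recovers machine instructions, but not the original source
$ objdump -d hello | head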

This subtle difference shifts power. Power, in this case, means the influence of one person over another. The person who writes the human-readable version of an application and then releases it as compiled code for others to use knows everything about what the code does, while the end user knows very little. When using software in compiled form, it is impossible to know for certain what an application is intended to do, unless the original human-readable code can be viewed.

The Nature of Power

This shift of power became a point of concern, with Richard Stallman spearheading the discussion. It started in the 1980s, for this was the time that computers left the world of academia and research and entered the world of commerce and consumers. Suddenly, that power became a source of control and exploitation.

One way to combat this imbalance of power was the concept of FLOSS software. FLOSS software is built on four freedoms, which give you a wide array of other ‘affiliated’ rights and guarantees. In essence, FLOSS software uses copyright licensing as a form of moral contract that forbids software developers from leveraging this one-way power against their users. The principal way of doing this is with the GNU General Public Licenses, which Richard Stallman created and has promoted ever since.

One of those guarantees is that you can see the code that should be running on your device. When you get a device using FLOSS software, the manufacturer should provide you the code the device is using, as well as all instructions needed to compile that code yourself. Then you can replace the code on the device with a version you compiled yourself. Even better: by comparing your version with the version on the device, as sketched below, you can see whether the device manufacturer tried to cheat you or other customers.
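As a greatly simplified sketch of that comparison, you can rebuild the published source and compare checksums against the shipped binary. In practice, byte-identical output requires reproducible builds with the same toolchain and flags; the file names here are placeholders:

# Rebuild the image from the published source (steps are device-specific)
$ make firmware.bin
# Compare your build against the vendor's shipped image
$ sha256sum firmware.bin vendor-firmware.bin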

This is where the Web of Trust comes back into the picture. The Web of Trust implies that, even if the vast majority of people can’t validate the workings of a device, others can do so on their behalf. Journalists, security analysts, and hobbyists can do the work that others might be unable to do. And if they find something, they have the power to share their findings.

Security by Blind Trust

This is, of course, only true if the application and all components underneath it are FLOSS. Proprietary software, or even software which is merely open source, has compiled versions that nobody can recreate and validate. Thus, you can never truly know whether that software is secure. It might have a backdoor, it might sell your personal data, or it might be pushing a closed ecosystem to create vendor lock-in. With closed-source software, your security is only as good as the trustworthiness of the company making it.

For companies and developers, this actually creates another snare. While you might still care about your users and their security, you’re also a liability: if a criminal can get into your official builds or supply chain, there is no way for anybody to discover that afterwards. An increasing number of attacks do not target users directly, but instead try to get in by exploiting the trust that companies and developers have carefully grown.

You should also not underestimate pressure from outside: governments can ask you to ignore a vulnerability, or they might even demand cooperation. Investment firms or shareholders may insist that you create vendor lock-in for future use. The blind trust that you demand of your users can be used against you.

Security by a Web of Trust

If you are a user, FLOSS software is good because others can warn you when they find suspicious elements. You can use any FLOSS device with minimal economic risk, and there are many FLOSS developers who care about your privacy. Even if the details are beyond you, there are rules in place to facilitate trust.

If you are a tinkerer, FLOSS is good because, with a little extra work, you can check the promises of others. You can warn people when something goes wrong, and you can validate the warnings of others. You’re also able to check individual parts of a larger picture. The libraries used by FLOSS applications are also open for review: it’s “trust all the way down”.

For companies and developers, FLOSS is also a great reassurance that your trust can’t easily be subverted. If malicious actors wish to attack your users, any irregularity can quickly be spotted. Last but not least, since you also stand to defend your customers’ economic well-being and privacy, you can use that as an important selling point to customers who care about their own security.

Fedora’s Case

Fedora embraces the concept of FLOSS and stands strong to defend it. There are comprehensive legal guidelines, and Fedora’s principles directly reference its Four Foundations: Freedom, Friends, Features, and First.

Fedora's Foundation logo, with Freedom highlighted. Illustrative.

To this end, entire systems have been set up to facilitate this kind of security. Fedora works completely in the open, and any user can check the official servers. Koji is the Fedora build system, and you can see every application and its build logs there. For added security, there is also Bodhi, which orchestrates the deployment of an application: multiple people must approve it before the application becomes available.
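Both systems are open to inspection. For example, with the koji and bodhi command-line clients installed you can look things up yourself; the package name and build NVR below are illustrative:

# Find the latest build of a package and inspect its build metadata
$ koji latest-build f33 bash
$ koji buildinfo bash-5.0.17-2.fc33

# Query Bodhi for the updates (and their approvals) of that package
$ bodhi updates query --packages bash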

This creates the Web of Trust on which you can rely. Every package in the repository goes through the same process, and at every point somebody can intervene. There are also escalation systems in place to report issues, so that they can be tackled quickly when they occur. Individual contributors also know that they can be reviewed at any time, which in itself is already enough of a precaution to dissuade mischievous thoughts.

You don’t have to trust Fedora implicitly; you get something better: trust in users like you.