
rpmdistro-repoquery: a cross-distribution repoquery tool

This article showcases rpmdistro-repoquery and describes how to use it to simplify RPM package queries across multiple distributions, without SSHing into another host or starting a container or VM.

Introduction

Whether you’re a packager, system administrator, or a user of Fedora Linux, CentOS Stream, or their derivatives (RHEL, AlmaLinux, Rocky Linux etc.), you might already be familiar with dnf repoquery. This tool allows you to query the repositories configured on the system for information about available packages, whether or not they are currently installed on the local machine.

This is great, within limits. For instance, on Fedora Linux, you can query packages built for stable and branched Fedora Linux releases and, if you install fedora-repos-rawhide, packages in the development branch. Sufficient care is required to make sure you don’t enable repos meant for different Fedora Linux releases by default and thus accidentally upgrade the running system.

Enter rpmdistro-repoquery: it comes with a set of repo definitions for different RPM-based distributions, but instead of putting them in /etc/yum.repos.d with the repositories meant for actual use, put them in /usr/share/rpmdistro-repoquery (or, if you so choose, you can clone the repository and use definitions that come in the checkout). DNF is then invoked with a custom configuration file and a custom cache location that points at one of the repos for one of the distributions rather than the default location.
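To make this concrete, here is a rough sketch of the kind of dnf invocation the wrapper performs behind the scenes. The exact paths and options shown are illustrative assumptions, not the tool's internals:

$ dnf repoquery \
    --config /usr/share/rpmdistro-repoquery/centos-stream.conf \
    --setopt=reposdir=/usr/share/rpmdistro-repoquery \
    --setopt=cachedir="$HOME/.cache/rpmdistro-repoquery/centos-stream-9" \
    --releasever=9 \
    clang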

The various supported distributions come with the relevant repositories enabled by default. Some have additional repositories that need to be enabled explicitly. For example, source repos are off by default. Also, CentOS Stream configurations come with additional repos for SIG packages that are off by default.
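Assuming, as with the query options used later in this article, that extra dnf options are passed straight through, you can enable additional repos for a single query with dnf's --enablerepo option. The glob below (which enables everything) is only an example:

$ rpmdistro-repoquery centos-stream 9 --enablerepo='*' clang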

This opens up a lot of use cases. I highlight some of them below.

Note: The primary author of this tool, Neal Gompa, works on a lot of RPM-based Linux distributions. I became involved through using it in ebranch.

Real-life rpmdistro-repoquery use cases

Quickly seeing if a CentOS Stream update has made it to the mirrors

In Fedora's build system, updates go through Bodhi; once they are marked testing or stable, a compose containing those updates is created, and it tends to hit the mirrors shortly after.

In CentOS Stream, the situation is more complicated, as the QA process is not visible to the public. Take clang for example: given a commit, and a matching Koji build on January 27th, can we be sure this is pushed out to the mirrors?

It turns out, as of February 9th, it’s not in the mirrors yet:

$ rpmdistro-repoquery centos-stream 9 clang 2>/dev/null
clang-0:14.0.0-1.el9.i686
clang-0:14.0.0-1.el9.x86_64
clang-0:14.0.5-1.el9.i686
clang-0:14.0.5-1.el9.x86_64
clang-0:14.0.6-1.el9.i686
clang-0:14.0.6-1.el9.x86_64
clang-0:15.0.1-2.el9.i686
clang-0:15.0.1-2.el9.x86_64
clang-0:15.0.7-2.el9.i686
clang-0:15.0.7-2.el9.x86_64

Comparing what is packaged in different distributions

Scenario: you use or manage a heterogeneous fleet running different distributions, and you want to find out whether all the packages you need are available everywhere (because you might need to package whatever is missing).

Let’s see if myrepos is available on openSUSE Tumbleweed (the rolling distribution):

$ rpm -q myrepos
myrepos-1.20180726-14.fc37.noarch
$ rpmdistro-repoquery opensuse-tumbleweed 0 myrepos
$ rpmdistro-repoquery opensuse-tumbleweed 0 /usr/bin/mr
mr-0:1.20180726-1.9.noarch

Searching by the Fedora Linux package name yields nothing, but in this case, searching by the binary path shows a match (since file paths are part of the RPM metadata): myrepos is available, but you'll need a different package name in your configuration management.

ebranch

This is a special case of the former. ebranch is a tool for branching Fedora Linux packages for EPEL.

Given that CentOS Stream (and its downstreams, such as Red Hat Enterprise Linux, AlmaLinux and Rocky Linux) only carries the subset of Fedora Linux packages that Red Hat is committed to supporting, EPEL provides a way for the community to maintain additional packages built against RHEL (or CentOS Stream).

A major problem here is dealing with dependency hell: a missing package might have several missing dependencies, which in turn have more missing dependencies… Getting retsnoop in EPEL 9 involves branching 189 packages in total!

ebranch utilizes rpmdistro-repoquery to compare what is available in Rawhide (rpmdistro-repoquery fedora rawhide) with what is available in CentOS Stream + EPEL (rpmdistro-repoquery centos-stream-legacy 8 and rpmdistro-repoquery centos-stream 9) to build up a transitive closure of missing dependencies and report on any dependency loops. ebranch also computes a chain build order for the missing dependencies, grouping packages that can be built in parallel.
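If you want to approximate the first step of what ebranch automates by hand, one rough sketch is to diff the package names each target provides. The query format string is standard dnf repoquery syntax; ebranch's actual logic is more involved than this:

$ comm -23 \
    <(rpmdistro-repoquery fedora rawhide --qf '%{name}\n' '*' 2>/dev/null | sort -u) \
    <(rpmdistro-repoquery centos-stream 9 --qf '%{name}\n' '*' 2>/dev/null | sort -u)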

Checking the impact of a soname bump

Fedora’s updates policy for stable releases and EPEL’s incompatible upgrades policy both discourage ABI-breaking updates, but sometimes they are necessary, as in the case of libkdumpfile in EPEL.

With rpmdistro-repoquery, finding the delta between any two distribution releases that it supports is trivial:

$ comm <(rpmdistro-repoquery fedora rawhide \
           --provides libkdumpfile 2>/dev/null) \
       <(rpmdistro-repoquery centos-stream 9 \
           --provides libkdumpfile 2>/dev/null)
        libaddrxlat.so.2()(64bit)
        libaddrxlat.so.2(LIBADDRXLAT_0)(64bit)
libaddrxlat.so.3
libaddrxlat.so.3()(64bit)
libaddrxlat.so.3(LIBADDRXLAT_0)
libaddrxlat.so.3(LIBADDRXLAT_0)(64bit)
        libkdumpfile = 0.4.1-5.el9
libkdumpfile = 0.5.0-3.fc38
libkdumpfile(x86-32) = 0.5.0-3.fc38
        libkdumpfile(x86-64) = 0.4.1-5.el9
libkdumpfile(x86-64) = 0.5.0-3.fc38
libkdumpfile.so.10
libkdumpfile.so.10()(64bit)
libkdumpfile.so.10(LIBKDUMPFILE_0)
libkdumpfile.so.10(LIBKDUMPFILE_0)(64bit)
        libkdumpfile.so.9()(64bit)
        libkdumpfile.so.9(LIBKDUMPFILE_0)(64bit)

And likewise, finding the blast radius of said update:

$ rpmdistro-repoquery centos-stream 9 \
    --whatrequires "libaddrxlat.so.2()(64bit)"
libkdumpfile-devel-0:0.4.1-5.el9.x86_64
libkdumpfile-util-0:0.4.1-5.el9.x86_64
python3-libkdumpfile-0:0.4.1-5.el9.x86_64

$ rpmdistro-repoquery centos-stream 9 \
    --whatrequires "libkdumpfile.so.9()(64bit)"
drgn-0:0.0.22-1.el9.x86_64
libkdumpfile-devel-0:0.4.1-5.el9.x86_64
libkdumpfile-util-0:0.4.1-5.el9.x86_64
python3-libkdumpfile-0:0.4.1-5.el9.x86_64

$ rpmdistro-repoquery centos-stream-legacy 8 \
    --whatrequires "libaddrxlat.so.2()(64bit)"
libkdumpfile-devel-0:0.4.1-5.el8.x86_64
libkdumpfile-util-0:0.4.1-5.el8.x86_64
python3-libkdumpfile-0:0.4.1-5.el8.x86_64

$ rpmdistro-repoquery centos-stream-legacy 8 \
    --whatrequires "libkdumpfile.so.9()(64bit)"
drgn-0:0.0.22-1.el8.x86_64
libkdumpfile-devel-0:0.4.1-5.el8.x86_64
libkdumpfile-util-0:0.4.1-5.el8.x86_64
python3-libkdumpfile-0:0.4.1-5.el8.x86_64

Building OS images

mkosi is a tool for generating OS images. Currently it contains the per-distribution repository logic itself (e.g. for Fedora and CentOS), which makes it hard to, for example, build an image for CentOS SIGs such as Hyperscale.

With Daan De Meyer's refactor, rpmdistro-repoquery's repo files can now be reused by mkosi, so in the future, tailoring which repositories are used to build an OS image should be much easier.

Conclusion

We, the contributors to this tool, have found it very useful in our Linux distribution work, and we hope this article helps introduce it to others who will find it useful too.

Please try it yourself. On Fedora Linux, and on CentOS Stream or its derivatives with EPEL enabled, simply run:

$ sudo dnf install rpmdistro-repoquery

If the distro you want to work with is not supported, pull requests are welcome! Likewise with suggestions or requests. If you want to package rpmdistro-repoquery in a different distribution, feel free to use the Fedora packaging as reference.


Working with Btrfs – Compression

This article will explore transparent filesystem compression in Btrfs and how it can help with saving storage space. This is part of a series that takes a closer look at Btrfs, the default filesystem for Fedora Workstation, and Fedora Silverblue since Fedora Linux 33.

In case you missed it, here’s the previous article from this series: https://fedoramagazine.org/working-with-btrfs-snapshots

Introduction

Most of us have probably experienced running out of storage space already. Maybe you want to download a large file from the internet, or you need to quickly copy over some pictures from your phone, and the operation suddenly fails. While storage space is steadily becoming cheaper, an increasing number of devices are either manufactured with a fixed amount of storage or are difficult to extend by end-users.

But what can you do when storage space is scarce? Maybe you will resort to cloud storage, or you find some means of external storage to carry around with you.

In this article I’ll investigate another solution to this problem: transparent filesystem compression, a feature built into Btrfs. Ideally, this will solve your storage problems while requiring hardly any modification to your system at all! Let’s see how.

Transparent compression explained

First, let’s investigate what transparent compression means. You can compress files with compression algorithms such as gzip, xz, or bzip2. This is usually an explicit operation: You take a compression utility and let it operate on your file. While this provides space savings, depending on the file content, it has a major drawback: When you want to access the file to read or modify it, you have to decompress it first.

This is not only a tedious process, but also temporarily defeats the space savings you had achieved previously. Moreover, you end up (de)compressing parts of the file that you didn’t intend to touch in the first place. Clearly there is something better than that!

Transparent compression on the other hand takes place at the filesystem level. Here, compressed files still look like regular uncompressed files to the user. However, they are stored with compression applied on disk. This works because the filesystem selectively decompresses only the parts of a file that you access and makes sure to compress them again as it writes changes to disk.

The compression here is transparent in that it isn’t noticeable to the user, except possibly for a small increase in CPU load during file access. Hence, you can apply this to existing systems without performing hardware modifications or resorting to cloud storage.

Comparing compression algorithms

Btrfs offers multiple compression algorithms to choose from. For technical reasons it cannot use arbitrary compression programs. It currently supports:

  • zstd
  • lzo
  • zlib

The good news is that, due to how transparent compression works, you don’t have to install these programs for Btrfs to use them. In the following paragraphs, you will see how to run a simple benchmark to compare the individual compression algorithms. In order to perform the benchmark, however, you must install the necessary executables. There’s no need to keep them installed afterwards, so you’ll use a podman container to make sure you don’t leave any traces in your system.

Because typing the same commands over and over is a tedious task, I have prepared a ready-to-run bash script that is hosted on Gitlab (https://gitlab.com/hartang/btrfs-compression-test). This will run a single compression and decompression with each of the above-mentioned algorithms at varying compression levels.

First, download the script:

$ curl -LO https://gitlab.com/hartang/btrfs-compression-test/-/raw/main/btrfs_compression_test.sh

Next, spin up a Fedora Linux container that mounts your current working directory so you can exchange files with the host and run the script in there:

$ podman run --rm -it --security-opt label=disable -v "$PWD:$PWD" \
    -w "$PWD" registry.fedoraproject.org/fedora:37

Finally run the script with:

$ chmod +x ./btrfs_compression_test.sh
$ ./btrfs_compression_test.sh

The output on my machine looks like this:

[INFO] Using file 'glibc-2.36.tar' as compression target
[INFO] Target file 'glibc-2.36.tar' not found, downloading now...
################################################################### 100.0%
[ OK ] Download successful!
[INFO] Copying 'glibc-2.36.tar' to '/tmp/tmp.vNBWYg1Vol/' for benchmark...
[INFO] Installing required utilities
[INFO] Testing compression for 'zlib'
 Level | Time (compress) | Compression Ratio | Time (decompress)
-------+-----------------+-------------------+-------------------
     1 |         0.322 s |          18.324 % |           0.659 s
     2 |         0.342 s |          17.738 % |           0.635 s
     3 |         0.473 s |          17.181 % |           0.647 s
     4 |         0.505 s |          16.101 % |           0.607 s
     5 |         0.640 s |          15.270 % |           0.590 s
     6 |         0.958 s |          14.858 % |           0.577 s
     7 |         1.198 s |          14.716 % |           0.561 s
     8 |         2.577 s |          14.619 % |           0.571 s
     9 |         3.114 s |          14.605 % |           0.570 s
[INFO] Testing compression for 'zstd'
 Level | Time (compress) | Compression Ratio | Time (decompress)
-------+-----------------+-------------------+-------------------
     1 |         0.492 s |          14.831 % |           0.313 s
     2 |         0.607 s |          14.008 % |           0.341 s
     3 |         0.709 s |          13.195 % |           0.318 s
     4 |         0.683 s |          13.108 % |           0.306 s
     5 |         1.300 s |          11.825 % |           0.292 s
     6 |         1.824 s |          11.298 % |           0.286 s
     7 |         2.215 s |          11.052 % |           0.284 s
     8 |         2.834 s |          10.619 % |           0.294 s
     9 |         3.079 s |          10.408 % |           0.272 s
    10 |         4.355 s |          10.254 % |           0.282 s
    11 |         6.161 s |          10.167 % |           0.283 s
    12 |         6.670 s |          10.165 % |           0.304 s
    13 |        12.471 s |          10.183 % |           0.279 s
    14 |        15.619 s |          10.075 % |           0.267 s
    15 |        21.387 s |           9.989 % |           0.270 s
[INFO] Testing compression for 'lzo'
 Level | Time (compress) | Compression Ratio | Time (decompress)
-------+-----------------+-------------------+-------------------
     1 |         0.447 s |          25.677 % |           0.438 s
     2 |         0.448 s |          25.582 % |           0.438 s
     3 |         0.444 s |          25.582 % |           0.441 s
     4 |         0.444 s |          25.582 % |           0.444 s
     5 |         0.445 s |          25.582 % |           0.453 s
     6 |         0.438 s |          25.582 % |           0.444 s
     7 |         8.990 s |          18.666 % |           0.410 s
     8 |        34.233 s |          18.463 % |           0.405 s
     9 |        41.328 s |          18.450 % |           0.426 s
[INFO] Cleaning up...
[ OK ] Benchmark complete!

It is important to note a few things before making decisions based on the numbers from the script:

  • Not all files compress equally well. Modern multimedia formats such as images or movies compress their contents already and don’t compress well beyond that.
  • The script performs each compression and decompression exactly once. Running it repeatedly on the same input file will produce slightly different timings. Hence, the times should be understood as estimates rather than exact measurements.

Given the numbers in my output, I decided to use the zstd compression algorithm with compression level 3 on my systems. Depending on your needs, you may want to choose higher compression levels (for example, if your storage devices are comparatively slow). To get an estimate of the achievable read/write speeds, you can divide the source archive's size (about 260 MB) by the (de)compression times.
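For example, with the numbers above, zstd at level 3 works out to roughly 260 MB / 0.709 s ≈ 370 MB/s for compression and 260 MB / 0.318 s ≈ 820 MB/s for decompression on my machine. Treat these as rough ballpark figures rather than measured throughput of your own hardware.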

The compression test works on the GNU libc 2.36 source code by default. If you want to see the results for a custom file, you can give the script a file path as the first argument. Keep in mind that the file must be accessible from inside the container.
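For example, to run the benchmark against a tarball of your own data (the file name below is just a placeholder), you could run:

$ ./btrfs_compression_test.sh ./my-data.tar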

Feel free to read the script code and modify it to your liking if you want to test a few other things or perform a more detailed benchmark!

Configuring compression in Btrfs

Transparent filesystem compression in Btrfs is configurable in a number of ways:

  • As mount option when mounting the filesystem (applies to all subvolumes of the same Btrfs filesystem)
  • With Btrfs file properties
  • During btrfs filesystem defrag (not permanent, not shown here)
  • With the chattr file attribute interface (not shown here)

I’ll only take a look at the first two of these.

Enabling compression at mount-time

There is a Btrfs mount option that enables file compression:

$ sudo mount -o compress=<ALGORITHM>:<LEVEL> ...

For example, to mount a filesystem and compress it with the zstd algorithm on level 3, you would write:

$ sudo mount -o compress=zstd:3 ...

Setting the compression level is optional. It is important to note that the compress mount option applies to the whole Btrfs filesystem and all of its subvolumes. Additionally, it is the only currently supported way of specifying the compression level to use.

In order to apply compression to the root filesystem, it must be specified in /etc/fstab. The Fedora Linux Installer, for example, enables zstd compression on level 1 by default, which looks like this in /etc/fstab:

$ cat /etc/fstab
[ ... ]
UUID=47b03671-39f1-43a7-b0a7-db733bfb47ff / btrfs subvol=root,compress=zstd:1,[ ... ] 0 0
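If you later change the compression settings in /etc/fstab, you can usually apply them without a reboot by remounting the filesystem; only data written from that point on is compressed with the new setting:

$ sudo mount -o remount,compress=zstd:3 /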

Enabling compression per-file

Another way of specifying compression is via Btrfs filesystem properties. To read the compression setting for any file, folder or subvolume, use the following command:

$ btrfs property get <PATH> compression

Likewise, you can configure compression like this:

$ sudo btrfs property set <PATH> compression <VALUE>

For example, to enable zlib compression for all files under /etc:

$ sudo btrfs property set /etc compression zlib

You can get a list of supported values with man btrfs-property. Keep in mind that this interface doesn’t allow specifying the compression level. In addition, if a compression property is set, it overrides other compression configured at mount time.
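For example, after the command above you can verify the setting on /etc; the key=value output shown is what btrfs property get prints when a compression property is set:

$ btrfs property get /etc compression
compression=zlib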

Compressing existing files

At this point, if you apply compression to your existing filesystem and check the space usage with df or similar commands, you will notice that nothing has changed. That is because Btrfs, by itself, doesn’t “recompress” all your existing files. Compression will only take place when writing new data to disk. There are a few ways to perform an explicit recompression:

  1. Wait and do nothing: As files are modified and written back to disk, Btrfs compresses the newly written file contents as configured. If you wait long enough, an increasing portion of your files will have been rewritten and, hence, compressed.
  2. Move files to a different filesystem and back again: Depending on which files you want to apply compression to, this can become a rather tedious operation.
  3. Perform a Btrfs defragmentation

The last option is probably the most convenient, but it comes with a caveat on Btrfs filesystems that already contain snapshots: it will break shared extents between snapshots. In other words, all the content shared between two snapshots, or between a snapshot and its parent subvolume, will be present multiple times after a defrag operation.

Hence, if you already have a lot of snapshots on your filesystem, you shouldn’t run a defragmentation on the whole filesystem. This isn’t necessary either, since with Btrfs you can defragment specific directories or even single files, if you wish to do so.

You can use the following command to perform a defragmentation:

$ sudo btrfs filesystem defragment -r /path/to/defragment

For example, you can defragment your home directory like this:

$ sudo btrfs filesystem defragment -r "$HOME"

In case of doubt, it's a good idea to start by defragmenting individual large files and to continue with increasingly large directories while monitoring free space on the filesystem.
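To keep an eye on free space while you do this, the usual df works, but Btrfs also ships its own, more detailed views:

$ sudo btrfs filesystem df /
$ sudo btrfs filesystem usage /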

Measuring filesystem compression

At some point, you may wonder just how much space you have saved thanks to file system compression. But how do you tell? First, to tell if a Btrfs filesystem is mounted with compression applied, you can use the following command:

$ findmnt -vno OPTIONS /path/to/mountpoint | grep compress
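On a default Fedora Workstation installation the output looks roughly like this (the exact option list will differ from system to system):

rw,relatime,seclabel,compress=zstd:1,ssd,space_cache=v2,subvol=/root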

If you get a result, the filesystem at the given mount point is using compression! Next, the command compsize can tell you how much space your files need:

$ sudo compsize -x /path/to/examine

On my home directory, the result looks like this:

$ sudo compsize -x "$HOME"
Processed 942853 files, 550658 regular extents (799985 refs), 462779 inline.
Type       Perc     Disk Usage   Uncompressed   Referenced
TOTAL       81%          74G          91G          111G
none       100%          67G          67G           77G
zstd        28%         6.6G          23G           33G

The individual lines tell you the “Type” of compression applied to files. The “TOTAL” is the sum of all the lines below it. The columns, on the other hand, tell you how much space your files need:

  • “Disk Usage” is the actual amount of storage allocated on the hard drive,
  • “Uncompressed” is the amount of storage the files would need without compression applied,
  • “Referenced” is the total size of all uncompressed files added up.

“Referenced” can differ from the numbers in “Uncompressed” if, for example, one has deduplicated files previously, or if there are snapshots that share extents. In the example above, you can see that 91 GB worth of uncompressed files occupy only 74 GB of storage on my disk! Depending on the type of files stored in a directory and the compression level applied, these numbers can vary significantly.

Additional notes about file compression

Btrfs uses a heuristic algorithm to detect files that are already compressed. This is done because such files usually do not compress well any further, so there is no point in wasting CPU cycles on them. To this end, Btrfs measures the compression ratio when compressing data before writing it to disk. If the first portions of a file compress poorly, the file is marked as incompressible and no further compression takes place.

If, for some reason, you want Btrfs to compress all data it writes, you can mount a Btrfs filesystem with the compress-force option, like this:

$ sudo mount -o compress-force=zstd:3 ...

When configured like this, Btrfs will compress all data it writes to disk with the zstd algorithm at compression level 3.

An important thing to note is that a Btrfs filesystem with a lot of data and compression enabled may take a few seconds longer to mount than without compression applied. This has technical reasons and is normal behavior which doesn’t influence filesystem operation.

Conclusion

This article detailed transparent filesystem compression in Btrfs. It is a built-in, comparatively cheap, way to get some extra storage space out of existing hardware without needing modifications.

The next articles in this series will deal with:

  • Qgroups – Limiting your filesystem size
  • RAID – Replace your mdadm configuration

If there are other topics related to Btrfs that you want to know more about, have a look at the Btrfs Wiki [1] and Docs [2]. Don’t forget to check out the first three articles of this series, if you haven’t already! If you feel that there is something missing from this article series, let me know in the comments below. See you in the next article!

Sources

[1]: https://btrfs.wiki.kernel.org/index.php/Main_Page
[2]: https://btrfs.readthedocs.io/en/latest/Introduction.html


Podman Checkpoint

Podman is a tool that runs, manages, and deploys containers under the OCI standard. Running containers rootless and creating pods (a pod is a group of containers) are additional features of Podman. This article describes and explains how to use checkpointing in Podman to save the state of a running container for later use.

Checkpointing containers: Checkpoint/Restore In Userspace, or CRIU, is Linux software available in the Fedora Linux repository as the criu package. It can freeze a running container (or an individual application) and checkpoint its state to disk (reference: https://criu.org/Main_Page). The saved data can be used to restore the container and run it exactly as it was at the time of the freeze. Using this, we can achieve live migration, snapshots, or remote debugging of applications or containers. This capability requires CRIU 3.11 or later installed on the system.

Podman Checkpoint

# podman container checkpoint <containername> 

This command creates a checkpoint of the container and freezes its state. Checkpointing a container also stops it: if you run podman ps, no running container named <containername> will be listed.
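You can verify this yourself: the checkpointed container disappears from the list of running containers, but Podman still knows about it:

# podman ps      (no running container named <containername>)
# podman ps -a   (the container is still listed, in a stopped state)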

You can export the checkpoint to a file at a specific location and copy that file to a different server:

# podman container checkpoint <containername> -e /tmp/mycheckpoint.tar.gz

Podman Restore

# podman container restore --keep <containername> 

The --keep option keeps the temporary files (such as CRIU's log and statistics files) created during checkpointing and restoring, instead of deleting them.

To import the container checkpoint you can use:

# podman container restore -i /tmp/mycheckpoint.tar.gz

Live Migration using Podman Checkpoint

This section describes how to migrate a container from client1 to client2 using the podman checkpoint feature. This example uses the https://lab.redhat.com/tracks/rhel-system-roles playground provided by Red Hat, as it has multiple hosts with SSH keys already configured.

The example will run a container with some process on client1, create a checkpoint, and migrate it to client2. First run a container on the client1 machine with the commands below:

podman run --name=demo1 -d docker.io/httpd
podman exec -it demo1 bash
sleep 600 &    # run a background process for later verification
exit

The above snippet runs a container named demo1 with the httpd process, then starts a sleep process inside it that runs for 600 seconds (10 minutes) in the background. You can verify this with:

# podman top demo1
USER       PID   PPID   %CPU    ELAPSED            TTY   TIME   COMMAND
root       1     0      0.000   5m40.61208846s     ?     0s     httpd -DFOREGROUND
www-data   3     1      0.000   5m40.613179941s    ?     0s     httpd -DFOREGROUND
www-data   4     1      0.000   5m40.613258012s    ?     0s     httpd -DFOREGROUND
www-data   5     1      0.000   5m40.613312515s    ?     0s     httpd -DFOREGROUND
root       88    1      0.000   16.613370018s      ?     0s     sleep 600

Now create a container checkpoint and export it to a specific file:

# podman container checkpoint demo1 -e /tmp/mycheckpoint.tar.gz
# scp /tmp/mycheckpoint.tar.gz client2:/tmp/

Then on client2:

# cd /tmp
# podman container restore -i mycheckpoint.tar.gz
# podman top demo1

You should see the output as follows:

USER       PID   PPID   %CPU    ELAPSED            TTY   TIME   COMMAND
root       1     0      0.000   5m40.61208846s     ?     0s     httpd -DFOREGROUND
www-data   3     1      0.000   5m40.613179941s    ?     0s     httpd -DFOREGROUND
www-data   4     1      0.000   5m40.613258012s    ?     0s     httpd -DFOREGROUND
www-data   5     1      0.000   5m40.613312515s    ?     0s     httpd -DFOREGROUND
root       88    1      0.000   16.613370018s      ?     0s     sleep 600

In this way you can achieve a live migration using the podman checkpoint feature.


Join the conversation

U.S. politician Daniel Webster described the U.S. government as, “… the people’s government, made for the people, made by the people, and answerable to the people.”[1] Similarly, the Fedora Project is “a community of people working together”[2] and it is “led by contributors from across the community.”[3] In other words, “It is what you make of it.”[4]

The Fedora community invites you to join the conversation and help advance the Fedora Project and free software in general. Traditionally much of the collaboration in the Fedora Project had occurred over IRC. And IRC support will continue for the foreseeable future. But Fedora is also rolling out some newer technologies that we think might improve the user experience. Fedora has moved primary communications to Matrix for real time communication and collaboration. If you haven’t already done so, we encourage you to sign up for a Fedora account, open the Fedora Matrix space at chat.fedoraproject.org, and explore the vast world that is the Fedora Project via Matrix. As much as possible, the Fedora Project strives to be an open community. Anyone can contribute to Fedora and everyone of good will is welcome to join.

A high-level overview of the Fedora communication channels

As the saying goes, “Communication is key.” But communication comes in many forms. One subdivision of the various forms of communication is synchronous and asynchronous. Traditionally, the Fedora Project has used email for asynchronous communication and IRC for synchronous communication. The forum discussion.fedoraproject.org is a new option for asynchronous communication and Matrix via chat.fedoraproject.org is a new option for synchronous communication.

Regarding the synchronous/asynchronous distinction: it is useful to differentiate the tools along this dimension, and also to clarify something. Synchronous does not mean that you get a reaction immediately; it can take a few days, if only because of the different time zones. But you can also “ping” someone specifically or invite them to a direct conversation. After some time, however, a topic tends to get lost in the timeline. Asynchronous tools, on the other hand, are organized thematically, bringing a topic back to the front whenever something is added to it. This provides a more systematic approach.

Importantly, the new tools are being provided as an option that you can choose. There is no requirement to use the new tools. You can expect both email and IRC to be around for a long time to come.

If you prefer email, you might want to check out the post: Guide to interacting with [discussion.fedoraproject.org] by email. If you prefer the IRC chat protocol, many of the rooms on Matrix at chat.fedoraproject.org are bridged to corresponding rooms on libera.chat.

Blog posts are yet another form of communication that will continue to be available at communityblog.fedoraproject.org and fedoramagazine.org. The former provides information expected to be of interest to the Fedora developers and Fedora special interest groups (SIGs). Posts about the tools used to build Fedora Linux, for example, are often found on the Community Blog. In contrast, this site — fedoramagazine.org — hosts articles expected to be of interest to the general Fedora community.

In a way, blog posts can be thought of as a super-asynchronous form of communication. The trade-off, as the forms of communication go from less-synchronous to more-synchronous, is that they tend to become somewhat lower in quality. That is, you can expect a much quicker response on IRC or chat.fedoraproject.org than if you request that a blog post be written about a subject on communityblog.fedoraproject.org. But of course, there is no guarantee that you will get a response on any of the channels. All contributions to the Fedora Project are voluntary. No one is ever obliged to provide any service to anyone else. But also don’t take a lack of response personally. Your question might just be outside the area of expertise of those who noticed it.

I like to think of the relationship between the various forms of communication that the Fedora Project uses as having an inversely proportional relation between frequency and contemplativeness.

The point is that these communication methods each serve different needs but they are complementary. You won’t want to limit yourself to just one of the communications channels. If you need rapid responses to simple questions, use the chat server. If you want to go in-depth on a complex topic, it might be something that would make a good blog post. And the forum is for everything in between.

So what are you waiting for? Sign up! Explore the community! If you come across something you think you can help out with, or even just something you might want to get involved in, jump in and offer to help! And above all, have FUN!

See also: What can I do for Fedora?

Thanks to Peter Boy, Kevin Fenzi, and others who provided helpful feedback and content for this announcement.


References

  1. Webster-Hayne debate
  2. Fedora’s Mission and Foundations — What is Fedora
  3. Fedora Leadership
  4. Paraphrased version of “A man’s life is what his thoughts make of it.” (Marcus Aurelius)

Announcing the Display/HDR hackfest

Hi all,

This is Carlos Soriano, Engineering Manager at the GPU team at Red Hat. I’m here together with Sebastian Wick, primary HDR developer at Red Hat, and Niels de Graef, GPU team Product Owner at Red Hat, to announce that we’re organizing the Display/HDR hackfest in Brno in the Czech Republic, April 24-26! The focus will be on planning and development of the technical infrastructure needed for various display technologies, specifically those that need GNOME Shell to work in tandem with the GPU stack. One of the main examples of this is HDR support, which we know you have all been waiting for!

Details

The purpose of the hackfest is to bring together contributors from across the display/GPU stack. Attendees will include those from projects such as Freedesktop, GNOME, KDE, Mesa, Wayland and the Linux kernel. This is going to be a great opportunity to meet and collaborate on the holistic approach necessary to make these technologies work well across various vendors and projects. 

The proposed length of the hackfest is 2 full days, and a third day for wrapping up during the morning and doing a local activity during the afternoon.

Now, you might be asking why are these technologies, such as HDR, important for us? And what is the plan to integrate them in Fedora? Well, let’s take HDR as a primary example, and start by explaining what HDR is.

What is HDR? – By Sebastian Wick

When most people talk about High Dynamic Range (HDR) they often refer not only to HDR but also to Wide Color Gamut (WCG). Both of these terms describe an improvement of display technologies.

The dynamic range of a display describes the ratio of the lowest luminance to the highest luminance it can produce at the same time. A high dynamic range thus means an increase in the highest luminance (colloquially called brighter whites) or a decrease of the lowest luminance (darker blacks), or both.

The color gamut of a display describes the colors it is able to reproduce. If a gamut is “wide”, it means the display is able to reproduce more chromaticities compared to a small gamut. To put it colloquially, it can show more colors.

We humans are able to perceive images which have up to a certain dynamic range and color gamut. The closer displays get to those capabilities the more immersive the resulting images are. HDR, in its broader meaning, is therefore all about being able to show more colors, and we can use HDR modes on displays to unlock their full potential.

From a technical perspective, enabling those modes and presenting content is not hard, as long as the display only has to show a single source. While this may work for some use cases, a general purpose desktop requires composition of various Standard Dynamic Range (SDR) signals, color managed SDR signals and various HDR signals to various SDR displays, color managed displays and HDR displays at the same time. You can take a look at Apple’s EDR concept if you want to see how this looks when done right.

There are no industry standards yet for this kind of composition and most HDR modes, unfortunately all the common modes, are also not designed for this use case. Instead they focus on presenting a single HDR source.

With the increase in composition complexity, offloading the composition and achieving a zero-copy direct-scanout scenario becomes much harder. But this is required to keep power consumption in check and thus improve battery life.

Wow, that sounds complex

Yeah, this all sounds more complex than someone could imagine, but we’re confident we can get there. Now, why is HDR important for us, and what is our plan for integrating it in Fedora once it is ready? Niels de Graef has been working as the primary HDR feature owner at Red Hat for a couple of months now and can help us understand that.

Why is HDR important for us, and what is our plan for integrating it in Fedora? – By Niels de Graef

By adding support for HDR, we want to be an enabler for several key groups.

On one hand, we want to support content creators who see HDR as a very interesting feature. It allows them to present their work to people the way they intend it to be seen, eliminating the effect of “washed down” colors due to the monitor only supporting a relatively small color space. For example: as an artist, you might want to specify exactly how bright the sun in a desert scene should look while making sure the rest of the scene does not degrade in detail.

As content creators go, an important stakeholder is the VFX industry, which consists of big players like Disney. Red Hat closely collaborates with the industry, which also recommends Red Hat Enterprise Linux (RHEL) as their choice of distribution. We want to make certain we get the industry’s feedback so we can incorporate it in this story and make sure we get this right from the start.

On the other hand, we want to enable Linux users. Hardware that supports HDR is becoming more commonplace and is becoming more affordable as of late. HDR is becoming more supported, and an increasing amount of content is making use of it. As long as we don’t have HDR support, Linux users will have a degraded experience compared to Windows and Mac users.

Finally, supporting HDR fits into the foundations of Fedora, where we want to do the right thing, making sure everyone is free to enjoy the latest innovations and features. This follows our move to Wayland, which, as a modern graphics stack, allows us to build new features like this.

Wrapping up

We hope that you enjoy the work we’re doing to enable the Linux ecosystem and users to make use of the latest technologies. We’re definitely excited with what we are aiming to achieve. Thanks to everyone who is contributing to this effort, and the organization of the hackfest.

We hope to see you all in Brno in April!


Automatically decrypt your disk using TPM2

This article demonstrates how to configure clevis and systemd-cryptenroll using a Trusted Platform Module 2 chip to automatically decrypt your LUKS-encrypted partitions at boot.

If you just want to get automatic decryption going you may skip directly to the Prerequisites section.

Motivation

Disk encryption protects your data (private keys and critical documents) against direct access to your hardware. Think of selling your notebook or smartphone, or of it being stolen by an opportunistic evil actor. Any data, even if “deleted”, is recoverable and hence may fall into the hands of an unknown third party.

Disk encryption does not protect your data from access on the running system. For example, disk encryption does not protect your data from access by malware running as your user or in kernel space. It’s already decrypted at that point.

Entering the passphrase to decrypt the disk at boot can become quite tedious. On modern systems a secure hardware chip called “TPM” (Trusted Platform Module) can store a secret and automatically decrypt your disk. This is an alternative factor, not a second factor. Keep that in mind. Done right, this is an alternative with a level of security similar to a passphrase.

Background

A TPM2 chip is a little hardware module inside your device which basically provides APIs for either WRITE-only or READ-only information. This way you might write a secret onto it, but you can never read it out later (but the TPM may use it later internally). Or you write info at one point that you only read out later. The TPM2 provides something called PCRs (Platform Configuration Registers). These registers take SHA1 or SHA256 hashes and contain measurements used to assert integrity of, for example, the UEFI configuration.

Secure Boot can be enabled or disabled in the system's UEFI. Among other things, Secure Boot computes hashes of every component in the boot chain (UEFI and its configuration, bootloader, etc.) and chains them together, such that a change in one of those components changes the computed and stored hashes in all following PCRs. This way you can build up trust about the environment you are in. Having a measure of the trustworthiness of your environment is useful, for example, when decrypting your disk. The UEFI Secure Boot specification defines PCRs 0 to 7. Everything beyond that is free for the OS and applications to use.

A summary of what is measured into which PCRs according to the spec

  • PCR 0: the EFI Firmware info like its version
  • PCR 1: additional config and info related to the EFI Firmware
  • PCR 2: EFI drivers from hardware components (like a RAID controller)
  • PCR 3: additional config and info for the drivers measured into PCR 2
  • PCR 4: pre-OS diagnostics and the EFI OS Loader
  • PCR 5: config of the EFI OS Loader and GPT table
  • PCR 6: is reserved for host platform manufacturer variables and is not used by EFI
  • PCR 7: stores secure boot policy configuration

Some examples on what is measured into which PCR

  • Changes to the initramfs measure into PCRs 9 and 10. So if you regenerate the initramfs using dracut -f you have to rebind. This will happen on every update to the kernel.
  • Changes to the Grub configuration, like adding kernel arguments, kernels, etc. measure into PCRs 8, 9 and 10.
  • Storage devices measure into PCRs 8 and 10. However, Hubs and YubiKeys do not seem to measure in any PCR.
  • Additional operating systems measure into PCR 1. This occurs, for example, when attaching a USB stick before boot with a Fedora Linux live image.
  • Booting into a live image changes PCRs 1, 4, 5, 8, 9 and 10.
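If you want to observe these changes on your own machine, the tpm2-tools package (an optional extra, not required for the rest of this article) can dump the current PCR values so you can compare them before and after a change:

$ sudo dnf install -y tpm2-tools
$ sudo tpm2_pcrread sha256:0,1,4,5,7,9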

A tool called clevis generates a new decryption secret for the LUKS encrypted disk, stores it in the TPM2 chip and configures the TPM2 to only return the secret if the PCR state matches the one at configuration time. Clevis will attempt to retrieve the secret and automatically decrypt the disk at boot time only if the state is as expected.

Security implications

As you establish an alternative unlock method using only the on-board hardware of your platform, you have to trust your platform manufacturer to do their job right. This is a delicate topic. There is trust in a secure hardware and firmware design. Then there is trust that the UEFI, bootloader, kernel, initramfs, etc. are all unmodified. Combined, this gives you a trustworthy environment in which it is OK to automatically decrypt the disk.

That being said, you have to trust (or better, verify) that the manufacturer did not mess anything up in the overall platform design for this to be considered a fairly safe decryption alternative. There are a number of cases where things did not work out as planned. For example, security researchers showed that BitLocker on a Lenovo notebook used unencrypted SPI communication with the TPM2, leaking the decryption key in plain text without even altering the system, and that BitLocker relied on the native encryption features of SSD drives, which can be bypassed through a factory reset.

These examples are all about BitLocker, but they should make it clear that if the overall design is broken, the secret is accessible and this alternative method is less secure than a passphrase present only in your head (and somewhere safe like a password manager). On the other hand, keep in mind that in most cases elaborate research and attacks to access a drive's data are not worth the effort for an opportunistic bad actor. Additionally, not having to enter a passphrase on every boot should help adoption of this technology, as it is transparent but adds additional hurdles to unwanted access.

Prerequisites

First check that:

  • Secure Boot is enabled and working
  • A TPM2 chip is available
  • The clevis package is installed

Clevis is where the magic happens. It’s a tool you use in the running OS to bind the TPM2 as an alternative decryption method and use it inside the initramfs to read the decryption secret from the TPM2.

Check that secure boot is enabled. The output of dmesg should look like this:

$ dmesg | grep Secure
[ 0.000000] secureboot: Secure boot enabled
[ 0.000000] Kernel is locked down from EFI Secure Boot mode; see man kernel_lockdown.7
[ 0.005537] secureboot: Secure boot enabled
[ 1.582598] integrity: Loaded X.509 cert 'Fedora Secure Boot CA: fde32599c2d61db1bf5807335d7b20e4cd963b42'
[ 35.382910] Bluetooth: hci0: Secure boot is enabled

Check dmesg for the presence of a TPM2 chip:

$ dmesg | grep TPM
[ 0.005598] ACPI: TPM2 0x000000005D757000 00004C (v04 DELL Dell Inc 00000002 01000013)

Install the clevis dependencies and regenerate your initramfs using dracut.

sudo dnf install clevis clevis-luks clevis-dracut clevis-udisks2 clevis-systemd
sudo dracut -fv --regenerate-all
sudo systemctl reboot

The reboot is important to get the correct PCR measurements based on the new initramfs image used for the next step.

Configure clevis

Now bind the LUKS-encrypted partition to the TPM2 chip. Point clevis at your (root) LUKS partition and specify the PCRs it should use.

Enter your current LUKS passphrase when asked. The process uses this to generate a new independent secret that will tie your LUKS partition to the TPM2 for use as an alternative decryption method. So if it does not work you will still have the option to enter your decryption passphrase directly.

sudo clevis luks bind -d /dev/nvme... tpm2 '{"pcr_ids":"1,4,5,7,9"}'

As mentioned previously, PCRs 1, 4 and 5 change when booting into another system such as a live disk. PCR 7 tracks the current UEFI Secure Boot policy and PCR 9 changes if the initramfs loaded via EFI changes.

Note: If you just want to protect the LUKS passphrase from live images but don’t care about more “elaborate” attacks such as altering the unsigned initramfs on the unencrypted boot partition, then you might omit PCR 9 and save yourself the trouble of rebinding on updates.
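In that case the bind command simply drops PCR 9 from the list:

sudo clevis luks bind -d /dev/nvme... tpm2 '{"pcr_ids":"1,4,5,7"}'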

Automatically decrypt additional partitions

For secondary encrypted partitions, use /etc/crypttab.

Use systemd-cryptenroll to register the disk for systemd to unlock:

sudo systemd-cryptenroll /dev/nvme0n1... --tpm2-device=auto --tpm2-pcrs=1,4,5,7,9

Then reflect that config in your /etc/crypttab by appending the options tpm2-device=auto,tpm2-pcrs=1,4,5,7,9.
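A resulting /etc/crypttab entry might look roughly like this; the mapping name and UUID are placeholders for your own device:

luks-data UUID=<UUID-of-the-LUKS-partition> none tpm2-device=auto,tpm2-pcrs=1,4,5,7,9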

Unbind, rebind and edit

List all current bindings of a device:

$ sudo clevis luks list -d /dev/nvme0n1... tpm2
1: tpm2 '{"hash":"sha256","key":"ecc","pcr_bank":"sha256","pcr_ids":"0,1,2,3,4,5,7,9"}'

Unbind a device:

sudo clevis luks unbind -d /dev/nvme0n1... -s 1 tpm2

The -s parameter specifies the slot of the alternative secret for this disk stored in the TPM. It should be 1 if you always unbind before binding again.

Regenerate binding, in case the PCRs have changed:

sudo clevis luks regen -d /dev/nvme0n1... -s 1 tpm2

Edit the configuration of a device:

sudo clevis luks edit -d /dev/nvme0n1... -s 1 -c '{"pcr_ids":"0,1,2,3,4,5,7,9"}'

Troubleshooting

Disk decryption passphrase prompt shows at boot, but goes away after a while:

Add a sleep command to the systemd-ask-password-plymouth.service file using systemctl edit to avoid requests to the TPM before its kernel module is loaded:

[Service]
ExecStartPre=/bin/sleep 10
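That is, run the command below and paste the [Service] snippet above into the editor that opens; systemd stores it as the override.conf file referenced in the next step:

sudo systemctl edit systemd-ask-password-plymouth.service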

Add the following to the config file /etc/dracut.conf.d/systemd-ask-password-plymouth.conf:

install_items+=" /etc/systemd/system/systemd-ask-password-plymouth.service.d/override.conf "

Then regenerate the initramfs via sudo dracut -fv --regenerate-all.

Reboot and then regenerate the binding:

sudo systemctl reboot
...
sudo clevis luks regen -d /dev/nvme0n1... -s 1



Using .NET 7 on Fedora Linux

.NET 7 is now available in Fedora Linux. This article briefly describes what .NET is, some of its recent and interesting features, how to install it, and presents some examples showing how it can be used.

.NET 7

.NET is a platform for building cross platform applications. It allows you to write code in C#, F#, or VB.NET. You can easily develop applications on one platform and deploy and execute them on another platform or architecture.

In particular, you can develop applications on Windows and run them on Fedora Linux instead! This is one less hurdle if you want to move from a proprietary platform to Fedora Linux. It’s also possible to develop on Fedora and deploy to Windows. Please note that in this last scenario, some Windows-specific application types, such as GUI Windows applications, are not available.

.NET 7 includes a number of new and exciting features. It includes a large number of performance enhancements to the runtime and the .NET libraries, better APIs for working with Unix file permissions and tar files, better support for observability via OpenTelemetry, and compiling applications ahead-of-time. For more details about all the new features in .NET 7, see https://learn.microsoft.com/en-us/dotnet/core/whats-new/dotnet-7.

Fedora Linux builds of .NET 7 can even run on the IBM Power (ppc64le) architecture. This is in addition to support for 64-bit ARM/Aarch64 (which Fedora Linux calls aarch64 and .NET calls arm64), IBM Z (s390x) and 64-bit Intel/AMD platforms (which Fedora Linux calls x86_64 and .NET calls x64).

.NET 7 is a Standard Term Support (STS) release, which means upstream will stop maintaining it in May 2024. .NET in Fedora Linux will follow that end date. If you want to use a Long Term Support (LTS) release, please use .NET 6 instead. .NET 6 reaches its end of life in November 2024. For more details about the .NET lifecycle, see https://dotnet.microsoft.com/en-us/platform/support/policy/dotnet-core.

If you are looking to set up a development environment for developing .NET applications on Fedora Linux, take a look at https://fedoramagazine.org/set-up-a-net-development-environment/.

The .NET Special Interest Group (DotNetSIG) maintains .NET in Fedora Linux. Please come and join us to improve .NET on Fedora Linux! You can reach us via IRC (#fedora-devel) or mailing lists (dotnet-sig@lists.fedoraproject.org) if you have any feedback, questions, ideas or suggestions.

How to install .NET 7

To build C#, F# or VB.NET code on Fedora Linux, you will need the .NET SDK. If you only want to run existing applications, you will only need the .NET Runtime.

Install the .NET 7 Software Development Kit (SDK) using this command:

sudo dnf install -y dotnet-sdk-7.0

This installs all the dependencies, including a .NET runtime.

If you don't want to install the entire SDK but just want to run .NET 7 applications, you can install either the ASP.NET Core runtime or the .NET runtime using one of the following commands:

sudo dnf install -y aspnetcore-runtime-7.0
sudo dnf install -y dotnet-runtime-7.0

This style of package name applies to all versions of .NET on all versions of Fedora Linux. For example, you can install .NET 6 using the same style of package name:

sudo dnf install -y dotnet-sdk-6.0

To make certain .NET 7 is installed, run dotnet --info to see all the SDKs and runtimes installed.
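For example, run one of the following; --list-sdks prints only the installed SDK versions:

dotnet --info
dotnet --list-sdks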

License and Telemetry

The .NET packages in Fedora Linux are built from fully Open Source source code. The primary license is MIT. The .NET packages in Fedora Linux do not contain any closed source or proprietary software. The Fedora .NET team builds .NET offline in the Fedora Linux build system and removes all binaries present in the source code repositories before building .NET. This gives us a high degree of confidence that .NET is built from reviewed sources.

The .NET packages in Fedora Linux do not collect any data from users. All telemetry is disabled in the Fedora builds of .NET. No data is collected from anyone running .NET and no data is sent to Microsoft. We run tests to verify this for every build of .NET in Fedora Linux.

“Hello World” in .NET

After installing .NET 7, you can use it to create and run applications. For example, you can use the following steps to create and run the classic “Hello World” application.

Create a new .NET 7 project in the C# language:

dotnet new console -o HelloWorldConsole

This will create a new directory named HelloWorldConsole containing a trivial C# Hello World program that prints a greeting.

Then, switch to the project directory:

cd HelloWorldConsole

Finally, build and run the application:

dotnet run

.NET 7 will build your program and run it. You should see a “Hello world” output from your program.

“Hello Web” in .NET

You can also use .NET to create web applications. Let's do that now.

First, create a new web project, in a separate directory (not under our previous project):

dotnet new web -o HelloWorldWeb

This will create a simple Hello-World style application based on .NET’s built-in web (Empty ASP.NET Core) template.

Now, switch to that directory:

cd HelloWorldWeb

Finally, build and run the application:

dotnet run

You should see output like the following that shows the web application is running.

Building…
info: Microsoft.Hosting.Lifetime[14]
Now listening on: http://localhost:5105
info: Microsoft.Hosting.Lifetime[0]
Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
Hosting environment: Development
info: Microsoft.Hosting.Lifetime[0]
Content root path: /home/omajid/temp/HelloWorldWeb

Use a web browser to access the application. You can find the URL in the output at the “Now listening on:” line. In my case that’s http://localhost:5105:

firefox http://localhost:5105

You should see a “Hello World” message in your browser.

Using .NET with containers

At this point, you have successfully created, built and run .NET applications locally. What if you want to isolate your application and everything about it? What if you want to run it in a non-Fedora OS? Or deploy it to a public/private/hybrid cloud? You can use containers! Let’s build a container image for running your .NET program and test it out.

First, create a new project:

dotnet new web -o HelloContainer

Then, switch to that project directory:

cd HelloContainer

Then add a Dockerfile that describes how to build a container for your application:

FROM fedora:37
RUN dnf install -y dotnet-sdk-7.0 && dnf clean all
RUN mkdir /HelloContainer/
WORKDIR /HelloContainer/
COPY . /HelloContainer/
RUN dotnet publish -c Release
ENV ASPNETCORE_URLS="http://0.0.0.0:8080"
CMD ["dotnet" , "bin/Release/net7.0/publish/HelloContainer.dll"]

This will start with a default Fedora Linux container, install .NET 7 in it, copy your source code into it and use the .NET in the container to build the application. Finally, it will set things up so that running the container runs your application and exposes it via port 8080.

You can build and run this container directly. However, if you are familiar with Dockerfiles, you might have noticed that it is quite inefficient. It will re-download all dependencies and re-build everything on any change to any source file. It produces a large container image at the end which even contains the full .NET SDK. An option is to use a multi-stage build to make it faster to iterate on the source code. You can also produce a smaller container at the end that contains just your application and .NET dependencies.

Overwrite the Dockerfile with this:

FROM registry.fedoraproject.org/fedora:37 as dotnet-sdk
RUN dnf install -y dotnet-sdk-7.0 && dnf clean all

FROM registry.fedoraproject.org/fedora:37 as aspnetcore-runtime
RUN dnf install -y aspnetcore-runtime-7.0 && dnf clean all

FROM dotnet-sdk as build-env
RUN mkdir /src
WORKDIR /src
COPY *.csproj .
RUN dotnet restore
COPY . .
RUN dotnet publish -c Release -o /publish

FROM aspnetcore-runtime as app
WORKDIR /publish
COPY --from=build-env /publish .
ENV ASPNETCORE_URLS="http://0.0.0.0:8080"
EXPOSE 8080
ENTRYPOINT ["dotnet", "HelloContainer.dll"]

Now install podman so you can build and run the Dockerfile:

sudo dnf install -y podman

Build the container image:

podman build -t hello-container .

Now, run the container we just built:

podman run -it -p 8080:8080 hello-container

A note about the arguments. The port is configured with the -p flag so that port 8080 from inside the container is available as port 8080 outside too. This allows you to connect to the application directly. The container is run interactively (-it) so you can see the output and any errors that come up. Running interactively is usually not needed when deploying an application to production.
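When you do deploy it, you would typically run the container detached instead; the container name below is just an example:

podman run -d --name hello-container-app -p 8080:8080 hello-container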

Finally, connect to the container using a web browser. For example:

firefox http://localhost:8080

You should see a “Hello World” message.

Congratulations! You now have a .NET application running inside a Fedora container!

Conclusion

This was a whirlwind overview of .NET 7 in Fedora Linux. It covered building and running an application using plain Fedora RPM packages, as well as creating a container image for a .NET application using only Fedora Linux.

If you have an interest in using or improving .NET on Fedora Linux, please join us!


Fedora Project at FOSDEM 2023

The Fedora Project will be present at FOSDEM 2023. This article describes the gathering and a few of the events on the agenda. I assume that if you are reading Fedora Magazine you already know what FOSDEM is, but I'll start with a small intro anyway.

Define FOSDEM

FOSDEM is the biggest event in the known universe for free/libre and open-source developers and enthusiasts.

Many good people from around the world meet, discuss common topics, and define the future of F/LOSS. The event is held in Brussels at the beginning of February. Some of us who come from somewhat warmer countries call it FrOSDEM, because it’s usually freezing 🙂

Why attend?

If you are already a contributor, or you want to start putting your skills to good use in the F/LOSS universe, this event is a must.

I know everyone has their reason for visiting, but I’ll share the most common ones:

  • You meet the people creating and supporting the products that power the Internet you already use.
  • If you are a contributor already, you have a chance to meet with your team and people using your product.
  • You learn so much new stuff quickly.
  • You broaden your horizons by looking at something outside your bubble. If you are a fan of Fedora, go and learn more about Security or Javascript.
  • You have a chance to talk to others who share your passion and even become friends for life. A good friend is always an asset!
  • You achieve your daily steps goal because the ULB campus is enormous, and you will have to move a lot to get to the room you would like to visit.
  • You have a chance to volunteer and help the community if this is what drives you.
  • You attend an event with a great Code of Conduct.

Fedora at FOSDEM 2023

It’s a tradition for the Fedora Project team to be there to present some of our work from the last year and to allow you to share your feedback on what we do well and how we can improve.

Meet, greet, and see our community in action

One of the most extraordinary things at FOSDEM, which I deliberately didn’t mention in the previous section, is the project booths. In almost every building, you will see people behind a branded table, ready to talk to you about their project, its values, and its mission.

People at the Fedora booth looking at something.
Image by Francesco Crippa under Attribution 2.0 Generic (CC BY 2.0)

 I have to mention the goodies here, as well. You will return home with many items from your favorite projects. Be sure to continue supporting them further.  

We at Fedora will be happy to welcome you to our booth as well. You can talk to the community members, give us constructive feedback, and see some of the things we prepared.

Our booth location is in building H, alongside the rest of the Linux Distros.

Map of the ULB campus with a mark of the building H, where the Fedora Project booth will be
Building H, ULB Campus.

Stop by and say hi in your language! We are looking forward to talking to you!

We want to share what makes our work exceptional

At each FOSDEM we have a good number of talks related to what we do at Fedora. I am listing only some of them to make it enjoyable for you to browse the agenda and discover the rest yourself.

1: Fedora CoreOS – Your Next Multiplayer Homelab Distro

Using Fedora CoreOS in a Selfhosted Homelab to setup a Multiplayer Server

Speakers

Akashdeep Dhar
 Objective Lead for Fedora Websites & Apps, Fedora Council
 Software Engineer, Red Hat Community Platform Engineering

Sumantro Mukherjee
 Elected Representative, Fedora Council
 Software Quality Engineer, Red Hat

Intro

Fedora CoreOS is an essential, monolithic, automatically updating operating system optimized for running containers. It focuses on offering the best container host for executing containerized workloads securely and at scale. We will show a case study of setting up Fedora CoreOS as a self-hosted Homelab distribution for globally accessible (using secure network tunneling) multiplayer servers for video games (namely Minecraft, Valheim, etc.).

When and Where

Saturday, Feb-4 at the Containers devroom from 11:30 to 12:00


2: Creative Freedom Summit Retrospective

Speakers

Emma Kidney

Part of Red Hat’s Community Platform Engineering team since 2021. 
Designer at Red Hat’s Community Design Team. 

Jess Chitas 

Part of Red Hat’s Community Platform Engineering team.
Creator of Fedora’s mascot – Colúr, and Fedora Brand Guidelines Booklet.

Intro

The Creative Freedom Summit is a virtual event focused on promoting Open Source tools, spreading knowledge of how to use them, and connecting creatives across the FOSS ecosystem. The summit’s accomplishments and shortcomings will be examined in light of the event’s first year and potential changes for the following years.

When and Where

Sunday, Feb-5 at the Open Source Design Dev Room from 14:30 to 14:55


Where to find more related talks?

Our wiki page is a good start, but FOSDEM’s schedule catalog is even better. One life hack: pick a good 30-minute slot, go through all the rooms that might catch your attention, and create a personal schedule in your favorite calendar app. Make sure you have a backup plan, because some rooms might be fully occupied and you won’t be able to enter.

I want to interest you in a challenge

If you know more than I do about FOSDEM 2023 and have already prepared your schedule, share a single paragraph comment about your FOSDEM plan and list a few of your favorite talks. You will help the community understand the greatness of the event and find more reasons to make the trip to frosty Brussels.

See you there!

Posted on Leave a comment

How to become a Shortwave listener (SWL) with Fedora Linux and Software Defined Radio

Catching signals from others is how we humans started communicating. It all started, of course, with our vocal cords. Then we moved to smoke signals for long-distance communication. At some point, we discovered radio waves, and we still use them to stay in contact. This article will describe how you can tune in using Fedora Linux and an SDR dongle.

My journey

I got interested in radio communication as a hobby when I was a kid, while my local club, LZ2KRS, was still a thing. I was so excited to be able to listen and communicate with people worldwide. It opened a whole new world for me. I was living in a communist country back then and this was a way to escape just for a bit. It also taught me about ethics and technology.  

Year after year my hobby grew and now, in the Internet era with all the cool devices you can use, it’s getting even more exciting. So I want to show you how to do it with Fedora Linux and a hardware dongle.

What is Ham Radio

Amateur Radio (ham radio) is a popular hobby and service that brings people, electronics, and communication together. People use ham radio to talk across town, worldwide, or even into space, without the Internet or cell phones. 

What’s SWLing?

To broadcast with your ham radio or SDR system, you need to obtain a license from a governmental body. But to intercept signals and listen to the open communication between two amateur radio stations, you don’t need one.

The term SWLing comes from the abbreviation of Short Wave Listener: you listen to stations communicating in the shortwave bands between 3 and 30 MHz. These frequencies can be used for long-distance communication because they reflect off the ionosphere, a layer of the Earth’s atmosphere.

To get started, you don’t need a license. Still, I recommend getting yourself an SWL sign to identify yourself in a listening contest. These are competitions with categories such as who can log the most contacts in a month, or who can hear contacts from every country in the world.

How to get an SWL Sign?

There are two options:

  • Contact your national radio club and ask them to issue one for you. I got my Czech one, OK1-36568, after a few weeks.
  • Join the Short Wave Amateur Radio Listening community and request a sign there.

Either of these will give you more information and help if you get stuck!

QSL Cards

You can also use your sign to send QSL cards via post or electronically. This is a great way to communicate with people worldwide and make friends.

Per Wikipedia, a QSL card is a written confirmation of either a two-way radio communication between two amateur radio or citizens band stations; a one-way reception of a signal from an AM radio, FM radio, television, or shortwave broadcasting station; or the reception of a two-way radio communication by a third-party listener (our case here).

A typical QSL card is the same size and made from the same material as a regular postcard; most are sent through snail mail. 

Replace the radio receiver with your Fedora Linux system

The focal point of the ham radio hobby is the radio transmitter/receiver. Most of the time, enthusiasts build their radio from scratch, but that is not what I will cover here.

SDR

A software-defined radio (SDR) system is a radio communication system that uses software for the modulation and demodulation of radio signals. In other words, a piece of hardware and software takes the place of a radio transmitter/receiver. This helps you discover more in a way that you are familiar with – a User Interface with built-in functions instead of the limited interface of a radio receiver. 

My explanation oversimplifies things, so if you want to go deep and read more about SDR, here is an excellent start.

SDR Set Up under Fedora Linux

Choosing the proper hardware

If you search the Internet for an SDR dongle, you’ll find tons of ideas depending on your budget. In this tutorial, I’ll work with the one I have, which works well under Fedora 37 – it is available from Nooelec.

A note: The dongle covers frequencies from 25 MHz to 1750 MHz, which does not include the Short Wave bands. You will need an additional device to listen to them; it is included in the package I linked above. Some other hardware providers offer all-in-one products.

Check if the dongle is visible

Before installing anything, detect whether Fedora Linux recognizes your USB dongle. I hope you didn’t buy a fake one :-). Use the following command to list the USB devices on your system.

lsusb

One of the output lines (in the case of the Nooelec) should contain

Realtek Semiconductor Corp. RTL2838 DVB-T

A screen from Fedora Linux showing the results of the lsusb command listing the Realtek Device we will be using in this exercise.
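
If lsusb prints a long list, you can narrow it down with an ordinary grep filter:

lsusb | grep -i realtek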

Now proceed by installing the software you need

Fedora offers a set of tools and drivers packaged as a group. Even though you won’t use all the components in this group right away, I recommend installing it. You’ll have more software to play with.

sudo dnf group install 'Electronic Lab'

I advise you to explore what’s in the group by running this command:

sudo dnf group info 'Electronic Lab'
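
If you prefer a lighter install, the two tools used in the rest of this article can also be installed on their own (the package names rtl-sdr and gqrx are assumptions here, not taken from the article):

sudo dnf install -y rtl-sdr gqrx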

Now check if you have everything set up correctly by running:

rtl_test

You should see something like this:

A screen from Fedora Linux console showing the results of the previous command listing the device and its properties.

Do not forget to kill this process, because otherwise the device will stay busy and cannot be used in the next step. A simple Ctrl + C works.
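
If you would rather have the test stop by itself, you can wrap it in timeout (part of coreutils), which ends rtl_test after the given number of seconds:

timeout 10 rtl_test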

Gqrx

You have the dongle already in your device’s USB port and all the software you need to get started. 

 Now it’s time to intercept your first signal. Start the program called Gqrx. Don’t be alarmed by the strange interface. You’ll get used to it. 
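
You can start it from the activities overview or from a terminal; the command below assumes the binary is called gqrx:

gqrx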

Configure the I/O Device Screen

A screen showing the settings panel of the software Gqrx, for the device we use.

From the “Device” Dropdown, select the ‘RealtekRTL2838...’

Leave the rest untouched for the moment.

If you don’t see your device there, click the “Device Scan” button at the bottom of the screen.

When your device is selected, click “OK” and the dialogue will close.

Configure the frequency screen

Before you start intercepting signals, ensure there is something out there that proves that everything works correctly. Since the dongle covers the FM radio band as well, do this:

  • Locate your favorite radio station’s frequency. Mine is 105 MHz
  • Set it in the Frequency field
  • Select WFM (stereo) in the “Mode” dropdown. If you don’t do this, you will not hear any sound.
A screen from the Gqrx software helping us to set the frequency to our favorite FM station.

Play

And now, you need to start the reception by clicking the “play button” in your main menu. You will see the frequency visualized like this:

A screen from Gqrx displaying the signal received.

If you hear a sound, everything is ready to move to the next step.

If you don’t hear anything, check that everything is set up correctly. You may ask a question in the comments for this article; I can direct you to the proper forum to solve the issue.

Feel free to play with some more FM broadcasts. You have the antenna for it in your pack.

Let’s go Short Wave

In the case of the Nooelec, you need to add one more device to the USB dongle and turn it on. Instructions on how to do that are included in the package you receive.  

In short, you plug the “Up Converter” into your USB dongle and make sure the switch is in the “convert” position. Some videos are available on how to do it if you get stuck.

You will need an antenna and a good location

Now things get trickier. If you live in an area where you don’t see open space out your window, or other buildings surround yours, you might have trouble catching a Short Wave radio amateur signal.

Let’s try this to see if it works

Try to be in the open. I usually listen from my terrace, which is not ideal but works under the right conditions.

Apart from the hardware, you will need a long wire to act as your antenna. Initially, try the antenna that comes with the hardware (the telescoping one from Nooelec), but be aware it will only catch powerful signals.

Let’s go back to Gqrx

Now with the converter, you need to make some changes to your device screen:

A screen from Gqrx showing how to set up the SDR with the Up Converter. You need to add the value of -125 MHz to the LNB LO field.

Please note the -125 MHz in the LNB LO field. This is required for the Up Converter to work.
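
To make the numbers concrete: the Up Converter shifts everything it receives up by 125 MHz, which is exactly what the -125 MHz offset compensates for. A station transmitting on 14.100 MHz reaches the dongle on 14.100 + 125 = 139.100 MHz, and with LNB LO set to -125 MHz Gqrx subtracts that shift again, so the display shows the true 14.100 MHz.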

Tune your frequency to 14.100 MHz and make sure your Mode is USB (Upper Sideband), because this is the band’s main demodulation option.

Then go to your FFT Settings screen, use the zoom slider, and set it so you see about 100 kHz. In our case, you should see roughly 14.05 to 14.15 MHz on your screen.

Also, click “Enable Band Plan” to see information about the SW bands you are exploring.

Then hit the play button and start exploring the space between 14.0 and 14.3 MHz to catch any amateur radio transmission.

A screen showing the Gqrx signal receiver at work with the settings described in this section.

When intercepting a transmission, adjust your settings to improve your listening experience. It’s a journey that you have already started.

Most probably, you will hear something like this:

“CQ CQ CQ this is ..” (followed by the call sign, spelled with the ham radio phonetic alphabet).

Listen very carefully; from the call sign, you will be able to determine the radio amateur’s country.

You can visit the QRZCQ website to learn more about them and even send them a QSL card confirming that you heard their contact.

Keep the momentum going.

Now you have some tools and ideas for starting Short Wave Listening. 

This is the first step of an incredible and exciting journey you can have together with your Fedora Linux OS. 

You will discover the pleasure of building your own antenna for a specific band, reading more about how the ionosphere helps, taking part in a listening competition, and learning what those Q-codes mean.

73

Posted on Leave a comment

Anaconda Web UI storage feedback requested!

As you might know, the Anaconda Web UI preview image offers only a simple “erase everything” partitioning right now, because partitioning is a pretty big and problematic topic. On one hand, Linux gurus want to control everything; on the other hand, we also need to support beginner users. We are also constrained by the capabilities of the existing backend and storage tooling, and by consistency with the rest of Anaconda. The Anaconda team is looking for your storage feedback to help us with the design of the Web UI!

In general, partitioning is one of the most complex, problematic, and controversial parts of what Anaconda does. Because of that, and because of the great feedback on the last blog post, we decided to ask for your feedback again so we know where to focus. We’re looking for feedback from everyone; more answers are better here. We’d like input whether you’re using Fedora, RHEL, Debian, openSUSE, Windows, or another Linux distribution, even if it’s just for a week. All these inputs are valuable!

Please help us shape one of the most complex parts of the Anaconda installer!

With just a few minutes of your time spent filling out the questionnaire, you can help us decide which path to choose for partitioning.

Questionnaire link: https://redhatdg.co1.qualtrics.com/jfe/form/SV_87bPLycfp1ueko6