
Hibernation in Fedora Workstation

This article walks you through the manual setup for hibernation in Fedora Linux 36 Workstation using Btrfs and is based on a gist by eloylp on GitHub.

Goal and Rationale

Hibernation stores the current runtime state of your machine (effectively the contents of your RAM) on disk and does a clean shutdown. Upon next boot this state is restored from disk to memory so that everything, including open programs, is how you left it.

Fedora Workstation uses ZRAM. This is a sophisticated approach to swap that uses compression inside a portion of your RAM to avoid slower on-disk swap files. Unfortunately, this means you don’t have persistent swap space to move the contents of your RAM to when powering off your machine for hibernation.

How it works

The technique configures systemd and dracut to store and restore the contents of your RAM in a temporary swap file on disk. The swap file is created just before and removed right after hibernation to avoid trouble with ZRAM. A persistent swap file is not recommended in conjunction with ZRAM, as it creates some confusing problems that compromise your system’s stability.

A word on compatibility and expectations

Hibernation following this guide might not work flawlessly on your particular machine(s). Due to possible shortcomings of certain drivers you might experience glitches like non-working wifi or display after resuming from hibernation. In that case feel free to reach out in the comment section of the gist on GitHub, or try the tips from the troubleshooting section at the bottom of this article.

The changes introduced in this article are tied to the systemd-hibernate.service and hibernate.target units and hence won’t execute on their own nor interfere with your system if you don’t initiate a hibernation. That being said, if it does not work it still adds some small bloat which you might want to remove.

Hibernation in Fedora Workstation

The first step is to create a Btrfs subvolume to contain the swap file.

$ btrfs subvolume create /swap

In order to calculate the size of your swap file use swapon to get the size of your zram device.

$ swapon
NAME TYPE SIZE USED PRIO
/dev/zram0 partition 8G 0B 100

In this example the machine has 16G of RAM and an 8G zram device. ZRAM stores roughly double the amount of system RAM compressed in a portion of your RAM. Let that sink in for a moment. This means that in total the memory of this machine can hold 8G * 2 + 8G of RAM, which equals 24G of uncompressed data. Create and configure the swapfile using the following commands.

$ touch /swap/swapfile
# Disable Copy On Write on the file
$ chattr +C /swap/swapfile
$ fallocate --length 24G /swap/swapfile
$ chmod 600 /swap/swapfile
$ mkswap /swap/swapfile

Modify the dracut configuration and rebuild your initramfs to include the resume module, so it can later restore the state at boot.

$ cat <<-EOF | sudo tee /etc/dracut.conf.d/resume.conf
add_dracutmodules+=" resume "
EOF
$ dracut -f

In order to configure grub to tell the kernel to resume from hibernation using the swapfile, you need the UUID and the physical offset.

Use the following command to determine the UUID of the swap file and take note of it.

$ findmnt -no UUID -T /swap/swapfile
dbb0f71f-8fe9-491e-bce7-4e0e3125ecb8

Calculate the correct offset. In order to do this you’ll unfortunately need gcc and the source of the btrfs_map_physical tool, which computes the physical offset of the swapfile on disk. Invoke gcc in the directory you placed the source in and run the tool.

$ gcc -O2 -o btrfs_map_physical btrfs_map_physical.c
$ ./btrfs_map_physical /path/to/swapfile
FILE OFFSET  EXTENT TYPE  LOGICAL SIZE  LOGICAL OFFSET  PHYSICAL SIZE  DEVID  PHYSICAL OFFSET
0 regular 4096 2927632384 268435456 1 4009762816
4096 prealloc 268431360 2927636480 268431360 1 4009766912
268435456 prealloc 268435456 3251634176 268435456 1 4333764608
536870912 prealloc 268435456 3520069632 268435456 1 4602200064
805306368 prealloc 268435456 3788505088 268435456 1 4870635520
1073741824 prealloc 268435456 4056940544 268435456 1 5139070976
1342177280 prealloc 268435456 4325376000 268435456 1 5407506432
1610612736 prealloc 268435456 4593811456 268435456 1 5675941888

The first value in the PHYSICAL OFFSET column is the relevant one. In the above example it is 4009762816.

Take note of the pagesize you get from getconf PAGESIZE.

Calculate the kernel resume_offset through division of physical offset by the pagesize. In this example that is 4009762816 / 4096 = 978946.
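
If you prefer to script this step, here is a minimal sketch that combines both calculations. It assumes the output format shown above, where the physical offset is the last column of the first data row:

$ PHYSICAL_OFFSET=$(./btrfs_map_physical /swap/swapfile | awk 'NR==2 {print $NF}')
$ PAGESIZE=$(getconf PAGESIZE)
$ echo $((PHYSICAL_OFFSET / PAGESIZE))
978946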

Update your grub configuration file and add the resume and resume_offset kernel cmdline parameters.

grubby --args="resume=UUID=dbb0f71f-8fe9-491e-bce7-4e0e3125ecb8 resume_offset=978946" --update-kernel=ALL

The created swapfile is only used in the hibernation stage of system shutdown and boot, and is therefore not configured in /etc/fstab. Systemd units control this behavior, so create the two units hibernate-preparation.service and hibernate-resume.service.

$ cat <<-EOF | sudo tee /etc/systemd/system/hibernate-preparation.service
[Unit]
Description=Enable swap file and disable zram before hibernate
Before=systemd-hibernate.service

[Service]
User=root
Type=oneshot
ExecStart=/bin/bash -c "/usr/sbin/swapon /swap/swapfile && /usr/sbin/swapoff /dev/zram0"

[Install]
WantedBy=systemd-hibernate.service
EOF
$ systemctl enable hibernate-preparation.service
$ cat <<-EOF | sudo tee /etc/systemd/system/hibernate-resume.service
[Unit]
Description=Disable swap after resuming from hibernation
After=hibernate.target

[Service]
User=root
Type=oneshot
ExecStart=/usr/sbin/swapoff /swap/swapfile

[Install]
WantedBy=hibernate.target
EOF
$ systemctl enable hibernate-resume.service

Systemd does memory checks on login and hibernation. In order to avoid issues when moving the memory back and forth between the swapfile and zram, disable some of them.

$ mkdir -p /etc/systemd/system/systemd-logind.service.d/
$ cat <<-EOF | sudo tee /etc/systemd/system/systemd-logind.service.d/override.conf
[Service]
Environment=SYSTEMD_BYPASS_HIBERNATION_MEMORY_CHECK=1
EOF
$ mkdir -p /etc/systemd/system/systemd-hibernate.service.d/
$ cat <<-EOF | sudo tee /etc/systemd/system/systemd-hibernate.service.d/override.conf
[Service]
Environment=SYSTEMD_BYPASS_HIBERNATION_MEMORY_CHECK=1
EOF

Reboot your machine for the changes to take effect. The following SELinux configuration won’t work if you don’t reboot first.

SELinux won’t like hibernation attempts just yet. Change that with a new policy. An easy although “brute” approach is to initiate hibernation and use the audit log of this failed attempt via audit2allow. The following command will fail, returning you to a login prompt.

systemctl hibernate

After you’ve logged in again check the audit log, compile a policy and install it. The -b option filters for audit log entries from last boot. The -M option compiles all filtered rules into a module, which is then installed using semodule -i.

$ audit2allow -b
#============= systemd_sleep_t ==============
allow systemd_sleep_t unlabeled_t:dir search;
$ cd /tmp
$ audit2allow -b -M systemd_sleep
$ semodule -i systemd_sleep.pp

Check that hibernation is working via systemctl hibernate again. After resume check that ZRAM is indeed the only active swap device.

$ swapon
NAME TYPE SIZE USED PRIO
/dev/zram0 partition 8G 0B 100

You now have hibernation configured.

GNOME Shell hibernation integration

You might want to add a hibernation button to the GNOME Shell “Power Off / Logout” section. Check out the extension Hibernate Status Button to do so.

Troubleshooting

A first place to troubleshoot any problems is through journalctl -b. Have a look around the end of the log, after trying to hibernate, to pinpoint log entries that tell you what might be wrong.

Another source of information on errors is the Problem Reporting tool, especially for problems that are not common but specific to your hardware configuration. Have a look at it before and after attempting hibernation and see if something comes up. Follow up on any issues via Bugzilla and see if others experience similar problems.

Revert the changes

To reverse the changes made above, follow this check-list (a sketch of the corresponding commands follows the list):

  • remove the swapfile
  • remove the swap subvolume
  • remove the dracut configuration and rebuild dracut
  • remove kernel cmdline args via grubby --remove-args=
  • disable and remove hibernation preparation and resume services
  • remove systemd overrides for systemd-logind.service and systemd-hibernate.service
  • remove SELinux module via semodule -r systemd_sleep
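
A rough sketch of the corresponding commands, assuming the paths and unit names used in this article, might look like the following (adjust the kernel arguments to match what you configured):

$ sudo swapoff /swap/swapfile 2>/dev/null
$ sudo rm /swap/swapfile
$ sudo btrfs subvolume delete /swap
$ sudo rm /etc/dracut.conf.d/resume.conf && sudo dracut -f
$ sudo grubby --remove-args="resume resume_offset" --update-kernel=ALL
$ sudo systemctl disable hibernate-preparation.service hibernate-resume.service
$ sudo rm /etc/systemd/system/hibernate-preparation.service /etc/systemd/system/hibernate-resume.service
$ sudo rm /etc/systemd/system/systemd-logind.service.d/override.conf /etc/systemd/system/systemd-hibernate.service.d/override.conf
$ sudo semodule -r systemd_sleep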

Credits and Additional Resources

This article is a community effort based primarily on the work of eloylp. As author of this article I’d like to be transparent that I participated in the discussion that advanced the gist behind it, but many more minds contributed to make this work. Make certain to check out the discussion on GitHub.

There are already some Ansible playbooks and shell scripts to automate the process depicted in this guide. For example, check out the shell scripts by krokwen and pietryszak or the Ansible playbook by jorp.

See the Arch Wiki for the full guide on how to calculate the swapfile offset.


Fedora and Parental Controls

We all have people around us whom we hold dear. Some of them might even rely on you to keep them safe. And since the world is constantly changing, that can be a challenge. Nowhere is this more apparent than with children, and Linux has long lacked simple tools to help parents. But that is changing, and here we’ll talk about the new parental controls that Fedora Linux provides.

Users and permissions

First, it’s important to know that any Linux system has a lot of options for user, group, and permission management. Many of these advanced tools are aimed at professional users, though, and we won’t be talking about those here. In this article we’ll focus on home users.

Additionally, parental controls are not just useful for parents. You can use them when helping family members who are technically illiterate. Or perhaps you want to configure a basic workstation for simple administrative tasks. Either way, parental controls can offer many security and reliability benefits.

Creating users

From the Settings panel, you can navigate to Users and from there you can select Add User… (after unlocking) to add a new user. You can give them a personal name, a username and their own icon. You can even decide if somebody else should also be an administrator.

Adding a user to your machine is as simple as going to settings, users, and clicking Add User…

You can also set a default password, or even allow a computer to automatically log in. You should help others understand digital security and the value of passwords, but for some people it might be better to just auto-login.

Admin rights

When you give somebody administrator rights, that user will have the same powers as you have on the system. They will be able to make any system change they prefer, and they can also add and remove users themselves.

Users who do not have admin rights, will not be able to make fundamental changes to the computer. They can still use all applications that are already on the system, and they can even download applications from the internet to their home folder. Still, they are ultimately blocked from doing anything that could damage the system.

Accessing the user-directories of others. Only administrator users will be able to do this.

Don’t forget that as an administrator, you can always reset a password. You can also enter another user’s home directory in case you have to. As with all ‘sudo’ rights, you should be careful and you should be considerate of others’ privacy.

Application control

Once one or more users are created, you can choose to tweak and control what applications somebody can use. This is done from within Settings > Users by selecting the new user, then selecting Parental Controls, and then Restrict Applications. Other options are available there as well.

changing Parental Controls for a single user.

However, there is a big caveat

Parental controls come with a big caveat: If you want a simple home-user solution, you MUST use Flatpaks.

The problem is as follows. The existing Linux application landscape is quite complex, and it would be almost impossible to introduce a new user-friendly application-control system this late into its life cycle. Thus, the second best solution is to ensure that the next generation of packaging has such functionality from the start.

To use Flatpaks, you can use Fedora’s repository or the Flathub repository. If you want to know all the fine details about those projects, don’t forget to read this recent comparison.

Compromise and limitations

No article would be complete without mentioning the inherent limitations of the parental controls. Besides all the obvious limits of computers not knowing right from wrong, there are also some technical limits to parental controls.

Parental Control’s limits

The security that Parental Controls provides will only work as long as Fedora Linux is running in working order. One could easily bypass all controls by flashing Fedora on a USB stick and starting from a clean, root-powered, installation image. At this point, human supervision is still superior to the machine’s rules.

Adding to that, there are the obvious issues of browsers, store fronts like Steam, and other on-line applications. You can’t block just parts of these applications. Minecraft is a great game for children, but it also allows direct communication with other people. Thus, you’ll have to constantly juggle permissions. Here too, it is better to focus on the human element instead of relying too much on the tools.

Finally, don’t forget about protecting the privacy and well-being of others online. Blocking bad actors with uBlock Origin and/or a DNS-based blocker will also help a lot.

Legacy applications

As mentioned before, parental controls in Fedora only work with Flatpaks. Every application that is already installed on the system can still be started by users who are otherwise restricted.

As a rule of thumb: if you want to share a computer with vulnerable family members, don’t install inappropriate software from the RPM repositories. Instead, consider using a Flatpak (see the example below).
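
For example, a minimal sketch of installing Firefox as a Flatpak from Flathub (assuming you want to use the Flathub remote; org.mozilla.firefox is the application ID there) looks like this:

$ flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo
$ flatpak install flathub org.mozilla.firefox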

Starting the system-wide installation of Firefox from the Terminal. The Flatpak version of Firefox, though, will not start.

Summary

There is much that you can do to help those who are less experienced with computers. By simply giving these users their own account and using Flatpaks, you can make their lives a lot easier. Age restrictions can even offer additional benefits. But it’s not all perfect, and good communication and supervision will still be important.

The Parental Controls will improve over time. They have been given more priority in the past few years and there are additional plans. Time-tracking, for example, is planned. As the migration to Flatpaks continues, you can expect more software to respect age restrictions in the future.

Additional US and UK resources

Sharing Fedora Linux with Parental Controls

So, let’s start a small collaboration here. We’ve all been younger, so how did you escape your parents’ scrutiny? And for those who are taking care of others: how are you helping them? Let’s see what we can learn from each other.


Community container images available for applications development

This article introduces community container images, where users can pull them from, and how to use them. The three groups of containers available to community users are discussed: Fedora, CentOS, and CentOS Stream.

What are the differences between the containers?

Fedora containers are based on the latest stable Fedora content, and CentOS-7 containers are based on components from the CentOS-7 and related SCLo SIG components. And finally, CentOS Stream containers are based on either CentOS Stream 8 or CentOS Stream 9.

Each container, e.g. s2i-php-container or s2i-perl-container, contains the same packages that are available for a given operating system. This means that, from a functionality point of view, these example containers provide the PHP or Perl interpreter, respectively.

Differences are only in the versions available for each distribution. For example:

Fedora PHP containers are available in these versions:

CentOS-7 PHP containers are available in these versions:

CentOS Stream 9 PHP containers are available in these versions:

CentOS Stream 8 is not mentioned here for the PHP use case since users can pull it directly from the Red Hat Container Catalog registry as a UBI image. Containers that are not UBI based have CentOS Stream 8 repositories in the quay.io/sclorg namespace with repository suffix “-c8s”.

Fedora container images moved recently

The Fedora container images have recently moved to the quay.io/fedora registry organization. All of them use Fedora 35, and later Fedora 36, as a base image. The CentOS-7 containers are stored in the quay.io/centos7 registry organization. All of them use CentOS-7 as a base image.

CentOS Stream container images

The CentOS Stream containers are stored in the quay.io/sclorg registry organization.

The base image used for our CentOS Stream 8 containers is CentOS Stream 8 with the tag “stream8”.

The base image used for our CentOS Stream 9 containers is CentOS Stream 9 with the tag “stream9”.

In this registry organization, each container repository name contains a suffix: “c8s” for CentOS Stream 8 or “c9s” for CentOS Stream 9.

See container PHP-74 for CentOS Stream 9.
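
For example, a hypothetical pull command, assuming the repository follows the naming scheme described above, would be:

$ podman pull quay.io/sclorg/php-74-c9s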

Frequency of container image updates and testing

The community-based containers are updated in two ways.

First, when a pull request in the container sources that live under github.com/sclorg organization is merged, the corresponding versions in the GitHub repository are built and pushed into the proper repository.

Second, an update process is also implemented by GitHub Actions set up in each of our GitHub repositories. The base images, like “s2i-core” and “s2i-base”, are built each Tuesday at 1:00 pm. The rest of the containers are built each Wednesday at 1:00 pm.

This means every package update or change lands in the container image within a few days, and no later than a week.

Each container that is not beyond its end of life is tested by our nightly builds. If we discover an error in one of our containers, we attempt to fix it, but there are no guarantees provided.

What container shall I use?

In the end, what containers are we providing? That’s a great question. All containers live in the GitHub organization https://github.com/sclorg

The list of containers with their upstreams is summarized here:

How to use the container image I picked?

All container images are tuned to be fully functional in OpenShift (or OKD, and even Kubernetes itself) without any trouble. Some containers support the source-to-image build strategy while others are expected to be used as daemons (like databases, for instance). For specific steps, please navigate to the GitHub page for the respective container image by following one of the links above.
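
As a rough sketch of the source-to-image workflow on OpenShift, a build can be started directly from a builder image and a Git repository. The example below assumes the quay.io/fedora/php-80 image used later in this article and the sclorg CakePHP example application; substitute your own source repository:

$ oc new-app quay.io/fedora/php-80~https://github.com/sclorg/cakephp-ex.git --name=cakephp-demo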

Finally, some real examples

Let’s show how to use PHP container images across all platforms that we support.

First of all, clone the container GitHub repository using this command:

$ git clone https://github.com/sclorg/s2i-php-container

Switch to the following directory created by the cloning step:

$ cd s2i-php-container/examples/from-dockerfile

Fedora example

Start by pulling the Fedora PHP-80 image with this command:

$ podman pull quay.io/fedora/php-80

Modify “Dockerfile” so it refers to the Fedora php-80 image. “Dockerfile” then looks like:

FROM quay.io/fedora/php-80
USER 0
# Add application sources
ADD app-src .
RUN chown -R 1001:0 .
USER 1001
# Install the dependencies
RUN TEMPFILE=$(mktemp) && \
    curl -o "$TEMPFILE" "https://getcomposer.org/installer" && \
    php <"$TEMPFILE" && \
    ./composer.phar install --no-interaction --no-ansi --optimize-autoloader
# Run script uses standard ways to configure the PHP application
# and execs httpd -D FOREGROUND at the end
# See more in <version>/s2i/bin/run in this repository.
# Shortly what the run script does: The httpd daemon and php need to be
# configured, so this script prepares the configuration based on the container
# parameters (e.g. available memory) and puts the configuration files into
# the appropriate places.
# This can obviously be done differently, and in that case, the final CMD
# should be set to "CMD httpd -D FOREGROUND" instead.
CMD /usr/libexec/s2i/run

Check if the application works properly

Build it by using this command:

$ podman build -f Dockerfile -t cakephp-app-80

Now run the application using this command:

$ podman run -ti --rm -p 8080:8080 cakephp-app-80

To check the PHP version use these commands:

$ podman run -it --rm cakephp-app-80 bash
$ php --version

To check if everything works properly use this command:

$ curl -s -w '%{http_code}' localhost:8080

This should return HTTP code 200. If you would like to see a web page enter “localhost:8080” in your browser.

CentOS 7 example

Start by pulling the CentOS-7 PHP-73 image using this command:

$ podman pull quay.io/centos7/php-73-centos7

Modify “Dockerfile” so it refers to the CentOS php-73 image.

“Dockerfile” then looks like this:

FROM quay.io/centos7/php-73-centos7
USER 0
# Add application sources
ADD app-src .
RUN chown -R 1001:0 .
USER 1001
# Install the dependencies
RUN TEMPFILE=$(mktemp) && \
    curl -o "$TEMPFILE" "https://getcomposer.org/installer" && \
    php <"$TEMPFILE" && \
    ./composer.phar install --no-interaction --no-ansi --optimize-autoloader
# Run script uses standard ways to configure the PHP application
# and execs httpd -D FOREGROUND at the end
# See more in <version>/s2i/bin/run in this repository.
# Shortly what the run script does: The httpd daemon and php need to be
# configured, so this script prepares the configuration based on the container
# parameters (e.g. available memory) and puts the configuration files into
# the appropriate places.
# This can obviously be done differently, and in that case, the final CMD
# should be set to "CMD httpd -D FOREGROUND" instead.
CMD /usr/libexec/s2i/run

Check if the application works properly

Build it using this command:

$ podman build -f Dockerfile -t cakephp-app-73

Now run the application using this command:

$ podman run -ti --rm -p 8080:8080 cakephp-app-73

To check the PHP version use these commands:

$ podman run -it --rm cakephp-app-73 bash
$ php --version

To check if everything works properly use this command:

curl -s -w '%{http_code}' localhost:8080

which should return HTTP code 200. If you would like to see a web page enter localhost:8080 in your browser.

RHEL 9 (UBI 9) real example

Start by pulling the RHEL9 UBI-based PHP-80 image using the command:

$ podman pull registry.access.redhat.com/ubi9/php-80

Modify “Dockerfile” so it refers to the RHEL9 ubi9 php-80 image.

“Dockerfile” then looks like:

FROM registry.access.redhat.com/ubi9/php-80
USER 0
# Add application sources
ADD app-src .
RUN chown -R 1001:0 .
USER 1001
# Install the dependencies
RUN TEMPFILE=$(mktemp) && \
    curl -o "$TEMPFILE" "https://getcomposer.org/installer" && \
    php <"$TEMPFILE" && \
    ./composer.phar install --no-interaction --no-ansi --optimize-autoloader
# Run script uses standard ways to configure the PHP application
# and execs httpd -D FOREGROUND at the end
# See more in <version>/s2i/bin/run in this repository.
# Shortly what the run script does: The httpd daemon and php need to be
# configured, so this script prepares the configuration based on the container
# parameters (e.g. available memory) and puts the configuration files into
# the appropriate places.
# This can obviously be done differently, and in that case, the final CMD
# should be set to "CMD httpd -D FOREGROUND" instead.
CMD /usr/libexec/s2i/run

Check if the application works properly

Build it using this command:

$ podman build -f Dockerfile -t cakephp-app-80-ubi9

Now run the application using this command:

$ podman run -ti --rm -p 8080:8080 cakephp-app-80-ubi9

To check the PHP version use these commands:

$ podman run -it --rm cakephp-app-80-ubi9 bash
$ php --version

To check if everything works properly use this command:

curl -s -w '%{http_code}' localhost:8080

which should return HTTP code 200. If you would like to see a web page enter localhost:8080 in your browser.

What to do in the case of a bug or enhancement

Just file a bug (known as an “issue” on GitHub), or even a pull request with a fix, against one of the GitHub repositories mentioned in the previous section.


Your Personal Voice Assistant on Fedora Linux

It’s 7 PM. I sit down at my Fedora Linux PC and start an MMORPG. We want to brawl with Red Alliance over some systems they attacked; the usual stuff on a Friday evening in EVE. While we are waiting on a Titan to bridge us to the fighting area, a tune comes to my mind. “Carola, I wanna hear hurricanes.” My HD LED starts to blink for a second and I hear Carola saying “I found one match” and the music starts.

What sounded like Sci-Fi twenty years ago is now a reality for many PC users. For Linux users, this is now possible by installing “Carola” as your Personal Voice Assistant (PVA)[1].

Carola

The first thing people often ask is, “Why did you name it Carola?” 🙂 This is a common misconception. Carola is not the project name. It’s the keyword the PVA reacts to by default. It is similar to “Alexa” or “OK, Google” for those who are familiar with those products. You can configure this keyword. You can also configure other things such as your location, which applications to use by default when opening media files, what CardDAV server to use when looking up contact information, etc. These settings can be personalized for each user. Some of them can even be changed by voice command (e.g. the name, the default TTS engine, and the default apps).

In 2021 I read an article about the Speech-To-Text (STT) system Vosk[2] and started to play a bit with it. The installation was easy. But there was no use-case except for writing what one said down to the screen. A few hours and a hundred lines of Java code later, I could give my PC simple commands. After a few more days of work, it was capable of executing more complex commands. Today, you can tell him/her/it to start apps, redirect audio streams, control audio and video playback, call someone, handle incoming calls, and more. If you have a smart-home (which I haven’t 😉) you can even switch on the light in the kitchen. But back to why I chose “Carola” — it was the most recognizable by the STT system I was using at the time. 😉

Note: This PVA has no English translations yet because it was developed in German. I will use rough translations here which should work once someone helps with translating the config. There are videos about Carola which show these kinds of interactions in reality[6]. For now, because a few dependencies are unavailable from Fedora Linux’s default repositories, you will need to install the system manually. But don’t be afraid, it is all described in the README and it is pretty simple.

The time of eSpeak has run out

A voice assistant doesn’t just react to your speech. It can also reply back to you. You might expect it to sound like a creepy robot voice from the 1960s when it was invented. But trust me, Fedora Linux can do better. 😉 Naturally, eSpeak was the first choice because it was installed on the system by default. But today we can choose (even by voice command 😉) what speech engine we want to use.

Text-To-Speech (TTS) systems

Text-To-Speech systems translate the written text to a listenable waveform. Some systems output the waveform directly. Others produce MP3 or WAV files which you can then play with, for example, sox. Over the last year, we’ve experimented with several TTS systems. One outputs the text via the Samsung TTS engine on an Android device. It sounds great, but it requires your cellphone and a special server application. 😉

One of the first speech engines I tried was MBROLA[3]. It produces more understandable audio than eSpeak. But it’s still far from being “good”, as it relies on eSpeak. Think of it as an eSpeak pre-processor. Unfortunately, it is not currently available from Fedora Linux’s default repositories.

Next was Pico2Wave, which still uses the same technique as eSpeak, but on a higher level. Still, this does not meet our vision of speech quality. Also, it has to be extracted from an old Ubuntu package because it isn’t available from the Fedora Linux repositories.

With MaryTTS[4] we reach the first speech processor that produces human speech patterns combined with accent and dialect. MaryTTS was developed in Germany at the Saarland University & the German Research Center for AI. It’s not meant to run on your local PC (but does quite well). Rather, it is meant to run on a network to offer speech output to any kind of client that asks for it. Because modern PCs have way more CPU power than it requires, you can also run it solo on your PC. However, running it locally requires compiling the source code which can be a little tricky to do.

MaryTTS comes with different languages and one remarkable voice for Germans — an old Bavarian woman. This voice model was trained from an old speech archive in Munich and it’s so good that you’d think your PC is at least seventy years old. 🙂 This also makes it seem like you are giving commands to an old woman who should be in retirement. This makes giving commands to your PC problematic; trust me. 😉

The top of the line of available TTS systems is GTTS[5]. It is available from the default Fedora Linux repositories. This TTS produces the best sound quality for a wide variety of languages and, for standard voices, “the first 4 million characters are free each month”[7]. The downside of this TTS is that the text is sent to Google’s servers[8] for processing and translation into an audible speech format. This might be a privacy concern depending on the nature of the data that you are sending for translation. Be mindful that the PVA will repeat parts of what you said if it did not understand your command or when it tells you things in response to your question. For this reason, GTTS is not the default TTS. However, it is pre-configured in the PVA repo[1] and ready to use.

Let’s talk about privacy

You just read that your TTS system could rat you out to Google. This leads to other questions. Namely, “Does the STT system do this too?” It doesn’t. Vosk runs entirely on your PC. There are no cloud services involved. But of course, your assistant needs to know things in order to assist you.

One of the simpler functions is to produce a weather report. For this to work, the assistant needs to relay your question to the weather provider’s API server. The provider can reasonably assume that you normally aren’t interested in the weather for places you do not live. So the server can derive where you live based on what city you most frequently inquire about, and it can corroborate its deduction based on your device’s IP address.

Consequently, if you configure a service in your PVA’s config, you should ask yourself if this service will cause a privacy problem for you. Many PVA functions won’t because they work locally or they use services under your control (the CalDAV and CardDAV address book services, for example). The same is true for the upcoming IMAP feature. You will use the same email provider that your email app is already configured to use. So there are no extra concerns for the IMAP feature. But what happens if you decide to use GTTS because these simple text fragments are “no big deal” and the PVA reads incoming email out loud? Here the choice of TTS engine becomes more and more important.

One of PVA’s functions is playing music on command. By itself, this function may not seem concerning. However, an upcoming feature might change this. It will be able to tell what you want to listen to by the use of abstract requests like “Jazz”. At the moment, the PVA searches for this term in filenames and metadata it found in your MP3 archive. In the future, it will have a local track list of all the audiophile wishes you had in the past and it will produce a TOP song list for the search term.

It will write every file you added to the playlist to a database and it will count how many times a match to your abstract request is found and play the song(s) with the best score. On the plus side, you get your favorite music. But what happens if an external plugin or hacked app takes this list and exfiltrates the data to someone who pays for this kind of information?

Now this feature becomes a privacy concern. There is no great risk at the moment since this feature needs to be enabled and, for now, PVA is not widespread enough to be a target. But this may change and you should be aware of these kinds of privacy concerns before they become a problem.

As a best effort to address such privacy concerns, PVA will disable features that require external communication by default. So if you use the base installation, there should be very few privacy concerns. There is one known exception — all texts sent to the PVA app are recorded in ~/.var/log/pva.log. This should make it easier to find flaws in the STT engine and track down other problems.

Always keep in mind that privacy can also be undermined by third-party add-ons.

What can you expect from your assistant?

PVA auto-configures itself on first start-up with a basic configuration. For example, it adds the default paths from the freedesktop.org specs to all your pictures, music, documents and videos. It creates a local cache and config directory where you can place your version of the config files, add new ones, or overwrite existing ones. Usually user customizations are added to the config. But you can overwrite existing values. Before we dig deeper, let’s present some more of PVA’s features.

The Weather Report app is a classic. No assistant would be complete without it. 🙂 All it needs to know is the name of your hometown. The weather provider used is wttr.in. You can point your browser at this URL to find your city’s unique identifier. You have no idea how many “Neustadt” exist in Germany alone. 😉 Because it is a webservice, you don’t need to install anything. It works out-of-the-box using cURL.

Asking your PVA what time it is, is also a classic and it works out of the box. You can also ask your PC how it feels: “Carola, what is the actual load?” Or, more abstractly, “Carola, how do you feel?” 😉

For playing audio, PVA uses QMMP by default. It comes with an easy to use command line interface and a rich feature set. You will need to install this app before you can use it. Luckily, Fedora Linux ships this. QMMP gives you remote control over loudness, track number, track-position, playback, and it gives us information about what is currently playing. So you can ask your PVA what is playing when it is playing random tracks.

Controlling QMMP by voice is one of the features I cannot do without anymore. I often end up in situations where I have full-screen windows with complex code on the screen. If you lose your focus, you lose your workflow. Developers call this “being in the flow” or “in the tunnel”. Here, voice control comes in very handy. You do not need to divert your focus from your work. You can just say what you want and it “magically” happens! 😉

The phone-call feature works in a similar way. You can now ask your SIP software to make a call to a contact in your CardDAV address book without diverting your focus from your work. As this is a complex process, it needs a little extra care. There is a special internal function for handling the parsing of the command sentence. “Carola, call Andreas” is enough to initiate a call. The first match in the address book will be called. If you know more than one person with the same name, I hope they have nicknames. 😉 Also, since one contact might have multiple phone numbers (e.g. for home and for work), you can specify which number should be called: “Carola, call Andreas at work.”

Even if it doesn’t look like a complex problem, consider which of the words the PVA receives are part of the name and which are just binding words, like “at” in the above example. It is not that easy to determine. So we need to follow a precise syntax. But it needs to be natural or humans will not use it. Another thing that is important when interacting with humans is that they tend to alternate their command structures. Parsing a human sentence is more complex for a computer than you might think. It’s natural for you, but the opposite of logic for a computer. Keep this in mind as you read further. It is a recurring challenge.

Another example of when voice control comes in handy is controlling video playback while you exercise. You are likely not within reach of a mouse or a remote (KDE Connect), nor do you want to stop your workout just because someone asks you something. With voice control, you can ask the PC to pause playback and then ask it to resume after you have answered the question or otherwise addressed the problem.

For audio and video players that offer an MPRIS2 interface on DBUS, you can control them on the spot without adding the CLI (command line interface) commands to your config. Based on MPRIS2 you can even control Netflix or YouTube in your Firefox browser. OK, you can’t currently choose the track to watch. But you can change the volume and (un)pause the playback. And all of this can be done with the same set of commands in your config.
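
If you want to see the MPRIS2 interface PVA talks to, you can poke at it yourself on DBUS. This is only an illustration, not something PVA needs; it assumes a player such as VLC is running and registered on the session bus as org.mpris.MediaPlayer2.vlc:

$ dbus-send --print-reply --dest=org.mpris.MediaPlayer2.vlc \
  /org/mpris/MediaPlayer2 org.mpris.MediaPlayer2.Player.PlayPause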

There are many situations where voice control is superior or even necessary. What I haven’t told you yet is that the first device I deployed PVA to was a PinePhone. Smartphones can be used for many things. You might use it as an MP3 player while you drive or as a navigational tool. It doesn’t work with GNOME Maps (yet). But controlling a PinePhone via voice while driving will be more and more important in the Linux community. So hopefully further advancements will be made in this area. Fun fact/tip: if you use it as an MP3 player, don’t make it too loud. Or better yet, use an external speaker system[6].

If you use Thunderbird to manage your email, it is capable of composing and sending an email using only CLI arguments. Consequently, your PVA can compose and send email using Thunderbird. All you need to do is to tell your PVA the recipient, the subject, and then dictate the content of the body. It will also do some minor interpretation for you. While I still write my emails by hand, I can imagine situations where it could be useful. I did not have an opportunity to work with a disabled person to test this method of email composition. But it might be interesting.

The PVA can also be handy for short reminders. You can tell your PVA when and what it should remind you about. For example, “Carola, remind me in 10 minutes to check the kitchen” or, “Carola, remind me at 10 to call Andreas”. The PVA will respond if it understood and acknowledge your reminder.

The best comes last. With Twinkle, your PVA can take a call and interact with the caller just as it does with you. One thing I have not explained yet is that your PVA requires authorization codes for vital or potentially dangerous operations. I find it reminiscent of a scene from “Star Trek”. 😉

“Carola, reboot PC.”
“Authorization code alpha is needed to perform this operation.”
“Carola, authorization code alpha three four seven.”

And if the code is correct, the requested action will proceed. Requiring these authorization codes helps to alleviate the fear that someone might hack your PC through the PVA and cause trouble. Though about the worst that could happen at the moment is that you could unwillingly send spam to someone or your PC might be left playing music all day long. 😉 However, if you have home automation running and configured, it is probably better not to have Twinkle answer the phone.

Let’s take a look behind the curtain

This list is long and detailed. So I will focus on some basics.

PVA comes with a rudimentary form of hard-coded human responses. You can modify and expand them as you like. But there is no intelligence in them. You can, however, build reaction chains that are so long that a normal human cannot detect the repeating phrases.

Here are two examples. Let’s ask the PVA for its name.

reaction:"what is your name","","my name is %KEYWORD"

Here is a more complex example that uses the word “not”.

reaction:"this works","not","of course it does"
reaction:"this works not","","uh oh, please contact my developer"

As the absence of the word “not” is crucial to the meaning of a sentence, reactions contain “positive” and “negative” terms to work. The rule is as follows.

Positive words MUST be inside the sentence, negative words MUST NOT be inside the sentence. In developer terms it can be written as “if ( p && !n ) do …”.

If your reaction texts give a human a new clue what to say next and you can anticipate this, then it is possible to build complex reaction chains that will simulate a conversation. I have even heard from people using this feature that they like talking to their PC (and these are not your stereotypical nerds 😉). As you can use the same trigger for multiple reactions, alternative chains are possible.

Starting applications

Part of the basic functionality is to start the apps that you’ve named. In the beginning, there was a fixed list of apps that was hard-coded. But now, you can extend this via the config. Let’s take a quick look at it.

app:"office","openoffice4"
app:"txt","gedit"
app:"pdf","evince"
app:"gfx","gnome-open"
app:"mail","thunderbird"

The corresponding voice commands would be “Carola, start mail”, “Carola, open mail” or, in free-form, you could say, “Carola, start Krita” (Krita is an OpenSource paint app that is available on Fedora Linux). You can configure several alternative versions of the command sentences. These can be regular expressions (regex) or multiple entries in the config file. These apps are also used in complex commands like searching for files and opening the resultant set with an app. For example, “Carola, search pictures of Iceland and open them with Krita.” The previous command would cause the PVA to search in your configured picture paths for filenames matching “iceland” and then open them with Krita. This works for all apps as long as their launchers accept filenames as arguments on their CLI. Even if this isn’t the case for your favorite app, you might still be able to write a small “wrapper” script in Bash and then use that script as the app target for the PVA.
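
Such a wrapper can be tiny. Here is a hypothetical sketch; the script name and mpv as the player are placeholders, and the only requirement is that the wrapper accepts filenames as command line arguments:

#!/usr/bin/env bash
# hypothetical wrapper, saved e.g. as ~/bin/pva-open-video and marked executable
# it simply hands the filenames PVA passes on the command line to a player
exec mpv "$@"

It can then be registered like any other app, for example app:"video","pva-open-video".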

Via voice command, you can switch apps out for a configured alternative on the fly.

alternatives:"firefox","web","firefox"
alternatives:"google chrome","web","google-chrome"
alternatives:"chromium free","web","chromium-freeworld"
alternatives:"chromium privacy","web","chromium-privacy-browser"
alternatives:"openoffice","office","openoffice4"
alternatives:"libreoffice","office","libreoffice"

By using the "use {alternative}" syntax you select what you want to use next. For example, "Carola use Firefox" or "Carola use Chromium free". This works for any app in the app list. But how are these commands defined?

command:"start","STARTAPP","app",""
command:"open","OPENAPP","app",""

Quite simple: “Carola, start Firefox app” will start Firefox. “Carola, start Firefox” would also work because the term “app” is filtered out.

Next is the positive and negative list of words again. Here is the reaction syntax.

command:"how is the weather","CURRENTWEATHERNOW","","tomorrow"
command:"how will the weather be","CURRENTWEATHERNEXT","","tomorrow"
command:"how will the weather be tomorrow","CURRENTWEATHERTOMORROW","",""

The commands in the second column are some of PVA’s internal function names. They are coded internally because processing the result can be tricky. You can, however, outsource those commands to an external Bash script. Below is an example that shows how to call a custom Bash script.

replacements:"h d m i","hdmi"
replacements:"u s b","usb"

command:"switch .* to kopfhörer","EXEC:pulse.outx:x%0x:xhdmi"
command:"switch .* to lautsprecher","EXEC:pulse.outx:x%0x:xdefault"
command:"switch .* to .* now","EXEC:pulse.outx:x%0x:x%1"

The last of the above commands will switch the output device of a named (first “.*”) running app in pulseaudio to the named (second “.*”) device. As you can see, it works using regular-expression-like syntax. The syntax is not, however, fully-featured. All three of the above commands call the script pulse.out with two parameters. That is, “pulse.out <procname> <devicename>”. Now there is no need to start PAVUControl anymore. Instead, you can just grab your headset and tell your PC to use it. 🙂
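
The pulse.out script itself is not part of this article, but a possible sketch of such a helper, assuming PulseAudio (or pipewire-pulse) with the pactl client and that pactl lists an application.name property for every sink input, could look like this:

#!/usr/bin/env bash
# sketch of a pulse.out-style helper: pulse.out <application name> <device name|default>
app="$1"
dev="$2"

# Resolve the requested device to a sink name; "default" maps to the default sink.
if [ "$dev" = "default" ]; then
    sink="@DEFAULT_SINK@"
else
    sink=$(pactl list short sinks | awk -v d="$dev" 'tolower($2) ~ tolower(d) { print $2; exit }')
fi

# Find the sink input whose application.name matches the requested application.
index=$(pactl list sink-inputs | awk -v a="$app" '
    /^Sink Input #/ { idx = substr($3, 2) }
    /application\.name/ && tolower($0) ~ tolower(a) { print idx; exit }')

# Move that audio stream to the chosen output device.
[ -n "$index" ] && [ -n "$sink" ] && exec pactl move-sink-input "$index" "$sink"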

There is no limit to the number of .* wildcards in the command or execution part. Feel free to code a response for something like, “Carola, switch on the light in the kitchen.”

command:"switch .* the light in .*","EXEC:smarthomex:x%0x:x%1"

If you are wondering about those “x:x” character sequences in the execution part, those are being used to escape whitespaces because they tend to appear in filenames.

Oops, it sent your mail to Hermine instead of Hermann

All these voice commands only work correctly when the STT system works flawlessly. Of course, it often does not. Do not have false hopes. It’s not perfect. However, PVA can auto-correct some common errors by configuring replacements.

PVA can correct simple mistakes.

replacements:"hi länder","highlander"
replacements:"cash","cache"
replacements:"karola","carola"

Or, it can perform context replacements. These are only done if a command has been recognized.

contextreplacements:"STARTAPP","kreta","krita"
contextreplacements:"STARTAPP","konfig","config"
contextreplacements:"ADDTITLE","föge","füge"

Shown above is the entry “Krita” which sounds like the German word “Kreta”, which is a Greek island. STT Vosk likes “Kreta”. So we need to replace this. But not every time. If we perform a web search for “Kreta”, it would be counterproductive to replace it with “Krita”. You can add simple replacements to the config. I suggest that you use the user config for these because other users may encounter different problems.

If you don’t know why a command is not being recognized correctly, you can always check the log at ~/.var/log/pva.log.

Contribution

If you want to contribute to this project, feel free to do so. It is available on GitHub[1]. Since Vosk now has support for eighteen languages, translating the texts to other languages would be a great place for a contributor to get started.

To get PVA installed on Fedora Linux, it is required to rebuild Vosk and its libraries with Fedora sources. We tried to do so earlier this year, but we failed when we ran into some exotic mathlib dependencies that we couldn’t get to compile on Fedora Linux. We are hoping that a good, skilled C developer could solve this problem and get those resulting packages reviewed by Fedora packagers. 🙂

I hope I have awakened some interest in our dear readers. The world of PVA needs your awesome configs! 🙂

Best regards,
Marius Schwarz

References

[1] https://github.com/Cyborgscode/Personal-Voice-Assistent

[2] https://alphacephei.com/vosk

[3] https://github.com/numediart

[4] https://marytts.github.io

[5] Available from the Fedora Linux repositories:

  • dnf install gtts

[6] http://static.bloggt-in-braunschweig.de/Pinephone-PVA-TEST1.mp4

[7] https://cloud.google.com/text-to-speech

[8] https://cloud.google.com/text-to-speech/docs/reference/rest


DIY Embroidery with Inkscape and Ink/Stitch

Introduction

Embroidered shirts are great custom gifts and can also be a great way to show your love for open source. This tutorial will demonstrate how to design your own custom embroidered polo shirt using Inkscape and Ink/Stitch. Polo shirts are often used for embroidery because they do not tear as easily as t-shirts when pierced by embroidery needles, though with care t-shirts can also be embroidered. This tutorial is a follow-on article to Make More with Inkscape and Ink/Stitch and provides complete steps to create your design.

Logo on Front of Shirt

Pictures with only a few colors work well for embroidery. Let us use a public domain black and white SVG image of Tux created by Ryan Lerch and Garret LeSage.

Black and white image of Tux, the Linux penguin mascot.
Black and white image of Tux

Download this public domain image, tux-bw.svg, to your computer, and import it into your document as an editable SVG image using File>Import...

Image of Tux imported into an Inkscape document
Image of Tux with text to be embroidered

Use a Transparent Background

It is helpful to have a checkerboard background to distinguish background and foreground colors. Click File>Document Properties… and then check the box to enable a checkerboard background.

Document properties dialog box
Dialog box to enable checkerboard document background

Then close the document properties dialog box. You can now distinguish between colors used on Tux and the background color.

Tux the Linux mascot in an Inkscape document with a checkered background
Tux can be distinguished from the document background

Use a Single Color For Tux

Type s to use the Select and Transform objects tool, and click on the image of Tux to select it. Then click on Object>Fill and Stroke, in the menu. Type n to use the Edit paths by Nodes tool and click on a white portion of Tux. Within the Fill and Stroke pane change the fill to No paint to make this portion of Tux transparent.

The central portion of Tux that was formerly white is now transparent
Tux in one color

This leaves the black area to be embroidered.

Enable Embroidering of Tux

Now convert the image for embroidery. Type s to use the Select and Transform objects tool and click on the image of Tux to select it again. Choose Extensions>Ink/Stitch>Fill Tools>Break Apart Fill Objects … In the resulting pop up, choose Complex, click Apply, and wait for the operation to complete.

Dialog box showing simple and complex options for break objects apart function in Ink/Stitch
Dialog to Break Apart Fill Objects

For further explanation of this operation, see the Ink/Stitch documentation.

Resize Document

Now resize the area to be embroidered. A good size is about 2.75 inches by 2.75 inches. Press s to use the Select and Transform objects tool, and select Tux, hold down the shift key, and also select any text area. Then choose Object>Transform …, click on Scale in the dialogue box, change the measurements to inches, check the Scale proportionally box and choose a width of 2.75 inches, and click Apply.

Tux has been resized to have a width of 2.75 inches, which is also shown in a dialog box
Resized drawing

Before saving the design, reduce the document area to just fit the image. Press s to use the Select and Transform objects tool, then select Tux.

Inkscape window with black and white Tux on a checkered background with the transform dialog box
Objects selected to determine resize area

Choose File>Document Properties… then choose Resize to content: or press Ctrl+Shift+R

Document properties dialog box with options to have a checkerboard background, resize to content, change the page orientation and others
Dialog to resize page

The document is resized.

Tux in a resized document that fits closely
Resized document

Save Your Design

You now need to convert your file to an embroidery file. A very portable format is DST (Tajima Embroidery Format), which unfortunately does not carry color information, so you will need to indicate the colors for the embroidery separately. First save your design as an Inkscape SVG file so that you retain a format that you can easily edit again. Choose File>Save As, then select the Inkscape SVG format, enter a name for your file, for example AnotherAwesomeFedoraLinuxUserFront.svg, and save your design. Then choose File>Save As and select the DST file format and save your design. Generating this file requires calculation of stitch locations, so this may take a few seconds. You can preview the DST file in Inkscape, but another very useful tool is vpype-embroidery.

Install vpype-embroidery on the command line using a Python virtual environment via the following commands:

virtualenv test-vpype
source test-vpype/bin/activate
pip install matplotlib
pip install vpype-embroidery
pip install vpype[all]

Preview your DST file (in this case named AnotherAwesomeFedoraLinuxUserFront.dst which should be replaced by the filename you choose if it is different), using this command:

vpype eread AnotherAwesomeFedoraLinuxUserFront.dst show
Image of Tux created from the DST file and shown using Vpype-embroider
Preview of design created by vpype-embroidery

Check the dimensions of your design. If you need to resize it, resize the SVG design file before exporting it as a DST file. Resizing the DST file is not recommended since it contains stitch placement information; regenerate this placement information from the resized SVG file to obtain a high-quality embroidered result.

Text on the Back of the Shirt

Now create a message to put on the back of your polo shirt. Create a new Inkscape document using File>New. Then choose Extensions>Ink/Stitch>Lettering.

Choose a font, for example Geneva Simple Sans created by Daniel K. Schneider in Geneva. If you want to resize your text, do so at this point using the scale section of the dialog box since resizing it once it is in Inkscape will distort the resulting embroidered pattern. Add your text,

Another Awesome Fedora Linux User
Image showing the text, Another Awesome Fedora Linux User, in a dialog box along with the chosen embroidery font, Geneva Simple Sans
Lettering creation dialog box

A preview will appear; click on Quit.

Preview of stitches for the text, Another Awesome Fedora Linux User, created by Ink/Stitch
Preview image of text to be embroidered

Then click on Apply and Quit in the lettering creation dialog box. Your text should appear in your Inkscape document.

Text to be embroidered in the top left corner of an otherwise empty Inkscape document
Resulting text in Inkscape document

Create a checkered background and resize the document to content by opening up the document properties dialog box File>Document Properties…

Document properties dialog box to resize a document
Document properties dialog box

Your document should now be a little larger than your text.

Document is a little larger than the text
Text in resized document

Clean Up Stitches

Many commercial embroidery machines support jump instructions which can save human time in finishing the embroidered garment. Examine the text preview image. A single continuous thread sews all the letters. Stitches joining the letters are typically removed. These stitches can either be cut by hand after the embroidery is done, or they can be cut by the embroidery machine if it supports jump instructions. Ink/Stitch can add these jump instructions.

Add jump instructions by selecting View>Zoom>Zoom Page to enlarge the view of the drawing. Press s to choose the Select and transform objects tool. Choose Extensions>Ink/Stitch>Commands>Attach Commands to Selected Objects. A dialog box should appear, check just the Trim thread after sewing this object option.

Ink/Stitch `Attach commands to selected object` dialog box showing option to `Trim thread after sewing this object`
Attach commands dialog

Then click in the drawing area and select the first letter of the text

The first letter A in the text is selected
Select first letter of the text

Then click Apply, and some cut symbols should appear above the letter.

The first letter A has scissor symbols above it
Scissor symbols above first letter

Repeat this process for all letters.

All the letters in the text have symbols above them
Separately embroidered letters

Now save your design, as before, in both SVG and DST formats. Check the likely quality of the embroidered text by previewing your DST file (in this case named AnotherAwesomeFedoraLinuxUserBack.dst; replace this with the filename you chose), using

vpype eread AnotherAwesomeFedoraLinuxUserBack.dst show
Illustration showing a green stitch pattern that forms the txt, Another Awesome Fedora Linux User
Preview of text to be embroidered created by vpype-embroidery

Check the dimensions of your design. If you need to resize it, resize the SVG design file before exporting it as a DST file.

Create a Mockup

To show the approximate placement of your design on the polo shirt create a mockup. You can then send this to an embroidery company with your DST file. The Fedora Design Team has a wiki page with examples of mockups. An example mockup made using Kolourpaint is below.

Image showing drawings of front and back of a polo shirt with Tux placed on the front left and text on the back top and middle
Mockup image of polo shirt with design

You can also use an appropriately licensed drawing of a polo shirt, for example from Wikimedia Commons.

Example Shirt

Pictures of a finished embroidered polo shirt are below.

Embroidered black polo shirt front with white thread used to embroider Tux
Front of embroidered shirt
Back of black polo shirt, with white embroidered text
Back of embroidered shirt
Closeup of the front left of a person wearing a black polo shirt with Tux embroidered on it with white thread.
Closeup of embroidered Tux
Back of person wearing a polo shirt with the text, Another Awesome Fedora Linux User, embroidered on the shirt
Closeup of embroidered text

Further Information

A three color image of Tux is also available, but single color designs are the easiest way to achieve good embroidered results. This shaded, multiple color image would need to be adapted before it could be used for embroidery. Additional tutorial information is available on the Ink/Stitch website.

Many companies can embroider a garment for you if you supply a DST file. Search the internet for machine embroidery services close to you, or look for a hackerspace with an embroidery machine you can use.

This article has benefited from many helpful suggestions from Michael Njuguna of Marvel Ark and Brian Lee of Embroidery Your Way.


Accessibility in Fedora Workstation

The first concerted effort to support accessibility under Linux was undertaken by Sun Microsystems when they decided to use GNOME for Solaris. Sun put together a team focused on building the pieces to make GNOME 2 fully accessible and worked with hardware makers to make sure things like Braille devices worked well. I even heard claims that GNOME and Linux had the best accessibility of any operating system for a while due to this effort. As Sun started struggling and was acquired by Oracle, this accessibility effort eventually trailed off, with the community trying to pick up the slack afterwards. Engineers from Igalia in particular were quite active for a while, trying to keep the accessibility support working well.

But over the years we definitely lost a bit of focus on this, and we know that various parts of GNOME 3, for instance, aren't great in terms of accessibility. At Red Hat we have put a lot of focus over the last few years on being mindful about diversity and inclusion when hiring, trying to ensure that we don't accidentally pre-select against underrepresented groups based on, for instance, gender or ethnicity. But one area we realized we hadn't given much focus recently was the technologies that allow people with various disabilities to make use of our software. Thus I am very happy to announce that Red Hat has just hired Lukas Tyrychtr, who is a blind software engineer, to lead our effort in making sure Red Hat Enterprise Linux and Fedora Workstation have excellent accessibility support!

Anyone who has ever worked for a large company knows that getting funding for new initiatives is often hard and can take a lot of time, but I want to highlight how extremely positively surprised I was at how quick and easy it was to get support for hiring Lukas to work on accessibility. When Jiri Eischmann and I sent the request to my manager, Stef Walter, he agreed to champion it the same day, and when we then sent it up to Mike McGrath, who is the Vice President of Linux Engineering, he immediately responded that he would bring it to Tim Cramer, our Senior Vice President of Software Engineering. Within a few days we had the go-ahead to hire Lukas. The fact that everyone instantly agreed that accessibility is important and something we as a company should do made me incredibly proud to be a Red Hatter.

What we hope to get from this is not only a better experience for our users, but also to allow even more talented engineers like Lukas to work on Linux and open source software at Red Hat. I thought it would be a good idea here to do a quick interview with Lukas Tyrychtr about the state of accessibility under Linux and what his focus will be.

Christian: Hi Lukas, first of all welcome as a full time engineer to the team! Can you tell us a little about yourself?

Lukas: Hi, Christian. For sure. I am a completely blind person who can see some light, but that’s basically it. I started to be interested in computers around 2009 or so, around my 15th or 16th birthday. First, because of circumstances, I started tinkering with Windows, but Linux came shortly after, mainly because of some pretty good friends. Then, after four years the university came and the Linux knowledge paid off, because going through all the theoretical and practical Linux courses there was pretty straightforward (yes, there was no GUI involved, so it was pretty okay, including some custom kernel configuration tinkering). During that time, I was contacted by Red Hat associates whether I’d be willing to help with some accessibility related presentation at our faculty, and that’s how the collaboration began. And, yes, the hire is its current end, but that’s actually, I hope, only the beginning of a long and productive journey.

Christian: So as a blind person you have first hand experience with the state of accessibility support under Linux. What can you tell us about what works and what doesn’t work?

Lukas: Generally, things are in pretty good shape. Braille support on text-only consoles basically just always works (except for some SELinux related issues which cropped up). Having speech there is somewhat more challenging, the needed kernel module (Speakup for the curious among the readers) is not included by all distributions, unfortunately it is not included by Fedora, for example, but Arch Linux has it. When we look at the desktop state of affairs, there is basically only a single screen reader (an application which reads the screen content), called Orca, which might not be the best position in terms of competition, but on the other hand, stealing Orca developers would not be good either. Generally, the desktop is usable, at least with GTK, Qt and major web browsers and all recent Electron based applications. Yes, accessibility support receives much less testing than I would like, so for example, a segmentation fault with a running screen reader can still unfortunately slip through a GTK release. But, generally, the foundation works well enough. Having more and naturally sounding voices for speech synthesis might help attract more blind users, but convincing all the players is no easy work. And then there’s the issue of developer awareness. Yes, everything is in some guidelines like the GNOME ones, however I saw much more often than I’d like to for example a button without any accessibility labels, so I’d like to help all the developers to fix their apps so accessibility regressions don’t get to the users, but this will have to improve slowly, I guess.

Christian: So you mention Orca, are there other applications being widely used providing accessibility?

Lukas: Honestly, only a few. There's Speakup – a kernel module which can read text consoles using speech synthesis, i.e. a screen reader for these, however without something like Espeakup (an Espeak to Speakup bridge) the thing is basically useless, as by default it supports hardware synthesizers, and this piece of hardware is basically a thing of the past, e.g. I have never seen one myself. Then, there's BRLTTY. This piece of software provides braille output for screen consoles and an API for applications which want to output braille, so the drivers can be implemented only once. And that's basically it, except for some efforts to create an Orca alternative in Rust, but that's a really long way off. Of course, utilities for other accessibility needs exist as well, but I don't know much about these.

Christian: What is your current focus for things you want to work on both yourself and with the larger team to address?

Lukas: For now, my focus is to go through the applications which were ported to GTK 4 as a part of the GNOME development cycle and ensure that they work well. It includes adding a lot of missing labels, but in some cases, it will involve bigger changes, for example, GNOME Calendar seems to need much more work. During all that, educating developers should not be forgotten either. With these things out of the way, making sure that no regressions slip to the applications should be addressed by extending the quality assurance and automated continuous integration checks, but that’s a more distant goal.

Christian: Thank you so much for talking with us Lukas, if there are other people interested in helping out with accessibility in Fedora Workstation what is the best place to reach you?

Lukas: Actually, for now the easiest way to reach me is by email at ltyrycht@redhat.com. I'd be happy to talk to anyone wanting to help with making Workstation great for accessibility.


Fedora Job Opening: Community Action and Impact Coordinator (FCAIC)

It is bittersweet to announce that I have decided to move on from my role as the Fedora Community Action and Impact Coordinator (FCAIC). For me, this role has been full of growth, unexpected challenges, and so much joy. It has been a privilege to help guide our wonderful community through challenges of the last three years. I’m excited to see what the next FCAIC can do for Fedora. If you’re interested in applying, see the FCAIC job posting on Red Hat Jobs and read more about the role below. 

Adapting to Uncertain Times

When I applied back in 2019, a big part of the job description was to travel the globe, connecting with and supporting Fedora communities worldwide. As we all know, that wasn’t possible with the onset of COVID-19 and everything that comes with a pandemic. 

Instead, I learned how to create virtual experiences for Fedora, connect with people solely in a virtual environment, and support contributors from afar. Virtual events have been a HUGE success for Fedora. The community has shown up for those events in such a wonderful way. We have almost tripled our participation in our virtual events since the first Release Party in 2020. We have more than doubled the number of respondents to the Annual Contributor Survey over last year’s turnout. I am proud of the work I have accomplished and even more so how much the community has grown and adapted to a very challenging couple of years.

What’s next for me

As some of you may know, I picked up the Code of Conduct (CoC) work that my predecessor Brian Exelbierd (Bex) started for Fedora. After the Fedora Council approved the new CoC, I got started on additional pieces of related work: Supplemental Documentation and Moderation Guidelines. I am also working on expanding the small Code of Conduct Committee (CoCC) to include more community members. As a part of the current CoCC, I have helped to deal with the majority of the incidents throughout my time as FCAIC.

Because of my experience with all this CoC work, I will be moving into a new role inside of Red Hat's OSPO: Code of Conduct Specialist. I will be assisting other Community Architects (like the FCAIC role) to help roll out CoCs and governance around them, as well as collaborating with other communities to develop a Community of Practice around this work. I am excited and determined to take on this new challenge and very proud to be a part of an organization that values work that prioritizes safety and inclusion.

What’s next for Fedora

This is an amazing opportunity for the Fedora community to grow in new and exciting ways. Every FCAIC brings their own approach to this role as well as their own ideas, strengths, and energy. I will be working with Matthew Miller, Ben Cotton, and Red Hat to help hire and onboard the new Fedora Community Action and Impact Coordinator. I will continue as FCAIC until we hire someone new, and will help transition them into the role. Additionally, I will offer support, advice, and guidance as others who have moved on have done for me. I am eager to see who comes next and how I can help them become a success. And, as I have for years prior to my tenure as FCAIC, I will continue to participate in the community, albeit in different ways. 

This means we are looking for a new FCAIC! Do you love Fedora? Do you want to help support and grow the community full time? This is the core of what the FCAIC does. The job description has a list of the primary job responsibilities and required skills, but that is just a taste of what is required and what it is like to support the Fedora community full time. Day-to-day work includes working with the Mindshare Committee, managing the Fedora budget, and being a part of many other teams and in many places. You should be ready and excited to write about Fedora's achievements and policies, as well as to generate strategies to help the community succeed. And, of course, there is event planning and support (Flock, Nest, Hatch, Release Parties, etc.). It can be tough work, but it is a lot of fun and wonderfully rewarding to help Fedora thrive.

How to apply

Do you enjoy working with people all over the world, with a variety of skills and interests? Are you good at setting long term goals and seeing them through to completion? Can you set priorities, follow through, and know when to say “no” in order to focus on the most important tasks for success? Are you excited about building not only a successful Linux distribution, but also a healthy project? Is Fedora’s mission deeply important to you? If you said “yes” to these questions, you might be a great candidate for the FCAIC role. If you think you’re a great fit, please apply online, or contact Marie Nordin, or Jason Brooks.


Using Linux System Roles to implement Clevis and Tang for automated LUKS volume unlocking

One of the key aspects of system security is encrypting storage at rest. Without encrypted storage, any time a storage device leaves your presence it can be at risk. The most obvious scenario where this can happen is if a storage device (either just the storage device or the entire system, server, or laptop) is lost or stolen.

However, there are other scenarios that are a concern as well: perhaps you have a storage device fail and it is replaced under warranty; many times the vendor will ask you to return the original device. If the device was encrypted, it is much less of a concern to return it to the hardware vendor.

Another concern is that any time your storage device is out of sight, there is a risk that the data is copied or cloned off of the device without you even being aware. Again, if the device is encrypted, this is much less of a concern.

Fedora (and other Linux distributions) include the Linux Unified Key Setup (LUKS) functionality to support disk encryption. LUKS is easy to use, and is even integrated as an option in the Fedora Anaconda installer.

However there is one challenge that frequently prevents people from implementing LUKS on a large scale, especially for the root filesystem: every single time you reboot the host you generally have to manually access the console and type in the LUKS passphrase so the system can boot up.

If you are running Fedora on a single laptop, this might not be a problem; after all, you are probably sitting in front of your laptop any time you reboot it. However, if you have a large number of Fedora instances, this quickly becomes impractical to deal with.

If you have hundreds of systems, it is impractical to manually type the LUKS passphrase on each system on every reboot

You might be managing Fedora systems that are at remote locations, and you might not even have good or reliable ways to access a console on them. In this case, rebooting the hosts could result in them not coming up until you or someone else travels to their location to type in the LUKS passphrase.

This article covers how to implement a solution that enables automated LUKS volume unlocking (and the process to implement these features will be done using automation as well!).

Overview of Clevis and Tang

Clevis and Tang are an innovative solution that can help with the challenge of having systems with encrypted storage boot up without manual user intervention on every boot. At a high level, Clevis, which is installed on the client systems, can enable LUKS volumes to be unlocked without user intervention as long as the client system has network access to a configurable number of Tang servers.

The basic premise is that the Tang server(s) are on an internal/private or otherwise secured network. If the storage devices are lost, stolen, or otherwise removed from the environment, they would no longer have network access to the Tang server(s), and thus would no longer unlock automatically at boot.

Tang is stateless and doesn't require authentication or even TLS, which means it is very lightweight and easy to configure, and can run from a container. In this article, I'm only setting up a single Tang server; however, it is also possible to have multiple Tang servers in an environment, and to configure the number of Tang servers the Clevis clients must connect to in order to unlock the encrypted volume. For example, you could have three Tang servers, and require the Clevis clients to be able to connect to at least two of the three Tang servers.
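
For illustration only (the roles used later in this article handle the binding for you), such a two-of-three policy could be expressed manually on a client with the clevis command line and its sss (Shamir's Secret Sharing) pin. The device path and Tang URLs below are placeholders, and this is a sketch of the syntax rather than a step in this guide:

$ sudo clevis luks bind -d /dev/vda2 sss \
  '{"t": 2, "pins": {"tang": [{"url": "http://tang1.example.com"}, {"url": "http://tang2.example.com"}, {"url": "http://tang3.example.com"}]}}'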

For more information on how Tang and Clevis work, refer to the GitHub pages: Clevis and Tang, or for an overview of the inner workings of Tang and Clevis, refer to the Securing Automated Decryption New Cryptography and Techniques FOSDEM talk.

Overview of Linux System Roles

Linux System Roles is a set of Ansible Roles/Collections that can help automate the configuration and management of many aspects of Fedora, CentOS Stream, RHEL, and RHEL derivatives. Linux System Roles is packaged in Fedora as an RPM (linux-system-roles) and is also available on Ansible Galaxy (as both roles and as a collection). For more information on Linux System Roles, and to see a list of included roles, refer to the Linux System Roles project page.
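
As a side note, if you would rather pull the roles from Ansible Galaxy than from the Fedora RPM used below, something like the following should work; the fedora.linux_system_roles collection name is my assumption of the published name, so verify it on Galaxy before relying on it:

$ ansible-galaxy collection install fedora.linux_system_roles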

Included in the list of Linux System Roles are the nbde_client, nbde_server, and firewall roles that will be used in this article. The nbde_client and nbde_server roles are focused on automating the implementation of Clevis and Tang, respectively. The “nbde” in the role names stands for network bound disk encryption, which is another term to refer to using Clevis and Tang for automated unlocking of LUKS encrypted volumes. The firewall role can automate the implementation of firewall settings, and will be used to open a port in the firewall on the Tang server.
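
To give a feel for what the nbde_server and firewall roles automate, the manual equivalent on a Fedora Tang server looks roughly like the commands below. This is a sketch for orientation only, assuming the tangd.socket unit shipped with the tang package; you do not need to run these if you use the roles.

$ sudo dnf install tang
$ sudo systemctl enable --now tangd.socket
$ sudo firewall-cmd --add-port=80/tcp --permanent
$ sudo firewall-cmd --reload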

Demo environment overview

In my environment, I have a Raspberry Pi, running Fedora 36 that I will install Linux System Roles on and use as my Ansible control node. In addition, I’ll use this same Raspberry Pi as my Tang server. This device is configured with the pi.example.com hostname.

In addition, I have four other systems in my environment: two Fedora 36 systems, and two CentOS Stream 9 systems, named fedora-server1.example.com, fedora-server2.example.com, c9s-server1.example.com, and c9s-server2.example.com. Each of these four systems has a LUKS encrypted root filesystem and currently the LUKS passphrase must be manually typed in each time the systems boot up.

I’ll use the nbde_server and firewall roles to install and configure Tang on my pi.example.com system, and use the nbde_client role to install and configure Clevis on my four other systems, enabling them to automatically unlock their encrypted root filesystem if they can connect to the pi.example.com Tang system.

Installing Linux System Roles and Ansible on the Raspberry Pi

I’ll start by installing the linux-system-roles package on the pi.example.com host, which will act as my Ansible control node. This will also install ansible-core and several other packages as dependencies. These packages do not need to be installed on the other four systems in my environment (which are referred to as managed nodes).

$ sudo dnf install linux-system-roles

SSH keys and sudo configuration need to be configured so that the control node host can connect to each of the managed nodes in the environment and escalate to root privileges.
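
A minimal sketch of that preparation, assuming an SSH key does not exist yet and that an account with sudo rights is already present on each managed node (the hostnames are the ones used in this article):

$ ssh-keygen -t ed25519
$ for host in fedora-server1 fedora-server2 c9s-server1 c9s-server2; do
    ssh-copy-id $host.example.com
  done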

Defining the Ansible inventory file

Still on the pi.example.com host, I’ll create an Ansible inventory file to group the five systems in my environment into two Ansible inventory groups. The nbde_servers group will contain a list of hosts that I would like to configure as Tang servers (which in this example is only the pi.example.com host), and the nbde_clients group will contain a list of hosts that I would like to configure as Clevis clients. I’ll name this inventory file inventory.yml and it contains the following content:

all:
  children:
    nbde_servers:
      hosts:
        pi.example.com:
    nbde_clients:
      hosts:
        fedora-server1.example.com:
        fedora-server2.example.com:
        c9s-server1.example.com:
        c9s-server2.example.com:

Creating Ansible Group variable files

Ansible variables are set to specify what configuration should be implemented by the Linux System Roles. Each role has a README.md file that contains important information on how to use each role, including a list of available role variables. The README.md files for the nbde_server, nbde_client, and firewall roles are available in the following locations, respectively:

  • /usr/share/doc/linux-system-roles/nbde_server/README.md
  • /usr/share/doc/linux-system-roles/nbde_client/README.md
  • /usr/share/doc/linux-system-roles/firewall/README.md

I’ll create a group_vars directory with the mkdir group_vars command. Within this directory, I’ll create a nbde_servers.yml file and nbde_clients.yml file, which will define, respectively, the variables that should be set for systems listed in the nbde_servers inventory group and the nbde_clients inventory group.

The nbde_servers.yml file contains the following content, which will instruct the firewall role to open TCP port 80, which is the default port used by Tang:

firewall:
  - port: ['80/tcp']
    state: enabled

The nbde_clients.yml file contains the following content:

nbde_client_bindings:
  - device: /dev/vda2
    encryption_password: !vault |
      $ANSIBLE_VAULT;1.1;AES256
      62666465373138636165326639633...
    servers:
      - http://pi.example.com

Under nbde_client_bindings, device specifies the backing device of the encrypted root filesystem on the four managed nodes. The encryption_password specifies a current LUKS passphrase that is required to configure Clevis. In this example, I've used ansible-vault to encrypt the string rather than place the LUKS passphrase in clear text. And finally, under servers, a list of Tang servers that Clevis should bind to is specified. In this example, the Clevis clients will be configured to bind to the pi.example.com Tang server.
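
As a sketch of how these values can be gathered, lsblk run on a managed node helps identify which block device backs the encrypted root filesystem, and ansible-vault encrypt_string produces the encrypted encryption_password block; the passphrase shown is a placeholder:

$ lsblk -f          # look for the crypto_LUKS entry backing the root filesystem
$ ansible-vault encrypt_string 'my-luks-passphrase' --name 'encryption_password'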

Creating the playbook

I’ll create a simple Ansible playbook, named nbde.yml that will call the firewall and nbde_server roles for systems in the nbde_servers inventory group, and call the nbde_client role for systems in the nbde_clients group:

- name: Open firewall for Tang
  hosts: nbde_servers
  roles:
    - linux-system-roles.firewall

- name: Deploy NBDE Tang server
  hosts: nbde_servers
  roles:
    - linux-system-roles.nbde_server

- name: Deploy NBDE Clevis clients
  hosts: nbde_clients
  roles:
    - linux-system-roles.nbde_client

At this point, I have the following files and directories created:

  • inventory.yml
  • nbde.yml
  • group_vars/nbde_clients.yml
  • group_vars/nbde_servers.yml

Running the playbook

The nbde.yml playbook can be run with the following command:

$ ansible-playbook nbde.yml -i inventory.yml --ask-vault-pass -b

The -i flag specifies which inventory file should be used, the --ask-vault-pass flag will prompt for the Ansible Vault password to decrypt the encryption_password variable, and the -b flag specifies that Ansible should escalate to root privileges.
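
Optionally, the playbook syntax can be validated first, and a run can be restricted to one inventory group while testing; both are standard ansible-playbook options:

$ ansible-playbook nbde.yml -i inventory.yml --syntax-check
$ ansible-playbook nbde.yml -i inventory.yml --ask-vault-pass -b --limit nbde_servers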

play recap output from ansible-playbook command showing playbook successfully completed

Validating the configuration

To validate the configuration, I rebooted each of my four managed nodes that were configured as Clevis clients of the Raspberry Pi Tang server. Each of the four managed nodes boots up and briefly pauses on the LUKS passphrase prompt:

Systems boot up to LUKS passphrase prompt, and automatically continue booting after a brief pause

However, after the brief delay, each of the four systems continued booting up without requiring me to enter the LUKS passphrase.
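
To double-check a client from the command line, recent clevis versions include a luks list subcommand that shows the pins bound to a LUKS device; treat its availability and output format as something to verify on your version:

$ sudo clevis luks list -d /dev/vda2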

Conclusion

If you would like to secure your data at rest with LUKS encryption, but need a solution that enables systems to boot up without intervention, consider implementing Clevis and Tang. Linux System Roles can help you implement Clevis and Tang, as well as a number of other aspects of your system, in an automated manner.


Fedora Workstation’s State of Gaming – A Case Study of Far Cry 5 (2018)

First-person shooter video games are a great proving ground for strategies that make you finish on top, reflexes that help you shoot before getting shot, and agility that adjusts you to whatever a situation throws at you. Add the open-ended nature of large, intricately designed worlds into the mix, and it dials the player experience up to eleven; with that, it also becomes great evidence of what a platform is capable of. Needless to say, I have been a great fan of open-world first-person shooters, and Ubisoft's Far Cry series happens to be the one closest to my heart. So I tried the (second) most recent release in the long-running series, Far Cry 5, which came out in 2018, on Fedora Workstation 35 to see how it performs.

Just like in my previous case study, the testing hardware has an AMD RDNA2-based GPU, and the video game was configured to the highest possible graphical preset to push the hardware to its limits. To ensure a fair comparison, I set up two environments – one with Windows 10 Pro 21H2 and one with Fedora Workstation 35 – both having up-to-date drivers and support software such as MSI Afterburner or MangoHUD for monitoring, Steam or Lutris for video game management, and OBS Studio for footage recording. In addition, the benchmarks were designed to be both representative of a common gameplay scenario and varied enough to cover resolution scaling and HD textures.
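
For readers who want to reproduce a similar overlay on Fedora, MangoHUD can wrap a game launched from the shell or be enabled through Steam launch options. A minimal sketch, assuming the mangohud package is installed and using a hypothetical binary path:

$ mangohud ./FarCry5
# In Steam, the equivalent is setting the game's launch options to: mangohud %command%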

Cover art for “Far Cry 5”, Ubisoft, fair use, via Wikimedia Commons

Before we get into some actual performance testing and comparison results, I would like to go into detail about the video game that is at the centre of this case study. Far Cry 5 is a first-person action-adventure video game developed by Ubisoft Montreal and Ubisoft Toronto. The player takes the role of an unnamed junior deputy sheriff who is trapped in Hope County, a fictional region based in Montana, and has to fight against a doomsday cult to take back the county from the grasp of its charismatic and powerful leader. The video game has been well received for the inclusion of branching storylines, role-playing elements and side quests, and is optimized enough to be a defining showcase of what the underlying hardware and platform are capable of.

Preliminary

Framerate

The first test that was performed had a direct bearing on how smooth the playing experience would be across the two platforms on the same hardware configuration.

Without HD textures

On a default Far Cry 5 installation, I followed the configuration stated above but opted out of the HD textures pack to warm up the platforms with a comparatively easier test. Following are the results.

  1. On average, the video game had around a whopping 59.25% more framerate on Fedora Workstation 35 than on Windows 10 Pro 21H2.
  2. To ensure an overall consistent performance, both the minimum and maximum framerates were also noted to monitor dips and rises.
  3. The minimum framerates on Fedora Workstation 35 were ahead by a big 49.10% margin as compared to those on Windows 10 Pro 21H2.
  4. The maximum framerates on Fedora Workstation 35 were ahead by a big 62.52% margin as compared to those on Windows 10 Pro 21H2.
  5. The X11 display server had roughly 0.52% more minimum framerate as compared to Wayland, which can be taken as a margin of error.
  6. The Wayland display server had roughly 3.87% more maximum framerate as compared to X11, which can be taken as a margin of error.

With HD textures

On a default Far Cry 5 installation, I followed the configuration stated above, but this time I enabled the HD textures pack to stress the platforms with a comparatively harder test. Following are the results.

  1. On average, the video game had around a whopping 65.63% more framerate on Fedora Workstation 35 than on Windows 10 Pro 21H2.
  2. To ensure an overall consistent performance, both the minimum and maximum framerates were also noted to monitor dips and rises.
  3. The minimum framerates on Fedora Workstation 35 were ahead by a big 59.11% margin as compared to those on Windows 10 Pro 21H2.
  4. The maximum framerates on Fedora Workstation 35 were ahead by a big 64.21% margin as compared to those on Windows 10 Pro 21H2.
  5. The X11 display server had roughly 9.77% more minimum framerate as compared to Wayland, which is big enough to be considered.
  6. The Wayland display server had roughly 1.12% more maximum framerate as compared to X11, which can be taken as a margin of error.

Video memory usage

The second test that was performed had less to do with the playing experience and more with the efficiency of graphical resource usage. Following are the results.

Without HD textures

On a default Far Cry 5 installation, I followed the configuration stated above but opted out of the HD textures pack to use comparatively lesser video memory across the platforms. Following are the results.

  1. On average, Fedora Workstation 35 uses around 31.94% less video memory as compared to Windows 10 Pro 21H2.
  2. The Wayland display server uses roughly 1.78% more video memory as compared to X11, which can be taken as a margin of error.
  3. The video game's estimated video memory usage is closer to the actual readings on Fedora Workstation 35 than on Windows 10 Pro 21H2.
  4. Adding this to the previous results speaks to how Fedora Workstation 35 performs better while using fewer resources.

With HD textures

On a default Far Cry 5 installation, I followed the configuration stated above but this time I enabled the HD textures pack to stress the platforms by occupying more video memory. Following are the results.

  1. On average, Fedora Workstation 35 uses around 22.79% less video memory as compared to Windows 10 Pro 21H2.
  2. The Wayland display server uses roughly 2.73% more video memory as compared to X11, which can be taken as a margin of error.
  3. The video game's estimated video memory usage is closer to the actual readings on Fedora Workstation 35 than on Windows 10 Pro 21H2.
  4. Adding this to the previous results speaks to how Fedora Workstation 35 performs better while using fewer resources.

System memory usage

The third test that was performed had less to do with the playing experience and more with how other applications can fit in the available memory while the video game is running. Following are the results.

Without HD textures

On a default Far Cry 5 installation, I followed the configuration stated above but opted out of the HD textures pack to warm up the platforms with a comparatively easier test. Following are the results.

  1. On average, Fedora Workstation 35 uses around 38.10% less system memory as compared to Windows 10 Pro 21H2.
  2. The Wayland display server uses roughly 4.17% more system memory as compared to X11, which can be taken as a margin of error.
  3. Adding this to the previous results speaks to how Fedora Workstation 35 performs better while using fewer resources.
  4. Lower memory usage by the video game leaves extra headroom for other applications to run simultaneously with no compromises.

With HD textures

On a default Far Cry 5 installation, I followed the configuration stated above, but this time I enabled the HD textures pack to stress the platforms with a comparatively harder test. Following are the results.

  1. On average, Fedora Workstation 35 uses around 33.58% less system memory as compared to Windows 10 Pro 21H2.
  2. The Wayland display server uses roughly 7.28% more system memory as compared to X11, which is big enough to be considered.
  3. Adding this to the previous results speaks to how Fedora Workstation 35 performs better while using fewer resources.
  4. Lower memory usage by the video game leaves extra headroom for other applications to run simultaneously with no compromises.

Advanced

Without HD textures

On a default Far Cry 5 installation, I followed the previously stated configuration without the HD textures pack and ran the tests with varied resolution multipliers. Following are the results.

Minimum framerates recorded

  1. A great deal of inconsistent performance is visible on Fedora Workstation 35 with both display servers in lower resolution scales.
  2. The inconsistencies seem to normalize for the resolution multipliers on and beyond the 1.1x resolution scale for Fedora Workstation 35.
  3. Resolution multipliers do not seem to affect the framerate on Windows 10 Pro 21H2 as much as they do on Fedora Workstation 35.
  4. Although Windows 10 Pro 21H2 misses out on potential performance advantages in lower resolution multipliers, it has been consistent.
  5. Records on Windows 10 Pro 21H2 in the 2.0x resolution multiplier appear to be marginally better than those on Fedora Workstation 35.

Maximum framerates recorded

  1. A small amount of inconsistent performance is visible on Fedora Workstation 35 with both display servers in lower resolution scales.
  2. The inconsistencies seem to normalize for the resolution multipliers on and beyond the 1.1x resolution scale for Fedora Workstation 35.
  3. Changes in the resolution multiplier start noticeably affecting performance on Windows 10 Pro 21H2 at the 1.6x scale, beyond which it falls off greatly.
  4. Although Windows 10 Pro 21H2 misses out on potential performance advantages in lower resolution multipliers, it has been consistent.
  5. Records on Windows 10 Pro 21H2 in the 1.6x resolution multiplier and beyond appear to be better than those on Fedora Workstation 35.

Average framerates recorded

  1. A minor amount of inconsistent performance is visible on Fedora Workstation 35 with both display servers in lower resolution scales.
  2. The inconsistencies seem to normalize for the resolution multipliers on and beyond the 1.1x resolution scale for Fedora Workstation 35.
  3. Changes in the resolution multiplier start noticeably affecting performance on Windows 10 Pro 21H2 at the 1.6x scale, beyond which it falls off greatly.
  4. Although Windows 10 Pro 21H2 misses out on potential performance advantages in lower resolution multipliers, it has been consistent.
  5. Records on Windows 10 Pro 21H2 in the 1.9x resolution multiplier and beyond appear to be better than those on Fedora Workstation 35.

With HD textures

On a default Far Cry 5 installation, I followed the previously stated configuration with the HD textures pack and ran the tests with varied resolution multipliers. Following are the results.

Minimum framerates recorded

  1. A great deal of inconsistent performance is visible on Fedora Workstation 35 with both display servers in lower resolution scales.
  2. The inconsistencies seem to normalize for the resolution multipliers on and beyond the 1.5x resolution scale for Fedora Workstation 35.
  3. Resolution multipliers do not seem to affect the framerate on Windows 10 Pro 21H2 as much as they do on Fedora Workstation 35.
  4. Although Windows 10 Pro 21H2 misses out on potential performance advantages in lower resolution multipliers, it has been consistent.
  5. Records on Windows 10 Pro 21H2 in the 2.0x resolution multiplier appear to be marginally better than those on Fedora Workstation 35.

Maximum framerates recorded

  1. A great deal of inconsistent performance is visible on Fedora Workstation 35 with both display servers in lower resolution scales.
  2. The inconsistencies seem to normalize for the resolution multipliers on and beyond the 1.0x resolution scale for Fedora Workstation 35.
  3. Changes in the resolution multiplier start noticeably affecting performance on Windows 10 Pro 21H2 at the 1.6x scale, beyond which it falls off greatly.
  4. Although Windows 10 Pro 21H2 misses out on potential performance advantages in lower resolution multipliers, it has been consistent.
  5. Records on Windows 10 Pro 21H2 in the 1.6x resolution multiplier and beyond appear to be better than those on Fedora Workstation 35.

Average framerates recorded

  1. A minor amount of inconsistent performance is visible on Fedora Workstation 35 with both display servers in lower resolution scales.
  2. The inconsistencies seem to normalize for the resolution multipliers on and beyond the 1.1x resolution scale for Fedora Workstation 35.
  3. Changes in the resolution multiplier start noticeably affecting performance on Windows 10 Pro 21H2 at the 1.6x scale, beyond which it falls off greatly.
  4. Although Windows 10 Pro 21H2 misses out on potential performance advantages in lower resolution multipliers, it has been consistent.
  5. Records on Windows 10 Pro 21H2 in the 1.9x resolution multiplier and beyond appear to be better than those on Fedora Workstation 35.

Inferences

If the test results and observations baffle you, please allow me to tell you that you are not the only one who feels that way. For a video game that was created to run on Windows, it is hard to imagine how it ends up performing so much better on Fedora Workstation 35, all while using far fewer system resources at all times. Special attention was given to noting both the highest highs and the lowest lows of the framerates to verify that the performance is consistent.

But wait a minute – how is it that Fedora Workstation 35 manages to make this possible? Well, while I do not have a clear idea of what exactly goes on behind the scenes, I do have a number of assumptions that I suspect might be the reasons contributing to such brilliant visuals, great framerates and efficient resource usage. These can potentially act as starting points for understanding which features of Fedora Workstation 35 the compatibility layers are able to make use of.

  1. Effective caching of graphical elements and texture assets in the video memory allows for keeping only those data in the memory which are either actively made use of or regularly referenced. The open-source AMD drivers help Fedora Workstation 35 make efficient use of the available frame buffer.
  2. Quick and frequent cycling of data elements from the video memory helps to bring down total occupancy per application at any point in time. The memory clocks and shader clocks are left at the application’s disposal by the open-source AMD drivers, and firmware bandwidth limits are all but absent.
  3. With AMD Smart Access Memory (SAM) enabled, the CPU is no longer restricted to using only 256MiB of the video memory at a time. A combination of leading-edge kernel and up-to-date drivers makes it available on Fedora Workstation 35 and capable of harnessing the technology to its limits.
  4. Extremely low system resource usage by supporting software and background services leaves the vast majority of resources available to the applications that need them most. Fedora Workstation 35 is a lightweight distribution which does not get in your way and puts the resources toward what's important.
  5. Loading of data from the physical storage devices into system memory is greatly helped by the use of modern copy-on-write file systems like BTRFS, which happens to be the default file system for Fedora Workstation 35, and journaling file systems like EXT4.

Performance improvements like these only make me want to indulge more in testing and finding out what else Fedora Workstation is capable of. Do let me know what you think in the comments section below.


Contribute at the Fedora Linux 37 Test Week for Kernel 5.18 

The kernel team is working on final integration for Linux kernel 5.18. This version was just recently released, and will arrive soon in Fedora. As a result, the Fedora kernel and QA teams have organized a test week now through Sunday, June 12, 2022. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.

How does a test week work?

A test week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the kernel test day has a lot of good information on what and how to test. After you've done some testing, you can log your results in the test day web application. If you're available on or around the day of the event, please do some testing and report your results. We have a document that provides all the steps in writing.

Happy testing, and we hope to see you on test day.