
Contribute at the Fedora i18n and GNOME 42 test weeks

There are two test weeks coming up. The first, Monday 28 February through Monday 07 March, is to test GNOME 42. The second, Monday 07 March through Sunday 13 March, focuses on testing internationalization. Come and test with us to make the upcoming Fedora 36 even better. Read more below on how to do it.

GNOME test week

GNOME is the default desktop environment for Fedora Workstation and thus for many Fedora users. As part of a planned change, the GNOME megaupdate will land in Fedora Linux and ship with Fedora 36. To ensure that everything works well, the Workstation Working Group and the QA team will hold this test week Monday 28 February through Monday 07 March. Refer to the GNOME test week wiki page for links and resources.

i18n test week

i18n test week focuses on testing internationalization features in Fedora Linux. The test week is March 07 through March 13.

How do test days work?

A test day is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to download test materials (which include some large files) and then read and follow directions step by step.

Detailed information about both test weeks is on the wiki pages above. If you’re available on or around the days of the events, please do some testing and report your results.


Again, the two upcoming test weeks are:

  • Monday 28 February through Monday 07 March, to test GNOME 42.
  • Monday 07 March through Sunday 13 March, focusing on testing internationalization.

Come and test with us to make the upcoming Fedora 36 even better.


Upgraded Fedora shirts

We’ve upgraded the Fedora shirt and sweatshirt collection we announced in March 2020.

A blue Fedora polo shirt. T-shirts and sweatshirts are available too.

What’s new?

  • We do the embroidery with the new Fedora logo, but the old logo is still available as Fedora Classic.
  • A Fedora hoodie, a crewneck sweatshirt and a long-sleeve t-shirt. The last two are seasonal, available only in February.
  • Faster shipping to the US, Canada and the UK. (We ship from Hungary, Europe.)

OK, but is it made with Linux?

We get a lot of questions about how exactly we make the embroidery. Do we use Windows for that? There is a short explanation on our website, but let’s go into more detail.

You might have read here at Fedora Magazine that there is a good, fully free (as in freedom) solution to making embroidery designs with Inkscape and Inkstitch. This is the future for us too, and sooner or later we will make all of our embroidery with them. But — mainly because Inkstitch is quite new, and we have been embroidering Linux shirts for a decade — we are still using a manufacturer-independent, but “non-free” program, called Embird. We run it with Wine under Debian. This one is the only proprietary piece of software used here. And it must go, soon…

The Fedora embroidery design in Embird, under Linux.

The other software we use is quite standard for Linux: Inkscape, GIMP, ImageMagick, and of course all of the everyday tools you are using on your Fedora Linux desktop and server.

Check out the embroidered Fedora collection here! Don’t forget to use the FEDORA5 coupon code, for the $5 discount on every shirt and sweatshirt.


How I Customize Fedora Silverblue and Fedora Kinoite

Hello everyone. My name is Yasin and I live in Turkey. I am 28 years old, I used Fedora Silverblue for two months, and I am now an active Fedora Kinoite user. I want to share the information I’ve learned while using these systems, so I decided to write this article. I hope you like it. Let’s get started.

When one says Fedora Linux, the first edition that comes to mind is Fedora Workstation. However, do not overlook the emerging editions Fedora Silverblue (featuring the GNOME desktop environment) and Fedora Kinoite (featuring the KDE desktop environment). Both of these are reprovisionable operating systems based on libostree. They are created exclusively from official RPM packages from the Fedora Project. In this article, I will demonstrate some common steps you might take after a clean installation of Fedora Silverblue or Fedora Kinoite. Everything listed in this article is optional. Exactly what you want to install or how you want to configure your system will depend on your particular needs. What is demonstrated below is just meant to give you some ideas and to provide some examples.

Disclaimer: Packages from Flathub, RPM Fusion, the Copr build system, GitHub, GitLab, et al. are not managed by the Fedora release team and they do not provide official software builds. Use packages from these sources at your own risk.

System upgrades

Fedora Linux releases feature updates and security updates quite often, so you will want to run the command below regularly to keep your system up-to-date. Open the terminal and enter the following command. Afterwards, restart the computer so the changes take effect.

$ rpm-ostree upgrade

If you want to preview which packages will be updated, use the following command first.

$ rpm-ostree update --preview

It is also possible to configure automatic updates by editing the rpm-ostreed.conf file as demonstrated below.

$ sudo nano /etc/rpm-ostreed.conf

Change AutomaticUpdatePolicy to check. Then save the change and quit the editor. After that you need to reload rpm-ostree and enable the automatic timer.

$ rpm-ostree reload
$ systemctl enable rpm-ostreed-automatic.timer --now
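If you prefer to script the configuration change instead of editing the file by hand, a sed one-liner works too. The sketch below operates on a local demo copy so you can dry-run it safely; on a real system you would run the sed command with sudo against /etc/rpm-ostreed.conf and then reload as shown above.

```shell
# Sketch: set AutomaticUpdatePolicy=check non-interactively.
# CONF defaults to a local demo copy; on a real system use
#   sudo sed -i '...' /etc/rpm-ostreed.conf
CONF="${CONF:-./rpm-ostreed.conf}"
printf '[Daemon]\n#AutomaticUpdatePolicy=none\n' > "$CONF"   # demo stand-in for the stock file
sed -i 's/^#\?AutomaticUpdatePolicy=.*/AutomaticUpdatePolicy=check/' "$CONF"
grep '^AutomaticUpdatePolicy=' "$CONF"
```

The `#\?` in the pattern matches the line whether or not it is still commented out, so the command is safe to run more than once.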

Adding Flatpak remotes and other third-party repositories

Fedora Silverblue and Fedora Kinoite come preloaded with the basic Fedora Linux repos. In addition, you might want Flatpak, RPM Fusion or some Copr repos.

Flathub remotes

Flatpak is at the top of the list of ways to install applications on Fedora Silverblue and Fedora Kinoite because it is container-based and it does not require a reboot after installation. To add some remote software libraries and try it out, open the terminal again and enter the following commands.

Fedora Flatpaks remote:

$ flatpak remote-add --if-not-exists fedora oci+https://registry.fedoraproject.org

Flathub remote:

$ flatpak remote-add --if-not-exists flathub https://flathub.org/repo/flathub.flatpakrepo

Flathub Beta remote:

$ flatpak remote-add --if-not-exists flathub-beta https://flathub.org/beta-repo/flathub-beta.flatpakrepo

KDE nightly remote:

$ flatpak remote-add --if-not-exists kdeapps --from https://distribute.kde.org/kdeapps.flatpakrepo

GNOME nightly remote:

$ flatpak remote-add --if-not-exists gnome-nightly https://nightly.gnome.org/gnome-nightly.flatpakrepo

After the repositories are added, enter the command below to update the application catalog in the GNOME Software and KDE Discover stores. This way, you will be able to manage applications directly from the store without going to flathub.org.

$ flatpak update --appstream

After that, you can use the store to update Flatpak applications, or, if you want to update directly from the terminal, enter the command below.

$ flatpak update

If you want to see all installed Flatpaks:

$ flatpak list

RPM Fusion repos

Another remote software library you can add is RPM Fusion. To add it on Fedora Silverblue or Fedora Kinoite, open the terminal, enter the following commands and restart.

$ sudo rpm-ostree install https://mirrors.rpmfusion.org/free/fedora/rpmfusion-free-release-$(rpm -E %fedora).noarch.rpm
$ sudo rpm-ostree install https://mirrors.rpmfusion.org/nonfree/fedora/rpmfusion-nonfree-release-$(rpm -E %fedora).noarch.rpm

Copr repos

Copr repos are yet another source of applications that can be installed on Fedora Silverblue and Fedora Kinoite. To add the repos, enter commands in the following form.

$ sudo ostree remote add <name-of-repo> <repository-url>

Example (Heroic Games launcher repo):

$ sudo ostree remote add heroic-games-launcher https://download.copr.fedorainfracloud.org/results/atim/heroic-games-launcher/fedora-$releasever-$basearch/

If you want another option, you can download the repository configuration file from Copr’s own site and put it in the /etc/yum.repos.d folder.
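For illustration, a Copr .repo file is just a small INI fragment. The sketch below writes one by hand into the current directory; the section name and options are typical of Copr-generated files and should be treated as an approximation (downloading the file from the Copr web page is the authoritative route). The baseurl matches the Heroic Games Launcher example above, and on a real system the file belongs in /etc/yum.repos.d/ (written with sudo).

```shell
# Sketch of a Copr .repo file (contents approximate). Written to the
# current directory for illustration; the real destination is
# /etc/yum.repos.d/ on the host.
cat > ./atim-heroic-games-launcher.repo <<'EOF'
[copr:copr.fedorainfracloud.org:atim:heroic-games-launcher]
name=Copr repo for heroic-games-launcher owned by atim
baseurl=https://download.copr.fedorainfracloud.org/results/atim/heroic-games-launcher/fedora-$releasever-$basearch/
enabled=1
gpgcheck=1
EOF
cat ./atim-heroic-games-launcher.repo
```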

Examples of popular Flatpak applications

LibreOffice

$ flatpak install flathub org.libreoffice.LibreOffice

Lutris

$ flatpak install --user flathub-beta net.lutris.Lutris//beta

Steam

$ flatpak install flathub com.valvesoftware.Steam

VLC

$ flatpak install flathub org.videolan.VLC

Firefox

$ flatpak install flathub org.mozilla.firefox

Note: Fedora Firefox normally comes preloaded with Fedora Silverblue and Fedora Kinoite. However, the Flatpak version of Firefox has more comprehensive codec support.

Installing the Nvidia driver and a specific kernel

If you have installed RPM Fusion repositories, you can install the Nvidia driver by entering the code below and restarting the computer so the changes will take effect.

$ sudo rpm-ostree install akmod-nvidia xorg-x11-drv-nvidia

If you are using the Nvidia System Management Interface (nvidia-smi) or CUDA:

$ sudo rpm-ostree install akmod-nvidia xorg-x11-drv-nvidia-cuda

If you want to install a specific kernel, you can download one from Koji and install it on Fedora Silverblue or Fedora Kinoite using the following command:

$ sudo rpm-ostree override replace ./kernel*.rpm

If you want to install multiple kernels, you will need to pin your deployment by issuing the ostree admin pin 0 command and then use the same command as above. After restarting, if you pin the new deployment, you will have two deployments with specific kernels. Remember that you must update them individually because you cannot pin two deployments at the same time.

Toolbx

The Toolbx utility is used primarily for CLI apps, development and debugging tools, etc. However, you can install any supported operating system. In this article, I will give an example of installing and using Fedora 35 Workstation. Fedora Silverblue and Fedora Kinoite come preloaded with Toolbx, so you can start directly.

First, create a toolbox.

$ toolbox create

When the above is complete, enter:

$ toolbox enter

When your shell prompt changes to indicate the toolbox, you are in the container operating system. You can list the container(s) by means of:

$ toolbox list

If you want to remove a container, enter:

$ toolbox rm <container-name>

To remove a container image instead, use toolbox rmi <image-name>.

If you need more help, enter:

$ toolbox --help

Thanks to Toolbx, your main operating system will never break. You can pretend to be on Fedora Workstation, install and delete packages, and do things you cannot do on the libostree-based host system. Let’s illustrate with a few examples.

Many users use Toolbx for their developer tools. But it is a really useful tool for regular users as well. For example, you can install Xtreme Download Manager (XDM) and combine it with Firefox to download content such as music and videos from the internet. Your job will be even easier if you install a file manager before installing XDM. Now that you are in Toolbx, try installing Nautilus.

$ sudo dnf install nautilus

After that, you can get XDM from here:

https://github.com/subhra74/xdm/releases/download/7.2.11/xdm-setup-7.2.11.tar.xz

Start Nautilus with sudo nautilus while in Toolbx. Then unarchive XDM, open the folder, right-click on some empty space and select Open in Terminal. Then enter the command below.

$ su -c ./install.sh

Congratulations! You have successfully installed XDM. After that you will need to open XDM, install Firefox and then open XDM again. Finally, you will want to make the XDM plugin available for Firefox.

$ sudo xdm
$ sudo dnf install firefox
$ sudo firefox

A few more example things that you could do in Toolbx include:

  • Add the repositories from Fedora Silverblue or Fedora Kinoite using the terminal. Alternatively, you could copy the repo files from /etc/yum.repos.d in Fedora Silverblue or Fedora Kinoite to /etc/yum.repos.d in Toolbx.
  • Keep the container updated by running sudo dnf update periodically. (Tip: For faster downloads, you might want to try adding the fastestmirror=1 and max_parallel_downloads=10 options to the container’s /etc/dnf/dnf.conf file.)
  • Use the dnf history command to see what changes you’ve made to the container.
  • You could install multimedia codecs and Windows fonts. But it’s not necessary because rpm-ostree can handle them, and the google-croscore-fonts and liberation-fonts packages are both designed to be compatible with the most common MS fonts.
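The dnf.conf tip above can be applied idempotently with a couple of shell lines. This is a sketch working on a local demo copy; inside the container you would point DNF_CONF at /etc/dnf/dnf.conf and use sudo.

```shell
# Sketch: add the two speed-up options only if they are not already set.
# DNF_CONF defaults to a local demo copy; inside the container, use sudo
# and /etc/dnf/dnf.conf.
DNF_CONF="${DNF_CONF:-./dnf.conf}"
printf '[main]\ngpgcheck=True\n' > "$DNF_CONF"   # demo stand-in for the real file
grep -q '^fastestmirror=' "$DNF_CONF" || echo 'fastestmirror=1' >> "$DNF_CONF"
grep -q '^max_parallel_downloads=' "$DNF_CONF" || echo 'max_parallel_downloads=10' >> "$DNF_CONF"
cat "$DNF_CONF"
```

Because each option is appended only when absent, running the snippet twice leaves the file unchanged.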

Layering packages

The package layering method modifies the existing installation. You can permanently install almost any RPM package on Fedora Silverblue or Fedora Kinoite. However, you should only layer packages that you consider essential because, after the layering is complete, you will need to reboot the system before you will be able to use the package. For most packages, I recommend using Toolbx.

Package layering is almost identical to installing an RPM package on Fedora Workstation. It’s just rpm-ostree replacing dnf. For example:

$ rpm-ostree install htop

If you want to remove layered packages:

$ rpm-ostree uninstall htop

If you want to see all the layered packages:

$ rpm-ostree status

If you want to remove all layered packages:

$ rpm-ostree uninstall --all

If you are wondering which packages I’ve chosen to layer on my libostree systems, here are my favorites.

  • tlp, tlp-rdw: helps to reduce the battery use on laptops
  • stacer: system optimizer and monitoring
  • WoeUSB: for preparing bootable Windows USB drives from ISO images
  • unrar: for extracting and viewing RAR archives

Gaming

Some ways of playing games on Fedora Silverblue or Fedora Kinoite include the following.

  • Using platforms (Steam, Lutris, itch.io, GOG and others)
  • Using compatibility tools (Wine, Proton and others)
  • Native Linux games (These games can be found in official or third-party repositories; or on their official website)
  • Other (Virtualbox, web browser games, etc.)

People are often advised to play games designed to run on Linux, or to run Windows games using Proton on Steam. However, not all Windows games are compatible with Proton, especially online games with anti-cheat software. So it is useful to check the site below before installing a game.

https://www.protondb.com/

In Fedora Silverblue or Fedora Kinoite, there are two ways to install Proton.

From Flathub (using the terminal):

$ flatpak install com.valvesoftware.Steam.CompatibilityTool.Proton

From GitHub (manually):

https://github.com/GloriousEggroll/proton-ge-custom

My advice is to use the proton-ge-custom version (Glorious Eggroll) because it contains extra patches and fixes for many popular games. You can read about how to install proton-ge-custom and how to activate it on Steam in the README.md file in the above GitHub repo.

If you do not want to use an online platform, it is possible to play games using Wine. But you need to go to Wine’s official site and read the reports about the game, or try it yourself, to see if the game works. Also, don’t think of Wine as just for games; it can run a wide variety of Windows programs. So how do you install Wine? Unfortunately, Wine cannot be directly installed on Fedora Silverblue or Fedora Kinoite as a layered package due to rpm-ostree’s lack of 32-bit support. It is possible, however, to install Wine using some indirect methods. (The Winepak repo is dead now, so I’ll skip that.)

Method 1: Use a Flathub application as a Wine launcher.

Lutris, Bottles, ProtonUp-Qt and finally Phoenicis PlayOnLinux

Method 2: Install Wine or Lutris in Toolbx with Steam.

$ sudo dnf install wine lutris steam

Method 3: Partially install Wine on rpm-ostree.

$ rpm-ostree install wine-core wine-core.i686 lutris

There are other methods of playing games on Linux. Native Linux games, for example, are available in many repositories. Browser games are also easy to access. Installing Windows in a virtual machine is another method. However, while a virtual machine may work for simpler games, I do not recommend it for games that require a lot of processing power.

Other tips and suggestions

In this final section, I would like to mention a few more things that do not depend on anything mentioned earlier in this article.

rpm-ostree tips

You can use the override sub-command to manage base packages. For example, to remove the pre-loaded Firefox:

$ rpm-ostree override remove firefox

If you want to remove all overlays, overrides and initramfs:

$ rpm-ostree reset

rpm-ostree provides an experimental live update feature so that you can avoid rebooting after installing packages.

$ rpm-ostree install --apply-live htop

Since you are on Fedora Silverblue or Fedora Kinoite, switching systems or updating to rawhide can be done with just a few commands. Also, reverting is easier than ever.

Substitute system with kinoite or silverblue in the examples below.

Switch systems:

$ rpm-ostree rebase fedora/35/x86_64/system

Upgrade to rawhide:

$ rpm-ostree rebase fedora/rawhide/x86_64/system

Roll back to the previous deployment:

$ rpm-ostree rollback

Listing packages

On Fedora Workstation you can use dnf to list the packages in the repositories. But this does not work on Fedora Silverblue or Fedora Kinoite. So how do you do it?

To list the installed RPM packages:

$ rpm -qa

However, if you want to list the packages in the repositories, you must either layer the dnfdragora package or enter Toolbx. Then you can use the following dnf commands.

To list all RPM packages (both installed and available):

$ dnf list

To search for a specific RPM package:

$ dnf search <packagename>

Miscellaneous tips

  • When you want to install an application, first look at the Flatpak remotes. If it’s not there, use Toolbx. Finally, if you cannot run it in Toolbx, layer the package. If you still cannot install what you want, the last option is to install Windows in a virtual machine, or on a separate partition or hard drive, and configure multi-booting.
  • I do not recommend using any other repositories besides the Fedora, RPM Fusion, and Copr repositories unless required.
  • Remember that only KDE (Fedora Kinoite) and GNOME (Fedora Silverblue) desktop environments are officially supported by the Fedora Project.
  • If you want your system to stay fast, try to avoid doing too much customization (global themes, Conky, Plank, etc.).
  • For Fedora Kinoite users: To add a right-click option in the Dolphin file manager to open a folder or file as root, install the “Dolphin as root” plugin from the Discover application.
  • If you want to preview video files without opening them, you can enter: $ rpm-ostree install ffmpegthumbs kffmpegthumbnailer.

    Note: For now, do not install Dolphin from Flatpak because it replaces the preinstalled Dolphin on the system. With the Flatpak version of Dolphin, you will not be able to preview videos because it does not contain the packages mentioned above.

  • For Kinoite users: If you want to install a global theme, installing it from the system settings can sometimes cause problems. Instead, download the global theme file from the KDE Store and enter: $ kpackagetool5 -i /home/username/<theme-file>

Conclusion

Dear friends, you have come to the end of this article. If you have anything to add on this topic, or if you have questions, I await you in the comments section below. Also, special thanks to Badhshah, Timothée Ravier and Daniels for helping me with some of the information in this article. Finally, if you want to contribute to Fedora Silverblue or Fedora Kinoite, or to get more information, check the links below. Thank you for reading.


Comparison of Fedora Flatpaks and Flathub remotes

In the previous article in this series, we looked at how to get started with Fedora Flatpaks and how to use it. This article compares and contrasts the Fedora Flatpaks remote and the Flathub remote. Flathub is the de-facto standard Flatpak remote, whereas Fedora Flatpaks is the Fedora Project’s Flatpak remote. The differences between the remotes include, but are not limited to, their policies, their ways of distribution, and their implementation.

Goals and motivation

Fedora Flatpaks and Flathub share the same goals but differ in motivation. The goal is to make applications accessible in their respective fields, maximize convenience, and minimize maintenance.

Fedora Flatpaks’ motivation is to ship RPMs that come directly from the Fedora Project and make them accessible throughout Fedora Linux regardless of version, spin, etc. So, in theory, it would be possible to get the latest and greatest applications from the Fedora Project without needing to upgrade to the latest version of Fedora Linux. Of course, it’s always advisable to keep everything up-to-date.

Flathub’s motivation is to simply make applications and tools as accessible as possible regardless of the distribution in use. Hence, all tools are available on GitHub. Filing issues for applications provided by Flathub is the same as filing issues on any project on GitHub.

Packages

Fedora Flatpaks and Flathub create Flatpak applications differently. First and foremost, Fedora Flatpaks literally converts existing RPMs into Flatpak-compatible files, which developers can then easily bundle as Flatpaks and redistribute. Flathub, on the other hand, is more open about how developers bundle applications.

Types of packages published

Fedora Flatpaks only publishes free and open source software, whereas Flathub publishes free and open source software as well as proprietary software. However, Flathub plans to separate proprietary applications from free and open source applications, as stated in a recent blog post from GNOME.

Sources

Flathub is flexible about which source a Flatpak application (re)uses, whereas Fedora Flatpaks strictly reuses the RPM format.

As such, Flathub has tons of applications that reuse other package formats. For example, the Chrome Flatpak reuses the .deb package, the UnityHub Flatpak reuses the AppImage, the Spotify Flatpak reuses the Snap package, the Android Studio Flatpak uses a tar.gz archive, etc.

Alternatively, Flathub can also compile applications directly from source: sometimes from a source archive, sometimes by running git clone, and so on.

Number of applications

Fedora Flatpaks has fewer applications than Flathub. To list the applications available from a remote, run flatpak remote-ls --app $REMOTE. You can go one step further and get the number of applications by piping to wc -l:

[Terminal ~]$ flatpak remote-ls --app fedora | wc -l
86
[Terminal ~]$ flatpak remote-ls --app flathub | wc -l
1518

At the time of writing, Flathub has 1518 applications available, whereas Fedora Flatpaks has only 86.
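Beyond counting, you can compare what the two remotes carry. The sketch below uses sorted app-ID lists and comm to find the overlap. The IDs here are hard-coded stand-ins; on a real system you would generate the lists with flatpak remote-ls --app fedora and flatpak remote-ls --app flathub instead of the here-docs.

```shell
# Sketch: find app IDs available from both remotes. The two here-docs
# are illustrative stand-ins for "flatpak remote-ls --app <remote>".
sort <<'EOF' > fedora.list
org.gnome.Calculator
org.inkscape.Inkscape
EOF
sort <<'EOF' > flathub.list
com.valvesoftware.Steam
org.inkscape.Inkscape
EOF
comm -12 fedora.list flathub.list   # prints the IDs common to both lists
```

comm requires sorted input, which is why both lists go through sort first.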

OSTree and OCI formats

Implementations are quite different too. Both Fedora Flatpaks and Flathub use Flatpak to help you install, remove, and manage applications. However, in terms of how these applications are published, they fundamentally work differently. Flathub uses the OSTree format to publish applications, whereas Fedora Flatpaks uses the OCI format.

OSTree format

OSTree (or libostree) is a tool to keep track of system binaries. Developers consider OSTree as “Git for binaries” because it is conceptually analogous to git. The OSTree format is the default format for Flatpak, which Flathub uses to publish packages and updates.

When downloading an application, OSTree checks the differences between the installed application (if any) and the updated application, then intelligently downloads and applies only the differences while keeping everything else unchanged, which reduces bandwidth. This process is called delta updates.

OCI format

Open Container Initiative (OCI) is an initiative by several organizations to standardize certain elements of containers. Fedora Flatpaks uses the OCI format to publish applications.

This format is similar to how Docker works, which makes it fairly easy to understand for developers who are already familiar with Docker. Furthermore, the OCI format allows the Fedora Project to extend the Fedora Registry, the Fedora Project’s Docker registry, by creating Flatpak applications as Docker images and publishing them to a Docker registry.

This avoids the burden and complications of having to use additional tools and maintain additional infrastructure just to run a Flatpak remote. Instead, the Fedora Project simply reuses the Fedora Registry, which makes maintenance much easier and more manageable.

Runtimes

Flatpak runtimes are sets of core dependencies that applications reuse without duplicating data, a technique also known as “deduplication”. Runtimes may be based on top of other runtimes, or built independently.

Flathub decentralizes these runtimes, meaning each runtime targets a specific type of application. For example, GTK applications use the GNOME runtime (org.gnome.Platform), Qt applications use the KDE runtime (org.kde.Platform), and almost everything else uses the freedesktop.org runtime (org.freedesktop.Platform). The respective organizations maintain these runtimes and publish them on Flathub. Both the GNOME and KDE runtimes are built on top of the freedesktop.org runtime.

Fedora Flatpaks, on the other hand, uses one runtime for everything, regardless of the size of the application. This means installing a single application from Fedora Flatpaks will download and install the whole Fedora runtime (org.fedoraproject.Platform).

Conclusion

In conclusion, we can see that there are several philosophical and technical differences between Fedora Flatpaks and Flathub.

Fedora Flatpaks focuses on fully taking advantage of the existing infrastructure by providing more to an average user without using more resources. In contrast, Flathub strives to make distributing/publishing applications and using them as painless as possible for the developers and for users.

Both remotes are quite impressive in how rapidly they have improved in so little time. We hope both remotes keep getting better, and become the standard across the majority of desktop Linux distributions.


FedoraShareYourScreen week (F35)

The Fedora Project, through the Marketing team, is happy to announce the first FedoraShareYourScreen week!

We know that even though the stock look of Fedora Linux is awesome, most people love to tweak and adapt their systems to their own workflow. We want to see how your Fedora Linux desktop looks.

FedoraShareYourScreen week

  • Share your screen with us! Take a screenshot of your desktop and share it. Use the hashtag #FedoraShareYourScreen and mention @fedora on Twitter or @thefedoraproject on Instagram. For Mastodon, just use the hashtag. Avoid showing personal and private info.
  • Whether you use a full desktop environment, just a window manager, or only the command line, we want to see how it looks! Share your favorite apps, configs, plugins, widgets and everything on your desktop (including your favorite wallpapers if they are SFW 😉).
  • At the end of the week we will publish a slide show on YouTube with all the screens collected during the week! Keep it family friendly; inappropriate content won’t be included in the video.

Feel proud of your customization and show it to us! From January 31st to February 6th we will be looking, commenting and sharing feedback on the screenshots shared with the hashtag #FedoraShareYourScreen on Twitter, Instagram and Mastodon!

When is this week?

It starts on January 31st and ends on February 6th. We will collect all the screenshots on February 7th and publish the slide show on February 10th.

Will this happen again?

Of course! We want to see everyone’s ideas with all the new stuff that Fedora Linux adds each release. We will be doing this in the middle of each Fedora Linux release cycle. This will give everyone time to customize the desktop and show it in all its shininess!


Sharing the computer screen in Gnome

You do not want someone else to be able to monitor or even control your computer, and you usually work hard to block any such attempts using various security mechanisms. Sometimes, however, you desperately need a friend or an expert to help you with a computer problem, but they are somewhere else. How do you show them? Should you take pictures of your screen with your mobile phone and send them? Should you record a video? Certainly not. You can share your screen with them and, for a while, possibly let them control your computer remotely. In this article, I will describe how to enable sharing the computer screen in Gnome.

Setting up the server to share its screen

A server is a computer that provides (serves) some content that other computers (clients) will consume. In this article the server runs Fedora Workstation with the standard Gnome desktop.

Switching on Gnome Screen Sharing

By default, the ability to share the computer screen in Gnome is off. In order to use it, you need to switch it on:

  1. Start Gnome Control Center.
  2. Click on the Sharing tab.

    Sharing switched off

  3. Switch on sharing with the slider in the upper right corner.
  4. Click on Screen sharing.

    Sharing switched on

  5. Switch on screen sharing using the slider in the upper left corner of the window.
  6. Check Allow connections to control the screen if you want to be able to control the screen from the client. Leaving this option unchecked allows only view-only access to the shared screen.
  7. If you want to manually confirm all incoming connections, select New connections must ask for access.
  8. If you want to allow connections from people who know a password (you will not be notified), select Require a password and fill in the password. The password can be at most 8 characters long.
  9. Check Show password to see what the current password is. For a little more protection, do not use your login password here, but choose a different one.
  10. If you have more networks available, you can choose on which one the screen will be accessible.

Setting up the client to display a remote screen

A client is a computer that connects to a service (or content) provided by a server. This demo also runs Fedora Workstation on the client, but the operating system should not matter much, as long as it runs a decent VNC client.

Check for visibility

Sharing the computer screen in Gnome between the server and the client requires a working network connection and a visible “route” between them. If you cannot make such a connection, you will not be able to view or control the shared screen of the server anyway and the whole process described here will not work.

To make sure a connection exists

Find out the IP address of the server.

Start Gnome Control Center, a.k.a Settings. Use the Menu in the upper right corner, or the Activities mode. When in Activities, type

settings

and click on the corresponding icon.

Select the Network tab.

Click on the Settings button (cogwheel) to display your network profile’s parameters.

Open the Details tab to see the IP address of your computer.

Go to your client’s terminal (the computer from which you want to connect) and find out if there is a connection between the client and the server using the ping command.

$ ping -c 5 192.168.122.225

Examine the command’s output. If it is similar to the example below, the connection between the computers exists.

PING 192.168.122.225 (192.168.122.225) 56(84) bytes of data.
64 bytes from 192.168.122.225: icmp_seq=1 ttl=64 time=0.383 ms
64 bytes from 192.168.122.225: icmp_seq=2 ttl=64 time=0.357 ms
64 bytes from 192.168.122.225: icmp_seq=3 ttl=64 time=0.322 ms
64 bytes from 192.168.122.225: icmp_seq=4 ttl=64 time=0.371 ms
64 bytes from 192.168.122.225: icmp_seq=5 ttl=64 time=0.319 ms

--- 192.168.122.225 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4083ms
rtt min/avg/max/mdev = 0.319/0.350/0.383/0.025 ms

You will probably experience no problems if both computers live on the same subnet, such as at home or at the office. Problems may occur when your server does not have a public IP address and cannot be reached from the external Internet. Unless you are the only administrator of your Internet access point, you will probably need to discuss your situation with your administrator or your ISP. Note that exposing your computer to the external Internet is always risky, and you must pay enough attention to protecting your computer from unwanted access.

Install the VNC client (Remmina)

Remmina is a graphical remote desktop client that you can use to connect to a remote server using several protocols, such as VNC, Spice, or RDP. Remmina is available from the Fedora repositories, so you can install it with either the dnf command or Software, whichever you prefer. With dnf, the following command will install the package and several dependencies.

$ sudo dnf install remmina

Connect to the server

If there is a connection between the server and the client, make sure the following is true:

  1. The computer is running.
  2. The Gnome session is running.
  3. The user with screen sharing enabled is logged in.
  4. The session is not locked, i.e. the user can work with the session.

Then you can attempt to connect to the session from the client:

  1. Start Remmina.
  2. Select the VNC protocol in the dropdown menu on the left side of the address bar.
  3. Type the IP address of the server into the address bar and hit Enter.

    Remmina Window

  4. When the connection starts, another connection window opens. Depending on the server settings, you may need to wait until the server user allows the connection, or you may have to provide the password.
  5. Type in the password and press OK.

    Remmina Connected to Server

  6. Press the Align with resolution button to resize the connection window to match the server resolution, or press the Full Screen button to resize the connection window over your entire desktop. When in fullscreen mode, notice the narrow white bar at the upper edge of the screen. That is the Remmina menu; move the mouse to it when you need to leave fullscreen mode or change some of the settings.

When you return to the server, you will notice a yellow icon in the top bar indicating that you are sharing the computer screen in Gnome. If you no longer wish to share the screen, open the menu, click on Screen is being shared, and then select Turn off to stop sharing the screen immediately.

Turn off menu item

Terminating the screen sharing when the session locks

By default, the connection will always terminate when the session locks. A new connection cannot be established until the session is unlocked.

On one hand, this sounds logical: if you share your screen with someone, you might not want them to use your computer when you are not around. On the other hand, this approach is not very useful if you want to control your own computer from a remote location, be it your bed in another room or your mother-in-law's place. There are two options to deal with this problem. You can either disable locking the screen entirely, or you can use a Gnome extension that supports unlocking the session via the VNC connection.

Disable screen lock

In order to disable the screen lock:

  1. Open the Gnome Control Center.
  2. Click on the Privacy tab.
  3. Select the Screen Lock settings.
  4. Switch off Automatic Screen Lock.

Now, the session will never lock (unless you lock it manually), so it will be possible to start a VNC connection to it.
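If you prefer the command line, the same toggle can be flipped with gsettings. This is a sketch; the schema and key below come from GNOME's desktop schemas and may differ on other versions:

```shell
# Disable the automatic screen lock (equivalent to the Settings switch above)
gsettings set org.gnome.desktop.screensaver lock-enabled false

# Verify the current value
gsettings get org.gnome.desktop.screensaver lock-enabled
```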

Use a Gnome extension to allow unlocking the session remotely

If you do not want to switch off locking the screen or you want to have an option to unlock the session remotely even when it is locked, you will need to install an extension that provides this functionality as such behavior is not allowed by default.

To install the extension:

  1. Open the Firefox browser and point it to the Gnome extension page.

    Gnome Extensions Page

  2. In the upper part of the page, find an info block that tells you to install GNOME Shell integration for Firefox.
  3. Install the Firefox extension by clicking on Click here to install browser extension.
  4. After the installation, notice the Gnome logo in the menu part of Firefox.
  5. Click on the Gnome logo to navigate back to the extension page.
  6. Search for allow locked remote desktop.
  7. Click on the displayed item to go to the extension’s page.
  8. Switch the extension ON by using the on/off button on the right.

    Extension selected

Now it will be possible to start a VNC connection at any time. Note that you will need to know the session password to unlock the session. If your VNC password differs from the session password, your session still has a little protection.

Conclusion

This article described how to enable sharing the computer screen in Gnome, and covered the difference between limited (view-only) and unlimited (full) access. This solution, however, should in no case be considered a correct approach to enabling remote access for serious tasks, such as administering a production server. Why?

  1. The server will always keep its control mode. Anyone working with the server session will be able to control the mouse and keyboard.
  2. If the session is locked, unlocking it from the client will also unlock it on the server. It will also wake up the display from the stand-by mode. Anybody who can see your server screen will be able to watch what you are doing at the moment.
  3. The VNC protocol itself is not encrypted or otherwise protected, so anything you send over it can be intercepted.

There are several ways you can set up a protected VNC connection. For example, you could tunnel it through SSH for better security. However, these are beyond the scope of this article.
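As an illustration of the SSH tunneling approach, a minimal sketch (the user name, address, and ports are placeholders; GNOME's built-in VNC server listens on port 5900 by default):

```shell
# Forward local port 5901 to the VNC port 5900 on the server.
# -N: do not run a remote command, just keep the tunnel open.
ssh -N -L 5901:localhost:5900 user@192.168.122.225
```

With the tunnel running, point Remmina at localhost:5901 instead of the server address; the VNC traffic then travels inside the encrypted SSH session.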

Disclaimer: The above workflow worked without problems on Fedora 35 using several virtual machines. If it does not work for you, you may have hit a bug. Please report it.


Quarkus and Mutiny

Quarkus is a foundation for building Java-based applications, whether for the desktop, server, or cloud. An excellent write-up on usage can be found at https://fedoramagazine.org/using-the-quarkus-framework-on-fedora-silverblue-just-a-quick-look/. This article is a primer on coding asynchronous processes using Quarkus and Mutiny.

So what is Mutiny? Mutiny allows streaming of objects in an event-driven flow. The stream might originate from a local process or something remote like a database. Mutiny streaming is accomplished with either a Uni or a Multi object. This article uses a Uni to stream one object: a List containing many integers. A subscribe pattern initiates the stream.

A traditional program executes and returns results before continuing. Mutiny, by contrast, easily supports non-blocking code that runs processes concurrently. RxJava, ReactiveX, and even native Java are alternatives, but Mutiny is easy to use (the exposed API is minimal) and it is the default in many of the Quarkus extensions. The two extensions used here are quarkus-mutiny and quarkus-vertx. Vert.x is the underlying framework wrapped by Quarkus, and the Promise classes are supplied by quarkus-vertx. A promise returns a Uni stream when the process is complete. To get started, install a Java JDK and Maven.

Bootstrap

The minimum requirement is either Java-11 or Java-17 with Maven.

With Java-11:

$ sudo dnf install -y java-11-openjdk-devel maven

With Java-17:

$ sudo dnf install -y java-17-openjdk-devel maven

Bootstrap Quarkus and Mutiny with the Maven call below. The extension quarkus-vertx is not included to demonstrate how to add additional extensions. Locate an appropriate directory before executing. The directory mutiny-demo will be created with the initial application.

$ mvn io.quarkus.platform:quarkus-maven-plugin:2.6.2.Final:create \
    -DprojectGroupId=fedoramag \
    -DprojectArtifactId=mutiny-demo \
    -DprojectVersion=1.0.0 \
    -DclassName="org.demo.mag.Startup" \
    -Dextensions="mutiny" \
    -DbuildTool=gradle

Now that Gradle is bootstrapped, other extensions can be added. In the mutiny-demo directory execute:

$ ./gradlew addExtension --extensions='quarkus-vertx'

To view all available extensions execute:

$ ./gradlew listExtensions

To get all of the defined Gradle tasks execute:

$ ./gradlew tasks

Mutiny Code

The className entry on the Quarkus bootstrap is org.demo.mag.Startup, which creates the file src/main/java/org/demo/mag/Startup.java. Replace the contents with the following code:

package org.demo.mag;

import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.function.IntSupplier;
import java.util.stream.Collectors;
import java.util.stream.IntStream;

import io.quarkus.runtime.Quarkus;
import io.quarkus.runtime.QuarkusApplication;
import io.quarkus.runtime.annotations.QuarkusMain;
import io.smallrye.mutiny.Uni;
import io.smallrye.mutiny.tuples.Tuple2;
import io.vertx.mutiny.core.Promise;

@QuarkusMain
public class Startup implements QuarkusApplication {

    public static void main(String... args) {
        Quarkus.run(Startup.class, args);
    }

    @Override
    public int run(String... args) throws InterruptedException, ExecutionException {
        final Promise<String> finalMessage = Promise.promise();
        final String elapsedTime = "Elapsed time for asynchronous method: %d milliseconds";
        final int[] syncResults = {0};

        Application.runTraditionalMethod();

        final Long millis = System.currentTimeMillis();
        Promise<List<Integer>> promiseRange = Application.getRange(115000);
        Promise<Tuple2<Promise<List<Integer>>, Promise<List<Integer>>>> promiseCombined =
                Application.getCombined(10000, 15000);
        Promise<List<Integer>> promiseReverse = Application.getReverse(24000);

        /*
         * Retrieve the Uni stream and on the complete event obtain the List<Integer>
         */
        promiseRange.future().onItem().invoke(list -> {
            System.out.println("Primes Range: " + list.size());
            if (syncResults[0] == 1) {
                finalMessage.complete(String.format(elapsedTime, System.currentTimeMillis() - millis));
            } else {
                syncResults[0] = 2;
            }
            return;
        }).subscribeAsCompletionStage();

        promiseReverse.future().onItem().invoke(list -> {
            System.out.println("Primes Reverse: " + list.size());
            return;
        }).subscribeAsCompletionStage();

        /*
         * Notice that this finishes before the other two prime generators (smaller lists).
         */
        promiseCombined.future().onItem().invoke(p -> {
            /*
             * Notice that "Combined Range" displays first
             */
            p.getItem2().future().invoke(reverse -> {
                System.out.println("Combined Reverse: " + reverse.size());
                return;
            }).subscribeAsCompletionStage();
            p.getItem1().future().invoke(range -> {
                System.out.println("Combined Range: " + range.size());
                /*
                 * Nesting promises to get multiple results together
                 */
                p.getItem2().future().invoke(reverse -> {
                    System.out.println(String.format("Asserting that expected primes are equal: %d -- %d",
                            range.get(0), reverse.get(reverse.size() - 1)));
                    assert range.get(0) == reverse.get(reverse.size() - 1) : "Generated primes incorrect";
                    if (syncResults[0] == 2) {
                        finalMessage.complete(String.format(elapsedTime, System.currentTimeMillis() - millis));
                    } else {
                        syncResults[0] = 1;
                    }
                    return;
                }).subscribeAsCompletionStage();
                return;
            }).subscribeAsCompletionStage();
            return;
        }).subscribeAsCompletionStage();

        // Note: on very fast machines this may not display first.
        System.out.println("This should display first - indicating asynchronous code.");

        // blocking for final message
        String elapsedMessage = finalMessage.futureAndAwait();
        System.out.println(elapsedMessage);
        return 0;
    }

    public static class Application {

        public static Promise<List<Integer>> getRange(int n) {
            final Promise<List<Integer>> promise = Promise.promise();
            // non-blocking - this is only for demonstration (emulating some remote call)
            new Thread(() -> {
                try {
                    /*
                     * RangeGeneratedPrimes.primes is blocking, only returns when done
                     */
                    promise.complete(RangeGeneratedPrimes.primes(n));
                } catch (Exception exception) {
                    Thread.currentThread().interrupt();
                }
            }).start();
            return promise;
        }

        public static Promise<List<Integer>> getReverse(int n) {
            final Promise<List<Integer>> promise = Promise.promise();
            new Thread(() -> {
                try {
                    // Generating a new object stream
                    promise.complete(ReverseGeneratedPrimes.primes(n));
                } catch (Exception exception) {
                    Thread.currentThread().interrupt();
                }
            }).start();
            return promise;
        }

        public static Promise<Tuple2<Promise<List<Integer>>, Promise<List<Integer>>>> getCombined(int ran, int rev) {
            final Promise<Tuple2<Promise<List<Integer>>, Promise<List<Integer>>>> promise = Promise.promise();
            new Thread(() -> {
                try {
                    Uni.combine().all()
                            /*
                             * Notice that these are running concurrently
                             */
                            .unis(Uni.createFrom().item(Application.getRange(ran)),
                                    Uni.createFrom().item(Application.getReverse(rev)))
                            .asTuple().onItem().call(tuple -> {
                                promise.complete(tuple);
                                return Uni.createFrom().nullItem();
                            })
                            .onFailure().invoke(Throwable::printStackTrace)
                            .subscribeAsCompletionStage();
                } catch (Exception exception) {
                    Thread.currentThread().interrupt();
                }
            }).start();
            return promise;
        }

        public static void runTraditionalMethod() {
            Long millis = System.currentTimeMillis();
            System.out.println("Traditional-1: " + RangeGeneratedPrimes.primes(115000).size());
            System.out.println("Traditional-2: " + RangeGeneratedPrimes.primes(10000).size());
            System.out.println("Traditional-3: " + ReverseGeneratedPrimes.primes(15000).size());
            System.out.println("Traditional-4: " + ReverseGeneratedPrimes.primes(24000).size());
            System.out.println(String.format("Elapsed time for traditional method: %d milliseconds\n",
                    System.currentTimeMillis() - millis));
        }
    }

    public interface Primes {
        static List<Integer> primes(int n) {
            return null;
        }
    }

    public abstract static class PrimeBase {
        static boolean isPrime(int number) {
            return IntStream.rangeClosed(2, (int) (Math.sqrt(number)))
                    .allMatch(n -> number % n != 0);
        }
    }

    public static class RangeGeneratedPrimes extends PrimeBase implements Primes {
        public static List<Integer> primes(int n) {
            return IntStream.rangeClosed(2, n)
                    .filter(x -> isPrime(x)).boxed()
                    .collect(Collectors.toList());
        }
    }

    public static class ReverseGeneratedPrimes extends PrimeBase implements Primes {
        public static List<Integer> primes(int n) {
            List<Integer> list = IntStream.generate(getReverseList(n)).limit(n - 1)
                    .filter(x -> isPrime(x)).boxed()
                    .collect(Collectors.toList());
            return list;
        }

        private static IntSupplier getReverseList(int startValue) {
            IntSupplier reverse = new IntSupplier() {
                private int start = startValue;

                public int getAsInt() {
                    return this.start--;
                }
            };
            return reverse;
        }
    }
}

Testing

The Quarkus install showcases the quarkus-resteasy extension by default. Since we are not using it, replace the contents of src/test/java/org/demo/mag/StartupTest.java with:

package org.demo.mag;

import java.util.List;

import org.demo.mag.Startup;
import org.junit.jupiter.api.Assertions;
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

import io.quarkus.test.junit.QuarkusTest;
import io.vertx.mutiny.core.Promise;

@QuarkusTest
public class StartupTest {

    Promise<List<Integer>> promise = Promise.promise();
    Promise<Void> promiseAndAwait = Promise.promise();
    List<Integer> testValue;

    @Tag("DEV")
    @Test
    public void testVerifyAsync() {
        Assertions.assertEquals(null, testValue);
        promise.future().onItem().invoke(list -> {
            testValue = list;
            promiseAndAwait.complete();
        }).subscribeAsCompletionStage();
        Assertions.assertEquals(null, testValue);
        promise.complete(Startup.ReverseGeneratedPrimes.primes(100));
        promiseAndAwait.futureAndAwait();
        Assertions.assertNotNull(testValue);
        Assertions.assertEquals(2, testValue.get(testValue.size() - 1));
    }
}

Optional

To reduce download volume, remove the following entries from the build.gradle file.

implementation 'io.quarkus:quarkus-resteasy'
testImplementation 'io.rest-assured:rest-assured'

Installation and Execution

The next step is to build the project. This includes downloading all dependencies as well as compiling and executing the Startup.java program. Everything is included in one file for brevity.

$ ./gradlew quarkusDev

The above command produces a banner and console output from Quarkus and the program.

This is development mode. Notice the prompt: “Press [space] to restart”. After making edits, hit the space bar to re-compile and execute. Enter q to quit.

To build an Uber jar (all dependencies included) execute:

$ ./gradlew quarkusBuild -Dquarkus.package.type=uber-jar

This creates a jar in the build directory named mutiny-demo-1.0.0-runner.jar. To run the jar file, enter the following command.

$ java -jar ./build/mutiny-demo-1.0.0-runner.jar

To remove the banner and console logs, add the following lines to the src/main/resources/application.properties file.

%prod.quarkus.log.console.enable=false
%prod.quarkus.banner.enabled=false

The output might look similar to the following.

Traditional-1: 9592
Traditional-2: 1229
Traditional-3: 2262
Traditional-4: 2762
Elapsed time for traditional method: 67 milliseconds
Combined Range: 1229
This should display first - indicating asynchronous code.
Combined Reverse: 2262
Primes Reverse: 2762
Asserting that expected primes are equal: 2 -- 2
Primes Range: 9592
Elapsed time for asynchronous method: 52 milliseconds

You will still get the banner and logs in development mode.

To go one step further, Quarkus can generate an executable out of the box using GraalVM.

$ ./gradlew build -Dquarkus.package.type=native

The executable generated by the above command will be ./build/mutiny-demo-1.0.0-runner.

The default GraalVM is a downloaded container. To override this, set the environment variable GRAALVM_HOME to your local install. Don’t forget to install the native-image with the following command.

$ ${GRAALVM_HOME}/bin/gu install native-image

The Code

The code generates prime numbers over a range, in reverse from a limit, and as a combination of the two. For example, consider the range call: “Promise<List<Integer>> promiseRange = Application.getRange(115000);”.

This generates all primes between 1 and 115000 and displays the number of primes in the range. It is executed first but displays its results last. The code near the end of the main method — System.out.println (“This should display first – indicating asynchronous code.”); — displays first. This is an example of asynchronous code. We can run multiple processes concurrently. However, the order of completion is unpredictable. The traditional calls are orderly and the results can be collected when completed.

Execution can be blocked until a result is returned. The code does exactly that to display the asynchronous elapsed time message. At the end of the main method we have: “String elapsedMessage = finalMessage.futureAndAwait();”. The message arrives from either promiseRange or promiseCombined — the two longest running processes. But even this is not guaranteed. The state of the underlying OS is unknown, and one of the other processes might finish last. Normally, asynchronous calls are nested to coordinate results. This is demonstrated in the promiseCombined promise to evaluate the results of the range and reversed primes.

Conclusion

The comparison between the traditional method and asynchronous method suggests that the asynchronous method can be up to 25% faster on a modern computer. An older CPU that does not have the resources and computing power produces results faster with the traditional method. If a computer has many cores, why not use them‽

More documentation can be found on the following web sites.


Network address translation part 1 – packet tracing

This is the first post in a series about network address translation (NAT). Part 1 shows how to use the iptables/nftables packet tracing feature to find the source of NAT-related connectivity problems.

Introduction

Network address translation is one way to expose containers or virtual machines to the wider internet. Incoming connection requests have their destination address rewritten to a different one. Packets are then routed to a container or virtual machine instead. The same technique can be used for load-balancing where incoming connections get distributed among a pool of machines.
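For orientation, a destination NAT rule of the kind debugged later in this article might look like the following nftables ruleset fragment (a sketch; the addresses and ports are placeholders):

```
table ip nat {
    chain prerouting {
        type nat hook prerouting priority -100;
        # rewrite connections to port 1222 so they reach the container
        ip daddr 10.1.2.3 tcp dport 1222 dnat to 192.168.70.10:22
    }
}
```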

Connection requests fail when network address translation is not working as expected: the wrong service is exposed, connections end up in the wrong container, requests time out, and so on. One way to debug such problems is to check that the incoming request matches the expected or configured translation.

Connection tracking

NAT involves more than just changing the IP addresses or port numbers. For instance, when mapping address X to Y, there is no need to add a rule to do the reverse translation. A netfilter system called “conntrack” recognizes packets that are replies to an existing connection. Each connection has its own NAT state attached to it. Reverse translation is done automatically.

Ruleset evaluation tracing

The nftables utility (and, to a lesser extent, iptables) allows examining how a packet is evaluated and which rules in the ruleset it matched. To use this feature, special “trace rules” are inserted at a suitable location. These rules select the packet(s) that should be traced. Let's assume that a host coming from IP address C is trying to reach the service on address S and port P. We want to know which NAT transformation is picked, which rules get checked, and whether the packet gets dropped somewhere.

Because we are dealing with incoming connections, add a rule to the prerouting hook point. Prerouting means that the kernel has not yet made a decision on where the packet will be sent. A change to the destination address often results in packets being forwarded rather than handled by the host itself.

Initial setup

 
# nft 'add table inet trace_debug'
# nft 'add chain inet trace_debug trace_pre { type filter hook prerouting priority -200000; }'
# nft "insert rule inet trace_debug trace_pre ip saddr $C ip daddr $S tcp dport $P tcp flags syn limit rate 1/second meta nftrace set 1"

The first rule adds a new table. This allows easier removal of the trace and debug rules later. A single “nft delete table inet trace_debug” will be enough to undo all rules and chains added to the temporary table during debugging.

The second rule creates a base hook before routing decisions have been made (prerouting) and with a negative priority value to make sure it will be evaluated before connection tracking and the NAT rules.

The only important part, however, is the last fragment of the third rule: “meta nftrace set 1”. This enables tracing events for all packets that match the rule. Be as specific as possible to get a good signal-to-noise ratio. Consider adding a rate limit to keep the number of trace events at a manageable level. A limit of one packet per second or per minute is a good choice. The provided example traces all syn and syn/ack packets coming from host $C and going to destination port $P on the destination host $S. The limit clause prevents event flooding. In most cases a trace of a single packet is enough.

The procedure is similar for iptables users. An equivalent trace rule looks like this:

 
# iptables -t raw -I PREROUTING -s $C -d $S -p tcp --tcp-flags SYN SYN  --dport $P  -m limit --limit 1/s -j TRACE

Obtaining trace events

Users of the native nft tool can just run the nft trace mode:

 
# nft monitor trace

This prints out the received packet and all rules that match the packet (use CTRL-C to stop it):

 
trace id f0f627 ip raw prerouting  packet: iif "veth0" ether saddr ..

We will examine this in more detail in the next section. If you use iptables, first check the installed version via the “iptables --version” command. Example:

 
# iptables --version
iptables v1.8.5 (legacy)

(legacy) means that trace events are logged to the kernel ring buffer. You will need to check dmesg or journalctl. The debug output lacks some information but is conceptually similar to the one provided by the new tools. You will need to check the rule line numbers that are logged and correlate those to the active iptables ruleset yourself. If the output shows (nf_tables), you can use the xtables-monitor tool:
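With a (legacy) build, the trace events can be pulled from the kernel log like this (a sketch; legacy iptables prefixes each event with “TRACE:”):

```shell
# Trace events from the kernel ring buffer ...
sudo dmesg | grep "TRACE:"

# ... or from the journal (-k limits output to kernel messages)
sudo journalctl -k | grep "TRACE:"
```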

 
# xtables-monitor --trace

If the command only shows the version, you will also need to look at dmesg/journalctl instead. xtables-monitor uses the same kernel interface as the nft monitor trace tool. Their only difference is that it will print events in iptables syntax and that, if you use a mix of both iptables-nft and nft, it will be unable to print rules that use maps/sets and other nftables-only features.

Example

Let's assume you'd like to debug a non-working port forward to a virtual machine or container. The command “ssh -p 1222 10.1.2.3” should provide remote access to a container running on the machine with that address, but the connection attempt times out.

You have access to the host running the container image. Log in and add a trace rule. See the earlier example on how to add a temporary debug table. The trace rule looks like this:

 
nft "insert rule inet trace_debug trace_pre ip daddr 10.1.2.3 tcp dport 1222 tcp flags syn limit rate 6/minute meta nftrace set 1"

After the rule has been added, start nft in trace mode: nft monitor trace, then retry the failed ssh command. This will generate a lot of output if the ruleset is large. Do not worry about the large example output below – the next section will do a line-by-line walkthrough.

 
trace id 9c01f8 inet trace_debug trace_pre packet: iif "enp0" ether saddr .. ip saddr 10.2.1.2 ip daddr 10.1.2.3 ip protocol tcp tcp dport 1222 tcp flags == syn
trace id 9c01f8 inet trace_debug trace_pre rule ip daddr 10.2.1.2 tcp dport 1222 tcp flags syn limit rate 6/minute meta nftrace set 1 (verdict continue)
trace id 9c01f8 inet trace_debug trace_pre verdict continue
trace id 9c01f8 inet trace_debug trace_pre policy accept
trace id 9c01f8 inet nat prerouting packet: iif "enp0" ether saddr .. ip saddr 10.2.1.2 ip daddr 10.1.2.3 ip protocol tcp  tcp dport 1222 tcp flags == syn
trace id 9c01f8 inet nat prerouting rule ip daddr 10.1.2.3  tcp dport 1222 dnat ip to 192.168.70.10:22 (verdict accept)
trace id 9c01f8 inet filter forward packet: iif "enp0" oif "veth21" ether saddr .. ip daddr 192.168.70.10 .. tcp dport 22 tcp flags == syn tcp window 29200
trace id 9c01f8 inet filter forward rule ct status dnat jump allowed_dnats (verdict jump allowed_dnats)
trace id 9c01f8 inet filter allowed_dnats rule drop (verdict drop)
trace id 20a4ef inet trace_debug trace_pre packet: iif "enp0" ether saddr .. ip saddr 10.2.1.2 ip daddr 10.1.2.3 ip protocol tcp tcp dport 1222 tcp flags == syn

Line-by-line trace walkthrough

The first line generated is the packet id that triggered the subsequent trace output. Even though this line is in the same grammar as the nft rule syntax, it contains the header fields of the packet that was just received. You will find the name of the receiving network interface (here named “enp0”), the source and destination mac addresses of the packet, the source ip address (which can be important – maybe the reporter is connecting from a wrong/unexpected host), and the tcp source and destination ports. You will also see a “trace id” at the very beginning. This identification tells which incoming packet matched a rule. The second line contains the first rule matched by the packet:

 
trace id 9c01f8 inet trace_debug trace_pre rule ip daddr 10.2.1.2 tcp dport 1222 tcp flags syn limit rate 6/minute meta nftrace set 1 (verdict continue)

This is the just-added trace rule. The first rule shown is always the one that activates packet tracing. If there were other rules before this one, we would not see them. If there is no trace output at all, the trace rule itself was never reached or did not match. The next two lines tell that there are no further rules and that the “trace_pre” hook allows the packet to continue (verdict accept).

The next matching rule is

 
trace id 9c01f8 inet nat prerouting rule ip daddr 10.1.2.3  tcp dport 1222 dnat ip to 192.168.70.10:22 (verdict accept)

This rule sets up a mapping to a different address and port. Provided 192.168.70.10 really is the address of the desired VM, there is no problem so far. If it's not the correct VM address, either the address was mistyped or the wrong NAT rule was matched.

IP forwarding

Next we can see that the IP routing engine told the IP stack that the packet needs to be forwarded to another host:

trace id 9c01f8 inet filter forward packet: iif "enp0" oif "veth21" ether saddr .. ip daddr 192.168.70.10 .. tcp dport 22 tcp flags == syn tcp window 29200

This is another dump of the packet that was received, but there are a couple of interesting changes. There is now an output interface set. This did not exist previously because the earlier rules are located before the routing decision (the prerouting hook). The id is the same as before, so this is still the same packet, but the address and port have already been altered. In case there are rules that match “tcp dport 1222”, they will no longer have any effect on this packet.

If the line contains no output interface (oif), the routing decision steered the packet to the local host. Route debugging is a different topic and not covered here.

trace id 9c01f8 inet filter forward rule ct status dnat jump allowed_dnats (verdict jump allowed_dnats)

This tells that the packet matched a rule that jumps to a chain named “allowed_dnats”. The next line shows the source of the connection failure:

 
trace id 9c01f8 inet filter allowed_dnats rule drop (verdict drop)

The rule unconditionally drops the packet, so no further log output for the packet exists. The next output line is the result of a different packet:

trace id 20a4ef inet trace_debug trace_pre packet: iif "enp0" ether saddr .. ip saddr 10.2.1.2 ip daddr 10.1.2.3 ip protocol tcp tcp dport 1222 tcp flags == syn

The trace id is different, but the packet has the same content. This is a retransmit attempt: the first packet was dropped, so TCP retries. Ignore the remaining output; it does not contain new information. Time to inspect that chain.

Ruleset investigation

The previous section found that the packet is dropped in a chain named “allowed_dnats” in the inet filter table. Time to look at it:

 
# nft list chain inet filter allowed_dnats
table inet filter {
 chain allowed_dnats {
  meta nfproto ipv4 ip daddr . tcp dport @allow_in accept
  drop
   }
}

The rule that accepts packets in the @allow_in set did not show up in the trace log. Double-check that the address is in the @allow_in set by listing the element:

 
# nft "get element inet filter allow_in { 192.168.70.10 . 22 }"
Error: Could not process rule: No such file or directory

As expected, the address-service pair is not in the set. We add it now.

 
# nft "add element inet filter allow_in { 192.168.70.10 . 22 }"

Run the query command again; it now returns the newly added element.

# nft "get element inet filter allow_in { 192.168.70.10 . 22 }"
table inet filter {
	set allow_in {
		type ipv4_addr . inet_service
		elements = { 192.168.70.10 . 22 }
	}
}

The ssh command should now work and the trace output reflects the change:

trace id 497abf58 inet filter forward rule ct status dnat jump allowed_dnats (verdict jump allowed_dnats)
trace id 497abf58 inet filter allowed_dnats rule meta nfproto ipv4 ip daddr . tcp dport @allow_in accept (verdict accept)
trace id 497abf58 ip postrouting packet: iif "enp0" oif "veth21" ether .. trace id 497abf58 ip postrouting policy accept

This shows the packet passes the last hook in the forwarding path – postrouting.

If the connection still does not work, the problem lies later in the packet pipeline, outside of the nftables ruleset.
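Once debugging is complete, the temporary trace rules should be removed again. If they were added in a dedicated table, as the trace_debug table in the output above suggests, a single command removes the table along with all of its chains and rules:

```shell
# Delete the dedicated trace table and everything in it
nft delete table inet trace_debug
```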

Summary

This article introduced how to check for packet drops and other sources of connectivity problems with the nftables trace mechanism. A later post in the series shows how to inspect the connection tracking subsystem and the NAT information that may be attached to tracked flows.


A 2020 love letter to the Fedora community

[This message comes directly from the desk of Matthew Miller, the Fedora Project Leader. — Ed.]

When I wrote about COVID-19 and the Fedora community all the way back on March 16, it was very unclear how 2020 was going to turn out. I naively hoped that this would be a short event and that life would return to normal soon—we didn’t take our Flock to Fedora in-person conference off the table for another month. But of course, things got worse, and we had to reimagine Flock as a virtual event on short notice. We weren’t even sure if we’d be able to make our regular Fedora Linux releases on schedule.

Even without the pandemic, 2020 was already destined to be an interesting year. Because Red Hat moved the datacenter where most of Fedora’s servers live, our infrastructure team had to move our servers across the continent. Fedora 33 had the largest planned change set of any Fedora Linux release—and not small things either. We changed the default filesystem for desktop variants to BTRFS and promoted Fedora IoT to an Edition. We also began Fedora ELN—a new process which does a nightly build of Fedora’s development branch in the same configuration Red Hat would use to compose Red Hat Enterprise Linux. And Fedora’s popularity keeps growing, which means more users to support and more new community members to onboard. It’s great to be successful, but we also need to keep up with ourselves!

So, it was already busy. And then the pandemic came along. In many ways, we’re fortunate: we’re already a global community used to distributed work, and we already use chat-based meetings and video calls to collaborate. But it made the datacenter move more difficult. The closure of Red Hat offices meant that some of the QA hardware was inaccessible. We couldn’t gather together in person like we’re used to doing. And of course, we all worried about the safety of our friends and family. Isolation and disruption just plain make everything harder.

I’m always proud of the Fedora community, but this year, even more so. In a time of great stress and uncertainty, we came together and did our best work. Flock to Fedora became Nest With Fedora. Thanks to the heroic effort of Marie Nordin and many others, it was a resounding success. We had way more attendees than we’ve ever had at an in-person Flock, which made our community more accessible to contributors who can’t always join us. And we followed up with our first-ever virtual release party and an online Fedora Women’s Day, both also resounding successes.

And then, we shipped both Fedora 32 and Fedora 33 on time, extending our streak to six releases—three straight years of hitting our targets.

The work we all did has not gone unnoticed. You already know that Lenovo is shipping Fedora Workstation on select laptop models. I’m happy to share that two of the top Linux podcasts have recognized our work—particularly Fedora 33—in their year-end awards. LINUX Unplugged listeners voted Fedora Linux their favorite Linux desktop distribution. Three out of the four Destination Linux hosts chose Fedora as the best distro of the year, specifically citing the exciting work we’ve done on Fedora 33 and the strength of our community. In addition, OMG! Ubuntu! included Fedora 33 in its “5 best Linux distribution releases of 2020” and TechRepublic called Fedora 33 “absolutely fantastic”.

Like everyone, I’m looking ahead to 2021. The next few months are still going to be hard, but the amazing work on mRNA and other new vaccine technology means we have clear reasons to be optimistic. Through this trying year, the Fedora community is stronger than ever, and we have some great things to carry forward into better times: a Nest-like virtual event to complement Flock, online release parties, our weekly Fedora Social Hour, and of course the CPE team’s great trivia events.

In 2021, we’ll keep doing the great work to push the state of the art forward. We’ll be bold in bringing new features into Fedora Linux. We’ll try new things even when we’re worried that they might not work, and we’ll learn from failures and try again. And we’ll keep working to make our community and our platform inclusive, welcoming, and accessible to all.

To everyone who has contributed to Fedora in any way, thank you. Packagers, blog writers, doc writers, testers, designers, artists, developers, meeting chairs, sysadmins, Ask Fedora answerers, D&I team, and more—you kicked ass this year and it shows. Stay safe and healthy, and we’ll meet again in person soon.

Oh, one more thing! Join us for a Fedora Social Hour New Year’s Eve Special. We’ll meet at 23:30 UTC today in Hopin (the platform we used for Nest and other events). Hope to see you there!


Choose between Btrfs and LVM-ext4

Fedora 33 introduced a new default filesystem in desktop variants, Btrfs. After years of Fedora using ext4 on top of Logical Volume Manager (LVM) volumes, this is a big shift. Changing the default file system requires compelling reasons. While Btrfs is an exciting next-generation file system, ext4 on LVM is well established and stable. This guide aims to explore the high-level features of each and make it easier to choose between Btrfs and LVM-ext4.

In summary

The simplest advice is to stick with the defaults. A fresh Fedora 33 install defaults to Btrfs and upgrading a previous Fedora release continues to use whatever was initially installed, typically LVM-ext4. For an existing Fedora user, the cleanest way to get Btrfs is with a fresh install. However, a fresh install is much more disruptive than a simple upgrade. Unless there is a specific need, this disruption could be unnecessary. The Fedora development team carefully considered both defaults, so be confident with either choice.

What about all the other file systems?

There are many file systems available for Linux, and the number explodes after adding in combinations of volume managers, encryption methods, and storage mechanisms. So why focus on Btrfs and LVM-ext4? For the Fedora audience, these two setups are likely to be the most common. Ext4 on top of LVM became the default disk layout in Fedora 11, and ext3 on top of LVM came before that.

Now that Btrfs is the default for Fedora 33, the vast majority of existing users will be looking at whether they should stay where they are or make the jump forward. Faced with a fresh Fedora 33 install, experienced Linux users may wonder whether to use this new file system or fall back to what they are familiar with. So out of the wide field of possible storage options, many Fedora users will wonder how to choose between Btrfs and LVM-ext4.

Commonalities

Despite core differences between the two setups, Btrfs and LVM-ext4 actually have a lot in common. Both are mature and well-tested storage technologies. LVM has been in continuous use since the early days of Fedora Core and ext4 became the default in 2009 with Fedora 11. Btrfs merged into the mainline Linux kernel in 2009 and Facebook uses it widely. SUSE Linux Enterprise 12 made it the default in 2014. So there is plenty of production run time there as well.

Both systems do a great job preventing file system corruption due to unexpected power outages, even though the way they accomplish it is different. Supported configurations include single drive setups as well as spanning multiple devices, and both are capable of creating nearly instant snapshots. A variety of tools exist to help manage either system, both with the command line and graphical interfaces. Either solution works equally well on home desktops and on high-end servers.

Advantages of LVM-ext4

Show the relationship of LVM-ext4 filesystem to hard-drive partitions and mounted directories.
Structure of ext4 on LVM

The ext4 file system focuses on high performance and scalability, without a lot of extra frills. It is effective at preventing fragmentation over extended periods of time and provides nice tools for when it does happen. Ext4 is rock solid because it is built on the previous ext3 file system, bringing with it all those years of in-system testing and bug fixes.

Most of the advanced capabilities in the LVM-ext4 setup come from LVM itself. LVM sits “below” the file system, which means it supports any file system. Logical volumes (LV) are generic block devices so virtual machines can use them directly. This flexibility allows each logical volume to use the right file system, with the right options, for a variety of situations. This layered approach also honors the Unix philosophy of small tools working together.

The volume group (VG) abstraction from the hardware allows LVM to create flexible logical volumes. Each LV pulls from the same storage pool but has its own configuration. Resizing volumes is a lot easier than resizing physical partitions, as there is no limitation of ordered data placement. LVM physical volumes (PV) can be any number of partitions and can even move between devices while the system is running.
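As a sketch of that workflow, the following commands build a volume group from a physical volume and grow a logical volume later; all device and volume names are hypothetical:

```shell
# Turn a partition into a physical volume and build a volume group on it
pvcreate /dev/sdb1
vgcreate vg_data /dev/sdb1

# Carve out a 20 GiB logical volume and format it with ext4
lvcreate -L 20G -n lv_home vg_data
mkfs.ext4 /dev/vg_data/lv_home

# Grow the logical volume and resize the file system (-r) in one step
lvextend -L +10G -r /dev/vg_data/lv_home
```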

LVM supports read-only and read-write snapshots, which make it easy to create consistent backups from active systems. Each snapshot has a defined size, and changes to the source or snapshot volume use space from there. Alternatively, logical volumes can be part of a thinly provisioned pool. This allows snapshots to automatically use data from a pool instead of consuming fixed-size chunks defined at volume creation.
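A snapshot sketch, assuming a hypothetical volume vg_data/lv_home:

```shell
# Reserve 2 GiB to hold blocks that change after the snapshot is taken
lvcreate -s -L 2G -n lv_home_snap /dev/vg_data/lv_home

# Remove the snapshot once a backup has been made from it
lvremove /dev/vg_data/lv_home_snap
```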

Multiple devices with LVM

LVM really shines when there are multiple devices. It has native support for most RAID levels and each logical volume can have a different RAID level. LVM will automatically choose appropriate physical devices for the RAID configuration or the user can specify it directly. Basic RAID support includes data striping for performance (RAID0) and mirroring for redundancy (RAID1). Logical volumes can also use advanced setups like RAID5, RAID6, and RAID10. LVM RAID support is mature because under the hood LVM uses the same device-mapper (dm) and multiple-device (md) kernel support used by mdadm.
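For instance, a mirrored logical volume can be requested directly at creation time; this sketch assumes a hypothetical volume group vg_data spanning at least two physical devices:

```shell
# One mirror (-m 1) keeps a full copy of the data on two devices
lvcreate --type raid1 -m 1 -L 50G -n lv_mirror vg_data
```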

Logical volumes can also be cached volumes for systems with both fast and slow drives. A classic example is a combination of SSD and spinning-disk drives. Cached volumes use faster drives for more frequently accessed data (or as a write cache), and the slower drive for bulk data.

The large number of stable features in LVM and the reliable performance of ext4 are a testament to how long they have been in use. Of course, with more features comes complexity. It can be challenging to find the right options for the right feature when configuring LVM. For single drive desktop systems, features of LVM like RAID and cache volumes don’t apply. However, logical volumes are more flexible than physical partitions and snapshots are useful. For normal desktop use, the complexity of LVM can also be a barrier to recovering from issues a typical user might encounter.

Advantages of Btrfs

Show the relationship of Btrfs filesystem to hard-drive partitions and mounted directories.
Btrfs Structure

Lessons learned from previous generations guided the features built into Btrfs. Unlike ext4, it can directly span multiple devices, so it brings along features typically found only in volume managers. It also has features that are unique in the Linux file system space (ZFS has a similar feature set, but don’t expect it in the Linux kernel).

Key Btrfs features

Perhaps the most important feature is the checksumming of all data. Checksumming, along with copy-on-write, provides the key method of ensuring file system integrity after unexpected power loss. More uniquely, checksumming can detect errors in the data itself. Silent data corruption, sometimes referred to as bitrot, is more common than most people realize. Without active validation, corruption can end up propagating to all available backups, leaving the user with no valid copies. By transparently checksumming all data, Btrfs is able to immediately detect any such corruption. Enabling the right dup or raid option allows the file system to transparently fix the corruption as well.

Copy-on-write (COW) is also a fundamental feature of Btrfs, as it is critical in providing file system integrity and instant subvolume snapshots. Snapshots automatically share underlying data when created from common subvolumes. Additionally, after-the-fact deduplication uses the same technology to eliminate identical data blocks. Individual files can use COW features by calling cp with the reflink option. Reflink copies are especially useful for copying large files, such as virtual machine images, that tend to have mostly identical data over time.
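For example, a reflink copy is requested through cp; the file names here are hypothetical, and --reflink=auto degrades gracefully to a regular copy on non-Btrfs file systems:

```shell
# Create a small test file and clone it; --reflink=auto makes a
# lightweight reflink copy on Btrfs and falls back to a normal copy
# on file systems without reflink support (such as ext4)
dd if=/dev/urandom of=image.img bs=1M count=4 2>/dev/null
cp --reflink=auto image.img clone.img
cmp image.img clone.img && echo "copies are identical"
```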

Btrfs supports spanning multiple devices with no volume manager required. Multiple device support unlocks data mirroring for redundancy and striping for performance. There is also experimental support for more advanced RAID levels, such as RAID5 and RAID6. Unlike standard RAID setups, the Btrfs raid1 option actually allows an odd number of devices. For example, it can use 3 devices, even if they are different sizes.
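Creating such a file system is a single command; the device names below are hypothetical:

```shell
# Mirror both data (-d) and metadata (-m) across two devices
mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc

# More devices can be added later, followed by a rebalance
btrfs device add /dev/sdd /mnt
btrfs balance start /mnt
```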

All RAID and dup options are specified at the file system level. As a consequence, individual subvolumes cannot use different options. Note that using the RAID1 option with multiple devices means that all data in the volume is available even if one device fails and the checksum feature maintains the integrity of the data itself. That is beyond what current typical RAID setups can provide.

Additional features

Btrfs also enables quick and easy remote backups. Subvolume snapshots can be sent to a remote system for storage. By leveraging the inherent COW meta-data in the file system, these transfers are efficient by only sending incremental changes from previously sent snapshots. User applications such as snapper make it easy to manage these snapshots.
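A minimal sketch of such an incremental transfer, assuming a hypothetical subvolume /home with an earlier read-only snapshot day1 that was already sent to the remote host:

```shell
# Take a new read-only snapshot, then send only the blocks that
# changed relative to the previous snapshot
btrfs subvolume snapshot -r /home /home/.snapshots/day2
btrfs send -p /home/.snapshots/day1 /home/.snapshots/day2 | \
    ssh backup@remote "btrfs receive /backups/home"
```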

Additionally, a Btrfs volume can have transparent compression and chattr +c will mark individual files or directories for compression. Not only does compression reduce the space consumed by data, but it helps extend the life of SSDs by reducing the volume of write operations. Compression certainly introduces additional CPU overhead, but a lot of options are available to dial in the right trade-offs.
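Compression is usually enabled volume-wide with a mount option, while chattr +c marks individual files or directories; the device, mount point, and paths here are hypothetical:

```shell
# Enable transparent zstd compression for newly written data
mount -o compress=zstd /dev/sdb /mnt

# Alternatively, mark one directory so new files in it get compressed
chattr +c /mnt/archive
```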

The integration of file system and volume manager functions by Btrfs means that overall maintenance is simpler than LVM-ext4. Certainly this integration comes with less flexibility, but for most desktop, and even server, setups it is more than sufficient.

Btrfs on LVM

Btrfs can convert an ext3/ext4 file system in place. In-place conversion means there is no data to copy out and then back in. The data blocks themselves are not even modified. As a result, one option for existing LVM-ext4 systems is to leave LVM in place and simply convert ext4 over to Btrfs. While doable and supported, there are reasons why this isn’t the best option.
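The conversion itself is done offline with btrfs-convert, which keeps the original file system in a subvolume named ext2_saved until it is explicitly deleted; the device name and mount point are hypothetical:

```shell
# Run from a live image, with the file system unmounted and checked first
fsck.ext4 -f /dev/vg_data/lv_root
btrfs-convert /dev/vg_data/lv_root

# After verifying the converted file system, reclaim the backup space
btrfs subvolume delete /mnt/ext2_saved
```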

Some of the appeal of Btrfs is the easier management that comes with a file system integrated with a volume manager. By running on top of LVM, there is still another volume manager in play for any system maintenance. Also, LVM setups typically have multiple fixed-size logical volumes with independent file systems. While Btrfs supports multiple volumes in a given computer, many of the nice features expect a single volume with multiple subvolumes. The user is still stuck manually managing fixed-size LVM volumes if each one has an independent Btrfs volume. However, the ability to shrink mounted Btrfs file systems does make working with fixed-size volumes less painful: with online shrink there is no need to boot a live image.
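Shrinking works directly on the mounted file system; the mount point and size here are hypothetical:

```shell
# Shrink the mounted Btrfs file system by 10 GiB while it stays online
btrfs filesystem resize -10G /mnt
```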

The physical locations of logical volumes must be carefully considered when using the multiple device support of Btrfs. To Btrfs, each LV is a separate physical device and if that is not actually the case, then certain data availability features might make the wrong decision. For example, using raid1 for data typically provides protection if a single drive fails. If the actual logical volumes are on the same physical device, then there is no redundancy.

If there is a strong need for some particular LVM feature, such as raw block devices or cached logical volumes, then running Btrfs on top of LVM makes sense. In this configuration, Btrfs still provides most of its advantages such as checksumming and easy sending of incremental snapshots. While LVM has some operational overhead when used, it is no more so with Btrfs than with any other file system.

Wrap up

When trying to choose between Btrfs and LVM-ext4 there is no single right answer. Each user has unique requirements, and the same user may have different systems with different needs. Take a look at the feature set of each configuration, and decide if there is something compelling about one over the other. If not, there is nothing wrong with sticking with the defaults. There are excellent reasons to choose either setup.