
Contribute at the Fedora Linux Test Week for Kernel 6.4

The kernel team is working on final integration for Linux kernel 6.4. This version was just recently released, and will arrive soon in Fedora Linux. As a result, the Fedora Linux kernel and QA teams have organized a test week from Sunday, July 09, 2023 to Sunday, July 16, 2023. The wiki page in this article contains links to the test images you’ll need to participate. Please continue reading for details.

How does a test week work?

A test week is an event where anyone can help ensure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the days of the event, please do some testing and report your results. We have a document which provides all the necessary steps.

Happy testing, and we hope to see you on one of the test days.


The Community Platform Engineering F2F 2023 Experience – Part II

20 Mar 2023 – 23 Mar 2023

– Barcelona, Spain

Intro

If you have not already, now is a good time to read through the first part of the event report before getting into the second part here.

From left to right – Actually, never mind – there are too many people there.

Day 2
Wednesday, 22nd March 2023
The team members decided to start a bit late on the second day to ensure that everyone got enough rest after the adventurous first day. After a joint breakfast in the hotel restaurant, the members headed off to the office by 10 in the morning. Throughout the meeting they were graciously treated to Catalonian snacks by Lenka Segura, Japanese souvenirs by David Kirwan, and German sweets by Julia Bley. Matthew Miller started the second day with his talk about the Fedora Project strategy and how Community Platform Engineering fits into the picture. This was followed by a talk by Tomas Hrcka about how the Fedora Release Engineering team addresses its responsibilities. The team took a short break after the first couple of sessions before heading into the next set.

From left to right: Lenka Segura, Matthew Miller, Adam Samalik, Troy Dawson

After a short break, the team continued with three lightning talks: OpenShift operators by David Kirwan, Packit by Frantisek Lachman and Laura Barcziova, and Pulp by Miroslav Suchy. Miroslav Suchy then delivered a thought-provoking talk about the scope of tooling for the Community Platform Engineering team and how it contributes to Fedora Linux. With that, the folks dispersed into small groups for lunch. Once they were back in the office, Hunor Csomortáni delivered a talk about revisiting source-git and the plans for unifying package sources in the pipeline. This was followed by a talk by Carl George about the EPEL 10 improvements that would be coming very soon. The final interesting talk of the day was delivered by Brian Stinson about what RHEL expects from Fedora Linux and CentOS Stream.

From left to right: Emma Kidney, David Fan, Michal Konecny, Akashdeep Dhar, Samyak Jain, David Kirwan

In the evening, at around 19:00, the team members left for a regional burger restaurant, Tio Joe’s, which had been booked in advance and was near the hotel premises. With toasts made to teammates who were reunited after a long time, they enjoyed not only the appetizing food but also the company of the friends they had bonded with over the course of the last couple of days.

Once the dinner was over, at around 21:30, some folks headed back to the hotel for a respite. The rest went to the Trompos Karaoke Bar for the fun karaoke night organized by Aurelien Bompard. People queued up their songs and soon began performing their favorites in duos and choirs. The night wrapped up as late as 02:00, a fitting end and a fun look at everyone’s singing preferences.

Day 3
Thursday, 23rd March 2023

This was the day of departure for a lot of people. Since they would miss out on the sessions on this third day, it was incredibly light in terms of the agenda and activities. Many of the team members checked out of their hotel rooms after breakfast at 09:00 and left their luggage with the hotel before leaving for the office.

Stefan Mattejiet started off the last day with a discussion session about the CPE Futurespective, looking at what direction the team should take going forward. The discussion felt very much inspired by the established logic model planning structure used for Fedora Council community initiatives, which starts with the general objectives and only then works back to the implementation details.

From left to right: Fabian Arrotin, James Antill, Adam Saleh, Carl George, Tomas Hrcka

The next session was hosted by Aoife Moloney, who kicked off an interesting discussion about the limited-scope projects that the team undertakes and the maintainers for the applications that the team takes care of. The members pointed out the things that currently work well and those that could use some improvement. This was the last planned session of the day, and the group assembled for the “Community Platform Engineering family picture”. After that, they dispersed into small groups to have lunch in their preferred places.

With no more planned talks after lunchtime, the team members split into smaller breakout groups for more detailed discussions. Michal Konecny led the one for the infrastructure and release engineering team.

From left to right: Michal Konecny, Carl George, Akashdeep Dhar, David Kirwan, Nils Philippsen, Fabian Arrotin, Troy Dawson, Diego Herrera

The team slowly thinned out even more in the late afternoon, with members bidding farewell to each other at the office and returning to the hotel. Some decided to stay longer to explore Barcelona a bit more, while others collected their luggage from the hotel and left for the airport. With goodbyes waved to teammates and resolutions made for what comes next, the members departed from the face-to-face meeting with new zeal and energy to contribute to the community in an even better way.

Even with minor hiccups and some teammates not being able to join, the event turned out to be a grand success, both in uniting the members and in strategizing the team’s efforts. The team surely looks forward to the next time they get together.


Trying different desktop environments using “rpm-ostree rebase”

Fedora Linux Workstation features the GNOME desktop environment, which is easy to use, intuitive, and efficient. But this is not the only option if you would like to use Fedora Linux. There are other spins that provide alternative desktop environments like KDE, Xfce, Cinnamon, etc. This article describes how you can try different desktop environments if you are using an OSTree based Fedora Linux variant.

Main version of Fedora Workstation

If you installed the non-OSTree Fedora Workstation or one of the Spins and would like to try a different desktop environment, you have two options:

  • install a different desktop environment using dnf
  • dual boot multiple spins of Fedora

If you choose the first option, you install another desktop environment with the dnf install command. You can then select which desktop environment to use on the login screen after the system boots. However, this method pulls in a lot of dependencies. This is especially true when you have a GTK based desktop environment (like GNOME) and install a Qt based one (like KDE), or vice-versa. It can also be difficult to completely uninstall one of the installed desktop environments if you are not satisfied with it.
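
As a rough sketch of the first option, installing KDE Plasma on top of a GNOME install could look like the following (the group name here is an assumption; check the output of dnf group list on your release for the exact names):

$ dnf group list
$ sudo dnf group install "KDE Plasma Workspaces"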

Another issue is that system apps may be duplicated in the application menu of each environment. For example, if you have GNOME installed and then install KDE, you end up with both Nautilus and Dolphin for browsing files, GNOME Terminal and Konsole for terminal emulation, and so on. You have to remember which app to use in which environment, because KDE apps may not integrate as well in GNOME and vice-versa.

If you choose the second option, you have to free up some unpartitioned space on your hard drive to install another Fedora spin alongside the one you are currently using. This way, the systems are separated from each other and the system apps are not duplicated. You can decide to share the /home partition between them. You select the system to use in the bootloader menu before the system boots. But with this method you have to maintain the systems separately (for example, installing updates), and it consumes a lot of space on the hard drive.

OSTree based version of Fedora Workstation

Some variants of Fedora Linux are OSTree based. OSTree provides immutability and transactional upgrades with the possibility of rollback in case something goes wrong. You can read more about it in this great article. Right now, we have three OSTree based Fedora Workstation variants:

  • Silverblue – provides GNOME desktop environment
  • Kinoite – provides KDE Plasma desktop environment
  • Sericea – provides Sway window manager (not recommended for beginners)

If you are running one of these variants of Fedora Linux, you can easily switch your system to another OSTree compatible one to try a different desktop environment. The process is similar to a system upgrade. OSTree guarantees that the operation is transactional (it either finishes successfully or nothing is changed) and you are able to roll back if you are not satisfied with the change. The operation does not consume much space on the hard drive, and system apps are not duplicated.

How to use OSTree rebase to switch to a new variant

To start, I recommend executing the following command to pin the current deployment. This makes certain it will not be deleted automatically in the future and provides the ability to roll back to it.

$ sudo ostree admin pin 0

If you have a pending update, the command may fail with the message:

error: Cannot pin staged deployment

In this case, reboot your system to apply pending updates, and try again.
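
To confirm that the deployment is now pinned, you can list the deployments; pinned ones are marked in the output:

$ ostree admin status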

After pinning the deployment, execute:

$ ostree remote refs fedora

It outputs a list of all available branches that you can rebase into. Every branch has an architecture, version, and the name of the variant. Select carefully. In the following examples I assume you would like to rebase into the current stable version of Fedora for x86_64 (version 38).

  • for Fedora Silverblue, use fedora:fedora/38/x86_64/silverblue
  • for Fedora Kinoite, use fedora:fedora/38/x86_64/kinoite
  • for Fedora Sericea, use fedora:fedora/38/x86_64/sericea
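
The full list printed by the previous command is long, so a simple filter helps narrow it down to the branches above, for example:

$ ostree remote refs fedora | grep '38/x86_64'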

Choose the branch you wish to rebase into and execute the following command (change the branch name provided in the example if necessary):

$ rpm-ostree rebase fedora:fedora/38/x86_64/kinoite

When this command succeeds, restart the system to begin using the new desktop environment. If it fails, the system should continue to work unmodified thanks to transactional updates provided by OSTree.
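
After the reboot, you can check which deployment is currently booted and which ones (including the pinned one) are kept for rollback:

$ rpm-ostree status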

Undo the rebase into an OSTree variant

If you are not satisfied with the new environment, the following command will return you to your original variant:

$ rpm-ostree rollback

Restart your system once again to switch back to the previous variant of Fedora.


New AWS storage type for Fedora Linux

If you filter Fedora Linux AWS images using a script, you might notice a change in the image names. The Fedora Cloud SIG recently updated the image publishing configuration to use the latest generation storage option and simplify the image listings.

This involves two changes:

  • Replacing gp2 storage with gp3 storage by default for new images
  • Removing standard storage from new images

What’s the benefit of these changes?

The gp3 storage type appeared in 2020, and switching to it gives Fedora Linux users more consistent performance at a lower cost. (For more details, read Corey Quinn’s helpful blog post.)

Removing standard storage from new image uploads means we create half as many AMIs as before, and it reduces the number of images you need to review when launching an instance. Finding the right Fedora Linux image for your deployment should be a little easier.

What if I really like the other storage types?

When you launch your instance, you can choose any storage type that is compatible with your instance in your preferred region. Although Fedora Linux images will have gp3 set as the default, you can choose from any other storage type at launch time.
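
For example, with the AWS CLI the root volume type can be overridden at launch time. This is only a sketch: the AMI ID below is a placeholder and the root device name may differ, so check the image’s RootDeviceName first if you are unsure:

$ aws ec2 run-instances \
    --image-id ami-xxxxxxxxxxxxxxxxx \
    --instance-type t3.micro \
    --block-device-mappings '[{"DeviceName":"/dev/sda1","Ebs":{"VolumeType":"gp2"}}]'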

How should I adjust my scripts that look for Fedora Linux images on AWS?

The format of the image names remains the same, but you’ll notice a new string in the storage type portion of the image name. As an example, here’s what you would see before the change was made:

Fedora-Cloud-Base-Rawhide-20230503.n.0.aarch64-hvm-us-east-1-standard-0
Fedora-Cloud-Base-Rawhide-20230503.n.0.aarch64-hvm-us-east-1-gp2-0
Fedora-Cloud-Base-Rawhide-20230503.n.0.x86_64-hvm-us-east-1-standard-0
Fedora-Cloud-Base-Rawhide-20230503.n.0.x86_64-hvm-us-east-1-gp2-0

After the change, there is only one image per release and architecture:

Fedora-Cloud-Base-Rawhide-20230504.n.0.aarch64-hvm-us-east-1-gp3-0
Fedora-Cloud-Base-Rawhide-20230504.n.0.x86_64-hvm-us-east-1-gp3-0
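
If your scripts enumerate images with the AWS CLI, matching on the new gp3 suffix is usually enough. Here is a minimal sketch; the owner ID is a placeholder for whatever Fedora account ID your scripts already filter on:

$ aws ec2 describe-images \
    --owners 000000000000 \
    --filters "Name=name,Values=Fedora-Cloud-Base-*-gp3-0" \
    --query "Images[].Name" \
    --output text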

Why was this change made?

The Fedora Cloud SIG wants to make the Fedora Linux cloud experience the best it can possibly be on every public cloud platform. This change gives Fedora Linux a better performing default storage option at a lower cost, reduces the overhead from creating AMIs on AWS, and simplifies the Fedora Linux image listings.

Read the Fedora change proposals for removing standard storage and switching the default to gp3 for a lot more detail. As always, you can find members of the Fedora Cloud SIG and join our group on Fedora Matrix or on Libera IRC in #fedora-cloud.


Contribute at the Fedora Linux Test Week for Kernel 6.3

The kernel team is working on final integration for Linux kernel 6.3. This version was just recently released, and will arrive soon in Fedora Linux. As a result, the Fedora Linux kernel and QA teams have organized a test week from Sunday, May 07, 2023 to Sunday, May 14, 2023. Refer to the wiki page in this article for links to the test images you’ll need to participate. Please continue reading for details.

How does a test week work?

A test week is an event where anyone can help make sure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the days of the event, please do some testing and report your results. We have a document which provides all the necessary steps.

Happy testing, and we hope to see you on one of the test days.


Anaconda Installer Partitioning and Storage Survey Results

Back in late January, we distributed a survey focusing on partitioning preferences for Anaconda Installer (OS Installer for RHEL, CentOS, and Fedora). We were able to get 1269 responses! Thank you to all who participated. The data we collected will help the Anaconda team continue to provide an installer that best suits the majority’s needs. 

Given the high participation rate, we’re excited to share the main results and findings with you!

Who are our users?

First, we wanted to understand who the users are. The most common answers to demographic-style questions gave us the following results:

  • 96% (of 1138 responses) are desktop/workstation Linux users
  • 50% (of 1138 responses) have 11+ years of experience using Linux
  • 90% use Fedora (sometimes in combination with RHEL, CentOS, Ubuntu, or Debian)

  • 450 users identify as having an Intermediate level of expertise with Linux storage partitioning

(Survey charts: n=1138 and n=1109)

These data points mean that most of you have been using Linux with Fedora (not exclusively) on a desktop/workstation for over a decade! That’s impressive! But when it comes to Linux storage partitioning there is still quite a bit to learn – and we are here to make that easier. 

What storage and partitioning is used?

Storage

Once we got a better picture of who our participants were, we asked questions regarding your current storage and partitioning setup. For example, we uncovered that when it comes to identifying the disks you will use for your installation, most of you are mainly interested in the disk name, the disk size, and the sizes of its partitions. This helps the team decide which data is most helpful to present on the disk selection screen.

(Chart: n=969)

Partitioning

Then, we asked questions about your preferences and expectations regarding partitioning. In past studies, we kept seeing an almost even preference for auto-partitioning and custom partitioning because of the different needs they each fulfill. This survey, however, clarified that there is a slightly stronger preference for auto-partitioning, while many of you made it clear that you need the installer to allow some customization of partitioning. The team is certainly keeping this in mind. In fact, we asked you “How do you prefer to create partitions (storage layout) during the installation process?” and most of the multiple-choice responses were split between “Installer creates the partitions for me”, “Installer creates partitions based on my set preferences”, and “I modify the partitions proposed by the installer”. These three options all indicate some form of auto-partitioning, leading to a combined 70% of 964 responses preferring it.

Next steps for Anaconda

Finally, we wanted your input on what the next steps for Anaconda should be. The team has been considering a few different approaches, and most of you ranked the “Ability to select pre-defined partitioning configuration options with streamlined steps” as your #1 choice, closely followed by “Ability to customize details of partitioning”. This tells us that you are expecting a more guided experience for partitioning, especially given that most of you also feel there is a lot left to learn about Linux storage partitioning. Be on the lookout for what’s next with Anaconda!

Thanks to all who participated

Again, thank you to all who took the time to fill out the survey. You have provided the team with plenty of data to consider for the future of Anaconda Installer. 


Changes to the ELN kernel RPM NVR

Red Hat is excited to announce significant changes to the ELN kernel RPM NVR in the kernel-ark project.  This change will be limited to the kernel-ark ELN RPMs and does not impact Fedora.  If you don’t use Fedora ELN builds, you can likely stop reading as this change won’t affect you.

What is the kernel-ark project?

The kernel-ark project is an upstream kernel-based repository from which the Fedora kernel RPMs are built (contributions welcome!).  This project is also used by the CentOS Stream and Red Hat Enterprise Linux (RHEL) maintainers to implement, test, and verify code that is destined for CentOS Stream and RHEL. In other words, the kernel-ark repository contains code that can build several different kernels, which may contain unique code for different use cases.  The kernel RPMs used for CentOS Stream and RHEL are commonly referred to as the ‘ELN’ (Enterprise Linux Next) RPMs.

Why are there ELN RPMs?  Why can’t CentOS Stream and Red Hat use Fedora RPMs?

While Fedora Linux is the source of a lot of code that lands in CentOS Stream and later RHEL, the kernel configuration used in each operating system is unique.  Fedora Linux is configured to achieve its specific goals and targets.  CentOS Stream and RHEL do the same but for a slightly different set of goals and targets.

The differences are significant enough that the Fedora Linux, CentOS Stream, and RHEL kernel maintainers recognized the need for separate RPMs: some targeted at Fedora and others targeted at CentOS Stream and RHEL.  For example, BTRFS is enabled in Fedora Linux but not in ELN, and some specific devices that are enabled in Fedora Linux are disabled in ELN.

Red Hat uses kernel-ark’s ELN RPMs to continuously test upstream changes with a Red Hat specific configuration.  This enables Red Hat to monitor performance and resolve issues before they make it into a Red Hat Enterprise Linux (RHEL) release.  In accordance with Red Hat’s long established ‘upstream first’ policy, all fixes and suggestions for improvements are sent back to the upstream kernel community.  This benefits the entire Linux community with improvements due to issues resolved in ELN.

This structure also allows Red Hat to test and make changes without affecting the Fedora Linux kernel except in a positive or desired way, such as through bug fixes.  Fedora Linux, CentOS Stream, and RHEL have the opportunity to accept or reject changes easily.

What ELN NVR changes are being made?

Before explaining the changes to the ELN kernel rpm NVR, it is important to understand what an NVR is.  All RPMs have Name, Version, and Release (NVR) directives that describe the RPM package.  In the case of the Fedora Linux kernel RPMs the Name is ‘kernel’ and the Version is the kernel’s uname (aka UTSNAME or the kernel version).  The last field, the Release, contains additional upstream tagging information (release candidate tags), a monotonically increasing build number, and a distribution tag.  The NVR is separate from the kernel’s uname and the uname is not generated from it.  Instead, we have traditionally generated the NVR from the uname.

For example, for a recent Fedora Linux kernel build,

$ rpm -qip kernel-6.3.0-0.rc5.42.fc39.src.rpm

Name: kernel
Version: 6.3.0
Release: 0.rc5.42.fc39

In the next few weeks, the CentOS Stream and RHEL maintainers will introduce ELN RPMs that have new kernel Name, Version, and Release (NVR) directives unique to the ELN builds.  This change has no impact on the kernel uname.  The net result is that the version number will have more meaning for CentOS Stream and RHEL builds, instead of being based solely on the kernel uname.  For example, an ELN kernel may have the NVR kernel-redhat-1.0.39.eln while packaging a kernel with a uname of 6.3.0-39.eln.

We have already decided that the ‘Name’ directive for the ELN kernel NVR will change from ‘kernel’ to ‘kernel-redhat’. More information on the Version and Release directive changes will be released in the following months as they are finalized by the CentOS Stream and RHEL kernel maintainers.  You can follow these discussions on the Fedora Kernel Mailing List.

Why is the ELN NVR being changed?

The new ELN NVR will allow for better coordination of feature introduction, bug fixes, and CVE resolutions in future versions of CentOS Stream and RHEL.  More information on these improvements to the CentOS Stream and RHEL ecosystems will be released in the upcoming months.

How is Fedora Linux impacted?

Fedora Linux is not impacted by these changes.  

Since the inception of the kernel-ark project, the Fedora Linux, CentOS Stream, and RHEL maintainers have been extraordinarily careful to ensure that Fedora Linux kernel builds are not impacted by ELN kernel builds (and vice-versa) in the kernel-ark project.  The commitment to prevent cross-OS issues is strictly enforced by the maintainers.  Due to the maintainers’ continued diligence, there is no impact to Fedora Linux.

How can I obtain Fedora Linux or ELN kernels?

Fedora Linux and ELN kernels can be downloaded from the Fedora Project’s Koji instance.


What’s new in Fedora Workstation 38

Fedora Workstation 38 is the latest version of the leading-edge Linux desktop OS, made by a worldwide community, including you! This article describes some of the user-facing changes in this new version of Fedora Workstation. Upgrade today from GNOME Software, or use dnf system-upgrade in a terminal emulator!
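
For the terminal route, the standard dnf system-upgrade workflow looks like this (a sketch of the usual steps; make sure your backups are current first):

$ sudo dnf upgrade --refresh
$ sudo dnf install dnf-plugin-system-upgrade
$ sudo dnf system-upgrade download --releasever=38
$ sudo dnf system-upgrade reboot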

GNOME 44

Fedora Workstation 38 features the newest version of the GNOME desktop environment. GNOME 44 features subtle tweaks and revamps throughout, most notably in the Quick Settings menu and the Settings app. More details can be found in the GNOME 44 release notes.

File chooser

Most of the GNOME applications are built on GTK 4.10. This introduces a revamped file chooser with an icon view and image previews.

GTK 4.10’s new file chooser, showing the icon view with image previews

Quick Settings tweaks

GNOME 44 brings a number of improvements to the Quick Settings menu. The new version includes a new Bluetooth menu, which introduces the ability to quickly connect and disconnect known Bluetooth devices. Additional information is available in each quick settings button, thanks to new subtitles.

The Bluetooth menu can now be used to connect to known devices

Also in the quick settings menu, a new background apps feature lists Flatpak apps which are running without a visible window.

Background Apps lets you see sandboxed apps running without a visible window and close them

Core applications

GNOME’s core applications have received significant improvements in the new version.

Settings has seen a round of updates, focused on improving the experience in each of the settings panels. Here are some notable changes:

  • Major redesigns of Mouse & Touchpad and Accessibility significantly improve usability.
  • Updated Device Security now uses clearer language.
  • The redesigned Sound panel now includes new windows for the volume mixer and alert sound.
  • You can now share your Wi-Fi credentials with another device through a QR code.

The revamped Mouse & Touchpad panel in Settings, showing the Touchpad settings

In Files, there is now an option to expand folders in the list view.

The tree view can be turned on in Files’ settings

GNOME Software now automatically checks for unused Flatpak runtimes and removes them, saving disk space. You can also choose to only allow open source apps in search results.
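
If you prefer not to wait for GNOME Software, the same cleanup of unused runtimes can be run manually with the standard Flatpak command:

$ flatpak uninstall --unused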

In Contacts, you can now share a contact through a QR code, making it super easy to share a contact from your desktop to your phone!

Third-party repositories

Fedora’s third-party repositories feature makes it easy to enable a selection of additional software repos. Previous versions included a filtered view of Flathub with only a small number of apps. For Fedora 38, Flathub content is no longer filtered. This means that the third-party repos now provide full access to all of Flathub.

The third party repos must still be manually enabled, and individual repositories may be disabled from the GNOME Software settings. If you want to keep proprietary apps from showing up in your search results, you can also do that in GNOME Software’s preferences menu.

You are in control.

Under-the-hood changes throughout Fedora Linux 38

Fedora Linux 38 features many under-the-hood changes. Here are some notable ones:

  • The latest Linux kernel, version 6.2, brings extended hardware support, bug fixes and performance improvements.
  • The length of time that system services may block shutdown has been reduced. This means that, if a service delays your machine from powering off, it will be much less disruptive than in the past.
  • RPM now uses the Rust-based Sequoia OpenPGP parser for better security.
  • The Noto fonts are now the default for Khmer and Thai. The variable versions of the Noto CJK fonts are now used for Chinese, Japanese, and Korean. This reduces disk usage.
  • Profiling is easier starting with Fedora Linux 38, thanks to changes in the default build configuration. The expectation is that this will result in performance improvements in future versions.

Also check out…

Official spins for the Budgie desktop environment and Sway tiling Wayland compositor are now available!


Linux bcache with writeback cache (how it works and doesn’t work)

bcache is a simple and good way to have large disks (typically rotational and slow) exhibit performance quite similar to an SSD, using a small SSD or a small part of an SSD.

In general, bcache is a system for having devices composed of slow and large disks, with fast and small disks attached as a cache.

This article will discuss performance and some optimization tips as well as configuration of bcache.

The following terms are used in bcache to describe how it works and the parts of bcache:

backing device – the slow and large disk (the disk intended to actually hold the data)
cache device – the fast and small disk (the cache)
dirty cache – data present only in the cache device
writeback – writing to the cache device and only later (much later) to the backing device
writeback rate – the speed at which dirty data is written from the cache to the backing device

A disk data cache has always existed: it is the free RAM in the operating system. When data is read from the disk, it is copied to RAM. If the data is already in RAM, it is read from RAM rather than being read from the disk again. When data is written to the disk, it is written to RAM and a few moments later written to the disk as well. The time data spends only in RAM is very short, since RAM is volatile.

bcache is similar, except that it has various modes of cache operation. The mode that is faster for writing data is writeback. It works the same as with RAM, only instead of RAM there is a SATA or NVMe SSD device. The data may reside only in the cache for much longer, even forever, so it is a bit riskier: if the SSD fails, the data that resided only in the cache is lost, with a good chance that the whole filesystem becomes inaccessible.

Performance Comparison

It is very difficult to gather reliable data from tests, whether with real workloads or with benchmark programs: the results are extremely variable and unstable. The various caches present and the type of filesystem (btrfs, journaled, etc.) also affect the values. It is advisable to ignore small differences (say 5-10%).

The following performance data refers to the test below (random, mixed reads and writes), always trying to maintain the same conditions and repeating the test three times in immediate succession.

$ sysbench fileio --file-total-size=2G --file-test-mode=rndrw --time=30 --max-requests=0 run
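
For anyone wanting to reproduce the numbers, sysbench needs its test files created first and removed afterwards; a full session around the run command above looks like this (same flags as above):

$ sysbench fileio --file-total-size=2G prepare
$ sysbench fileio --file-total-size=2G --file-test-mode=rndrw --time=30 --max-requests=0 run
$ sysbench fileio --file-total-size=2G cleanup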

The tables below show the performance of the separate devices:

Performance of the backing device (RAID 1 with 1TB rotary disks)

                     Run 1     Run 2     Run 3
Throughput
  read (MiB/s):      0.22      0.23      0.19
  written (MiB/s):   0.15      0.16      0.13
Latency (ms)
  max:               174.92    879.59    1335.30
  95th percentile:   87.56     87.56     89.16

RAID 1 with 1TB rotary disks

Performance of the cache device (SSD SATA 100GB)

                     Run 1     Run 2     Run 3
Throughput
  read (MiB/s):      7.28      7.21      7.51
  written (MiB/s):   4.86      4.81      5.01
Latency (ms)
  max:               126.55    102.39    107.95
  95th percentile:   1.47      1.47      1.47

Cache device (SSD SATA 100GB)

The theoretical expectation that a bcache device will be as fast as the cache device is (physically) impossible to meet. On average, bcache is significantly slower and only sometimes approaches the performance of the cache device. Improving performance almost always requires various compromises.

Consider an example assuming there is a 1TB bcache device and a 100GB cache. When writing a 1TB file, the cache device is filled, then partially emptied to the backing device, and refilled again, until the file is fully written.

Because of this (and also because part of the cache also serves data when reading), there is a limit on the length of sequential data from a file that is written to the cache. Once the limit is exceeded, the file data is written (or read) directly to the backing device, bypassing the cache.

bcache also monitors the response latency of the cache device and bypasses it when the device appears congested. The default thresholds are easily exceeded, especially by SATA SSDs, which degrades the effectiveness of the cache.

The dirty cache should be emptied to decrease the risk of data loss and to have cache available when it is needed. This should only be done when the devices exhibit little or no activity; otherwise, the performance available for normal use collapses.
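
To see whether there is dirty data waiting to be written back, you can read the backing device’s sysfs attributes (documented in the kernel’s bcache guide); for example:

$ cat /sys/block/bcache0/bcache/state
$ cat /sys/block/bcache0/bcache/dirty_data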

Unfortunately, the default settings are too low, and the writeback rate adjustment is crude. To improve the writeback rate adjustment it is necessary to write a program (I wrote a script for this).

The following commands provide the necessary optimizations (they must be repeated at each startup) to get better performance from the bcache device: the first two disable the congestion thresholds so I/O is not diverted away from the cache, the third raises the sequential cutoff to roughly 600 MB so larger sequential transfers are still cached, and the fourth raises the target amount of dirty data to 40% of the cache.

# echo 0 > /sys/block/bcache0/bcache/cache/congested_write_threshold_us
# echo 0 > /sys/block/bcache0/bcache/cache/congested_read_threshold_us
# echo 600000000 > /sys/block/bcache0/bcache/sequential_cutoff
# echo 40 > /sys/block/bcache0/bcache/writeback_percent

The following tables compare the performance with the default values and the optimization results.

Performance with default values

                     Run 1     Run 2     Run 3
Throughput
  read (MiB/s):      3.37      2.67      2.61
  written (MiB/s):   2.24      1.78      1.74
Latency (ms)
  max:               128.51    102.61    142.04
  95th percentile:   9.22      10.84     11.04

Default values (SSD SATA 100GB)

Performance with optimizations

                     Run 1     Run 2     Run 3
Throughput
  read (MiB/s):      5.96      3.89      3.81
  written (MiB/s):   3.98      2.59      2.54
Latency (ms)
  max:               131.95    133.23    117.76
  95th percentile:   2.61      2.66      2.66

Optimization (SSD SATA 100GB)

Performance with the writeback rate adjustment script

                     Run 1     Run 2     Run 3
Throughput
  read (MiB/s):      6.25      4.29      5.12
  written (MiB/s):   4.17      2.86      3.41
Latency (ms)
  max:               130.92    115.96    122.69
  95th percentile:   2.61      2.66      2.61

Writeback rate adjustment (SSD SATA 100GB)

In single operations (without anything else happening in the system) on large files, adjusting the writeback rate becomes irrelevant.

Prepare the backing, cache and bcache device

To create a bcache device you need to install the bcache-tools. The command for this is:

# dnf install bcache-tools

bcache devices are visible as /dev/bcacheN (for example, /dev/bcache0). Once created, they are managed like any other disk.
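
For example, once /dev/bcache0 exists it can be formatted and mounted like any other block device (a generic illustration; this destroys any data on the device):

# mkfs.ext4 /dev/bcache0
# mount /dev/bcache0 /mnt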

More details are available at https://docs.kernel.org/admin-guide/bcache.html

CAUTION: Any operation performed can immediately destroy the data on the partitions and disks on which you are operating. Backup is advised.

In the following example /dev/md0 is the backing device and /dev/sda7 is the cache device.

WARNING: A bcache device cannot be resized.
NOTE: bcache refuses to use partitions or disks with a filesystem already present.

To delete an existing filesystem you can use:

# wipefs -a /dev/md0
# wipefs -a /dev/sda7

Create the backing device (and therefore the bcache device)

# bcache make -B /dev/md0
if necessary (device status is inactive)
# bcache register /dev/md0

Create the cache device (and hook the cache to the backing device)

# bcache make -C /dev/sda7
if necessary (device status is inactive)
# bcache register /dev/sda7
# bcache attach /dev/sda7 /dev/md0
# bcache set-cachemode /dev/md0 writeback

Check the status

# bcache show

The output from this command includes information similar to the following:
(if the status of a device is inactive, it means that it must be registered)

Name        Type       State            Bname     AttachToDev
/dev/md0    1 (data)   clean(running)   bcache0   /dev/sda7
/dev/sda7   3 (cache)  active           N/A       N/A

Optimize

# echo 0 > /sys/block/bcache0/bcache/cache/congested_write_threshold_us
# echo 0 > /sys/block/bcache0/bcache/cache/congested_read_threshold_us
# echo 600000000 > /sys/block/bcache0/bcache/sequential_cutoff
# echo 40 > /sys/block/bcache0/bcache/writeback_percent

In closing

Hopefully this article provides some insight into the benefits of bcache and whether it suits your needs.

As always, nothing fits all cases and all preferences. However, understanding (even roughly) how things work, and especially how they don’t work, as well as how to adapt them, makes the difference between satisfactory results and disappointment.


Addendum

The following tables show the performance with an NVMe SSD cache device rather than the SATA SSD shown above.

Performance of the cache device (SSD NVME 100GB)

                     Run 1     Run 2     Run 3
Throughput
  read (MiB/s):      16.31     16.17     15.77
  written (MiB/s):   10.87     10.78     10.51
Latency (ms)
  max:               17.50     15.30     46.61
  95th percentile:   1.10      1.10      1.10

Cache device (SSD NVME 100GB)

Performance with optimizations

                     Run 1     Run 2     Run 3
Throughput
  read (MiB/s):      7.96      6.87      7.73
  written (MiB/s):   5.31      4.58      5.15
Latency (ms)
  max:               50.79     84.40     108.71
  95th percentile:   2.00      2.03      2.00

Optimization (SSD NVME 100GB)

Performance with the writeback rate adjustment script

                     Run 1     Run 2     Run 3
Throughput
  read (MiB/s):      8.43      7.52      7.34
  written (MiB/s):   5.62      5.02      4.89
Latency (ms)
  max:               72.71     78.60     50.61
  95th percentile:   2.00      2.03      2.11

Writeback rate adjustment (SSD NVME 100GB)

Contribute at the Fedora CoreOS, Upgrade, and IoT Test Days

Fedora test days are events where anyone can help make certain that changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed to Fedora before, this is a perfect way to get started.

There are five upcoming test days in the next two weeks covering three topics:

  • Tuesday 28 March through Sunday 02 April – Fedora CoreOS
  • Wednesday March 28th through March 31st – Upgrade
  • Monday April 03 through April 07 – Fedora IoT

Come and test with us to make Fedora 38 even better. Read more below on how to do it.

Fedora 38 CoreOS Test Week

The Fedora 38 CoreOS Test Week focuses on testing FCOS based on Fedora 38. The FCOS next stream is already rebased on Fedora 38 content, which will be coming soon to testing and stable. To prepare for the content being promoted to the other streams, the Fedora CoreOS and QA teams have organized test days starting Tuesday, March 28, 2023 (results accepted through Sunday, November 12). Refer to the wiki page for links to the test cases and materials you’ll need to participate. The FCOS and QA teams will meet and communicate with the community synchronously on a Google Meet at the beginning of the test week, and asynchronously over multiple Matrix/Element channels. Read more about them in this announcement.

Upgrade test day

As we come closer to the Fedora Linux 38 release date, it’s time to test upgrades. This release has a lot of changes, and it is essential that we test the graphical upgrade methods as well as the command line. As a part of these test days, we will test upgrading from a fully updated F36 and F37 to F38 for all architectures (x86_64, ARM, aarch64) and variants (Workstation, Cloud, Server, Silverblue, IoT).

IoT test week

For this test week, the focus is all-around; test all the bits that come in a Fedora IoT release as well as validate different hardware. This includes:

  • Basic installation to different media
  • Installing in a VM
  • rpm-ostree upgrades, layering, rebasing
  • Basic container manipulation with Podman.

We welcome all different types of hardware, but have a specific list of target hardware for convenience.

How do test days work?

A test day is an event where anyone can help make certain that changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. Test days are the perfect way to start contributing if you have not done so in the past.

The only requirement to get started is the ability to download test materials (which include some large files) and then read and follow directions step by step.

Detailed information about all the test days is on the wiki page links provided above. If you are available on or around the days of the events, please do some testing and report your results.