
Anaconda Installer Partitioning and Storage Survey Results

Back in late January, we distributed a survey focusing on partitioning preferences for Anaconda Installer (OS Installer for RHEL, CentOS, and Fedora). We were able to get 1269 responses! Thank you to all who participated. The data we collected will help the Anaconda team continue to provide an installer that best suits the majority’s needs. 

Given the high participation rate, we’re excited to share the main results and findings with you!

Who are our users?

First, we wanted to understand who the users are. The most common answers to demographic-style questions gave us the following results:

  • 96% (of 1138 responses) are desktop/workstation Linux users
  • 50% (of 1138 responses) have 11+ years of experience using Linux
  • 90% use Fedora (sometimes in combination with RHEL, CentOS, Ubuntu, or Debian)

  • 450 of 1109 responses identify as having an Intermediate level of expertise with Linux storage partitioning


These data points mean that most of you have been using Linux with Fedora (not exclusively) on a desktop/workstation for over a decade! That’s impressive! But when it comes to Linux storage partitioning there is still quite a bit to learn – and we are here to make that easier. 

What storage and partitioning is used?

Storage

Once we got a better picture of who our participants were, we asked questions regarding your current storage and partitioning setup. For example, we uncovered that when it comes to identifying the disks you will use for your installation, most of you are mainly interested in the disk name, the disk size, and the sizes of partitions. This helps the team decide which data is most helpful to present on the disk selection screen.


Partitioning

Then, we asked questions about your preferences and expectations regarding partitioning. Past studies kept showing an almost even preference for auto-partitioning and custom partitioning, because of the different needs each fulfills. This survey clarified that there is a slightly stronger preference for auto-partitioning, but many of you made it clear that you need the installer to allow some customization of partitioning. The team is certainly keeping this in mind. In fact, when we asked “How do you prefer to create partitions (storage layout) during the installation process?”, most of the multiple-choice responses were split between “Installer creates the partitions for me”, “Installer creates partitions based on my set preferences”, and “I modify the partitions proposed by the installer”. All three options involve some form of auto-partitioning, leading to a combined 70% of 964 responses preferring auto-partitioning.

Next steps for Anaconda

Finally, we wanted your input on what the next steps for Anaconda should be. The team has been considering a few different approaches, and most of you ranked the “Ability to select pre-defined partitioning configuration options with streamlined steps” as your #1 choice, closely followed by “Ability to customize details of partitioning”. This tells us that you are expecting a more guided experience for partitioning, especially given that most of you also feel there is a lot left to learn about Linux storage partitioning. Be on the lookout for what’s next with Anaconda!

Thanks to all who participated

Again, thank you to all who took the time to fill out the survey. You have provided the team with plenty of data to consider for the future of Anaconda Installer. 


Changes to the ELN kernel RPM NVR

Red Hat is excited to announce significant changes to the ELN kernel RPM NVR in the kernel-ark project.  This change is limited to the kernel-ark ELN RPMs and does not impact Fedora.  If you don’t use Fedora ELN builds, you can likely stop reading, as this change won’t affect you.

What is the kernel-ark project?

The kernel-ark project is an upstream kernel-based repository from which the Fedora kernel RPMs are built (contributions welcome!).  This project is also used by the CentOS Stream and Red Hat Enterprise Linux (RHEL) maintainers to implement, test, and verify code that is destined to be used in CentOS Stream and RHEL. In other words, the kernel-ark repository contains code that is enabled to build several different kernels which may contain unique code for different use cases.  The kernel RPMs used for CentOS Stream and RHEL are commonly referred to as the ‘ELN’ (Enterprise Linux Next) RPMs.

Why are there ELN RPMs?  Why can’t CentOS Stream and Red Hat use Fedora RPMs?

While Fedora Linux is the source of a lot of code that lands in CentOS Stream and later RHEL, the kernel configuration used in each operating system is unique.  Fedora Linux is configured to achieve its specific goals and targets.  CentOS Stream and RHEL do the same but for a slightly different set of goals and targets.

The differences are significant enough that the Fedora Linux, CentOS Stream, and RHEL kernel maintainers recognized the need for separate RPMs targeted for Fedora and others targeted for CentOS Stream and RHEL.  For example, Btrfs is enabled in Fedora Linux but not in ELN, and some specific devices that are disabled in ELN are enabled in Fedora Linux.

Red Hat uses kernel-ark’s ELN RPMs to continuously test upstream changes with a Red Hat specific configuration.  This enables Red Hat to monitor performance and resolve issues before they make it into a Red Hat Enterprise Linux (RHEL) release.  In accordance with Red Hat’s long established ‘upstream first’ policy, all fixes and suggestions for improvements are sent back to the upstream kernel community.  This benefits the entire Linux community with improvements due to issues resolved in ELN.

This structure also allows Red Hat to test and make changes without affecting the Fedora Linux kernel except in a positive or desired way, such as through bug fixes.  Fedora Linux, CentOS Stream, and RHEL have the opportunity to accept or reject changes easily.

What ELN NVR changes are being made?

Before explaining the changes to the ELN kernel RPM NVR, it is important to understand what an NVR is.  All RPMs have Name, Version, and Release (NVR) directives that describe the RPM package.  In the case of the Fedora Linux kernel RPMs, the Name is ‘kernel’ and the Version is the kernel’s uname (aka UTSNAME or the kernel version).  The last field, the Release, contains additional upstream tagging information (release candidate tags), a monotonically increasing build number, and a distribution tag.  The NVR is separate from the kernel’s uname and the uname is not generated from it.  Instead, we have traditionally generated the NVR from the uname.

For example, for a recent Fedora Linux kernel build,

$ rpm -qip kernel-6.3.0-0.rc5.42.fc39.src.rpm

Name: kernel
Version: 6.3.0
Release: 0.rc5.42.fc39

In the next few weeks, the CentOS Stream and RHEL maintainers will introduce ELN RPMs that have new kernel Name, Version, and Release (NVR) directives that are unique to the ELN builds.  This change has no impact on the kernel uname.  The net result of the change is that the version number will have more meaning to CentOS Stream and RHEL builds, instead of being solely based on the kernel uname.  For example, an ELN kernel may have the NVR kernel-redhat-1.0.39.eln which packages a kernel with a kernel uname of 6.3.0-39.eln.
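
As a rough illustration of how an NVR string decomposes into its fields, the Fedora example above can be taken apart with ordinary shell parameter expansion. This works because Version and Release may not contain ‘-’, so the last two dashes always delimit the three fields (the snippet is for illustration only):

```shell
# Split an RPM NVR string into Name, Version, and Release.
# Release is the text after the last '-', Version sits between the
# last two dashes, and Name is everything before them.
nvr="kernel-6.3.0-0.rc5.42.fc39"

release="${nvr##*-}"           # strip up to the last '-'
name_version="${nvr%-*}"       # drop the Release part
version="${name_version##*-}"  # strip up to the remaining last '-'
name="${name_version%-*}"      # what is left is the Name

echo "Name: $name"
echo "Version: $version"
echo "Release: $release"
```

Real tooling queries the fields directly (for example, rpm -q --queryformat '%{NAME} %{VERSION} %{RELEASE}\n') rather than parsing package file names.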

We have already decided that the new ‘Name’ directive for the ELN kernel NVR will be changed from ‘kernel’ to ‘kernel-redhat’. More information on the Version and Release directive changes will be released in the following months as they are finalized by the CentOS Stream and RHEL kernel maintainers.  You can follow these discussions on the Fedora Kernel Mailing List.

Why is the ELN NVR being changed?

The new ELN NVR will allow for better coordination of feature introduction, bug fixes, and CVE resolutions in future versions of CentOS Stream and RHEL.  More information on these improvements to the CentOS Stream and RHEL ecosystems will be released in the upcoming months.

How is Fedora Linux impacted?

Fedora Linux is not impacted by these changes.  

Since the inception of the kernel-ark project, the Fedora Linux, CentOS Stream, and RHEL maintainers have been extraordinarily careful to ensure that Fedora Linux kernel builds are not impacted by ELN kernel builds (and vice-versa) in the kernel-ark project.  The commitment to prevent cross-OS issues is strictly enforced by the maintainers.  Due to the maintainers’ continued diligence, there is no impact to Fedora Linux.

How can I obtain Fedora Linux or ELN kernels?

Fedora Linux and ELN kernels can be downloaded from Fedora Project’s koji instance.


What’s new in Fedora Workstation 38

Fedora Workstation 38 is the latest version of the leading-edge Linux desktop OS, made by a worldwide community, including you! This article describes some of the user-facing changes in this new version of Fedora Workstation. Upgrade today from GNOME Software, or use dnf system-upgrade in a terminal emulator!

GNOME 44

Fedora Workstation 38 features the newest version of the GNOME desktop environment. GNOME 44 brings subtle tweaks and revamps throughout, most notably in the Quick Settings menu and the Settings app. More details can be found in the GNOME 44 release notes.

File chooser

Most of the GNOME applications are built on GTK 4.10. This introduces a revamped file chooser with an icon view and image previews.

Icon view with image previews, new in GTK 4.10

Quick Settings tweaks

GNOME 44 brings a number of improvements to the Quick Settings menu. The new version includes a new Bluetooth menu, which introduces the ability to quickly connect and disconnect known Bluetooth devices. Additional information is available in each quick settings button, thanks to new subtitles.

The Bluetooth menu can now be used to connect to known devices

Also in the quick settings menu, a new background apps feature lists Flatpak apps which are running without a visible window.

Background Apps lets you see sandboxed apps running without a visible window and close them

Core applications

GNOME’s core applications have received significant improvements in the new version.

Settings has seen a round of updates, focused on improving the experience in each of the settings panels. Here are some notable changes:

  • Major redesigns of the Mouse & Touchpad and Accessibility panels significantly improve usability.
  • The updated Device Security panel now uses clearer language.
  • The redesigned Sound panel now includes new windows for the volume mixer and alert sound.
  • You can now share your Wi-Fi credentials with another device through a QR code.
The revamped Mouse & Touchpad panel in Settings

In Files, there is now an option to expand folders in the list view.

The tree view can be turned on in Files’ settings

GNOME Software now automatically checks for unused Flatpak runtimes and removes them, saving disk space. You can also choose to only allow open source apps in search results.

In Contacts, you can now share a contact through a QR code, making it super easy to share a contact from your desktop to your phone!

Third-party repositories

Fedora’s third-party repositories feature makes it easy to enable a selection of additional software repos. Previous versions included a filtered version of Flathub, which offered only a small number of apps. For Fedora 38, filtering of Flathub content no longer occurs. This means that the third-party repos now provide full access to all of Flathub.

The third party repos must still be manually enabled, and individual repositories may be disabled from the GNOME Software settings. If you want to keep proprietary apps from showing up in your search results, you can also do that in GNOME Software’s preferences menu.

You are in control.

Under-the-hood changes throughout Fedora Linux 38

Fedora Linux 38 features many under the hood changes. Here are some notable ones:

  • The latest Linux kernel, version 6.2, brings extended hardware support, bug fixes and performance improvements.
  • The length of time that system services may block shutdown has been reduced. This means that, if a service delays your machine from powering off, it will be much less disruptive than in the past.
  • RPM now uses the Rust-written Sequoia OpenPGP parser for better security.
  • The Noto fonts are now the default for Khmer and Thai. The variable versions of the Noto CJK fonts are now used for Chinese, Japanese, and Korean. This reduces disk usage.
  • Profiling will be easier starting with Fedora 38, thanks to changes in its default build configuration. The expectation is that this will result in performance improvements in future versions.
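
The shorter shutdown window mentioned above corresponds to systemd’s default stop timeout, which services inherit unless they set their own. Assuming the stock configuration layout, the knob looks like this (the 45 s value is illustrative):

```ini
# /etc/systemd/system.conf (excerpt)
# DefaultTimeoutStopSec= is the fallback stop timeout for services
# that do not set their own TimeoutStopSec=.
[Manager]
DefaultTimeoutStopSec=45s
```

The effective value on a running system can be checked with systemctl show --property=DefaultTimeoutStopUSec.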

Also check out…

Official spins for the Budgie desktop environment and Sway tiling Wayland compositor are now available!


Linux bcache with writeback cache (how it works and doesn’t work)

bcache is a simple and good way to have large disks (typically rotary and slow) exhibit performance quite similar to an SSD disk, using a small SSD disk or a small part of an SSD.

In general, bcache is a system for having devices composed of slow and large disks, with fast and small disks attached as a cache.

This article will discuss performance and some optimization tips as well as configuration of bcache.

The following terms are used in bcache to describe how it works and the parts of bcache:

backing device: the slow, large disk (the disk intended to actually hold the data)
cache device: the fast, small disk (the cache)
dirty cache: data present only in the cache device
writeback: writing to the cache device and only later (possibly much later) to the backing device
writeback rate: the speed at which dirty data is written from the cache to the backing device

A disk data cache has always existed: it is the free RAM in the operating system. When data is read from the disk, it is copied to RAM. If the data is already in RAM, it is read from RAM rather than being read from disk again. When data is written to the disk, it is written to RAM and, after a few moments, written to the disk as well. The time data spends only in RAM is very short, since RAM is volatile.

bcache is similar, only it has various modes of cache operation. The mode that is faster in writing data is writeback. It works the same as for RAM, only instead of RAM there is a SATA or NVME SSD device. The data may reside only in the cache for much longer, even forever, so it is a bit riskier: if the SSD fails, the data that resided only in the cache is lost, with a good chance that the whole filesystem becomes inaccessible.

Performance Comparison

It is very difficult to gather reliable data from any tests, either with real cases or with special programs. They always give extremely variable, different, unstable values. The various caches present and the type of filesystem (btrfs, journaled, etc.), make the values very variable. It is advisable to ignore small differences (say 5-10%).

The following performance data refers to the test below (random and multiple reads/writes), trying to always maintain the same conditions and repeating three times in immediate sequence.

$ sysbench fileio --file-total-size=2G --file-test-mode=rndrw --time=30 --max-requests=0 run

The tables below show the performance of the separate devices:

Performance of the backing device (RAID 1 with 1TB rotary disks)

Throughput:
                     Run 1    Run 2     Run 3
  read, MiB/s:        0.22     0.23      0.19
  written, MiB/s:     0.15     0.16      0.13
Latency (ms):
  max:              174.92   879.59   1335.30
  95th percentile:   87.56    87.56     89.16

RAID 1 with 1TB rotary disks

Performance of the cache device (SSD SATA 100GB)

Throughput:
                     Run 1    Run 2    Run 3
  read, MiB/s:        7.28     7.21     7.51
  written, MiB/s:     4.86     4.81     5.01
Latency (ms):
  max:              126.55   102.39   107.95
  95th percentile:    1.47     1.47     1.47

Cache device (SSD SATA 100GB)

The theoretical expectation that a bcache device will be as fast as the cache device is (physically) impossible to achieve. On average, bcache is significantly slower and only sometimes approaches the same performance as the cache device. Improved performance almost always requires various compromises.

Consider an example assuming there is a 1TB bcache device and a 100GB cache. When writing a 1TB file, the cache device is filled, then partially emptied to the backing device, and refilled again, until the file is fully written.

Because of this (and also because part of the cache also serves data when reading) there is a limit on the length of the file’s sequential data that are written to the cache. Once the limit is exceeded, the file data is written (or read) directly to the backing device, bypassing the cache.

bcache also tracks the response latency of the disks and bypasses the cache when latency thresholds are exceeded. The default thresholds are disproportionately low, especially for SSD SATA, degrading the performance of the cache.

The dirty cache should be emptied to decrease the risk of data loss and to have cache available when it is needed. This should only be done when the devices exhibit little or no activity, otherwise the performance available for normal use collapses.

Unfortunately, the default settings are too low, and the built-in writeback rate adjustment is crude. Improving the writeback rate adjustment requires writing a program (I wrote a script for this).
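
A minimal sketch of the idea behind such a script follows: drain the dirty cache fast while the backing device is idle, and back off while it is busy. The device names, rates, and polling interval here are illustrative assumptions, not the actual values from my script:

```shell
#!/bin/bash
# Sketch of a bcache writeback rate tuner.
# Device names, rates, and the polling interval are illustrative.

BACKING=md0
SYSFS=/sys/block/bcache0/bcache

# Pure helper: choose a writeback rate (in 512-byte sectors per second)
# from the number of I/Os currently in flight on the backing device.
pick_rate() {
    if [ "$1" -eq 0 ]; then
        echo 262144   # idle: drain the dirty cache aggressively (~128 MiB/s)
    else
        echo 8        # busy: keep background writeback near the minimum
    fi
}

tune_once() {
    # Field 9 of /sys/block/<dev>/stat is the in-flight I/O count.
    local inflight
    inflight=$(awk '{print $9}' "/sys/block/$BACKING/stat")
    pick_rate "$inflight" > "$SYSFS/writeback_rate"
}

# Re-evaluate a few times per minute, e.g. from a startup service:
# while true; do tune_once; sleep 5; done
```

Note that with writeback_percent set, bcache’s own rate controller also adjusts writeback_rate, so a real script has to take that built-in mechanism into account.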

The following commands provide the necessary optimizations (required at each startup) to get better performance from the bcache device.

# echo 0 > /sys/block/bcache0/bcache/cache/congested_write_threshold_us
# echo 0 > /sys/block/bcache0/bcache/cache/congested_read_threshold_us
# echo 600000000 > /sys/block/bcache0/bcache/sequential_cutoff
# echo 40 > /sys/block/bcache0/bcache/writeback_percent

The following tables compare the performance with the default values and the optimization results.

Performance with default values

Throughput:
                     Run 1    Run 2    Run 3
  read, MiB/s:        3.37     2.67     2.61
  written, MiB/s:     2.24     1.78     1.74
Latency (ms):
  max:              128.51   102.61   142.04
  95th percentile:    9.22    10.84    11.04

Default values (SSD SATA 100GB)

Performance with optimizations

Throughput:
                     Run 1    Run 2    Run 3
  read, MiB/s:        5.96     3.89     3.81
  written, MiB/s:     3.98     2.59     2.54
Latency (ms):
  max:              131.95   133.23   117.76
  95th percentile:    2.61     2.66     2.66

Optimization (SSD SATA 100GB)

Performance with the writeback rate adjustment script

Throughput:
                     Run 1    Run 2    Run 3
  read, MiB/s:        6.25     4.29     5.12
  written, MiB/s:     4.17     2.86     3.41
Latency (ms):
  max:              130.92   115.96   122.69
  95th percentile:    2.61     2.66     2.61

Writeback rate adjustment (SSD SATA 100GB)

In single operations (without anything else happening in the system) on large files, adjusting the writeback rate becomes irrelevant.

Prepare the backing, cache and bcache device

To create a bcache device you need to install the bcache-tools. The command for this is:

# dnf install bcache-tools

bcache devices are visible as /dev/bcacheN (for example, /dev/bcache0). Once created, they are managed like any other disk.

More details are available at https://docs.kernel.org/admin-guide/bcache.html

CAUTION: Any operation performed can immediately destroy the data on the partitions and disks on which you are operating. Backup is advised.

In the following example /dev/md0 is the backing device and /dev/sda7 is the cache device.

WARNING: bcache device cannot be resized.
NOTE: bcache refuses to use partitions or disks with a filesystem already present.

To delete an existing filesystem you can use:

# wipefs -a /dev/md0
# wipefs -a /dev/sda7

Create the backing device (and therefore the bcache device)

# bcache make -B /dev/md0
if necessary (device status is inactive)
# bcache register /dev/md0

Create the cache device (and hook the cache to the backing device)

# bcache make -C /dev/sda7
if necessary (device status is inactive)
# bcache register /dev/sda7
# bcache attach /dev/sda7 /dev/md0
# bcache set-cachemode /dev/md0 writeback

Check the status

# bcache show

The output from this command includes information similar to the following:
(if the status of a device is inactive, it means that it must be registered)

Name        Type       State           Bname     AttachToDev
/dev/md0    1 (data)   clean(running)  bcache0   /dev/sda7
/dev/sda7   3 (cache)  active          N/A       N/A

Optimize

# echo 0 > /sys/block/bcache0/bcache/cache/congested_write_threshold_us
# echo 0 > /sys/block/bcache0/bcache/cache/congested_read_threshold_us
# echo 600000000 > /sys/block/bcache0/bcache/sequential_cutoff
# echo 40 > /sys/block/bcache0/bcache/writeback_percent

In closing

Hopefully this article will provide some insight on the benefits of bcache if it suits your needs.

As always, nothing fits all cases and all people’s preferences. However, understanding (even roughly) how things work, and especially how they don’t work, as well as how to adapt them, makes the difference in having satisfactory results or not.


Addendum

The following tables show the performance with an SSD NVME cache device rather than the SSD SATA device shown above.

Performance of the cache device (SSD NVME 100GB)

Throughput:
                     Run 1    Run 2    Run 3
  read, MiB/s:       16.31    16.17    15.77
  written, MiB/s:    10.87    10.78    10.51
Latency (ms):
  max:               17.50    15.30    46.61
  95th percentile:    1.10     1.10     1.10

Cache device (SSD NVME 100GB)

Performance with optimizations

Throughput:
                     Run 1    Run 2    Run 3
  read, MiB/s:        7.96     6.87     7.73
  written, MiB/s:     5.31     4.58     5.15
Latency (ms):
  max:               50.79    84.40   108.71
  95th percentile:    2.00     2.03     2.00

Optimization (SSD NVME 100GB)

Performance with the writeback rate adjustment script

Throughput:
                     Run 1    Run 2    Run 3
  read, MiB/s:        8.43     7.52     7.34
  written, MiB/s:     5.62     5.02     4.89
Latency (ms):
  max:               72.71    78.60    50.61
  95th percentile:    2.00     2.03     2.11

Writeback rate adjustment (SSD NVME 100GB)

Contribute at the Fedora CoreOS, Upgrade, and IoT Test Days

Fedora test days are events where anyone can help make certain that changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed to Fedora before, this is a perfect way to get started.

There are five upcoming test days in the next two weeks covering three topics:

  • Tuesday 28 March through Sunday 02 April, to test Fedora CoreOS
  • Wednesday 28 March through Friday 31 March, to test upgrades
  • Monday 03 April through Friday 07 April, to test Fedora IoT

Come and test with us to make Fedora 38 even better. Read more below on how to do it.

Fedora 38 CoreOS Test Week

The Fedora 38 CoreOS Test Week focuses on testing FCOS based on Fedora 38. The FCOS next stream is already rebased on Fedora 38 content, which will be coming soon to testing and stable. To prepare for the content being promoted to the other streams, the Fedora CoreOS and QA teams have organized test days starting Tuesday, March 28, 2023 (results accepted through Sunday, November 12). Refer to the wiki page for links to the test cases and materials you’ll need to participate. The FCOS and QA teams will meet and communicate with the community synchronously on a Google Meet at the beginning of the test week, and asynchronously over multiple Matrix/Element channels. Read more about them in this announcement.

Upgrade test day

As we come closer to the Fedora Linux 38 release date, it’s time to test upgrades. This release has a lot of changes, so it is essential that we test the graphical upgrade methods as well as the command line. As a part of these test days, we will test upgrading from fully updated F36 and F37 systems to F38 for all architectures (x86_64, ARM, aarch64) and variants (Workstation, Cloud, Server, Silverblue, IoT).

IoT test week

For this test week, the focus is all-around: test all the bits that come in a Fedora IoT release and validate different hardware. This includes:

  • Basic installation to different media
  • Installing in a VM
  • rpm-ostree upgrades, layering, rebasing
  • Basic container manipulation with Podman.

We welcome all different types of hardware, but have a specific list of target hardware for convenience.

How do test days work?

A test day is an event where anyone can help make certain that changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. Test days are the perfect way to start contributing if you have not done so in the past.

The only requirement to get started is the ability to download test materials (which include some large files) and then read and follow directions step by step.

Detailed information about all the test days are on the wiki page links provided above. If you are available on or around the days of the events, please do some testing and report your results.


Fedora Linux editions part 3: Labs

Everyone uses their computer in different ways, according to their needs. You may work as a designer, so you need various design software on your computer. Or maybe you’re a gamer, so you need an operating system that supports the games you like. Sometimes we don’t have enough time to prepare an operating system that supports our needs. The Fedora Linux Lab editions are here for that reason. Fedora Labs is a selection of curated bundles of purpose-driven software and content, maintained by members of the Fedora community. This article will go into a little more detail about the Fedora Linux Lab editions.

You can find an overview of all the Fedora Linux variants in my previous article Introduce the different Fedora Linux editions.


Astronomy

Fedora Astronomy is made for both amateur and professional astronomers. You can do various activities related to astronomy with this Fedora Linux. Some of the applications in Fedora Astronomy are Astropy, KStars, Celestia, Virtualplanet, Astromatic, etc. Fedora Astronomy comes with KDE Plasma as its default desktop environment.

Details about the applications included and the download link are available at this link: https://labs.fedoraproject.org/en/astronomy/


Comp Neuro

Fedora Comp Neuro was created by the NeuroFedora Team to support computational neuroscience. Some of the applications included are Neuron, Brian, Genesis, SciPy, Moose, NeuroML, NetPyNE, etc. These include modeling software, analysis tools, and general productivity tools to support your work.

Details about the applications included and the download link are available at this link: https://labs.fedoraproject.org/en/comp-neuro/


Design Suite

This Fedora Linux is for you if you are a designer. You will get a complete Fedora Linux with various tools for designing, such as GIMP, Inkscape, Blender, Darktable, Krita, Pitivi, etc. You are ready to create various creative works with those tools, such as web page designs, posters, flyers, 3D models, videos, and animations. This Fedora Design Suite is created by designers, for designers.

Details about the applications included and the download link are available at this link: https://labs.fedoraproject.org/en/design-suite/


Games

Playing games is fun, and you can do it with Fedora Games. This Fedora Linux comes with various game genres, such as first-person shooters, real-time and turn-based strategy games, and puzzle games. Some of the games included are Extreme Tux Racer, Wesnoth, Hedgewars, Colossus, BZFlag, Freeciv, Warzone 2100, MegaGlest, and Fillets.

Details about the applications included and the download link are available at this link: https://labs.fedoraproject.org/en/games/


Jams

Almost everyone likes music. Some of you may be a musician or music producer. Or maybe you are someone who likes to play with audio. Then Fedora Jam is for you, as it comes with JACK, ALSA, PulseAudio, and various support for audio and music. Some of the default applications in Fedora Jam are Ardour, Qtractor, Hydrogen, MuseScore, TuxGuitar, SooperLooper, etc.

Details about the applications included and the download link are available at this link: https://labs.fedoraproject.org/en/jam/


Python Classroom

Fedora Python Classroom will make your Python-related work easier, especially if you are a Python developer, teacher, or instructor. Fedora Python Classroom comes with various important tools pre-installed. Some of the default applications are IPython, Jupyter Notebook, git, tox, Python 3 IDLE, etc. Fedora Python Classroom comes in three variants: you can run it graphically with GNOME, or use it in Vagrant or Docker containers.

Details about the applications included and the download link are available at this link: https://labs.fedoraproject.org/en/python-classroom/


Security Lab

Fedora Security Lab is Fedora Linux for security testers and developers. Xfce comes as the default desktop environment, with customizations to suit the needs of security auditing, forensics, system rescue, etc. This Fedora Linux provides several applications installed by default to support your work in the security field, such as Etherape, Ettercap, Medusa, Nmap, Scap-workbench, Skipfish, Sqlninja, Wireshark, and Yersinia.

Details about the applications included and the download link are available at this link: https://labs.fedoraproject.org/en/security/


Robotics Suite

Fedora Robotics Suite is Fedora Linux with a wide variety of free and open robotics software packages. This Fedora Linux is suitable for professionals and hobbyists interested in robotics. Some of the default applications are Player, SimSpark, Fawkes, Gazebo, Stage, PCL, Arduino, Eclipse, and MRPT.

Details about the applications included and the download link are available at this link: https://labs.fedoraproject.org/en/robotics/


Scientific

Your scientific and numerical work will become easier with Fedora Scientific. This Fedora Linux features a variety of useful open source scientific and numerical tools. KDE Plasma is the default desktop environment along with various applications that will support your work, such as IPython, Pandas, Gnuplot, Matplotlib, R, Maxima, LaTeX, GNU Octave, and GNU Scientific Library.

Details about the applications included and the download link are available at this link: https://labs.fedoraproject.org/en/scientific/


Conclusion

You have many choices of Fedora Linux to suit your work or hobby. Fedora Labs makes that easy. You don’t need to do a lot of configuration from scratch because Fedora Labs will do it for you. You can find complete information about Fedora Labs at https://labs.fedoraproject.org/.


Contribute at the Fedora Kernel, GNOME, i18n, and DNF test days

Fedora test days are events where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed to Fedora before, this is a perfect way to get started.

There are five test days in the upcoming weeks:

  • Sunday 05 March through Sunday 12 March, to test Kernel 6.2
  • Monday 06 March through Friday 10 March, two test day periods focusing on GNOME Desktop and Core Apps
  • Tuesday 07 March through Monday 13 March, to test i18n
  • Tuesday 14 March, to test DNF 5

Come and test with us to make the upcoming Fedora 38 even better. Read more below on how to do it.

Kernel 6.2 test week

The kernel team is working on final integration for kernel 6.2. This recently released version will arrive soon in Fedora. As a result, the Fedora kernel and QA teams have organized a test week.

Sunday 05 March through Sunday 12 March will be the Kernel test week. Refer to the wiki page for links to the test images you’ll need to participate. This document clearly outlines the steps.

GNOME 44 test week

GNOME is the default desktop environment for Fedora Workstation and thus for many Fedora users. As part of a planned change, GNOME 44 landed in Fedora and will ship with Fedora 38. Since GNOME is such a huge part of the user experience and requires a lot of testing, the Workstation WG and Fedora QA team have decided to split the test week into two parts:

  • Monday 06 March through Wednesday 08 March, we will be testing GNOME Desktop and Core Apps. You can find the test day page here.
  • Thursday 09 March and Friday 10 March, the focus will be on testing the GNOME Apps that are shipped by default. The test day page is here.

i18n test week

The i18n test week focuses on testing internationalization features in Fedora Linux.

The test week is Tuesday 7 March through Monday 13 March. The test week page is available here.

DNF 5

Since the brand new dnf5 package has landed in F38, we would like to organize a test day to get some initial feedback on it. We will be testing DNF 5 to iron out any rough edges.

The test day will be Tuesday 14 March. The test day page is available here.

How do test days work?

A test day is an event where anyone can help make sure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to download test materials (which include some large files) and then read and follow directions step by step.

Detailed information about all the test days is available on the wiki pages mentioned above. If you’re available on or around the days of the events, please do some testing and report your results. The test day pages receive their final touches up to about 24 hours before each event begins, so please be patient: resources are in most cases uploaded only hours before the test day starts.

Come and test with us to make the upcoming Fedora 38 even better.


4 cool new projects to try in Copr for March 2023

This article introduces four new projects available in Copr, with installation instructions.

Copr is a build system for anyone in the Fedora community. It hosts thousands of projects for various purposes and audiences. Some of them should never be installed by anyone, some are already being transitioned to the official Fedora Linux repositories, and the rest are somewhere in between. Copr gives you the opportunity to install 3rd party software that is not available in Fedora Linux repositories, try nightly versions of your dependencies, use patched builds of your favorite tools to support some non-standard use-cases, and just experiment freely.

If you don’t know how to enable a repository or if you are concerned about whether it is safe to use Copr, please consult the project documentation.

This article takes a closer look at interesting projects that recently landed in Copr.

Sticky

Do you always forget your passwords, write them on sticky notes and post them all around your monitor? Well, please don’t use Sticky for that. But it is a great note-taking application with support for checklists, text formatting, spell-checking, backups, and so on. It also supports adjusting note visibility and organizing notes into groups.

Installation instructions

The repo currently provides Sticky for Fedora 36, 37, 38, and Fedora Rawhide. To install it, use these commands:

sudo dnf copr enable a-random-linux-lover/sticky
sudo dnf install sticky

Webapp-manager

Generations of programmers spent over three decades creating, improving, and re-inventing window managers for us to disregard all of that, and live inside of a web browser with dozens of tabs. Webapp-manager allows you to run websites as if they were applications, and return to the previous paradigm.

Installation instructions

The repo currently provides webapp-manager for Fedora 36, 37, 38, and Fedora Rawhide. To install it, use these commands:

sudo dnf copr enable kylegospo/webapp-manager
sudo dnf install webapp-manager

Umoria

Umoria (The Dungeons of Moria) is a single-player dungeon crawl game inspired by J. R. R. Tolkien’s novel The Lord of the Rings. It is considered to be the first roguelike game ever created. A player begins their epic adventure by acquiring weapons and supplies in the town level and then descends to the dungeons to face the evil that lurks beneath.

Installation instructions

The repo currently provides Umoria for Fedora 36, 37, 38, and Fedora Rawhide. To install it, use these commands:

sudo dnf copr enable whitehara/umoria
sudo dnf install umoria

PyCharm

JetBrains PyCharm is a popular IDE for the Python programming language. It provides intelligent code completion, on-the-fly error checking, quick fixes, and much more. The phracek/PyCharm repository is a great example of a well-maintained project that has lived in Copr for a long time. Created eight years ago for Fedora 20, it has provided support for every subsequent Fedora release. It is now a part of the Third-Party Repositories that can be opted into during the Fedora installation.

Installation instructions

The repo currently provides PyCharm for Fedora 36, 37, 38, Fedora Rawhide, EPEL 7, 8, and 9. To install it, use these commands:

sudo dnf copr enable phracek/PyCharm
sudo dnf install pycharm-community

Working with Btrfs – Compression

This article will explore transparent filesystem compression in Btrfs and how it can help with saving storage space. This is part of a series that takes a closer look at Btrfs, the default filesystem for Fedora Workstation and Fedora Silverblue since Fedora Linux 33.

In case you missed it, here’s the previous article from this series: https://fedoramagazine.org/working-with-btrfs-snapshots

Introduction

Most of us have probably experienced running out of storage space already. Maybe you want to download a large file from the internet, or you need to quickly copy over some pictures from your phone, and the operation suddenly fails. While storage space is steadily becoming cheaper, an increasing number of devices are either manufactured with a fixed amount of storage or are difficult to extend by end-users.

But what can you do when storage space is scarce? Maybe you will resort to cloud storage, or you find some means of external storage to carry around with you.

In this article I’ll investigate another solution to this problem: transparent filesystem compression, a feature built into Btrfs. Ideally, this will solve your storage problems while requiring hardly any modification to your system at all! Let’s see how.

Transparent compression explained

First, let’s investigate what transparent compression means. You can compress files with compression algorithms such as gzip, xz, or bzip2. This is usually an explicit operation: You take a compression utility and let it operate on your file. While this provides space savings, depending on the file content, it has a major drawback: When you want to access the file to read or modify it, you have to decompress it first.

This is not only a tedious process, but also temporarily defeats the space savings you had achieved previously. Moreover, you end up (de)compressing parts of the file that you didn’t intend to touch in the first place. Clearly there is something better than that!

Transparent compression on the other hand takes place at the filesystem level. Here, compressed files still look like regular uncompressed files to the user. However, they are stored with compression applied on disk. This works because the filesystem selectively decompresses only the parts of a file that you access and makes sure to compress them again as it writes changes to disk.

The compression here is transparent in that it isn’t noticeable to the user, except possibly for a small increase in CPU load during file access. Hence, you can apply this to existing systems without performing hardware modifications or resorting to cloud storage.

Comparing compression algorithms

Btrfs offers multiple compression algorithms to choose from. For technical reasons it cannot use arbitrary compression programs. It currently supports:

  • zstd
  • lzo
  • zlib

The good news is that, due to how transparent compression works, you don’t have to install these programs for Btrfs to use them. In the following paragraphs, you will see how to run a simple benchmark to compare the individual compression algorithms. In order to perform the benchmark, however, you must install the necessary executables. There’s no need to keep them installed afterwards, so you’ll use a podman container to make sure you don’t leave any traces in your system.

Because typing the same commands over and over is a tedious task, I have prepared a ready-to-run bash script that is hosted on Gitlab (https://gitlab.com/hartang/btrfs-compression-test). This will run a single compression and decompression with each of the above-mentioned algorithms at varying compression levels.

First, download the script:

$ curl -LO https://gitlab.com/hartang/btrfs-compression-test/-/raw/main/btrfs_compression_test.sh

Next, spin up a Fedora Linux container that mounts your current working directory so you can exchange files with the host and run the script in there:

$ podman run --rm -it --security-opt label=disable -v "$PWD:$PWD" \
      -w "$PWD" registry.fedoraproject.org/fedora:37

Finally run the script with:

$ chmod +x ./btrfs_compression_test.sh
$ ./btrfs_compression_test.sh

The output on my machine looks like this:

[INFO] Using file 'glibc-2.36.tar' as compression target
[INFO] Target file 'glibc-2.36.tar' not found, downloading now...
################################################################### 100.0%
[ OK ] Download successful!
[INFO] Copying 'glibc-2.36.tar' to '/tmp/tmp.vNBWYg1Vol/' for benchmark...
[INFO] Installing required utilities
[INFO] Testing compression for 'zlib'
 Level | Time (compress) | Compression Ratio | Time (decompress)
-------+-----------------+-------------------+-------------------
     1 |         0.322 s |          18.324 % |           0.659 s
     2 |         0.342 s |          17.738 % |           0.635 s
     3 |         0.473 s |          17.181 % |           0.647 s
     4 |         0.505 s |          16.101 % |           0.607 s
     5 |         0.640 s |          15.270 % |           0.590 s
     6 |         0.958 s |          14.858 % |           0.577 s
     7 |         1.198 s |          14.716 % |           0.561 s
     8 |         2.577 s |          14.619 % |           0.571 s
     9 |         3.114 s |          14.605 % |           0.570 s
[INFO] Testing compression for 'zstd'
 Level | Time (compress) | Compression Ratio | Time (decompress)
-------+-----------------+-------------------+-------------------
     1 |         0.492 s |          14.831 % |           0.313 s
     2 |         0.607 s |          14.008 % |           0.341 s
     3 |         0.709 s |          13.195 % |           0.318 s
     4 |         0.683 s |          13.108 % |           0.306 s
     5 |         1.300 s |          11.825 % |           0.292 s
     6 |         1.824 s |          11.298 % |           0.286 s
     7 |         2.215 s |          11.052 % |           0.284 s
     8 |         2.834 s |          10.619 % |           0.294 s
     9 |         3.079 s |          10.408 % |           0.272 s
    10 |         4.355 s |          10.254 % |           0.282 s
    11 |         6.161 s |          10.167 % |           0.283 s
    12 |         6.670 s |          10.165 % |           0.304 s
    13 |        12.471 s |          10.183 % |           0.279 s
    14 |        15.619 s |          10.075 % |           0.267 s
    15 |        21.387 s |           9.989 % |           0.270 s
[INFO] Testing compression for 'lzo'
 Level | Time (compress) | Compression Ratio | Time (decompress)
-------+-----------------+-------------------+-------------------
     1 |         0.447 s |          25.677 % |           0.438 s
     2 |         0.448 s |          25.582 % |           0.438 s
     3 |         0.444 s |          25.582 % |           0.441 s
     4 |         0.444 s |          25.582 % |           0.444 s
     5 |         0.445 s |          25.582 % |           0.453 s
     6 |         0.438 s |          25.582 % |           0.444 s
     7 |         8.990 s |          18.666 % |           0.410 s
     8 |        34.233 s |          18.463 % |           0.405 s
     9 |        41.328 s |          18.450 % |           0.426 s
[INFO] Cleaning up...
[ OK ] Benchmark complete!

It is important to note a few things before making decisions based on the numbers from the script:

  • Not all files compress equally well. Modern multimedia formats such as images or movies compress their contents already and don’t compress well beyond that.
  • The script performs each compression and decompression exactly once. Running it repeatedly on the same input file will generate slightly different outputs. Hence, the times should be understood as estimates, rather than an exact measurement.

Given the numbers in my output, I decided to use the zstd compression algorithm with compression level 3 on my systems. Depending on your needs, you may want to choose higher compression levels (for example, if your storage devices are comparatively slow). To get an estimate of the achievable read/write speeds, you can divide the source archive’s size (about 260 MB) by the (de)compression times.
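As a quick illustration of that calculation, the following one-liner uses the zstd level 3 numbers from my run above (0.709 s to compress, 0.318 s to decompress, roughly 260 MB of input); your numbers will differ:

```shell
# Rough throughput estimate from the benchmark numbers above.
awk 'BEGIN {
    size = 260     # approximate archive size in MB
    t_c  = 0.709   # compression time in seconds (zstd, level 3)
    t_d  = 0.318   # decompression time in seconds
    printf "compress: ~%.0f MB/s, decompress: ~%.0f MB/s\n",
           size / t_c, size / t_d
}'
```

With these inputs it reports roughly 367 MB/s for compression and 818 MB/s for decompression.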

The compression test works on the GNU libc 2.36 source code by default. If you want to see the results for a custom file, you can give the script a file path as the first argument. Keep in mind that the file must be accessible from inside the container.

Feel free to read the script code and modify it to your liking if you want to test a few other things or perform a more detailed benchmark!

Configuring compression in Btrfs

Transparent filesystem compression in Btrfs is configurable in a number of ways:

  • As mount option when mounting the filesystem (applies to all subvolumes of the same Btrfs filesystem)
  • With Btrfs file properties
  • During btrfs filesystem defrag (not permanent, not shown here)
  • With the chattr file attribute interface (not shown here)

I’ll only take a look at the first two of these.

Enabling compression at mount-time

There is a Btrfs mount option that enables file compression:

$ sudo mount -o compress=<ALGORITHM>:<LEVEL> ...

For example, to mount a filesystem and compress it with the zstd algorithm on level 3, you would write:

$ sudo mount -o compress=zstd:3 ...

Setting the compression level is optional. It is important to note that the compress mount option applies to the whole Btrfs filesystem and all of its subvolumes. Additionally, it is the only currently supported way of specifying the compression level to use.

In order to apply compression to the root filesystem, it must be specified in /etc/fstab. The Fedora Linux Installer, for example, enables zstd compression on level 1 by default, which looks like this in /etc/fstab:

$ cat /etc/fstab
[ ... ]
UUID=47b03671-39f1-43a7-b0a7-db733bfb47ff / btrfs subvol=root,compress=zstd:1,[ ... ] 0 0
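If your root filesystem is already on Btrfs, a reboot isn’t required to try different settings: the compress option can be changed with a remount. This is a sketch of that operation (the algorithm and level here are arbitrary choices); note that it only affects data written after the remount, and the change is lost on reboot unless you also update /etc/fstab:

```shell
# Switch the mounted Btrfs root to zstd level 3 compression.
# Only data written after this point is compressed with the new
# setting; persist the option in /etc/fstab to survive reboots.
sudo mount -o remount,compress=zstd:3 /
```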

Enabling compression per-file

Another way of specifying compression is via Btrfs filesystem properties. To read the compression setting for any file, folder, or subvolume, use the following command:

$ btrfs property get <PATH> compression

Likewise, you can configure compression like this:

$ sudo btrfs property set <PATH> compression <VALUE>

For example, to enable zlib compression for all files under /etc:

$ sudo btrfs property set /etc compression zlib

You can get a list of supported values with man btrfs-property. Keep in mind that this interface doesn’t allow specifying the compression level. In addition, if a compression property is set, it overrides other compression configured at mount time.

Compressing existing files

At this point, if you apply compression to your existing filesystem and check the space usage with df or similar commands, you will notice that nothing has changed. That is because Btrfs, by itself, doesn’t “recompress” all your existing files. Compression will only take place when writing new data to disk. There are a few ways to perform an explicit recompression:

  1. Wait and do nothing: As files are modified and written back to disk, Btrfs compresses the newly written file contents as configured. If you wait long enough, an increasing portion of your files will be rewritten and, hence, compressed.
  2. Move files to a different filesystem and back again: Depending on which files you want to apply compression to, this can become a rather tedious operation.
  3. Perform a Btrfs defragmentation

The last option is probably the most convenient, but it comes with a caveat on Btrfs filesystems that already contain snapshots: it will break shared extents between snapshots. In other words, all the content shared between two snapshots, or between a snapshot and its parent subvolume, will be stored multiple times after a defrag operation.

Hence, if you already have a lot of snapshots on your filesystem, you shouldn’t run a defragmentation on the whole filesystem. This isn’t necessary either, since with Btrfs you can defragment specific directories or even single files, if you wish to do so.

You can use the following command to perform a defragmentation:

$ sudo btrfs filesystem defragment -r /path/to/defragment

For example, you can defragment your home directory like this:

$ sudo btrfs filesystem defragment -r "$HOME"

In case of doubt it’s a good idea to start with defragmenting individual large files and continuing with increasingly large directories while monitoring free space on the file system.

Measuring filesystem compression

At some point, you may wonder just how much space you have saved thanks to file system compression. But how do you tell? First, to tell if a Btrfs filesystem is mounted with compression applied, you can use the following command:

$ findmnt -vno OPTIONS /path/to/mountpoint | grep compress
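To pull out just the compression setting from the options string, you can filter the output with grep. In this sketch the options string is a hard-coded sample (an assumption for illustration); on a real system you would obtain it from the findmnt command above:

```shell
# Sample mount options string (assumed for illustration); on a real
# system: opts=$(findmnt -vno OPTIONS /path/to/mountpoint)
opts="rw,relatime,compress=zstd:1,subvol=/root"

# Print only the compress=... setting:
echo "$opts" | grep -o 'compress[^,]*'
```

For the sample string this prints compress=zstd:1; the same pattern also matches compress-force options.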

If you get a result, the filesystem at the given mount point is using compression! Next, the command compsize can tell you how much space your files need:

$ sudo compsize -x /path/to/examine

On my home directory, the result looks like this:

$ sudo compsize -x "$HOME"
Processed 942853 files, 550658 regular extents (799985 refs), 462779 inline.
Type   Perc  Disk Usage  Uncompressed  Referenced
TOTAL   81%         74G           91G        111G
none   100%         67G           67G         77G
zstd    28%        6.6G           23G         33G

The individual lines tell you the “Type” of compression applied to files. The “TOTAL” line is the sum of all the lines below it. The columns, on the other hand, tell you how much space your files need:

  • “Disk Usage” is the actual amount of storage allocated on the hard drive,
  • “Uncompressed” is the amount of storage the files would need without compression applied,
  • “Referenced” is the total size of all uncompressed files added up.

“Referenced” can differ from the numbers in “Uncompressed” if, for example, one has deduplicated files previously, or if there are snapshots that share extents. In the example above, you can see that 91 GB worth of uncompressed files occupy only 74 GB of storage on my disk! Depending on the type of files stored in a directory and the compression level applied, these numbers can vary significantly.
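Using the TOTAL line from my output above (74 GB on disk versus 91 GB uncompressed), the savings work out like this:

```shell
# Savings computed from the compsize TOTAL line above
# (disk usage 74G, uncompressed 91G; numbers from my system).
awk 'BEGIN {
    disk = 74
    uncompressed = 91
    printf "saved: %d GB (%.0f%% of the uncompressed size on disk)\n",
           uncompressed - disk, disk / uncompressed * 100
}'
```

That is 17 GB saved, with compressed data occupying about 81% of its uncompressed size.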

Additional notes about file compression

Btrfs uses a heuristic algorithm to detect incompressible files. This is done because already-compressed files usually do not compress further, so there is no point in wasting CPU cycles attempting additional compression. To this end, Btrfs measures the compression ratio when compressing data before writing it to disk. If the first portions of a file compress poorly, the file is marked as incompressible and no further compression takes place.
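You can get a feel for why this heuristic pays off with a quick experiment outside of Btrfs: compress a chunk of random bytes (which behaves like already-compressed data) and a chunk of zeros with gzip, then compare the sizes. This is only an illustration of the principle; Btrfs uses its own internal heuristic, not gzip:

```shell
# Compare how incompressible vs. highly compressible data shrinks.
tmp=$(mktemp -d)
head -c 65536 /dev/urandom > "$tmp/random.bin"   # incompressible input
head -c 65536 /dev/zero    > "$tmp/zeros.bin"    # highly compressible input
gzip -k "$tmp/random.bin" "$tmp/zeros.bin"       # -k keeps the originals
wc -c "$tmp"/*.gz                                # compare compressed sizes
rm -r "$tmp"
```

The random file barely shrinks (it may even grow slightly from gzip overhead), while the zeros compress to well under a kilobyte. Attempting to compress data like the former is exactly the wasted work the heuristic avoids.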

If, for some reason, you want Btrfs to compress all data it writes, you can mount a Btrfs filesystem with the compress-force option, like this:

$ sudo mount -o compress-force=zstd:3 ...

When configured like this, Btrfs will compress all data it writes to disk with the zstd algorithm at compression level 3.

An important thing to note is that a Btrfs filesystem with a lot of data and compression enabled may take a few seconds longer to mount than without compression applied. This has technical reasons and is normal behavior which doesn’t influence filesystem operation.

Conclusion

This article detailed transparent filesystem compression in Btrfs. It is a built-in, comparatively cheap, way to get some extra storage space out of existing hardware without needing modifications.

The next articles in this series will deal with:

  • Qgroups – Limiting your filesystem size
  • RAID – Replace your mdadm configuration

If there are other topics related to Btrfs that you want to know more about, have a look at the Btrfs Wiki [1] and Docs [2]. Don’t forget to check out the first three articles of this series, if you haven’t already! If you feel that there is something missing from this article series, let me know in the comments below. See you in the next article!

Sources

[1]: https://btrfs.wiki.kernel.org/index.php/Main_Page
[2]: https://btrfs.readthedocs.io/en/latest/Introduction.html


Podman Checkpoint

Podman is a tool that runs, manages, and deploys containers under the OCI standard. Running containers with rootless access and creating pods (a pod is a group of containers) are additional features of Podman. This article describes and explains how to use checkpointing in Podman to save the state of a running container for later use.

Checkpointing containers: Checkpoint/Restore In Userspace, or CRIU, is Linux software available in the Fedora Linux repositories as the criu package. It can freeze a running container (or an individual application) and checkpoint its state to disk (reference: https://criu.org/Main_Page). The saved data can be used to restore the container and run it exactly as it was at the time of the freeze. With this, you can achieve live migration, snapshots, or remote debugging of applications and containers. This capability requires CRIU 3.11 or later installed on the system.

Podman Checkpoint

# podman container checkpoint <containername> 

This command creates a checkpoint of the container and freezes its state. Checkpointing also stops the running container, so afterwards podman ps will no longer list a running container named <containername>.

You can export the checkpoint to a file at a specific location and copy that file to a different server:

# podman container checkpoint <containername> -e /tmp/mycheckpoint.tar.gz

Podman Restore

# podman container restore --keep <containername> 

The --keep option will restore the container together with its temporary files.

To import the container checkpoint you can use:

# podman container restore -i /tmp/mycheckpoint.tar.gz

Live Migration using Podman Checkpoint

This section describes how to migrate a container from client1 to client2 using the podman checkpoint feature. This example uses the https://lab.redhat.com/tracks/rhel-system-roles playground provided by Red Hat, as it has multiple hosts with SSH keys already configured.

The example will run a container with some process on client1, create a checkpoint, and migrate it to client2. First run a container on the client1 machine with the commands below:

podman run --name=demo1 -d docker.io/httpd
podman exec -it demo1 bash
sleep 600 &   # run a background process for later verification
exit

The above snippet starts a container named demo1 running httpd, then launches a sleep process inside it for 600 seconds (10 minutes) in the background. You can verify this by running:

# podman top demo1
USER       PID   PPID   %CPU    ELAPSED           TTY   TIME   COMMAND
root       1     0      0.000   5m40.61208846s    ?     0s     httpd -DFOREGROUND
www-data   3     1      0.000   5m40.613179941s   ?     0s     httpd -DFOREGROUND
www-data   4     1      0.000   5m40.613258012s   ?     0s     httpd -DFOREGROUND
www-data   5     1      0.000   5m40.613312515s   ?     0s     httpd -DFOREGROUND
root       88    1      0.000   16.613370018s     ?     0s     sleep 600

Now create a container checkpoint and export it to a specific file:

# podman container checkpoint demo1 -e /tmp/mycheckpoint.tar.gz
# scp /tmp/mycheckpoint.tar.gz client2:/tmp/

Then on client2:

# cd /tmp
# podman container restore -i mycheckpoint.tar.gz
# podman top demo1

You should see the output as follows:

USER       PID   PPID   %CPU    ELAPSED           TTY   TIME   COMMAND
root       1     0      0.000   5m40.61208846s    ?     0s     httpd -DFOREGROUND
www-data   3     1      0.000   5m40.613179941s   ?     0s     httpd -DFOREGROUND
www-data   4     1      0.000   5m40.613258012s   ?     0s     httpd -DFOREGROUND
www-data   5     1      0.000   5m40.613312515s   ?     0s     httpd -DFOREGROUND
root       88    1      0.000   16.613370018s     ?     0s     sleep 600

In this way you can achieve a live migration using the podman checkpoint feature.