How to become a Shortwave listener (SWL) with Fedora Linux and Software Defined Radio

Catching signals from others is how human beings started communicating. It all began, of course, with our vocal cords. Then we moved to smoke signals for long-distance communication. At some point, we discovered radio waves, and we still use them to stay in contact. This article describes how you can tune in using Fedora Linux and an SDR dongle.

My journey

I got interested in radio communication as a hobby when I was a kid, while my local club, LZ2KRS, was still a thing. I was so excited to be able to listen and communicate with people worldwide. It opened a whole new world for me. I was living in a communist country back then and this was a way to escape just for a bit. It also taught me about ethics and technology.  

Year after year my hobby grew and now, in the Internet era with all the cool devices you can use, it’s getting even more exciting. So I want to show you how to do it with Fedora Linux and a hardware dongle.

What is Ham Radio?

Amateur Radio (ham radio) is a popular hobby and service that brings people, electronics, and communication together. People use ham radio to talk across town, worldwide, or even into space, without the Internet or cell phones. 

What’s SWLing?

To broadcast with your ham radio or SDR system, you need to obtain a license from a governmental body. But to intercept signals and listen to the open communication between two amateur radio stations, you don’t need one.

The term SWLing comes from the abbreviation SWL, for Short Wave Listener: you listen to stations communicating in the shortwave bands between 3 and 30 MHz. These bands can be used for long-distance communication because signals reflect off the ionosphere, a layer of the Earth’s atmosphere.

To get started, you don’t need a license. Still, I recommend getting yourself an SWL sign to identify yourself in a listening contest. These are competitions for categories like who can log the most contacts in a month or who can hear stations from every country in the world.

How to get an SWL Sign?

There are two options:

  • Contact your national radio club and ask them to issue one for you. I got my Czech one, OK1-36568, after a few weeks.
  • Join the Short Wave Amateur Radio Listening community and request a sign there.

Either of these organizations can give you more information and help if you get stuck.

QSL Cards

You can also use your sign to send QSL cards via post or electronically. This is a great way to communicate with people worldwide and make friends.

Per Wikipedia, a QSL card is a written confirmation of either a two-way radio communication between two amateur radio or citizens band stations; a one-way reception of a signal from an AM radio, FM radio, television, or shortwave broadcasting station; or the reception of a two-way radio communication by a third party listener (in our case).

A typical QSL card is the same size and made from the same material as a regular postcard; most are sent through snail mail. 

Replace the radio receiver with your Fedora Linux system

The focal point of the ham radio hobby is the radio transmitter/receiver. Most of the time, enthusiasts build their radio from scratch, but that is not what I will cover here.

SDR

A software-defined radio (SDR) system is a radio communication system that uses software for the modulation and demodulation of radio signals. In other words, a piece of hardware and software takes the place of a radio transmitter/receiver. This helps you discover more in a way that you are familiar with – a User Interface with built-in functions instead of the limited interface of a radio receiver. 

My explanation oversimplifies things, so if you want to go deep and read more about SDR, here is an excellent start.

SDR Set Up under Fedora Linux

Choosing the proper hardware

If you search the Internet for an SDR dongle, you’ll find tons of ideas depending on your budget. In this tutorial, I’ll work with the one I have, which works well under Fedora 37 – it is available from Nooelec.

A note: The dongle covers frequencies from 25 MHz to 1750 MHz, which does not include the shortwave bands. You need an additional device to listen to them. This is included in the package I linked above. Some other hardware providers offer all-in-one products.

Check if the dongle is visible

Before installing anything, detect whether Fedora Linux recognizes your USB dongle. I hope you didn’t buy a fake one :-). Use the following command to list the USB devices on your system.

lsusb

One of the output lines (in the case of Nooelec) should be

Realtek Semiconductor Corp. RTL2838 DVB-T
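
The full line also includes the bus, device number, and USB ID, so it should look roughly like this (the bus and device numbers will differ on your system):

Bus 001 Device 004: ID 0bda:2838 Realtek Semiconductor Corp. RTL2838 DVB-T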

A screen from Fedora Linux showing the results of the lsusb command listing the Realtek Device we will be using in this exercise.

Now proceed by installing the software you need

Fedora offers a set of tools and drivers packaged as a group. Even though you would not use all the components in this package from the beginning, I recommend installing it. You’ll have more software to play with.

sudo dnf group install 'Electronic Lab'

I advise you to explore what’s in the group by running this command:

sudo dnf group info 'Electronic Lab'

Now check if you have everything set up correctly by running:

rtl_test

You should see something like this:

A screen from Fedora Linux console showing the results of the previous command listing the device and its properties.
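
If you are following along in a terminal, the rtl_test output typically looks roughly like the sketch below. The tuner chip, serial number, and gain values shown here are only examples and depend on your specific dongle:

Found 1 device(s):
  0:  Realtek, RTL2838UHIDIR, SN: 00000001

Using device 0: Generic RTL2832U OEM
Found Rafael Micro R820T tuner
Supported gain values (29): 0.0 0.9 1.4 ...
Sampling at 2048000 S/s.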

Do not forget to kill this process because the device will be busy and cannot be used in the next step. A simple Ctrl + c works.

Gqrx

You have the dongle already in your device’s USB port and all the software you need to get started. 

 Now it’s time to intercept your first signal. Start the program called Gqrx. Don’t be alarmed by the strange interface. You’ll get used to it. 

Configure the I/O Device Screen

A screen showing the settings panel of the software Gqrx, for the device we use.

From the “Device” Dropdown, select the ‘RealtekRTL2838...’

Leave the rest untouched for the moment.

If you don’t see your device there, click the “Device Scan” button at the bottom of the screen.

When your device is selected, click “OK” and the dialogue will close.

Configure the frequency screen

Before you start intercepting signals, ensure there is something out there that proves that everything works correctly. Since the dongle covers the FM radio band as well, do this:

  • Locate your favorite radio station’s frequency. Mine is 105 MHz
  • Set it in the Frequency field
  • Select WFM (stereo) in the “Mode” dropdown. If you don’t do this, you will not hear a sound.
A screen from the gqrx software helping us to set the frequency to our favorite FM station.

Play

And now, you need to start the reception by clicking the “play button” in your main menu. You will see the frequency visualized like this:

A screen from gqrx displaying the signal received.

If you hear a sound, everything is ready to move to the next step.

If you don’t hear anything, check if everything is set up correctly. You may ask a question in the comments on this article; I can direct you to the proper forum to solve the issue.

Feel free to play with some more FM broadcasts. You have the antenna for it in your pack.

Let’s go Short Wave

In the case of the Nooelec, you need to add one more device to the USB dongle and turn it on. Instructions on how to do that are included in the package you receive.  

In short, you plug the “Up Converter” into your USB dongle and make sure the switch is in the “convert” position. Some videos are available on how to do it if you get stuck.

You will need an antenna and a good location

Now things get trickier. If you live in an area where you can’t see open space out your window, or where other buildings surround yours, you might have trouble catching a shortwave amateur radio signal.

Let’s try this to see if it works

Try to be in the open. I usually listen from my terrace, which is not ideal but works under the right conditions.

Apart from the hardware, you will need a long wire to act as your antenna. Initially, try the antenna that comes with the hardware (the telescoping one from Nooelec), but be aware that it will only catch powerful signals.

Let’s go back to Gqrx

Now with the converter, you need to make some changes to your device screen:

A screen from gqrx showing how to set up the SDR with the Up Converter. You need to add the value of -125 MHz to the LNB LO field.

Please note the -125 MHz for the LNB LO field. This is required for the Up Converter to work.
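
If you are curious why a negative value is needed, the arithmetic is simple (assuming the 125 MHz up-conversion this converter performs):

14.100 MHz (signal on the air) + 125 MHz (up-converter shift) = 139.100 MHz (what the dongle receives)
139.100 MHz + (-125 MHz LNB LO) = 14.100 MHz (what Gqrx displays)

In other words, the -125 MHz offset makes Gqrx display the real shortwave frequency instead of the converted one.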

Tune your frequency to 14.100 MHz and make sure your Mode is USB (Upper Sideband), because this is the main demodulation mode used on this band.

Then go to your FFT Settings screen, use the zoom slider, and set it to show about 100 kHz. In our case, you should see between 14.05 and 14.15 MHz on your screen.

Also, click “Enable Band Plan” to see information about the SW bands you are exploring.

Then hit the play button and start exploring the space between 14.0 and 14.3 MHz to find an amateur radio transmission.

A screen showing the gqrx signal receiver at work with the settings described in this section.

When intercepting a transmission, adjust your settings to improve your listening experience. It’s a journey that you have already started.

Most probably, you will hear something like this:

“CQ CQ CQ this is ..(followed by the radio license number spelled with the ham radio phonetic alphabet). 

Listen very carefully, and from the call sign you will be able to determine the country the radio amateur is transmitting from.

You can visit the QRZCQ website to learn more about them and even send them a QSL card confirming the contact you heard.

Keep the momentum going.

Now you have some tools and ideas for starting Short Wave Listening. 

This is the first step of an incredible and exciting journey you can have together with your Fedora Linux OS. 

You will discover the pleasure of building your antenna for the specific band, reading more about how the ionosphere helps, how to be a part of a listening competition, and what those Q-codes mean.

73

Anaconda Web UI storage feedback requested!

As you might know, the Anaconda Web UI preview image has a simple “erase everything” partitioning right now because partitioning is a pretty big and problematic topic. On one hand, Linux guru people want to control everything; on the other hand, we also need to support beginner users. We are also constrained by the capabilities of the existing backend and storage tooling and consistency with the rest of Anaconda. The Anaconda team is looking for your storage feedback to help us with the design of the Web UI!

In general, partitioning is one of the most complex, problematic, and controversial parts of what Anaconda is doing. Because of that and the great feedback from the last blog, we decided to ask you for feedback again to know where we should focus. We’re looking for feedback from everyone. More answers are better here. We’d like to get input if you’re using Fedora, RHEL, Debian, OpenSUSE, Windows, or Linux, even if it’s just for a week. All these inputs are valuable!

Please help us shape one of the most complex parts of the Anaconda installer!

With just a few minutes of your time filling out the questionnaire, you can help us decide which path we’d like to choose for partitioning.

Questionnaire link: https://redhatdg.co1.qualtrics.com/jfe/form/SV_87bPLycfp1ueko6 

Using OpenSearch in Fedora Linux

OpenSearch is Amazon’s open-source search engine and analytics suite. Individuals, businesses, and organizations can use the service to search for a wide range of information and use visualization tools to better understand user behavior and search trends. This article will discuss how you can use OpenSearch in Fedora Linux.

What can OpenSearch do?

OpenSearch provides several features and tools. These are:

  • Applications that monitor and debug your cluster.
  • Security and event information management.
  • Seamless, personalized search results.
  • A web-based user interface for searching and browsing search results.
  • The ability to search for specific terms or phrases within a document or webpage.
  • The ability to filter search results by date, relevance, or other criteria.
  • The ability to create and save searches for later use.
  • The ability to customize the appearance and functionality of the search results page.
  • Advanced analytics and reporting tools to help users understand and analyze search traffic and user behavior.

The following sections will guide you through the basics of creating a domain, uploading test data, and visualizing your information with OpenSearch Dashboards.

What is an OpenSearch Service domain?

An OpenSearch Service domain is a service provided by AWS that allows you to create, manage, and configure your cluster(s) using either the AWS console or the AWS command-line interface (CLI). This tutorial will use the AWS console to create and configure your domain.

Getting started

To begin the domain setup, launch your preferred browser and log in to your AWS console. Navigate to the Amazon OpenSearch Service page, then click Create domain.

Create domain page segment which features options to choose your domain name and create a custom endpoint.

Choose your domain name and leave the Enable custom endpoint box unmarked.

Create domain page segment which features options to choose your deployment type, which version of OpenSearch or Elastic search you'd like to use, and enable compatibility mode.

OpenSearch is a fork of Elasticsearch version 7.10. You can choose any version up to Elasticsearch version 7.10 in addition to OpenSearch versions.

Choose Development and testing for your deployment type, the most recent OpenSearch version, and enable compatibility.

Create domain page segment which features options to enable Auto-Tune or add a maintenance window.

Leave Auto-Tune enabled and Add maintenance window unmarked.

Create domain page segment which features options to configure your nodes based on the needs for you application.

The Data nodes options allow you to customize your nodes based on the needs of your applications:

  • Availability Zones (AZ)
    • Amazon Web Services (AWS) Availability Zones are physically separate and isolated data center locations within an AWS region. Each Availability Zone is designed to be fault-tolerant, with redundant power, networking, and cooling infrastructure.
  • Instance type:
    • Refers to the type of virtual server you’d like to use for your application.
  • Number of nodes:
    • The number of nodes you’d like to allocate to each of your AZs.

Since we’re running in a small development setting, set your AZ to 2, your Instance type to t3.small.search, and Number of nodes to 2. Don’t change the default settings for your Storage type, EBS volume type, and EBS storage size per node.

Create domain page segment which features options to select Warm and cold data storage and the number of master nodes you'd like to use. Warm and cold data storage are cost effective solutions for storing large amounts of data and the default frequency of snapshots taken of your cluster is hourly.

Ignore these options for now, but read on for more information:

  • Warm and cold data storage:
    • For use cases that require a cost-effective solution for storing large amounts of immutable data.
  • Dedicated master nodes
    • Allows you to choose how many master nodes you’d like to use for your domain.
  • Snapshot configuration:
    • Set to hourly by default.
Create domain page segment which features options to set what type of network access you'd like to use and enable granular level control over your data.

VPC access is recommended for production environments. You’ll also need to create a master user login to access OpenSearch Dashboards, OpenSearch’s data visualization tool. We’ll discuss how to use OpenSearch dashboards after you configure your domain.

Select Public access and Create master user, and set up your login.

Create domain page segment which features options to integrate your already existing authentication and Amazon Cognito authentication and set your domain's access policy.

Leave Prepare SAML authentication and Enable Amazon Cognito authentication option boxes unchecked and select Only use fine-grained access control for your access policy.

Create domain page segment which features option to set what type of encryption you'd like your domain to use.

Select Use AWS owned key, ignore the optional configurations, click Create to create your domain, then wait for your domain to activate.

Using OpenSearch Dashboards

OpenSearch Dashboards is a tool that allows you to create and customize interactive dashboards to visualize the data your site receives from user interaction. These dashboards are visual representations of data from various sources such as logs, metrics, and security events, which can be customized to meet your specific needs, including:

  • Dragging and dropping different types of visualizations, such as graphs, maps, and tables, onto a dashboard.
  • Filtering and manipulating data to highlight specific trends or patterns.
  • Sharing dashboards with other users or embedding them in other applications.
  • Collaborating with other users in real-time on the same dashboard.

Navigate to Domains and select your domain from the list.

A list of your domains that provides information on metrics such as Cluster Health, Searchable documents, Total free space, and more.

Click OpenSearch Dashboards URL to access your OpenSearch Dashboard.

Your domain page that lists general information (such as name and Cluster health) and cluster configuration.

You’ll be presented with one of the following screens after you’ve logged into your dashboard:

OpenSearch Dashboard initial login prompt. The prompt asks if you would like to add data or explore the platform.
Upon first login
OpenSearch Dashboards home page. Has options to add sample data or interact with the OpenSearch API
Upon subsequent logins

Visualization options

Click Add sample data to add sample data provided by AWS.

Page showing 3 options of sample data you can upload to your domain. The options are eCommerce orders, flight data, and web logs.

You may select any of the three options. The Sample web logs option is used here to show examples of the types of visualizations you can use to analyze your data.

OpenSearch Dashboard visualizations which include Unique visitors, Visitors by OS, and a search query to search for what OS users use in other countries.
OpenSearch Dashboard visualizations which include response codes over time + Annotations and Unique Visitors vs Average Bytes.
OpenSearch Dashboard visualizations which include a file type scatter plot and a table that shows which hosts, and how many bytes and unique visits, the site received in the last hour.
OpenSearch Dashboard visualization that shows a heatmap of what country a visitor came from throughout the day.
OpenSearch Dashboard visualization that shows a map of which part of the world visitors viewed the site from.
OpenSearch Dashboard visualization that shows a Source and Destination Sankey Chart.

Click Create new to add more visualization options.

Add your own data to analyze

You can upload one or more of your documents by entering commands through a CLI.

Add a single document

curl -XPUT -u 'master-user:[master-user-password]' 'domain-endpoint/[domain name]/_doc/1' -d '{"field1": "string1", "field2": ["string3","string4"]}' -H 'Content-Type: application/json'

Add multiple documents

Create a JSON file with your documents and run a command to add multiple documents:

JSON file format:

{ "index" : { "_index": "indexname", "_id" : "2" } }
{"field1": "string1", "field2": ["string2", "string3", "string4"], "field3": 1234, "field4": ["String, 5", "String, 6"]}
{ "index" : { "_index": "indexname", "_id" : "3" } }
{"field5": "string7", "field6": ["string8", "string9", "string10"], "field7": 5678, "field8": ["String, 11", "String, 12"]}
{ "index" : { "_index": "indexname", "_id" : "4" } }
{"field9": "string13", "field10": ["string14", "string15", "string16"], "field11": 1011, "field12": ["String, 17", "String, 18"]}

Index naming restrictions:

  • All letters must be lowercase.
  • Index names cannot begin with _ or - .
  • Index names can’t contain spaces, commas, : , " , * , + , / , \ , | , ? , # , > , or < .

Command to run:

curl -XPOST -u 'master-user:[master-user-password]' 'domain-endpoint/_bulk' --data-binary @bulk_[domain name].json -H 'Content-Type: application/json'
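
To confirm that the documents were indexed, you can query them back with the OpenSearch search API. This is a minimal sketch that assumes the same master user credentials and the indexname used in the JSON file above:

curl -XGET -u 'master-user:[master-user-password]' 'domain-endpoint/indexname/_search?q=field1:string1&pretty'

The response lists the matching documents along with their _id and _source fields.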

You can now create and configure your own domain and use OpenSearch Dashboards to visualize the data your domain receives.

Contribute at the Fedora Linux Test Week for Kernel 6.1

The kernel team is working on final integration for Linux kernel 6.1. This version was just recently released, and will arrive soon in Fedora Linux. As a result, the Fedora Linux kernel and QA teams have organized a test week from Tuesday, Jan 03, 2023 to Sunday, Jan 07, 2023. Refer to the wiki page for links to the test images you’ll need to participate. Continue reading for details.

How does a test week work?

A test week is an event where anyone can help make sure changes in Fedora Linux work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

- Download test materials, which include some large files
- Read and follow directions step by step

The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the days of the event, please do some testing and report your results. We have a document which provides all the necessary steps.

Happy testing, and we hope to see you on test day.

Working with Btrfs – Snapshots

This article will explore what Btrfs snapshots are, how they work, and how you can benefit from taking snapshots in every-day situations. This is part of a series that takes a closer look at Btrfs, the default filesystem for Fedora Workstation and Fedora Silverblue since Fedora Linux 33.

In case you missed it, here’s the previous article from this series: https://fedoramagazine.org/working-with-btrfs-subvolumes/

Introduction

Imagine you work on a file over extended periods of time, repeatedly adding changes and undoing them. Then, at some point you realize: Parts of the changes you undid two hours ago would be very helpful now. And yesterday you had already changed this particular bit, too, before you trashed that design. But of course, because you regularly save your files, old changes are lost. Many people have probably experienced a situation like this before. Wouldn’t it be great if you could recover old file versions without having to manually copy them at regular intervals?

This is just one typical situation where Btrfs snapshots can help you out. When used correctly, snapshots also give you a great backup solution for your PC.

Below you will find a lot of examples related to snapshots. If you want to follow along, you must have access to a Btrfs filesystem and root access. You can check the file system of a directory using the following command:

$ findmnt -no FSTYPE /home
btrfs

Here the findmnt command shows the type of filesystem for your /home/ directory. If it says btrfs, you’re all set. Let’s create a new directory in which to perform some experiments:

$ mkdir ~/btrfs-snapshot-test
$ cd ~/btrfs-snapshot-test

In the text below, you will find lots of command responses in boxes such as shown above. Please keep in mind while reading/comparing command output that the box contents may be wrapped at the end of the line. This may make it difficult to recognize long lines that are broken across multiple lines for readability. When in doubt, try to resize your browser window and see how the text behaves!

Snapshots in Btrfs

Let’s start with an elementary question: What is a Btrfs snapshot? If you look in the Docs [1] and Wiki [2], you won’t immediately find an answer to this question. In fact, it is nowhere to be found in the “Features” section. If you search a little, you will find snapshots mentioned extensively along with Btrfs subvolumes [3]. So now what?

Remember that snapshots were mentioned in both of the previous articles of this series? There it said:

What is the advantage of CoW? In simple terms: a history of the modified and edited files can be kept. Btrfs will keep the references to the old file versions (inodes) somewhere they can be easily accessed. This reference is a snapshot: An image of the filesystem state at some point in time.

Working with Btrfs: General Concepts

and also:

Another advantage of separating / and /home is that you can take snapshots separately. A subvolume is a boundary for snapshots, and snapshots will never contain the contents of other subvolumes below the subvolume that the snapshot is taken of.

Working with Btrfs: Subvolumes

It seems snapshots have something to do with Btrfs subvolumes. You may have heard about snapshots in other contexts before, for example with LVM, the Logical Volume Manager. While technically they serve the same purpose, they are different in terms of how they reach their goal.

Every Btrfs snapshot is a subvolume. However, not every subvolume is a snapshot. The difference is in what the subvolume contains. A snapshot is a subvolume with added content: it holds references to current and/or past versions of files (inodes). Let’s see where snapshots come from!

Creating Btrfs snapshots

To use snapshots, you need a Btrfs subvolume to take snapshots of. Let’s create one inside our test folder (~/btrfs-snapshot-test):

$ cd ~/btrfs-snapshot-test
$ sudo btrfs subvolume create demo
Create subvolume './demo'
$ sudo chown -R $(id -u):$(id -g) demo/
$ cd demo

Since by default Btrfs subvolumes are owned by root, you must call chown to modify the files in the subvolume to be owned by a regular user. Now add a few files inside it:

$ touch foo bar baz
$ echo "Lorem ipsum dolor sit amet, " > foo

Your directory now looks something like this:

$ ls -l
total 4
-rw-r--r--. 1 hartan hartan 0 Dec 20 08:11 bar
-rw-r--r--. 1 hartan hartan 0 Dec 20 08:11 baz
-rw-r--r--. 1 hartan hartan 29 Dec 20 08:11 foo

Let’s create the very first snapshot from that:

$ cd ..
$ sudo btrfs subvolume snapshot demo demo-1
Create a snapshot of 'demo' in './demo-1'

And that’s it. Let’s see what was achieved:

$ ls -l
total 0
drwxr-xr-x. 1 hartan hartan 18 Dec 20 08:11 demo
drwxr-xr-x. 1 hartan hartan 18 Dec 20 08:11 demo-1
$ tree
.
├── demo
│   ├── bar
│   ├── baz
│   └── foo
└── demo-1
    ├── bar
    ├── baz
    └── foo

2 directories, 6 files

It seems it made a copy! To verify, let’s read the contents of foo from the snapshot:

$ cat demo/foo
Lorem ipsum dolor sit amet,
$ cat demo-1/foo
Lorem ipsum dolor sit amet,

The real effect becomes apparent when we modify the original file:

$ echo "consectetur adipiscing elit, " >> demo/foo
$ cat demo/foo
Lorem ipsum dolor sit amet, consectetur adipiscing elit,
$ cat demo-1/foo
Lorem ipsum dolor sit amet,

This shows that the snapshot still holds the “old” version of the data: The content of foo hasn’t changed. So far, you could have achieved the exact same thing with a simple file copy. You can now go ahead and continue working on the old file, too:

$ echo "sed do eiusmod tempor incididunt" >> demo-1/foo
$ cat demo-1/foo
Lorem ipsum dolor sit amet, sed do eiusmod tempor incididunt

Under the hood, however, our snapshot is in fact a new Btrfs subvolume. You can verify this with the following command:

$ sudo btrfs subvolume list -o .
ID 259 gen 265 top level 256 path home/hartan/btrfs-snapshot-test/demo
ID 260 gen 264 top level 256 path home/hartan/btrfs-snapshot-test/demo-1

Btrfs snapshots vs. file copies

So what’s the point of all this? Up until now snapshots seem to be a complicated way to copy files around. In fact, there is more to snapshots than meets the eye. Let’s create a bigger file:

$ dd if=/dev/urandom of=demo/bigfile bs=1M count=512
512+0 records in
512+0 records out
536870912 bytes (537 MB, 512 MiB) copied, 1.3454 s, 399 MB/s

There is now a new file demo/bigfile that is 512 MiB in size. Let’s make another snapshot so you don’t lose it when you modify the data:

$ sudo btrfs subvolume snapshot demo demo-2
Create a snapshot of 'demo' in './demo-2'

Now let’s simulate some changes by appending a small string to the file:

$ echo "small changes" >> demo/bigfile

Here’s the resulting file structure:

$ tree
.
├── demo
│   ├── bar
│   ├── baz
│   ├── bigfile
│   └── foo
├── demo-1
│   ├── bar
│   ├── baz
│   └── foo
└── demo-2
    ├── bar
    ├── baz
    ├── bigfile
    └── foo

3 directories, 11 files

But the real magic happens somewhere else. Had you copied demo/bigfile, you would now have two files of about 512 MiB in size with mostly the same content. However, since they are distinct copies, they would occupy about 1 GiB of storage total. Keep in mind that the difference between both files is hardly more than 10 Bytes – that’s almost nothing compared to the original file size.

Btrfs snapshots work differently than file copies: They keep references to current and past inodes instead. When you appended the change to the file, under the hood Btrfs allocated some more space to store the changes in and added a reference to this new data to the original inode. The previous contents remain untouched. If it helps your mental model, you can think of this as “storing” merely the difference between the original file and the modified version.

Let’s have a look at the effect of this:

$ sudo compsize .
Processed 11 files, 5 regular extents (9 refs), 3 inline.
Type       Perc     Disk Usage   Uncompressed Referenced
TOTAL      100%         512M         512M         1.0G
none       100%         512M         512M         1.0G

The interesting figure here is seen in line “TOTAL”:

  • “Referenced” is the total size of all the files in the current directory, summed up
  • “Disk Usage” is the amount of storage space allocated on your disk to store the files

While you have a total of 1 GiB files, it takes merely 512 MiB to store them.
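
If you want to cross-check how much of that data is shared, the filesystem can report it directly. The following is a quick sketch; the exact figures depend on the state of your filesystem:

$ sudo btrfs filesystem du -s demo demo-2

Look at the “Set shared” column of the output: roughly 512 MiB (the original bigfile) should be shared between the two subvolumes, while only the small appended change is exclusive to demo.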

Btrfs snapshots and backups

So far, in this article, you have seen how to create Btrfs snapshots and what makes them so special. One may be tempted to think: If I take a series of Btrfs snapshots locally on my PC, I have a solid backup strategy. This is not the case. If the underlying data, which is shared by Btrfs subvolumes, is accidentally damaged (by something outside of Btrfs’ influence, e.g. cosmic rays), all the subvolumes pointing to this data contain the same error.

To turn the snapshots into real backups you should store them on a different Btrfs filesystem, such as on an external drive. For the purposes of this article let’s create a new Btrfs filesystem contained inside a file and mount it to simulate an external drive. If you have an external drive formatted with Btrfs lying around, feel free to substitute all the paths mentioned in the following commands to try it out! Let’s create a new Btrfs filesystem:

Note: The commands below will create a new file of 8 GB size on your filesystem. If you want to follow the steps below, please ensure you have at least 8 GB of disk space available. Do not allocate less than 8 GB to the file, as Btrfs may otherwise encounter issues during mounting.

$ truncate -s 8G btrfs_filesystem.img
$ sudo mkfs.btrfs -L "backup-drive" btrfs_filesystem.img
btrfs-progs v5.18
See http://btrfs.wiki.kernel.org for more information.

[ ... ]

Devices:
  ID        SIZE  PATH
   1     8.00GiB  btrfs_filesystem.img

These commands created a new file of 8 GB in size named btrfs_filesystem.img and formatted a Btrfs filesystem inside it. Now you can mount it as if it were an external drive:

$ mkdir backup-drive
$ sudo mount btrfs_filesystem.img backup-drive
$ sudo chown -R $(id -u):$(id -g) backup-drive
$ ls -lh
total 4.7M
drwxr-xr-x. 1 hartan hartan 0 Dec 20 08:35 backup-drive
-rw-r--r--. 1 hartan hartan 8.0G Dec 20 08:37 btrfs_filesystem.img
drwxr-xr-x. 1 hartan hartan 32 Dec 20 08:14 demo
drwxr-xr-x. 1 hartan hartan 18 Dec 20 08:11 demo-1
drwxr-xr-x. 1 hartan hartan 32 Dec 20 08:14 demo-2

Great, now there is an independent Btrfs filesystem mounted under backup-drive! Let’s try to take another snapshot and place it there:

$ sudo btrfs subvolume snapshot demo backup-drive/demo-3
Create a snapshot of 'demo' in 'backup-drive/demo-3'
ERROR: cannot snapshot 'demo': Invalid cross-device link

What happened? Well, you tried to take a snapshot of demo and store it in a different Btrfs filesystem (a different device from Btrfs’ point of view). Remember that a Btrfs subvolume only holds references to files and their contents (inodes)? This is exactly the problem: The files and contents exist in our home filesystem, but not in the newly-created backup-drive. You have to find a way to transfer the subvolume along with its contents to the new filesystem.

Storing snapshots on a different Btrfs filesystem

The Btrfs utilities include two special commands for this purpose. Let’s see how they work first:

$ sudo btrfs send demo | sudo btrfs receive backup-drive/
ERROR: subvolume /home/hartan/btrfs-snapshot-test/demo is not read-only
ERROR: empty stream is not considered valid

Another error! This time it tells you that the subvolume we’re trying to transfer is not read-only. This is true: You can write new contents to all of the snapshots/subvolumes created so far. You can create read-only snapshots like this:

$ sudo btrfs subvolume snapshot -r demo demo-3-ro
Create a readonly snapshot of 'demo' in './demo-3-ro'

Unlike previously, here the -r option is added to the snapshot subcommand. This creates a read-only snapshot, which is easily verified:

$ touch demo-3-ro/another-file
touch: cannot touch 'demo-3-ro/another-file': Read-only file system

Now you can retry transferring the subvolumes:

$ sudo btrfs send demo-3-ro | sudo btrfs receive backup-drive/
At subvol demo-3-ro
At subvol demo-3-ro
$ tree
.
├── backup-drive
│   └── demo-3-ro
│   ├── bar
│   ├── baz
│   ├── bigfile
│   └── foo
├── btrfs_filesystem.img
├── demo
[ ... ]
└── demo-3-ro
    ├── bar
    ├── baz
    ├── bigfile
    └── foo

6 directories, 20 files

It worked! You have successfully transferred a read-only snapshot of our original subvolume demo to an external Btrfs filesystem.

Storing snapshots on non-Btrfs filesystems

Above you have seen how you can store Btrfs subvolumes/snapshots on another Btrfs filesystem. But what can you do if you do not have another Btrfs filesystem and cannot create one, for example because the external drives need a filesystem that allows compatibility with Windows or MacOS hosts? In such cases you can store subvolumes in files:

$ sudo btrfs send -f demo-3-ro-subvolume.btrfs demo-3-ro
At subvol demo-3-ro
$ ls -lh demo-3-ro-subvolume.btrfs
-rw-------. 1 root root 513M Dec 21 10:39 demo-3-ro-subvolume.btrfs

The file demo-3-ro-subvolume.btrfs now contains everything that is needed to recreate the demo-3-ro subvolume at a later point in time.
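
When you need the subvolume back, you can replay that file with btrfs receive. A minimal sketch, assuming you want to recreate it inside the mounted backup-drive directory (any Btrfs filesystem works as a target):

$ sudo btrfs receive -f demo-3-ro-subvolume.btrfs backup-drive/

This recreates demo-3-ro as a read-only subvolume under backup-drive/, just as if the stream had arrived over a pipe.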

Incrementally sending subvolumes

If you perform this action repeatedly for different subvolumes, you will notice at some point that the different subvolumes do not share their file contents any more. This is because when sending a subvolume such as above, all the data needed to recreate this standalone subvolume is transferred to the target. You can, however, instruct Btrfs to only send the difference between two subvolumes to the target! This so-called incremental send will ensure that shared references remain shared between the subvolumes. To demonstrate this, add a few more changes to our original subvolume:

$ echo "a few more changes" >> demo/bigfile

Next create another read-only snapshot:

$ sudo btrfs subvolume snapshot -r demo demo-4-ro
Create a readonly snapshot of 'demo' in './demo-4-ro'

And now send it:

$ sudo btrfs send -p demo-3-ro demo-4-ro | sudo btrfs receive backup-drive
At subvol demo-4-ro
At snapshot demo-4-ro

In the command above, the -p option specifies a parent subvolume, against which the differences are calculated. It is important to keep in mind that both the source and target Btrfs filesystem must contain the same, unmodified parent subvolume! Ensure that the new subvolume is really there:

$ ls backup-drive/
demo-3-ro demo-4-ro
$ ls -lR backup-drive/demo-4-ro/
backup-drive/demo-4-ro/:
total 524296
-rw-r--r--. 1 hartan hartan 0 Dec 20 08:11 bar
-rw-r--r--. 1 hartan hartan 0 Dec 20 08:11 baz
-rw-r--r--. 1 hartan hartan 536870945 Dec 21 10:49 bigfile
-rw-r--r--. 1 hartan hartan 59 Dec 20 08:13 foo

But how do you know whether the incremental send only transferred the difference between both subvolumes? Let’s transfer the data stream to a file and see how big it is:

$ sudo btrfs send -f demo-4-ro-diff.btrfs -p demo-3-ro demo-4-ro
At subvol demo-4-ro
$ ls -l demo-4-ro-diff.btrfs
-rw-------. 1 root root 315 Dec 21 10:55 demo-4-ro-diff.btrfs

According to ls, the file is merely 315 bytes in size! This means that the incremental send only transferred the changes between the two subvolumes, along with additional Btrfs-specific metadata.

Restoring subvolumes from snapshots

Before continuing, let’s do some cleaning up of the things you don’t need at the moment:

$ sudo rm -rf demo-4-ro-diff.btrfs demo-3-ro-subvolume.btrfs
$ sudo btrfs subvolume delete demo-1 demo-2 demo-3-ro demo-4-ro
$ ls -l
total 531516
drwxr-xr-x. 1 hartan hartan 36 Dec 21 10:50 backup-drive
-rw-r--r--. 1 hartan hartan 8589934592 Dec 21 10:51 btrfs_filesystem.img
drwxr-xr-x. 1 hartan hartan 32 Dec 20 08:14 demo

So far you have managed to create read/write and read-only snapshots of Btrfs subvolumes and send them to an external location. In order to turn this into a backup strategy, however, there has to be a way to send the subvolumes back to the original filesystem and make them writable again. For this purpose, let’s move the demo subvolume somewhere else and try to recreate it from the most recent snapshot. First: Rename the “broken” subvolume. It will be deleted once the restore is successful:

$ mv demo demo-broken

Second: Transfer the most recent snapshot back to this filesystem:

$ sudo btrfs send backup-drive/demo-4-ro | sudo btrfs receive .
At subvol backup-drive/demo-4-ro
At subvol demo-4-ro
[hartan@fedora btrfs-snapshot-test]$ ls
backup-drive btrfs_filesystem.img demo-4-ro demo-broken

Third: Create a read-write subvolume from the snapshot:

$ sudo btrfs subvolume snapshot demo-4-ro demo
Create a snapshot of 'demo-4-ro' in './demo'
$ ls
backup-drive btrfs_filesystem.img demo demo-4-ro demo-broken

The last step is important: You cannot just rename demo-4-ro to demo, because it would still be a read-only subvolume! Finally you can check whether everything you need is there:

$ tree demo
demo
├── bar
├── baz
├── bigfile
└── foo

0 directories, 4 files

$ tail -c -19 demo/bigfile
a few more changes

The last command above tells you that the last 19 characters in bigfile are in fact the change last performed. At this point, you may want to copy recent changes from demo-broken to the new demo subvolume. Since you didn’t perform any other changes, you can now delete the obsolete subvolumes:

$ sudo btrfs subvolume delete demo-4-ro demo-broken
Delete subvolume (no-commit): '/home/hartan/btrfs-snapshot-test/demo-4-ro'
Delete subvolume (no-commit): '/home/hartan/btrfs-snapshot-test/demo-broken'

And that’s it! You have successfully restored the demo subvolume from a snapshot that was previously stored on a different Btrfs filesystem (external media).

Subvolumes as boundary for snapshots

In the second article of this series I mentioned that subvolumes are boundaries for snapshots, but what exactly does that mean? In simple terms, a snapshot of a subvolume will only contain the content of this particular subvolume, and none of the nested subvolumes below. Let’s have a look at this:

$ sudo btrfs subvolume create demo/nested
Create subvolume 'demo/nested'
$ sudo chown -R $(id -u):$(id -g) demo/nested
$ touch demo/nested/another_file

Let’s take a snapshot as before:

$ sudo btrfs subvolume snapshot demo demo-nested
Create a snapshot of 'demo' in './demo-nested'

And check out the contents:

$ tree demo-nested
demo-nested
├── bar
├── baz
├── bigfile
├── foo
└── nested

1 directory, 4 files

$ tree demo
demo
├── bar
├── baz
├── bigfile
├── foo
└── nested
    └── another_file

1 directory, 5 files

Notice that another_file is missing, even though the folder nested is present. This happens because nested is a subvolume: The snapshot of demo contains the folder (mountpoint) for the nested subvolume, but its contents aren’t present. Currently there is no way to perform snapshots recursively to include nested subvolumes. However, we can take advantage of this to exclude folders from snapshots! This is typically useful for data that you can reproduce easily, or that will rarely change. Examples include virtual machine or container images, movies, game files and more.

Before we wrap up the article, let’s remove everything we created while testing:

$ sudo btrfs subvolume delete demo/nested demo demo-nested
Delete subvolume (no-commit): '/home/hartan/btrfs-snapshot-test/demo/nested'
Delete subvolume (no-commit): '/home/hartan/btrfs-snapshot-test/demo'
Delete subvolume (no-commit): '/home/hartan/btrfs-snapshot-test/demo-nested'
$ sudo umount backup-drive
$ cd ..
$ rm -rf btrfs-snapshot-test/

Final thoughts on Btrfs-based backups

If you decide you want to use Btrfs to perform regular backups of your data, you may want to use a tool that automates this task for you. The Btrfs wiki has a list of backup tools specialized for Btrfs [4]. On this page you will also find another summary of the steps to perform Btrfs backups by hand. Personally, I have had a lot of good experiences with btrbk [5] and I am using it to perform my own backups. In addition to backups, btrbk can also keep a list of Btrfs snapshots locally on your PC. I use this to safeguard against accidental data deletion.

If you want to know more about performing backups using Btrfs, leave a comment below and I’ll consider writing a follow-up article that deals exclusively with this topic.

Conclusion

This article investigated Btrfs snapshots, which are Btrfs subvolumes under the hood. You learned how to create read/write and read-only snapshots, and how this mechanism can help safeguard against data loss.

The next articles in this series will deal with:

  • Compression – Transparently saving storage space
  • Qgroups – Limiting your filesystem size
  • RAID – Replace your mdadm configuration

If there are other topics related to Btrfs that you want to know more about, have a look at the Btrfs Wiki [2] and Docs [1]. Don’t forget to check out the first two articles of this series, if you haven’t already! If you feel that there is something missing from this article series, let me know in the comments below. See you in the next article!

Sources

[1]: https://btrfs.readthedocs.io/en/latest/Introduction.html
[2]: https://btrfs.wiki.kernel.org/index.php/Main_Page
[3]: https://btrfs.readthedocs.io/en/latest/Subvolumes.html
[4]: https://btrfs.wiki.kernel.org/index.php/Incremental_Backup#Available_Backup_Tools
[5]: https://github.com/digint/btrbk

GitHub Actions: Use Podman to run Fedora Linux

Introduction

GitHub enables distributed and collaborative code development. To ensure software works correctly, many projects use continuous integration to build and test each new contribution before including it. The continuous integration service on GitHub is GitHub Actions.

Background

GitHub offers testing on Ubuntu, macOS and Windows operating systems. However, there is a wide variety of other operating systems and you may want to ensure that an open source project developed on GitHub runs well on another operating system, in particular Fedora Linux.

Podman is a command line tool that can run a different Linux operating system in a container. This provides a convenient way to test software on other operating systems. The article Getting Started with Podman in Fedora Linux introduces how to run Podman on Fedora.

This article demonstrates how to run Fedora Linux in a container using Podman. The host operating system can be any distro that has Podman installed, even macOS or Windows. In the following demo, the host operating system is Ubuntu. This will allow us to test that projects developed on GitHub will work successfully on Fedora, even if Fedora is not available as a base operating system for GitHub Actions.

Example GitHub Actions Configuration

As an example, we add continuous integration for Fedora Linux to RedAmber, a project enabling the use of dataframes for machine learning and other data science applications in Ruby. This project relies on Apache Arrow release 10 or greater, so we need to use Fedora Linux Rawhide (F38) since Fedora Linux 37 currently has Apache Arrow release 9 in the Fedora repositories.

GitHub has great documentation on using GitHub Actions. In summary, we need to create a yaml file in the .github/workflows directory of the project, and then enable GitHub Actions if it is not already enabled. A sample yaml file which you can easily modify is below:

name: CI
on: push: branches: - main pull_request: jobs: test: name: fedora runs-on: ubuntu-latest steps: - name: Setup Podman run: | sudo apt update sudo apt-get -y install podman podman pull fedora:38 - name: Get source uses: actions/checkout@v3 with: path: 'red_amber' - name: Create container and run tests run: | { echo 'FROM fedora:38' echo '# set TZ to ensure the test using timestamp' echo 'ENV TZ=Asia/Tokyo' echo 'RUN dnf -y update' echo 'RUN dnf -y install gcc-c++ git libarrow-devel libarrow-glib-devel ruby-devel' echo 'RUN dnf clean all' echo 'COPY red_amber red_amber' echo 'WORKDIR /red_amber' echo 'RUN bundle install' echo 'RUN bundle exec rake test' } > podmanfile podman build --tag fedora38test -f ./podmanfile

Adding the above yaml file enables testing on Fedora Linux running as a guest on Ubuntu. Similar workflows should work for other projects developed on GitHub, thereby ensuring a wide variety of software will run well on Fedora Linux.
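
If a workflow run fails, it can help to reproduce the build locally on any machine with Podman installed. The sketch below assumes you recreate the podmanfile from the echo lines above inside a checkout of the project:

podman build --tag fedora38test -f ./podmanfile .
podman run -it --rm fedora38test /bin/bash

The first command repeats the same image build and test steps the CI job performs; the second drops you into an interactive shell in the resulting image for further inspection.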

Acknowledgements

Benson Muite is grateful to Hirokazu SUZUKI for creating RedAmber, improving the workflow, and using it to test RedAmber on Fedora Linux.

Getting ready for an exciting 2023

[This message comes directly from the desk of Matthew Miller, the Fedora Project Leader. — Ed.]

This “love letter to the community” started in 2020 as a way to shine a little light in a very dark time, and to encourage everyone — including me — by reminding us all of the great work done by great people in Fedora. But it’s become one of my favorite things to do all year. We’re no longer just trying to get through a dark time. We’re looking forward to an exciting era in Fedora’s future.

The work we did this year sets a great foundation for building our future. I don’t just mean the Fedora Linux 36 and 37 releases, although we should definitely be proud of those. But there’s a continued sense of excitement around the community. We’re growing and bringing new energy.

This year, Nest With Fedora grew even more in a time where everyone is feeling virtual event fatigue. And we introduced Hatch — regional events where you could meet with other local-ish contributors. Reading the recaps, I wish I could have gone to all of them. But it was great to spend time with some of you in Rochester. I’ve really, really missed our in-person interactions. Virtual events help keep our global community connected, and help bring in new people who might not be able to join us otherwise, but they can’t substitute for face-to-face meetups. More on that in a moment.

It’s not just a few days of events that has me excited, though. When I look around the project, I see a lot happening. The Fedora CoreOS and Cloud teams promoted their deliverables to Edition status. We wrapped up a huge revamp of our community outreach that began in 2020. The Docs team is more active than it has been in years (and they’ve added a search bar to the site!). We have a complete renovation of our websites in the works. The Marketing team is exploring new ways to promote Fedora, including a presence in the Fediverse. We’re finally almost ready to merge Ask Fedora and Fedora Discussion, bringing more of our conversations together.

That’s a lot of work for one year. The best part is how organic this work is. This wasn’t some demand from on high (that’s not how Fedora works), but it was people in the community saying “I see work to be done. I’m going to do it!” Fedora is us.

We will celebrate so much more in 2023. We’re still working on the details, but we expect to have a greater in-person experience next year, including funding for hackfests and the return of Flock to Fedora! And of course, it’s the 20th anniversary of Fedora. The world — and the technology that drives it — has changed so much since then. But our values haven’t. The Fedora community remains an inclusive, welcoming, and open-minded community. I’m proud to be a part of it. Happy new year, everyone!

Setting up Fedora IoT on Raspberry Pi and rootless Podman containers

Introduction

Fedora IoT is a foundation for Internet of Things (IoT) and Device Edge ecosystems. It’s a secure, immutable, and image-based operating system that supports the deployment of containerized applications. We’ll discuss how you can run Fedora IoT on a Raspberry Pi to deploy a rootless Podman container.

Running Fedora IoT on Raspberry Pi

Prerequisites:

  • PC (with Fedora)
  • SD-Card and SD-Card Reader
  • Raspberry Pi 3 or 4

Download the IoT image & CHECKSUM for your CPU from getfedora.org.

Screenshot of Fedora IoT image download.

After you download your Fedora IoT image, click Verify your Download to download the CHECKSUM file.

Screenshot to show where to find the "Verify your download." button.

Place the CHECKSUM file in the same location where you downloaded your Fedora IoT image.

Then, install gnupg and the arm image installer:

sudo dnf install gnupg2 arm-image-installer

Next, import Fedora’s GPG keys to verify the image you downloaded:

$ curl -O https://getfedora.org/static/fedora.gpg

Then, verify the CHECKSUM file has a good signature:

$ gpgv --keyring ./fedora.gpg *-CHECKSUM

You should see something similar to the following in the output:

$ gpgv --keyring ./fedora.gpg *-CHECKSUM
gpgv: Signature made Fri 19 Mar 2021 10:10:28 AM EDT
gpgv: using RSA key 8C5BA6990BDB26E19F2A1A801161AE6945719A39
gpgv: Good signature from "Fedora (34) <fedora-34-primary@fedoraproject.org>"

Lastly, verify the checksum of your download to verify that the signature matches:

$ sha256sum -c *-CHECKSUM

Now, find the name of the SD-Card. You can use various tools, but in this article, we recommend using the udisks command line tool udisksctl. First, verify that you have NOT inserted your SD-Card into your SD-Card reader.

Then, enter the following command:

udisksctl status 

The output displays all the connected devices on your machine. Review what devices are currently displayed. Next, plug in your SD-Card and enter the command again. Write down the name of the device that’s been added to the previous list.

Use caution when flashing your SD-Card. If you choose the wrong device, you might overwrite your hard drive.

Flash the image onto the SD-Card.

$ arm-image-installer --image=</path/to/fedora_image> \
    --target=<RPi_Version> \
    --media=/dev/<sd_card_device> \
    --addkey=/path/to/pubkey \
    --resizefs
  • Image – File path to the image you downloaded.
  • target – Type of arm board you are using (in this example it would be either the Raspberry Pi 3 or 4).
  • media – SD-Card path you identified.
  • addkey – Your SSH key.
  • resizefs – Resizes the image to the full SD-Card unless you have another partition to add.
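
For example, with a Raspberry Pi 4 and an SD-Card that appeared as /dev/sda, a filled-in command might look roughly like the following. The image file name and key path here are hypothetical, you can run arm-image-installer without arguments to see the valid target names, and you should double-check the media path before running it:

$ sudo arm-image-installer --image=Fedora-IoT-ostree-aarch64.raw.xz \
    --target=rpi4 --media=/dev/sda \
    --addkey=$HOME/.ssh/id_rsa.pub --resizefs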

The image won’t have a pre-configured user or password.

Zezere is a provisioning service that can deploy devices without a physical console. Use Zezere to set up and deploy your device.

Navigate to provision.fedoraproject.org, then click the Claim Unowned Devices tab, and claim your device (i.e. your SD-Card). Click the Home tab to view your claimed device, then click the SSH Key Management tab to add your SSH key. This allows you to copy your SSH key to any of your Fedora IoT devices. The keys generated in the SSH Key Management tab are public, so they can be shared without risk to the security of your devices.

Image of Zezere to use as reference for instructions on how to deploy your device.

Return to the Home tab and click Submit provision request on your SD-Card to set up a provisioning request. Select fedora-iot-stable from the drop-down and click Schedule to copy your SSH Key onto your Fedora IoT device.

You’re now ready to run your applications.

Setting up rootless Podman containers

Fedora IoT uses Podman to develop, manage, and run Open Container Initiative (OCI) containers. Rootless containers can be run by unprivileged users, which adds a layer of security and makes them safer to share between machines.

Install slirp4netns and fuse-overlayfs to begin setting up a rootless Podman container:

 sudo dnf -y install slirp4netns fuse-overlayfs shadow-utils

Rootless Podman containers require the root user to have a range of UIDs/GIDs listed in the /etc/subuid and /etc/subgid files. Update the /etc/subuid and /etc/subgid for each non-root user.

sudo usermod --add-subuids START-RANGE --add-subgids START-RANGE USERNAME 
  • START – The first subordinate UID/GID in the range (for example, 100000)
  • RANGE – The last subordinate UID/GID in the range (for example, 165535, which gives the user 65536 IDs)
  • USERNAME – The username you’re updating (see the filled-in example below).
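
For example, to give a hypothetical user named pi the commonly used default range of 65536 subordinate IDs starting at 100000, you might run:

sudo usermod --add-subuids 100000-165535 --add-subgids 100000-165535 pi

You can confirm the entries afterwards with:

grep pi /etc/subuid /etc/subgid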

Podman is now set up to run rootless containers.

More setup recommendations

View the following resources for additional ways you can improve the setup of your containers:

Automate container management on Fedora Linux with the Podman Linux System Role

Containers are a popular way to distribute and run software on Linux. One of the tools included in Fedora Linux to work with containers is the Pod Manager tool, also known as Podman. This article describes the use of the Ansible Podman Linux System Roles to automate container management.

With Podman, you can quickly and easily download container images and run containers. For more information on Podman, check out the Getting Started section on the podman.io site.

While Podman is very easy to use, many people are interested in automating Podman for a variety of reasons. For example, maybe you have multiple Fedora Linux systems that you would like to deploy a container workload across, or perhaps you’re a developer and would like to setup an automated process to deploy containers on your local workstation for testing purposes. Whether you are working with containers on a single system, or need to manage containers across a number of systems, automation can be critical to being efficient and saving time.

Overview of Linux System Roles

Linux System Roles are a set of Ansible roles/collections that can help automate the configuration and management of several aspects of Fedora Linux, CentOS Stream, RHEL, and RHEL derivatives. Linux System Roles is packaged in Fedora as an RPM (linux-system-roles) and is also available on Ansible Galaxy. For more information on Linux System Roles, and to see a list of included roles, refer to the Linux System Roles project page.

Linux System Roles recently added a new podman role for automating the management of Podman containers. One of Podman’s unique features is that it is daemonless, so the podman role directly sets the desired configuration on each host, and is capable of configuring the containers.conf, containers-registries.conf, containers-storage.conf, and containers-policy.json settings.

Podman systemd integration and Kubernetes YAML support

The podman system role utilizes the systemd integration with Kubernetes YAML introduced in Podman version 4.2. Podman supports the ability to run containers based on Kubernetes YAML, which can make it easier to transition between Podman and Kubernetes. Podman 4.2 introduced a new podman-kube@.service which uses systemd to manage containers defined in Kubernetes YAML. You’ll see an example of how the podman system role utilizes this functionality below.

Demo environment overview

In my environment I have four systems running Fedora Linux. The fedora-controlnode.example.com system will be the Ansible control node, which is where I’ll install Ansible and Linux System Roles. The other three systems, fedora-node1.example.com, fedora-node2.example.com, and fedora-node3.example.com, are the systems that I would like to deploy container workloads onto.

On these three systems, I would like to deploy a Nextcloud container. I would also like to deploy a web server container on these systems and run this as a non-privileged user (also referred to as a rootless container). I’ll use the httpd-24 container image that is a Red Hat Universal Base Image (UBI).

Setting up the control node system

Starting on the fedora-controlnode.example.com system, I’ll need to install the linux-system-roles and ansible packages:

[ansible@fedora-controlnode ~]$ sudo dnf install linux-system-roles ansible 

I’ll also need to configure SSH keys and the sudo configuration so that a user on the fedora-controlnode.example.com host can authenticate and escalate to root privileges on each of the three managed nodes. In this example, I am using an account named ansible.
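One way to do this (a sketch, not the only approach) is to generate an SSH key pair for the ansible account on the control node, copy it to each managed node, and then, on each managed node, grant the ansible user passwordless sudo with a sudoers drop-in file. The key type and file name below are illustrative choices:

[ansible@fedora-controlnode ~]$ ssh-keygen -t ed25519
[ansible@fedora-controlnode ~]$ for server in fedora-node1.example.com fedora-node2.example.com fedora-node3.example.com; do ssh-copy-id ansible@${server}; done
[ansible@fedora-node1 ~]$ echo 'ansible ALL=(ALL) NOPASSWD: ALL' | sudo tee /etc/sudoers.d/ansible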

Defining the Kubernetes YAML for the Nextcloud container

I’ll create a Kubernetes YAML file named nextcloud.yml with the following content that defines how I want the Nextcloud container configured:

apiVersion: v1
kind: Pod
metadata:
  name: nextcloud
spec:
  containers:
    - name: nextcloud
      image: docker.io/library/nextcloud
      ports:
        - containerPort: 80
          hostPort: 8000
      volumeMounts:
        - mountPath: /var/www/html:Z
          name: nextcloud-html
  volumes:
    - name: nextcloud-html
      hostPath:
        path: /nextcloud-html

The key parts of this YAML specify:

  • the name of the container,
  • the URL for the container image,
  • that the container’s port 80 will be published on the host as port 8000,
  • that the /var/www/html directory should use a volume mount using the /nextcloud-html directory on the host.

Defining the Kubernetes YAML for the web server

I’d also like to deploy a container running a web server, so I’ll define the following Kubernetes YAML file for it, named ubi8-httpd.yml:

apiVersion: v1
kind: Pod
metadata:
  name: ubi8-httpd
spec:
  containers:
    - name: ubi8-httpd
      image: registry.access.redhat.com/ubi8/httpd-24
      ports:
        - containerPort: 8080
          hostPort: 8080
      volumeMounts:
        - mountPath: /var/www/html:Z
          name: ubi8-html
  volumes:
    - name: ubi8-html
      hostPath:
        path: ubi8-html

This is similar to the nextcloud.yml file:

  • specifying the name of the container,
  • the URL for the container image,
  • that the container’s port 8080 should be published on the host as port 8080,
  • that the /var/www/html directory should use a volume mount using the ubi8-html directory on the host.

Note that later on we’ll configure this container to run as a non-privileged user, so this path will be relative to the user’s home directory.

Defining the Ansible inventory file

I need to define an Ansible inventory file that lists the host names of the systems I would like to deploy the containers on. I’ll create a simple inventory file, named inventory, with the list of my three managed nodes:

fedora-node1.example.com
fedora-node2.example.com
fedora-node3.example.com
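Before running the playbook, a quick sanity check (optional, but useful) is to confirm that the control node can reach all three managed nodes and escalate to root:

[ansible@fedora-controlnode ~]$ ansible -i inventory all -b -m ansible.builtin.ping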

Defining the Ansible playbook

The final file I need to create is the actual Ansible playbook file, which I’ll name podman.yml with the following content:

- name: Run the podman system role
  hosts: all
  vars:
    podman_firewall:
      - port: 8080/tcp
        state: enabled
      - port: 8000/tcp
        state: enabled
    podman_create_host_directories: true
    podman_host_directories:
      "ubi8-html":
        owner: ansible
        group: ansible
        mode: "0755"
    podman_kube_specs:
      - state: started
        run_as_user: ansible
        run_as_group: ansible
        kube_file_src: ubi8-httpd.yml
      - state: started
        kube_file_src: nextcloud.yml
  roles:
    - fedora.linux_system_roles.podman

- name: Create index.html file
  hosts: all
  tasks:
    - ansible.builtin.copy:
        content: "Hello from {{ ansible_hostname }}"
        dest: /home/ansible/ubi8-html/index.html
        owner: ansible
        group: ansible
        mode: 0644
        serole: object_r
        setype: container_file_t
        seuser: system_u

This playbook contains two plays, the first is named Run the podman system role. This play defines variables that control the podman system role, which is called as part of this play. The variables defined are:

  • podman_firewall: specifies that port 8080/tcp and 8000/tcp should be enabled. These ports are used by the ubi8-httpd and nextcloud containers, respectively.
  • podman_create_host_directories: specifies that host directories defined in the Kubernetes files will be created if they don’t exist
  • podman_host_directories: Within the ubi8-httpd.yml Kubernetes YAML file, I defined a ubi8-html volume. This variable specifies that this ubi8-html directory on the hosts will be created with the ansible owner and group, and with a 0755 mode. Note that the nextcloud-html volume, defined in the nextcloud.yml file, is not listed here, so the default ownership and permissions will be used when the directory is created on the hosts.
  • podman_kube_specs: This lists the Kubernetes YAML files that the podman system role should manage. It refers to the two files that were previously explained, ubi8-httpd.yml, and nextcloud.yml . Note that for the ubi8-httpd.yml container, it is also specified that this should be run as the ansible user and group.

The second play, Create index.html file, uses the ansible.builtin.copy module to deploy an index.html file to the /home/ansible/ubi8-html/ directory. This gives the web server running in the ubi8-httpd containers content to serve.

Running the playbook

The next step is to run the playbook from the fedora-controlnode.example.com host with the following command:

[ansible@fedora-controlnode ~]$ ansible-playbook -i inventory -b podman.yml

I’ll verify that the playbook completes successfully, with no failed tasks reported in the PLAY RECAP at the end of the output.

At this point, the nextcloud and ubi8-httpd containers should be deployed on each of the three managed nodes.

Validating the Nextcloud containers

Now, I’ll validate the successful deployment of the nextcloud containers on the three managed nodes. I can confirm that Nextcloud is accessible by connecting to each host on port 8000 with a web browser; each host displays the Nextcloud configuration screen.
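If you prefer the command line to a web browser, a rough equivalent (the HTTP status code you see will depend on Nextcloud’s setup state) is to request port 8000 on each host with curl:

[ansible@fedora-controlnode ~]$ for server in fedora-node1.example.com fedora-node2.example.com fedora-node3.example.com; do curl -s -o /dev/null -w "${server}: %{http_code}\n" http://${server}:8000; done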

I’ll further investigate the fedora-node1.example.com host by connecting to it over SSH and using sudo to access a root shell:

[ansible@fedora-controlnode ~]$ ssh fedora-node1.example.com
[ansible@fedora-node1 ~]$ sudo su -
[root@fedora-node1 ~]#

Run podman ps to validate that the nextcloud container is running:

[root@fedora-node1 ~]# podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7b6b131a652d localhost/podman-pause:4.2.1-1662580699 4 minutes ago Up 4 minutes ago 0aa0edcf4b08-service
71a2a1a48232 localhost/podman-pause:4.2.1-1662580699 4 minutes ago Up 4 minutes ago 0.0.0.0:8000->80/tcp 8b226e4ad5c1-infra
c307a07c7cae docker.io/library/nextcloud:latest apache2-foregroun... 4 minutes ago Up 4 minutes ago 0.0.0.0:8000->80/tcp nextcloud-nextcloud

Validate that the /nextcloud-html directory on the host has been populated with content from the container:

[root@fedora-node1 ~]# ls -al /nextcloud-html/
total 112
drwxr-xr-x. 1 33 tape 420 Nov 7 13:16 .
dr-xr-xr-x. 1 root root 186 Nov 7 13:12 ..
drwxr-xr-x. 1 33 tape 880 Nov 7 13:16 3rdparty
drwxr-xr-x. 1 33 tape 1182 Nov 7 13:16 apps
-rw-r--r--. 1 33 tape 19327 Nov 7 13:16 AUTHORS
drwxr-xr-x. 1 33 tape 408 Nov 7 13:17 config
-rw-r--r--. 1 33 tape 4095 Nov 7 13:16 console.php
-rw-r--r--. 1 33 tape 34520 Nov 7 13:16 COPYING
drwxr-xr-x. 1 33 tape 440 Nov 7 13:16 core
...
...

I can also see that a systemd unit has been created for this container:

[root@fedora-node1 ~]# systemctl list-units | grep nextcloud
  podman-kube@-etc-containers-ansible\x2dkubernetes.d-nextcloud.yml.service   loaded active running   A template for running K8s workloads via podman-play-kube
[root@fedora-node1 ~]# systemctl status podman-kube@-etc-containers-ansible\\x2dkubernetes.d-nextcloud.yml.service
● podman-kube@-etc-containers-ansible\x2dkubernetes.d-nextcloud.yml.service - A template for running K8s workloads via podman-play-kube
     Loaded: loaded (/usr/lib/systemd/system/podman-kube@.service; enabled; vendor preset: disabled)
     Active: active (running) since Mon 2022-11-07 13:16:52 MST; 7min ago
       Docs: man:podman-play-kube(1)
   Main PID: 7601 (conmon)
      Tasks: 3 (limit: 4655)
     Memory: 31.1M
        CPU: 2.562s
...
...

Note that the name of the service is quite long because it refers to the name of the Kubernetes YAML file, /etc/containers/ansible-kubernetes.d/nextcloud.yml. This file was deployed by the podman system role. If I display the contents of the file, it matches the contents of the nextcloud.yml Kubernetes YAML file I created on the control node host.

[root@fedora-node1 ~]# cat /etc/containers/ansible-kubernetes.d/nextcloud.yml
apiVersion: v1
kind: Pod
metadata:
  name: nextcloud
spec:
  containers:
    - image: docker.io/library/nextcloud
      name: nextcloud
      ports:
        - containerPort: 80
          hostPort: 8000
      volumeMounts:
        - mountPath: /var/www/html:Z
          name: nextcloud-html
  volumes:
    - hostPath:
        path: /nextcloud-html
      name: nextcloud-html

Validating the ubi8-httpd containers

I’ll also validate that the ubi8-httpd container, which was deployed to run as the ansible user and group, is working properly. Back on the fedora-controlnode.example.com host, I’ll validate that I can access the web server on port 8080 on each of the three managed nodes:

[ansible@fedora-controlnode ~]$ for server in fedora-node1.example.com fedora-node2.example.com fedora-node3.example.com; do curl ${server}:8080; echo; done
Hello from fedora-node1
Hello from fedora-node2
Hello from fedora-node3

I’ll also connect to one of the managed nodes as the ansible user to further investigate:

[ansible@fedora-controlnode ~]$ ssh fedora-node1.example.com
[ansible@fedora-node1 ~]$ whoami
ansible

I’ll run podman ps and validate that the ubi8-httpd container is running:

[ansible@fedora-node1 ~]$ podman ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
7b42efd7c9c0 localhost/podman-pause:4.2.1-1662580699 20 minutes ago Up 20 minutes ago 1b46d9874ed0-service
f62b9a2ef9b8 localhost/podman-pause:4.2.1-1662580699 20 minutes ago Up 20 minutes ago 0.0.0.0:8080->8080/tcp 0938dc63acfd-infra
4b3a64783aeb registry.access.redhat.com/ubi8/httpd-24:latest /usr/bin/run-http... 20 minutes ago Up 20 minutes ago 0.0.0.0:8080->8080/tcp ubi8-httpd-ubi8-httpd

This container was deployed as a non-privileged user (the ansible user), so there is a systemd user instance running as the ansible user. I’ll need to specify the --user option on the systemctl command when validating that the systemd unit was created and is running:

[ansible@fedora-node1 ~]$ systemctl --user list-units | grep ubi8
  podman-kube@-home-ansible-.config-containers-ansible\x2dkubernetes.d-ubi8\x2dhttpd.yml.service   loaded active running   A template for running K8s workloads via podman-play-kube
[ansible@fedora-node1 ~]$ systemctl --user status podman-kube@-home-ansible-.config-containers-ansible\\x2dkubernetes.d-ubi8\\x2dhttpd.yml.service
● podman-kube@-home-ansible-.config-containers-ansible\x2dkubernetes.d-ubi8\x2dhttpd.yml.service - A template for running K8s workloads via podman-play-kube
     Loaded: loaded (/usr/lib/systemd/user/podman-kube@.service; enabled; vendor preset: disabled)
     Active: active (running) since Mon 2022-11-07 13:12:31 MST; 24min ago
       Docs: man:podman-play-kube(1)
   Main PID: 5260 (conmon)
      Tasks: 17 (limit: 4655)
     Memory: 9.3M
        CPU: 1.245s
...
...

As previously mentioned, the systemd unit name is so long because it contains the path to the Kubernetes YAML file, which in this case is /home/ansible/.config/containers/ansible-kubernetes.d/ubi8-httpd.yml. This file was deployed by the podman system role and contains the contents of the ubi8-httpd.yml file previously configured on the fedora-controlnode.example.com host.

Validating containers automatically start at boot

I’ll reboot the three managed nodes to validate that the containers automatically start up at boot.

After the reboot, the nextcloud containers are still accessible on each host on port 8000, and the ubi8-httpd containers are accessible on each host at port 8080.

The systemd units for the nextcloud containers and ubi8-httpd containers are both enabled to start at boot. However, note that the ubi8-httpd container is running as a non-privileged user (the ansible user), so the podman system role has automatically enabled user lingering for the ansible user. This setting starts a systemd user instance at boot and keeps it running after the user logs out, so the container starts automatically at boot.
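To confirm this on one of the managed nodes, I can ask loginctl for the lingering state of the ansible user:

[ansible@fedora-node1 ~]$ loginctl show-user ansible --property=Linger
Linger=yes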

Conclusion

The podman Linux System Role can help automate the deployment of Podman containers across your Fedora Linux environment. You can also combine the podman system role with the other Linux System Roles in the Fedora linux-system-roles package to automate even more. For example, you could write a playbook that utilizes the storage Linux System Role to configure filesystems across your environment, and then use the podman system role to deploy containers that utilize those filesystems.


Working with Btrfs – Subvolumes

This article is part of a series of articles that takes a closer look at Btrfs, the default filesystem for Fedora Workstation and Fedora Silverblue since Fedora Linux 33.

In case you missed it, here’s the previous article from the series: https://fedoramagazine.org/working-with-btrfs-general-concepts/

Introduction

Subvolumes allow for the partitioning of a Btrfs filesystem into separate sub-filesystems. This means that you can mount subvolumes from a Btrfs filesystem as if they were independent filesystems. In addition, you can, for example, define the maximum space a subvolume may take up via qgroups (We’ll talk about this in another article in this series), or use subvolumes to specifically include or exclude files from snapshots (We’ll talk about this, too, in another article in this series). Every default Fedora Workstation and Fedora Silverblue installation since Fedora Linux 33 makes use of subvolumes. In this article we will explore how it works.

Below you will find a lot of examples related to subvolumes. If you want to follow along, you must have access to some Btrfs filesystem and root access. You can verify whether your /home/ directory is Btrfs via the following command:

$ findmnt -no FSTYPE /home
btrfs

This command will output the name of the filesystem of your /home/ directory. If it says btrfs, you’re all set. Let’s create a new directory to perform some experiments in:

$ mkdir ~/btrfs-subvolume-test
$ cd ~/btrfs-subvolume-test

In the text below, you will find lots of command outputs in boxes such as shown above. Please keep in mind while reading/comparing command outputs that the box contents are wrapped at the end of the line. This makes it difficult to recognize long lines that are broken across multiple lines for readability. When in doubt, try to resize your browser window and see how the text behaves!

Creating and playing with subvolumes

We can create a Btrfs subvolume with the following command:

$ sudo btrfs subvolume create first
Create subvolume './first'

When we inspect the current directory we will see that it now has a new folder named first. Note the first character d in the output below:

$ ls -l
total 0
drwxr-xr-x. 1 root root 0 Oct 15 18:09 first

We can handle this like any regular folder: We can rename it, move it, create new files and folders inside, etc. Note that the folder belongs to root, so we must be root to do these things.
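For example, we can create a file and a nested directory inside it just as we would anywhere else:

$ sudo touch first/hello
$ sudo mkdir first/subdir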

If it acts like a folder and looks like a folder, how do we know whether it’s a Btrfs subvolume? We can use the btrfs tools to list all subvolumes:

$ sudo btrfs subvolume list .
ID 256 gen 30 top level 5 path home
ID 257 gen 30 top level 5 path root
ID 258 gen 25 top level 257 path root/var/lib/machines
ID 259 gen 29 top level 256 path hartan/btrfs-subvolume-test/first

If you’re on a recent and unmodified Fedora Linux installation you will likely see the same output as above. We will inspect home and root as well as the meaning of all the numbers later. For now, we see that there is a subvolume at the path we specified. We can limit the output to the subvolumes below our current location:

$ sudo btrfs subvolume list -o .
ID 259 gen 29 top level 256 path home/hartan/btrfs-subvolume-test/first

Let’s rename the subvolume:

$ sudo mv first second
$ sudo btrfs subvolume list -o .
ID 259 gen 29 top level 256 path home/hartan/btrfs-subvolume-test/second

We can also nest subvolumes:

$ sudo btrfs subvolume create second/third
Create subvolume 'second/third'
$ sudo btrfs subvolume list .
ID 256 gen 34 top level 5 path home
ID 257 gen 37 top level 5 path root
ID 258 gen 25 top level 257 path root/var/lib/machines
ID 259 gen 37 top level 256 path hartan/btrfs-subvolume-test/second
ID 260 gen 37 top level 259 path hartan/btrfs-subvolume-test/second/third

And we can also remove subvolumes, either like we remove folders:

$ sudo rm -r second/third

or via special Btrfs commands:

$ sudo btrfs subvolume delete second
Delete subvolume (no-commit): '/home/hartan/btrfs-subvolume-test/second'

Handling Btrfs subvolumes like separate filesystems

The introduction mentioned that Btrfs subvolumes act like separate filesystems. This means that we can mount subvolumes and pass some mount options to them. First we will create a small folder structure to get a better understanding of what happens:

$ mkdir -p a a/1 a/1/b
$ sudo btrfs subvolume create a/2
Create subvolume 'a/2'
$ sudo touch a/1/c a/1/b/d a/2/e

Here’s what the structure looks like:

$ tree
.
└── a
    ├── 1
    │   ├── b
    │   │   └── d
    │   └── c
    └── 2
        └── e

4 directories, 3 files

Verify that there is now a new Btrfs subvolume:

$ sudo btrfs subvolume list -o .
ID 261 gen 41 top level 256 path home/hartan/btrfs-subvolume-test/a/2

To mount the subvolume we must know the path of the block device holding the Btrfs filesystem that the subvolume resides in. The following command tells us:

$ findmnt -vno SOURCE /home/
/dev/vda3

Now we can mount the subvolume. Make sure you replace the arguments with the values for your PC:

$ sudo mount -o subvol=home/hartan/btrfs-subvolume-test/a/2 /dev/vda3 a/1/b

Observe that we use the -o flag to give additional options to the mount program. In this case we tell it to mount the subvolume with name home/hartan/btrfs-subvolume-test/a/2 from the btrfs filesystem on device /dev/vda3. This is a Btrfs-specific option and isn’t available in other filesystems.

We see that the directory structure has changed:

$ tree
.
└── a
    ├── 1
    │   ├── b
    │   │   └── e
    │   └── c
    └── 2
        └── e

4 directories, 3 files

Note that the file e exists twice now and d is gone. We are now able to access the same Btrfs subvolume by two different paths. All changes we perform in either of the paths are immediately reflected in all other locations:

$ sudo touch a/1/b/x
$ ls -lA a/2
total 0
-rw-r--r--. 1 root root 0 Oct 15 18:14 e
-rw-r--r--. 1 root root 0 Oct 15 18:16 x

Let’s play some more with the mount options. For example we can mount the subvolume as read-only under a/1/b like this (Insert arguments for your PC!):

$ sudo umount a/1/b
$ sudo mount -o subvol=home/hartan/btrfs-subvolume-test/a/2,ro /dev/vda3 a/1/b

We use the same command as above, except that we add ro at the end. Now we can no longer create files via this mount:

$ sudo touch a/1/b/y
touch: cannot touch 'a/1/b/y': Read-only file system

but accessing the subvolume directly still works like before:

$ sudo touch a/2/y
$ tree
.
└── a
    ├── 1
    │   ├── b
    │   │   ├── e
    │   │   ├── x
    │   │   └── y
    │   └── c
    └── 2
        ├── e
        ├── x
        └── y

4 directories, 7 files

Don’t forget to clean up before we move on:

$ sudo rm -rf a
rm: cannot remove 'a/1/b/e': Read-only file system
rm: cannot remove 'a/1/b/x': Read-only file system
rm: cannot remove 'a/1/b/y': Read-only file system

Oh no, what happened? Well, since we mounted the subvolume read-only above, we cannot delete anything through that mount. A deletion, from a filesystem's perspective, is a write operation: To delete a/1/b/e, we remove the directory entry for e from the contents of its parent directory, a/1/b in this case. In other words, we must write to a/1/b to record that e no longer exists. So first we unmount the subvolume again, and then we remove the folder:

$ sudo umount a/1/b
$ sudo rm -rf a
$ tree
.

0 directories, 0 files

Subvolume IDs

Remember the first output of the subvolume list subcommand? That contained a lot of numbers, so let’s see what that is all about. I copied the output here to take another look:

ID 256 gen 30 top level 5 path home
ID 257 gen 30 top level 5 path root
ID 258 gen 25 top level 257 path root/var/lib/machines
ID 259 gen 29 top level 256 path hartan/btrfs-subvolume-test/first

We see there are three columns of numbers, each prefixed with a few letters to describe what they mean. The first column of numbers is a subvolume's ID. Subvolume IDs are unique within a Btrfs filesystem and as such uniquely identify subvolumes. This means that the subvolume named home can also be referred to by its ID 256. In the mount command above we wrote:

$ sudo mount -o subvol=hartan/...

Another perfectly legal option is to use subvolume IDs:

$ sudo mount -o subvolid=...

Subvolume IDs start at 256 and increase by 1 for every created subvolume. There is however one exception to this: The filesystem root always has the subvolume name / and the subvolume ID 5. That is right, even the root of a Btrfs filesystem is technically a subvolume. This is just implicitly known, hence it doesn’t show up in the output of btrfs subvolume list. If you mount a Btrfs filesystem without the subvol or subvolid argument, the root subvolume with subvolid=5 is assumed as default. Below we’ll see an example of when one may want to explicitly mount the filesystem root.
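For example, the home subvolume (ID 256 in the listing above) could be mounted by its ID instead of its name; the mount point below is just an arbitrary example, and the device is the same /dev/vda3 used earlier:

$ sudo mkdir /mnt/home-by-id
$ sudo mount -o subvolid=256 /dev/vda3 /mnt/home-by-id
$ sudo umount /mnt/home-by-id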

The second column of numbers is the generation counter, which is incremented on every Btrfs transaction. This is mostly an internal counter and won't be discussed further here.

Finally, the third column of numbers is the subvolume ID of the subvolume's parent. In the output above we see that both the home and root subvolumes have 5 as their parent subvolume ID. Remember that ID 5 has a special meaning: It is the filesystem root. So we know that home and root are children of the root subvolume. hartan/btrfs-subvolume-test/first, on the other hand, is a child of the subvolume with ID 256, which in our case is home.
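If you are only interested in a single subvolume, btrfs subvolume show prints the same kind of details (name, subvolume ID, parent ID, and more) for one path at a time:

$ sudo btrfs subvolume show /home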

In the next section we'll take a look at where the root and home subvolumes come from.

Inspecting default subvolumes in Fedora Linux

When you create a new Btrfs filesystem from scratch, there will be no subvolumes in it (Except of course for the root subvolume). So where do the home and root subvolumes in Fedora Linux come from?

These are created by the installer at install time. Traditional installations often include separate filesystem partitions for the / and /home directories. During boot, these are then mounted appropriately to assemble one full filesystem tree. But there is an issue with this approach: Unless you use technologies such as LVM, it is very hard to change a partition's size later. As a consequence you may end up in a situation where either / or /home runs out of space, while the other partition has lots of unused, free space left.

Since Btrfs subvolumes are all part of the same filesystem, they will share the space that the underlying filesystem offers. Remember when we created the subvolumes above? We never told Btrfs how big they are: A subvolume can take up all the space the filesystem has, by default nothing keeps it from doing so. However, we could dynamically impose size limits via Btrfs qgroups, which can also be modified during runtime (And we’ll see how in a later article in this series).

Another advantage of separating / and /home is that we can take snapshots separately. A subvolume is a boundary for snapshots, and snapshots will never contain the contents of other subvolumes below the subvolume that the snapshot is taken of. More details on snapshots follow in the next article in this series.

Enough of the theory! Let’s see what this is all about. First ensure that your root filesystem is in fact of type Btrfs:

$ findmnt -no FSTYPE /
btrfs

And then get the partition it resides on:

$ findmnt -vno SOURCE /
/dev/vda3

Remember we can mount the filesystem root by its special subvolume ID 5 (Adapt the filesystem partition!):

$ mkdir fedora-rootsubvol
$ sudo mount -o subvolid=5 /dev/vda3 ./fedora-rootsubvol
$ ls fedora-rootsubvol/
home root

And there are the subvolumes of our Fedora Linux installation! But how does Fedora Linux know that the subvolume root belongs to /, and home belongs to /home?

The file /etc/fstab contains so-called static information about the filesystem. In simple terms, during booting your system reads this file, line by line, and mounts all the filesystems listed there. On my system, the file looks like this:

$ cat /etc/fstab
# [ ... ]
# /etc/fstab
# Created by anaconda on Sat Oct 15 12:01:57 2022
# [ ... ]
#
UUID=5e4e42bb-4f2f-4f0e-895f-d1a46ea47807 / btrfs subvol=root,compress=zstd:1 0 0
UUID=e3a798a8-b8f2-40ca-9da7-5e292a6412aa /boot ext4 defaults 1 2
UUID=5e4e42bb-4f2f-4f0e-895f-d1a46ea47807 /home btrfs subvol=home,compress=zstd:1 0 0

(Note that the “UUID” lines above have been wrapped into two lines)

The UUID at the beginning of each line is simply a means to identify disks and filesystem partitions in your system (roughly equivalent to /dev/vda3 as I used above). The second column is the path in the filesystem tree where this filesystem should be mounted. The third column is the filesystem type. We see that the entries for / and /home are of type btrfs, just what we expect! Finally, in the fourth column we see the magic: These are the mount options, and there it says to mount / with the option subvol=root. That is exactly the subvolume we saw in the output of btrfs subvolume list / all the time!
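We can cross-check this against the running system: findmnt can print just the mount options in use, and the subvol= entry should match what /etc/fstab requested (adjust the paths if your layout differs):

$ findmnt -no OPTIONS /
$ findmnt -no OPTIONS /home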

With this information, we can reconstruct the call to mount that creates this filesystem entry:

$ sudo mount -o subvol=root,compress=zstd:1 UUID=5e4e42bb-4f2f-4f0e-895f-d1a46ea47807 /
(again, the line above has been wrapped into two)

And that is how Fedora Linux uses Btrfs subvolumes! If you’re curious as to why Fedora Linux decided to use Btrfs as the default filesystem, refer to the change proposal linked below [1].

More on Btrfs subvolumes

The Btrfs documentation has additional information on subvolumes and, most importantly, on the mount options that can be applied to Btrfs subvolumes. Some options, like compress, can only be applied at the filesystem level and thus affect all subvolumes of a Btrfs filesystem. You can find the entry linked below [2].

If you find it confusing to tell which directories are plain directories and which are subvolumes, you can feel free to adopt a special naming convention for your subvolumes. For example, you could prefix your subvolume names with an “@” to make them easily distinguishable.

Now that you know that subvolumes behave like filesystems, one may ask how best to place a subvolume in a certain location. Say you want a Btrfs subvolume under ~/games, where your home directory (~) is itself a subvolume, how can you achieve that? Given the example above, you may use a command like sudo btrfs subvolume create ~/games. This way, you create so-called nested subvolumes: Inside your subvolume ~, there is now a subvolume games. That is a perfectly fine way to approach this situation.

Another valid solution is to do what Fedora does by default: Create all subvolumes under the root subvolume (i.e. such that their parent subvolume ID is 5), and mount them into the appropriate locations. The Btrfs wiki has an overview of these approaches along with a short discussion about their respective implications on filesystem management [5].
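Here is a rough sketch of that flat-layout approach, reusing the /dev/vda3 device from the earlier examples (adjust the device, subvolume name, and mount point for your system): mount the filesystem root, create the new subvolume at the top level, then mount it where you actually want it:

$ sudo mount -o subvolid=5 /dev/vda3 /mnt
$ sudo btrfs subvolume create /mnt/games
$ sudo umount /mnt
$ mkdir ~/games
$ sudo mount -o subvol=games /dev/vda3 ~/games

To make such a mount permanent, you would add a matching line to /etc/fstab, just like the root and home entries we saw above.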

Conclusion

In this article we discovered Btrfs subvolumes, which act like separate Btrfs filesystems inside a Btrfs filesystem. We learned how to create, mount and delete subvolumes. Finally, we explored how Fedora Linux makes use of subvolumes – without us noticing at all.

The next articles in this series will deal with:

  • Snapshots – Going back in time
  • Compression – Transparently saving storage space
  • Qgroups – Limiting your filesystem size
  • RAID – Replace your mdadm configuration

If there are other topics related to Btrfs that you want to know more about, have a look at the Btrfs Wiki [3] and Docs [4]. Don’t forget to check out the first article of this series, if you haven’t already! If you feel that there is something missing from this article series, let us know in the comments below. See you in the next article!

Sources

[1]: https://fedoraproject.org/wiki/Changes/BtrfsByDefault#Benefit_to_Fedora
[2]: https://btrfs.readthedocs.io/en/latest/Subvolumes.html
[3]: https://btrfs.wiki.kernel.org/index.php/Main_Page
[4]: https://btrfs.readthedocs.io/en/latest/Introduction.html
[5]: https://btrfs.wiki.kernel.org/index.php/SysadminGuide#Layout