
Btrfs Coming to Fedora 33

by Chris Murphy and Langdon White


User data is the most important thing on a computer. Whether it’s source code for the next big release, family pictures, a music library, or anything else, you want it to be safe. Changing the default file system is not a change to make casually. The Fedora Project is changing the default file system for desktop variants (Fedora Workstation, Fedora KDE, etc.) for the first time since Fedora 11: Btrfs will replace ext4 as the default file system in Fedora 33.

What does this mean for me?

Btrfs is a stable and mature file system with modern features: data integrity, optimizations for SSDs, compression, cheap writable snapshots, multiple device support, and more.

The switch to Btrfs will use a single-partition disk layout, and Btrfs’ built-in volume management. The previous default layout placed constraints on disk usage that can be a difficult adjustment for novice users. Btrfs solves this problem by avoiding it.

As a techie, you may have heard of bit rot and memory bit flips. Data can be corrupted by a multitude of physical factors, even cosmic rays! Before an SSD fails outright, it will often return zeros or garbage instead of your data. Btrfs safeguards your data with checksums and performs verification on every read. Corrupt data is never handed to your programs, and it won’t replicate into your backups to be discovered another day (or year).
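You can also ask Btrfs to read back and verify every checksum on demand with a scrub. A minimal sketch, assuming the file system is mounted at / (adjust the path for your layout):

$ sudo btrfs scrub start /
$ sudo btrfs scrub status /

The scrub reads every block, verifies it against its checksum, and reports any mismatches; on profiles with redundancy it can repair them from a good copy.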

Btrfs uses a “copy-on-write” model: your data and the file system itself are never overwritten in place, which improves crash safety. When copying a file, Btrfs does not write new data until you actually change the old data, saving space.

In fact, users will save more space when using Btrfs’ transparent compression. Compressing data reduces total writes, saves space, and extends flash drive life. In many cases, it can also improve performance. Compression can be enabled on an entire file system, or per subvolume, directory, and even per file. You will be able to opt-in to using compression in Fedora 33. And it’s one of the features we’re looking forward to taking advantage of by default in future Fedora releases.
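As a taste of what that opt-in might look like, compression can be requested at mount time or per path. A sketch only; the device, mount point, and directory names below are placeholders:

$ sudo mount -o compress=zstd:1 /dev/sdb1 /mnt/data
$ sudo btrfs property set /mnt/data/projects compression zstd

The first command compresses everything newly written to the file system; the second marks a single directory so that new files created in it are compressed.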

Trusted

Facebook uses Btrfs on millions of machines in production, and finds its stability comparable to ext4 and XFS (another file system available in Fedora). In fact, they use Btrfs to “improve” the quality of the consumer storage hardware they use in production: Btrfs detects problems before the hardware fails.

(open)SUSE has been using Btrfs for many years now, including in SUSE Linux Enterprise Server (SLES). A company that provides paid support to customers would not ship software it doesn’t completely trust.

What’s next?

The Change is code complete, and has been testable in Rawhide as the default file system since early July. Btrfs has been explicitly supported in Fedora since 2012. This is expected to be a transparent change for most users; however, it is still significant. Fedora will ensure we deliver the dependable and reliable experience Fedora users have come to expect.

Special thanks to: Ben Cotton, Michael Catanzaro, and the Fedora Workstation Working Group for contributing to this article.


Contribute at the Fedora Kernel and GNOME test days

Fedora test days are events where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed to Fedora before, this is a perfect way to get started.

There are two test days in the upcoming week. The first, running from Monday 17 August through Monday 24 August, tests kernel 5.8. The second, on Wednesday 19 August, focuses on testing GNOME. Come and test with us to make the upcoming Fedora 33 even better. Read more below on how to do it.

Kernel test week

The kernel team is working on final integration for kernel 5.8. This version was just recently released and will arrive soon in Fedora. As a result, the Fedora kernel and QA teams have organized a test week for Monday, August 17 through Monday, August 24. Refer to the wiki page for links to the test images you’ll need to participate. This document clearly outlines the steps.

GNOME test day

GNOME is the default desktop environment for Fedora Workstation and thus for many Fedora users. As part of a planned change, the GNOME megaupdate will land in Fedora and ship with Fedora 33. To ensure that everything works well, the Workstation Working Group and the QA team will hold a test day on Wednesday, August 19. Refer to the wiki page for links and resources for the GNOME test day.

How do test days work?

A test day is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to download test materials (which include some large files) and then read and follow directions step by step.

Detailed information about both test days is on the wiki pages above. If you’re available on or around the days of the events, please do some testing and report your results.


TCP window scaling, timestamps and SACK

The Linux TCP stack has a myriad of sysctl knobs that allow its behavior to be changed. This includes the amount of memory that can be used for receive or transmit operations, the maximum number of sockets, and optional features and protocol extensions.

There are multiple articles that recommend disabling TCP extensions such as timestamps or selective acknowledgments (SACK) for various “performance tuning” or “security” reasons.

This article provides background on what these extensions do, why they are enabled by default, how they relate to one another, and why it is normally a bad idea to turn them off.

TCP Window scaling

The data transmission rate that TCP can sustain is limited by several factors. Some of these are:

  • Round trip time (RTT). This is the time it takes for a packet to get to the destination and a reply to come back. Lower is better.
  • The lowest link speed of the network paths involved.
  • The frequency of packet loss.
  • The speed at which new data can be made available for transmission. For example, the CPU needs to be able to pass data to the network adapter fast enough. If the CPU needs to encrypt the data first, the adapter might have to wait for new data. Similarly, disk storage can be a bottleneck if it can’t read the data fast enough.
  • The maximum possible size of the TCP receive window. The receive window determines how much data (in bytes) TCP can transmit before it has to wait for the receiver to report reception of that data. This is announced by the receiver. The receiver constantly updates this value as it reads and acknowledges reception of the incoming data. The receive window’s current value is contained in the TCP header that is part of every segment sent by TCP. The sender is thus aware of the current receive window whenever it receives an acknowledgment from the peer. This means that the higher the round-trip time, the longer it takes for the sender to get receive window updates.

TCP is limited to at most 64 kilobytes of unacknowledged (in-flight) data. This is not even close to what is needed to sustain a decent data rate in most networking scenarios. Let us look at some examples.

Theoretical data rate

With a round-trip-time of 100 milliseconds, TCP can transfer at most 640 kilobytes per second. With a 1 second delay, the maximum theoretical data rate drops down to only 64 kilobytes per second.
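These numbers follow directly from dividing the window size by the round-trip time:

    maximum data rate = receive window / round-trip time
                      = 64 kilobytes / 0.1 seconds ≈ 640 kilobytes per second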

This is because of the receive window. Once 64 kilobytes of data have been sent, the receive window is already full. The sender must wait until the peer informs it that at least some of the data has been read by the application.

The first segment sent reduces the TCP window by the size of that segment. It takes one round-trip before an update of the receive window value will become available. When updates arrive with a 1 second delay, this results in a 64 kilobyte limit even if the link has plenty of bandwidth available.

In order to fully utilize a fast network with several milliseconds of delay, a window size larger than what classic TCP supports is a must. The ’64 kilobyte limit’ is an artifact of the protocol’s specification: the TCP header reserves only 16 bits for the receive window size. This allows receive windows of up to 64 kilobytes. When the TCP protocol was originally designed, this size was not seen as a limit.

Unfortunately, it’s not possible to just change the TCP header to support a larger maximum window value. Doing so would mean all implementations of TCP would have to be updated simultaneously or they wouldn’t understand one another anymore. To solve this, the interpretation of the receive window value is changed instead.

The ‘window scaling option’ allows this while keeping compatibility with existing implementations.

TCP Options: Backwards-compatible protocol extensions

TCP supports optional extensions. This makes it possible to enhance the protocol with new features without the need to update all implementations at once. When a TCP initiator connects to the peer, it also sends a list of supported extensions. All extensions follow the same format: a unique option number, followed by the length of the option and the option data itself.

The TCP responder checks all the option numbers contained in the connection request. If it does not understand an option number, it skips ‘length’ bytes of data and checks the next option number. The responder omits those it did not understand from the reply. This allows both the sender and receiver to learn the common set of supported options.

With window scaling, the option data always consists of a single number.

The window scaling option

 
Window Scale option (WSopt): Kind: 3, Length: 3
    +---------+---------+---------+
    | Kind=3  |Length=3 |shift.cnt|
    +---------+---------+---------+
         1         1         1

The window scaling option tells the peer that the receive window value found in the TCP header should be scaled by the given number to get the real size.

For example, a TCP initiator that announces a window scaling factor of 7 instructs the responder that any future packet carrying a receive window value of 512 really announces a window of 65536 bytes. This is an increase by a factor of 128 (2^7). This would allow a maximum TCP window of 8 megabytes.

A TCP responder that does not understand this option ignores it. The TCP packet sent in reply to the connection request (the syn-ack) then does not contain the window scale option. In this case both sides can only use a 64k window size. Fortunately, almost every TCP stack supports and enables this option by default, including Linux.

The responder includes its own desired scaling factor. Both peers can use a different number. It’s also legitimate to announce a scaling factor of 0. This means the peer should treat the receive window value it receives verbatim, but it allows scaled values in the reply direction — the recipient can then use a larger receive window.

Unlike SACK or TCP timestamps, the window scaling option only appears in the first two packets of a TCP connection; it cannot be changed afterwards. It is also not possible to determine the scaling factor by looking at a packet capture of a connection that does not include the initial three-way handshake.

The largest supported scaling factor is 14. This allows TCP window sizes
of up to one Gigabyte.
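On Linux, you can check that window scaling is enabled, and inspect what was negotiated for live connections, with sysctl and ss (the sysctl output shown is illustrative):

$ sysctl net.ipv4.tcp_window_scaling
net.ipv4.tcp_window_scaling = 1
$ ss -ti

In the ss output, each established connection reports a wscale:x,y entry with the scaling factors negotiated for the two directions.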

Window scaling downsides

Window scaling can cause data corruption in very special cases. But before you disable the option, know this: the corruption is impossible under normal circumstances, and there is a solution in place that prevents it. Unfortunately, some people disable this solution without realizing its relationship with window scaling. First, let’s have a look at the actual problem that needs to be addressed. Imagine the following sequence of events:

  1. The sender transmits segments: s_1, s_2, s_3, … s_n
  2. The receiver sees: s_1, s_3, … s_n and sends an acknowledgment for s_1.
  3. The sender considers s_2 lost and sends it a second time. It also sends new data contained in segment s_n+1.
  4. The receiver then sees: s_2, s_n+1, s_2: the packet s_2 is received twice.

This can happen for example when a sender triggers re-transmission too early. Such erroneous re-transmits are never a problem in normal cases, even with window scaling. The receiver will just discard the duplicate.

Old data to new data

The TCP sequence number is a 32-bit value, so it can be at most 4 gigabytes. If it becomes larger than this, the sequence wraps back to 0 and then increases again. This is not a problem in itself, but if this occurs fast enough, the above scenario can create an ambiguity.

If a wrap-around occurs at the right moment, the sequence number s_2 (the re-transmitted packet) can already be larger than s_n+1. Thus, in the last step (4), the receiver may interpret this as: s_2, s_n+1, s_n+m, i.e. it could view the ‘old’ packet s_2 as containing new data.

Normally, this won’t happen because a ‘wrap around’ occurs only every couple of seconds or minutes, even on high bandwidth links. The interval between the original and an unneeded re-transmit will be a lot smaller.

For example, with a transmit speed of 50 megabytes per second, a duplicate needs to arrive more than one minute late for this to become a problem. The sequence numbers do not wrap fast enough for small delays to induce this problem.

Once TCP approaches ‘gigabyte per second’ throughput rates, the sequence numbers can wrap so fast that even a delay of only a few milliseconds can create duplicates that TCP cannot detect anymore. By solving the problem of the too-small receive window, TCP can now be used for network speeds that were impossible before – and that creates a new, albeit rare, problem. To safely use gigabyte-per-second speeds in environments with very low RTT, receivers must be able to detect such old duplicates without relying on the sequence number alone.

TCP time stamps

A best-before date

In the most simple terms, TCP timestamps just add a time stamp to the packets to resolve the ambiguity caused by very fast sequence number wrap-around. If a segment appears to contain new data, but its timestamp is older than the last in-window packet, then the sequence number has wrapped and the “new” packet is actually an older duplicate. This resolves the ambiguity of re-transmits even for extreme corner cases.

But this extension allows for more than just detection of old packets. The other major feature made possible by TCP timestamps is more precise round-trip time measurement (RTTm).

A need for precise round-trip-time estimation

When both peers support timestamps,  every TCP segment carries two additional numbers: a timestamp value and a timestamp echo.

 
TCP Timestamp option (TSopt): Kind: 8, Length: 10
+-------+----+----------------+-----------------+
|Kind=8 | 10 |TS Value (TSval)|EchoReply (TSecr)|
+-------+----+----------------+-----------------+
    1      1         4                4

An accurate RTT estimate is crucial for TCP performance. TCP automatically re-sends data that was not acknowledged. Re-transmission is triggered by a timer: If it expires, TCP considers one or more packets that it has not yet received an acknowledgment for to be lost. They are then sent again.

But “has not been acknowledged” does not mean the segment was lost. It is also possible that the receiver has not sent an acknowledgment yet, or that the acknowledgment is still in flight. This creates a dilemma: TCP must wait long enough for such slight delays to not matter, but it can’t wait too long either.

Low versus high network delay

In networks with a high delay, if the timer fires too fast, TCP frequently wastes time and bandwidth with unneeded re-sends.

In networks with a low delay, however, waiting too long causes reduced throughput when a real packet loss occurs. Therefore, the timer should expire sooner in low-delay networks than in those with a high delay. The TCP retransmit timeout therefore cannot use a fixed constant value. It needs to adapt based on the delay it experiences in the network.

Round-trip time measurement

TCP picks a retransmit timeout that is based on the expected round-trip time (RTT). The RTT is not known in advance. RTT is estimated by measuring the delta between the time a segment is sent and the time TCP receives an acknowledgment for the data carried by that segment.

This is complicated by several factors.

  • For performance reasons, TCP does not generate a new acknowledgment for every packet it receives. It waits for a very small amount of time: if more segments arrive, their reception can be acknowledged with a single ACK packet. This is called “cumulative ACK”.
  • The round-trip-time is not constant. This is because of a myriad of factors. For example, a client might be a mobile phone switching to different base stations as it’s moved around. It’s also possible that packet switching takes longer when link or CPU utilization increases.
  • A packet that had to be re-sent must be ignored during the computation. This is because the sender cannot tell if the ACK for the re-transmitted segment is acknowledging the original transmission (that arrived after all) or the re-transmission.

This last point is significant: when TCP is busy recovering from a loss, it may only receive ACKs for re-transmitted segments. It then can’t measure (update) the RTT during this recovery phase. As a consequence, it can’t adjust the re-transmission timeout, which then keeps growing exponentially. That’s a pretty specific case (it assumes that other mechanisms such as fast retransmit or SACK did not help). Nevertheless, with TCP timestamps, RTT evaluation is done even in this case.

If the extension is used, the peer reads the timestamp value from the TCP segment’s extension space and stores it locally. It then places this value in all the segments it sends back as the “timestamp echo”.

Therefore the option carries two timestamps: the sender’s own timestamp and the most recent timestamp it received from the peer. The “echo timestamp” is used by the original sender to compute the RTT: it’s the delta between its current timestamp clock and what was reflected in the “timestamp echo”.
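Expressed as a formula, with TSecr being the echoed timestamp carried in the acknowledgment:

    RTTm = (sender's timestamp clock when the ACK arrives) - TSecr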

Other timestamp uses

TCP timestamps even have other uses beyond PAWS (Protection Against Wrapped Sequences, the duplicate-detection mechanism described above) and RTT measurements. For example, it becomes possible to detect whether a retransmission was unnecessary. If the acknowledgment carries an older timestamp echo, the acknowledgment was for the initial packet, not the re-transmitted one.

Another, more obscure use case for TCP timestamps is related to the TCP syn cookie feature.

TCP connection establishment on server side

When connection requests arrive faster than a server application can accept them, the connection backlog will eventually reach its limit. This can occur because of a misconfiguration of the system or a bug in the application. It also happens when one or more clients send connection requests without reacting to the ‘syn ack’ response. This fills the connection queue with incomplete connections. It takes several seconds for these entries to time out. This is called a “syn flood attack”.

TCP timestamps and TCP syn cookies

Some TCP stacks can accept new connections even if the queue is full. When this happens, the Linux kernel will print a prominent message to the system log:

Possible SYN flooding on port P. Sending Cookies. Check SNMP counters.

This mechanism bypasses the connection queue entirely. The information that is normally stored in the connection queue is encoded into the SYN/ACK response’s TCP sequence number. When the ACK comes back, the queue entry can be rebuilt from the sequence number.

The sequence number only has limited space to store information. Connections established using the ‘TCP syn cookie’ mechanism cannot support TCP options for this reason.

The TCP options that are common to both peers can be stored in the timestamp, however. The ACK packet reflects the value back in the timestamp echo field, which makes it possible to recover the agreed-upon TCP options as well. Otherwise, cookie connections are restricted to the standard 64 kilobyte receive window.
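On Linux, syn cookies are controlled by a sysctl knob and are typically enabled by default on current kernels (the output shown is illustrative):

$ sysctl net.ipv4.tcp_syncookies
net.ipv4.tcp_syncookies = 1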

Common myths – timestamps are bad for performance

Unfortunately some guides recommend disabling TCP timestamps to reduce the number of times the kernel needs to access the timestamp clock to get the current time. This is not correct. As explained before, RTT estimation is a necessary part of TCP. For this reason, the kernel always takes a microsecond-resolution time stamp when a packet is received/sent.

Linux re-uses the clock timestamp taken for the RTT estimation for the remainder of the packet processing step. This also avoids the extra clock access to add a timestamp to an outgoing TCP packet.

The entire timestamp option only requires 10 bytes of TCP option space in each packet; this is not a significant decrease in the space available for packet payload.

Common myths – timestamps are a security problem

Some security audit tools and (older) blog posts recommend disabling TCP timestamps because they allegedly leak system uptime: this would then allow estimating the patch level of the system/kernel. This was true in the past: the timestamp clock is based on a constantly increasing value that starts at a fixed value on each system boot. A timestamp value would give an estimate as to how long the machine has been running (uptime).

As of Linux 4.12 TCP timestamps do not reveal the uptime anymore. All timestamp values sent use a peer-specific offset. Timestamp values also wrap every 49 days.

In other words, connections from or to address “A” see a different timestamp than connections to the remote address “B”.

Run sysctl net.ipv4.tcp_timestamps=2 to disable the randomization offset. This makes analyzing packet traces recorded by tools like Wireshark or tcpdump easier – packets sent from the host then all have the same clock base in their TCP option timestamp. For normal operation the default setting should be left as-is.

Selective Acknowledgments

TCP has problems if several packets in the same window of data are lost. This is because TCP acknowledgments are cumulative, but only for packets that arrived in-sequence. Example:

  • Sender transmits segments s_1, s_2, s_3, … s_n
  • Sender receives ACK for s_2
  • This means that both s_1 and s_2 were received and the
    sender no longer needs to keep these segments around.
  • Should s_3 be re-transmitted? What about s_4? s_n?

The sender waits for a “retransmission timeout” or ‘duplicate ACKs’ for s_2 to arrive. If a retransmit timeout occurs or several duplicate ACKs for s_2 arrive, the sender transmits s_3 again.

If the sender receives an acknowledgment for s_n, s_3 was the only missing packet. This is the ideal case. Only the single lost packet was re-sent.

If the sender receives an acknowledgment for a segment smaller than s_n, for example s_4, that means that more than one packet was lost. The sender needs to re-transmit the next segment as well.

Re-transmit strategies

It’s possible to just repeat the same sequence: re-send the next packet until the receiver indicates it has processed all packets up to s_n. The problem with this approach is that it requires one RTT until the sender knows which packet it has to re-send next. While such a strategy avoids unnecessary re-transmissions, it can take several seconds or more until TCP has re-sent the entire window of data.

The alternative is to re-send several packets at once. This approach allows TCP to recover more quickly when several packets have been lost. In the above example TCP re-sends s_3, s_4, s_5, …, even though it can only be sure that s_3 has been lost.

From a latency point of view, neither strategy is optimal. The first strategy is fast if only a single packet has to be re-sent, but takes too long when multiple packets were lost.

The second one is fast even if multiple packets have to be re-sent, but at the cost of wasting bandwidth. In addition, such a TCP sender could have transmitted new data already while it was doing the unneeded re-transmissions.

With the available information TCP cannot know which packets were lost. This is where TCP Selective Acknowledgments (SACK) come in. Just like window scaling and timestamps, it is another optional, yet very useful TCP feature.

The SACK option

 
   TCP Sack-Permitted Option: Kind: 4, Length 2
   +---------+---------+
   | Kind=4  | Length=2|
   +---------+---------+

A sender that supports this extension includes the “Sack Permitted” option in the connection request. If both endpoints support the extension, then a peer that detects a packet is missing in the data stream can inform the sender about this.

 
   TCP SACK Option: Kind: 5, Length: Variable
                     +--------+--------+
                     | Kind=5 | Length |
   +--------+--------+--------+--------+
   |      Left Edge of 1st Block       |
   +--------+--------+--------+--------+
   |      Right Edge of 1st Block      |
   +--------+--------+--------+--------+
   |                                   |
   /            . . .                  /
   |                                   |
   +--------+--------+--------+--------+
   |      Left Edge of nth Block       |
   +--------+--------+--------+--------+
   |      Right Edge of nth Block      |
   +--------+--------+--------+--------+

A receiver that sees s_2 followed by s_5…s_n (s_3 and s_4 are missing) will include a SACK block when it sends the acknowledgment for s_2:

 
                +--------+-------+
                | Kind=5 |   10  |
+--------+------+--------+-------+
| Left edge: s_5                 |
+--------+--------+-------+------+
| Right edge: s_n                |
+--------+-------+-------+-------+

This tells the sender that segments up to s_2 arrived in-sequence, but it also lets the sender know that the segments s_5 to s_n were received as well. The sender can then re-transmit the two missing packets (s_3 and s_4) and proceed to send new data.

The mythical lossless network

In theory, SACK provides no advantage if the connection cannot experience packet loss, or if the connection has such a low latency that even waiting one full RTT does not matter.

In practice, lossless behavior is virtually impossible to ensure. Even if the network and all its switches and routers have ample bandwidth and buffer space, packets can still be lost:

  • The host operating system might be under memory pressure and drop
    packets. Remember that a host might be handling tens of thousands of packet streams simultaneously.
  • The CPU might not be able to drain incoming packets from the network interface fast enough. This causes packet drops in the network adapter itself.
  • If TCP timestamps are not available, even a connection with a very small RTT can stall momentarily during loss recovery.

Use of SACK does not increase the size of TCP packets unless a connection experiences packet loss. Because of this, there is hardly a reason to disable this feature. Almost all TCP stacks support SACK – it is typically only absent on low-power IoT-like devices that do not do TCP bulk data transfers.

When a Linux system accepts a connection from such a device, TCP automatically disables SACK for the affected connection.

Summary

The three TCP extensions examined in this post are all related to TCP performance and are best left at their default setting: enabled.

The TCP handshake ensures that only extensions that are understood by both parties are used, so there is never a need to disable an extension globally just because a peer might not support it.

Turning these extensions off results in severe performance penalties, especially in the case of TCP window scaling and SACK. TCP timestamps can be disabled without an immediate disadvantage; however, there is no compelling reason to do so anymore. Keeping them enabled also makes it possible to support TCP options even when SYN cookies come into effect.
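On a Linux system, a single sysctl invocation is enough to confirm that all three extensions are active. A value of 1 means enabled (timestamps may also show 2, as described above):

$ sysctl net.ipv4.tcp_window_scaling net.ipv4.tcp_timestamps net.ipv4.tcp_sack
net.ipv4.tcp_window_scaling = 1
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 1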


Backup and restore Toolboxes

Toolboxes are often described as disposable containers – and that is still one of their major uses: install stuff, try it out in the relative safety of a container, and lastly, cleanly dispose of it. Minimal risk and fuss, and no pesky residual libraries and applications hanging around on the host long after you have finished.

So why would you back up a Toolbox? Sometimes they have more permanent uses, contain complex and lengthy installs, or are used for critical applications. For example, Toolboxes can be used as a development environment, containing hardware-associated drivers and applications. Or they could be used for an application you want to run in a container for which there is no Flatpak, or one that has requirements a Flatpak doesn’t satisfy. While they can be handy on Fedora Workstation, toolbox containers are often essential for Silverblue users, since they offer an easy solution for installing applications that can’t successfully be installed by rpm-ostree, or that may not have a Flatpak version readily available. In the above situations a busted Toolbox can be a major headache. But if a backup exists, you can quickly restore a Toolbox or move it to another workstation.

The backup process uses Podman to create an image of an existing toolbox container, and save that image to an archive file. To restore the toolbox container, load the image from the archive file and then create a Toolbox from that image. The new toolbox container will be an identical copy of your backed up toolbox container.

It is important to note that this process does not back up data, just what you have installed in the toolbox container. This includes packages installed from repositories or from a local rpm file using dnf. If you need to back up data, Podman’s commit command, which will be used to capture an image of the toolbox container, has an option to include volumes attached to the container.
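The rest of this article walks through these steps one at a time. As a quick preview, the whole round trip is only a handful of commands; a minimal sketch, assuming a toolbox container named go and a backup image named go-backup (both names are placeholders):

$ podman container stop go
$ podman container commit -p go go-backup
$ podman save -o go.tar go-backup

Then, later or on another machine:

$ podman load -i go.tar
$ toolbox create --container go --image localhost/go-backup:latest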

Creating a backup

To back up a toolbox container you will need its name and container ID, which you can get using toolbox list. For this example I am going to back up my golang development toolbox container, imaginatively named go.

$ toolbox list
CONTAINER ID  CONTAINER NAME  CREATED      STATUS  IMAGE NAME
00ff783a102f  go              5 weeks ago  exited  registry.fedoraproject.org/f32/fedora-toolbox:32

If the container’s status shows as running, you should stop it using podman container stop container_name. Although the commit command has a -p (pause) option, it is best to make sure the Toolbox is not running, which helps it initialize correctly when restored from a backup.

$ podman container stop go

To create an image of the toolbox container use

podman container commit -p container_ID backup-image-name

Depending on the complexity of the Toolbox, this can take a little while.

 $ podman container commit -p 00ff783a102f go-backup

Now, to confirm the image has been created, type…

$ toolbox list

You should get output similar to what is below…

IMAGE ID      IMAGE NAME                  CREATED
cfcb13046db7  localhost/go-backup:latest  About a minute ago

CONTAINER ID  CONTAINER NAME  CREATED      STATUS  IMAGE NAME
00ff783a102f  go              5 weeks ago  exited  registry.fedoraproject.org/f32/fedora-toolbox:32

Now save the backup image to a tar archive file using podman save -o backup-filename.tar backup-image-name.

$ podman save -o go.tar go-backup

Confirm the archive file, our toolbox container backup, was created.

$ ls go.tar 

Do some tidying up, remove the backup image and, if needed, remove the original Toolbox.

$ podman rmi go-backup
$ toolbox rm go

Restore a backup

To create an image from the backup file that was made above, use the command podman load -i backup_filename.

$ podman load -i go.tar

Then you can confirm the image was created with…

$ toolbox list
IMAGE ID      IMAGE NAME                  CREATED
cfcb13046db7  localhost/go-backup:latest  17 minutes ago

Now create a toolbox container from the restored image, with toolbox create --container container_name --image image_name, specifying the full repository and version tag as the image name.

$ toolbox create --container go --image localhost/go-backup:latest

Confirm that the toolbox was created.

$ toolbox list
IMAGE ID      IMAGE NAME                  CREATED
cfcb13046db7  localhost/go-backup:latest  20 minutes ago

CONTAINER ID  CONTAINER NAME  CREATED         STATUS      IMAGE NAME
34cef6b7e28d  go              21 seconds ago  configured  localhost/go-backup:latest

Finally, you can test that the restored Toolbox works…

$ toolbox enter --container go

If you can enter the newly created toolbox container, you will see the toolbox prompt and will have successfully backed up and restored your pet toolbox container.


Fedora Origins – Part 01

Editor’s comment: The format of this article is different from the usual article that Fedora Magazine publishes: a Fedora origins story told from the point of view of a Fedora user. The author has chosen to tell a story, since simply presenting the bare facts would be akin to just reading the wiki page about it.

Hello World!

Hello, I am… no, I’m not going to give my real name. Let’s say I’m female, probably shorter and older than you. I used to go by the nick of Isadora, more on that later.

Here you have one of the old RH boxes

Now some context. Back in the late ’90s, the internet became popular and PCs started to be a thing. However, most people didn’t have either, because they were very expensive and often you could do better with the traditional methods. Yes, computers were very basic back then. I used to play with these pocket games that were fascinating at the time, but totally lame now. Monochrome screens with pixelated flat animations. Not going to dive there, just giving an idea of how it was.

In the mid-90s a company named Red Hat emerged and slowly started to make a profit by selling its own business-oriented distribution and software utilities. The name comes from one of its founders, Marc Ewing, who used to wear a red lacrosse cap in university so other students could spot him easily and ask him questions.
Of course, as it was a business-oriented distribution, and I was busy with multiple other things, I didn’t pay much attention to it. It lacked the software I needed, and since I wasn’t a customer, I was nobody to ask for additions. However, it was Linux and as such Open Source. People started to package stuff for RHL and put it in repositories. I was invited to join the community project, Fedora.us. I promptly declined, misunderstanding the name. It was only the second time I was invited that I asked, ‘what is with the “US” there (in the name)?’ Another user explained it was ‘us’ as in ‘we,’ not as in the ‘United States.’ They explained a bit about how the community worked and I decided to give it a go.

Then my studies got in the way, and I had to shelve it.

Login Screen in Fedora Core

Press Return

By the time I came back to Fedora.us it had changed its name to Fedora Project and was actively being worked on from within Red Hat. Now, I wasn’t there so my direct knowledge of how this happened is a bit foggy. Some say that Fedora existed separately and Red Hat added/invited them, some say that Fedora was completely RH’s idea, some say they existed independently and at some point met or joined. Choose the version you like, I’ll put some links down there so you can know more details and decide for yourself. As far as I’m concerned, they worked together.

Well, as usual, someone dropped some CDs with ISOs for me. If I had a euro for every ISO I’ve been offered, or had tossed at my desk, for me to try, I would be rich. As a matter of fact, I’m not rich, but I do have a big rack full of old distros.

Anyways

Now it’s the early 2000s and things have changed dramatically. Computers’ prices have dropped and internet speed is increasing, plus a set of new technologies make it cheaper and more reliable. Computers now can do so much more than just a decade ago, and they’re smaller too. Screens are bigger, with better colors and resolution. Laptops are starting to become popular though still expensive and less powerful than desktop PCs.

During this time, I tried both Fedora and Red Hat. Now, as has been said before, Red Hat focuses on businesses and companies. Their main concern is having exactly the software their customers need, with the features their customers need, delivered with rock solid stability and a reliable update and support cycle. A lot of customization, a variety of options and many cool new features are not their core focus. More software means more testing and development work, and bigger chances of things failing. Yet the technology industry is constantly changing and innovating. Sticking too much to older versions or proven formulas can be fatal for a company.

So what to do? Well, they solved it with Fedora. Fedora Project would be the innovative, forward-looking test bed, and Red Hat Enterprise Linux the more conservative, rock solid operating system for businesses. Yes, they changed the name from Red Hat Linux to Red Hat Enterprise Linux. Sounds better, doesn’t it?

Unsurprisingly, Fedora had a reputation for being difficult, unstable and for “hackers only”. Whenever I said I was using Fedora, people would give me odd looks or say something like “I want something stable” or “I’m not into that” (meaning they didn’t fancy programming/hacking activities). Countless individuals suggested I might want to use one of the other, beginner-friendly distributions, without even giving Fedora a try themselves! Many would disregard Linux as a whole as an amateur thing, only valid for playing but not good for serious work and companies. To each their own, I suppose.

Note the F and the bubble already there

Yes, but why?

Those early versions were called Fedora Core and had a very uncertain release pattern. The six-month cycle came much later. Fedora Core got its name because there were two repositories, Core and Extras. Core had the essentials, so to speak, and was maintained by Red Hat. Extras was, well, everything else. Any software that most users would want or need was included there, and it was maintained by a wide range of contributors.

From the beginning, one of the most powerful reasons for me to use it was the community and its core values. The Four Foundations of Fedora, Freedom, Features, First & Friends were lived and breathed and not just a catchy line on a website or a leaflet. Fedora Project strove (and still does) to deliver the newest features first, caring for freedom (of choice and software) and keeping a good open community, making friends as we contribute to the project.

I also liked the fact that Fedora, as its purpose was testing for Red Hat, delivered a lot of new software and technologies; it was like opening the window to see the future today.

The downside was its unreliable upgrade cycle. You could get a new version in a few months or next year… nobody knew, there was no agreed schedule.

Note how, despite being Fedora, RH’s logo and signature is omnipresent

What was in the box

Fedora Core kept this name up to the sixth version. From the start, it was meant to be a distribution you could use right after installing it, so it came with Gnome 2, KDE 3, OpenOffice and some browser I forgot, possibly Firefox.

I remember it being the first to introduce SELinux by default, and to replace LILO with GRUB. I also remember the hardware requirements were something at the time, although they now sound laughable: Pentium II 400MHz, 256MB RAM (yes, you read that right) and 2GB of disk space. It even had an option for terminal only! This would require only 64MB RAM and a Pentium II 200MHz. Amazing, isn’t it?

It had codenames. Not publicly, but it had, and they were quite peculiar. Fedora Core 1 was code named «Yarrow», which is a medium-size plant with yellow or white crown-like flowers. Core 2 was Tettnang, which is a small town in Baden-Württemberg, Germany. Not sure about Core 3, I think it was Heidelberg, but maybe I’m mixing it with later releases. Core 4 was Stentz, if I recall correctly (no idea what it means), Core 5 was a colour, I think Bordeaux, and Core 6 was Zod, which I think was a comic character, but I could be wrong. If there was a method to their madness I have no idea. I thought the names amusing but didn’t give them a second thought, as they didn’t affect anything, not even the design of each release.

Ah… good ol’ genetic helix

So what now?

Well, of course, Fedora Project has evolved from where we have stopped. But that’s for later articles or this one will be too long. For now, I leave you with an extract of an interview with Matthew Miller, current Project Leader and some links in case you want to know more.

Extracts from an interview with Matthew Miller, Fedora Project Leader.

Matthew Miller tells about the beginnings in Eduard Lucena’s podcast (transcription here): “Fedora started about 15 years ago, really. It actually started as a thing called Fedora.us. Back in those days, there was Red Hat Linux.” “Meanwhile, there was this thing called Fedora.us which was basically a project to make additional software available to users of Red Hat Linux. Find things that weren’t part of Red Hat Linux, and package them up, and make them available to everybody. That was started as a community project.”

“Red Hat (then) merged with this Fedora.us project to form Fedora Project that produces an upstream operating system that Red Hat Enterprise Linux is derived from but then moves on a slower pace.”

“We were then two parts, Fedora Core, which was basically inherited from the old Red Hat Linux and only Red Hat employees could do anything with, and then Fedora Extras, where the community could come together to add things on top of that Fedora Core. It took a little while to get off the ground but it was fairly successful.”

“Around the time of Fedora Core 6, those were actually merged together into one big Fedora where all of the packages were all part of the same thing. There was no more distinction of Core and Extras, and everything was all together and, more importantly, all the community was all together.

They invited the community to take ownership of the whole thing and for Red Hat to become part of the community rather than separate. That was a huge success.”

Links of interest

Fedora, a visual history
https://www.phoronix.com/scan.php?page=article&item=678&num=1

Red Hat Videos – Fedora’s anniversary
https://youtu.be/DOFXBGh6DZ0

Red Hat Videos – Default to open
https://youtu.be/vhYMRtqvMg8

Fedora’s Mission & Foundations
https://docs.fedoraproject.org/en-US/project/

A short history of Fedora
https://youtu.be/NlNlcLD2zRM


How to contribute to Folding@home on Fedora

What is Folding@home?

Folding@home is a distributed computing network for performing biomedical research. Its intent is to help further understand and develop cures for a range of diseases. Their current priority is understanding the behavior of SARS-CoV-2, the virus that causes COVID-19. This article will show you how you can get involved by donating your computer’s idle time.

Sounds cool, how do I help?

In order to donate your computational power to Folding@home, download the FAHClient package from this page. Once you’ve downloaded the package, open your Downloads folder and double-click it. For instance, on standard Fedora Workstation, this opens GNOME Software, which prompts you to install the package.

Click install and enter your password to continue from here.

How to start Folding@home

Folding@home starts folding as soon as it is installed. To control how much CPU/GPU power it uses, you must open the web control interface, available here.

The interface contains information about what project you are contributing to. In order to track “points,” the scoring system of Folding@home, you must set up a user account with Folding@home.

Tracking your work

Now that everything’s done, you may be wondering how you can track the work your computer is doing. All you need to do is request a passkey from this page. Enter your email and your desired username. Once you have received the passkey in email, you can enter it into the client settings.

Click on the Change Identity button, and this page appears:

You can also put in a team number here like I have. This allows your points to go towards a group that you support.

Enter the username you gave when you requested a passkey, and then enter the passkey you received.

What next?

That’s all there is to it. Folding@home runs in the background automatically on startup. If you need to pause or lower how much CPU/GPU power it uses, you can change that via the web interface linked above.

You may notice that you don’t receive many work units. That’s because there is currently a shortage of work units to distribute due to a spike of computers being put onto the network. However, different efforts are emerging all the time.

You can visually see the spike in computers on the network, comparing the same time last year to 4/4/2020.

Photo by Joshua Sortino on Unsplash.


Submit a supplemental wallpaper for Fedora 32

Attention Fedora community members: Fedora is seeking submissions for supplemental wallpapers to be included with the Fedora 32 release. Whether you’re an active contributor or have been looking for an easy way to get started contributing, submitting a wallpaper is a great way to help. Read on for more details.

Each release, the Fedora Design Team works with the community on a set of 16 additional wallpapers. Users can install and use these to supplement the standard wallpaper.

Dates and deadlines

The submission phase opened on March 7, 2020, and ends March 21, 2020 at 23:59 UTC.

Important note: In some circumstances, submissions made during the final hours may not make it into the vote if there is insufficient time to do legal research. Please help by following the guidelines correctly, and submit only work under a correct license.

The voting phase will open the Monday following the close of submissions, March 23, 2020, and will be open until the end of the month on March 31, 2020 at 23:59 UTC.

How to contribute a wallpaper

Fedora uses the Nuancier application to manage the submissions and the voting process. To submit, you need a Fedora account. If you don’t have one, create one here in the Fedora Account System (FAS). To vote you must have a signed contributor agreement (also accessible in FAS) which only takes a few moments.

You can access Nuancier here along with detailed instructions for submissions.


How we decide when to release Fedora

Open source projects can use a variety of different models for deciding when to put out a release. Some projects release on a set schedule. Others decide on what the next release should contain and release whenever that is ready. Some just wake up one day and decide it’s time to release. And other projects go for a rolling release model, avoiding the question entirely.

For Fedora, we go with a schedule-based approach. Releasing twice a year means we can give our contributors time to implement large changes while still keeping on the leading edge. Targeting releases for the end of April and the end of October gives everyone predictability: contributors, users, upstreams, and downstreams.

But it’s not enough to release whatever’s ready on the scheduled date. We want to make sure that we’re releasing quality software. Over the years, the Fedora community has developed a set of processes to help ensure we can meet both our time and quality targets.

Changes process

Meeting our goals starts months before the release. Contributors propose changes through our Changes process, which ensures that the community has a chance to provide input and be aware of impacts. For changes with a broad impact (called “system-wide changes”), we require a contingency plan that describes how to back out the change if it’s broken or won’t be ready in time. In addition, the change process includes providing steps for testing. This helps make sure we can properly verify the results of a change.

Change proposals are due 2-3 months before the beta release date. This gives the community time to evaluate the impact of the change and make any necessary adjustments. For example, a new compiler release might require other package maintainers to fix bugs exposed by the new compiler or to make changes that take advantage of new capabilities.

A few weeks before the beta and final releases, we enter a code freeze. This ensures a stable target for testing. Bugs identified as blockers and non-blocking bugs that are granted a freeze exception are updated in the repo, but everything else must wait. The freeze lasts until the release.

Blocker and freeze exception process

In a project as large as Fedora, it’s impossible to test every possible combination of packages and configurations. So we have a set of test cases that we run to make sure the key features are covered.

As much as we’d like to ship with zero bugs, if we waited until we reached that state, there’d never be another Fedora release again. Instead, we’ve defined release criteria that define what bugs can block the release. We have basic release criteria that apply to all release milestones, and then separate, cumulative criteria for beta and final releases. With beta releases, we’re generally a little more forgiving of rough edges. For a final release, it needs to pass all of beta’s criteria, plus some more that help make it a better user experience.

The week before a scheduled release, we hold a “go/no-go meeting”. During this meeting, the QA team, the release engineering team, and the Fedora Engineering Steering Committee (FESCo) decide whether or not we will ship the release. As part of the decision process, we conduct a final review of blocker bugs. If any accepted blockers remain, we push the release back to a later date.

Some bugs aren’t severe enough to block the release, but we still would like to get them fixed before the release. This is particularly true of bugs that affect the live image experience. In that case, we grant an exception for updates that fix those bugs.

How you can help

In all my years as a Fedora contributor, I’ve never heard the QA team say “we don’t need any more help.” Contributing to the pre-release testing processes can be a great way to make your first Fedora contribution.

The Blocker Review meetings happen most Mondays in #fedora-blocker-review on IRC. All members of the Fedora community are welcome to participate in the discussion and voting. One particularly useful contribution is to look at the proposed blockers and see if you can reproduce them. Knowing if a bug is widespread or not is important to the blocker decision.

In addition, the QA team conducts test days and test weeks focused on various parts of the distribution: the kernel, GNOME, etc. Test days are announced on Fedora Magazine.

There are plenty of other ways to contribute to the QA process. The Fedora wiki has a list of tasks and how to contact the QA team. The Fedora 32 Beta release is a few weeks away, so now’s a great time to get started!


Welcoming our new Fedora Community Action and Impact Coordinator

Good news, everybody! I’m pleased to announce that we have completed our search for a new Fedora Community Action and Impact Coordinator, and she’ll be joining the Open Source Program Office (OSPO) team to work with Fedora as of today. Please give a warm welcome to Marie Nordin.

If you’ve been involved in Fedora, you may have already been working with Marie. She’s a member of the Fedora Design and Badges teams. Her latest contribution to the Design Team is the wallpaper for F31, a collaboration with Máirín Duffy. Marie has made considerable contributions to the Badges project. She has designed over 150 badges, created documentation and a style guide, and mentored new design contributors for years. Most recently she has been spearheading a bunch of work related to bringing Badges up to date on both the development and the UI/UX of the web app.

Marie is new to Red Hat, joining us after 5 years of involvement with the Fedora community. She was first introduced to Fedora through an Outreachy internship in 2013, working on Fedora Badges. Marie’s most recent full-time position was in the distribution industry as a purchasing agent, bid coordinator, and manager. She also has a strong background in design outside of her efforts for Fedora, having worked as a freelance graphic designer for the past 8 years.

I believe that Marie’s varied background in business and administration, her experience with design, and her long term involvement with and passion for Fedora makes her an excellent fit for this position. I’m excited to work with her as both a colleague on her team at Red Hat and as a Fedora contributor.

Feel free to reach out with congratulations, but give her a bit to get fully engaged with Fedora duties.

Congratulations, Marie!


Set up single sign-on for Fedora Project services

In addition to an operating system, the Fedora Project provides services for users and developers. Services such as Ask Fedora, the Fedora Project wiki and the Fedora Project mailing lists help users learn how to best take advantage of Fedora. For developers of Fedora, there are many other services such as dist-git, Pagure, Bodhi, COPR and Bugzilla for the packaging and release process.

These services are available with a free account from the Fedora Accounts System (FAS). This account is the passport to all things Fedora! This article covers how to get set up with an account and configure Fedora Workstation for browser single sign-on.

Signing up for a Fedora account

To create a FAS account, browse to the account creation page. Here, you will fill out your basic identity data:

Account creation page

Once you enter your data, the account system sends an email to the address you provided with a temporary password. You will be prompted to reset it: pick a strong password and use it.

Password reset page

Next, the account details page appears. If you want to contribute to the Fedora Project, you should complete the Contributor Agreement now. Otherwise, you are done and you can use your account to log into the various Fedora services.

Account details page

Configuring Fedora Workstation for single sign-on

Now that you have your account, you can sign into any of the Fedora Project services. Most of these services support single sign-on (SSO), so you can sign in without re-entering your username and password.

Fedora Workstation provides an easy workflow to add your Fedora credentials. The GNOME Online Accounts tool helps you quickly set up your system to access many popular services. To access it, go to the Settings menu.

Click on the option labeled Fedora. A prompt opens for you to provide your username and password for your Fedora Account.

GNOME Online Accounts stores your password in GNOME Keyring and automatically acquires your single-sign-on credentials for you when you log in.
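If you are curious whether the single sign-on credentials were actually acquired, you can check for a Kerberos ticket from a terminal. A sketch only; the cache type and principal shown below are illustrative and will vary on your system:

$ klist
Ticket cache: KCM:1000
Default principal: yourusername@FEDORAPROJECT.ORG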

Single sign-on with a web browser

Today, Fedora Workstation supports single sign-on to Fedora Project services out of the box in three web browsers: Mozilla Firefox, GNOME Web, and Google Chrome.

Due to a bug in Chromium, single sign-on doesn’t work currently if you have more than one set of Kerberos (SSO) credentials active on your session. As a result, Fedora doesn’t enable this function out of the box for Chromium in Fedora.

To sign on to a service, browse to it and select the login option for that service. For most Fedora services, this is all you need to do; the browser handles the rest. Some services such as the Fedora mailing lists and Bugzilla support multiple login types. For them, select the Fedora or Fedora Account System login type.

That’s it! You can now log into any of the Fedora Project services without re-entering your password.

Special consideration for Google Chrome

To enable single sign-on out of the box for Google Chrome, Fedora takes advantage of certain features in Chrome that are intended for use in “managed” environments. A managed environment is traditionally a corporate or other organization that sets certain security and/or monitoring requirements on the browser.

Recently, Google Chrome changed its behavior and it now reports Managed by your organization or possibly Managed by fedoraproject.org under the ⋮ menu in Google Chrome. That link leads to a page that says, “If your Chrome browser is managed, your administrator can set up or restrict certain features, install extensions, monitor activity, and control how you use Chrome.” However, Fedora will never monitor your browser activity or restrict your actions.

Enter chrome://policy in the address bar to see exactly what settings Fedora has enabled in the browser. The AuthNegotiateDelegateWhitelist and AuthServerWhitelist options will be set to *.fedoraproject.org. These are the only changes Fedora makes.
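For reference, Chrome on Linux reads managed policies from JSON files under /etc/opt/chrome/policies/managed/. The policy file Fedora ships amounts to something like the following sketch (the exact file name and contents may differ slightly):

{
  "AuthNegotiateDelegateWhitelist": "*.fedoraproject.org",
  "AuthServerWhitelist": "*.fedoraproject.org"
}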