Fedora Linux provides several variants to meet your needs. You can find an overview of all the Fedora Linux variants in my previous article Introduce the different Fedora Linux editions. This article will go into a little more detail about the Fedora Linux official editions. There are five editions — Fedora Workstation, Fedora Server, Fedora IoT, Fedora CoreOS, and Fedora Silverblue. The Fedora Linux download page currently shows that three of these are official editions and the remaining two are emerging editions. This article will cover all five editions.
Fedora Workstation
If you are a laptop or desktop computer user, then Fedora Workstation is the right operating system for you. Fedora Workstation is very easy to use. You can use it for daily needs such as work, education, hobbies, and more. For example, you can use it to create documents, make presentations, surf the internet, manipulate images, edit videos, and many other things.
This Fedora Linux edition comes with the GNOME Desktop Environment by default. You can work and do activities comfortably using this appearance concept. You can also customize the appearance of this Fedora Workstation according to your preferences, so you will be more comfortable using it. If you are a new Fedora Workstation user, you can read my previous article Things to do after installing Fedora 34 Workstation. Through the article, you will find it easier to start with Fedora Workstation.
Fedora Server
Many companies require their own servers to support their infrastructure. The Fedora Server edition comes with a powerful web-based management interface called Cockpit that has a modern look. Cockpit enables you to easily view and monitor system performance and status.
Fedora Server includes some of the latest technology in the open source world and it is backed by an active community. It is very stable and reliable. However, there is no guarantee that anyone from the Fedora community will be available or able to help if you encounter problems. If you are running mission critical applications and you might require technical support, you might want to consider Red Hat Enterprise Linux instead.
Fedora IoT
Operating systems designed specifically for IoT devices have become popular. Fedora IoT is an operating system created in response to this. Fedora IoT is an immutable operating system that uses OSTree technology with atomic updates. This operating system focuses on security, which is very important for IoT devices. Fedora IoT supports multiple architectures. It also comes with a web-based configuration console so that it can be configured remotely without requiring that a keyboard, mouse or monitor be physically connected to the device.
Fedora CoreOS
Fedora CoreOS is a container-focused operating system. This operating system is used to run applications safely and reliably in any environment. It is designed for clusters but can also be run as a standalone system. This operating system has high compatibility with Linux Container configurations.
Fedora Silverblue
Fedora Silverblue is a variant of Fedora Workstation with a very similar interface. The difference is that Fedora Silverblue is an immutable operating system with a container-centric workflow. This means that each installation is exactly the same as any other installation of the same version. The goal is to make it more stable, less prone to bugs, and easier to test and develop.
Each edition of Fedora Linux has a different purpose. The availability of several editions can help you to get an operating system that suits your needs. The Fedora Linux editions discussed in this article are the operating systems available on the main download page for Fedora Linux. You can find download links and more complete documentation at https://getfedora.org/.
Copr is a build system for anyone in the Fedora community. It hosts thousands of projects for various purposes and audiences. Some of them should never be installed by anyone, some are already being transitioned to the official Fedora Linux repositories, and the rest are somewhere in between. Copr gives you the opportunity to install third-party software that is not available in Fedora Linux repositories, try nightly versions of your dependencies, use patched builds of your favorite tools to support some non-standard use cases, and just experiment freely.
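Installing something from Copr typically takes two commands with the dnf copr plugin. Here is a generic sketch — the owner/project and package names are placeholders, not real projects:
$ sudo dnf copr enable owner/project
$ sudo dnf install some-package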
You may be aware that the DNF team is working on DNF5. There is a change proposal for Fedora Linux 38. The benefit is that every package management tool — including PackageKit and DNFDragora — should use a common libdnf library. If you have an application that handles RPM packages, you should definitely check out this project.
Hare is a systems programming language designed to be simple, stable and robust. Hare uses a static type system, manual memory management, and a minimal runtime. It is well suited to writing operating systems, system tools, compilers, networking software, and other low-level, high-performance tasks. A detailed overview can be found in these slides.
My summary is: Hare is simpler than C. It can be easy. But if you insist on shooting yourself in the foot, Hare will let you do it.
These packages are available for Fedora Linux 35, 36 and Rawhide. They are also available for OpenSUSE Leap and Tumbleweed. To install them, enter these commands:
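The exact repository depends on where the Hare packages are published for your distribution; on Fedora Linux the steps are a sketch along these lines, with a placeholder Copr project name (and assuming the package is simply called hare):
$ sudo dnf copr enable <copr-project>/hare
$ sudo dnf install hare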
As automation expands to cover more aspects of IT, more administrators are learning automation skills and applying them to ease their workload. Automation can ease the burden of repetitive tasks and add a level of conformity to infrastructure. But when IT workers deploy automation, there are common mistakes that can wreak havoc on infrastructures large and small. Five common mistakes are typically seen in automation deployments.
Lack of testing
A beginner’s mistake that is commonly made is not testing automation scripts thoroughly. A simple shell script can have adverse effects on a server due to typos or logic errors. Multiply that mistake by the number of servers in your infrastructure, and you can have a big mess to clean up. Always test your automation scripts before deploying them at scale.
Unexpected server load
The second mistake that frequently occurs is not predicting the load the script may put on other resources. Running a script that downloads a file or installs a package from a repository may be fine when the target is a dozen servers. Scripts are often run against hundreds or thousands of servers, though. This load can bring supporting services to a standstill or crash them entirely. Don’t forget to consider the endpoint impact or to set a reasonable concurrency rate.
Runaway scripts
One use of automation tools is to ensure compliance with standard settings. Automation can make it easy to ensure that every server in a group has exactly the same settings. Problems may arise if a server in that group needs to be altered from that baseline and the administrator is not aware of the compliance standard. Unneeded and unwanted services can be installed and enabled, leading to possible security concerns.
Lack of documentation
A constant duty for administrators should be to document their work. Companies can have frequent new employees in IT departments due to contracts ending or promotions or regular employee turnover. It is also not uncommon for work groups within a company to be siloed from each other. For these reasons it is important to document what automation is in place. Unlike user run scripts, automation may continue long after the person who created it leaves the group. Administrators can find themselves facing strange behaviors in their infrastructure from automation left unchecked.
Lack of experience
The last mistake on the list is when administrators do not know enough about the systems they are automating. Too often admins are hired into positions where they do not have adequate training and no one to learn from. This has been especially relevant since COVID, with companies struggling to fill vacancies. Admins are then forced to deal with infrastructure they didn’t set up and may not fully understand. This can lead to very inefficient scripts that waste resources, or to misconfigured servers.
Conclusion
More and more admins are learning automation to help them in their everyday tasks. As a result, automation is being applied to more areas of technology. Hopefully this list will help prevent new users from making these mistakes and urge seasoned admins to re-evaluate their IT strategies. Automation is meant to ease the burden of repetitive tasks, not cause more work for the end user.
Fedora Silverblue is an operating system for your desktop built on Fedora Linux. It’s excellent for daily use, development, and container-based workflows. It offers numerous advantages such as being able to roll back in case of any problems. If you want to update or rebase to Fedora Linux 36 on your Fedora Silverblue system (these instructions are similar for Fedora Kinoite), this article tells you how. It not only shows you what to do, but also how to revert things if something unforeseen happens.
Prior to actually doing the rebase to Fedora Linux 36, you should apply any pending updates. Enter the following in the terminal:
$ rpm-ostree update
or install updates through GNOME Software and reboot.
Rebasing using GNOME Software
GNOME Software shows you on the Updates screen that there is a new version of Fedora Linux available.
Fedora 36 update available
The first thing you need to do is download the new image, so click on the Download button. This will take some time. When it’s done you will see that the update is ready to install.
Fedora 36 update ready to install
Click on the Restart & Upgrade button. This step will take only a few moments and the computer will be restarted at the end. After the restart you will end up in the new and shiny release of Fedora Linux 36. Easy, isn’t it?
Rebasing using terminal
If you prefer to do everything in a terminal, then this part of the guide is for you.
Rebasing to Fedora Linux 36 using the terminal is easy. First, check if the 36 branch is available:
$ ostree remote refs fedora
You should see the following in the output:
fedora:fedora/36/x86_64/silverblue
If you want to pin the current deployment (it will stay as an option in GRUB until you remove it), you can do so by running:
$ sudo ostree admin pin 0
To remove the pinned deployment use the following command:
$ sudo ostree admin pin --unpin 2
where 2 is the position of the deployment shown in the output of rpm-ostree status.
Next, rebase your system to the Fedora Linux 36 branch.
$ rpm-ostree rebase fedora:fedora/36/x86_64/silverblue
Finally, the last thing to do is restart your computer and boot to Fedora Linux 36.
How to roll back
If anything bad happens—for instance, if you can’t boot to Fedora Linux 36 at all—it’s easy to go back. Pick the previous entry in the GRUB menu at boot (if you don’t see it, try to press ESC during boot), and your system will start in its previous state before switching to Fedora Linux 36. To make this change permanent, use the following command:
$ rpm-ostree rollback
That’s it. Now you know how to rebase Fedora Silverblue to Fedora Linux 36 and roll back. So why not do it today?
Nowadays, the number of devices keeps growing, and modern operating systems must try to support many types of them with every integration and every release. Maintaining a large number of devices is difficult and expensive, and they are also hard to test, especially plug-and-play devices like USB devices.
Therefore, it is necessary to create a mechanism that facilitates the maintenance and testing of old and new USB devices. This is where USB device emulation comes in. A complete framework including a large set of emulated and validated USB devices allows easier integration and release. The area of application is very wide: earlier bug detection even during development, automatic tests, continuous integration, and so on.
How to emulate USB devices
The USB/IP project allows sharing USB devices connected to a local machine so that they can be managed by another machine on the network by means of a TCP/IP connection.
The USB/IP project consists of two parts:
local device support (host) to allow remote access to all the necessary control events and data
remote control that catches all the necessary control events and data and processes them like a normal driver
The procedure is valid for Linux and Windows; here I will focus only on Linux.
The idea behind emulation is to replace the local device support with an application that behaves in the same way. In this way we can emulate devices with software applications that follow the aforementioned USB/IP protocol specification.
In the following sections I will describe how to configure and run the remote support and how to connect to our emulated USB device.
Remote support
Remote support is divided into two parts:
kernel space to control a remote device as if it were local, that is, to be probed by the normal driver.
user space application to configure access to remote devices.
At this point, it is important to remark that the device emulators, after configuration by the user space application, will communicate directly with the kernel space.
Local support has a very similar structure, but the focus of this article is device emulation.
Let’s analyze every part of remote support.
Kernel space
First of all, in order to get the functionality we need to compile the Linux Kernel with the following options:
CONFIG_USBIP_CORE=m
CONFIG_USBIP_VHCI_HCD=m
These options enable the USB/IP virtual host controller driver, which is run on the remote machine.
Normal USB drivers also need to be included because they will be probed and configured in the same way from the virtual host controller drivers.
Besides these, there are other important configuration options:
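The option names come from the kernel's USB/IP Kconfig; the values shown here are the defaults (8 ports per controller, one controller), which match the vhci_hcd/8p entries seen in the lsusb output later on:
CONFIG_USBIP_VHCI_HC_PORTS=8
CONFIG_USBIP_VHCI_NR_HCS=1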
These options define the number of ports per USB/IP virtual host controller and the number of USB/IP virtual host controllers, as if adding physical host controllers. The values above are the defaults when CONFIG_USBIP_VHCI_HCD is enabled; increase them if necessary.
These options and kernel modules are already included in some Linux distributions, like Fedora Linux.
Let’s see an example of available virtual USB buses and ports that we will use later.
Default and real resources in example equipment:
$ lsusb
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
$ lsusb -t
/: Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/15p, 5000M
/: Bus 01.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/15p, 480M
    |__ Port 1: Dev 2, If 0, Class=Human Interface Device, Driver=usbhid, 480M
$
Now, we will load the module vhci-hcd into the system (default configuration for CONFIG_USBIP_VHCI_HC_PORTS and CONFIG_USBIP_VHCI_NR_HCS):
$ sudo modprobe vhci-hcd
$ lsusb
Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 002: ID 0627:0001 Adomax Technology Co., Ltd
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
$ lsusb -t
/: Bus 04.Port 1: Dev 1, Class=root_hub, Driver=vhci_hcd/8p, 5000M
/: Bus 03.Port 1: Dev 1, Class=root_hub, Driver=vhci_hcd/8p, 480M
/: Bus 02.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/15p, 5000M
/: Bus 01.Port 1: Dev 1, Class=root_hub, Driver=xhci_hcd/15p, 480M
    |__ Port 1: Dev 2, If 0, Class=Human Interface Device, Driver=usbhid, 480M
$
The remote USB/IP virtual host controller driver will only use the configured virtualized resources. Of course, emulated devices will work in the same way.
User space
The other necessary part of the USB/IP project is the user space tool usbip, which is used to configure the kernel space support described above on both sides. Again, we only focus on the remote side, since the local side will be represented by the emulator.
That is, the usbip tool configures the USB/IP virtual host controller (TCP client) in kernel space to connect to the device emulator (TCP server), establishing a direct connection between them for the exchange of USB configuration, events, data, and so on.
The tool is independent of the type of device and is able to provide information about available and reserved resources (see more information in the examples below).
The USB/IP virtual host controller needs to be given the local bus-port pair that will be used for remote access. The same applies to emulated devices, but in this case the pair can be anything, because there is no real device and resource reservation is not necessary.
This tool lives in the Linux kernel repository so that it stays fully synchronized with the kernel.
Location of the tool on the Linux Kernel repository: ./tools/usb/usbip
In some distributions, like Fedora Linux, the usbip utility can be installed by means of the usbip package from the repositories. If the usbip utility or a related package cannot be found, follow the instructions in the available README file to compile and install it. A suitable rpm package can also be generated from the usbip-emulator repository:
$ git clone https://github.com/jtornosm/USBIP-Virtual-USB-Device.git
$ cd USBIP-Virtual-USB-Device/usbip
$ make rpm
...
$
How to emulate USB devices
Emulators are implemented in Python and C. I have started with the C development (and will focus on that part), but the same could be done in Python.
For C development, compile the emulation tools from the usbip-emulator repository:
$ git clone https://github.com/jtornosm/USBIP-Virtual-USB-Device.git
$ cd USBIP-Virtual-USB-Device/c
$ make
...
$
All the device emulators supported at this moment will be built:
hid-keyboard
hid-mouse
cdc-acm
hso
cdc-ether
bt
An rpm package (usbip-emulator) can also be generated with:
$ make rpm
...
$
Since these are examples, the Vendor and Product IDs are hardcoded in the code.
The following three examples show how emulation works. We are using the same machine for the emulator and the remote USB/IP side, but they could run on different machines. Besides, we are reserving different resources so that all the devices can be emulated at the same time.
Example 1: hso
From one terminal, let’s emulate the hso device:
(“1-1” is the bus-port pair for the USB device on the local machine. As we are emulating, it could be anything; it only matters because the usbip tool will have to use the same name to request the emulated device.)
(As we saw previously, for this example machine, bus 3 is virtualized)
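The emulator binary from the usbip-emulator repository is started in this terminal with that bus-port pair, and the device is then imported from a second terminal with the usbip tool. The exact emulator invocation depends on the repository; the attach step is a sketch, with the TCP port taken from the detach output shown later in this example:
$ sudo usbip --tcp-port 3241 attach -r 127.0.0.1 -b 1-1
Once attached, the device can be checked as usual: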
$ ip addr show dev hso0
3: hso0: <POINTOPOINT,MULTICAST,NOARP> mtu 1486 qdisc noop state DOWN group default qlen 10
    link/none
$ rfkill list
0: hso-0: Wireless WAN
    Soft blocked: no
    Hard blocked: no
...
$ lsusb
...
Bus 003 Device 002: ID 0af0:6711 Option GlobeTrotter Express 7.2 v2
...
$ lsusb -t
...
/: Bus 03.Port 1: Dev 1, Class=root_hub, Driver=vhci_hcd/8p, 480M
    |__ Port 1: Dev 2, If 0, Class=Vendor Specific Class, Driver=hso, 12M
...
$
In order to release resources:
$ sudo usbip port
Imported USB devices
====================
Port 00: <Port in Use> at Full Speed(12Mbps)
       Option : GlobeTrotter Express 7.2 v2 (0af0:6711)
       3-1 -> usbip://127.0.0.1:3241/1-1
           -> remote bus/dev 001/002
$ sudo usbip detach -p 00
usbip: info: Port 0 is now detached!
$
And we can check that the device is released:
$ ip addr show dev hso0
Device "hso0" does not exist.
$ rfkill list
...
$ lsusb
...
$
After this, we can emulate again or stop the emulated device from the first terminal (i.e. with Ctrl-C).
Example 2: cdc-ether
From one terminal, let’s emulate the cdc-ether device (root permission is required because raw socket needs to bind to specified interface for data plane):
(“1-1” is the bus-port pair for the USB device on the local machine. As we are emulating, it could be anything; it only matters because the usbip tool will have to use the same name to request the emulated device.)
(As we saw previously, for this example machine, bus 3 is virtualized)
$ ip addr show dev eth0
4: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    link/ether 88:00:66:99:5b:aa brd ff:ff:ff:ff:ff:ff
$ sudo ethtool eth0
...
        Link detected: yes
$ lsusb
...
Bus 003 Device 003: ID 0fe6:9900 ICS Advent
...
$ lsusb -t
...
/: Bus 03.Port 1: Dev 1, Class=root_hub, Driver=vhci_hcd/8p, 480M
    |__ Port 2: Dev 3, If 0, Class=Communications, Driver=cdc_ether, 480M
    |__ Port 2: Dev 3, If 1, Class=CDC Data, Driver=cdc_ether, 480M
...
$
For this example, we can also test the data plane.
(IP forwarding is disabled on both sides)
First, we can configure the IP address in the emulated device:
$ sudo ip addr add 10.0.0.1/24 dev eth0
$ ip addr show dev eth0
4: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UNKNOWN group default qlen 1000
    link/ether 88:00:66:99:5b:aa brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.1/24 scope global eth0
       valid_lft forever preferred_lft forever
$
Second, from another machine (real or virtual) directly connected via Ethernet, we can configure a macvlan interface in the same subnet to send/receive traffic (ping, iperf, etc.):
$ sudo ip link add macvlan0 link enp1s0 type macvlan mode bridge
$ sudo ip addr add 10.0.0.2/24 dev macvlan0
$ sudo ip link set macvlan0 up
$ ip addr show dev macvlan0
3: macvlan0@enp1s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether d6:f1:cd:f1:cc:02 brd ff:ff:ff:ff:ff:ff
    inet 10.0.0.2/24 scope global macvlan0
       valid_lft forever preferred_lft forever
    inet6 fe80::d4f1:cdff:fef1:cc02/64 scope link
       valid_lft forever preferred_lft forever
$ ping 10.0.0.1
PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=55.6 ms
64 bytes from 10.0.0.1: icmp_seq=2 ttl=64 time=2.19 ms
64 bytes from 10.0.0.1: icmp_seq=3 ttl=64 time=1.74 ms
64 bytes from 10.0.0.1: icmp_seq=4 ttl=64 time=1.76 ms
64 bytes from 10.0.0.1: icmp_seq=5 ttl=64 time=1.93 ms
64 bytes from 10.0.0.1: icmp_seq=6 ttl=64 time=1.65 ms
...
In order to release resources:
$ sudo usbip port
Imported USB devices
====================
...
Port 01: <Port in Use> at High Speed(480Mbps)
       ICS Advent : unknown product (0fe6:9900)
       3-2 -> usbip://127.0.0.1:3245/1-1
           -> remote bus/dev 001/003
$ sudo usbip detach -p 01
usbip: info: Port 1 is now detached!
$
And we can check that the device is released:
$ ip addr show dev eth0
Device "eth0" does not exist.
$ lsusb
...
$
And of course, traffic from the other machine no longer works:
From 10.0.0.2 icmp_seq=167 Destination Host Unreachable From 10.0.0.2 icmp_seq=168 Destination Host Unreachable From 10.0.0.2 icmp_seq=169 Destination Host Unreachable From 10.0.0.2 icmp_seq=170 Destination Host Unreachable ...
After this, we can emulate again or stop the emulated device from the first terminal (i.e. with Ctrl-C).
Example 3: bt
From one terminal, let’s emulate the Bluetooth device:
(“1-1” is the bus-port pair for the USB device on the local machine. As we are emulating, it could be anything; it only matters because the usbip tool will have to use the same name to request the emulated device.)
(As we saw previously, for this example machine, bus 3 is virtualized)
$ hciconfig -a
hci0:   Type: Primary  Bus: USB
        BD Address: AA:BB:CC:DD:EE:11  ACL MTU: 310:10  SCO MTU: 64:8
        UP RUNNING PSCAN ISCAN INQUIRY
        RX bytes:1451 acl:0 sco:0 events:80 errors:0
        TX bytes:1115 acl:0 sco:0 commands:73 errors:0
        Features: 0xff 0xff 0x8f 0xfe 0xdb 0xff 0x5b 0x87
        Packet type: DM1 DM3 DM5 DH1 DH3 DH5 HV1 HV2 HV3
        Link policy: RSWITCH HOLD SNIFF PARK
        Link mode: SLAVE ACCEPT
        Name: 'BT USB TEST - CSR8510 A10'
        Class: 0x000000
        Service Classes: Unspecified
        Device Class: Miscellaneous,
        HCI Version: 4.0 (0x6)  Revision: 0x22bb
        LMP Version: 3.0 (0x5)  Subversion: 0x22bb
        Manufacturer: Cambridge Silicon Radio (10)
$ rfkill list
...
1: hci0: Bluetooth
        Soft blocked: no
        Hard blocked: no
$ lsusb
...
Bus 003 Device 004: ID 0a12:0001 Cambridge Silicon Radio, Ltd Bluetooth Dongle (HCI mode)
...
$ lsusb -t
...
/: Bus 03.Port 1: Dev 1, Class=root_hub, Driver=vhci_hcd/8p, 480M
    |__ Port 3: Dev 4, If 0, Class=Wireless, Driver=btusb, 12M
    |__ Port 3: Dev 4, If 1, Class=Wireless, Driver=btusb, 12M
...
$
And we can turn the emulated Bluetooth device off and on, detecting several fake Bluetooth devices:
(At this moment, the fake Bluetooth devices are not emulated/simulated, so we cannot set them up)
Turn Bluetooth off
Turn Bluetooth on
In order to release resources:
$ sudo usbip port
Imported USB devices
====================
...
Port 02: <Port in Use> at Full Speed(12Mbps)
       Cambridge Silicon Radio, Ltd : Bluetooth Dongle (HCI mode) (0a12:0001)
       3-3 -> usbip://127.0.0.1:3243/1-1
           -> remote bus/dev 001/002
$ sudo usbip detach -p 02
usbip: info: Port 2 is now detached!
$
And we can check that the device is released:
$ hciconfig
$ rfkill list
...
$ lsusb
...
$
And of course, the device is not detected (just as before emulation):
Bluetooth is not found
After this, we can emulate again or stop the emulated device from the first terminal (i.e. with Ctrl-C).
Emulated vs real USB devices
When the real hardware and/or the final device is not used for testing, we can always feel unsure about the results, and this is the biggest hurdle we have to overcome to verify the correct operation of devices by means of emulation.
So, in order to be confident, emulation must be as close as possible to the real hardware, and to get the most realistic emulation every aspect of the device must be covered (or at least the necessary ones, if they do not depend on other aspects). In fact, for a correct test we must not modify the driver; that is, we must only emulate the physical layer, so that the driver is not able to tell whether the device is real or emulated.
Starting the testing with the real hardware device is a very good idea, to get a reference for building the emulator with the same features. In the case of USB devices, building the device emulator is easier because of the existing remote control procedure, which complies with all the characteristics mentioned above.
Conclusion
USB device emulation is the best way to integrate and test the related features in an efficient, automatic and easy way. But, in order to be confident about the emulation procedure, device emulators need to be previously validated to confirm that they work in the same way as real hardware.
Of course, a USB device emulator is not the same as the real hardware device, but the method described here, thanks to the proven procedure for remote control of the device, is very close to the real scenario and can help a lot to improve our release and testing processes.
Finally, I would like to point out that one of the biggest advantages of using software emulators is that we can trigger specific behaviors, in a simple way, that would be very difficult to reproduce with real hardware, and this can help to find issues and make the software more robust.
The latest release of Fedora Workstation 36 continues the Fedora Project’s ongoing commitment to delivering the latest innovations in the open source world. This article describes some of the notable user-facing changes that appear in this version.
GNOME 42
Fedora Workstation 36 includes the latest version of the GNOME desktop environment. GNOME 42 includes many improvements and new features. Just some of the improvements include:
Significantly improved input handling, resulting in lower input latency and improved responsiveness when the system is under load. This is particularly beneficial for games and graphics applications.
The Wayland session is now the default for those who use Nvidia’s proprietary graphics driver.
A universal dark mode is now available.
A new interface has been added for taking screenshots and screen video recordings.
In addition, many of the core apps have been ported to GTK 4, and the shell features a number of subtle refinements.
Refreshed look and feel
GNOME 42 as featured in Fedora Workstation 36
GNOME Shell features a refreshed look and feel, with rounder and more clearly separated elements throughout. All the symbolic icons have been updated and the top bar is no longer rounded.
Universal dark mode option
In Settings > Appearance, you can now choose a dark mode option which applies a dark theme to all supported applications. In addition, the pre-installed wallpapers now include dark mode variants. Dark themes can help reduce eye-strain when there is low ambient light, can help conserve battery life on devices with OLED displays, and can reduce the risk of burn-in on OLED displays. Plus, it looks cool!
New screenshot interface
Taking screenshots and screen video recordings is now easier than ever
Previously, pressing the Print Screen key simply took a screenshot of the entire screen and saved it to the Pictures folder. If you wanted to customize your screenshots, you had to remember a keyboard shortcut, or manually open the Screenshots app and use that to take the screenshot you wanted. This was inconvenient.
Now, pressing Print Screen presents you with an all-new user interface that allows you to take a screenshot of either your entire screen, just one window, or a rectangular selection. You can also choose whether to hide or show the mouse pointer, and you can also now take a screen video recording from within the new interface.
Core applications
Apps made in GTK 4 + libadwaita feature a distinct visual style
GNOME’s core applications have seen a number of improvements. A number of them have been ported to GTK 4 and use libadwaita, a new widget library that implements GNOME’s Human Interface Guidelines.
Files now includes the ability to sort files by creation date, and includes some visual refinements, such as a tweaked headerbar design and file renaming interface.
The Software app now includes a more informative update interface, and more prominently features GNOME Circle apps.
The Settings app now has a more visually appealing interface matching the visual tweaks present throughout GNOME Shell.
Text Editor replaces Gedit by default. Text Editor is an all-new app built in GTK 4 and libadwaita. You can always reinstall Gedit by searching for it in the Software app.
Wayland support on Nvidia’s proprietary graphics driver
In previous versions, Fedora Workstation defaulted to the X display server when using Nvidia’s proprietary graphics driver – now, Fedora Workstation 36 uses the Wayland session by default when using Nvidia’s proprietary graphics driver.
If you experience issues with the Wayland session, you can always switch back to the Xorg session by clicking the gear icon at the bottom-right corner of the login screen and choosing “GNOME on Xorg”.
Under-the-hood changes throughout Fedora Linux 36
When installing or upgrading packages with DNF or PackageKit, weak dependencies that have been manually removed will no longer be reinstalled. That is to say: if foo is installed and it has bar as a weak dependency, and bar is then removed, bar will not be reinstalled when foo is updated.
The Noto fonts are now used by default for many languages. This provides greater coverage for different character sets. For users who write in the Malayalam script, the new Meera and RIT Rachana fonts are now the default.
systemd messages now include unit names by default rather than just the description, making troubleshooting easier.
Today, I’m excited to share the results of the hard work of thousands of Fedora Project contributors: our latest release, Fedora Linux 36, is here!
By the community, for the community
Normally when I write these announcements, I talk about some of the great technical changes in the release. This time, I wanted to put the focus on the community that makes those changes happen. Fedora isn’t just a group of people toiling away in isolation — we’re friends. In fact, that’s one of our Four Foundations.
One of our newest Fedora Friends, Juan Carlos Araujo said it beautifully in a Fedora Discussion post:
Besides functionality, stability, features, how it works under the hood, and how cutting-edge it is, I think what makes or breaks a distro are those intangibles, like documentation and the community. And Fedora has it all… especially the intangibles.
We’ve worked hard over the years to make Fedora an inclusive and welcoming community. We want to be a place where experienced contributors and newcomers alike can work together. Just like we want Fedora Linux to be a distribution that appeals to both long-time and novice Linux users.
Speaking of Fedora Linux, let’s take a look at some of the highlights this time around. As always, you should make sure your system is fully up-to-date before upgrading from a previous release. This time especially, because we’ve squashed some very important upgrade-related bugs in F34/F35 updates. Your system upgrade to Fedora Linux 36 could fail if those updates aren’t applied first.
Desktop improvements
Fedora Workstation focuses on the desktop, and in particular, it’s geared toward users who want a “just works” Linux operating system experience. As usual, Fedora Workstation features the latest GNOME release: GNOME 42. While it doesn’t completely provide the answer to life, the universe, and everything, GNOME 42 brings a lot of improvements. Many applications have been ported to GTK 4 for improved style and performance. And two new applications come in GNOME 42: Text Editor and Console. They’re aptly named, so you can guess what they do. Text Editor is the new default text editor and Console is available in the repos.
If you use NVIDIA’s proprietary graphics driver, your desktop sessions will now default to using the Wayland protocol. This allows you to take advantage of hardware acceleration while using the modern desktop compositor.
Of course, we produce more than just the Editions. Fedora Spins and Labs target a variety of audiences and use cases, including Fedora Comp Neuro, which provides tools for computational neuroscience, and desktop environments like Fedora LXQt, which provides a lightweight desktop environment. And don’t forget our alternate architectures: ARM AArch64, Power, and S390x.
Sysadmin improvements
Fedora Linux 36 includes the latest release of Ansible. Ansible 5 splits the “engine” into an ansible-core package and collections packages. This makes maintenance easier and allows you to download only the collections you need. See the Ansible 5 Porting Guide to learn how to update your playbooks.
Beginning in Fedora Server 36, Cockpit provides a module for provisioning and ongoing administration of NFS and Samba shares. This allows administrators to manage network file shares through the Cockpit web interface used to configure other server attributes.
Other updates
No matter what variant of Fedora Linux you use, you’re getting the latest the open source world has to offer. Podman 4.0 will be fully released for the first time in Fedora Linux 36. Podman 4.0 has a huge number of changes and a brand new network stack. It also brings backwards-incompatible API changes, so read the upstream documentation carefully.
Following our “First” foundation, we’ve updated key programming language and system library packages, including Ruby 3.1, Golang 1.18 and PHP 8.1.
We’re excited for you to try out the new release! Go to https://getfedora.org/ and download it now. Or if you’re already running Fedora Linux, follow the easy upgrade instructions. For more information on the new features in Fedora Linux 36, see the release notes.
In the unlikely event of a problem…
If you run into a problem, visit our Ask Fedora user-support forum. This includes a category for common issues.
Thank you everyone
Thanks to the thousands of people who contributed to the Fedora Project in this release cycle. We love having you in the Fedora community. Be sure to join us May 13 – 14 for a virtual release party!
The Docs team is experiencing a new burst of energy. As part of this, we have several big improvements to the Fedora Docs site that we want to share.
Searchable docs
For years, readers have asked for search. We have a lot of documentation on the site, but you sometimes struggle to find what you’re looking for. With the new search feature, you can search the entire Fedora Documentation content.
Lunr.js powers the search. This means your browser downloads the index and does the search locally. The advantage is that there are no external dependencies: searches send nothing to a remote server and there is no external Javascript required. The downside is that the index has to be downloaded before search is available. Although we compress the index, if you’re on a slower connection, you may experience delays.
While the search is a major improvement, it’s not perfect. The search tool is not aware of the context of your search and can’t offer “do you mean _ ?” suggestions. Also, because many pages have similar titles, you can’t always tell which page has the information you’re looking for. We’re looking into adding more context to the page titles and working with teams to make titles more useful.
Stable “latest” URL
Many times when you link to a page on Fedora Docs, you don’t care about the version number. For example, if you’re writing a blog post that links to the Installation Guide, you’d rather it go to the Installation Guide for the latest version. If you don’t actively update your links to Fedora Docs, they grow stale over time.
We recently added a stable /latest URL for release docs. For example, https://docs.fedoraproject.org/en-US/fedora/latest/ points to the Fedora Linux 35 documentation. When we release Fedora Linux 36, soon, that URL will point to F36 documentation. You can use this stable URL when you want to target the latest released version and only use specific versions in the URL when the version matters.
Redesigning the site
Over the next few months, we’re working on a two-pronged approach to documentation. First, we want the user documentation to better reflect the changes in the Fedora distribution over the last few years. These days there are significant differences between the Editions in terms of installation and administration, which makes uniform guides difficult to write. In fact, most Editions now have their own installation guide. As a first step, we will turn the current installation guide into a guide that describes where to find the installation guide for the appropriate Edition. It will also include a generic description of Anaconda that Edition documentation can link to or include parts of. We expect to complete this work during the Fedora Linux 37 development cycle. As a next step, we will revise the Administration Guide and Quick Docs.
In addition, the Docs team will be selecting an Outreachy intern to work on a redesign of the site. This will include both the graphical design of the user interface as well as improving the information architecture of the site. We want to make it easier for you to find exactly what you’re looking for.
If you see an issue on any Fedora Docs page, you can click the bug icon on the top of the page to report an issue. Or click the edit icon to submit a correction!
In the good old days, connecting a Linux box to a network was easy. For each of the interface cards connected to a network, the system administrator would drop a configuration file into the /etc directory. That configuration file would describe the addressing configuration for a particular network. On Fedora Linux, the configuration file would actually be a shell script snippet like this:
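A minimal sketch of such an ifcfg snippet (the interface name and addresses here are illustrative, not from a real system):
DEVICE=eth0
BOOTPROTO=none
ONBOOT=yes
IPADDR=192.0.2.10
PREFIX=24
GATEWAY=192.0.2.1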
A shell script executed on startup would read the file and apply the configuration. Simple.
Towards the end of 2004, however, a change was in the air. Quite literally — Wi-Fi was becoming ubiquitous. The portable computers of the day could rapidly connect to new networks and the USB bus allowed even the wired network adapters to come and go while the system was up and running. The network configuration became more dynamic than ever before, rendering the existing network configuration tooling impractical. To the rescue came NetworkManager. On a Fedora Linux system, NetworkManager uses configuration like this:
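Here is a sketch of what that could look like — still an ifcfg file, but one written by NetworkManager (the profile name, device and UUID are illustrative):
TYPE=Ethernet
BOOTPROTO=dhcp
DEFROUTE=yes
NAME=eth0
UUID=<generated-uuid>
DEVICE=eth0
ONBOOT=yes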
Looks familiar? It should. From the beginning, NetworkManager was intended to work with the existing configuration formats. In fact, it ended up with plugins which would seamlessly convert between NetworkManager’s internal configuration model and the distribution’s native format. On Fedora, it would be the aforementioned ifcfg files.
Let’s take a closer look at them.
Ifcfg files
The legacy network service, now part of the network-scripts package, originally defined the ifcfg file format. Along with the package comes a file called sysconfig.txt that, quite helpfully, documents the format.
As NetworkManager gained traction it often found itself in need of expressing a configuration that was not supported by the old-fashioned network service. Given the nature of configuring things with shell scripts, adding new settings is no big deal. The unknown ones are generally just silently ignored. NetworkManager’s idea of what ifcfg files should look like is described in the nm-settings-ifcfg-rh(5) manual.
In general, NetworkManager tries hard to write ifcfg files that work well with the legacy network service. Nevertheless, sometimes it is just not possible. These days, the number of network connection types that NetworkManager supports vastly outnumbers what the legacy network service can configure. A new format is now used to express what the legacy format cannot. This includes VPN connections, broadband modems and more.
Keyfiles
The new format closely resembles NetworkManager’s native configuration model:
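Here is a minimal sketch of a keyfile for a simple wired connection (the profile name and interface are illustrative):
[connection]
id=eth0
type=ethernet
interface-name=eth0

[ipv4]
method=auto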
The actual format should be instantly familiar to everyone familiar with Linux systems. It’s the “ini file” or “keyfile” — a bunch of plain text key-value pairs, much like the ifcfg files use, grouped into sections. The nm-settings-keyfile(5) manual documents the format thoroughly.
The main advantage of using this format is that it closely resembles NetworkManager’s idea of how to express network configuration, used both internally and on the D-Bus API. It’s easier to extend without having to work around the quirks of a mechanism that was designed, without the benefit of foresight, back when the world was young. This means less code, fewer surprises and fewer bugs.
In fact, there’s nothing the ifcfg files can express that the keyfile format can’t. It handles simple wired connections just as well as VPNs or modems.
Migrating to keyfiles
The legacy network service served us well for many years, but its days are now long over. Fedora Linux dropped it many releases ago and without it there is seemingly little reason to use the ifcfg files. That is, for new configurations. While Fedora Linux still supports the ifcfg files, it has defaulted to writing keyfiles for quite some time.
Starting with Fedora Linux 36, the ifcfg support will no longer be present in new installations. If you’re still using ifcfg files, do not worry — the existing systems will keep it on upgrades. Nevertheless, you can still decide to uninstall it and carry your configuration over to keyfiles. Keep on reading to learn how.
If you’re like me, you installed your system years ago and you have a mixture of keyfiles and ifcfg files. Here’s how you can check:
$ nmcli -f TYPE,FILENAME,NAME conn
TYPE FILENAME NAME
ethernet /etc/sysconfig/network-scripts/ifcfg-eth0 eth0
wifi /etc/sysconfig/network-scripts/ifcfg-Guest Guest
wifi /etc/NetworkManager/system-connections/Base48 Base48
vpn /etc/NetworkManager/system-connections/VPN.ovpn My VPN
This example shows a VPN connection that must have always used a keyfile and a Wi-Fi connection presumably created after Fedora Linux switched to writing keyfiles by default. There’s also an Ethernet connection and a Wi-Fi one from back in the day that use the ifcfg plugin. Let’s see how we can convert those to keyfiles.
NetworkManager’s command line utility, nmcli(1), acquired a new connection migrate command that can change the configuration backend used by a connection profile.
It’s a good idea to make a backup of /etc/sysconfig/network-scripts/ifcfg-* files, in case anything goes wrong. Once you have the backup you can try migrating a single connection to a different configuration backend (keyfile by default):
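A sketch of migrating one profile by name (the connection name comes from the example listing above):
$ nmcli connection migrate eth0
You can then run the nmcli -f TYPE,FILENAME,NAME conn command again to confirm that the profile now lives under /etc/NetworkManager/system-connections/.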
Connecting industrial machinery to the internet has given birth to infinite opportunities that range from performance improvements and predictive maintenance to data modelling that can lead to novel solutions and use cases. The possibilities are endless. Connecting machinery on such a scale can test the limits of cloud connectivity, depending on your location and network limitations.
An edge device is any piece of hardware that sits at the boundary between two networks. When initial computation happens on servers at the edge, it speeds up users’ interactions with the cloud. Therefore, adding edge devices provides opportunities to optimize performance, shorten the journey, and lighten the load on your cloud connection.
As amazing as it sounds, managing all of this functionality demands continuous attention from administrators. Having a reliable solution to distribute, deploy, and update systems for edge devices from the outset will help you spend time on things that matter.
In this article, we look at how OSTree is well-positioned for upgrading and updating edge devices with versioned updates of Linux-based operating systems. Furthermore, we’ll explore how Pulp facilitates managing and preparing updates of the OSTree content, as well as making it available to edge devices. Together, they provide a powerful free and open-source solution for administering edge devices.
How does OSTree help manage Edge devices?
If you need to deploy hundreds of operating systems to edge devices, safe in the knowledge that you can easily manage future updates and maintenance, an immutable, image-based operating system built on OSTree is ready for the task.
OSTree functions like git, but for operating system binaries. It has git-like content-addressed repositories. The ability to commit and branch entire root filesystem trees resembles the way you submit changes in git. With OSTree, you build an operating system with pre-installed packages, known as an operating system image. After you build the operating system image, it is possible to track it, sign it, test it, and deploy it. These images function as immutable file system trees. When the time comes to change or update, you simply build a new image and deploy it. By atomically switching between different versions of images, you are completely replacing filesystem trees.
OSTree also has a simple CLI that you can use for managing simple workflows, for example, for switching between different versions of images/filesystem trees.
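For instance, a few read-only operations with the ostree CLI look like the following on an OSTree-based system; the ref name is illustrative and the repository path is the usual system location:
$ ostree refs --repo=/ostree/repo
$ ostree log --repo=/ostree/repo fedora/stable/x86_64/iot
$ ostree admin status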
Where do Fedora-IoT Images feature?
As a standalone tool, the base OSTree CLI is not the most feature-rich utility for managing repository content. To make life easier, in the following demo, we will use rpm-ostree. rpm-ostree is a hybrid image/package system that combines the standard OSTree technology as a base image format and accepts RPM on both the client and server-side.
rpm-ostree integrates with Fedora IoT. In comparison to other ecosystems, instead of installing packages via DNF, you install packages with rpm-ostree. After rebooting, all changes are applied as a new version of the image.
You can also upgrade or install a new Fedora IoT image with the rpm-ostree utility.
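A minimal sketch of the day-to-day workflow on a Fedora IoT device (the package name is just an example):
$ sudo rpm-ostree install nano
$ sudo systemctl reboot
$ sudo rpm-ostree upgrade
The install command layers the package on top of the current image, and the upgrade command pulls and deploys the next version of the image; both take effect on the next boot.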
Where and how does Pulp come into this?
Pulp is a platform that handles content management workflows. Using Pulp, you can sync packages from remote repositories such as an RPM server, PyPI, Docker Hub, Ansible Galaxy, and many more. You can host and modify synced packages in repositories inside the Pulp server. You can publish repositories that contain packages available for deployment to production environments.
In our scenario, Pulp provides a platform for storing particular versions of OSTree content, promoting approved content through the content management lifecycle, for example from dev to test, and from test to prod. Pulp also provides a method for publishing content that is consumed by edge devices. Using Pulp, you can pull the latest packages, test, and publish only when safe to do so. Pulp ensures the safety, security, and repeatability of your content supply chain.
The following diagram provides a simplified overview of Pulp. On the left are shown different content types that are mirrored into Pulp from remote sources. These repositories are then served, for instance, to different CI/CD or production environments.
A simplified overview of Pulp. The content is mirrored from remote repositories and made available to different types of environments.
Pulp creates a new repository version automatically when updating or removing packages in a repository. You can distribute each repository version independently.
Pulp has a plugin-based architecture, which means that you must add a plugin for each content type you want to use. For managing OSTree content, you need the OSTree plugin. You can then mirror content from a remote repository, import content from a local tarball, and modify content within a Pulp repository while preserving the integrity of the original content. You can move commits and refs from one repository to another or delete them. Pulp ensures that you are safe to experiment while your production environment remains pinned to a particular version.
Putting it all together
In this section, let’s look at how to build an image with an OSTree commit.
Building a Customized Fedora-IoT Image
We start by booting a new virtual machine (VM) that will have an installed Fedora-IoT OS. For the purposes of this example, it is best to have the same version of the OS installed as the running edge devices have.
All commands in this section are executed on the main admin VM (Fedora IoT 35 OS). On this admin VM, we will build the images that we will then distribute to the edge devices.
Before you begin:
First, ensure that the VM is accessible via SSH. To test, enter the following command from within the target OS:
$ systemctl is-active sshd
Next, ensure that the following tools for composing operating system images are installed:
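Since the admin VM runs Fedora IoT, the tools are layered with rpm-ostree. The package names below are the usual ones for osbuild-composer and its command line client, but treat them as an assumption for your release:
$ sudo rpm-ostree install osbuild-composer composer-cli
After the reboot described next, you may also need to enable the osbuild-composer.socket unit so that composer-cli can talk to the service.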
Now, apply the installed packages by rebooting the system.
In this example, the nano editor package will be installed on all edge devices. We need to build an image containing a commit with the package.
Create a blueprint file that describes what changes you want to make to the image as shown here:
$ cat install-nano.toml
name = "nano-commit"
description = "Installing nano"
version = "0.0.1"

[[packages]]
name = "nano"
version = "*"
Push this blueprint to the osbuild-composer utility, which is a tool for composing operating system images. composer-cli communicates with osbuild-composer through the CLI:
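A sketch of the push and compose steps; the iot-commit image type name is an assumption and may differ depending on your osbuild-composer version:
$ composer-cli blueprints push install-nano.toml
$ composer-cli compose start nano-commit iot-commit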
The composer will use resources available in your current OS (such as a default operating system version).
Regularly check the status of the build:
$ composer-cli compose status
When the build finishes, download the image:
$ composer-cli compose image ${IMAGE_UUID}
The downloaded image is basically an OSTree repository packed into a tarball. When you extract the archived content, you will notice that one ref is referencing the checksum of a commit. You can find it inside the refs/heads/ directory.
Publishing the Customized Image with Pulp
All commands shown in this section are executed on the main admin VM (Fedora IoT 35 OS).
Before you begin:
Ensure that you have installed Pulp and the Pulp CLI for managing OSTree repositories:
Now configure a proxy server or SSH port forwarding to enable network communication between the VM and Pulp. Ensure that you can ping the Pulp server from the VM.
First, create a new OSTree repository:
$ pulp ostree repository create --name fedora-iot
The following command will import the tarball created in the previous section into Pulp:
Distributing the Customized Image to an Edge Device
The Edge device can be another VM or a real device running Fedora IoT.
All commands shown in this section are executed on an Edge device (Fedora IoT 35 OS).
Before you begin:
Configure a proxy server or SSH port forwarding to enable network communication between an Edge device and Pulp. Ensure that you can ping the Pulp server from the Edge device.
Ensure that the Edge device is accessible with SSH:
$ systemctl is-active sshd
The nano package should NOT come pre-installed with the official bare Fedora IoT 35 image. Verify that by attempting to run nano inside your terminal.
In Fedora IoT, updates are retrieved from the URL defined in /etc/ostree/remotes.d/fedora-iot.conf. This file can be modified manually or by adding a new remote repository. Learn more at Adding and Removing Remote Repositories.
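As a sketch, adding a remote that points at content served by Pulp could look like the following; the remote name and URL layout are assumptions for illustration:
$ sudo ostree remote add --no-gpg-verify pulp-fedora-iot http://<pulp-server>/<distribution-base-path>/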
You can automate the upgrade procedure with an upgrade policy that will be configured at the beginning of deployment. This is done by writing a kickstart file that will boot up an edge device into a headless state. However, for demonstrative purposes, let’s act like a villain and update the aforementioned configuration file manually to have the following content: