With the recent release of Fedora 30, Fedora 28 officially enters End Of Life (EOL) status effective May 28, 2019. This impacts any systems still on Fedora 28. If you’re not sure what that means to you, read more below.
At this point, packages in the Fedora 28 repositories no longer receive security, bugfix, or enhancement updates. Furthermore, the community adds no new packages to the Fedora 28 collection starting at End of Life. Essentially, the Fedora 28 release will not change again, meaning users no longer receive the normal benefits of this leading-edge operating system.
There’s an easy, free way to keep those benefits. If you’re still running an End of Life version such as Fedora 28, now is the perfect time to upgrade to Fedora 29 or to Fedora 30. Upgrading gives you access to all the community-provided software in Fedora.
Looking back at Fedora 28
Fedora 28 was released on May 1, 2018. As part of their commitment to users, Fedora community members released over 9,700 updates.
This release featured, among many other improvements and upgrades:
GNOME 3.28
Easier options for third-party repositories
Automatic updates for the Fedora Atomic Host
The new Modular repository, allowing you to select from different versions of software for your system
Of course, the Project also offered numerous alternative spins of Fedora, and support for multiple architectures.
About the Fedora release cycle
The Fedora Project offers updates for a Fedora release until a month after the second subsequent version releases. For example, updates for Fedora 29 continue until one month after the release of Fedora 31. Fedora 30 continues to be supported up until one month after the release of Fedora 32.
The Fedora Project wiki contains more detailed information about the entire Fedora Release Life Cycle. The lifecycle includes milestones from development to release, and the post-release support period.
For some people, using GNOME Shell as a traditional desktop manager may be frustrating, since it often requires more mouse actions. In fact, GNOME Shell is designed to be driven efficiently from the keyboard. Learn how to be more efficient with GNOME Shell with these 5 ways to use the keyboard instead of the mouse.
GNOME activities overview
The activities overview can be easily opened using the Super key. (The Super key usually has a logo on it.) This is really useful when it comes to starting an application. For example, it’s easy to start the Firefox web browser with the following key sequence: Super + f i r + Enter.
Message tray
In GNOME, notifications are available in the message tray. This is also the place where the calendar and world clocks are available. To open the message tray using the keyboard use the Super+m shortcut. To close the message tray simply use the same shortcut again.
Managing workspaces in GNOME
GNOME Shell uses dynamic workspaces, meaning it creates additional workspaces as they are needed. A great way to be more productive using GNOME is to use one workspace per application or per dedicated activity, and then use the keyboard to navigate between these workspaces.
Let’s look at a practical example. To open a Terminal in the current workspace press the following keys: Super + t e r + Enter. Then, to open a new workspace press Super + PgDn. Open Firefox (Super + f i r + Enter). To come back to the terminal, use Super + PgUp.
Managing an application window
Using the keyboard, it is also easy to manage the size of an application window. Minimizing, maximizing and moving the application to the left or the right of the screen can be done with only a few keystrokes. Use Super+🠝 to maximize, Super+🠟 to minimize, and Super+🠜 and Super+🠞 to move the window left and right.
Multiple windows from the same application
Using the activities overview to start an application is very efficient. But trying to open a new window from an application already running only results in focusing on the open window. To create a new window, instead of simply hitting Enter to start the application, use Ctrl+Enter.
So for example, to start a second instance of the terminal using the application overview, Super + t e r + (Ctrl+Enter).
Then you can use Super+` to switch between windows of the same application.
As shown, GNOME Shell is a really powerful desktop environment when controlled from the keyboard. Learning these shortcuts and training your muscle memory to avoid the mouse will give you a better user experience, and make you more productive when using GNOME. For other useful shortcuts, check out this page on the GNOME wiki.
Packit (https://packit.dev/) is a CLI tool that helps you auto-package your upstream projects into the Fedora operating system. But what does it really mean?
As a developer, you might want to add or update your package in Fedora. If you’ve done it in the past, you know it’s no easy task. If you haven’t, let me reiterate: it’s no easy task.
And this is exactly where packit can help: with just one configuration file in your upstream repository, packit will automatically package your software into Fedora and update it when you update your source code upstream.
Furthermore, packit can synchronize downstream changes to a SPEC file back into the upstream repository. This could be useful if the SPEC file of your package is changed in Fedora repositories and you would like to synchronize it into your upstream project.
Packit also provides a way to build an SRPM package based on an upstream repository checkout, which can be used for building RPM packages in COPR.
Last but not least, packit provides a status command. This command provides information about the upstream and downstream repositories, like pull requests, releases, and more.
Packit also provides two other commands: build and create-update.
The packit build command performs a production build of your project in koji, the Fedora build system. You can select the Fedora version you want to build against using the --dist-git-branch option. The packit create-update command creates a Bodhi update for a specific branch, again using the --dist-git-branch option.
Installation
You can install packit on Fedora using dnf:
sudo dnf install -y packit
Configuration
For a demonstration use case, I have selected the upstream repository of colin (https://github.com/user-cont/colin). Colin is a tool to check generic rules and best practices for containers, Dockerfiles, and container images.
First of all, clone the colin git repository:
$ git clone https://github.com/user-cont/colin.git
$ cd colin
Packit expects to run in the root of your git repository, so a prerequisite for using packit is a working directory containing a git checkout of your upstream project.
Before running any packit command, you need to take several preparatory steps. These steps are mandatory for filing a PR into the upstream or downstream repositories, and for gaining access to the Fedora dist-git repositories.
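With that in place, ask packit to update the Fedora package to the latest upstream release. The output below was produced by the propose-update subcommand; the subcommand name is an assumption based on the packit CLI at the time of writing:

$ packit propose-update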
INFO: Running 'anitya' versioneer
Version in upstream registries is '0.3.1'.
Version in spec file is '0.3.0'.
WARNING  Version in spec file is outdated
Picking version of the latest release from the upstream registry.
Checking out upstream version 0.3.1
Using 'master' dist-git branch
Copying /home/vagrant/colin/colin.spec to /tmp/tmptfwr123c/colin.spec.
Archive colin-0.3.0.tar.gz found in lookaside cache (skipping upload).
INFO: Downloading file from URL https://files.pythonhosted.org/packages/source/c/colin/colin-0.3.0.tar.gz
100%[=============================>] 3.18M  eta 00:00:00
Downloaded archive: '/tmp/tmptfwr123c/colin-0.3.0.tar.gz'
About to upload to lookaside cache
won't be doing kinit, no credentials provided
PR created: https://src.fedoraproject.org/rpms/colin/pull-request/14
Once the command finishes, you can see a PR in the Fedora Pagure instance which is based on the latest upstream release. Once you review it, it can be merged.
Sync downstream changes back to the upstream repository
Another use case is to sync downstream changes into the upstream project repository.
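Assuming the sync-from-downstream subcommand from the same era of the packit CLI, a run looks like this:

$ packit sync-from-downstream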
upstream active branch master
using "master" dist-git branch
Copying /tmp/tmplvxqtvbb/colin.spec to /home/vagrant/colin/colin.spec.
Creating remote fork-ssh with URL git@github.com:phracek/colin.git.
Pushing to remote fork-ssh using branch master-downstream-sync.
PR created: https://github.com/user-cont/colin/pull/229
As soon as packit finishes, you can see the latest changes taken from the Fedora dist-git repository in the upstream repository. This can be useful, for example, when Release Engineering performs mass rebuilds and updates your SPEC file in the Fedora dist-git repository.
Get the status of your upstream project
If you are a developer, you may want to get all the information about the latest releases, tags, pull requests, etc. from the upstream and the downstream repository. Packit provides the status command for this purpose.
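Run it from the root of your upstream repository checkout:

$ packit status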
The last packit use case is to generate an SRPM package based on a git checkout of your upstream project. The packit command for SRPM generation is srpm.
$ packit srpm
Version in spec file is '0.3.1.37.g00bb80e'.
SRPM: /home/phracek/work/colin/colin-0.3.1.37.g00bb80e-1.fc29.src.rpm
Packit as a service
In the summer, the people behind packit would like to introduce packit as a service (https://github.com/packit-service/packit-service). In this case, the packit GitHub application will be installed into the upstream repository and packit will perform all the actions automatically, based on the events it receives from GitHub or fedmsg.
Telnet is a client-server protocol that connects to a remote server through TCP over port 23. Telnet does not encrypt data, so it is considered insecure: passwords can be easily sniffed because data is sent in the clear. However, there are still legacy systems that need to use it. This is where stunnel comes to the rescue.
Stunnel is designed to add SSL encryption to programs that have insecure connection protocols. This article shows you how to use it, with telnet as an example.
Server Installation
Install stunnel along with the telnet server and client using sudo:
sudo dnf -y install stunnel telnet-server telnet
Add a firewall rule, entering your password when prompted:
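For example, to open the tunnel port used later in this article (port 450 is carried over from the configuration below):

sudo firewall-cmd --add-port=450/tcp --permanent
sudo firewall-cmd --reload

Next, generate a self-signed SSL certificate and RSA key. The file names stunnel.crt and stunnel.key are assumptions; a typical openssl invocation looks like this:

openssl req -new -x509 -days 365 -nodes -out stunnel.crt -keyout stunnel.key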
You will be prompted for the following information one line at a time. When asked for Common Name you must enter the correct host name or IP address, but everything else you can skip through by hitting the Enter key.
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [XX]:
State or Province Name (full name) []:
Locality Name (eg, city) [Default City]:
Organization Name (eg, company) [Default Company Ltd]:
Organizational Unit Name (eg, section) []:
Common Name (eg, your name or your server's hostname) []:
Email Address []:
Merge the RSA key and SSL certificate into a single .pem file, and copy that to the SSL certificate directory:
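A sketch, assuming the file names from the certificate step above:

cat stunnel.crt stunnel.key > stunnel.pem
sudo cp stunnel.pem /etc/pki/tls/certs/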
Now it’s time to define the service and the ports to use for encrypting your connection. Choose a port that is not already in use. This example uses port 450 for tunneling telnet. Edit or create the /etc/stunnel/telnet.conf file:
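A minimal sketch of the server-side configuration, matching the options described next:

cert = /etc/pki/tls/certs/stunnel.pem
[telnet]
accept = 450
connect = 23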
The accept option is the port the server will listen to for incoming telnet requests. The connect option is the internal port the telnet server listens to.
Next, make a copy of the systemd unit file that allows you to override the packaged version:
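Assuming the packaged unit file lives in the standard location:

sudo cp /usr/lib/systemd/system/stunnel.service /etc/systemd/system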
A note on the systemctl command is in order. Systemd and the stunnel package provide an additional template unit file by default. The template lets you drop multiple configuration files for stunnel into /etc/stunnel, and use the filename to start the service. For instance, if you had a foobar.conf file, you could start that instance of stunnel with systemctl start stunnel@foobar.service, without having to write any unit files yourself.
If you want, you can set this stunnel template service to start on boot:
systemctl enable stunnel@telnet.service
Client Installation
This part of the article assumes you are logged in as a normal user (with sudo privileges) on the client system. Install stunnel and the telnet client:
sudo dnf -y install stunnel telnet
Copy the stunnel.pem file from the remote server to your client /etc/pki/tls/certs directory. In this example, the IP address of the remote telnet server is 192.168.1.143.
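One way to do that, assuming you have SSH access to the server:

sudo scp root@192.168.1.143:/etc/pki/tls/certs/stunnel.pem /etc/pki/tls/certs/

Then edit or create the client’s /etc/stunnel/telnet.conf. A minimal sketch, matching the options described next:

client = yes
[telnet]
accept = 450
connect = 192.168.1.143:450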
The accept option is the port that will be used for telnet sessions. The connect option is the IP address of your remote server and the port it’s listening on.
Next, enable and start stunnel:
systemctl enable stunnel@telnet.service --now
Test your connection. Since you have a connection established, you will telnet to localhost instead of the hostname or IP address of the remote telnet server:
[user@client ~]$ telnet localhost 450
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
Connected to localhost.
Escape character is '^]'.

Kernel 5.0.9-301.fc30.x86_64 on an x86_64 (0)
server login: myuser
Password: XXXXXXX
Last login: Sun May 5 14:28:22 from localhost
[myuser@server ~]$
In addition to providing an operating system, the Fedora Project provides numerous services for users and developers. Services such as Ask Fedora, the Fedora Project Wiki and the Fedora Project Mailing Lists provide users with valuable resources for learning how to best take advantage of Fedora. For developers of Fedora, there are many other services such as dist-git, Pagure, Bodhi, COPR and Bugzilla that are involved with the packaging and release process.
These services are available for use with a free account from the Fedora Accounts System (FAS). This account is the passport to all things Fedora! This article covers how to get set up with an account and configure Fedora Workstation for browser single sign-on.
Signing up for a Fedora account
To create a FAS account, browse to the account creation page. Here, you will fill out your basic identity data:
Account creation page
Once you enter your data, an email with a temporary password will be sent to the address you provided. Log in with the temporary password, then pick a strong new password and use it.
Password reset page
Next, the account details page appears. If you intend to become a contributor to the Fedora Project, you should complete the Contributor Agreement now. Otherwise, you are done and your account can now be used to log into the various Fedora services.
Account details page
Configuring Fedora Workstation for single sign-on
Now that you have your account, you can sign into any of the Fedora Project services. Most of these services support single sign-on (SSO), allowing you to sign in without re-entering your username and password.
Fedora Workstation provides an easy workflow to add SSO credentials. The GNOME Online Accounts tool helps you quickly set up your system to access many popular services. To access it, go to the Settings menu.
GNOME Online Accounts
Click on the ⋮ button and select Enterprise Login (Kerberos), which provides a single text prompt for a principal. Enter fasname@FEDORAPROJECT.ORG (being sure to capitalize FEDORAPROJECT.ORG) and click Connect.
Kerberos principal dialog
GNOME prompts you to enter your password for FAS and gives you the option to save it. If you choose to save it, it is stored in GNOME Keyring and unlocked automatically at login. If you choose not to save it, you will need to open GNOME Online Accounts and enter your password each time you want to enable single sign-on.
Single sign-on with a web browser
Today, Fedora Workstation supports single sign-on to the Fedora Project services “out of the box” in two web browsers: Mozilla Firefox and Google Chrome. Due to a bug in Chromium, single sign-on does not currently work properly in many cases, so it has not been enabled for Chromium in Fedora.
To sign on to a service, browse to it and select the “login” option for that service. For most Fedora services, this is the only thing you need to do and the browser handles the rest. Some services such as the Fedora Mailing Lists and Bugzilla support multiple login types. For them, you need to select the “Fedora” or “Fedora Account System” login type.
That’s it! You can now log into any of the Fedora Project services without re-entering your password.
Special consideration for Google Chrome
In order to enable single sign-on out of the box for Google Chrome, Fedora needed to take advantage of certain features in Chrome that are intended for use in “managed” environments. A managed environment is traditionally a corporate or other organization that sets certain security and/or monitoring requirements on the browser.
Recently, Google Chrome changed its behavior and it now reports “Managed by your organization” under the ⋮ menu in Google Chrome. That link leads to a page that states “If your Chrome browser is managed, your administrator can set up or restrict certain features, install extensions, monitor activity, and control how you use Chrome.” Fedora will never monitor your browser activity or restrict your actions.
Enter chrome://policy in the address bar to see exactly what settings Fedora has enabled in the browser. The AuthNegotiateDelegateWhitelist and AuthServerWhitelist options will be set to *.fedoraproject.org. These are the only changes Fedora makes.
Linux containers have become a popular topic, and making sure a container image is not bigger than it needs to be is considered good practice. This article gives some tips on how to create smaller Fedora container images.
microdnf
Fedora’s DNF is written in Python, and it’s designed to be extensible with its wide range of plugins. But Fedora has an alternative base container image which uses a smaller package manager called microdnf, written in C. To use this minimal image in a Dockerfile, the FROM line should look like this:
FROM registry.fedoraproject.org/fedora-minimal:30
This is an important saving if your image does not need typical DNF dependencies such as Python, for example if you are making a NodeJS image.
Install and Clean up in one layer
To save space, it’s important to remove repository metadata using dnf clean all or its microdnf equivalent, microdnf clean all. You should not do this in two steps, because that would store those files in one container image layer and then mark them for deletion in another layer. To do it properly, perform the installation and the cleanup in one step, like this:
FROM registry.fedoraproject.org/fedora-minimal:30
RUN microdnf install nodejs && microdnf clean all
Modularity with microdnf
Modularity is a way to offer different versions of a stack to choose from. For example, you might want the non-LTS NodeJS version 11 for one project, the old LTS NodeJS version 8 for another, and the latest LTS NodeJS version 10 for a third. You specify the stream using a colon:
# dnf module list
# dnf module install nodejs:8
The dnf module install command implies two commands: one that enables the stream, and one that installs nodejs from it.
# dnf module enable nodejs:8
# dnf install nodejs
Although microdnf does not offer any command related to modularity, it is possible to enable a module with a configuration file, and libdnf (which microdnf uses) seems to support modularity streams. The file looks like this:
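This mirrors what the Dockerfile below writes to /etc/dnf/modules.d/nodejs.module:

[nodejs]
name=nodejs
stream=8
profiles=
state=enabled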
A full Dockerfile using modularity with microdnf looks like this:
FROM registry.fedoraproject.org/fedora-minimal:30
RUN \
  echo -e "[nodejs]\nname=nodejs\nstream=8\nprofiles=\nstate=enabled\n" > /etc/dnf/modules.d/nodejs.module && \
  microdnf install nodejs zopfli findutils busybox && \
  microdnf clean all
Multi-staged builds
In many cases you might have lots of build-time dependencies that are not needed to run the software, for example when building a Go binary, which statically links its dependencies. Multi-stage builds are an efficient way to separate the application build from the application runtime.
For example, the Dockerfile below builds confd, a Go application.
# building container
FROM registry.fedoraproject.org/fedora-minimal AS build
RUN mkdir /go && microdnf install golang && microdnf clean all
WORKDIR /go
RUN export GOPATH=/go; CGO_ENABLED=0 go get github.com/kelseyhightower/confd
FROM registry.fedoraproject.org/fedora-minimal
WORKDIR /
COPY --from=build /go/bin/confd /usr/local/bin
CMD ["confd"]
The multi-stage build is done by adding AS after the first FROM instruction, adding a second FROM with a base container image, and then using the COPY --from= instruction to copy content from the build container into the second container.
This Dockerfile can then be built and run using podman.
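For example (the image tag myconfd is an arbitrary choice):

$ podman build -t myconfd .
$ podman run --rm -it myconfd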
Business documents often require special handling. Enter Electronic Data Interchange, or EDI. EDI is more than simply transferring files using email or HTTP (or FTP), because these are documents like orders and invoices. When you send an invoice, you want to be sure that:
1. It goes to the right destination, and is not intercepted by competitors.
2. Your invoice cannot be forged by a 3rd party.
3. Your customer can’t claim in court that they never got the invoice.
The first two goals can be accomplished by HTTPS or email with S/MIME, and in some situations, a simple HTTPS POST to a web API is sufficient. What EDI adds is the last part.
This article does not cover the messy topic of formats for the files exchanged. Even when using a standardized format like ANSI or EDIFACT, it is ultimately up to the business partners. It is not uncommon for business partners to use an ad-hoc CSV file format. This article shows you how to configure Fedora to send and receive in an EDI setup.
Centralized EDI
The traditional solution is to use a Value Added Network, or VAN. The VAN is a central hub that transfers documents between its customers. Most importantly, it keeps a secure record of the documents exchanged that can be used as evidence in disputes. The VAN can use different transfer protocols for each of its customers.
AS Protocols and MDN
The AS protocols are a specification for adding a digital signature with optional encryption to an electronic document. What it adds over HTTPS or S/MIME is the Message Disposition Notification, or MDN. The MDN is a signed and dated response that says, in essence, “We got your invoice.” It uses a secure hash to identify the specific document received. This addresses point #3 without involving a third party.
The AS2 protocol uses HTTP or HTTPS for transport. Other AS protocols target FTP and SMTP. AS2 is used by companies big and small to avoid depending on (and paying) a VAN.
OpenAS2
OpenAS2 is an open source Java implementation of the AS2 protocol. It has been available since Fedora 28, and is installed with:
$ sudo dnf install openas2
$ cd /etc/openas2
Configuration is done with a text editor, and the config files are in XML. The first order of business before starting OpenAS2 is to change the factory passwords.
Edit /etc/openas2/config.xml and search for ChangeMe. Change those passwords. The default password on the certificate store is testas2, but that doesn’t matter much as anyone who can read the certificate store can read config.xml and get the password.
What to share with AS2 partners
There are 3 things you will exchange with an AS2 peer.
AS2 ID
Don’t bother looking up the official AS2 standard for legal AS2 IDs. While OpenAS2 implements the standard, your partners will likely be using a proprietary product which doesn’t. While AS2 allows much longer IDs, many implementations break with more than 16 characters. Using otherwise legal AS2 ID chars like ‘:’ that can appear as path separators on a proprietary OS is also a problem. Restrict your AS2 ID to upper and lower case alpha, digits, and ‘_’ with no more than 16 characters.
SSL certificate
For real use, you will want to generate a certificate with SHA256 and RSA. OpenAS2 ships with two factory certs to play with. Don’t use these for anything real, obviously. The certificate file is in PKCS12 format. Java ships with keytool which can maintain your PKCS12 “keystore,” as Java calls it. This article skips using openssl to generate keys and certificates. Simply note that sudo keytool -list -keystore as2_certs.p12 will list the two factory practice certs.
AS2 URL
This is an HTTP URL that will access your OpenAS2 instance. HTTPS is also supported, but is redundant. To use it you have to uncomment the https module configuration in config.xml, and supply a certificate signed by a public CA. This requires another article and is entirely unnecessary here.
By default, OpenAS2 listens on 10080 for HTTP and 10443 for HTTPS. OpenAS2 can talk to itself, so it ships with two partnerships using http://localhost:10080 as the AS2 URL. If you don’t find this a convincing demo, and can install a second instance (on a VM, for instance), you can use private IPs for the AS2 URLs. Or install Cjdns to get IPv6 mesh addresses that can be used anywhere, resulting in AS2 URLs like http://[fcbf:fc54:e597:7354:8250:2b2e:95e6:d6ba]:10080.
Most businesses will also want a list of IPs to add to their firewall. This is actually bad practice. An AS2 server has the same security risk as a web server, meaning you should isolate it in a VM or container. Also, the difficulty of keeping mutual lists of IPs up to date grows with the list of partners. The AS2 server rejects requests not signed by a configured partner.
OpenAS2 Partners
With that in mind, open partnerships.xml in your editor. At the top is a list of “partners.” Each partner has a name (referenced by the partnerships below as “sender” or “receiver”), AS2 ID, certificate, and email. You need a partner definition for yourself and those you exchange documents with. You can define multiple partners for yourself. OpenAS2 ships with two partners, OpenAS2A and OpenAS2B, which you’ll use to send a test document.
OpenAS2 Partnerships
Next is a list of “partnerships,” one for each direction. Each partnership configuration includes the sender, receiver, and the AS2 URL used to send the documents. By default, partnerships use synchronous MDN. The MDN is returned on the same HTTP transaction. You could uncomment the as2_receipt_option for asynchronous MDN, which is sent some time later. Use synchronous MDN whenever possible, as tracking pending MDNs adds complexity to your application.
The other partnership options select encryption, signature hash, and other protocol options. A fully implemented AS2 receiver can handle any combination of options, but AS2 partners may have incomplete implementations or policy requirements. For example, DES3 is a comparatively weak encryption algorithm, and may not be acceptable. It is the default because it is almost universally implemented.
If you went to the trouble to set up a second physical or virtual machine for this test, designate one as OpenAS2A and the other as OpenAS2B. Modify the as2_url on the OpenAS2A-to-OpenAS2B partnership to use the IP (or hostname) of OpenAS2B, and vice versa for the OpenAS2B-to-OpenAS2A partnership. Unless they are using the FedoraWorkstation firewall profile, on both machines you’ll need:
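A sketch, opening only the HTTP port from the defaults above:

$ sudo firewall-cmd --permanent --add-port=10080/tcp
$ sudo firewall-cmd --reload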
Now start the openas2 service (on both machines if needed):
$ sudo systemctl start openas2
Resetting the MDN password
Starting the service for the first time initializes the MDN log database with the factory password, not the one you changed it to. This is a packaging bug to be fixed in the next release. To avoid frustration, here’s how to change the h2 database password:
$ sudo systemctl stop openas2
$ cat >h2passwd <<'DONE'
#!/bin/bash
AS2DIR="/var/lib/openas2"
java -cp "$AS2DIR"/lib/h2* org.h2.tools.Shell \
  -url jdbc:h2:"$AS2DIR"/db/openas2 \
  -user sa -password "$1" <<EOF
alter user sa set password '$2';
exit
EOF
DONE
$ sudo sh h2passwd ChangeMe yournewpasswordsetabove
$ sudo systemctl start openas2
Testing the setup
With that out of the way, let’s send a document. Assuming you are on OpenAS2A machine:
$ cat >testdoc <<'DONE'
This is not a real EDI format, but is nevertheless a document.
DONE
$ sudo chown openas2 testdoc
$ sudo mv testdoc /var/spool/openas2/toOpenAS2B
$ sudo journalctl -f -u openas2
... log output of sending file, Control-C to stop following log
^C
OpenAS2 does not send a document until it is writable by the openas2 user or group. As a consequence, your actual business application should copy the document into place (or generate it there) and only then change the group or permissions to send it on its way. This avoids sending a partial document.
Now, on the OpenAS2B machine, /var/spool/openas2/OpenAS2A_OID-OpenAS2B_OID/inbox shows the message received. That should get you started!
The kernel team is working on final integration for kernel 5.1. This version was just recently released, and will arrive soon in Fedora. This version has many security fixes included. As a result, the Fedora kernel and QA teams have organized a test week from Monday, May 13, 2019 through Saturday, May 18, 2019. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.
How does a test week work?
A test day/week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.
To contribute, you only need to be able to do the following things:
Download test materials, which include some large files
Read and follow directions step by step
The wiki page for the kernel test day has a lot of good information on what and how to test. After you’ve done some testing, you can log your results in the test day web application. If you’re available on or around the day of the event, please do some testing and report your results.
Happy testing, and we hope to see you on test day.
This article includes some example commands to show you how to get a rough estimate of hard drive and RAID array performance using the dd command. Accurate measurements would have to take into account things like write amplification and system call overhead, which this guide does not. For a tool that might give more accurate results, you might want to consider using hdparm.
To factor out performance issues related to the file system, these examples show how to test the performance of your drives and arrays at the block level by reading and writing directly to/from their block devices. WARNING: The write tests will destroy any data on the block devices against which they are run. Do not run them against any device that contains data you want to keep!
Four tests
Below are four example dd commands that can be used to test the performance of a block device:
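The exact commands are not preserved here; reconstructed from the parameter notes that follow, they take this general form (MY_DISK is set as in the examples below):

Read test:
# dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache

Write test:
# dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct

Concurrent read test:
# (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache &); (dd if=$MY_DISK of=/dev/null bs=1MiB count=200 iflag=nocache skip=200 &)

Concurrent write test:
# (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct &); (dd if=/dev/zero of=$MY_DISK bs=1MiB count=200 oflag=direct skip=200 &)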
– The iflag=nocache and oflag=direct parameters are important when performing the read and write tests (respectively) because without them the dd command will sometimes show the resulting speed of transferring the data to/from RAM rather than the hard drive.
– The values for the bs and count parameters are somewhat arbitrary and what I have chosen should be large enough to provide a decent average in most cases for current hardware.
– The null and zero devices are used for the destination and source (respectively) in the read and write tests because they are fast enough that they will not be the limiting factor in the performance tests.
– The skip=200 parameter on the second dd command in the concurrent read and write tests is to ensure that the two copies of dd are operating on different areas of the hard drive.
16 examples
Below are demonstrations showing the results of running each of the above four tests against each of the following four block devices:
MY_DISK=/dev/sda2 (used in examples 1-X)
MY_DISK=/dev/sdb2 (used in examples 2-X)
MY_DISK=/dev/md/stripped (used in examples 3-X)
MY_DISK=/dev/md/mirrored (used in examples 4-X)
A video demonstration of these tests being run on a PC is provided at the end of this guide.
Begin by putting your computer into rescue mode to reduce the chances that disk I/O from background services might randomly affect your test results. WARNING: This will shutdown all non-essential programs and services. Be sure to save your work before running these commands. You will need to know your root password to get into rescue mode. The passwd command, when run as the root user, will prompt you to (re)set your root account password.
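The exact preparation commands are not preserved here; a sketch of entering rescue mode and silencing background disk writers (swap and journald, which the “Restore” section below puts back) might look like this, with the journald backup path being an assumption:

$ sudo systemctl rescue
$ sudo swapoff -a
$ sudo cp /etc/systemd/journald.conf /etc/systemd/journald.conf.bak
$ sudo sed -i 's/^#\?Storage=.*/Storage=none/' /etc/systemd/journald.conf
$ sudo systemctl restart systemd-journald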
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.42875 s, 61.2 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.52614 s, 59.5 MB/s

200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.52808 s, 59.4 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.57736 s, 58.6 MB/s

200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.7841 s, 55.4 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 3.81475 s, 55.0 MB/s

200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.31025 s, 160 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.80016 s, 116 MB/s

200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.65026 s, 127 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.81323 s, 116 MB/s

200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67171 s, 125 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 1.67685 s, 125 MB/s

200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 4.09666 s, 51.2 MB/s
200+0 records in
200+0 records out
209715200 bytes (210 MB, 200 MiB) copied, 4.1067 s, 51.1 MB/s
Restore your swap device and journald configuration
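A sketch that undoes the preparation above (assuming the same backup file name):

$ sudo swapon -a
$ sudo mv /etc/systemd/journald.conf.bak /etc/systemd/journald.conf
$ sudo systemctl restart systemd-journald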
Examples 1-1, 1-2, 2-1, and 2-2 show that each of my drives read and write at about 125 MB/s.
Examples 1-3, 1-4, 2-3, and 2-4 show that when two reads or two writes are done in parallel on the same drive, each process gets about half the drive’s bandwidth (60 MB/s).
The 3-x examples show the performance benefit of putting the two drives together in a RAID0 (data striping) array. The numbers, in all cases, show that the RAID0 array performs about twice as fast as either drive on its own. The trade-off is that you are twice as likely to lose everything, because each drive contains only half the data. A three-drive array would perform three times as fast as a single drive (all drives being equal), but it would be thrice as likely to suffer a catastrophic failure.
The 4-x examples show that the performance of the RAID1 (data mirroring) array is similar to that of a single disk, except when multiple processes are reading concurrently (example 4-3). In that case, the performance of the RAID1 array is similar to that of the RAID0 array. This means you will see a performance benefit with RAID1, but only when processes read concurrently: for example, when a process accesses a large number of files in the background while you use a web browser or email client in the foreground. The main benefit of RAID1 is that your data is unlikely to be lost if a drive fails.
If the above tests aren’t performing as you expect, you might have a bad or failing drive. Most modern hard drives have built-in Self-Monitoring, Analysis and Reporting Technology (SMART). If your drive supports it, the smartctl command can be used to query your hard drive for its internal statistics:
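For example, to print a drive’s full SMART report (adjust the device name to match your drive):

$ sudo smartctl -a /dev/sda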
Another way that you might be able to tune your PC for better performance is by changing your I/O scheduler. Linux systems support several I/O schedulers and the current default for Fedora systems is the multiqueue variant of the deadline scheduler. The default performs very well overall and scales extremely well for large servers with many processors and large disk arrays. There are, however, a few more specialized schedulers that might perform better under certain conditions.
To view which I/O scheduler your drives are using, issue the following command:
$ for i in /sys/block/sd?/queue/scheduler; do echo "$i: $(<$i)"; done
You can change the scheduler for a drive by writing the name of the desired scheduler to the /sys/block/<device name>/queue/scheduler file:
# echo bfq > /sys/block/sda/queue/scheduler
You can make your changes permanent by creating a udev rule for your drive. The following example shows how to create a udev rule that will set all rotational drives to use the BFQ I/O scheduler:
# cat << END > /etc/udev/rules.d/60-ioscheduler-rotational.rules
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"
END
# cat << END > /etc/udev/rules.d/60-ioscheduler-solid-state.rules
ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="none"
END
Changing your I/O scheduler won’t affect the raw throughput of your devices, but it might make your PC seem more responsive by prioritizing the bandwidth for the foreground tasks over the background tasks or by eliminating unnecessary block reordering.
If you’ve been reading the Community blog, you’ll already know: AskFedora has moved to Discourse! Read on for more information about this exciting platform.
Discourse? Why Discourse?
The new AskFedora is a Discourse instance hosted by Discourse, similar to discussion.fedoraproject.org. However, where discussion.fedoraproject.org is meant for development discussion within the community, AskFedora is meant for end-user troubleshooting.
The Discourse platform focuses on conversations. Not only can you ask questions and receive answers, you can have complete dialogues with others. This is especially fitting since troubleshooting includes lots of bits that are neither questions nor answers. Instead, there are lots of suggestions, ideas, thoughts, comments, musings, none of which necessarily are the one true answer, but all of which are required steps that together lead us to the solution.
Apart from this fresh take on discussions, Discourse comes with a full set of features that make interacting with each other very easy.
Unlike the previous Askbot setup, where you could log in using various social media services, you will need a Fedora Account to use the new Discourse-based instance. This decision was made mainly to combat the spam and security issues previously encountered with those social media login services. Luckily, creating a Fedora Account is very easy!
Choose a username, then enter your name, a valid e-mail address, and a security question.
Do the “captcha” to confirm that you are indeed a human, and confirm that you are older than 13 years of age.
That’s it! You now have a Fedora account.
Get started!
If you are using the platform for the first time, you should start with the “New users! Start here!” category. Here, we’ve put short summaries on how to use the platform effectively. This includes information on how to use Discourse, its many features that make it a great platform, notes on how to ask and respond to queries, subscribing and unsubscribing from categories, and lots more.
For the convenience of the global Fedora community, these summaries are available in all the languages that the community supports. So, please do take a minute to go over these introductory posts.
Discuss, learn, teach, have fun!
Please log in, ask and discuss your queries, and help each other out. Suggestions and feedback are always welcome. You can post these in the “Site feedback” category.
As a last note, please do remember to “be excellent to each other.” The Fedora Code of Conduct applies to all of us!
Acknowledgements
The Fedora community does everything together, and many volunteers joined forces and gave their resources to make this possible. We are most grateful to the Askbot developers who have hosted AskFedora until now, the Discourse team for hosting it going forward, all the community members who helped set it up, and everyone who helps keep the Fedora community ticking along!