
Matthew Arnold: Why I switched to Fedora

To a veteran user of other distributions, Fedora can be a challenge. Many things are not where you expect them to be. The default LVM volume allocations are a bit tricky. And packages, including the kernel, are upgraded frequently. So why switch after years of using other distributions?

In my case, for a variety of technical and political reasons, Fedora was the best option if I wanted to continue using Linux as my daily driver. If you are making the transition from another distribution, here are some observations and tips to get you started.

Firm foundations

In Fedora you will find a community just as fiercely dedicated to its users and free software as Debian, as fanatical about polish and design as anyone in Ubuntu, and as passionate about learning and discovery as users of Arch or Slackware. Flowing under it all you will find a welcoming community dedicated to technical excellence. The form may change, but underneath all the trappings of systemd, dnf, rpm, and other differences, you will find a thriving, healthy, and growing community of people who have gathered together to make something awesome. Welcome to Fedora, and I hope you stay awhile.

The best way to get to know the Fedora community is to explore it for yourself. I hope a future article will highlight some of the more interesting aspects of Fedora for newcomers. Below are a few tips that I have put together to help you find your way around a new Fedora installation.

Install and explore

Installation proceeds as you would expect, but be aware that you may want to adjust the LVM volume allocations during the install process or shortly afterwards; otherwise you might unexpectedly run low on space in a key place. Btrfs is also a supported option that is worth a look if you have lots of small disks.

Freedom matters

As stated above, Fedora has a software freedom commitment similar in spirit to that of Debian. This means that you should be able to give Fedora to anyone, anywhere, without violating intellectual property laws. Any software that is either not licensed in a way Fedora finds acceptable, or that carries US patent encumbrances, can instead be found in the rpmfusion.org repository.

After the install your next concern is undoubtedly configuring things and installing new packages. Fedora’s command-line package manager is dnf. It works as you would expect.
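
For example, a few everyday operations look like this (tmux here is just a stand-in for any package you might want):

$ dnf search tmux
$ sudo dnf install tmux
$ sudo dnf upgrade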

Note also that since rpm uses file-based dependency tracking instead of the package-based dependency tracking used by almost all other package managers, there are very few traditional metapackages. There are, however, package groups. To get a list of package groups, the command is:

$ dnf group list
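
To install one of the listed groups, something like the following should work (the group name is only an example):

$ sudo dnf group install "Development Tools"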

To get a list of all installed packages on the system, the command is:

$ rpm -qa

All rpm commands are easily filterable using traditional Unix tools, so you should have no trouble adapting your workflow to the new environment. All the information gathered with the commands below can also be obtained through dnf. For information gathering, I prefer the rpm command because it presents information in a way that is easily parseable by commands like grep. But if you are making changes to the system, it is easier and safer to use dnf.
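
For example, to list every installed kernel-related package:

$ rpm -qa | grep kernel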

To get a package’s version, description, and other metainformation the command is:

$ rpm -qi <packagename>

To list the contents of an installed package the command is:

$ rpm -ql <packagename>

One way in which rpm is easier to use than dpkg or the Slackware package tools is that rpm stores changelog information for each package in the package manager database itself, so it is very easy to diagnose whether an update might have broken or changed something unexpectedly. This command is:

$ rpm -q --changes <packagename>

On the kernel

Perhaps one of the most exciting differences between Fedora and other projects, for newcomers at least, is Fedora’s policy on the kernel. Fedora’s policy is to align the distribution’s kernel package life cycle with the upstream mainline kernel life cycle. This means that every Fedora release will have multiple major kernel versions during its lifetime.

This offers several advantages for both users and developers. Primarily, Fedora users are among the first to receive all of the latest drivers, security fixes, new features, etc.

If your installation does not use out-of-tree modules or custom patches, this should not be much of a concern to you. However, if you rely on a kernel module like zfs, for example, rebuilding the filesystem module every 2-3 months can get tedious and error prone after a while. This problem only compounds if you depend upon custom patches for your system to work correctly. There is good news and bad news on this issue.

The good news is that Fedora's process for building a custom kernel is well documented.

The bad news is, as with all things kernel related in all projects, going the custom route means you're on your own in terms of support. The 2-3 month lifecycle means you'll be building modules and kernels far more often than you are used to. This may be a deal breaker for some. But even this offers an advantage to the discerning or adventurous user: being encouraged to rebase your custom kernel setup every two to three months will give you far greater insight into what is going on upstream in mainline Linux and in the various out-of-tree projects you rely on.

Conclusion

Hopefully these tips will get you started exploring and configuring your new Fedora system. Once you have done that, I urge you to explore the community. Like any other free software project of Fedora's age and size, there are a plethora of communication channels available. You should read the code of conduct and then head over to the communication page on the wiki to get started. As with the distribution itself, for all the differences in culture you will find that much remains the same.


Real-time noise suppression for video conferencing

With people doing video conferencing all day, good audio has recently become much more important. The best option is obviously a proper audio studio. Unfortunately, this is not something you will always have and you might need to make do with a much simpler setup.

In such situations, a noise reduction filter that keeps your voice but filters out ambient noises (street noise, keyboard, …) can be very helpful. In this article, we will take a look at how to integrate such a filter into PulseAudio so that it can easily be used in all applications with no additional requirements on their part.

Example of switching on noise reduction

The Idea

We set up PulseAudio for live noise reduction using a LADSPA filter.

This creates a new PulseAudio source which can be used as a virtual microphone. Other applications will not even realize that they are not dealing with physical devices and you can select it as if you had an additional microphone connected.

Terminology

Before we start, it is good to know the following two PulseAudio terms to better understand what we are doing:

  • source – represents a source from which audio can be obtained, such as a microphone
  • sink – represents a consumer of audio, such as a speaker

Each PulseAudio sink also has a source called a monitor which can be used to get the audio put into that sink. For example, you could have audio playing on your headphones while using the monitor of your headphone device to record that output.
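
As a quick illustration, you could record whatever is playing on a sink through its monitor source (the device name is an example; use pactl list sinks short to find yours):

$ parecord --device=alsa_output.usb-0c76_USB_Headphone_Set-00.analog-stereo.monitor capture.wav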

Installation

While PulseAudio is usually pre-installed, we need to get the LADSPA filter for noise reduction. You can build and install the filter manually, but it is much easier to install the filter via Fedora Copr:

sudo dnf copr enable -y lkiesow/noise-suppression-for-voice
sudo dnf install -y ladspa-realtime-noise-suppression-plugin

Note that the Copr projects are not maintained and quality-controlled by Fedora directly.

Enable Noise Reduction Filter

First, you need to identify the name of the device you want to apply the noise reduction to. In this example, we’ll use the RODE NT-USB microphone as input.

$ pactl list sources short
0 alsa_input.usb-RODE_Microphones_RODE_NT-USB-00.iec958-stereo …
1 alsa_output.usb-0c76_USB_Headphone_Set-00.analog-stereo.monitor …

Next, we create a new PulseAudio sink, the filter, and a loopback between the microphone and the filter. That way, the output from the microphone is used as input for the noise reduction filter. The output from this filter will then be available via the null sink monitor.

To visualize this, here is the path the audio will travel from the microphone to, for example, a browser:

mic → loopback → ladspa filter → null sink [monitor] → browser

While this sounds complicated, it is set up with just a few simple commands:

pacmd load-module module-null-sink \
    sink_name=mic_denoised_out
pacmd load-module module-ladspa-sink \
    sink_name=mic_raw_in \
    sink_master=mic_denoised_out \
    label=noise_suppressor_stereo \
    plugin=librnnoise_ladspa \
    control=50
pacmd load-module module-loopback \
    source=alsa_input.usb-RODE_Microphones_RODE_NT-USB-00.iec958-stereo \
    sink=mic_raw_in \
    channels=2

That’s it. You should now be able to select the new device.

New recording devices in pavucontrol
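
You can also verify the setup from the command line by listing the sources again; the monitor of the new null sink should show up (the index and details will differ on your system):

$ pactl list sources short
…
2 mic_denoised_out.monitor …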

Chromium

Unfortunately, browsers based on Chromium hide monitor devices by default. This means that we cannot select the newly created noise-reduction device in the browser. One workaround is to select another device first, then use pavucontrol to assign the noise-reduction device afterward.

But if you do this on a regular basis, you can work around the issue by using the remap-source module to convert the null sink monitor into a regular PulseAudio source. The module is actually meant for remapping audio channels – e.g. swapping the left and right channels of stereo audio – but we can ignore these additional capabilities and simply create a new source similar to the monitor:

pacmd load-module module-remap-source \
    source_name=denoised \
    master=mic_denoised_out.monitor \
    channels=2

The remapped device delivers audio identical to the original, so assigning it with PulseAudio makes no difference. But this device does now show up in Chromium:

Remapped monitor device in Chrome

Improvements

While the guide above covers all the basics and gives you a working setup, there are a few things you can improve. The commands above should generally work, but you might need to experiment with the following suggestions.

Latency

By default, the loopback module will introduce a slight audio latency. You can hear this by running an echo test:

gst-launch-1.0 pulsesrc ! pulsesink

You might be able to reduce this latency by using the latency_msec option when loading the loopback module:

pacmd load-module module-loopback \
    latency_msec=1 \
    source=alsa_input.usb-RODE_Microphones_RODE_NT-USB-00.iec958-stereo \
    sink=mic_raw_in \
    channels=2

Voice Threshold

The noise reduction library provides controls for a voice threshold. The filter will return silence if the probability of the sound being voice is lower than this threshold. In other words, the higher you set this value, the more aggressive the filter becomes.

You can pass a different threshold to the filter by supplying it as the control argument when the ladspa-sink module is loaded:

pacmd load-module module-ladspa-sink \
    sink_name=mic_raw_in \
    sink_master=mic_denoised_out \
    label=noise_suppressor_stereo \
    plugin=librnnoise_ladspa \
    control=95

Mono vs Stereo

The example above will work with stereo audio. When working with a simple microphone, you may want to use a mono signal instead.

For switching to mono, use the following values instead when loading the different modules:

  • label=noise_suppressor_mono – when loading the ladspa-sink module
  • channels=1 – when loading the loopback and remap-source modules

Persistence

When you use the pacmd command for the setup, the settings are not persistent and will disappear if PulseAudio is restarted. You can add these commands to your PulseAudio configuration file if you want them to be persistent. For that, edit ~/.config/pulse/default.pa and add your commands like this:

.include /etc/pulse/default.pa

load-module module-null-sink sink_name=mic_denoised_out
load-module module-ladspa-sink …
…

Limitations

If you listen to the example above, you will notice that the filter reliably reduces background noise. But unfortunately, depending on the situation, it can also cause a loss in voice quality.

The following example shows the results with some street noise. Activating the filter reliably removes the noise, but in this example, the voice quality noticeably drops as well:

Noise reduction of constant street noise

In conclusion, this filter can help if you find yourself in less than ideal audio scenarios. It is also very effective if you are not the main speaker in a video conference and you do not want to constantly mute yourself.

Still, good audio equipment and a quiet environment will always be better.

Have fun.


SCP user’s migration guide to rsync

As part of the 8.0 pre-release announcement, the OpenSSH project stated that they consider the scp protocol outdated, inflexible, and not readily fixed. They then go on to recommend the use of sftp or rsync for file transfer instead.

Many users grew up on the scp command, however, and so are not familiar with rsync. Additionally, rsync can do much more than just copy files, which can give a beginner the impression that it's complicated and opaque, especially since the scp flags broadly map directly onto the cp flags while the rsync flags do not.

This article will provide an introduction and transition guide for anyone familiar with scp. Let’s jump into the most common scenarios: Copying Files and Copying Directories.

Copying files

For copying a single file, the scp and rsync commands are effectively equivalent. Let’s say you need to ship foo.txt to your home directory on a server named server.

$ scp foo.txt me@server:/home/me/

The equivalent rsync command requires only that you type rsync instead of scp:

$ rsync foo.txt me@server:/home/me/

Copying directories

For copying directories, things diverge quite a bit, which probably explains why rsync is seen as more complex than scp. If you want to copy the directory bar to server, the corresponding scp command looks exactly like the cp command except for specifying the ssh information:

$ scp -r bar/ me@server:/home/me/

With rsync, there are more considerations, as it’s a more powerful tool. First, let’s look at the simplest form:

$ rsync -r bar/ me@server:/home/me/

Looks simple right? For the simple case of a directory that contains only directories and regular files, this will work. However, rsync cares a lot about sending files exactly as they are on the host system. Let’s create a slightly more complex, but not uncommon, example.

# Create a multi-level directory structure
$ mkdir -p bar/baz
# Create a file at the root directory
$ touch bar/foo.txt
# Now create a symlink which points back up to this file
$ cd bar/baz
$ ln -s ../foo.txt link.txt
# Return to our original location
$ cd -

We now have a directory tree that looks like the following:

bar
├── baz
│   └── link.txt -> ../foo.txt
└── foo.txt

1 directory, 2 files

If we try the commands from above to copy bar, we’ll notice very different (and surprising) results. First, let’s give scp a go:

$ scp -r bar/ me@server:/home/me/

If you ssh into your server and look at the directory tree of bar you’ll notice an important and subtle difference from your host system:

bar
├── baz
│   └── link.txt
└── foo.txt

1 directory, 2 files

Note that link.txt is no longer a symlink. It is now a full-blown copy of foo.txt. This might be surprising behavior if you’re used to cp. If you did try to copy the bar directory using cp -r, you would get a new directory with the exact symlinks that bar had. Now if we try the same rsync command from before we’ll get a warning:

$ rsync -r bar/ me@server:/home/me/
skipping non-regular file "bar/baz/link.txt"

Rsync has warned us that it found a non-regular file and is skipping it. Because you didn't tell it to copy symlinks, it's ignoring them. Rsync has an extensive manual section titled "SYMBOLIC LINKS" that explains all of the possible behavior options available to you. For our example, we need to add the --links flag.

$ rsync -r --links bar/ me@server:/home/me/

On the remote server we see that the symlink was copied over as a symlink. Note that this is different from how scp copied the symlink.

bar/
├── baz
│   └── link.txt -> ../foo.txt
└── foo.txt

1 directory, 2 files

To save some typing and take advantage of more file-preserving options, use the --archive (-a for short) flag whenever copying a directory. The archive flag does what most people expect, as it enables recursive copy, symlink copy, and many other options.

$ rsync -a bar/ me@server:/home/me/

The rsync man page has in-depth explanations of what the archive flag enables if you’re curious.

Caveats

There is one caveat, however, to using rsync: it's much easier to specify a non-standard ssh port with scp than with rsync. If server were listening for SSH connections on port 8022, for instance, the commands would look like this:

$ scp -P 8022 foo.txt me@server:/home/me/

With rsync, you have to specify the “remote shell” command to use. This defaults to ssh. You do so using the -e flag.

$ rsync -e 'ssh -p 8022' foo.txt me@server:/home/me/

Rsync does use your ssh config, however, so if you connect to this server frequently, you can add the following snippet to your ~/.ssh/config file. Then you no longer need to specify the port for the rsync or ssh commands!

Host server
    Port 8022

Alternatively, if every server you connect to runs on the same non-standard port, you can configure the RSYNC_RSH environment variable.
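
For example, setting the variable in your shell profile makes every rsync invocation use the custom port (port number taken from the example above):

$ export RSYNC_RSH='ssh -p 8022'
$ rsync foo.txt me@server:/home/me/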

Why else should you switch to rsync?

Now that we’ve covered the everyday use cases and caveats for switching from scp to rsync, let’s take some time to explore why you probably want to use rsync on its own merits. Many people have made the switch to rsync long before now on these merits alone.

In-flight compression

If you have a slow or otherwise limited network connection between you and your server, rsync can spend more CPU cycles to save network bandwidth. It does this by compressing data before sending it. Compression can be enabled with the -z flag.
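
Combined with the archive flag from earlier, a compressed directory transfer looks like this:

$ rsync -az bar/ me@server:/home/me/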

Delta transfers

Rsync also only copies a file if the target file is different from the source file. This works recursively through directories. For instance, if you took our final bar example above and re-ran that rsync command multiple times, it would do no work after the initial transfer. For this feature alone, using rsync even for local copies that you know you will repeat, such as backing up to a USB drive, is worth it, as it can save a lot of time with large data sets.
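
For instance, a repeatable local backup to a USB drive could be as simple as this (the mount path is illustrative):

$ rsync -a ~/Documents/ /run/media/me/usbdrive/Documents/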

Syncing

As the name implies, rsync can do more than just copy data. So far, we've only demonstrated how to copy files with rsync. If you instead want rsync to make the target directory look like your source directory, add the --delete flag. The delete flag makes rsync copy files from the source directory which don't exist in the target directory, and then remove files in the target directory which do not exist in the source directory. The result is a target directory identical to the source directory. By contrast, scp will only ever add files to the target directory.
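
Continuing the example from above, a full synchronization of bar would look like this:

$ rsync -a --delete bar/ me@server:/home/me/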

Conclusion

For simple use cases, rsync is not significantly more complicated than the venerable scp tool. The only significant difference is the use of -a instead of -r for recursive copying of directories. However, as we saw, rsync's -a flag behaves more like cp's -r flag than scp's -r flag does.

Hopefully, with these new commands, you can speed up your file transfer workflow!


Fedora Classroom Session: Git 101 with Pagure

The Fedora Classroom is a project that helps people by spreading knowledge on subjects related to Fedora. If you would like to propose a session, feel free to open a ticket here with the tag classroom. If you're interested in taking a proposed session, kindly let us know; once you take it, you will be awarded the Sensei Badge as a token of appreciation. Recordings from the previous sessions can be found here.

We’re back with another awesome classroom on Git 101 with Pagure led by Akashdeep Dhar (t0xic0der).

About the session

In short, the Git 101 with Pagure session will be a guide for newcomers on how to get started with Git and with Pagure, the git forge used by the Fedora community. After finishing the session you will have the knowledge to work with Git and Pagure and make your first contributions to the Fedora Project.

When and where

The Classroom session will be held on July 17th at 17:00 UTC. Here's a link to see what time that is in your timezone. The session will be streamed on the Fedora Project's YouTube channel.

Topics covered in the session

  • Version Control Systems
  • Why Git?
  • VCS Hosting Sites
  • Fedora Pagure
  • Exploring Pagure
  • Git Fundamentals

About the instructor

Akashdeep Dhar is a cybersecurity enthusiast with keen interests in networking, cloud computing, and operating systems. He is currently in the final year of a computer science bachelor's degree with a cybersecurity minor. He has over five years of experience using GNU/Linux systems and is new to the Fedora community, with contributions made so far in infrastructure, classroom, and documentation.

If you miss the session, the recording will also be uploaded to the Fedora Project's YouTube channel.

We hope you can attend and enjoy this experience from some of the awesome people who work in the Fedora Project. We look forward to seeing you in the Classroom session.


Photograph used in feature image is San Simeon School House by Anita Ritenour, CC-BY 2.0.


Automating Network Devices with Ansible

Ansible is a great automation tool for system and network engineers. With Ansible we can automate anything from a small network up to a large-scale enterprise network. I have been using Ansible to automate both Aruba and Cisco switches from my Fedora-powered laptops for a couple of years. This article covers the requirements and walks through executing a couple of playbooks.

Configuring Ansible

If Ansible is not installed, it can be installed using the command below:

$ sudo dnf -y install ansible

Once installed, create a folder in your home directory (or a directory of your preference) and copy the Ansible configuration file. For this demonstration, I will be using the following:

$ mkdir -pv /home/$USER/network_automation
$ sudo cp -v /etc/ansible/ansible.cfg /home/$USER/network_automation
$ cd /home/$USER/network_automation
$ sudo chown $USER:$USER ansible.cfg && chmod 0600 ansible.cfg

To prevent lengthy commands from failing, edit ansible.cfg and append the following lines. We must add a persistent_connection section and set the desired command_timeout in seconds, as demonstrated below. A use case where this is useful is performing a backup of a network device that has a lengthy configuration.

$ vim ansible.cfg
[persistent_connection]
command_timeout = 300
connection_timeout = 30

Requirements

If SELinux is enabled, you will need to install the SELinux bindings, which are required when using the copy module.

# Install SELinux bindings
dnf -y install python3-libselinux python3-libsemanage

Creating the inventory

The inventory holds the names of the network assets; groupings of the assets are placed in square brackets []. Below is a sample inventory:

[site_a]
Core_A ansible_host=192.168.122.200
Distro_A ansible_host=192.168.122.201
Distro_B ansible_host=192.168.122.202

Group vars can be used to set common variables, for example credentials, the network operating system, and so on. The Ansible documentation on inventory provides additional details.
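
For example, variables shared by every device in site_a could be declared in a vars section of the inventory (the values are purely illustrative):

[site_a:vars]
ansible_network_os=ios
ansible_user=admin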

Playbook

Playbooks are "Ansible's configuration, deployment, and orchestration language. They can describe a policy you want your remote systems to enforce, or a set of steps in a general IT process," as described in the Ansible Playbook documentation.

Read Operations

Let us create a simple playbook to run a show command to read the configuration on a few switches.

 
  1 ---
  2 - name: Basic Playbook
  3   hosts: site_a
  4   connection: local
  5
  6   tasks:
  7   - name: Get Interface Brief
  8     ios_command:
  9       commands:
 10         - show ip interface brief | e una
 11     register: interfaces
 12
 13   - name: Print results
 14     debug:
 15       msg: "{{ interfaces.stdout[0] }}"
Without Debug

With Debug

The above images show the differences without and with the debug module respectively.

Let’s break the playbook into three blocks, starting with lines 1 to 4.

  • The three dashes/hyphens start the YAML document.
  • The hosts key defines the hosts or host groups; multiple groups are comma-separated.
  • connection defines the method used to connect to the network devices. Another option is network_cli (the recommended method), which will be used later in this article. See IOS Platform Options for more details.

Lines 6 to 11 start the tasks; we will be using ios_command and ios_config. This play executes the show command show ip interface brief | e una and saves the output from the command into the interfaces variable with the register key.

Lines 13 to 15: by default, when you execute a show command you will not see the output. While the output is not used during automation, it is very useful for debugging; therefore, the debug module is used here.

The video below shows the execution of the playbook. There are a couple of ways you can execute the playbook:

  • Passing arguments on the command line; for example, include -u <username> -k to prompt for the remote user credentials:

ansible-playbook -i inventory show_demo.yaml -u admin -k

  • Including the credentials in the host or group vars:

ansible-playbook -i inventory show_demo.yaml

Never store passwords in plain text. We recommend using SSH keys to authenticate SSH connections. Ansible supports ssh-agent to manage your SSH keys. If you must use passwords to authenticate SSH connections, we recommend encrypting them with Ansible Vault (see Using Vault in Playbooks).


If we want to save the output to a file, we will use the copy module as shown in the playbook below. In addition to using the copy module, we will include the backup_dir variable to specify the directory path.

 
---
- name: Get System Information
  hosts: site_a
  connection: network_cli
  gather_facts: no
 
  vars:
    backup_dir: /home/eramirez/dev/ansible/fedora_magazine
 
  tasks:
  - name: get system interfaces
    ios_command:
      commands:
        - show ip int br | e una
    register: interface
   
  - name: Save result to disk
    copy:
      content: "{{ interface.stdout[0] }}"
      dest: "{{ backup_dir }}/{{ inventory_hostname }}.txt"

To demonstrate the use of variables in the inventory, we will use plain text. This method must not be used in production.

 
[site_a]
Core_A ansible_host=192.168.122.200
Distro_A ansible_host=192.168.122.201
Distro_B ansible_host=192.168.122.202
[all:vars]
ansible_connection=network_cli
ansible_network_os=ios
ansible_user=admin
ansible_password=fedora
ansible_become=yes
ansible_become_password=yes
ansible_become_method=enable

Write Operations

In the previous section, we saw that we could get information from the network devices; in this section, we will write (add/modify) configuration on these network devices. To make changes to the network devices, we will be using the ios_config module.

Let us create a playbook to configure a couple of interfaces on all of the network devices in site_a. We will first take a backup of the current configuration of all devices in site_a. Lastly, we will save the configuration.

 
---
- name: Get System Information
  hosts: site_a
  connection: network_cli
  gather_facts: no
 
  vars:
    backup_dir: /home/eramirez/dev/ansible/fedora_magazine
 
  tasks:
  - name: Backup configs
    ios_config:
      backup: yes
      backup_options:
        filename: "{{ inventory_hostname }}_running_cfg.txt"
        dir_path: "{{ backup_dir }}"
   
  - name: get system interfaces
    ios_config:
      lines:
        - description Raspberry Pi
        - switchport mode access
        - switchport access vlan 100
        - spanning-tree portfast
        - logging event link-status
        - no shutdown
      parents: "{{ item }}"
    with_items:
      - interface FastEthernet1/12
      - interface FastEthernet1/13
     
  - name: Save switch configuration
    ios_config:
      save_when: modified

Before we execute the playbook, we will first validate the interface configuration. We will then run the playbook and confirm the changes as illustrated below.
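
Assuming the playbook above was saved as interface_config.yaml (the filename is arbitrary), it is run the same way as the earlier examples:

ansible-playbook -i inventory interface_config.yaml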

Conclusion

This article is a basic introduction intended to whet your appetite, demonstrating how Ansible can be used to manage network devices. Ansible is capable of automating a vast network, including MPLS routing and performing validation before executing the next task.


Protect your system with fail2ban and firewalld blacklists

If you run a server with public-facing SSH access, you might have experienced malicious login attempts. This article shows how to use two utilities to keep intruders out of your systems.

To protect against repeated ssh login attempts, we’ll look at fail2ban. And if you don’t travel much, and perhaps stay in one or two countries, you can configure firewalld to only allow access from the countries you choose.

First let’s work through a little terminology for those not familiar with the various applications we’ll need to make this work:

fail2ban: Daemon to ban hosts that cause multiple authentication errors.

fail2ban will monitor the systemd journal, looking for failed authentication attempts for whichever jails have been enabled. After the specified number of failed attempts, it will add a firewall rule to block that specific IP address for a configured amount of time.

firewalld: A firewall daemon with D-Bus interface providing a dynamic firewall.

Unless you’ve manually decided to use traditional iptables, you’re already using firewalld on all supported releases of Fedora and CentOS.

Assumptions

  • The host system has an internet connection and is either fully exposed directly, through a DMZ (both REALLY bad ideas unless you know what you’re doing), or has a port being forwarded to it from a router.
  • While most of this might apply to other systems, this article assumes a current version of Fedora (31 and up) or RHEL/CentOS 8. On CentOS you must enable the Fedora EPEL repo with
    sudo dnf install epel-release

Install & Configuration

Fail2Ban

More than likely, whichever FirewallD zone is set already allows SSH access, but the sshd service itself is not enabled by default. To start it manually without permanently enabling it on boot:

$ sudo systemctl start sshd

Or to start and enable on boot:

$ sudo systemctl enable --now sshd

The next step is to install, configure, and enable fail2ban. As usual the install can be done from the command line:

$ sudo dnf install fail2ban

Once installed, the next step is to configure a jail (a service you want to monitor and ban at whatever thresholds you've set). By default IPs are banned for 1 hour (which is not nearly long enough). The best practice is to override the system defaults using *.local files instead of directly modifying the *.conf files. If we look at my jail.local we see:

# cat /etc/fail2ban/jail.local
[DEFAULT]

# "bantime" is the number of seconds that a host is banned.
bantime = 1d

# A host is banned if it has generated "maxretry" during the last "findtime"
findtime = 1h

# "maxretry" is the number of failures before a host gets banned.
maxretry = 5

Turning this into plain language: after 5 attempts within the last hour, the IP will be blocked for 1 day. There are also options for increasing the ban time for IPs that get banned multiple times, but that's the subject for another article.

The next step is to configure a jail. In this tutorial sshd is shown but the steps are more or less the same for other services. Create a configuration file inside /etc/fail2ban/jail.d. Here’s mine:

# cat /etc/fail2ban/jail.d/sshd.local
[sshd]
enabled = true

It's that simple! A lot of the configuration is already handled within the package built for Fedora (hint: I'm the current maintainer). Next, enable and start the fail2ban service:

$ sudo systemctl enable --now fail2ban

Hopefully there were no immediate errors. Either way, check the status of fail2ban using the following command:

$ sudo systemctl status fail2ban

If it started without errors it should look something like this:

$ systemctl status fail2ban
● fail2ban.service - Fail2Ban Service
Loaded: loaded (/usr/lib/systemd/system/fail2ban.service; disabled; vendor preset: disabled)
Active: active (running) since Tue 2020-06-16 07:57:40 CDT; 5s ago
Docs: man:fail2ban(1)
Process: 11230 ExecStartPre=/bin/mkdir -p /run/fail2ban (code=exited, status=0/SUCCESS)
Main PID: 11235 (f2b/server)
Tasks: 5 (limit: 4630)
Memory: 12.7M
CPU: 109ms
CGroup: /system.slice/fail2ban.service
└─11235 /usr/bin/python3 -s /usr/bin/fail2ban-server -xf start
Jun 16 07:57:40 localhost.localdomain systemd[1]: Starting Fail2Ban Service…
Jun 16 07:57:40 localhost.localdomain systemd[1]: Started Fail2Ban Service.
Jun 16 07:57:41 localhost.localdomain fail2ban-server[11235]: Server ready

If recently started, fail2ban is unlikely to show anything interesting going on just yet. To check the status of fail2ban and make sure the jail is enabled, enter:

$ sudo fail2ban-client status
Status
|- Number of jail:	1
`- Jail list:	sshd

And the high level status of the sshd jail is shown. If multiple jails were enabled they would show up here.

To check the detailed status of a jail, just add the jail name to the previous command. Here's the output from my system, which has been running for a while. I have removed the banned IPs from the output:

$ sudo fail2ban-client status sshd
Status for the jail: sshd
|- Filter
| |- Currently failed:	8
| |- Total failed:	4399
| `- Journal matches:	_SYSTEMD_UNIT=sshd.service + _COMM=sshd
`- Actions
   |- Currently banned:	101
   |- Total banned:	684
   `- Banned IP list:	...

Monitoring the fail2ban log file for intrusion attempts can be achieved by “tailing” the log:

$ sudo tail -f /var/log/fail2ban.log

Tail is a nice little command-line utility which, by default, shows the last 10 lines of a file. Adding "-f" tells it to follow the file, which is a great way to watch a file that is still being written to.

Since the output contains real IPs, a sample won't be provided, but it's quite human readable. The INFO lines will usually be attempts at a login. If enough attempts are made from a specific IP address, you will see a NOTICE line showing that the IP address was banned. After the ban time has been reached, you will see a NOTICE unban line.
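
If you ever ban a friendly address by mistake, it can be removed manually with fail2ban-client (the jail name and IP here are examples):

$ sudo fail2ban-client set sshd unbanip 192.0.2.10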

Look out for several WARNING lines. Most often this happens when a ban is added but fail2ban finds the IP address already in its ban database, which means banning may not be working correctly. If you recently installed the fail2ban package, it should be set up for FirewallD rich rules. The package was only switched from "ipset" to "rich rules" as of fail2ban-0.11.1-6, so if you have an older install of fail2ban it may still be trying to use the ipset method, which utilizes legacy iptables and is not very reliable.

FirewallD Configuration

Reactive or Proactive?

There are two strategies that can be used, either separately or together: reactive or proactive permanent blacklisting of individual IP addresses or subnets based on country of origin.

For the reactive approach, once fail2ban has been running for a while, it's a good idea to take a look at how "bad is bad" by running sudo fail2ban-client status sshd again. There will most likely be many banned IP addresses. Just pick one and try running whois on it. There can be quite a bit of interesting information in the output, but for this method only the country of origin is of importance. To keep things simple, let's filter out everything but the country.

For this example a few well known domain names will be used:

$ whois google.com | grep -i country
Registrant Country: US
Admin Country: US
Tech Country: US
$ whois rpmfusion.org | grep -i country
Registrant Country: FR
$ whois aliexpress.com | grep -i country
Registrant Country: CN

The reason for the grep -i is to make grep case-insensitive; while most entries use "Country", some are in all lower case, so this method matches regardless.

Now that the country of origin of an intrusion attempt is known the question is, “Does anyone from that country have a legitimate reason to connect to this computer?” If the answer is NO, then it should be acceptable to block the entire country.

Functionally, the proactive approach is not very different from the reactive approach; however, there are countries from which intrusion attempts are very common. If the system neither resides in one of those countries, nor has any customers originating from them, then why not add them to the blacklist now rather than waiting?

Blacklisting Script and Configuration

So how do you do that? With FirewallD ipsets. I developed the following script to automate the process as much as possible:

#!/bin/bash
# Based on the below article
# https://www.linode.com/community/questions/11143/top-tip-firewalld-and-ipset-country-blacklist

# Source the blacklisted countries from the configuration file
. /etc/blacklist-by-country

# Create a temporary working directory
ipdeny_tmp_dir=$(mktemp -d -t blacklist-XXXXXXXXXX)
pushd $ipdeny_tmp_dir

# Download the latest network addresses by country file
curl -LO http://www.ipdeny.com/ipblocks/data/countries/all-zones.tar.gz
tar xf all-zones.tar.gz

# For updates, remove the ipset blacklist and recreate
if firewall-cmd -q --zone=drop --query-source=ipset:blacklist; then
    firewall-cmd -q --permanent --delete-ipset=blacklist
fi

# Create the ipset blacklist which accepts both IP addresses and networks
firewall-cmd -q --permanent --new-ipset=blacklist --type=hash:net \
    --option=family=inet --option=hashsize=4096 --option=maxelem=200000 \
    --set-description="An ipset list of networks or ips to be dropped."

# Add the address ranges by country per ipdeny.com to the blacklist
for country in $countries; do
    firewall-cmd -q --permanent --ipset=blacklist \
        --add-entries-from-file=./$country.zone && \
        echo "Added $country to blacklist ipset."
done

# Block individual IPs if the configuration file exists and is not empty
if [ -s "/etc/blacklist-by-ip" ]; then
    echo "Adding IPs blacklists."
    firewall-cmd -q --permanent --ipset=blacklist \
        --add-entries-from-file=/etc/blacklist-by-ip && \
        echo "Added IPs to blacklist ipset."
fi

# Add the blacklist ipset to the drop zone if not already setup
if firewall-cmd -q --zone=drop --query-source=ipset:blacklist; then
    echo "Blacklist already in firewalld drop zone."
else
    echo "Adding ipset blacklist to firewalld drop zone."
    firewall-cmd --permanent --zone=drop --add-source=ipset:blacklist
fi

firewall-cmd -q --reload
popd
rm -rf $ipdeny_tmp_dir

This should be installed to /usr/local/sbin and don’t forget to make it executable!

$ sudo chmod +x /usr/local/sbin/firewalld-blacklist

Then create a configuration file, /etc/blacklist-by-country:

# Which countries should be blocked?
# Use the two letter designation separated by a space.
countries=""

And another configuration file /etc/blacklist-by-ip, which is just one IP per line without any additional formatting.
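
For example, it might look like this (the addresses are from the documentation ranges and purely illustrative):

# cat /etc/blacklist-by-ip
192.0.2.10
198.51.100.23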

For this example 10 random countries were selected from the ipdeny zones:

# ls | shuf -n 10 | sed "s/\.zone//g" | tr '\n' ' '
nl ee ie pk is sv na om gp bn
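
With those ten codes, the configuration file would then look like this:

# cat /etc/blacklist-by-country
countries="nl ee ie pk is sv na om gp bn"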

Now as long as at least one country has been added to the config file it’s ready to run!

$ sudo firewalld-blacklist
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   142  100   142    0     0   1014      0 --:--:-- --:--:-- --:--:--  1014
100  662k  100  662k    0     0   989k      0 --:--:-- --:--:-- --:--:--  989k
Added nl to blacklist ipset.
Added ee to blacklist ipset.
Added ie to blacklist ipset.
Added pk to blacklist ipset.
Added is to blacklist ipset.
Added sv to blacklist ipset.
Added na to blacklist ipset.
Added om to blacklist ipset.
Added gp to blacklist ipset.
Added bn to blacklist ipset.
Adding ipset blacklist to firewalld drop zone.
success

To verify that the firewalld blacklist was successful, check the drop zone and blacklist ipset:

$ sudo firewall-cmd --info-zone=drop
drop (active)
  target: DROP
  icmp-block-inversion: no
  interfaces:
  sources: ipset:blacklist
  services:
  ports:
  protocols:
  masquerade: no
  forward-ports:
  source-ports:
  icmp-blocks:
  rich rules:

$ sudo firewall-cmd --info-ipset=blacklist | less
blacklist
  type: hash:net
  options: family=inet hashsize=4096 maxelem=200000
  entries:

The second command will output all of the subnets that were added based on the countries blocked and can be quite lengthy.

So now what do I do?

While it is a good idea to monitor things more frequently at the beginning, over time the number of intrusion attempts should decline as the blacklist grows. Then the goal should be maintenance rather than active monitoring.

To this end, I created a systemd service file and timer so that the per-country subnet lists maintained by ipdeny are refreshed monthly. In fact, everything discussed here can be downloaded from my pagure.io project:

https://pagure.io/firewalld-blacklist

Aren’t you glad you read the whole article? Now just download the service file and timer to /etc/systemd/system/ and enable the timer:

$ sudo systemctl daemon-reload
$ sudo systemctl enable --now firewalld-blacklist.timer


Contribute at the Fedora Test Week for Kernel 5.7

The kernel team is working on final integration for kernel 5.7. This version was just recently released, and will arrive soon in Fedora. As a result, the Fedora kernel and QA teams have organized a test week from Monday, June 22, 2020 through Monday, June 29, 2020. Refer to the wiki page for links to the test images you’ll need to participate. Read below for details.

How does a test week work?

A test week is an event where anyone can help make sure changes in Fedora work well in an upcoming release. Fedora community members often participate, and the public is welcome at these events. If you’ve never contributed before, this is a perfect way to get started.

To contribute, you only need to be able to do the following things:

  • Download test materials, which include some large files
  • Read and follow directions step by step

The wiki page for the kernel test day has a lot of good information on what to test and how to test it. After you've done some testing, you can log your results in the test day web application. If you're available on or around the day of the event, please do some testing and report your results. We have a document which provides all the steps in written form.

Happy testing, and we hope to see you on test day.


Fedora Silverblue, an introduction for developers

The Fedora Silverblue project takes Fedora Workstation, libostree, and podman, puts them in a blender, and creates a new immutable Fedora Workstation. Fedora Silverblue is an OS that stops you from changing the core system files arbitrarily, while readily allowing you to change the environment system files. The article What is Silverblue describes the big picture, and this article drills down into details for the developer.

Fedora Silverblue ties together a few different projects to make a system that is a git-like object, capable of layering packages, and has a container-focused workflow. Silverblue is not the only distribution going down this road. It is the desktop equivalent of CoreOS, the server OS used by Red Hat OpenShift.

Silverblue’s idea of ‘immutable’ has nothing to do with immutable layers in a container. Silverblue keeps system files immutable by making them read-only.

Why immutable?

Has an upgrade left your system in an unusable state? Have you wondered why one server in a pool of identical machines is being weird? These problems can happen when one system library – one tiny little file out of hundreds – is corrupted, badly configured or the wrong version. Or maybe your upgrade works fine but it’s not what you’d hoped for, and you want to roll back to the previous state.

An immutable OS is intended to stop problems like these biting you. This is not an easy thing to achieve – simple changes, like flipping the file system between read-write and read-only, may only change a fault-finding headache to a maintenance headache.

Freezing the system is good news for sysadmins, but what about developers? Setting up a development environment means heavily customizing the system, and filling it with living code that changes over time. The answer is partly a case of combining components, and partly the ability to swap between OS versions.

How it works

So how do you get the benefits of immutability without losing the ability to do your work? If you’re thinking ‘containers’, good guess – part of the solution uses podman. But much of the work happens underneath the container layer, at the OS level.

Fedora Silverblue ties together a few different projects to turn an immutable OS into a usable workstation. Silverblue uses libostree to provide the base system, lets you edit config files in /etc/, and provides three different ways to install packages.

  • rpm-ostree installs RPM packages, similar to DNF in the traditional Fedora workstation. Use this for things that shouldn’t go in containers, like KVM/libvirt.
  • flatpak installs packages from a central flathub repo. This is the one-stop shop for graphical desktop apps like LibreOffice.
  • The traditional dnf install still works, but only inside a toolbox (a Fedora container). A developer’s workbench goes in a toolbox.
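
As a rough sketch, one command per install method might look like this (the package and app names are only examples):

$ rpm-ostree install virt-manager
$ flatpak install flathub org.libreoffice.LibreOffice
$ toolbox enter    # then use dnf as usual inside the container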

If you want to know more about these components, check out Pieces of Silverblue.

Rolling back and pinning upgrades

All operating systems need upgrades. Features are added, security holes are plugged and bugs are squashed. But sometimes an upgrade is not a developer’s friend.

A developer depends on many things to get the job done. A good development environment is stuffed with libraries, editors, toolchains and apps that are controlled by the OS, not the developer. An upgrade may cause trouble. Have any of these situations happened to you?

  • A new encryption library is too strict, and an upgrade stopped an API working.
  • Code works well, but has deprecated syntax. An upgrade brought error-throwing misery.
  • The development environment is lovingly hand-crafted. An upgrade broke dependencies and added conflicts.

In a traditional environment, unpicking a troublesome upgrade is hard. In Silverblue, it’s easy. Silverblue keeps two copies of the OS – your current upgrade and your previous version. Point the OS at the previous version, reboot, and you’ve got your old system files back.
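
On the command line, that boils down to two commands plus a reboot (a sketch of the typical flow):

$ rpm-ostree status           # lists the current and previous deployments
$ sudo rpm-ostree rollback    # makes the previous deployment the default
$ systemctl reboot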

You aren’t limited to two copies of your file system – you can keep more by pinning your favorite versions. Dusty Mabe, one of the engineers who has been working on the system since the Project Atomic days, describes how to pin extra copies of the OS in his article Pinning Deployments in OSTree Based Systems.

Your home directory is not affected by rolling back. Rpm-ostree does not touch /etc/ and /var/.

System updates and package installs

Silverblue’s rpm-ostree treats all the files as one object, stored in a repository. The working file system is a checked-out copy of this object. After a system update, you get two objects in that repository – one current object and one updated object. The updated object is checked out and becomes the new file system.

You install your workhorse applications in toolboxes, which provide container isolation. And you install your desktop applications using Flatpak.

This new OS requires a shift in approach. For instance, you don’t have to keep only one copy of your system files – you can store a few and select which one you use. That means you can swap back and forth between an old Fedora release and the rawhide (development) version in a matter of minutes.
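
For instance, swapping to rawhide is a single rebase command (the ref shown assumes x86_64 Silverblue; check what is available with ostree remote refs fedora):

$ rpm-ostree rebase fedora:fedora/rawhide/x86_64/silverblue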

Build your own Silverblue VM

You can safely install Fedora Silverblue in a VM on your workstation. If you’ve got a hypervisor and half an hour to spare (10 minutes for ISO download, and 20 minutes for the build), you can see for yourself.

  1. Download the Fedora Silverblue ISO from https://silverblue.fedoraproject.org/download (not Fedora Workstation from https://getfedora.org/).
  2. Boot a VM with the Fedora Silverblue ISO. You can squeeze Fedora into compute resources of 1 CPU, 1024MiB of memory and 12GiB of storage, but bigger is better.
  3. Answer Anaconda's questions.
  4. Wait for the Gnome desktop to appear.
  5. Answer Initial Setup's questions.

Then you’re ready to set up your developer’s tools. If you’re looking for an IDE, check these out. Use flatpak on the desktop to install them.

Finally, use the CLI to create your first toolbox. Load it with modules using npm, gem, pip, git or your other favorite tools.
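
A first session might look something like this (the package names are only examples):

$ toolbox create
$ toolbox enter
$ sudo dnf install nodejs    # runs inside the container, not on the host
$ npm install -g eslint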

Help!

If you get stuck, ask questions at the forum.

If you’re looking for ideas about how to use Silverblue, read articles in the magazine.

Is Silverblue for you?

Silverblue is full of shiny new tech. That in itself is enough to attract the cool kids, like moths to a flame. But this OS is not for everyone. It’s a young system, so some bugs will still be lurking in there. And pioneering tech requires a change of habit – that’s extra cognitive load that the new user may not want to take on.

The OS brings immutable benefits, like keeping your system files safe. It also brings some drawbacks, like the need to reboot after adding system packages. Silverblue also enables new ways of working. If you want to explore new directions in the OS, find out if Silverblue brings benefits to your work.


How to rebase to Fedora 32 on Silverblue

Silverblue is an operating system for your desktop built on Fedora. It’s excellent for daily use, development, and container-based workflows. It offers numerous advantages such as being able to roll back in case of any problems. If you want to update to Fedora 32 on your Silverblue system, this article tells you how. It not only shows you what to do, but also how to revert back things if anything unforeseen happens.

Prior to actually doing the rebase to Fedora 32, it is recommended to apply any pending updates. This can be accomplished by entering the following in a terminal:

rpm-ostree update

or by installing updates through GNOME Software, followed by a system reboot.

Rebasing using GNOME Software

GNOME Software shows you on the Updates screen that a new version of Fedora is available.

Fedora 32 is available

The first thing you need to do is download the new image, so click on the Download button. This will take some time, and after it's done you will see that the update is ready for installation.

Fedora 32 is ready for installation

Click on the Install button. This step will take only a few moments, and then you will be prompted to restart your computer.

Restart is needed to rebase to Fedora 32 Silverblue

Click on the Restart button and you are done. After the restart you will end up in the new and shiny release of Fedora 32. Easy, isn't it?

Rebasing using terminal

If you prefer to do everything in a terminal, then this next guide is for you.

Rebasing to Fedora 32 using the terminal is easy. First, check if the 32 branch is available, which should be true by now:

$ ostree remote refs fedora

You should see the following in the output:

fedora:fedora/32/x86_64/silverblue

Next, rebase your system to the Fedora 32 branch.

$ rpm-ostree rebase fedora:fedora/32/x86_64/silverblue

Finally, the last thing to do is restart your computer and boot to Fedora 32.

How to revert things back

If anything bad happens — for instance, if you can’t boot to Fedora 32 at all — it’s easy to go back. Just pick the previous entry in GRUB, and your system will start in its previous state before switching to Fedora 32. To make this change permanent, use the following command:

$ rpm-ostree rollback

That’s it. Now you know how to rebase Silverblue to Fedora 32 and back. So why not do it today?


Fedora Classroom Session: IRC 101

The Fedora Classroom is a project that helps people by spreading knowledge on subjects related to Fedora. If you would like to propose a session, feel free to open a ticket here with the tag "classroom." If you're interested in taking a proposed session, kindly let us know; once you take it, you will be awarded the Sensei Badge as a token of appreciation. Recordings from the previous sessions can be found here.

We’re back with another awesome classroom on IRC 101 led by Pac23.

About the Session: A Beginner's Guide to Internet Relay Chat

In short, the IRC 101 session will be a guide for newcomers on how to get started with IRC in the Fedora community and hang out with other contributors on IRC. After finishing the session you will have the knowledge to set up your IRC client and start communicating with other Fedora people.

When and where

The Classroom session will be held on May 9th at 16:00 UTC. Here's a link to see what time that is in your timezone. The session will be streamed on the Fedora Project's YouTube channel.

Topics covered in the session

  • Why IRC & How it works?
  • How to install an IRC Client.
  • Registering your nick in IRC
  • Some basic commands, modes & access controls
  • Joining fedora channels
  • Brownie Topic: Fedora bots in IRC.

About the instructor

Pac23 has been around the Fedora community, contributing to the project, for around a year. He started by volunteering to package a custom kernel. He's also a Computer Engineering undergrad at the University of Mumbai. His interests mostly reside in DevOps, IoT, and system design. Outside computer science, he loves traveling, airplanes, and history. He can be found as pac23 in IRC channels including #fedora-neuro, #fedora-devel, and #fedora-kernel.

If you miss the session, no worries. The recording will be uploaded to the Fedora Project's YouTube channel.

We hope you can attend and enjoy this experience from some of the awesome people who work in the Fedora Project. We look forward to seeing you in the Classroom session.


Photograph used in feature image is San Simeon School House by Anita Ritenour, CC-BY 2.0.